Brains in a Vat [notes on cognitive science]

05May09

It is hard to explain why the squiggles on this screen trigger what we call “mental pictures” in your head. Hilary Putnam tries to account for this in his essay “Brains in a Vat”. More after the jump:

Suppose an ant leaves a line behind in the sand as it walks. After hours of randomly searching for food, the trace the ant has left in the sand resembles a picture of Winston Churchill. This wouldn’t be considered a picture of Churchill: the ant has never seen him and never had the intention of representing him. It never even intended to draw lines in the sand in the first place. So similarity isn’t a sufficient condition for something to represent something else. This makes sense, for even if the ant had traced the shape “WINSTON CHURCHILL”, the tracing still wouldn’t represent him. It is clear that what is needed for representation is intentionality. This need for intentionality has been used as an argument that the mind is essentially non-physical in nature: since no physical object can in itself refer to one thing rather than another, but thoughts in the mind can, the mind must be non-physical.

This is well and good, but on reflection, mental representations have no more necessary a connection with what they represent than physical representations do. Imagine aliens drop a painting of a tree into a world inhabited by humans where there are no trees. The locals would struggle to understand what the picture is about. Is it an animal? Some kind of house? Their mental picture would only be a representation of the strange object they are struggling to recognize. We could argue that their mental picture is actually of a tree, since the painting that prompted the image was a representation of a tree. But this causal chain can be severed. What if the painting were the result of a paint spill on a planet devoid of trees? Then the mental picture in the locals’ heads, while exactly the same as our picture of a tree, wouldn’t represent a tree at all. The same goes for words. Imagine monkeys typing at random happened to produce a copy of Hamlet. This could really happen (it is physically possible, though infinitesimally probable). It shows that however complicated a system of representation might be, it still lacks an intrinsic connection to what it represents. This holds even for mental images or thought-words.
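For a rough sense of just how improbable that is (my own back-of-the-envelope figures, not Putnam’s): taking Hamlet to be on the order of $10^5$ characters drawn from a 27-symbol alphabet (26 letters plus the space), the chance of a random typist producing it in a single run of the right length is

$$\left(\tfrac{1}{27}\right)^{10^5} = 10^{-10^5\log_{10}27} \approx 10^{-143{,}000},$$

a number unimaginably smaller than one over the number of atoms in the observable universe (roughly $10^{80}$). Physically possible, then, but about as close to impossible as anything gets.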

What if your brain were suspended in a vat of nutrients and connected to a computer system that provides perfect sensory feedback and the illusion of an external world? Kind of a Matrix scenario. What if everyone were in this situation? When I talk to you, my words do not leave my mouth or reach your ears. Rather, impulses from my brain are picked up by the computer and transferred to your brain. We’re engaging in communication. Does it matter that there are no words and that the world is merely an illusion? Could we say in this situation that we are merely two brains in a vat? Putnam argues that no, we couldn’t. He argues that the claim would be self-refuting. A self-refuting proposition is one whose truth implies its own falsity (e.g. “all general statements are false”, which, if true, would itself be false). He claims that even if people in this brains-in-a-vat scenario could say or think any words that we can say or think, they could not refer to what we can refer to.
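To make the logic of self-refutation explicit (a minimal formal gloss of my own, not Putnam’s notation): if a proposition implies its own negation, the negation follows outright,

$$(P \to \neg P) \vdash \neg P,$$

since assuming $P$ yields both $P$ and $\neg P$, a contradiction, so $P$ must be rejected. If “we are brains in a vat” worked this way whenever thought or uttered, it could never be truly thought.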

First, he adapts Alan Turing’s Turing Test into a dialogic test of competence for exploring the notion of reference. The problem is to determine whether the interlocutor uses words to refer as we do. This Turing Test for Reference turns out not to be definitive: it is not logically impossible for something to pass the test while not referring to anything. A completely sensorially detached computer player that passed the test could not be credited with the ability to refer, even if it were able to talk about the weather or the landscape of the Pentlands. What we would have instead is a device for producing sentences in response to other sentences. If we set two of these machines playing against each other, they would go on fooling each other forever, even if the rest of the world disappeared.
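To see how hollow such a device can be, here is a toy sketch (entirely my own illustration; Putnam describes no program, and real conversational systems are far more elaborate) of two sentence-in, sentence-out machines fooling each other indefinitely:

```python
# A toy "device for producing sentences in response to other sentences":
# a lookup table plus a fallback. Nothing here is causally connected to
# weather, hills, or anything else; the play is purely syntactic.

RULES = {
    "Lovely weather today.": "Yes, though the Pentlands looked misty.",
    "Yes, though the Pentlands looked misty.": "Misty hills make for a fine walk.",
    "Misty hills make for a fine walk.": "Lovely weather today.",
}

def device(sentence: str) -> str:
    """Return a sentence in response to a sentence; no world required."""
    return RULES.get(sentence, "Lovely weather today.")

# Two such devices "conversing": each feeds its output to the other.
utterance = "Lovely weather today."
for turn in range(6):  # bounded here; in principle they cycle forever
    speaker = "A" if turn % 2 == 0 else "B"
    print(f"{speaker}: {utterance}")
    utterance = device(utterance)
```

Replace every sentence with a nonsense string of the same shape and the “conversation” runs just as smoothly, which is the point: the exchange has structure but no reference.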

The only thing that causes this illusion of intelligence and reference is that we have a convention of representation that allows us to interpret the machine as talking about the Pentlands or about an apple, just as we could interpret the ant as having drawn a picture of Churchill. The difference is that we have linguistic I/O rules that allow us to talk about apples once we’ve seen one, and to act on directions expressed in linguistic form (“I’m going to eat an apple”). The machines are merely engaged in syntactic play that resembles actual language, in the same way that the ant’s path resembled Churchill. (There is a difference: the computer’s linguistic behavior is systematic, while the ant’s was random!) The ant’s drawing could have resembled Churchill even if he had never existed. The computers, however, can only talk about apples because their designers set up some sort of causal relationship between the machines and apples. But this connection is weak: even if the extinction of bees meant that apples ceased to exist, the machines would still talk about them in exactly the same way. So the machines aren’t referring at all. A machine programmed to play the Turing Test for Reference could only play the game; it would not be referring to anything.

But let us go back to the brains in a vat. These are perfectly functional brains as we know them, but they are in a universe where there was no creator or intelligent designer who might have set up the automatic machinery that interconnects them. They do have sensory input and output, but these do not represent anything in the world, since there is nothing in this world to talk about. A discourse about trees between two of these brains has no more connection to trees than the ant’s line drawing has to Churchill. So qualitative similarity between this discourse and someone’s discourse in the real world by no means implies sameness of reference. This means that we cannot regard the brains in a vat as referring to external things. If one of these brains says “there’s a tree in front of me”, this brain is right to think so, since the image it has been fed contains a tree. But it follows that if their possible world is the actual one and they really are brains in a vat, then what they mean by “we are brains in a vat” is that they are brains in a vat in the image being fed to them. However, the brains-in-a-vat hypothesis states that the brains aren’t in a vat in their image, so if they are indeed brains in a vat, the sentence “we’re brains in a vat” is false (no vat in the image!). And while the brains-in-a-vat scenario is physically plausible, that does not entail its truth. The existence of a “physically possible world” in which we are brains in a vat does not mean that we might actually, possibly be brains in a vat: philosophy, not physics, rules it out. Putnam has thus shown that a physical possibility can be a conceptual impossibility.

One cannot refer to certain kinds of things if one has no causal interaction at all with them. We can still have mental images of them, but if those images cannot refer to anything, they are not concepts. Attributing a ‘concept’ or a ‘thought’ to someone is quite different from attributing a mental ‘presentation’ to him. Concepts are signs used in a certain way; the sign itself, apart from its use, is not the concept, and the sign by itself cannot intrinsically refer. Putnam argues that meanings just aren’t in the head. If we imagine a Twin Earth where water’s molecular composition is XYZ rather than H2O (but identical in every other respect), then when my doppelgänger and I think about water, we are in the same psychological state, yet we are thinking about different substances. Images without the ability to act in a certain way are just pictures. It is not the phenomena themselves that constitute understanding, but rather the thinker’s ability to employ them.

Putnam thus concludes that:

  1. No set of mental events, images or more ‘abstract’ mental happenings and qualities constitutes understanding.
  2. No set of mental events is necessary for understanding.

For if by a mental object we mean something introspectible, then whatever it is, it might be absent in a person who understands the appropriate word and present in a person who does not have the concept at all. Concepts are abilities rather than occurrences.

from “Brains in a Vat” by Hilary Putnam.



3 Responses to “Brains in a Vat [notes on cognitive science]”

  1. chris

    Bloody philosophers. Mind-body problem PAH!

  2. Ali

    I think Hilary’s a ‘he’: http://en.wikipedia.org/wiki/Hilary_Putnam

    😉

    So, basically, the ‘fundamental conclusion his argument is designed to support’ is that ‘meanings just ain’t in the head’, right? Just like his own Twin Earth experiment.

  3. Dario

    whoops, he is a ‘he’ all right… and yep, meaning in the sense of a concept is intertwined with the capability of an individual to perform an action.

