
'THE REPLICATED BRAIN' THOUGHT EXPERIMENT

The thought experiment

To understand this thought experiment it is important to accept the premise (temporarily, if you disagree with it) that conscious experience is solely the outcome of activity of the brain. There is nothing in addition to this, such as a soul or spirit (though of course much of conscious experience is derived from sensory input from the external world). The human brain and nervous system are structured in such a way as to make possible this activity. When the brain cannot engage in this activity we are no longer conscious. Therefore when we die there is eternal oblivion.

Now imagine that right now you are participating in a laboratory experiment in which a scientist has wired your brain up to a machine that is precisely equivalent to your brain (maybe another physiological brain or maybe a computer) and this machine is detecting and replicating exactly all the activity of your brain.

Since we have assumed that conscious experience is the outcome of activity of the brain and nothing in addition, then we can reasonably assume that the machine, by replicating exactly the activity of your brain, is experiencing consciousness and that its conscious experiences (what it sees, hears, feels, thinks, remembers, etc.) must be exactly the same as yours and no more than this.

Now ask yourself these questions:

  1. Am I --- (your name) or am I the machine?
  2. Can the scientist or anyone else help me answer this question?
  3. When the scientist announces that the machine is to be turned off, do I want this to happen?
  4. When the scientist announces that the machine has been turned off, what will be my reaction, if any?

This is only a thought experiment, so we need not worry too much about whether it is possible in practice.

I have invited colleagues and other interested parties to send me their answers, feedback and comments, and I have copied these below.

Feedback from lecture

I presented this thought experiment at a lecture I gave at the Royal Society of Medicine in December 2013. I used a volunteer from the audience - 'P' - someone who happens to have a great interest in consciousness. Someone in the audience suggested that P would know he was P if, for example, he were experiencing a pain in his big toe, as the machine would not have this experience. P disagreed and arrived at the conclusion that he would not know the answer to question 1, but that he would feel OK about the scientist's intention to turn off the machine because there would still be a conscious person, P, continuing afterwards.

I will give my own replies to the questions later. Below is written feedback that I have received so far, some of it being the contributor's immediate thoughts (which may of course change with further consideration).

From 'B'

I find it very difficult to do this experiment, and I suspect it may be almost entirely to do with language. 'Consciousness' and 'conscious experience' are abstractions, whereas when I think about these matters, I think about being 'conscious of...' something. Now it occurs to me that I and the hypothesized machine cannot both really be conscious 'of' exactly the same things in this experiment.

The complication, I suspect, might also be analogous to one of those impossible Escher drawings, in that the experiment appears to set up a duality (the subject's brain and the machine) but then seems to be insisting on some sort of identity. An impasse?

But also I've been reading a bit of philosophy recently that argues, very persuasively I think, that 'you are not your brain'.

From 'B' again, later

I could have stayed upstairs and sent the machine to supper (which no doubt betrays my misunderstanding of the conundrum you've set). Something occurred to me later: the famous reply J.B.S. Haldane is reputed to have made to some breathless lady who accosted him at a gathering with 'Are you related to the Professor Haldane?' The reply: 'If identity is a relationship'.

Well, are Joe Bloggs' brain (in the experiment) and the scientist's machine in a reciprocal relationship of some kind - or are they merely in a purported relationship which on closer inspection turns out to be a hidden identity?

In short, the scientist is only pretending to have set up, and connected, a machine, well aware that there is no way that the subject (or the machine, if it exists at all) can tell the difference.

As far as I'm concerned, he or she can switch it off right away. It would concern me no more than if a copy of my digitized files in the computer-cloud were deleted - as long as I have the originals, the analogy exalted, as necessary, to transcendental, ontological levels accompanied by the music of Aeolian zephyrs.

And more from 'B'

Raymond Tallis cites the work of Jesper Hoffmeyer, who discussed the way in which the biosphere in humans has been supplemented by 'a global semiosphere'. The semiosphere 'is the very fabric of the human world ...' Tallis describes the way we point to objects to draw the attention of others to them, and how this sign, pointing, is joined by other pre-verbal signs to constitute a human world distinct from the natural one.

I suppose your hypothesized (clone?) computer could do this too! (I think other primates point, too, but I need to refresh my knowledge on this matter.)

From 'D'

I was interested in your me-machine conundrum. I thought you might be interested to know that, as part of our finals in the 1964 psychology degree examination at Liverpool, two of the questions I answered on the general paper (there were only three to choose from) were along the following lines.

  1. When can a computer or similar machine be considered to have become 'human'?
  2. What contributions has psychology made to space travel, and what might it contribute in the future?

I recall my 'punch-line' to the first question was to argue that, theoretically, it would be possible to feed all the world's information relevant to this question into the machine. As it would then have more relevant information than the average human being, the machine is asked the question, 'Are you a human being?' By definition, its answer would be definitive. Does that add anything to your own deliberations?

From 'G'

As regards your thought experiment ... interesting ... my reaction is:

Am I --- (your name) or am I the machine?

Can the scientist or anyone else help me answer this question?

No, I would retain a sense of identity, as would the machine, only my sense of identity would intermingle with that of the machine because both complement each other.

When the scientist announces that the machine is to be turned off, do I want this to happen?

I would be neutral, because of the announcement that it was a machine. However, the machine may try to retain consciousness, and the interaction between myself and the machine could be interesting, as a result of which I may ultimately object on the grounds that it too has a right to survival as a conscious entity. On reflection, my reaction sounds rather ambivalent.

When the scientist announces that the machine has been turned off, what will be my reaction, if any?

The identical consciousness may work like Ramachandran's well-known 'rubber hand' illusion, in which case I would feel a sense of profound loss and emptiness.

These are my first reactions to this thought experiment.

From 'J1'

Your thought experiment reminds me of Putnam's 'brain in a vat'. A brain is removed from a body and biologically maintained in a suitably nutritious vat and connected to something that supplies an ongoing history of the equivalent of neuronal activity, which thereby amounts to its 'environment'.

Materialist-oriented accounts of consciousness or, more generally, of mental life stress the 'coupled' relationship between brain and body. They use terms like 'the embodied mind'. Although the bio-engineering would be prohibitively complex, Putnam's brain in a vat could, in principle, support mental life. And I think it could be argued that your 'backup brain' could also count, in principle, as supporting mental life.

Suppose we thought of the system you describe as an organic brain coupled to a body and joined in parallel to an inorganic system that matched the brain's ongoing processes. If we assume that the inorganic system was added on completely non-intrusively, so that the organic brain-body system was unaware of it, then presumably the inorganic system could be removed without change in the organic system. Now suppose that bio-engineering could be done to switch the organic brain and the inorganic system, so that the latter were connected to the organic body and the former somehow cloned the processes of the inorganic system. As before, if this were done non-intrusively then the organic brain could be removed without the rest of the system being aware, leaving the system, consisting of the embodied inorganic system, to continue with its mental life.

While this doesn't provide answers to your questions, it does slightly reframe them. Your questions are directed towards the mental life that is the subject of the questions; in other words, they seek answers from the subject. Adopting the scenario I've outlined above, your questions could be put equally well about a situation involving someone else, which is perhaps slightly less complex.

From 'M'

A quick comparison to your thought experiment, using situations from popular sci-fi: here, here, and here.

From 'J2'

Am I --- (your name) or am I the machine?

I read this experiment as my being attached to the machine in parallel, i.e. my survival does not depend upon this machine. While it might be possible to interact with this machine in ways impossible with the real me (not attached to the machine), I would still maintain an individual identity, at least for myself.

Can the scientist or anyone else help me answer this question?

I am aware of myself and may be aware of the machine, so no one else can perfect my knowledge.

When the scientist announces that the machine is to be turned off, do I want this to happen?

Since my existence and Self are independent of the machine, I am not opposed to the machine being turned off.

When the scientist announces that the machine has been turned off, what will be my reaction, if any?

I have no specific reaction to the shutdown for the reasons previously stated.

From 'P1'

Am I --- (your name) or am I the machine?

I would be P.

Can the scientist or anyone else help me answer this question?

I don't think the scientist could answer my question. I would ask my family to validate that I am who I think I am, because they know me.

When the scientist announces that the machine is to be turned off, do I want this to happen?

No, I do not want the machine to be switched off. It is me. I don't want to die.

When the scientist announces that the machine has been turned off, what will be my reaction, if any?

I think I would feel loss.

From 'P2' and discussion with MH

P2: I have a problem with this experiment….I don't know how it would be logically possible to wire my brain up to a replica machine, because my brain would be wired up to my sensory systems and motor systems, while the machine would be wired up to my brain, not to sensory and motor systems…. If, however, it was wired up to the same sensorimotor connections and not to my brain, but all the connections in my brain were faithfully replicated in the machine, then you would have a whole-brain simulation…which I understand is the tad ambitious aim of the European Brain Project, about which there has been much fuss over the past few days.

MH: I am envisaging that the machine in some way faithfully detects and replicates all the activity in your brain that, in your words, 'gives rise to the subjective experience of consciousness at a higher level of description'. This will include more than information processed from sensory and motor systems (thoughts, memories, etc.) but I don't think that the machine itself need replicate the activity of the systems that provide this information. That is, the machine doesn't need an eye, ear, etc. In the experiment, P's brain has already done all this processing so the machine just has to replicate the results (seeing the laboratory, hearing the scientists, etc.).

P2: If it is a perfect simulation, i.e. replicated in every respect (sensory, motor, cognitive, affective, motivational etc., etc.), it will not know it's a separate machine. And later....

P2: If the simulation was perfect then my guess would be that consciousness should also be an emergent property of the faithful simulation.

Am I --- (your name) or am I the machine?

P2: In the same way that I can say 'I'….the simulation should also be able to say 'I' in the same way…otherwise it would not be a faithful simulation.

MH: But what is your answer? Can you tell if you are P or the machine?

P2: If it is a perfect simulation it cannot know it is a separate entity ….because the thing it is simulating won't have this experience.

MH: If (the answer to the above question is) 'yes' then the machine must be able to say to itself 'I am aware that I am a machine that is replicating exactly the present conscious experiences of P'. But how can it think this and know this if P is not thinking the same?

P2: It can't because it is a perfect simulation.

Can the scientist or anyone else help me answer this question?

P2: I don't have any trouble here….the concept of 'I' is a high-level description of specific patterns of neural processing in part of the biological machine (the brain), in much the same way as hunger and happiness are. Actually, the concept of 'I' is applied to all motivational (I am hungry), cognitive (that's my idea/memory) and sensorimotor (I kicked the ball) selections. If the artificial system is a truly faithful simulation then the same should apply….otherwise it would not be a faithful simulation.

MH: So can anyone help you answer the question 'Am I P or the machine?'?

P2: P is going to think 'I'm me' and, because the machine is faithfully replicating everything, it will also think 'I'm P too'.

MH: But P will also think just what you say here - 'I'm me' - and, because the machine is faithfully replicating everything, the machine will likewise think 'I'm P too'. So by this alone, the experiencer of this thought cannot know if it is P or the machine.

P2: That's absolutely right, but one will be in error (the simulator) and one won't (the simulated).

When the scientist announces that the machine is to be turned off, do I want this to happen?

P2: If the faithful simulation was able to appreciate what was being said, and could model the future consequences, and could appreciate the context that it was an artificial simulation, not a biological system, i.e. that it could be switched on and off … (continued below)

MH: These can only be the thoughts of P; the machine is incapable of having its own separate conscious thoughts. So P might think 'If I am the machine…' (see below)

P2: Actually this P really does think he is an amazing machine...and is all too aware that at any minute something could happen that would effectively switch him off! …..again the simulating machine would think exactly the same.

P2: continued: … then it might be a bit unhappy….unless the scientist announced it would be switched on again at the end of the long flight….in which case it would be very happy to be switched off for the duration!!!!!

MH: OK, but remember it is P who is thinking all this and the machine is having exactly the same experience. Would that not mean that the thoughts will be 'If I am the machine, then I am not going to exist any more when the scientists end the experiment (and will no longer enjoy the pleasures and privileges of being P…)'? In other words, death? (Actually, I have thought of the scenario where the machine is switched off but the experiment resumes the next day. On the second occasion, the machine will have the experience of having been P during the intervening period, since that will be P's experience; there won't be any gap.)

P2: Yep….I think that is what would happen.

When the scientist announces that the machine has been turned off, what will be my reaction, if any?

P2: The biological me that is being simulated in the artificial machine might think, phew, thank goodness there is no longer a system that is able accurately to simulate all those thoughts that I would prefer to keep private. The simulation would think nothing!!!!

MY ANSWERS TO THE QUESTIONS

  1. When I ask myself the question 'Am I MH or the machine?' my answer is that I don't know, since both will be having exactly the same conscious experience (if we accept the initial premise).
  2. Nobody can help me with this, not even the scientists present.
  3. Consequently I would not want them to turn off the machine, in case I am the machine and they are therefore exterminating me (unless of course I prefer oblivion to being MH).
  4. But when they do turn off the machine, as they must, I breathe a sigh of relief because it will now seem to me that when I was asking myself the question 'Am I MH or the machine' I was indeed MH and not the machine, which has just been 'exterminated'.

D's rejoinder to MH's answers above

But what happens if the machine, having the necessary attributes and skills, decides to take precipitate action and turns itself off - does that have any implications? Could it be saying, 'I am MH and I don't want him/us to suffer with this dilemma, so I shall cancel myself out'? But could that mean it is MH, since otherwise it might not have this sentiment for him?

MH's reply: Remember that all the machine can do is replicate the activity of your brain. So it can't have any thoughts that the real you doesn't have.

D's response: But the real you might think, 'I don't want this machine mirroring my thoughts, I need to close it down'. (i) Can the machine override person-generated thoughts, and (ii) if it 'obeys' the close-down instruction, does that mean the person is instigating a close-down of her/his own brain?

P2's rejoinders to MH's answers

To Answer 1: I don't think this is right…..you, MH, know you are not the machine because you can see your arms and legs and can move them at will. Also, you can see the wires that connect your brain with the machine (or maybe it's done with wi-fi)….the machine can't do this because it is not directly connected to sensory input or motor output.

To Answers 2 and 3: But you will know that you are not the machine because of your correct perception that you are a biological entity….the perfect simulation in the machine is the one in error…because it too will think it is a biological entity….and will be unconcerned when the scientist says 'I'm going to turn off the machine'….

MH's reply: If MH's brain has processed the sensory inputs to the level of conscious experience (seeing MH's arms and legs, feeling them, etc.) the machine will have this conscious experience too because it is replicating the neural activity associated with this conscious experience. It will see MH's arms, legs, etc. (and assume ownership of them), and the wires, etc. And if MH decides to move e.g. his arm, the machine will have the experience of making that decision also.

P2: The answer is the same….the simulator will be in error and the simulated won't….so when the scientist says the machine is to be turned off, the simulated will have no worries….and because the simulation is faithful…neither will the simulator…..which means I can't really see the problem????