Miscellaneous Scattered Thoughts



Note: These thoughts were scribbled rather hastily, as my brain was boiling with all sorts of ideas. They are pretty confused and badly written, and they frequently contradict each other. However, I hope you find some interesting ideas in all this mess.

You may want to read, before or after reading this page, the following conversation I had with Yann Ollivier (yann.ollivier@ens.fr) on the subjects discussed below.


Duplicator

Imagine the Perfect Duplicating Machine: it takes any physical object (not too large), and constructs a copy of it. The copy is exactly identical to the original — atom for atom. (To formally remain consistent with the conservation of energy, we rule that an equal quantity of any kind of matter must be destroyed during the replication process.)

How useful such a machine would be! We could use it to transmute lead into gold or diamonds or whatever. By repeatedly copying something edible we could be assured never to run out of food. Everyone could easily obtain an authentic painting by Michelangelo or Leonardo da Vinci - with Leonardo's fingerprints still on it! Computers could be copied just as easily as computer software. The possibilities are endless.

Of course the value of things would change completely. Being ``the authentic thing'' would become meaningless. Material wealth would be measured in kilograms and in nothing else (since kilograms of this are easily transformed to kilograms of that).

Free Hardware

Let us pause for a minute to ponder the relevance of the matter duplicator to the issue of intellectual property.

In effect, all material objects would become essentially what software is to us now. The only material factor (and a relatively unimportant one) is mass, in analogy with size in the case of software. Beyond that, once the object exists, it can be copied ad libitum.

Would we see the appearance of copyrights on objects? That is to be feared: iisdem causis iidem effectus (the same causes produce the same effects). So we might have the situation in which ``you are starving but I can't give you a copy of my hamburger because it's copyrighted''.

Copying humans

Sooner or later someone would end up using the matter duplicator on himself.

What would happen then is not difficult to predict. The copied human being would have the same memories as his twin — up to the point when the copy was performed. The copy would insist that he is the original, and that he was suddenly teleported from one place to the other.

The mere thought that somebody can share all your memories, have exactly your character, is frightening. Presumably the two twins would not get along very well: we are too used to our own individuality to tolerate sharing it with someone else — moreover, it tends to be the case that the faults we dislike the most are those which we ourselves possess. (One may object that this is not the case for biological identical twins — but they have had time to grow accustomed to the fact that they share many things with their sibling.)

The debate over authenticity would be dreadful of course — all the more dreadful as it cannot be settled. Each copy is fully convinced that he is the Genuine Thing. If we are careful we might distinguish the original from the copy at the start. But if they are mixed up initially there is no telling which is which.

Consciousness split

But these Gedankenexperimente are nothing compared to the even more terrible idea of performing the experiment on oneself.

If I sit on the stool of the duplicator machine and somebody presses the button, who am I after the copy?

On the one hand I can argue that I am still the same person. After all, nothing special has happened to me. I have just been analyzed. A bunch of molecules have been reshaped a few feet from me — something I shouldn't care about (except only that this bunch of molecules is going to argue the hell out of me in a few minutes). I am still the same. Or am I?

On the other hand, since there is nothing to distinguish the original from the copy, we can argue that I am the copy. In fact, the copy will be fully convinced of that. (And, if the copy was not me, who was it?) Has my consciousness split? Does that make any sense?

(Some people would argue that I will be both copies at once, in the sense that ``I'' will control them, i.e. the two bodies. This is patently absurd. The twins will not have any extrasensory perception skills just because they were one same individual some time back. True, they will know each other pretty damn well. But tell one twin a number and ask the other what it was: he cannot answer.)

The Duplicator as a teleporting device

Now suppose the device is as follows: it does not copy but displaces. This is a very nice way of crossing interstellar distances. The actual material particles don't need to move; only the information needs to travel, and it can do that faster than light if we believe in magic.

Now a good way of moving something is to copy it and destroy the original at the same time. Since this is mere transportation, there is nothing to fear. You enter the cabin, and your molecules are disintegrated while they are simultaneously ``reintegrated'' somewhere else.

Or is there really nothing to fear? To an external observer, nothing unusual has happened. I stepped into the teleporter's cabin in one place and I stepped out of it in another. If you ask me how I feel, I will not find anything special. One moment I was here and the next moment I was there. By a troubling coincidence this is precisely what the copy feels in the case of the duplicator.

Now what if the machine breaks? If it fails to create the remote copy, of course, the traveler dies, and that's ``all'' there is to it. But what if it fails to destroy the local copy? Then it has acted like a duplicator. And in that case how would the local copy react upon being told: ``Excuse me, sir, but we had a slight malfunction with the machine. However, I am happy to say that your true self has arrived safely on Andromeda. Now, if you will allow us, we will terminate you.'' (This thought is taken from ``The Emperor's New Mind'' by Roger Penrose.)

Threads of consciousness

In the case of the duplication of consciousness, is it ethical to destroy one or the other of the copies created (so as to make a teleporting device)?

On the one hand, clearly no: both copies are just as human as the original, and they would react in the same horrified fashion if told that they are to be ``terminated''.

Let us, however, for one moment, pursue a different (and crazier) line of thought: that of computer threads, or, better, threads of possibilities in nondeterministic computing.

A nondeterministic computer functions just like an ordinary computer, but with one special instruction: the fork primitive. Forking divides the program into as many threads as desired (presumably only finitely many), in each of which a certain variable takes a given value. Each thread continues independently of the others; it is allowed to fork again. Forking into zero threads means dying. If a result is read (``observed'' in the quantum-mechanical sense) by the user, it is read from only one random (living) thread and all other threads cease to exist.
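
To fix the idea, here is a minimal sketch, in Python, of one way such a machine could be simulated; the names (NondeterministicMachine, fork, observe) are my own and not part of any established model.

    import random

    class NondeterministicMachine:
        def __init__(self):
            # Start with a single living thread holding an empty environment.
            self.threads = [{}]

        def fork(self, var, values):
            # Replace every living thread by one copy per value of var.
            # Forking over an empty list of values kills the thread.
            new_threads = []
            for env in self.threads:
                for v in values:
                    child = dict(env)       # independent copy of the environment
                    child[var] = v
                    new_threads.append(child)
            self.threads = new_threads

        def observe(self, var):
            # Reading a result: one random living thread survives, the rest die.
            if not self.threads:
                raise RuntimeError("all threads are dead")
            survivor = random.choice(self.threads)
            self.threads = [survivor]
            return survivor[var]

    m = NondeterministicMachine()
    m.fork("x", [1, 2, 3])       # three threads, one per value of x
    m.fork("y", ["a", "b"])      # each thread forks again: six threads
    print(m.observe("x"))        # observation keeps exactly one random thread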

If we imagine consciousness split as in the model of nondeterministic computing, there is nothing wrong with killing one of the copies as long as at least one remains alive. That is, if I duplicate myself and somebody kills one of the copies, then I retroactively become the other copy. Or, to state this differently, if I duplicate myself then I am the copy which will live longer. I am predestined to be so. This crazy idea can be pursued even further to yield even stranger thoughts...

Death

The experiment of the consciousness split cannot be performed. One experiment that is performed all the time, however (though never twice by the same person), is that of consciousness vanishing. Also known as death.

Death is neither more nor less incomprehensible than the idea of duplicating oneself. That is, it is ``externally'' perfectly comprehensible: there is nothing magic or mysterious about it happening to others. However, it is ``internally'' perfectly incomprehensible: we cannot see it happening to ourselves.

Is consciousness experimental?

In view of the above comment, we have a serious epistemological problem if we are to experiment with consciousness. We cannot be sure of anyone's consciousness but our own. The strict laws of the scientific method require that the experimenter never be part of the experiment itself. And yet in the case of consciousness he must be. Or are we to conclude that consciousness is a fraud?

Is consciousness material?

Some people will want to get around the problems presented above by concluding that consciousness is not something material and that consequently the matter duplicator cannot duplicate it.

There are some serious objections to this also.

In the first place, what does ``immaterial'' mean? Even if consciousness is some very special addendum to the laws of physics, this does not mean that it cannot be copied also, by some other means. (For example, it quite clearly can be destroyed.) But let us leave this aside and assume that consciousness cannot be copied, that this is meaningless.

But then what happens when we use the matter duplicator anyway on a human being? We get a soulless human. Now what is a soulless human being? And, more importantly, can we distinguish a soulless human being from a normal one?

Will the soulless human drop dead? This is absurd: there is no medical reason for him to do so. His brain is perfectly normal since it is the identical copy of the original brain.

Will the soulless human react exactly as had been predicted? In this case, it can only be argued that consciousness is not something experimentally detectable and therefore that epistemology demands that we think no longer of it.

More importantly, will the soulless human realize that he is soulless? Will he himself feel the lack of this magic fluid of consciousness?

Perhaps, also, consciousness has some relation with physics through the question of the reduction of the wave packet.

Is consciousness a magic fluid?

If we assume that consciousness is something magical (``the soul'') that lies outside the scope of the laws of physics, we have a strange problem. For that magic fluid is still inextricably tied to the human brain.

It is known that our character resides in our brain. Our memories reside in our brain. Our thoughts reside in our brain. Our intelligence resides in our brain. Making consciousness something magical and outside the brain is absurd — or rather, that idea must be dropped by Occam's razor.

Moreover, what happens when our brain is destroyed, when we die? Is the magic fluid of our consciousness spilled? Does it find another body? Does it join the collective consciousness of the Universe? Must we really go into this sort of mysticism?

Metempsychosis

The Hindu religion tells us that at our death our soul finds another body in which to reincarnate itself.

The thought of reincarnation is very strange, though. What is supposed to be common between ourselves and our past lives? Certainly not our memories. Our intelligence? Our character? Our thoughts? None of that, it would appear. Only this strange magic fluid of consciousness.

But then what does it mean? I can recognize someone's memories, possibly his character, perhaps even his intelligence. But what about his soul? How can I recognize the continuity? Once again, we are faced with inextricable epistemological difficulties.

My claim is that metempsychosis is not even false: it is meaningless. But then, of course, perhaps consciousness itself is meaningless.

The continuity of consciousness

We have the feeling that we are today the same person that we were yesterday and that we shall be tomorrow. Only what does it mean to be the same person?

When we go to sleep, for example, our consciousness is ``switched off'' for a certain amount of time. How do we know it is ``the same'' when we wake up again — and how do we define this, for that matter? In other words, how do we distinguish this jump in time which characterizes sleep from the jump in space of teleportation, which we have seen causes a lot of problems in the definition of consciousness?

As Marvin Minsky points out (in ``The Society of Mind'', section 5.7), we are very good to our future self: we often sacrifice the pleasure of our ``present self'' to effect the safety and happiness of our ``future self''. From a strictly Darwinian point of view, this is understandable. But from a psychological point of view, this is slightly surprising. Am I so certain that this future self is indeed me?

Is consciousness a fraud?

We are pretty convinced of the reality of our own consciousness. But, after all, we only have our own word for it. Epistemologically speaking, this is not enough. (In fact, I actually have no evidence for the existence of any other consciousness than my own.)

So, is consciousness a fraud? It could be a pure epiphenomenon, systematically appearing in any sufficiently complex intelligent system. Or there could be some evolutionary reason for it: we need the feeling of continuity that consciousness gives us in order to justify to our intelligence the fact that we act favorably towards our own future self.

Epistemologically this answer is rather satisfactory. Ethically it is horrible, however.

Solipsism

What proof do I have that anybody but myself has a consciousness? I can feel mine (and even this is not really enough), but I only have your word as to the existence of your consciousness.

This solipsist interpretation of the world is quite convenient sometimes. Notably in some quantum-mechanical representations of consciousness. At this point we might as well believe in magic.

Is consciousness quantum?

Quantum mechanics predicts that an experiment having several possible outcomes will put the world in a hybrid state consisting of a linear superposition of these various outcomes, similar to the threads of nondeterministic computing. When an observation is performed, there takes place a very mysterious phenomenon known as the collapse of the wave function, in which all possibilities but one ``disappear''. This is excellent from the practical point of view, but nobody knows how or when this ``observation'' takes place.
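
In standard notation (a sketch using nothing beyond the usual Born rule, not anything specific to the interpretations discussed below), a state with several possible outcomes o_i is the superposition

    |\psi\rangle = \sum_i c_i \, |o_i\rangle , \qquad \sum_i |c_i|^2 = 1 ,

and upon observation it ``collapses'' to a single outcome,

    |\psi\rangle \longrightarrow |o_k\rangle \quad \text{with probability } |c_k|^2 .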

In one interpretation of quantum mechanics, it is the conscious act of observing the result (and not the experiment itself) which forces the collapse.

So, is the collapse of the wave function essential to consciousness just as consciousness might be essential to the collapse of the wave function? Are our brains essentially quantum?

Magic and quantum magic

If I believe in the quantum vision of consciousness, in the solipsist variation, I might as well believe in magic. Indeed, if the very root of my consciousness is my ability to collapse wave functions within my brain, why should I not be able to collapse them elsewhere — in other words, among a world of possibilities, choose the ones I like best?

Similarly, I can believe I am immortal, because I will always choose (even without realizing it) the branch of time in which I am alive.

Now I had better stop before I reach the state of megalomania where I start proclaiming that I am God, and somebody shoots me to prove me wrong ;-)

Processes and consciousness

The human brain is the most fantastic multitasking computer one could ever dream of. Curiously enough, we are essentially able to think of ``one thing at a time'' — or at least we think so. In fact, this is false. Our brain is perpetually engaged in hundreds of parallel activities, each involving a relatively well-defined group of neurons. Some are purely ``mechanical'' actions, while others are true thoughts.

Yet of these many processes only one is ``conscious'' while the others are ``subconscious''. This is strange, as processes keep forking and dying within this turmoil of thoughts. In fact the Gedankenexperiment of consciousness split through the matter duplicator happens all the time within our brain (another meaning of ``Gedanken'' :-).

What we call consciousness is merely the dominant process within our brain. It is constantly changing (hence the consciousness glitches we experience when suddenly asking ourselves ``now, what was I thinking about?''). In fact, it is not even well defined or unique: two processes can compete for brain domination — as when we hesitate about something, two groups of neurons fighting each other and trying to grasp control. It can also be that for some time there is no really active process: in that case we are ``absent'', ``away'', ``distracted''.

Despite all this, we have the firm belief that consciousness is One and Unique, that we remain our same true self. This illusion of continuity is perhaps a result of the historical evolution of our brains.

Who is conscious?

Our firm (and perhaps built-in) belief in the Oneness of consciousness runs into difficulty when confronted with certain borderline cases. We think that consciousness is an all-or-nothing phenomenon because that is how we experience it (and also because otherwise we could not think of it as magic). But is it so?

The problem with the all-or-nothing rule for consciousness is that we then have to fix a specific point for its appearance in phylogenetic and ontogenetic history. Or, in other words: are apes (and notably Bonobo chimpanzees, seemingly the most intelligent of them) or dolphins conscious? Are fetuses conscious? Giving an answer to these questions without appealing to religious dogmata is far from obvious.

The other kind of borderline case is the ``virtual worlds'' problem. I wish to ask the following question seriously: are the characters within our dreams conscious? Are characters within novels conscious? In a way these correspond to processes within our brain, so they are not qualitatively different from our own consciousness.

For that matter, when are we conscious? Are we still conscious when we sleep? When we dream? When we are very tired? (I notice that being really tired does not truly affect my intelligence, but it does affect my consciousness, whatever that means.) How about human beings with various mind disorders? Can those impair consciousness?

Also in the ``virtual worlds'' line comes the following question. Suppose we had a really powerful computer, powerful enough to compute the behavior of entire moles of atoms, and we used this computer to simulate an entire human being within a completely virtual world. This virtual human being would behave just as we would in the same circumstances. If we simulate ourselves, we have just performed the self-duplication experiment within a virtual world. So, is he conscious? Or is the reality of the physical world necessary for consciousness? For that matter, we can get rid of the simulation altogether and ask whether machines can be conscious — a question which we can only satisfactorily answer in the negative if we believe in some kind of ``magic fluid'' nature of consciousness. (Those who have read ``La invención de Morel'' by Adolfo Bioy Casares can also ask themselves whether Morel and the others are conscious, and whether the hero is going to lose his own consciousness.)

Finally, what about larger scale entities? Is an ant-hill conscious (note that it is intelligent — or at any rate certainly more intelligent than the individual ant)? Is humanity as a whole conscious (without our being aware of it — in much the same way that our brain cells are unaware of our own consciousness)?

Any theory of consciousness would have to answer these troublesome questions. Of course the simplest possible (but lazy) answer is that these questions are meaningless because consciousness does not exist.

Conscious machines?

So much for apes and dolphins. Now can a machine ever be conscious (as opposed to intelligent)?

If we do not want to argue for any kind of magical, ethereal nature of consciousness, or for an essential quantum effect in its existence, we are forced to answer: yes, at least in theory.

Is it a crime to turn off a computer which is running a conscious program? Is it still a crime if we keep a full dump of memory so that the program can be restored later on? Or is it just like when we go to sleep? And then, is it a crime to destroy the tape? And what happens if we copy the tape and run the program again, simultaneously on two different computers? Do we have a consciousness split? What if we make a backup, then let the program run for a while, then kill it, and start again from the backup? Is that a crime? And if we just let it run for a few milliseconds before we kill and restart?
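
To make the operations behind these questions concrete, here is a minimal sketch, in Python, of dumping, restoring and duplicating a program's state; the names (Program, dump, restore) are hypothetical, and the sketch only illustrates the thought experiment, it makes no claim about what a conscious program would look like.

    import pickle

    class Program:
        # A stand-in for the running program: all of its state lives in self.memory.
        def __init__(self):
            self.memory = {"counter": 0, "history": []}

        def step(self):
            # One step of the program's "life".
            self.memory["counter"] += 1
            self.memory["history"].append(self.memory["counter"])

    def dump(program):
        # The "full dump of memory": serialize the complete state to a tape.
        return pickle.dumps(program.memory)

    def restore(tape):
        # Restart a program from a dump, as if nothing had happened to it.
        p = Program()
        p.memory = pickle.loads(tape)
        return p

    p = Program()
    for _ in range(3):
        p.step()
    tape = dump(p)                          # the backup tape

    del p                                   # "turn off the computer"...
    q = restore(tape)                       # ...and restore the program later on

    r1, r2 = restore(tape), restore(tape)   # run the same dump twice: a split?
    r1.step()                               # from here on their histories diverge
    assert r1.memory != r2.memory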

Intelligence

After having explored the murky mysteries of consciousness and of the soul, we turn to the dazzling light of intelligence.

Intelligence is not much easier to define than consciousness. But at least it can be observed experimentally, which consciousness cannot.

We are, it would seem, the most intelligent beings on Earth. We are also the only ones of whose consciousness we are certain. So is there some correlation between intelligence and consciousness? Is one necessary to the other? Or, perhaps, is consciousness a necessary epiphenomenon springing from the thought processes of any sufficiently intelligent entity?



David Madore
Last modified: 27 March 1999