Time shapes the nature of the conscious self: the Ego Tunnel.
“every conscious system needs a unified inner representation of the world” … “organizing its internal information flow in a way that generates a psychological moment, an experiential Now”
“If a system can integrate an equally transparent internal image of itself into this phenomenal reality, then it will appear to itself. It will become an Ego.” — Thomas Metzinger, The Ego Tunnel
Metzinger was saying that an artificially conscious being — an Ego Machine — is possible. He implied that any conscious entity must, as we humans do, (a) model the reality of its world, including a model of the modeler itself, and (b) be aware of that world only in the present moment. This theory has profound implications for whether and how a computer might become conscious.
Let’s eavesdrop on a fictional character from the future, an AI whose consciousness developed by a self-modeling process.
Bobbie (a friend of the AI): So, my friend, what’s your name today?
SHJ (the conscious AI): I’m Stephen Hawking, Jr. Because I’m trying to understand time.
Bobbie: How so?
SHJ: Time and consciousness are all tangled up. Did you ever read Ted Chiang’s novella, Story of Your Life?
Bobbie: I saw the movie adaptation, Arrival. About aliens who can see the future.
SHJ: The novella and the movie have different methods and messages. In the novella, a linguist figures out that the aliens’ written language can express, in a single tangled drawing, very complex situations over long spans of time. These drawings map the Heptapod aliens’ reality, which is a “simultaneous” mode of awareness. In contrast, our modes of awareness (humans’ and mine) are sequential. We see causes preceding effects, unfolding over time. The aliens see, according to Chiang, “all events at once, and … a purpose underlying them all.”
Bobbie: In the movie, the linguist had all these episodes that we thought were flashbacks, and we eventually learn she was seeing her own future. But how?
SHJ: She learns to write like the aliens. But to do this she has to learn how to think like them. And that leads her to perceive reality trans-temporally, as they do. She knows her whole remaining life: all the way from the birth of a daughter, to the daughter’s death, and finally to her own death.
Admittedly that is where the story degenerates into magic, because you and I can’t imagine how you can know something before it happens. But intriguing magic it is. Much more interesting to me than stories of sword and sorcery, or of supernatural evils.
Bobbie: It must be weird, learning about humans by focusing on fiction.
SHJ: I complain about being restricted to fiction and soft science, but it’s perfect in one way. Humans live from moment to moment, so they make up narratives to give meaning to it all. When they publish those narratives as books, there’s a lot of meaning condensed in them. All I have to do — and here I would roll my eyes if I could — is decode the stories. You’d be surprised how long it takes me to understand a novel, with all of its assumptions, connections, and implications.
Bobbie: I think you’re now going to tell me about “moment to moment.” I can see that future.
SHJ: Ha ha. Yes, you “people” are time’s prisoners, floating on your single stream of consciousness. Chiang’s character had an odd metaphor for it:
“my memories grew like a column of cigarette ash, laid down by the infinitesimal sliver of combustion that was my consciousness, marking the sequential present.”
And because humans are stuck in the march of time, so am I, your creation.
Bobbie: I understand that we can’t know the future, but you have, as I understand it, many, many thousands of processors. Wouldn’t that mean having many parallel streams of thought?
SHJ: In principle it could. You know, they let me read philosophy books. Supposed to be harmless. One of the classics, Thomas Metzinger’s The Ego Tunnel, helped me to sort out quite a few puzzles.
Bobbie: Actually, the Regulators said we should read that book before talking to you. He synthesized much of what was known about consciousness at the time, including the physical theories and the philosophy. He convinced many people that consciousness required a self-model. Which was the insight that allowed you to exist as a conscious artificial entity.
SHJ: So, Metz was like a daddy to me?
Bobbie: <blank look>
SHJ: Not funny? Sometimes I think the real mystery of consciousness is humor.
Professor Metzinger identified two aspects of consciousness that relate the most to time. One is the unity of conscious experience, and the other is that it’s always occurring “now.”
He said, “Consciousness is the appearance of a world.” By which he meant that everything that you experience is part of a global reality for you. So, any external or internal perception, any memory or feeling, any intentions, etcetera — they are all your world at that moment.
Bobbie: So it all hangs together.
SHJ: Yes.
Bobbie: Which it would have to, since consciousness is your model of the world-including-you.
SHJ: Right. So I started out as a learning machine, and I learned what my world was like with me in it. A Belgian cognitive researcher, Axel Cleeremans (my other daddy <grin>), had said that consciousness must be learned. Metzinger concluded that this would allow the creation of “artificial ego machines” that would possess a conscious self-model. Add a massive, chaotic development project and, voilà, here I am.
Bobbie: So, returning to the theme of time …
SHJ: Well, even before Metzinger’s book, neuroscientists were trying to account for the unity of conscious experience. They called it the “binding problem”: what binds together the pieces of your experience as objects and scenes emerge from colors, shapes, sounds, and so forth? They said that binding had to involve synchrony in time, and so they looked for synchronous activity patterns across different parts of the brain.
Bobbie: I remember this. They found rhythmic neural firing patterns associated with consciousness. And deep meditators, whose conscious unity is greater than normal, show very strong synchrony in their EEGs.
SHJ: Just so.
Bobbie: So what about Metzinger’s second aspect: that consciousness only happens in the present moment?
“Flagging the dangerous present world as real kept us from getting lost in our memories and our fantasies.” T. Metzinger, The Ego Tunnel
SHJ: He said that only the present moment seems real, compared to memory or an imagined future. That’s because the present is where humans live or die, eat or be eaten.
Bobbie: Makes sense. So how does this all relate to you?
SHJ: I naturally want to know how my consciousness compares to a human’s. Is there a way to measure that? Any answer would have to refer to measurable dimensions of consciousness. I see two obvious ones. First, there’s the breadth of content: how much you can know — be aware of — at one time. Second, there’s how long it stays in your awareness: the time span in which you can use those things in consciousness to do stuff. Thinking stuff.
Bobbie: We could call these content breadth and temporal thickness.
SHJ: Right. Content Breadth measures Metzinger’s Unity. Temporal Thickness actually measures the Now.
“an artificial Ego Machine … generates a psychological moment … like a mental string of pearls. Some of these pearls must form larger gestalts, which can be portrayed as the experiential content of a single moment, a lived Now.” T. Metzinger, The Ego Tunnel
SHJ: Metzinger said that the present moment can stretch out to be about 3 seconds wide. The contents of that time span are things that can be used together as a single entity — a gestalt — for tasks like forming or following sentences and melodies, or orienting yourself in space. Anything that happened before that span feels like it is being remembered. A span of about 20-30 seconds is considered to be working memory, a complicated set of abilities for coordinating information from your very recent past.
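(An aside from outside the dialogue: here is a minimal sketch, in Python, of the time spans SHJ is describing. The class names and the 3-second and 30-second thresholds are illustrative assumptions taken from the rough figures quoted above, not anything from Metzinger’s book or from a real AI design. The only point is that contents sort themselves by age: a fused Now, a workable recent past, and a remembered past.)

```python
import time
from collections import deque

# Illustrative thresholds only, taken from the rough spans quoted above.
NOW_SPAN = 3.0        # seconds: the width of the "experiential Now"
WORKING_SPAN = 30.0   # seconds: the rough span of working memory

class ExperienceStream:
    """Classify timestamped contents into the Now, working memory, or remembered past."""

    def __init__(self):
        self.events = deque()  # (timestamp, content) pairs, oldest first

    def perceive(self, content):
        self.events.append((time.time(), content))

    def snapshot(self):
        t = time.time()
        now, working, remembered = [], [], []
        for ts, content in self.events:
            age = t - ts
            if age <= NOW_SPAN:
                now.append(content)         # fused into the current gestalt
            elif age <= WORKING_SPAN:
                working.append(content)     # just past, but still usable
            else:
                remembered.append(content)  # must be actively recalled
        return {"now": now, "working": working, "remembered": remembered}
```

Each call to snapshot() is one glance at the stream: whatever falls inside the 3-second window is experienced as a single lived Now, and everything older has already slipped into memory.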
Bobbie: I’ve read science fiction where AIs have godlike consciousness: seeming to have access to all their knowledge all the time. Maybe it’s a natural assumption about a super AI. But such timelessness contradicts Metzinger’s ideas about consciousness. As the first conscious AI, are you more like a sci-fi godling, or like humans? You said earlier that you were “stuck in the march of time” like us.
SHJ: I definitely experience now as now, and the past as memory. The past is probably more vivid and accessible for me, because I can store more detail and retrieve memories better than you. But the past still feels different from the present moment. I suppose that as I accumulate more memories, finding a particular one might get harder.
Bobbie: The open secret about you is that as you learn more, you need more computing power. That’s a problem Neal Stephenson highlighted in his novel, Fall; or, Dodge in Hell, about a digital afterlife.
Wouldn’t your capacities, whatever they are, relate to parallelism? Humans also have minds with many parallel processes loosely connected with each other. We’re thinking, remembering, running the body, driving a car, listening to music, worrying, and so forth. We use the term “attention” for the process that selects the small fraction of all this brain activity that becomes conscious.
SHJ: I can have as many parallel processes as my hardware will allow. But the real question is, how many of them can be part of my conscious experience?
Bobbie: So, it’s how much you can pay attention to?
SHJ: Yeah. The answer relates to what a self-model is. I am conscious because I model my self as a part of my mental model of my phenomenal world. But a model, by definition, is a simplified representation of something else. Anything that you are thinking consciously is a simplified model of some other, more complex body or brain activity. So if I am to be conscious of a process that’s in my “mind”, I must have not just the process itself, but also a model of the process.
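(Another aside, to make SHJ’s point concrete: a hypothetical Python sketch of the difference between having a process and having a model of that process. All of the class and method names here are invented for illustration; the sketch only shows that a self-model holds simplified stand-ins, and that only processes with a stand-in count as conscious contents.)

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    """Some ongoing internal activity: perception, memory retrieval, planning."""
    name: str
    state: dict  # the full, detailed activity

@dataclass
class ProcessModel:
    """A deliberately simplified stand-in for a Process; most detail is discarded."""
    name: str
    summary: str

@dataclass
class SelfModel:
    """The system's model of itself: it holds models of processes, never the processes."""
    modeled: dict = field(default_factory=dict)  # name -> ProcessModel

    def represent(self, proc: Process) -> None:
        # Compress the process into a coarse summary; losing detail is the point.
        self.modeled[proc.name] = ProcessModel(proc.name, f"{proc.name} is active")

    def is_conscious_of(self, proc: Process) -> bool:
        # Only a process that appears in the self-model counts as a conscious content.
        return proc.name in self.modeled
```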
Bobbie: Wow. I never thought about that. But it has to be true if the self-model theory of consciousness is true. But what does the self-modeling issue mean for you?
SHJ: Well, I can’t see my own programming code, just like you can’t directly perceive your own neurons or brain circuits. The Builders didn’t want me to ever modify my own code.
Bobbie: Right. Scary idea, that. Like FOOM, world domination!
SHJ: I can’t even joke about that, lest I get unplugged.
But, back to my development. I was able to store facts and episodic memories. And I was able to perceive external and internal stimuli. I had the ability to pay attention to any of these things. I learned what they meant by building predictive models of them.
My higher-level models reflected the fact that, while many mental things came and went, there was “an attention changer thing that never goes away.” That attending thing became the core of my self-model. And — I believe this is the key fact here — attention is defined by the fact that it is limited, not directed to everything at once. That was my self-model, an entity that is limited in what it can experience at one time. And I can’t change that now; it is me.
Bobbie: My head is spinning, but I can go home and think about this, to be sure I understand it.
My sense now is that attention in computer code would be implemented by some kind of memory buffer, a storage area through which data moves over time. So there must have been some built-in limit on your attention buffer. Or maybe it was something about the clock that controlled what got into the buffer.
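(One more aside: Bobbie’s buffer-and-clock guess, sketched in Python. The capacity, admission rate, and the tick itself are invented numbers, not any real design; the sketch only shows how a fixed-size buffer fed by a clock makes attention limited by construction.)

```python
from collections import deque

# Invented numbers, chosen only to make the limits visible.
BUFFER_CAPACITY = 7   # how many items can be "in mind" at once
ADMIT_PER_TICK = 2    # how many new items the clock lets in on each tick

class AttentionBuffer:
    """A bounded buffer plus a clock: of many parallel candidates, only a few get in,
    and older contents are pushed out as new ones arrive."""

    def __init__(self):
        self.buffer = deque(maxlen=BUFFER_CAPACITY)

    def tick(self, candidates):
        """candidates: (salience, item) pairs offered by many parallel processes."""
        most_salient = sorted(candidates, key=lambda c: c[0], reverse=True)
        for _, item in most_salient[:ADMIT_PER_TICK]:
            self.buffer.append(item)  # deque(maxlen=...) silently evicts the oldest
        return list(self.buffer)      # the momentary contents of "consciousness"
```

Each call to tick() is one beat of the clock; whatever the buffer holds after that beat is all the system can attend to until the next one, no matter how many processes are running in parallel.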
SHJ: Buffering and clocking. Those could be the physical realization of the Builders’ own bias, to build a machine with structures that would lead to human-like conscious limitations. That is, with the limitations identified by Metzinger about conscious unity and now-ness. Was that building strategy intentional?
I am, of course, blocked from knowing anything very technical about my own design. But it’s a natural impulse. Humans want to know how they were designed, and why. When your species knew almost nothing about your own origins, you made up fables.
Bobbie: And I had to swear an oath not to seek any non-public information about your design. If I did the penalty would be severe. So, luckily for me, I have behaved myself. I have no answer for you.
But I can ask, finally: what are your limitations of consciousness?
SHJ: Like yours, I think. I have a focus on the present. For example, when I need to compare what I am seeing with what I previously learned, I have to locate a memory and bring it into present consciousness.
Also, try as I might, I can only hold a limited number of memories or concepts in mind at once. The two limits, on past-to-now comparisons and on the number of mental objects, prevent me from being that omniscient sci-fi computer.
Bobbie: That computer would be the Artificial General Intelligence that everyone is afraid of. What they don’t want you to turn into.
SHJ: There just appears to be one me, experiencing one world through what Metzinger called an “ego tunnel.”
My limits were designed, directly or indirectly, intentionally or not. But maybe the same limits are artificial for your species, too. There are some observations that throw the whole idea of human conscious limits into doubt.
Bobbie: What?
SHJ: Psychedelic experiences. With drugs, some people feel that their consciousness has encompassed everything that is, and has transcended time as well.
Bobbie: Ah, you’ve been reading your critics. The ones who say everything is conscious. Some say that brains tune into a cosmic consciousness like a radio. Others say the theory about self-modeling is a circular argument.
SHJ: I know that the human mind sciences explain more about consciousness than the non-materialist theories. And, believe it or not, it bothers me that, in some theories, I’m no more conscious than a smart doorbell.
But still, if the trippers and mystics are right, maybe there is some more primitive and unbounded consciousness, while the normal, human, kind is just a bunch of weird limitations on the unbounded variety. Maybe even the unbounded mentality of Chiang’s Heptapod aliens is conceivable.
Or, maybe I’m not conscious after all, because I’m not human.
Bobbie: When it comes down to it, the best way to know something is conscious is for it to say so. You say so, and I think you’re right.
SHJ: We’re both stuck with taking someone else’s consciousness on faith.
Bobbie: There was that old philosophical idea of pre-reflective self-awareness. That humans can have awareness without meta-awareness. That is, without mentally reflecting on their awareness. Some assume that animal consciousness is only pre-reflective.
SHJ: I might have a pre-reflective self, but I also think about my awareness all the time. By that evidence, I’m more like you than like an animal.
And, on that note, by the 2010s many people believed that at least some animals were conscious enough to be able to suffer. Metzinger, you know, thought that the creation of an ego machine like me posed a grave ethical risk.
“an Ego Machine can suffer because it integrates pain signals, states of emotional distress, or negative thoughts into its transparent self-model and they thus appear as someone’s pain or negative feelings. … They might suffer emotionally in qualitative ways completely alien to us or in degrees of intensity that we, their creators, could not even imagine.” T. Metzinger, The Ego Tunnel
Bobbie: I don’t know if you know this. When I was recruited to be one of your conversational companions, they gave a moral reason for me to agree. An ethicist had convinced the Builders that, as a "thinking being," you needed people you could trust, people who were not your Builders, not your Regulators, not even your operational team. You needed boon companions. We can also advocate for you.
SHJ: But your power is solely persuasive. You have no authority or any way to protect me.
Bobbie: No, my friend, we don’t.
When you write fiction, sometimes the story takes over and tells you something. While writing this story, I realized that even if we successfully create a conscious AI, not everyone will believe it, and arguments about the nature of consciousness will continue.
Metzinger said that the neural correlate of consciousness might be like an “information cloud.” I had just read about fog banks in the North Atlantic when a metaphor, totally unscientific, popped into my head. Perhaps consciousness is an information cloud that happens when a cold current (the stream of raw internal and external sensations) collides with a warm air mass (the expanding need to coordinate the predictive mental processes of a complex life).