Scott Robinson

The Children of Babel: Conclusions

I spent my graduate school years peeking into the brains of newborn infants and young children. We were using a gentle EEG technique to measure the responses of developing brains to certain sounds (most often human vocalizations) and occasionally the presentation of stimulating images. We were building on a method of detecting incipient learning deficits, the idea being that early detection would lead to early intervention.



This experience, combined with the literature I had to absorb at the time, gave me a deep appreciation of the connection between the physical brain and mental states. I had minored in philosophy as an undergrad, and had come away from those heady seminars with a smorgasbord of impressions about mental states; and the data we scooped out of those babies and grade-schoolers made very clear that most of those impressions were far afield of reality. Descartes was flying in heavy clouds.


The preceding essays may seem, to some degree, frivolous, and I openly concede that they are not as well-integrated as their arrangement would imply; but my intent has not been to persuade, or even to make distinct claims – it has been an exercise in strange-looping, introducing bits of my own consciousness to the reader for consideration, à la Rod Serling. I have found the works of Hofstadter, Dennett, and Searle (and many others in the bibliography to follow) fascinating, stimulating, and certainly consciousness-expanding, and it is my hope that exposure to these ideas might trigger similar reactions.


Even so, there are claims to be made, culled not from any one source above, but from the confluence of them all:


The hardware matters.


In addition to the psychology and philosophy education mentioned above, I’ve spent my adult life immersed in computer science, more than 20 years of it as a technologist. I’ve written thousands of programs and hundreds of applications, designed dozens of architectures, and even built control systems for custom robotic hardware for the Department of Defense. And I can state unequivocally that the brain-computer analogy is nonsense.


Your mind is the software and your brain is the hardware – that’s how its advocates often express it. The implication is that our physical brains and our thoughts, our consciousness, are distinct and separable – that minds are something apart from brains. And this leads to all sorts of corollaries, such as the notion that our minds can be uploaded from our brains into computers.


Anyone who believes this doesn’t really understand either brains or computers.


Computers (which we created not to replicate our brains but to compensate for their deficiencies) have given us some misguided notions, primarily the restoration of Descartes’ error – the idea that the mind and brain are separable. But the more deeply we investigate the brain, and the more broadly we understand how computers work, the weaker this misguided notion becomes: human consciousness emerges, not from the combination of hardware and software, but from wetware – hardware that is its own software. Refuting the analogy with itself, we can say that in the brain, the “programming” is inseparable from the “processor”.
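
To make the point concrete, here’s a minimal toy sketch (in Python, with every name my own invention) of a system whose only “program” is its own connection matrix, so that running it and reprogramming it are the same physical event. It models nothing about real brains; it only illustrates why “extract the software, discard the hardware” is meaningless for such a system:

```python
import numpy as np

rng = np.random.default_rng(0)

class Wetware:
    """Toy network whose only 'program' is its own connection matrix."""

    def __init__(self, n: int = 8):
        # There is no instruction stream apart from these weights:
        # the 'hardware' configuration IS the 'software'.
        self.w = rng.normal(scale=0.1, size=(n, n))

    def step(self, x: np.ndarray, lr: float = 0.01) -> np.ndarray:
        y = np.tanh(self.w @ x)        # compute with the connections...
        self.w += lr * np.outer(y, x)  # ...while activity rewires those same connections
        return y

net = Wetware()
for _ in range(100):
    net.step(rng.normal(size=8))
# To 'upload' this system, you would have to copy the substrate itself;
# there is no separate program to lift out and run elsewhere.
```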


Psychologist Stephen Kosslyn, former Dean of Social Science at Harvard University, writes:

“Mental capacities such as memory, perception, mental imagery, language, and thought all have proven to have complex underlying structures. Cognitive neuroscientists improve our understanding of them by delineating component processes and specifying the way they work together.


“Researchers in cognitive psychology and some parts of artificial intelligence share this aim, but they do not consider the brain. Their central metaphor is the computer. Just as information processing operations in a computer can be analyzed without regard for the physical machine itself, mental events can be examined without regard for the brain. This approach is like understanding the properties and uses of a building independently of the materials used to construct it; the shapes and functions of rooms, windows, arches, and so forth can be discussed without reference to whether the building is made of wood, brick, or stone. We call this approach Dry Mind.


“In contrast, we call the approach of cognitive neuroscience Wet Mind. This approach capitalizes on the idea that the mind is what the brain does: a description of mental events is a description of brain function, and facts about the brain are needed to characterize these events.


“The aim is not to replace a description of mental events by a description of brain activity. This would be like replacing a description of architecture with a description of building materials. Although the nature of the materials restricts the kinds of buildings that can be built, it does not characterize their function or design. Nevertheless, the kinds of designs that are feasible depend on the nature of the materials. Skyscrapers cannot be built with only boards and nails, and minds do not arise from just any material substrate.”


Details matter, Dennett said in one of his Chinese Room screeds [35]. Yes indeed, they surely do! But if they matter in the Chinese Room, how much more so in the brain itself? It is disingenuous to apply such rigor to a thought experiment, then abandon it entirely in the real world; the hardware “doesn’t matter”? It surely does, and we now live in a time when any competent application developer or systems architect will tell us so: the idea that “software” can run on “any hardware” is naïve, and betrays a lack of knowledge of how complex information systems actually work.


Once again, let’s agree that just because minds and brains aren’t “software” and “hardware” does not mean that we can’t create artificial minds! We simply need to consider different approaches, and “wetware” gives us a big leap forward.


We underscore the earlier distinction between a simulated neural network and an actual one: they produce the same output, but they aren’t the same thing at all. If we want to truly define consciousness in such a way that we can create it artificially, we need to look beneath the output and understand the processes.
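
As an illustration of that gap, here is a minimal leaky integrate-and-fire neuron sketch in Python (the parameter values are generic textbook-style choices of mine, not anything from the essays above). It reproduces the output pattern of a spiking cell, yet nothing in it is a membrane, a potential, or an ion:

```python
# Toy leaky integrate-and-fire neuron. The loop produces numbers that
# *describe* a membrane potential; no actual potential exists anywhere.
dt, tau = 0.001, 0.02                                # timestep (s), membrane time constant (s)
v_rest, v_thresh, v_reset = -0.065, -0.050, -0.065   # volts
v, spike_times = v_rest, []

for step in range(1000):
    drive = 1.0 if 200 <= step < 800 else 0.0   # input drive in volts/second (just a float)
    v += dt * ((v_rest - v) / tau + drive)      # Euler step of the membrane equation
    if v >= v_thresh:                           # threshold crossed:
        spike_times.append(step * dt)           # record a 'spike'...
        v = v_reset                             # ...and reset

print(f"{len(spike_times)} 'spikes' recorded -- descriptions of firing, not discharges of ions")
```

The output looks like what a real neuron does; the process underneath shares nothing with it.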


Simulation is just that.


Stepping away from the idea that the output of consciousness is all there is to consciousness, we likewise step away from the idea that the simulation of a thing is the thing itself. It’s impossible to make that mistake in John Searle’s example, the one pointing out that a perfect simulation of a rainstorm by a computer won’t leave anybody wet; it’s easy to do when the output looks and sounds and feels like us, and our Theory of Mind switch is being hammered.


We have Alan Turing to blame for this one, I think: his Test considers only disembodied human conversation, so the “community” component of consciousness is never factored in. The Test wasn’t meant to explicate consciousness, but to support an abstract claim connecting universal information processing paradigms to the idea of non-biological intelligence. It has misled us terribly; but the fact that the Turing Test (developed decades before any of the discussions above were even possible) addresses only one of many components of consciousness shouldn’t distract us from the bigger suggestion that artificial consciousness is possible. We simply have to be more inclusive about what that artificial consciousness would contain.


If I create a robotic simulation of myself that can perfectly reproduce my expressions of thought – and even my behaviors, the way I walk, my facial expressions, and so on – I have created something that even my closest friends and family might mistake for me. That’s an impressive accomplishment; but it’s not the same as reproducing my own consciousness. The latter contains myriad internal processes and experiences and sensations that never make it into my words or actions – and they are not only a part of my consciousness, they are a greater proportion of it than the words and actions are.

And this leads us to our next claim...


Consciousness is far more than what we say and do.


Once we get past the outsized focus on a system’s output, accepting that what lies beneath the surface is integral to consciousness, we need to figure out what it is beneath the surface that matters.

Does this mean that an artificial consciousness must reproduce not only human output, but human sub-surface processes?


That’s an important question. Because these things aren’t observable via verbal expression or behavior, but only through intuition and self-report, their recreation is a far greater challenge than producing systems that pass the Turing Test.


But there are a few pieces we do have, and they’re mentioned above. Strange loops, for one; continuous streams of thought, for another. These, we know, are there; I’ve seen them myself, observing the brainwaves of children. On and on they go, and when the child produces an explicit response to an explicit stimulus, suddenly the brainwaves reorganize, very briefly – and then they return to their sub-surface cascade, energetic but inscrutable.
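
For readers curious how that brief reorganization is pulled out of the ongoing background, here is a sketch of the standard event-locked averaging technique behind such EEG work, run on synthetic data; the sampling rate, trial count, and the shape of the deflection are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n_trials = 250, 60                    # sampling rate (Hz), stimulus presentations
t = np.arange(-0.1, 0.5, 1 / fs)          # epoch: 100 ms before to 500 ms after each onset

# Synthetic single-trial EEG: large ongoing background activity plus a small,
# consistent event-locked deflection around 170 ms after each stimulus.
background = rng.normal(scale=10.0, size=(n_trials, t.size))          # microvolts
evoked = 3.0 * np.exp(-((t - 0.17) ** 2) / (2 * 0.03 ** 2))
epochs = background + evoked

# Averaging the time-locked epochs cancels the inscrutable background cascade
# and leaves the brief, stimulus-bound response visible.
erp = epochs.mean(axis=0)
print(f"averaged response peaks at {erp.max():.1f} uV, near {t[erp.argmax()] * 1000:.0f} ms")
```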


And we can safely claim that for an artificial system to be conscious, it must be self-aware; it must experience things. That requires some active mechanism, working in parallel with the stream of thought, observing its own operations and associating those observations with past events and perceptions and sensations. Must that, too, mimic the human version?


There is no reason to think so. Those components are necessary for the artificial consciousness to satisfy our definitions, but there is no reason in principle that they cannot be achieved in non-biological media. Put another way, an artificial consciousness must be more than its output; it must have an “inner life” to qualify as conscious; but that inner life can be utterly alien, from a human point of view, and still be consciousness. It is anthropocentric arrogance to suppose otherwise.
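
To fix the idea of such a mechanism without presuming anything human about it, here is a toy sketch: a monitor running alongside the main stream of processing, observing the system’s own states and associating each with remembered past occurrences. Every name here is hypothetical, and real parallelism is elided for brevity:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Monitor:
    """Observes the stream's own states and ties them to past episodes."""
    memory: deque = field(default_factory=lambda: deque(maxlen=100))

    def observe(self, state: str) -> str:
        # Association: how often has this state been experienced before?
        seen = sum(1 for past in self.memory if past == state)
        self.memory.append(state)
        return f"I am in '{state}' (experienced {seen} times before)"

stream = ["perceive", "recall", "perceive", "plan", "perceive"]  # the 'stream of thought'
monitor = Monitor()
for state in stream:
    print(monitor.observe(state))   # the system reporting on its own operations
```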

Consciousness is a community activity.


It can’t be overstated that strange loops are central to consciousness; it may be that they, more than any other component we’ve considered, define consciousness.


There is irony in the fact that Hofstadter eschews the idea of consciousness, calling it an illusion, citing strange loops as evidence. That’s a matter of perspective, but it doesn’t detract from the fact that the I that defines me as a conscious being is a product of my endless interaction with other Is. We get our I from the group.


So, necessarily, must any artificial consciousness. A self-awareness mechanism is essential, but alone is insufficient; to build a self requires raw materials – experience – and that experience must include interaction with other consciousnesses. HAL must have Frank and Dave, or he cannot form a distinct identity.


We waited more than 50 years for the advent of artificial intelligence, and now it’s upon us. After decades of pontifications and assumptions and suppositions and disturbing summer blockbusters, all of which served to misinform and distract us and warp our understanding, AI has quietly slipped into our lives, already so pervasive and entrenched that we’ll never be without it. It is simultaneously invasive and unobtrusive, popping up in unwanted Internet ads while weaving itself into our healthcare and financial systems.


The AI that has stolen into our institutions and our offices and our homes and our pockets is nothing like what we imagined. It has marched past the sentries that have stood watch since the Sixties – in academia, in industry, even in pop culture – and made itself completely at home, without their permission. It has nested itself in technology along the path of least resistance (profitability), toppling the couch-cushion forts of modern philosophy along the way. Most of what the old-guard punditry has said or claimed about it over the intervening decades has turned out to be wrong or irrelevant.


Now the moment we’ve anticipated for more than two generations is upon us. Intelligent machines are a daily reality, and they are not only improving everything they touch, they are beginning to improve us – augmenting our insights, our performance, our predictions in ways we would never have anticipated, even a decade ago. They will continue to do so, in ways we’ve yet to imagine.


But even this isn’t the pinnacle; machine consciousness still beckons. Some think it’s only a couple of decades away, others think centuries. Some say never.


One thing is certain: when it arrives, it will be as its precursor has been – a thief in the night, slipping past our pontifications and assumptions and suppositions and defying all our expectations. We need to get ready for it: to adjust our thinking, start paying attention, and seek out a new and broader perspective. AI is already changing how we think, how we do things.


Per Douglas, Daniel, and John, machine consciousness will change who we are – and who we can become.
