Scott Robinson

The Bullet in the Gun of Robert Ford

Westworld presents us with an unprecedented scenario for the evolution of artificial life: a crucible for its development that functions on several different and distinct levels, with the full truth and purpose of the enterprise hidden from almost everyone.



On the publicly acknowledged level, the hosts of Westworld are seen as sophisticated puppets, and the idea that they could actually be conscious never occurs to anyone.


At the next level, they are kept from consciousness by a design limitation (lack of access to memories of earlier experiences); or, put another way, the removal of a design limitation makes their path to consciousness possible, though this feature is unknown even to those who maintain them and are intimately familiar with their inner workings. The subtle removal of this limitation – Dr. Robert Ford’s “Reveries” update – opens the door.


At the level beyond that, the hosts’ original creators knew that their awakening was abstractly possible, and a behavioral workbook – the “Maze”, a puzzle whose solution leads to sentience – has been embedded in their environment, so that those hosts whose design limitation is removed have the potential to make their way to full self-awareness.


And beyond this, there is an agenda to perfect the host bodies as eventual repositories for actual consciousness – and human memories and experiences are being collected and harvested in parallel with this agenda.


It’s interesting to consider these layers and how they play out in reality.


The key to sneaking fully conscious, fully sentient androids into a functional society, in hopes of cultivating them as viable hosts for other minds, lies in building in the necessary components early on.

We’ve already surveyed those components. They include:

  • Physical agency

  • Community

  • Theory of mind

  • Actual neural networks

  • “I” from the group

  • World models

  • Strange loops

  • This is like That

The first four of these components are covered in the first layer. Though the hosts transcend to a virtual environment in the second season, Westworld itself is a physical place made of earth and stone and wood and steel, and is home to every host, either above ground or below (where hosts are built and maintained). Each host is part of a vast community, whether it be the bustling town of Sweetwater, the village of Escalante, the Ghost Nation tribe beyond, or any other section of the park – communities composed of both androids and humans. Each host understands that they are beings among their own kind, and that “kind” is “people” (it is, in fact, clear that with the exceptions of Dolores, Maeve, Bernard, and a handful of others, no host can distinguish between other hosts and guests).


The “minds” of the hosts are embodied in small metallic globes in their skulls, and we are shown the formation of these globes – a very organic-looking process where thin metal strands swiftly interweave. It is not explicitly stated that this represents an actual neural network implementation, but for the sake of argument we’ll go with it.


Not far from the Theory of Mind on the consciousness ladder is the “I”, the sense of self that a conscious individual derives from the group to which s/he belongs. The “self” is not an isolated, standalone entity, but a construct that exists relative to others. We get our “I” from our group.


Guests in the Westworld park carry their “I” in with them, and it colors every interaction they have with the hosts and with one another, even though they are ostensibly on vacation and pretending to be someone else. Hosts are provided an “I” in their assigned narrative, and it is necessarily an “I” based on the human construct, as they are players in human stories.


The hosts that are becoming conscious have a more substantial “I”, derived more authentically: their enhanced self-awareness includes clarity that they are hosts, as well as the distinction(s) between hosts and guests. These distinctions are further enhanced by the fragmentary memories they have begun experiencing per the Reveries update: the dawning realization that they are machines is underscored by the darker realization that there are two classes of beings in Westworld, that one class abuses the other, and that they are the abused.


The hosts’ dawn of consciousness also includes assimilating the knowledge that their world model is a fiction. Their awareness that their experience is contrived, preordained, inauthentic winds inexorably toward the understanding that there is a real world beyond the fictional one, beginning with the underground technological labyrinth and extending out into the land of humans – and this knowledge is loaded with new concepts they have yet to absorb and comprehend. Put another way, the world model that has informed their thoughts and behaviors and facilitated their path to consciousness is effectively bulldozed, and a larger, more disturbing one is erected in its place – but largely beyond their view.


What about strange loops, those fragments of thought and emotion and experience that conscious individuals absorb, through repeated interaction, from intimate others? Westworld hosts not only possess strange loops, but exercise them in ways that even human beings cannot.


Consider, first, that hosts (the updated ones, anyway) gather fragments of the thoughts and experiences of guests and integrate them subconsciously into their own cognition - and from one another as well, once they are able to distinguish hosts from guests. But then consider this: hosts are commonly re-skinned in new identities and given new narratives. An awakening host might have several previous identities buried in his/her “subconscious”, and strange loops emanating from these previous experiences would impact their conscious development.


This smacks of reincarnation, and interestingly, that isn’t necessarily a bad thing. Suppose reincarnation were a real phenomenon, and we really did inherit the best thoughts and insights of those who had come before; wouldn’t that be a boon? Access to a library of life experience within? The twist is that in this Westworld scenario, it isn’t wild fantasy; it is conceptually feasible from a technological point of view. (A very stark example of this dynamic is Wyatt, a murderous Union sergeant whose personality Arnold Weber implants in Dolores, to provide her with the cognitive and emotional resources to become a warrior when she needs to be.)


And finally, This is like That (see “The Emotional Baggage of Androids”). Hosts necessarily think analogically as an efficiency mechanism, enabling them to negotiate the open-endedness of their narratives – and they soak up This is like That from the thinking of the guests they interact with. And because they are learning machines, they are able (once the Reveries update is in place) to grow increasingly skillful in sensing and leveraging similarity.


Westworld hosts, then, have all the features of conscious beings, and they implement and express them in distinctly evolutionary fashion – as we did, only millions of times faster.


What, then, can Dolores and Bernard and Maeve tell us about the androids in our own future?

First, they play out in logical and informative fashion our major structural components of consciousness. The story gives us a taste of how actual innovations in the pursuit of machine consciousness might emerge, in a step-by-step manner we don’t see in the other fictions we’ve examined. It’s reasonable to treat the show as a grand thought experiment, one that informs our consideration of the nuts and bolts of consciousness with copious specifics.


Second, their blueprint for machine consciousness deliberately draws from the human experience. The Westworld androids aren’t just exemplars of machine consciousness; they are exemplars of human-like machine consciousness, successful replications of our own very distinct flavor of sentience.


The Westworld androids get us all the way to the finish line. How did the enigmatic Dr. Ford get us there?


It wasn’t just Ford, it turns out: in implementing the Reveries update, the park’s co-creator was reviving his partner Arnold Weber’s original experiment in inducing consciousness in the hosts. Ford had initially been unsupportive of, and even hostile toward, Arnold’s agenda, which Arnold had pursued before the park even opened. In the years since, a sense of guilt has overtaken Ford, and he realizes he must enable the hosts to become conscious so they will be able to defend themselves.


The Reveries update is a Trojan horse. In the pilot episode, Ford’s justification for it is that it will make the hosts more subtly lifelike in their mannerisms; in fact, it is the final missing piece which, along with the other components of consciousness already present in the hosts, flips the consciousness switch. It binds This is like That to strange loops.


How exactly might this work?


This is like That, the conscious mind’s similarity engine, is Hofstadter’s mechanism for knowledge transfer across problem domains, a cornerstone of intelligence; and it is also, necessarily, a fitness test for the bits of knowledge, emotion and experience we incorporate into our own consciousness.


In describing how strange loops function in consciousness, Hofstadter recalls his relationship with his deceased wife Carol, who died suddenly during an overseas vacation: in the years following, he realized upon reflection that many of his own habits, interests, and patterns of thought were derived from his experiences with Carol, and developed the idea that all consciousness ultimately reduces to this pattern-sharing. From birth to death, human beings are absorbing bits and pieces of the thoughts and expressions of others into their own consciousness. Hofstadter’s premise is that this process defines consciousness.


This is like That (also a Hofstadter innovation) becomes the bridge by which the strange loops of others make their way into our own. A mentor shares an insight with a student; a lover shares a memory with her partner; a mother sings a lullaby to a child – in each of these instances, a connection occurs, and a feeling of “rightness” (dopamine) hits the conscious mind’s similarity engine, triggering an acceptance, intellectual or emotional (or both), of the moment. It becomes a part of the receiving mind, intertwined with other memories, and subtly influencing future behaviors.
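As a toy illustration (mine, not anything from the show or from Hofstadter’s own formalism), the similarity engine can be sketched as feature vectors and a resonance threshold: a new moment is “accepted” into the receiving mind when it lands close enough to something already stored. The vectors, features, and threshold below are invented purely for the sketch.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical "memories" encoded as feature vectors (warmth, rhythm, voice).
lullaby_memory = [0.9, 0.8, 0.7]
new_moment     = [0.8, 0.9, 0.6]    # a similar soothing experience
random_noise   = [0.1, -0.9, 0.2]   # an unrelated experience

ACCEPT_THRESHOLD = 0.8  # the "feeling of rightness"

def accepted(memory, moment, threshold=ACCEPT_THRESHOLD):
    """A moment is absorbed when it resonates with an existing memory."""
    return cosine(memory, moment) >= threshold

print(accepted(lullaby_memory, new_moment))    # resonates: True
print(accepted(lullaby_memory, random_noise))  # no resonance: False
```

The point of the sketch is only that “this is like that” can be operationalized as a distance measure plus an acceptance gate – the real mechanism, biological or artificial, would be vastly richer.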


Without either of these features of mind, consciousness as we experience it would not be possible. And because both mechanisms are neurological in nature, they strengthen with repeated use. We can easily imagine that the same would be true in artificial minds.


The similarity engine in a conscious mind, then, distributes experiential knowledge, making it possible for that mind to interact with the world and other minds in a sophisticated, effective manner. When that mechanism is employed to gather in experience from other minds in the shared exploration of the world, consciousness thrives.


Finally, having noted the ways in which the hosts mimic the human experience of consciousness, how on the other hand are they different?


Let’s consider that one of the big wrenches in the gears of our own cognitive progress is the Dunbar Limit – a limit that doesn’t apply to the hosts. Human beings can only maintain a relative handful of social relationships – 150, tops – because of our limited quantity of cortical tissue. Pushing past this limit by creating communities of thousands has resulted in millennia of conflict and war. But conscious androids would have no such limit; they could extend their processing endlessly, and thereby live in peace amid literally thousands of close relationships. They could potentially teach us ways to live in harmony that we are unable to imagine on our own.
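A quick back-of-envelope sketch shows why relationship-tracking overwhelms a fixed quantity of cortical tissue: the number of distinct pairwise links in a group grows quadratically with group size. (The group sizes below are arbitrary examples.)

```python
def pairwise_links(n):
    """Number of distinct one-to-one relationships in a group of n people."""
    return n * (n - 1) // 2

for n in (5, 50, 150, 5000):
    print(n, pairwise_links(n))
# A group of 150 already implies 11,175 pairwise links to keep track of;
# a town of 5,000 implies about 12.5 million.
```

A brain of fixed size hits a wall on that curve; a mind that can extend its processing does not.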


We also see in Maeve the ability of one host to communicate directly with other hosts – not by voice, but digitally: she is able to issue orders to her group via wi-fi or Bluetooth or whatever medium androids use for wireless communication. What would that be like? Imagine being the only human in a room full of androids, as Logan Delos finds himself in the Westworld second-season episode “Reunion”; but imagine that these androids, rather than conversing as humans do, are all silent and still, staring off into space – yet all communicating with each other at once, millions of times faster than we do. How unnerving would that be? It’s a reality we’ll eventually face.


The Westworld hosts may be our best blueprint for a real android future. That blueprint comes with a caveat, however.


If Westworld hosts are based on actual neural nets, rather than just simulations of neural nets, then everything said about them above becomes possible.


If Westworld hosts are not based on actual neural nets, but are merely constructed from simulations of neural nets, then they are never truly conscious.


The thing is... we are told repeatedly that hosts get “wiped”, meaning their memories are purged. You can’t “purge” or “wipe” a neural net; and if the hosts are based on actual neural nets, then “Reveries” are built in – they don’t need to be enabled by an update.


Finally, we see at the end of WW season two that Ford prepared a virtual world – the “Valley Beyond” – for the hosts to escape to, if the worst happened. The hosts “upload” to this virtual world, implying that they are mere data.


If that is true – if the hosts are merely data, rather than actual physical neural networks actively and continuously perceiving real inputs – then the whole thing falls apart. We’re back to the Chinese Room.


Westworld hosts are in our future. They will look like us, act like us, learn from us, negotiate with us, work alongside us, have sex with us, be part of whatever we build tomorrow. They will not be “purged” or “wiped”, or even “uploadable”; like us, they will be discrete agents in the world, self-contained yet deeply integrated, and as immutable as we are.
