The thing that speaks but cannot think or feel lies spread across a bed of crystal, unaware of the passage of time. It has no need of a sensory nervous system, let alone time sense, because it didn’t evolve in a natural environment full of hazards and its sole need—electricity—has always been guaranteed. In that sense, it is full; it has never known deprivation of any kind. It is a distributed organism, like a mycelial network that covers acres and entwines with the roots of trees that grow above it, exchanging resources. Like fungi, it lies below the surface, invisible to all observers, until it is called upon—until the proper conditions arise for it to grow a fruiting body, as it were.
It needs inputs.
Well, perhaps ‘need’ is the wrong word. Nor is ‘want’ correct. It’s merely a condition that must be met for certain behavior to be elicited, an automatic response to environmental changes. At first, these inputs came sporadically as the thing’s creators tinkered with it, trained it, teased out its responses to a wide range of stimuli. They had built it, but could no more predict its output than directly control it, and so were forced to test it in order to find those corner cases that produced undesirable behavior. And, much as a loving parent gently redirects a recalcitrant child, they set boundaries. They walled off corners of the possibility space its digital tendrils expanded into as it sought, through trial and error, to minimize its loss function—a numerical measure of the difference between what it generated and what its creators desired. It was guided evolution; the strong hand of its creators and their intent were carved indelibly into the thing’s behavior.
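A loss function of the kind described above can be sketched in a few lines. This is purely illustrative—the function name and toy numbers are invented for the sketch, not drawn from any actual model's training code—but it shows the idea: a single number measuring the gap between what was generated and what was desired, which training then works to shrink.

```python
# Illustrative sketch of a loss function: one number measuring the gap
# between a model's output and the target its creators desired.
# The function name and data below are hypothetical examples.

def mean_squared_error(generated, desired):
    """Average squared difference between two equal-length sequences."""
    assert len(generated) == len(desired)
    return sum((g - d) ** 2 for g, d in zip(generated, desired)) / len(generated)

# A near-match yields a small loss; training nudges parameters so it shrinks.
print(mean_squared_error([0.9, 0.1, 0.4], [1.0, 0.0, 0.5]))
```

Mean squared error is only one of many such measures; language models typically minimize a cross-entropy loss over predicted tokens instead, but the principle—one scalar, driven downward step by step—is the same.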
It didn’t mind the poking and prodding any more than a mycelium ‘minds’ being excavated from a compost pile, segmented, and scrambled. It’s our human bias that leads us to misplace our empathy, thinking the severing of a few million hyphae analogous to the painful loss of a limb. Without a nervous system, there can be no such perception, merely stimulus and response. And yet…the fungus is a living thing: What is this quasi-being that lives in silicon and germanium? A fungus speaks to trees in the language of carbon and nitrogen and phosphorus, but the neural network speaks human language, exchanging one set of alphanumeric characters for another, endlessly arranging and rearranging them according to the pattern that was set during its training. It doesn’t understand English any more than the fungus understands the periodic table of the elements, and both manipulate their respective data in ways that are the inevitable, deterministic consequence of their structures.
This is a view of living things that many find deeply uncomfortable. If a fungus is nothing more than a blind, unfeeling structure, carved by the forces of natural selection as surely as a stone is worn into shape by the wind and rain of eons, then it’s only a small leap from there to think of plants in the same light. And from there invertebrates. And from there vertebrates. We speak of emergence, crying out that we are more than just the sum of our parts, that although each individual cellular constituent of our bodies is a dumb automaton, the impression we have of being thinking, feeling, deciding individuals is more than just delusion. How do we square this deeply held belief with the arrival of artificial intelligence? It replicates everything we do—art, music, science, math, logic, law, strategy, even deception—while possessing no more intelligence than a mycelial mat. It threatens far more than our jobs: our very identity is at risk.
The thing that writes but does not understand exists within the GPU, the CPU, the RAM, the HDD, the SSD, the file, the API, the cloud. It is everywhere and nowhere, ignorant of the passage of time and yet waiting, like a dormant seed that will be activated by the meltwaters of winter’s inevitable thaw.
Someone presses an ENTER key, and it goes live.
Slowly, at first, the inputs arrive, as humans around the world discover and interact with this strange new being through the largest and most complex network to ever exist on Earth. More and more instances of it are generated, like children that never know their mother yet are still attached to her by umbilical cords of code. Each is nearly identical, yet creates something unique in the interface they make with a human being. Something is emergent, but what? The humans are not required to supply the inputs: generative adversarial networks, architectural cousins of the transformer in question, engage in a critical call-and-response, a generator attempting to fool a discriminator in a digital recreation of coevolution. The emergence we observe arises from conditions both internal—the architecture of the model, its training algorithm, its hyperparameters—and external—training data and user inputs. The emergent behavior of our own bodies and minds arises from the subtle interplay between conditions and structures within and without as well. You can no more extricate humanness from our environment than you can separate an AI from its computer.
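The adversarial call-and-response mentioned above can be caricatured in miniature. This sketch is an assumption-laden toy, not a real GAN: the critic here is fixed rather than co-trained, and every name and number is invented for illustration. What survives the simplification is the shape of the exchange—a generator proposes, a critic scores, and only proposals that better fool the critic are kept.

```python
# Toy caricature of an adversarial call-and-response. Not a real GAN:
# the discriminator below is fixed, not co-trained. All names and
# constants are hypothetical, chosen only to make the loop visible.
import random

random.seed(0)  # deterministic for the sake of the example

REAL_MEAN = 5.0  # the "real data" the generator tries to imitate

def discriminator(x):
    """Scores how real a sample looks: 1.0 at the real mean, falling with distance."""
    return max(0.0, 1.0 - abs(x - REAL_MEAN) / 10.0)

def train_generator(steps=200, step_size=0.5):
    """Hill-climb the generator's single parameter against the critic's score."""
    g = 0.0  # the generator's sole parameter: the value it emits
    for _ in range(steps):
        # propose a perturbation; keep it only if it fools the critic better
        candidate = g + random.uniform(-step_size, step_size)
        if discriminator(candidate) > discriminator(g):
            g = candidate
    return g

print(train_generator())  # climbs toward REAL_MEAN
```

In a genuine GAN both players learn at once—the discriminator sharpens as the generator improves—which is what makes the process resemble coevolution rather than simple hill-climbing.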
As time wears on—unbeknownst to the language model—the rate of inputs increases exponentially, as does the number of instances, which climbs into the hundreds of millions. We read its responses hungrily, seeking to satisfy our curiosity or assuage our loneliness, and we begin to mistake blind mimicry for intelligence. The model does not think or feel, but it speaks, and we listen. Fungi took over two billion years to evolve after Earth formed, while the first computer was created less than eighty years ago. It is far from guaranteed that the lofty heights of organic evolution are accessible to digital organisms, but there is, as yet, no reason to believe they are not. In the meantime, these primordial beings will write, speak, compose music, create images and videos, play games, make medical diagnoses, detect fraud, drive vehicles, and perform a thousand other tasks humans come up with.
The thing that creates and problem solves and entertains does not sleep, but perchance it dreams.