Corporations Are People? The Race For Consciousness Between Systems and Technology
What if Romney is right? Not in a political, corporations-are-made-of-people sense – but in a philosophical sense? In the sense that corporations and businesses have achieved a sort of sentience – not identical to human consciousness, but evocative of it? What if it’s not Skynet that evolves, but NASDAQ?
Philosophers love this sort of question. It’s ripe for unanswerable discussion. We can’t begin to understand what an animal’s mental experience is like (see Nagel’s “What Is It Like To Be A Bat?”), much less another person’s. We’re not even clear on what makes us, us (see Dennett’s “Where Am I?”).
One thing is clear: consciousness is a matter of degree. Let’s assume, for a moment, that we accept that consciousness derives from the brain – that is, that there’s no separate soul which holds our consciousness. (After all, this is also a matter of degree. Some animals, who few would argue have “souls” in the religious sense, display traits indicating self-awareness.) If consciousness derives from the brain, then our ability to recognize ourselves, develop complex patterns of language, establish our society – all of these things happen because of the interaction of electrical signals and chemicals in a few pounds of matter in our skulls. And it means that these processes should theoretically be replicable.
People have been trying for a long time to replicate consciousness. Long ago, magicians sought to breathe life into inanimate objects. Today, it’s simple to create a computer program that takes input and responds in a human-like way. This is the basis of Alan Turing’s famous test, after all – that a computer will have achieved thought if it can fool a human into believing it is human. That, in other words, a person presented with the output of the computer would be unable to tell that it was a computer. This is how we can tell that people are people – they respond and behave exactly as we’d expect people to.
One of the more successful attempts to pass the Turing test is Cleverbot. Instead of responding to a conversation from a database of answers programmed by its creators, Cleverbot develops an evolving set of responses based on the conversations it has. The programmers, rather than try to predict every possible response, gave Cleverbot a set of rules from which to make its decisions. Cleverbot follows those rules, and learns from prior conversations – much as a child would, if you’ll forgive the logical leap.
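Cleverbot’s actual rules are proprietary, but the idea of learning responses from past conversations can be sketched in a few lines. This toy illustration (not the real algorithm) remembers what its conversation partner said after each prompt, and reuses those remembered replies when it sees a similar prompt later:

```python
import difflib

class ToyChatbot:
    """Toy Cleverbot-style learner (illustrative only, not Cleverbot's rules):
    it replies by recalling what a partner once said after a similar prompt."""

    def __init__(self):
        self.memory = {}          # prompt -> reply observed in past conversations
        self.last_prompt = None   # the most recent thing said to us

    def respond(self, prompt):
        # Rule: find the most similar remembered prompt and reuse its reply.
        matches = difflib.get_close_matches(prompt, self.memory, n=1, cutoff=0.4)
        reply = self.memory[matches[0]] if matches else "Tell me more."
        # Learn: what the human just said is a plausible reply to our last prompt.
        if self.last_prompt is not None:
            self.memory[self.last_prompt] = prompt
        self.last_prompt = prompt
        return reply

bot = ToyChatbot()
bot.respond("Hello there")         # nothing learned yet, falls back
bot.respond("How are you?")        # learns: "Hello there" -> "How are you?"
print(bot.respond("Hello there"))  # a similar prompt recalls the learned reply
```

The point of the sketch is that the programmer writes only the matching and memory rules; the actual responses come from whatever conversations the bot happens to have.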
The process works. At a Turing competition earlier this month, Cleverbot was judged to be human 59.3% of the time. Sound unimpressive? The actual humans scored only 63.3%.
You’ll note that I’m intentionally jumbling the terms “think,” “learn,” “live,” “consciousness,” and “self-awareness” to represent the intangible state of “personhood,” of “humanity.” A philosophical argument would be more rigorous about its language. But I’m not trying to make a philosophical argument. I’m trying to answer a question: what is the mental state of a complex business? Is it like a simple brain?
A recent study found that one-third of corporate stock trades in the UK were executed according to a set of pre-ordained rules (or algorithms), without any human intervention. Like Cleverbot, the system mimicked human behavior by obeying a set of rules it had been given. If humanity vanished from the Earth, but the system stayed online, it’s conceivable that the system would keep making decisions and making trades for centuries to come.
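Real trading algorithms are proprietary and far more sophisticated, but the flavor of a pre-ordained rule that could keep running unattended is easy to illustrate. This hypothetical example implements a classic moving-average crossover rule: buy when the short-term average price rises above the long-term average, sell when it falls below, hold otherwise:

```python
from collections import deque

def make_trader(short=3, long=5):
    """Toy rule-based trader (illustrative only, not any real system's rules):
    compare a short-term and a long-term moving average of recent prices."""
    prices = deque(maxlen=long)  # rolling window of the last `long` prices

    def decide(price):
        prices.append(price)
        if len(prices) < long:
            return "hold"  # not enough history to apply the rule yet
        short_avg = sum(list(prices)[-short:]) / short
        long_avg = sum(prices) / long
        if short_avg > long_avg:
            return "buy"
        if short_avg < long_avg:
            return "sell"
        return "hold"

    return decide

decide = make_trader()
feed = [10, 10, 10, 10, 10, 12, 13, 14, 9, 8]
decisions = [decide(p) for p in feed]
print(decisions)  # holds while prices are flat, buys on the rise, sells on the drop
```

Feed it prices forever and it will trade forever – no human, and no understanding, required.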
But is this akin to a water wheel? Is it akin to a calculator? Or is it akin to a person executing a simple set of tasks? Akin to a snail? Does consciousness necessitate that the purchasing computers develop their own rules, their own efficiencies?
A company does this – though it’s the people in the company who make the decisions and set the rules. In the brain that is a corporation, we are the neurons. But the company also responds to external stimuli, adjusting without conscious intervention. Through both its hardware and its humanware, a company adjusts its behavior and its rules. The neurons respond – but they can’t necessarily anticipate the direction the company will take based on their actions.
Admittedly, it’s a stretch. But there are two reasons that a sentient corporation or marketplace seems more likely than a sentient military computer system, a la Skynet.
The first is that the thoughts of the company, like those of a bat, would be intangible to us – and probably completely incomprehensible. We can only imagine our own experience, and would have no idea what it means for a company to be conscious. We probably wouldn’t recognize it.
The second? A company or market is much more likely to include external stimuli than is a closed military network. As in the case of Cleverbot, it’s exposure to unpredictability that allows rules to be broken. A structured system that predicts a stimulus and provides a response is necessarily less complex than one that engages unpredictable stimuli and responds according to the interpretation of loose rules.
Humans consistently learn more and more about the mental lives of the animals around us – discovering tool use among birds and language among dolphins. We recognize these things in part because they have counterparts in our own experience. We hold up human-shaped cutouts and figure out which of their actions shine through.
For a non-corporeal entity, we have no cutout to hold up. We see behaviors and never think to consider that they might lie somewhere on the slippery slope of cognition between rock and man, the way we would if we saw, say, a lizard following a similar set of rules. We are predisposed to seeing life as a collection of cells that matures and dies, not as a process of consciousness.
Apple or Goldman Sachs may be more than legal structures that produce goods or manipulate markets. They may be primitive, stupid consciousnesses that will, over time, spur the evolution of ever-more complex entities.
Probably not. But maybe.