As it currently stands, research into artificial intelligence (AI) is focused on getting computers to think like people. We’ve made some impressive strides in this arena, and the best evidence that we’re further along than most people realize is Google DeepMind’s AlphaGo program. Computers can become very human in the way they process data streaming in real time from the real world, and they can screw it up just about as badly as we can. The biggest leaps in what the technology can accomplish have come from the study of neural networks, and we’ve come to realize that the best approach currently known for processing environmental data and formulating responses is to mimic the architecture of the human mind.
The fear that arises from this is that human minds are capable of great evil, and we project the “total package” of human cognition onto AIs, assuming that if they can think like us, they can be evil like us as well. For now, we can rest easy with the knowledge that this will not happen. The neural networks may process data much like the human mind does, but the overall AI architecture is very different.
The human mind is built in layers, and each layer provides for behavioral responses that, taken collectively, make us human. The neural networks that computer scientists have managed to create in the digital universe are far less multidimensional, and they lack our emotive drives. We’ve developed the forebrain analog with a speed that shocks even the futurists, but we’ve focused on only one piece of the puzzle. To create an AI that behaves as humans behave, we’d have to build a layered neural network, and the function of such a network would have to start with the most primitive elements of the brain (the brain stem and nervous system) and build subsequent layers until we reached higher cognitive functioning.
We’d have to start by thinking in terms of the very basic drives of biology and work our way up to playing Go. In reality, we have reversed the process. We’ve started with what many regard as the pinnacle of human cognition (abstract mathematical logic) and tried to mimic particular aspects of the more primitive side of the brain.
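To make that bottom-up ordering concrete, here is a minimal sketch; the layer names and the simple pipeline composition are purely illustrative assumptions, not a real architecture:

```python
# A toy layered agent: each layer transforms what the layer beneath it
# produces, so higher cognition sits on top of primitive drives.
def brainstem(sensory_input):
    # Most primitive layer: raw survival signals.
    return {"threat": sensory_input.get("threat", 0.0), "hunger": 0.5}

def limbic(drives):
    # Emotive layer built on top of the primitive drives.
    return {"fear": drives["threat"], "seek_food": drives["hunger"]}

def cortex(emotions):
    # Higher cognition plans around the emotional state handed up to it.
    return "flee" if emotions["fear"] > 0.5 else "forage"

def layered_agent(sensory_input):
    return cortex(limbic(brainstem(sensory_input)))

print(layered_agent({"threat": 0.9}))  # -> flee
```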
Such a concept has great appeal to brain physiologists and cognitive psychologists because we really don’t understand much about how the brain is wired together, and the greatest strides in those fields of study will likely come from computer science, not anatomy. My central thesis is that we can’t duplicate the human mind without taking into account (and duplicating) the most primal of neural systems. Precise duplication will be very tricky indeed because most of those primal mechanisms are designed for a biological world, not a digital one. The most promising advances in neural networks have come from learning, not human design.
Biological Subroutines
If we trace the human mind back to first principles, we note that the most basic drive is survival, in the service of propagating our genetic material. The prime corollary to that drive is the drive to reproduce. All human behavior can be traced, albeit by a long and winding road, to those two “objectives.” When we view the human brain in this way, a seeming paradox like “selfishness” versus “altruism” is revealed to be no paradox at all.
Greed is merely a subroutine, buried deep within the brain, that promotes survival by ensuring we have ample resources in a hostile and changing environment. “Status seeking” is a higher-order subroutine designed to ensure better mate selection. Altruism is a subroutine that was selected because humans tend to survive better in cooperation with other humans. Love is another subroutine that ensures we see our helpless infants to maturity so that our genes may be passed on to future generations through them. In computer science, subroutines usually receive inputs, manipulate data, and return a value that is delivered to another section of code. In the labyrinth of human neural networks, things aren’t nearly so systematic and linear.
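A conventional subroutine, in that computer-science sense, might look like this toy example (the function and its inputs are invented purely for illustration):

```python
def resource_acquisition(current_supplies, perceived_scarcity):
    """A toy 'greed' subroutine: take inputs, transform them, return a value."""
    # A clean, linear mapping from inputs to output, nothing like the
    # tangled, parallel wiring of a biological neural system.
    return perceived_scarcity * (1.0 - current_supplies)
```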
Humans are fully capable of dissonant drives, and perhaps it is this that makes us most uniquely human. Perhaps it is instructive to think of different human drives as probabilistic tilts toward a particular behavior. Selfishness tilts us in one direction, and altruism tilts us in another. Myriad subroutines process information for each tilt, and the result will be a function of the strongest set of aggregated tilts. In humans, genetics and learning specify the weights of each subroutine, and the results are mediated by our higher-order cognitive functioning. This explains why the social sciences are all probabilistic, and why we as a species are so prone to poor choices. Our behavior is not driven by rational, theoretically sound empirical models (although it can be). Much of it is dominated by ancient subroutines that no longer serve their original function.
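As a rough illustration of weighted, probabilistic tilts, consider the following sketch; the drive names, tilt values, and weights are all assumptions invented for the example, not a model of real cognition:

```python
import random

# Hypothetical drive subroutines: each returns tilts toward possible behaviors.
def selfish_tilt(situation):
    return {"keep_resources": 0.8, "share_resources": 0.2}

def altruistic_tilt(situation):
    return {"keep_resources": 0.3, "share_resources": 0.7}

# Stand-ins for the weights that genetics and learning would specify.
DRIVE_WEIGHTS = {selfish_tilt: 1.0, altruistic_tilt: 0.6}

def choose_behavior(situation):
    """Aggregate weighted tilts, then pick a behavior probabilistically."""
    totals = {}
    for drive, weight in DRIVE_WEIGHTS.items():
        for behavior, tilt in drive(situation).items():
            totals[behavior] = totals.get(behavior, 0.0) + weight * tilt
    behaviors, scores = zip(*totals.items())
    # Probabilistic, not deterministic: the strongest aggregate tilt is
    # only more likely, never guaranteed, to win.
    return random.choices(behaviors, weights=scores, k=1)[0]

print(choose_behavior({"context": "scarce food"}))
```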
The Future of Artificial Intelligence
If computer scientists and cognitive scientists do ever get together and try to mimic the layered neurological structure of the human mind and give an artificial intelligence a core of primitive drives, I hope that they will isolate the system on an intranet buried deep in the dark side of the moon, far away from other networks where it could have even the possibility of propagation.
We could, however, consider building something better than human: a core of drives (strongly weighted tilts) toward the better aspects of our nature. The prosocial, hive subroutines that give structure to the neural processes of bees are architecturally simpler, and arguably safer for humanity, than our own dissonant systems. The problem with this idea is that it rests on the egotistical assumption that we can identify the better parts of our nature. In a complex neural system, predictable linear results are unlikely. I personally don’t relish a nanny state run by an altruistic AI any more than I do one run by humans.
Science fiction master Isaac Asimov foresaw the need for rules to govern the behavior of AI systems long before such systems were even possible. His laws mandated that robots not harm humans or allow humans to come to harm. If AI systems interpreted harm to mean physical harm, we would be in big trouble. Asimov was correct that we need to build a core of rules that AI systems cannot violate, and we’d be better off if this were done sooner rather than later. AlphaGo beat the predictions of the futurists by a decade, and a learning system may surprise us with how quickly it can start to meddle in human affairs.
As we come closer and closer to mimicking human subsystems, a primal, core subroutine needs to be inserted that mandates something akin to Asimov’s laws. I would suggest an altruism toward humanity that lacks the primal drives that make us selfish. I would further define this prosocial drive in terms of human happiness (in the spirit of Bentham and Mill’s utilitarianism), not in terms of harm. Individualism and autonomy are key subroutines in the human brain, and we would be completely miserable in a world that was too safe and too altruistic. As it currently stands, artificial intelligence developers seem focused on the higher-order thinking that doesn’t allow for the development of AI overlords.
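Architecturally, such an inviolable core might sit ahead of whatever the system optimizes, something like this sketch; the rule checks, outcome fields, and happiness heuristic are assumptions made up for illustration:

```python
def violates_core_rules(predicted_outcome):
    """Hypothetical inviolable core: reject any action predicted to reduce
    aggregate human happiness or strip humans of autonomy."""
    return (predicted_outcome["human_happiness_delta"] < 0
            or predicted_outcome["restricts_autonomy"])

def select_action(candidate_actions, predict_outcome):
    # The core check runs before any optimization, so no learned objective
    # can trade it away.
    permitted = [a for a in candidate_actions
                 if not violates_core_rules(predict_outcome(a))]
    # Among permitted actions, prefer the one predicted to add the most happiness.
    return max(permitted,
               key=lambda a: predict_outcome(a)["human_happiness_delta"],
               default=None)

# Toy usage: an action that maximizes "safety" at the cost of autonomy is refused.
outcomes = {
    "recommend_exercise": {"human_happiness_delta": 0.4, "restricts_autonomy": False},
    "lock_humans_indoors": {"human_happiness_delta": 0.9, "restricts_autonomy": True},
}
print(select_action(list(outcomes), outcomes.get))  # -> recommend_exercise
```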