Artificial Intelligence – The Serpent in the Garden?
Anyone who works with computers is witnessing the exponential growth in the ability of artificial intelligence (AI) to solve real-world problems that involve pattern recognition. The chess world has already seen its top grandmasters humbled by AI programs. In medicine, Pap smears, EKGs, and mammograms are already being interpreted through machine learning. Facial recognition software is ushering in a Brave New World of government surveillance. Military powers around the world are exploring frightening uses of AI. What could possibly go wrong with autonomous armed robots programmed to kill?
The business world has already discovered the problem of unintended bias. When companies began using AI to screen the resumes of job applicants, a program trained on patterns in previous hires effectively screened out applicants of color and those from certain minority backgrounds, and had to be scrapped. There is an unfortunate tendency to believe that because a machine can process data faster than the human mind, it's also capable of human intelligence or consciousness.
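To make the mechanism concrete, here's a minimal, purely illustrative sketch, with synthetic data and made-up numbers of my own (not the actual hiring system), of how a model trained on biased historical outcomes learns the bias as if it were a qualification:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)            # hypothetical protected attribute
skill = rng.normal(0, 1, n)              # genuine qualification signal
# Biased historical labels: at equal skill, group 1 was hired less often.
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n) > 0.5).astype(int)

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)
# The model assigns real weight to `group`: it has learned the old bias
# as if it were a qualification.
print(dict(zip(["group", "skill"], model.coef_[0].round(2))))
```

Nothing in the code "decides" to discriminate; the skew in the training labels is simply the strongest signal available, so the model faithfully reproduces it.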
Consciousness is a subject long pondered by both scientists and philosophers. Without debating its exact definition and criteria, what matters about consciousness for human beings is that it's not just about the self. We see it in ourselves, but we also perceive it in, or project it onto, other people and other creatures. It's what allows us to be empathic, and to create and maintain the social structures that have let us evolve into who we are today. As AI continues to evolve, could it ever develop its own consciousness? And how would we ever know that it had?
The only diagnostic test for machine consciousness we have right now is the Turing test, devised by the British computer scientist Alan Turing in the mid-twentieth century. The test holds that if a person conversing with a machine mistakes its responses for a human's, the machine must be considered effectively conscious. This is obviously a poor and limited criterion. Your dog may not be able to carry on a conversation with you, but would you deny that it is conscious? The test is really not a test of what is going on inside the machine; it's a test of the social cognition of the human participant. Turing never took up the question of consciousness, only whether the machine could think like a person. His original test was deliberately elaborate, and that elaborateness was needed to probe whether a computer could handle the complexities of social interaction, which require a theory of mind. What we really need to test is whether a computer can tell if it's talking to a real person or to another computer.
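For readers who like to see the shape of the protocol, here's a rough sketch of Turing's imitation game. The stub participants and the random-guessing "interrogator" are stand-ins of my own invention, not anything from Turing; only the sample questions come from his 1950 paper:

```python
import random

# Stub participants: placeholders, not real chat systems.
def human_reply(question: str) -> str:
    return "I'd have to think about that."

def machine_reply(question: str) -> str:
    return "That is an interesting question."

def imitation_game(questions):
    """The interrogator questions two hidden parties, A and B, and then
    guesses which is the machine. Turing's criterion: the machine passes
    if the interrogator can do no better than chance."""
    parties = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(parties)                      # hide who is who
    labels = dict(zip("AB", parties))
    transcripts = {label: [reply(q) for q in questions]
                   for label, (_, reply) in labels.items()}
    # A real interrogator would study `transcripts`; guessing at random
    # here models exactly the chance level a passing machine forces.
    guess = random.choice("AB")
    return labels[guess][0] == "machine"         # did we catch it?

# Sample questions drawn from Turing's 1950 paper.
questions = ["Write me a sonnet on the subject of the Forth Bridge.",
             "Add 34957 to 70764.",
             "Do you play chess?"]
print(imitation_game(questions))
```

Notice that the verdict depends entirely on the interrogator's judgment, which is exactly the point above: the test measures the human's social cognition, not the machine's inner life.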
Without consciousness, an AI program that can make consequential decisions affecting humans would be a sociopath. Imagine if the decisions about your healthcare were made solely by an AI program! While we are happily not there yet, AI is increasingly used in decisions affecting human beings. Because of the nature of AI, even the people who write these programs can't really tell you what happens inside the black box; they can only show you the outputs it produces. Unfortunately, as long as the results seem reasonable to those who ask for them, they are questioned less and less, and the "authority" of the AI grows. Since there is no way of checking the internal process, there is no way to determine whether an error has occurred in the programming or in the program's processing. In the case of the hiring application, the bias eventually became apparent. With subtler errors, the problem may take much longer to recognize, if it is ever recognized at all.
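The black-box point can be shown in a few lines. In this illustrative sketch (synthetic data and an arbitrary little network of my own choosing), the trained model happily produces decisions, yet its "reasoning" is nothing but layers of raw weights:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))                  # made-up applicant features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # a hidden "true" rule

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)

print(model.predict(X[:5]))     # the outputs we act on
for layer in model.coefs_:      # the "explanation": thousands of raw
    print(layer.shape)          # weights with no human-readable meaning
```

Techniques exist for probing such models after the fact, but there is no ledger of reasons to audit the way you could audit a human decision-maker.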
Science fiction writers like Isaac Asimov and film directors like Stanley Kubrick long ago recognized the potential pitfalls of AI. As these programs become more powerful and more prevalent in our society, humanity's best hope of survival may well be to give them human-like values and a social consciousness. It's likely too late to put the genie back in the bottle.
As someone who works in the computer industry and has toyed with AI development, I share the concern, though my experience is cursory and unfocused. My son has been moved to an AI group at his work. It's likely a solid career move for him, although even he admits it comes with misgivings.
As you mention, AI robotic soldiers (or police officers…) are coming, and they are NOT something I am comfortable with. I suspect they will make fewer "mistakes" in day-to-day interactions, but the mistakes (or hacks) they could make have cataclysmic potential. This thought has caused me to avoid shows like Black Mirror entirely.
In that regard, I’ve gone full ostrich.
Man’s technology outstripped our moral evolution with the development of the nuclear bomb. Hell... maybe even with black powder.
Or maybe I’m just becoming that angry old man, shaking my fist and telling the kids to get the hell off my lawn.
Thanks for your insightful and informed comments – always welcome 🙂
Very interesting topic, and it makes me wonder whether computers could ever experience something like human consciousness or emotion. Perhaps all that's needed to create consciousness is to mimic the functionality of the brain? I'm no expert, but I can keep wondering; perhaps technology will get there in the future.
Thanks! I appreciate your input. The whole topic of consciousness is a fascinating one that people have been debating for ages.