No, AI Isn't Taking Over the World

Tagged: op-ed, scholastic | 12/15/16 || 1:43AM

Robots are becoming increasingly involved in our lives--according to billionaire Elon Musk, too involved. Musk and his friends are investing billions into researching the dangers of artificial intelligence and illuminating the deep ethical consequences of robotics, raising an important question: should robots have ethics?

All debates about artificial intelligence begin with an analysis of human intelligence. The process by which we think has long been credited to the hidden, complex network of the brain; building a machine with those capabilities, surely, is impossible--preposterous, even--right?

Well, perhaps.

Simple choices and moral decisions are made by perceiving our surroundings, fitting those perceptions to moral guidelines, and deciding on an appropriate course of action. A robot equipped with the right variety of sensors can likewise make its own observations, analyze them against a moral code, and choose a course of action, thinking and learning through techniques such as neural networks. A neural network consists of layers of nodes joined by weighted connections, loosely emulating the human brain--given enough data, it can in principle learn to approximate almost any pattern.
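To make that idea concrete, here is a minimal sketch of such a network in Python. It is a toy example only--the "sensor readings," layer sizes, and labels are invented for illustration and stand in for whatever a real robot would actually perceive and decide.

```python
# A toy feed-forward neural network: layers of nodes joined by weights,
# adjusted from examples. All data here is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Pretend sensor readings (4 features) and a yes/no "appropriate action" label.
X = rng.random((200, 4))
y = (X[:, 0] + X[:, 1] > X[:, 2] + X[:, 3]).astype(float).reshape(-1, 1)

# One hidden layer of 8 nodes; the two matrices are the interconnected weights.
W1 = rng.normal(0, 0.5, (4, 8))
W2 = rng.normal(0, 0.5, (8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent: repeatedly nudge the weights to reduce prediction error.
lr = 0.5
for _ in range(3000):
    h = sigmoid(X @ W1)            # hidden-layer activations
    out = sigmoid(h @ W2)          # the network's prediction
    err = out - y
    # Backpropagate the error through both layers (mean gradient over samples).
    delta_out = err * out * (1 - out)
    grad_W2 = h.T @ delta_out / len(X)
    delta_h = (delta_out @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ delta_h / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print("training accuracy:", ((out > 0.5) == y).mean())
```

In a real robot the inputs would be camera, microphone, or touch readings and the outputs candidate actions, but the mechanism--observe, weigh, adjust from experience--is the same.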

Complex thought patterns and abstraction, on the other hand, remain a human-exclusive trait. Fortunately, ethical agency can be achieved without higher-level thinking.

Knowing this enables us to progress past the plausibility of "can" to the dilemma of "should." Robots are integrating ever more deeply into our lives, from the infamous Roomba vacuum to the nursing caretaker Robear, and the machine-integrated society of the future will constantly encounter ethically sensitive situations. Should a caretaker robot prioritize an urgent call or its owner's privacy? Should the autonomous cars of Tesla and Google swerve to save the driver while endangering bystanders, or sacrifice the owner to minimize casualties? These delicate situations make it imperative for such automatons to hold ethical agency--a code of ethics by which to abide and upon which to base decisions.
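One simple way such a code might be encoded is as an ordered list of rules the robot consults before acting. The sketch below is hypothetical--the rules, the situation fields, and the priorities are invented for illustration, not any shipping robot's policy--but it shows how "urgent call versus privacy" can be settled by an explicit ordering rather than left to chance.

```python
# A hypothetical "code of ethics" as prioritized rules a caretaker robot
# checks before acting. Rules and situation fields are invented examples.
from dataclasses import dataclass

@dataclass
class Situation:
    owner_in_danger: bool
    urgent_call_incoming: bool
    owner_requested_privacy: bool

# Rules listed from highest to lowest priority: (condition, action).
ETHICAL_CODE = [
    (lambda s: s.owner_in_danger,         "call emergency services"),
    (lambda s: s.urgent_call_incoming,    "announce the call and ask permission to interrupt"),
    (lambda s: s.owner_requested_privacy, "stay out of the room"),
]

def decide(situation: Situation) -> str:
    """Return the first action whose rule applies, else carry on with chores."""
    for applies, action in ETHICAL_CODE:
        if applies(situation):
            return action
    return "continue routine tasks"

# Example: an urgent call arrives while the owner has asked for privacy.
print(decide(Situation(owner_in_danger=False,
                       urgent_call_incoming=True,
                       owner_requested_privacy=True)))
# -> "announce the call and ask permission to interrupt"
```

Real proposals are far richer than a priority list, but even this much makes the robot's trade-offs explicit and auditable instead of accidental.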

Companionship robots pose the most precarious situation. Given privacy concerns alone, these robots need an ethical code; according to researchers such as Dr. Matthias Scheutz of Tufts University, the actions of an oblivious automaton may be "insensitive to social norms based on consideration." Scheutz found that when a machine pursues its directive (e.g., cleaning the house) with no regard for its owner, it makes ethical mistakes such as intruding during an emotionally sensitive moment.

Another fault of household robotics lies in emotional connection. Studies show that most modern caretaking robots receive low ratings--an average of 3.4 out of 10--due to an inability to form long-lasting, meaningful connections with their owners; these robots are simply incapable of the sensitivity that social interaction requires. Dr. Gordon Briggs, another scholar at Tufts' Human-Robot Interaction Laboratory, has conducted studies showing that as household robots perform tasks, their owners grow more and more appreciative of the service. The owner develops a sort of emotional attachment, but the robot is incapable of ethical reaction and therefore appears "cold-hearted and unemotional."

Ethical dilemmas are not limited to caretaking. Assistive robotics raises the same concerns, albeit in a more tangible and serious manner. Scholars such as Dr. Wendell Wallach of Yale University, a celebrity of sorts in the field of robot ethics, have criticized the lack of ethical agency in modern medical systems, particularly APACHE. In modern hospitals, APACHE systems help run the intensive care units that house patients in critical condition. APACHE enjoys a perilous autonomy: the freedom to choose any course of action with no programmed ethics. Without moral agency, it may pursue a logical yet unethical directive, posing a lethal risk to its patients. In therapeutic robotics, an automaton may push a patient through exercises with no regard for distress. Without the proper ethics to care for its patient, an assistive robot is liable to cause emotional and physical harm.

However, with appropriate ethical guidelines, these machines will be capable of implicit learning and ethical reaction.

With a moral agency, the robot can respect the owner's boundaries and identify intrusions of privacy.

With emotional sensitivity, the robot can respond to its owner in an ethically appropriate manner.

With a code of ethics, medical and assistive robotics can avoid unethical directives.

So, what's stopping us from pursuing these ethics?

The same fear-mongering demagogues, Elon Musk among them, who raise the issue of ethics also raise science-fiction scenarios of robots learning on their own and staging an eventual rebellion. This frightens people and feeds a phenomenon known as the "uncanny valley." Scheutz and Wallach have both found, in independent studies, that robots given ethical agency begin to approach a human-like existence, possibly even passing the kind of Turing test depicted in science fiction such as Ex Machina. Unfortunately, while in Ex Machina the researchers are thrilled by the capabilities of deep learning, in reality most respondents fear that autonomy and react negatively, producing the dip in approval known as the uncanny valley. In short, as machines grow more humanlike, their approval rises steadily until they are almost human; at that point, approval drops dramatically, recovering only once they become indistinguishable from humans.

Therefore, the only obstacle to ethical progress in the field of robotics is humankind--we fear the impossible too greatly to permit the possible. Only as we become less afraid of the fictional pitfalls of robotics will we be able to address the real concern of ethics.