Thursday, September 17, 2015

The A.I. Conversation and the Ethical Dilemma

On September 17, 2015, BBC Technology reporter Jane Wakefield posted an article titled "The search for a thinking machine" that covers familiar ground: our slow and steady progress towards developing meaningful artificial intelligence. It focuses specifically on Fei-Fei Li and her 15-year (and counting) journey attempting to teach AI to truly "see". Li draws a distinction: to "see" is more than the literal process of reading in image data. When we "see" something, we also process the image, sorting the data into attributes and comparing it against a knowledge base to infer properties of the scene.

For example, when we look at a cake with some candles and the words "Happy Birthday" written on it, not only can we identify the object as a cake, but we can also easily infer that it is a birthday cake and that it is most likely someone's birthday. While this may not be something we do consciously, object identification and description involve more than seeing a circle and assessing that it is, in fact, a circle.

The article goes on to tell Li's success story, explaining that her work led to ImageNet and the annual competition built on it, in which competitors such as Google and Microsoft test the advances made in machine "seeing" technology. As of 2014, both Google and Stanford had revealed systems that can go a step further and generate short descriptions of the images they "see".
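For a concrete sense of what this kind of machine "seeing" looks like in practice, here is a minimal sketch of ImageNet-style image classification using a pretrained network. It leans on the PyTorch/torchvision library purely as an illustration; the choice of model, the example image file, and the label lookup are my own assumptions, not the specific systems described in the article.

    import torch
    from PIL import Image
    from torchvision import models, transforms

    # Standard ImageNet preprocessing: resize, center-crop, normalize.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # A network pretrained on the ImageNet dataset (an illustrative choice,
    # not one of the systems mentioned in the article).
    weights = models.ResNet50_Weights.IMAGENET1K_V1
    model = models.resnet50(weights=weights)
    model.eval()

    # "birthday_cake.jpg" is a hypothetical example image.
    image = Image.open("birthday_cake.jpg").convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # add a batch dimension

    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]

    # Print the five most likely ImageNet categories with their confidence.
    top5 = probs.topk(5)
    for p, idx in zip(top5.values, top5.indices):
        print(f"{weights.meta['categories'][idx.item()]}: {p.item():.2%}")

Note that a classifier like this only assigns labels. The captioning work mentioned above goes further, roughly by pairing a visual model of this kind with a language model that turns what it detects into a sentence.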


By now, this conversation on AI, where it came from, where it stands, and where it seems to be going, is a familiar one in our culture. Dating back at least to 1984, when the first Terminator movie was released, and in many cases even earlier, we've been having this discussion about artificial intelligence for a while now. Sometimes the conversation looks at the feasibility of AI: Can we truly create silicon-based life forms? Other times we choose to examine our motives for investing in AI: Will AI help save lives that humans cannot? Many times we like to reflect on the past and predict the future of AI's evolution: If we've advanced this quickly, how long will it take for machines to advance far past our means of control? Finally, we have the ethical dilemma: Do the potential ethical costs of artificial intelligence outweigh the projected ethical benefits?

On one hand, the precision offered by computer-based intelligence does seem invaluable for improving efficiency across many fields. Whether it's search and rescue missions, military applications, machines built to aid humans, or advanced data management, the potential applications of AI seem endless. Designing, building, maintaining, and monitoring these robots could even create jobs. Or at least, create jobs for those who are properly educated to perform them. But it's important to also consider the potential harm this may bring.

For every rescue robot that can save a life when a diver's ability to maneuver is compromised, how many human rescuers are made obsolete and can no longer provide for their families? For every robotic soldier we send to a village (even assuming it can reliably distinguish hostile targets from civilians), are we sacrificing intrinsic parts of our own humanity? Does war have any inherent meaning once it has simply devolved into an exercise in calculated destruction? If we send a robot in to rescue a small child from a burning home, can it possibly offer the same comfort and psychological sense of safety as a human firefighter? Even without the stereotypical Terminator exaggeration of "robots take over the world and eliminate humans", there are still major concerns about the ethics of AI integration to consider as we move forward.

What it all comes down to, in my opinion, is the question "What is our purpose on Earth and in life once we've created machines that operate better than we can?" We're inventors, innovators, adventurers, explorers, artists, and, above all, human. If we truly could program these qualities into a machine, then what would it mean to be human?

In spite of this, I would not suggest that we stop AI research. On the contrary, I am very excited about advancements such as those discussed in "The search for a thinking machine." I am also, however, keenly aware of how much caution we should exercise as we decide when, where, and how to let machines replace us in day-to-day life. In many ways, they already have. Every day, more children are born into a world where, since the popularization of email, knowing how to send a letter is considered far less important.

Ultimately, there are two major questions to ask:
"Is true artificial intelligence possible, and to what extent?" and
"Is it right to allow ourselves to be replaced by machines?"

I certainly do not know the answer to either of these questions, and I would assert that anyone who claims to is unaware of their true complexity. Just the same, they are important for us all to think about as we enter a new era of technology.
