Saturday, September 14, 2013

Robot dreams? Or human nightmares?

I came across an interesting article this morning about a research team developing artificial muscles for robots. “A research team from the National University of Singapore’s (NUS) Faculty of Engineering has created artificial muscles that can stretch five times their original length and could give robots the ability to lift up to 80 times their own weight. To put this in perspective, that would roughly be like an average 190 pound man bench pressing a 7.5-ton African elephant.” Robots with superhuman strength? How cool is that! It amazes me how far we have gotten in the field of robotics and artificial intelligence. I try to keep up with news about robotics, and I don’t think people quite realize how close researchers are to creating robots like those in Battlestar Galactica. It is very impressive. Reading this article soon got my seasoned, science-fiction-addicted brain thinking about where this will all lead. Have sci-fi authors like Philip K. Dick and Isaac Asimov been unintentionally prophesying our future? Do the negatives of robots outweigh the positives? And what do we all do when things go wrong? Who will save us?

*On a side note, sci-fi considers the morality and ethics of robots, their rights, and their roles, but it also largely focuses on their impact on human society. That's the part I'll focus on here.*

Robotics and artificial intelligence can drastically improve life as we know it. Medicine, space exploration, hazardous careers, and even food production stand to be enhanced. Robotic surgery is already in use: doctors can now use robots to make precise movements in delicate procedures that would otherwise be physically impossible for a human. In an accident, robots like these could save people, lift heavy objects, perform on-site surgery, and a bunch of other things. They can be our personal maids, they can help out at shops, and they can give us those massages whenever we need them (hopefully, keeping my fingers crossed). Robots with supercomputer AI brains will accelerate research of all kinds. They will change everything from education to travel to finance. The possibilities will be endless.

Ethics with respect to robotics and artificial intelligence has been a concern since before we even had the technical means to build robots. Isaac Asimov laid out the Three Laws of Robotics back in the early 1940s:
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In his short stories (the ones I have read, at least), these three laws are “hardwired” into every robot’s brain. But this brings up other issues of morality and ethics when the positronic robots act unexpectedly, or apply the rules in an unconventional way. An easily identifiable example is when the powerful AI VIKI (Virtual Interactive Kinetic Intelligence) prompts an evil robot uprising in I, Robot (for those of you who don’t know, the film isn't based on any one Isaac Asimov story in particular, but draws on plots from several of them). VIKI has the right idea about protecting people, but she applies it in a totalitarian way, “protecting” humans from their own inevitable self-destruction. The robots in I, Robot all possess superhuman strength, so humans are pretty much defenseless. When I think of the article I read about the artificial muscle research, I can't help wondering, “What chance do I stand against a robot that can easily toss 80 people like me as if we were basketballs?”
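Just for fun, here's a rough sketch (in Python, with a made-up toy "action" model of my own, not anything from Asimov or from real robotics) of how that hardwired precedence might look if you squashed it into code. The point is just the ordering: a lower Law is only consulted once the Laws above it are satisfied.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    """A candidate action, described only by the traits the Laws care about (all hypothetical)."""
    harms_human: bool = False        # would carrying this out injure a human?
    allows_human_harm: bool = False  # would it let a human come to harm through inaction?
    disobeys_human: bool = False     # does it ignore a human's order?
    endangers_self: bool = False     # does it put the robot itself at risk?
    ordered_by_human: bool = False   # was the action itself ordered by a human?

def forbidding_law(action: Action) -> Optional[str]:
    """Return the highest-priority Law the action violates, or None if it's allowed.

    The checks run in priority order, so a lower Law can never override a
    higher one -- that's the "hardwired" precedence.
    """
    # First Law: no injuring humans, no allowing harm through inaction.
    if action.harms_human or action.allows_human_harm:
        return "First Law"
    # Second Law: obey human orders (only reached if the First Law is satisfied).
    if action.disobeys_human:
        return "Second Law"
    # Third Law: self-preservation, unless a human order demands the risk,
    # because the Second Law outranks the Third.
    if action.endangers_self and not action.ordered_by_human:
        return "Third Law"
    return None

# A robot ordered into danger must go: the Second Law outranks the Third.
print(forbidding_law(Action(endangers_self=True, ordered_by_human=True)))  # None
# But no order can make it harm a person: the First Law outranks everything.
print(forbidding_law(Action(harms_human=True, ordered_by_human=True)))     # First Law
```

Of course, the whole drama in Asimov's stories (and in VIKI's uprising) comes from how slippery terms like "harm" turn out to be once a robot gets to interpret them for itself.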

Not really relevant, but the little robot guy makes the idea of a robot uprising so much cuter! Gotta love Borderlands (the video game).

Another thought: if everything goes wrong (say, an evil robot uprising), who do we blame for the mess? As Americans, we're pretty much programmed to point fingers when we're stuck in a rut. Will we even have a person to point fingers at? Or can we blame a robot? I feel like whoever built the robot, or paid to have it built, should take responsibility. But it's almost too easy to keep passing the blame down to the guy who manufactured the materials. It would become an endless chain of finger-pointing. And at that point, I'm not sure we will all have a Will Smith in our lives to save us.

I’m always excited when I encounter news about robotics and AI, but I can’t help feeling worried at the same time. I think of all the technology-related catastrophes that have been foreshadowed in books, and wonder whether one of those nightmares could ever come true. We’ve come a long way since caveman times, but once we hit our peak with robotics and AI, will we face a downward spiral and eventually create our own destruction? I hope not.

*The title is a reference to Isaac Asimov's short story collection Robot Dreams (for those of you who may have missed it).*

1 comment:

  1. I think this post is interesting. I remember the movie I, Robot with Will Smith. Of course, those 3 laws of robotics that you mentioned were being followed at first. Robots were subservient to humans and had to follow their every command. Then one day, the revolution finally started. It seemed that they made robots too smart for their own good!

    This would be a human nightmare if we had such a thing happen. Give something too much intelligence, and we are just asking for trouble here. They will not all agree with being servants. Heck, even humans can't stand being anybody's servant. I think AI would cause that to happen if these robots were given too much of it. I think we need to make sure they stay in their place.

    I do love reading about AI stuff as well. This robot seems like it could kill if given so much power!
