This past weekend I saw the film Ex
Machina. I highly recommend it to anyone who is taking Computers and Society,
as it deals with a lot of the issues we have discussed in this class. The film
explores artificial intelligence and what it means to be a human. I
don’t want to reveal too much because it is so good. The one thing I want to
say is that even though the film is in the science fiction genre, it feels like
it is only a few years away from where we are now.
The same thing goes for the film Her, which feels like it’s only 15 or 20
years in the future. I think these films’ settings are telling us that we as a
society are starting to accept the idea of AI being something that is just
around the corner. When you watch older films that deal with AI, like Blade
Runner and I, Robot, the societies depicted are so different from ours. This
gives the feeling that AI is way off in the future, but now we don’t think that
is the case. With the development of things like Siri, we can imagine our world
with computers that we can treat like humans.
It may be that I go to a tech
school, or that I am in this class, but it feels like AI is becoming a real concern
for humans. After watching Ex Machina I watched a TED talk entitled “What happens
when computers get smarter than we are?”. After watching this TED talk I was
concerned about what the implications of AI would be. I used to think that the
whole Terminator AI takeover was stupid, but now I am not so sure. I guess it
all depends on what we make our AIs’ motivations. As the TED talk suggests, if
we make an AI that wants to make humans smile, does that mean the AI would try
to hook up electrodes to our faces to force us to smile? It is scary to think
that anything we program our AIs to “want” could cause them to do something we
really don’t want them to do. Another scary question is whether AIs could develop their
own desires. It is so tricky to answer these questions because an AI by its
nature would be self-changing and dynamic. My intuition tells me we will be
fine, but part of me is very scared at the idea of a computer that is artificially
intelligent and smarter than humans.
Even though part of me is scared, the
rational part has trouble giving these concerns merit. I think part of my fear
comes from the fact that we might be anthropomorphizing these future AIs. It
seems the only reason we do anything is because we have some emotions or bodily
functions telling us to do something. We do things out of anxiety or hunger. If
we didn’t have these, I don’t think we would do anything. I feel like we are
scared of AIs because we picture these AIs as our slaves. We imagine being in
the AI’s situation and feel we would not like it. We then say it makes sense
that we, in the AI’s situation, would try to overthrow our human owners. It is so
hard to say what the future of AI will be like, but it is good we are talking
about it now because it may be very soon.
Ex Machina Trailer
Ted Talk:
I really like the way that you pointed out how the movie industry has changed the perception of the standard scientist who creates the AI. I completely agree with your reasoning that as a society, we are coming to accept that it is not a question of ‘if’ there will be AI in the future but ‘when’ it will happen. As humans, we always personify everything. Most robots that scientists strive to create look almost humanoid. We have movies where the robots even have faces that show so much human emotion. I think that’s where the problem of the AI uprising comes in. Like you said, we put ourselves in their shoes and imagine trying to overthrow our overlords to get freedom. That is such a human trait. Will that really be something that an AI will ever actually consider? I don’t know, but it will be a very interesting thing to test for when AIs do become standard.