Continuing my fascination with
artificial intelligence, I decided to look at more of what is being
said about it. A New York Times article spoke about how rapidly artificial
intelligence is expanding and described it as “magical,” even referencing
Harry Potter to help readers understand what is meant by that. Artificial
intelligence has certainly come far: our phones respond to our voices,
scientists are working on cars that can drive themselves, and some programs
can outsmart you in games such as chess.
However, the author makes a good point that none of these
robots have been produced with common sense, vision, natural language
processing, or the ability to create other machines yet. We have tried to
create these robots to directly simulate our human brain. I personally do not
think we can ever achieve such a feat.
The author did say that it could
happen that way in about four decades at the earliest,
but that it would ultimately not matter how long it took, only what happened
next. Machines could be smarter than us someday, but is that really something we want to happen?
I remember writing about how we should not fear machines becoming smarter
than us because, at most, they can only know what we know. However,
there is the possibility that they come close to being that smart, and if they do, they could
take over our lives almost completely. They could learn to do the things that
humans can do. They started long ago, when they began taking calls
from people and responding to them. Soon, they may be taking those calls
and making pizza to near perfection as well, just the way you like it!
The author mentions another author,
James Barrat, who wrote a book entitled “Our Final Invention: Artificial
Intelligence and the End of the Human Era,” and quotes him. For
example, the article says that “Barrat worries that ‘without meticulous, countervailing
instructions, a self-aware, self-improving, goal-seeking system will go to
lengths we’d deem ridiculous to fulfill its goals,’ even, perhaps,
commandeering all the world’s energy in order to maximize whatever calculation
it happened to be interested in.” I am sure we have all seen movies of this kind.
A machine wishes to accomplish its task and will do so by any means necessary,
which brings up something the author mentioned earlier: common
sense.
These machines do not have common
sense yet, and I do not think you can program common sense into a machine.
Machines, especially artificially intelligent ones, wish to
accomplish their tasks and nothing more; that is what they are programmed to do.
The brain is such a complex piece of “machinery” itself, one that simply
cannot be imitated. There is no way that people can create a machine to do what
the brain does, because we do not even
understand how the brain works. It is often said, though scientists dispute
this, that humans only really use ten percent of our brains. We may have names
for each section and think we know
what each section does, but what if we could use the other ninety percent as well? We
would not even begin to comprehend it right now. Machines can never have what
humans have; we are too uniquely made in how our bodies are built. The fear of
machines taking over can seem genuinely worrisome, but there is too much that
they would need to imitate in order to pull it off.
The New Yorker is dope, but I'm fascinated by the fact that they deem A.I. (or at least call it) a "threat." Obviously, self-learning A.I. is a rapidly advancing field, but as you essentially stated, there are two roadblocks to this ever coming to fruition: 1) machines are not humans (sounds dumb and simple, but it cannot be overlooked), and 2) we, the humans, are the ones who program them. A.I. is not a threat, so grandiose dreams and nightmares of robot overlords, like Mr. Krabs' fear of them (this is from Spongebob, Vinsel), are not to be feared. However, the more something as volatile as artificial intelligence is pushed and tampered with, the more the risk of negative repercussions increases. Thus, if there is a continual striving to create self-learning A.I. that is as close to human as possible, there is more than enough room for it to backfire. We are our own threats, period.