Artificial Intelligence will soon bring on another technological revolution, one in which machines may take our jobs and shake up the economy in the process. Drones are being used in warfare; Google, Tesla, and other companies are working on self-driving cars; and robots are being designed to be excellent conversationalists. As a consequence of this shifting paradigm, a new critique is emerging, one concerned with the anthropological significance of a changing species.
Recently, a new center to study the
implications of artificial intelligence has been established at the U.K.’s
Cambridge University. Its purpose: to influence the ethical development of AI,
the latest sign that concerns are rising about AI’s impact on everything from
loss of jobs to humanity’s very existence. The new center will work in cooperation with the university’s Centre for the Study of Existential Risk, where researchers study emerging risks to humanity’s future, including climate change, biological warfare, and artificial intelligence. The new center will also collaborate with the Oxford Martin School at the University of Oxford,
Imperial College London, and the University of California, Berkeley. A major focus of the collaboration will be what the announcement called “the value alignment program,” in which software programmers will team up with ethicists and philosophers to try to write code that governs the behavior of artificial intelligence programs.
Efforts like these show that theories about a new era, one in which we share the planet with non-biological intelligence, are starting to be addressed more critically. They are no longer confined to the speculative claims of computer scientists. What this means for us, the technological generation, is a pressing concern: we must shape the level of influence that AI will have. Should we reject or accept the inevitable? I would argue that we should reject it. By no means do I think that progress in the field will halt just because some people have moral concerns, but we can each make a personal choice about how much exposure we have to it. Unfortunately, our perpetual tendency toward laziness will make us susceptible to choosing the most convenient option: AI that does everything for us. After all, why do something yourself when you can have an automated program do it for you?
That does sound pretty deterministic when you put it that way. However, there is a variable that most people forget. In the first few months of retirement, people are genuinely happy: they can finally do whatever they want, knowing they have no responsibilities to address. But after some time, the euphoria fades away, and they are simply bored. That is my hope for the future of AI. Hopefully, after AI takes off, people will realize that they cannot sustain their contentment by having something else think for them. We are curious creatures, and no matter how domesticated we become, we cannot sever nature’s grasp on us.
It makes you wonder: is the Orwellian dystopia really such a distant world?