What is your first reaction to hearing about an advancement in AI research that "brings us one step closer to machines that think like we do"? For me, it's "cool! Now how do I do one better?" But for many, the response seems to be "so is Skynet enabled yet?" or "so we're one step closer to the Terminator?" I see this as a problematic response. Yes, something like the Terminator needs to be thought about, but it's a leap in technology that ignores the gradual, if rapid, steps that would need to be completed for such a thing to be built.
One thing people seem to assume is that an AI would find humans flawed and decide they need to be removed. AIs need to be written and tested, which implies that a predetermined action will be taken for a certain input. That's not necessarily good for AI development, but it is something that can be improved over time. It also means that if there is a chance an AI might decide "kill...any moving object" (a basic set of commands), there is a high chance for that behavior to be disabled and for the command "kill" to require human input and/or verification. There is also that word: time. Developers and engineers instinctively try to reduce the amount of time they need to spend on something. For complex learning applications, this is done in one of two ways: training (running through a set of predetermined inputs and outputs) or pregenerating the data and reusing it each time.
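To make that concrete, here is a rough sketch of both time-savers. Everything in it (the cache file name, the toy linear model, the numbers) is purely illustrative and not taken from any real project; the point is only that the outputs the system learns from are fixed ahead of time by whoever prepares the data.

```python
# Minimal sketch (hypothetical names) of the two approaches mentioned above:
# (1) training against a fixed set of predetermined input/output pairs, and
# (2) pregenerating the data once and reusing it on every run.

import os
import pickle
import random

CACHE_FILE = "pregenerated_data.pkl"  # assumed cache location

def pregenerate_data(n=1000):
    """Build (input, expected_output) pairs once and cache them for reuse."""
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE, "rb") as f:
            return pickle.load(f)          # reuse instead of regenerating
    data = [(x, 2 * x + 1) for x in (random.uniform(-10, 10) for _ in range(n))]
    with open(CACHE_FILE, "wb") as f:
        pickle.dump(data, f)
    return data

def train(data, epochs=50, lr=0.01):
    """Fit y = w*x + b to the predetermined input/output pairs."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y          # the predetermined output drives the update
            w -= lr * err * x
            b -= lr * err
    return w, b

if __name__ == "__main__":
    w, b = train(pregenerate_data())
    print(f"learned w={w:.2f}, b={b:.2f}")  # should approach w=2, b=1
```

The model never invents an objective of its own; it only moves toward whatever outputs the prepared data says are correct, which is exactly why the content of that data matters so much.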
Unless a researcher wanted to kill everyone, the training examples and pregenerated data wouldn't include anything teaching AIs to kill people. I've always viewed AI training and implementation as something that provides a better understanding of the concept of learning. If people don't like guns and violence, then instead of removing the weapons (since that just becomes "replace weapon X with weapon Y"), determine why that response was chosen in the first place. Just as has been discussed with the accountability of self-driving cars, AIs and robots may be subconsciously viewed through the lens of "if it does something wrong, who is accountable?" And if an AI decided to do something controversial, what does that make the AI? A human? An intelligent being?
Should the reaction to improvements in AI be "oh no, will this become a Terminator and destroy mankind?" or "how do we handle the accountability of a machine that may plan out reactions in ways its designers didn't plan for, and how can we prepare its initial 'memory' so that it doesn't consider something as vile as killing or even injuring another as a proper reaction?" That may be a question better suited to sociologists and psychologists than to scientists and engineers. Because, while the above post was scatter-brained, it is a legitimate thought to work through. If a movie like the Terminator is the only reason for society to fear AI and robot development, then researchers simply need to provide proof that their AIs are trained NOT to look at killing and injuring as viable responses to events. If it is a more fundamental, if higher-level, fear of accountability and of understanding thought processes, then the same "don't injure or kill" training and data preparation should be done, but it should also be traced to determine why an AI might even go down that path. Both of these may provide a more fundamental answer to why an individual might go down that same path, and possibly a way to bring a person back from the brink. Is the AI the Terminator, or are we?