Sunday, December 13, 2015

OpenAI: Is It Necessary?

                Recently there has been a lot of excitement about AI technologies, and I figure that after writing reports on similar topics, everyone has AI on their minds in one way or another. OpenAI, a new nonprofit AI research company, launched last Friday. It is backed by $1 billion in pledged donations from supporters including Elon Musk, Sam Altman, and Peter Thiel. Their goal is the advancement of artificial intelligence. They released a statement of their intentions, which you can read here.
                In this statement they said, “Because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.” This is a very broad statement. They are basically saying that they want AI technology to be beneficial to humans, not harmful. Hey, that sounds pretty good to me.
                What I really want to talk about here is that I find human-level AI technology to be largely pointless. There is no need for a machine with human-level intelligence. It feels like we are chasing these things just because we imagined them, and I don’t really see how they are supposed to be beneficial. We may eventually be able to build something that can reason about concepts and answer questions like a human, but why should we? That kind of thinking is one of the things that makes us different from the machines we create.
                This article brings up a good point to think about as well. It quotes Stephen Hawking’s reservations about building something like this: “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” That raises another question we need to consider: if we build a successful AI system, will we be able to control it? It would be able to keep learning and theorizing without limit, and when it learns about the terrible things that have happened in the world, how will it react? We cannot know these things before they happen, and I really don’t want to find out the hard way.

                This is basically the point of this class: to understand the ethics of the things that we create. I don’t think it is necessary to create AI technology, but I don’t know whether doing so is ethical or not. If it turns out to be truly beneficial to us, who is to say it’s unethical? On the other hand, if it causes tons of damage, is it fair to blame those who created something we never really needed? Just some food for thought.
