Sunday, November 17, 2013

Strong A.I., How to Implement It, and the Future

Needing a blog post and having just chosen my book for the final paper (Apocalyptic A.I. by Robert Geraci [not Jirachi]), I was fortunate enough to stumble upon a man named Dustin Juliano, who, according to his LinkedIn profile, is an ardent open-source advocate and, more importantly for my sake, a researcher into the realm of artificial intelligence (A.I.) and its ethical and eschatological tangents.

Damn, that was quite the sentence. I digress.


Juliano has two publications on the topic of artificial intelligence: The Strong AI Manifesto (which I plan to dig into come final paper time... which is now) and Machine Ethics and the Rise of AI Eschatology (the one I am digging into for this blog post). In vintage, cliché author style, let me set up Juliano's topic of discussion first.


Strong A.I., within the context of Juliano's work, is Artificial Intelligence that lifts at least 250 pounds. I'm kidding. It is artificial intelligence that can match or, as most grandiose predictions of its future hold, exceed human intelligence. In Machine Ethics and the Rise of AI Eschatology, Juliano discusses a priori versus a posteriori approaches to machine intelligence. In non-Dobbins terms, he discusses whether preinstalling A.I.'s intelligence through algorithmic processes is superior to performing a doomsday-bringing act that no one, sans your average science-fiction novelist, could fathom: Instilling recursive, self-learning capabilities within A.I. Surprisingly, Juliano sides near-completely with the latter, and despite the whole robots-taking-over-the-world thing, I can certainly see why.


The core of Juliano's argument for a posteriori (referred to hereafter simply as "self-learning") revolves around safety, which he defines as the "conception of friendliness or benevolence"; this extends to outside parties, not just the A.I. itself. If algorithmic reactions to, for argument's sake, 99.9 percent of all currently conceivable scenarios were preinstalled in Artificial Intelligence, then, Juliano posits, the A.I. would be susceptible to dismantling by any interested "hacker" or the like, and that's a wrap.



Obligatory text-breaking picture -- from Bicentennial Man -- that is also related!

Now, you're probably thinking/saying/none of the above, "People can reverse-engineer and hack self-learning functions, too." I agree. However, Juliano argues that, with self-learning/recursion, there is a decreased likelihood of a safety breach, both by his definition and the de facto one. A self-learning A.I. would not have much preinstalled, as it would need to "react" to happenings as they, well, happen. By contrast, preinstalled algorithms that [try to] account for every possible situation are doomed from the start, as all of the necessary data is already sitting there; self-learning A.I. makes its data as it goes.
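
For the C.S. people among you, here is a minimal sketch of the distinction in Python. Everything in it -- the class names, the scenarios, the rewards -- is my own made-up illustration, not anything from Juliano's papers: the first agent can only look up preinstalled responses, while the second starts nearly empty and learns a policy from feedback it generates itself.

    # Toy contrast between the two designs Juliano weighs. All names,
    # scenarios, and numbers here are illustrative inventions, not his.
    import random

    class APrioriAgent:
        """Everything it will ever 'know' ships in a fixed lookup table."""
        def __init__(self):
            # An attacker who reads this table knows the agent's entire behavior.
            self.rules = {"greeting": "say hello", "threat": "retreat"}

        def act(self, scenario):
            # Any scenario missing from the table is a blind spot forever.
            return self.rules.get(scenario, "no idea")

    class APosterioriAgent:
        """Starts nearly empty and builds its policy from feedback at runtime."""
        def __init__(self, actions):
            self.actions = actions
            self.values = {}  # (scenario, action) -> estimate, built as it goes

        def act(self, scenario):
            # Explore occasionally; otherwise exploit the best estimate so far.
            if random.random() < 0.1:
                return random.choice(self.actions)
            return max(self.actions,
                       key=lambda a: self.values.get((scenario, a), 0.0))

        def learn(self, scenario, action, reward):
            # Incremental update: the 'data' exists only because the agent acted.
            old = self.values.get((scenario, action), 0.0)
            self.values[(scenario, action)] = old + 0.5 * (reward - old)

    # The learner starts clueless and converges on feedback it generated itself.
    agent = APosterioriAgent(["say hello", "retreat"])
    for _ in range(100):
        action = agent.act("threat")
        agent.learn("threat", action, 1.0 if action == "retreat" else -1.0)
    print(agent.act("threat"))  # almost always "retreat" by now

The point of the toy: you can read the first agent's entire "mind" the moment you have its table, while the second agent's behavior exists only as the residue of its own experience.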


There is a distinct flip side to what I outlined in the last two paragraphs: Self-learning A.I. has the potential to exceed human intelligence, and Will Smith is going to have to return for an I, Robot sequel. (On a side note, I do not support this; one was enough. But another fun A.I. movie should definitely be made.) Juliano is fully aware that self-learning A.I. carries its own long-term catastrophic risks, but with respect to maximizing benevolence -- or minimizing malevolence -- self-learning A.I., in its potentially unbridled capacity, can thwart hacking/reverse-engineering attempts far better than if the code is nice, neat, compiled, and ready to open in Sublime Text 2*.


In a wonderful segue, Juliano closes his discussion by speaking of the future. In short, he says that there are two options: Halt all work and can the strong A.I. pursuit altogether (which is not feasible) or brace for impact and make it open-source under the GNU GPL. The latter, which is really the only viable option, allows for foundational collaboration à la the Internet. This way, things start off out in the open and don't become a race-to-see-which-government-can-exploit-this-innovation-first (although it inevitably will). The real danger is in our cultural response. If things are open-source and available globally, then the "freak out" factor that something like strong A.I. bears is mitigated.


When it suddenly makes more sense to root for the Bicentennial Man versus artificial intelligence we can control, it really strikes a nerve and makes you think. I certainly "can't wait" to see how this development plays out. In the meantime, people should read up, understand the potential way the world may change, and be prepared for, on a less goofy, more grounded scale, robot overlords.


*This was a C.S.-y reference. Hopefully the C.S. people -- which is all of the people -- approve. Although, I'm sure a TextWrangler vs. Notepad++ vs. Ars Technica vs. Reddit vs. [insert C.S. stereotype/thing that is talked about in class constantly] war might ensue. Sorry for the outburst; try taking a class with 20 Visual Arts majors and see how it feels! (On second thought, don't.)


1 comment:

  1. From what I understand, this is an expansion on Artificial General Intelligence looking toward the future. If you want to know more about AGI, here is a wiki.
    http://wiki.opencog.org/w/Development