Start building your bunkers like the fellow in 10 Cloverfield Lane, because the Robocalypse is upon us. This month in tech news, Google's AI "AlphaGo" was victorious against Lee Se-dol, one of the world's top Go players, winning the match 4-1 with the final game going into overtime. Back in January there was another milestone in AI history, as AlphaGo became the first AI ever to beat a professional Go player: Fan Hui. Go was widely considered the last foothold human intellect had over AI, according to this article over at Wired. The game is over 2,500 years old and is substantially more complex than games like chess: the number of possible positions is astronomically large, and victory relies heavily on intuition. So how did the fine people at Google pull it off?
Created by Google's DeepMind team, an artificial intelligence company based in London, AlphaGo uses "general machine learning techniques," according to DeepMind's vice president, Demis Hassabis. In layman's terms, AlphaGo is capable of learning: specifically, deep learning. Deep learning is more or less a recreation of the human brain's neural networks, but instead of neurons, you have software and hardware emulating their functionality. Now, deep learning isn't necessarily new territory for AI, but in this instance, Hassabis and his team paired it with a second technique called reinforcement learning, which lets AlphaGo learn from the consequences of its own actions in its environment, that environment being a game of Go. After being fed over 30 million moves from human games, AlphaGo learned the game well enough to predict its opponent's next move more than 57% of the time. DeepMind then had AlphaGo play itself over and over (this is the reinforcement learning step), which gave AlphaGo extended knowledge of which moves worked best in varying situations. The combination of deep learning and reinforcement learning is what has enabled AlphaGo to win against not just one top-level Go player, but now two. So you have to ask yourself: should you be worried?
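To make the two-stage recipe above concrete, here's a minimal sketch in plain Python. Everything in it is invented for illustration (the three-move "game," the function names, the learning rates are not DeepMind's); it only shows the shape of the idea: first nudge a policy toward expert moves (the supervised step, like the 30 million human moves), then let it reinforce whatever wins self-played games (the reinforcement step).

```python
import math
import random

random.seed(0)

MOVES = ["a", "b", "c"]

def softmax_pick(prefs):
    # Sample a move with probability proportional to exp(preference).
    weights = [math.exp(p) for p in prefs.values()]
    r = random.random() * sum(weights)
    for move, w in zip(prefs, weights):
        r -= w
        if r <= 0:
            return move
    return MOVES[-1]

def supervised_step(prefs, expert_move, lr=0.5):
    # Stage 1: supervised learning -- nudge the policy toward an expert's move.
    prefs[expert_move] += lr

def selfplay_step(prefs, play_game, lr=0.2):
    # Stage 2: reinforcement learning -- play, observe the outcome,
    # and strengthen (or weaken) the chosen move accordingly.
    move = softmax_pick(prefs)
    reward = play_game(move)  # +1 for a win, -1 for a loss
    prefs[move] += lr * reward

prefs = {m: 0.0 for m in MOVES}

# Toy "expert data": the expert mostly plays "b".
for expert_move in ["b", "b", "a", "b"]:
    supervised_step(prefs, expert_move)

# Toy environment: move "b" wins, everything else loses.
def play_game(move):
    return 1 if move == "b" else -1

for _ in range(200):
    selfplay_step(prefs, play_game)

best = max(prefs, key=prefs.get)
# After self-play, the policy's preferences concentrate on the winning move.
```

The real system uses deep neural networks over board positions rather than a lookup table of three moves, but the training loop follows the same pattern: imitate experts first, then improve through self-play.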
The simple answer is no. Mastering the millions of positions in a game of Go is still a long way from emulating human intelligence, but an AI learning by beating itself, and others, is a real step. This is an incredible achievement for AI, and it points to a bright, rapidly approaching future for machine intelligence, one experts thought was at least 10 years away. Hassabis has expressed his desire to replicate this success in smartphones, healthcare machines, and a multitude of other everyday technologies. In a few years' time, machines could be learning from their own errors and iterating toward a solution. Imagine a system that estimates a patient's chances of surviving a medication or surgery, and how the variables could be adjusted to improve those odds. The possible positive uses for this type of machine learning are practically endless. In Hassabis's words: "The system could process much larger volumes of data and surface the structural insight to the human expert in a way that is much more efficient—or maybe not possible for the human expert," he explains. "The system could even suggest a way forward that might point the human expert to a breakthrough." If that doesn't sound like a groundbreaking moment for AI development, I'm not quite sure what does.
Patrick McQuaid is an aspiring games and film journalist/critic looking to make his mark on the industry. He's attempting to finish his Communications degree while juggling a variety of responsibilities… it's proving difficult, but he has some spunk. Don't give him a beer and ask about Silent Hill 2 in the same sitting, or prepare for an aggravatingly long chat about how that game transcends the art form.