The Rapid Growth of Artificial Intelligence (AI): Should We Be Worried?

By Kira Gerard, Year 12.

“With artificial intelligence we are summoning a demon.” – Elon Musk

In 2016, Google’s AI group, DeepMind, developed AlphaGo, a computer program that managed to beat 18-time world champion Lee Sedol at the complex board game Go. Last month, DeepMind unveiled a new version of AlphaGo, AlphaGo Zero, which mastered the game in only three days with no human help, given nothing but the basic rules of the game to start with. While previous versions of AlphaGo were trained on thousands of games played by human experts, this new iteration learns by playing games against itself, quickly surpassing the abilities of its earlier forms. After 40 days of learning on its own, AlphaGo Zero had overtaken every other version of AlphaGo, arguably becoming the best Go player in the world.
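For readers curious about what “learning by playing against itself” actually looks like, here is a deliberately tiny sketch in Python of a self-play training loop, using noughts and crosses instead of Go. It is only an illustration of the general idea under simplified assumptions, not DeepMind’s method: AlphaGo Zero’s real system combines a deep neural network with Monte Carlo tree search, whereas this toy just keeps a table of how promising each board position has looked so far.

```python
# Toy "self-play" learner for noughts and crosses: the program starts with only
# the rules, plays both sides against itself, and nudges its estimate of each
# position towards the eventual result. Not AlphaGo Zero's algorithm -- just
# the same idea in miniature.
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(float)      # estimated value of each position for the player who just moved
EPSILON, LEARNING_RATE = 0.1, 0.2

def choose_move(board, player):
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if random.random() < EPSILON:          # explore a random move occasionally
        return random.choice(moves)
    # otherwise pick the move whose resulting position has looked best so far
    return max(moves, key=lambda m: values[board[:m] + player + board[m+1:]])

def play_one_game():
    board, player, history = "." * 9, "X", []
    while True:
        move = choose_move(board, player)
        board = board[:move] + player + board[move+1:]
        history.append((board, player))
        if winner(board) or "." not in board:
            break
        player = "O" if player == "X" else "X"
    result = winner(board)                  # "X", "O", or None for a draw
    for position, mover in history:         # learn from the outcome
        target = 0.0 if result is None else (1.0 if mover == result else -1.0)
        values[position] += LEARNING_RATE * (target - values[position])

for _ in range(20000):                      # self-play: the program is both players
    play_one_game()
print(f"Positions evaluated after self-play: {len(values)}")
```

Even this miniature version shows the appeal of the approach: nobody tells the program which moves are good, yet its evaluations improve simply because it keeps generating its own practice games.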

Artificial intelligence is defined as a branch of computer science that deals with the simulation of intelligent behaviour in computers, allowing machines to imitate human behaviour in highly complex ways. Simple AI systems are already widespread, from voice assistants such as Apple’s Siri and Amazon’s Alexa, to video game AI that has become much more sophisticated in recent years. AI also plays a key role in practical problems such as air traffic control and fraud detection.

However, many people are concerned that the continued advancement of artificial intelligence could lead to computers that are able to think independently and can no longer be controlled by us, and ultimately to the demise of civilisation and life as we know it. In 2014 Elon Musk, the tech entrepreneur behind innovative companies such as Tesla and SpaceX, stated in an interview at MIT that he believed artificial intelligence is “our biggest existential threat” and that we need to be extremely careful. Musk’s view has not changed in recent years, and he continues to voice a fear that has troubled people for decades: that we will develop artificial intelligence powerful enough to surpass the human race entirely and become wholly independent.

As dramatised in a multitude of sci-fi films – 2001: A Space Odyssey, The Terminator, Ex Machina, to name a few – artificial intelligence is a growing public concern, with the once purely theoretical idea becoming more and more of a reality as technology advances at a remarkable pace. Other prominent figures, such as Stephen Hawking and Bill Gates, have also expressed concern about the possible threat of AI, and in 2015 Hawking and Musk joined hundreds of AI researchers in signing an open letter urging the UN to ban the use of autonomous weapons, warning that artificial intelligence could become more dangerous than nuclear weapons.

The fear that AI could become so powerful that we cannot control it is a real concern, but not one that should plague us with worry. The artificial intelligence we have managed to develop so far is still very basic compared with how complex a fully independent AI would need to be. AlphaGo’s lead researcher, David Silver, explained that, because no human data was used, “we’ve removed the constraints of human knowledge and it is able to create knowledge itself”. This is an astonishing advance that signals huge improvements in the way we are developing artificial intelligence, bringing us a step closer to producing a multi-functional, general-purpose AI. However, AlphaGo Zero’s approach only works on tasks that can be perfectly simulated in a computer, so abilities such as making genuinely independent decisions in the real world are still out of reach. Although we are on the way to developing AI that matches humans at a wide variety of tasks, a great deal more research and development is needed before advanced AI becomes commonplace.

The artificial intelligence we live with every day is genuinely useful and can be applied in a huge variety of ways. As addressed by Mr Kane in last week’s WimTeach blog, technology has an increasing role in areas such as education, and we are becoming ever more reliant on it. Artificial intelligence is unquestionably the next big advancement in computing, and as Elon Musk stated in a recent interview: “AI is a rare case where I think we need to be proactive in regulation instead of reactive… by the time we are reactive in regulation it is too late.” As long as we learn how to “avoid the risks”, as Hawking puts it, and regulate the development of such technologies as closely as we can, our fears of a computer takeover and the downfall of humanity need never become reality.
