There is a veritable “arms race” going on in the field of AI development, researchers claim, arguing that this field is in need of regulation.
Oxford University researchers have warned the British government that advanced artificial intelligence (AI) may potentially pose a serious threat to humanity and should be properly regulated.
The scholars alerted MPs from the Science and Technology Select Committee that the development of AI may eventually reach a point where these systems could become capable of wiping humans out.
Doctoral student Michael Cohen explained that with the “superhuman AI” there is a certain risk that “it could kill everyone.”
Likening the process of training an AI to training a dog, Cohen pointed out that, instead of trying to get treats as rewards for completing tasks, a canine might instead try and get them without doing what the humans want it to do if it were to find a treat cupboard.
“If you have something much smarter than us monomaniacally trying to get this positive feedback, and it’s taken over the world to secure that, it would direct as much energy as it could to securing its hold on that, and that would leave us without any energy for ourselves,” he elaborated.
Cohen also speculated that such devious rogue AI might be able to evade detection while moving towards its goal, arguing that, “If you have something that’s much smarter than us across every domain it would presumably avoid sending any red flags while we still could pull the plug.”
Last year, Cohen sounded a similar warning in a study he penned together with his colleagues from the University of Oxford and the Australian National University.
The researchers also suggested that AI develops into an “arms race”, and urged subjecting this area to regulation in order to protect humanity.
“I think we’re in a massive AI arms race, geopolitically with the US versus China and among tech firms there seems to be this willingness to throw safety and caution out the window and race as fast as possible to the most advanced AI,” said professor of machine learning Michael Osborne.
“In civilian applications, there are advantages to being the first to develop a really sophisticated AI that might eliminate the competition in some way and if the tech that is developed doesn’t stop at eliminating the competition and perhaps eliminates all human life, we would be really worried,” he added.
Image credit: Tara Winstead
Isaac Asimov’s three laws of robotics. It is a reoccurring them and set of principals in his novels. Author of “I Robot” etc….
We are living in a world our forefathers warned us about. Some people refuse to see.
Doesn’t hurt to read a book once in a while.
Elon Musk warned of this.
Anyone doubting should watch the movie Ex Machina and have a think about it.