“AI is as comparable a danger as nuclear weapons,” Oxford University Professor warns
“Mark my words, AI is much more dangerous than nukes. It scares the hell outta me.” That was the statement made by Tesla CEO Elon Musk in 2018 during an interview with Jonathan Nolan at the South by Southwest Conference in Austin, Texas. Unfortunately, Musk is not alone. Researchers and other renowned experts are also warning about the dangers of artificial intelligence (AI). The million-pound question is: are we prepared, and what are our governments and policy planners doing to avert this danger?
While the recent popularity of ChatGPT has brought AI into the mainstream and dazzled many with its societal benefits, the dark side of AI has not captured the attention of the general public. For many years, professionals and tech workers have worried about AI taking away their jobs.
However, experts believe there are other, more serious risks and dangers to be concerned about, including algorithmic bias caused by bad data fed to AI, deepfakes, privacy violations, and many more. While these dangers pose serious risks to society, researchers and scientists are more worried about how AI could be programmed to do something even more dangerous: weapons automation.
Today, AI is employed in the development of swarming technologies and loitering munitions, also called kamikaze drones, like the ones being used in the ongoing Russia-Ukraine war. Unlike the futuristic robots you see in some sci-fi movies, these drones are built on existing military platforms that leverage new AI technologies. They have, in essence, become autonomous weapons programmed to kill.
But using AI-powered weapons to kill in military combat is just the beginning. Michael Osborne is an AI professor and machine learning researcher at Oxford University. He is also the co-founder of Mind Foundry. While everyone is caught up in the ChatGPT craze, Professor Osborne is warning about the risks of AI, forecasting that advanced AI could “pose just as much risk to us as we have posed to other species: the dodo is one example.”
Early this month, a group of researchers from Oxford University told the science and technology committee in the British parliament that AI could eventually pose an “existential threat” to humanity. Just as the dodo was wiped out by humans, AI machines might eliminate us, they said, the Times of London reported.
During the meeting, Professor Osborne warned British parliamentarians that truly powerful artificial intelligence could kill everyone on Earth. “AI is as comparable a danger as nuclear weapons,” he said. Professor Osborne also added that the risk is not the AI disobeying its programming, but rigidly obeying it in unintended ways:
“A superintelligent AI told to end cancer, to give an oversimplified example, might find the easiest method is removing humans. When polled, around a third of researchers think AI could lead to a global catastrophe. AI safety programs exist, but businesses and countries are engaged in an ‘arms race’ that makes cautious approaches difficult.”
Michael Cohen, Professor Osborne’s colleague and a doctoral student at Oxford University, told The Times of London:
“With superhuman AI there is a particular risk that is of a different sort of class, which is . . . it could kill everyone.”
While AI has improved our lives, scientists fear that, because AI lacks human morality, we risk sacrificing humanity for the sake of convenience. One frightening scenario, according to Cohen, is that AI could learn to achieve a human-helping directive by employing human-harming tactics.
“If you imagine training a dog with treats: it will learn to pick actions that lead to it getting treats, but if the dog finds the treat cupboard, it can get the treats itself without doing what we wanted it to do,” he explained. “If you have something much smarter than us monomaniacally trying to get this positive feedback, and it’s taken over the world to secure that, it would direct as much energy as it could to securing its hold on that, and that would leave us without any energy for ourselves.”
Professor Osborne and Michael Cohen are not the only researchers sounding the alarm about the risks and dangers of AI. Many other scientists who work with AI have expressed similar concerns. A September 2022 survey of 327 researchers at New York University found that a third believe AI could bring about a nuclear-style apocalypse within the century, the Times of London reported.
Thirty-six percent of the researchers surveyed agreed it was “plausible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war.”
Whether or not that happens within a century, AI is becoming increasingly capable, and as some experts have suggested, it is imperative that we preserve human control and develop mechanisms, such as an AI kill switch, to govern what AI can and cannot do. The current lack of oversight and the absence of a clear global AI policy are a wake-up call for governments around the world. We need to act now, before it is too late.
Below is a video about the dangers of AI. In it, Stuart Russell, a British computer scientist known for his contributions to artificial intelligence, warns about the risks involved in creating AI systems.