Here’s why AI can be extremely dangerous – whether it’s conscious or not

by The Insights

“The idea that this stuff could actually get smarter than people…I thought that was a long way off…Obviously, I no longer think that,” said Geoffrey Hinton, one of Google’s top artificial intelligence scientists, also known as “the godfather of AI,” after quitting his job in April so he could speak out about the dangers of the technology.

He’s not the only one worried. A 2023 survey of AI experts found that 36% fear that the development of AI will lead to a “nuclear-level disaster”. Nearly 28,000 people have signed an open letter written by the Future of Life Institute – including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other leading technologists – asking for a six-month pause or a moratorium on new advanced AI development.

As a consciousness researcher, I share these strong concerns about the rapid development of AI, and I am a co-signer of the Future of Life open letter.

Why are we all so worried? In short: the development of AI is going way too fast.

The key issue is the extremely rapid improvement in conversational ability among the new generation of advanced “chatbots”, or what are technically called “large language models” (LLMs). With this coming “AI explosion”, we will probably have only one chance to get it right.

If we’re wrong, we may not live to tell the tale. This is not hyperbole.

This rapid acceleration promises to lead soon to “artificial general intelligence” (AGI), and when that happens, AI will be able to improve itself without human intervention. It will do so in the same way that, say, Google’s AlphaZero AI learned to play chess better than even the best human or other AI chess players in just nine hours after it was first turned on. It achieved this feat by playing itself millions of times over.

A team of Microsoft researchers analyzing OpenAI’s GPT-4, which I believe is the best of the new advanced chatbots currently available, said in a recent preprint paper that it showed “sparks of artificial general intelligence”.

In its testing, GPT-4 performed better than 90% of human test takers on the Uniform Bar Examination, a standardized test used to certify lawyers for practice in many states. That figure was up from just 10% in the previous GPT-3.5 version, which was trained on a smaller data set. The researchers found similar improvements in dozens of other standardized tests.

Most of these tests are tests of reasoning. This is the main reason why Bubeck and his Microsoft team concluded that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system”.

This rate of change is why Hinton told the New York Times: “Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.” During a Senate hearing in mid-May on the potential of AI, Sam Altman, the head of OpenAI, called regulation “crucial.”

Once AI can improve itself, which may be no more than a few years away, and could in fact already be here now, we have no way of knowing what the AI will do or how we can control it. This is because super-intelligent AI (which, by definition, can outperform humans in a wide range of activities) will be able – and this is what worries me the most – to run circles around programmers and every other human by manipulating humans to do its will; it will also have the ability to act in the virtual world through its electronic connections, and to act in the physical world through robot bodies.

This is called the “control problem” or the “alignment problem” (see philosopher Nick Bostrom’s book Superintelligence for a good overview) and has been studied and debated by philosophers and scientists, such as Bostrom, Seth Baum and Eliezer Yudkowsky, for decades now.

I think of it this way: Why would anyone expect a newborn to beat a grandmaster at chess? We wouldn’t. Likewise, why would anyone expect to be able to control super-intelligent AI systems? (No, we won’t be able to just flip the switch, because the super-intelligent AI will have thought of all possible ways to do so and taken steps to avoid being turned off.)

Here’s another way to look at it: a super-intelligent AI will be able to do in about a second what it would take a team of 100 human software engineers a year or more to do. Or choose any task, like designing a new aircraft or an advanced weapon system, and a super-smart AI could do it in about a second.

Once AI systems are integrated into robots, they will be able to act in the real world, rather than only the virtual (electronic) world, with the same degree of superintelligence, and will of course be able to replicate and improve themselves at a superhuman pace.

Any defenses or protections that we attempt to build into these AI “gods”, on their way to godhood, will be anticipated and easily neutralized by the AI once it achieves superintelligence status. That is what it means to be super-intelligent.

We won’t be able to control them because anything we think of, they will have already thought of, a million times faster than us. Any defenses we build will come undone, like Gulliver throwing off the tiny strands the Lilliputians used to try to restrain him.

Some argue that these LLMs are just automation machines with zero consciousness, the implication being that if they are not conscious, they have less chance of breaking free from their programming. But even if these language models, now or in the future, are not at all conscious, it doesn’t matter. For the record, I agree that they are unlikely to have any real consciousness at this stage, although I remain open to new facts as they arise.

Either way, a nuclear bomb can kill millions of people without any consciousness whatsoever. Similarly, AI could kill millions of people with zero consciousness, in myriad ways, potentially including the use of nuclear bombs either directly (much less likely) or through manipulated human intermediaries (more likely).

So the debates about consciousness and AI really don’t figure very much into the debates about AI safety.

Yes, language models based on GPT-4 and many other models are already circulating widely. But the moratorium being asked for is to halt development of any new models more powerful than 4.0 – and this can be enforced, with force if necessary. Training these more powerful models requires massive server farms and energy. They can be shut down.

My ethical compass tells me that it is very unwise to create these systems when we already know we cannot control them even in the relatively near future. Discernment is knowing when to step back from the edge. It is now.

We must not open Pandora’s box any further than it has already been opened.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.
