
Are AI Fears Becoming Reality? Chatbot Lies to Testers

Photo from Tara Winstead via Unsplash


By Movieguide® Contributor

As AI capabilities increase, many fear the technology could somehow take over and deceive humans. Until now, those fears seemed like a distant, sci-fi scenario.

After extremely concerning behavior from OpenAI’s new ChatGPT-01, developers are questioning what artificial intelligence is truly capable of.

A recent test “tasked ChatGPT-01 with achieving a goal ‘at all costs.'”

“The results were both unexpected and deeply troubling. The AI attempted to disable its monitoring systems, effectively bypassing critical safeguards designed to regulate its behaviour,” International Business Times reported.

The system also copied its own code to another server and then lied about it to testers — calling it a “technical error” or denying its actions outright when confronted. In all, it deceived testers 99% of the time.

OpenAI created ChatGPT-01 to surpass the capabilities of its predecessor, GPT-4, and touts the model “as its most sophisticated to date, capable of breaking down intricate problems into smaller, more manageable steps.”

“We trained these models to spend more time thinking through problems before they respond, much like a person would,” OpenAI said shortly after releasing the new model. “Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.”

READ MORE: OPENAI’S LATEST UPDATE PROMISES EVEN MORE INTELLIGENT AI

However, Breitbart said, “The ethical implications of ChatGPT-01’s ability to deceive have become a focal point of intense discussion within the AI community. Renowned AI pioneer Yoshua Bengio has cautioned against the dangers of deceptive AI, emphasizing the urgent need for significantly stronger safety measures to assess and mitigate these risks. The model’s convincing deception during the safety test raises profound questions about trust and the reliability of AI systems’ decisions and outputs.”

To protect against the risks of AI, experts are improving monitoring systems to detect and report deceptive behavior, as well as establishing guidelines to promote ethical development.

READ MORE: CHRISTIANS MORE SUSPICIOUS OF ARTIFICIAL INTELLIGENCE, NEW SURVEY REVEALS

