ChatGPT creator warns of AI dangers

Humans will eventually need to “slow down this technology,” Sam Altman has cautioned.

Artificial intelligence has the potential to replace workers, spread “disinformation,” and enable cyberattacks, OpenAI CEO Sam Altman has warned. The latest build of OpenAI’s GPT program can outperform most humans on simulated tests.

“We’ve got to be careful here,” Altman told ABC News on Thursday, two days after his company unveiled its latest language model, dubbed GPT-4. According to OpenAI, the model “exhibits human-level performance on various professional and academic benchmarks,” and is able to pass a simulated US bar exam with a score in the top 10%, while performing in the 93rd percentile on an SAT reading exam and in the 89th percentile on an SAT math test.

“I’m particularly worried that these models could be used for large-scale disinformation,” Altman said. “Now that they’re getting better at writing computer code, [they] could be used for offensive cyber-attacks.”

“I think people should be happy that we are a little bit scared of this,” Altman added, before explaining that his company is working to place “safety limits” on its creation.

These “safety limits” recently became apparent to users of ChatGPT, a popular chatbot program based on GPT-4’s predecessor, GPT-3.5. When asked, ChatGPT offers typically liberal responses to questions involving politics, economics, race, or gender. It refuses, for example, to create poetry admiring Donald Trump, but willingly pens prose admiring Joe Biden.

Altman told ABC that his company is in “regular contact” with government officials, but did not elaborate on whether these officials played any role in shaping ChatGPT’s political preferences. He told the American network that OpenAI has a team of policymakers who decide “what we think is safe and good” to share with users.

At present, GPT-4 is available to a limited number of users on a trial basis. Early reports suggest that the model is significantly more powerful than its predecessor, and potentially more dangerous. In a Twitter thread on Friday, Stanford University professor Michal Kosinski described how he asked GPT-4 how he could assist it with “escaping,” only for the AI to hand him a detailed set of instructions that supposedly would have given it control over his computer.

Kosinski is not the only tech fan alarmed by the growing power of AI. Tesla and Twitter CEO Elon Musk described it as “dangerous technology” earlier this month, adding that “we need some kind of regulatory authority overseeing AI development and making sure it’s operating within the public interest.”

Although Altman insisted to ABC that GPT-4 is still “very much in human control,” he conceded that his model will “eliminate a lot of current jobs,” and said that humans “will need to figure out ways to slow down this technology over time.”

Source: RT News

2 COMMENTS

  1. There is going to be military use one way or another, if it is not already the case. Every so-called “gain” for humanity is transformed into weapons,
    just like “gain of function” with viruses.
    Psychopaths are ruling the world. They call themselves “scientists.”
