
OpenAI says AI is ‘safe enough’ as scandals raise concerns

OpenAI CEO Sam Altman insisted that OpenAI had put in ‘a huge amount of work’ to ensure the safety of its models.

OpenAI CEO Sam Altman defended his company’s AI technology as safe for widespread use, as concerns mount over potential risks and a lack of proper safeguards for ChatGPT-style AI systems.

Altman’s remarks came at a Microsoft event in Seattle, where he spoke to developers just as a new controversy erupted over an OpenAI AI voice that closely resembled that of the actress Scarlett Johansson.

The CEO, who rose to global prominence after OpenAI launched ChatGPT in 2022, is also grappling with questions about the safety of the company’s AI following the departure of the team responsible for mitigating long-term AI risks.

“My biggest piece of advice is this is a special time and take advantage of it,” Altman told the audience of developers seeking to build new products using OpenAI’s technology.

“This is not the time to delay what you’re planning to do or wait for the next thing,” he added.

OpenAI is a close partner of Microsoft and supplies the foundational technology, primarily the GPT-4 large language model, for building AI tools.

Microsoft has jumped on the AI bandwagon, pushing out new products and urging users to embrace generative AI’s capabilities.

“We kind of take for granted” that GPT-4, while “far from perfect…is generally considered robust enough and safe enough for a wide variety of uses,” Altman said.

Altman insisted that OpenAI had put in “a huge amount of work” to ensure the safety of its models.

“When you take a medicine, you want to know what’s going to be safe, and with our model, you want to know it’s going to be robust to behave the way you want it to,” he added.

However, questions about OpenAI’s commitment to safety resurfaced last week when the company dissolved its “superalignment” group, a team devoted to mitigating the long-term risks of AI.

In announcing his departure, team co-leader Jan Leike criticized OpenAI for prioritizing “shiny new products” over safety in a series of posts on X (formerly Twitter).

“Over the past few months, my team has been sailing against the wind,” Leike said.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”

This controversy was swiftly followed by a public statement from Johansson, who expressed outrage over a voice used by OpenAI’s ChatGPT that sounded similar to her voice in the 2013 film “Her.”

The voice in question, called “Sky,” was featured last week in the launch of OpenAI’s more human-like GPT-4o model.

In a brief statement on Tuesday, Altman apologized to Johansson but insisted the voice was not based on hers.

© 2024 AFP

Citation:
OpenAI says AI is ‘safe enough’ as scandals raise concerns (2024, May 22)
retrieved 22 May 2024
from https://techxplore.com/news/2024-05-openai-ai-safe-scandals.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.


