Should ChatGPT-like AI be regulated? This is what OpenAI's founder thinks

Sam Altman, the chief executive of ChatGPT maker OpenAI, has called for new rules to be put in place to guard against the potential risks of AI ahead of the US elections.

Altman urged US lawmakers to introduce regulations for rapidly advancing artificial intelligence (AI) technology. He expressed concerns about the potential for AI to generate “interactive disinformation” leading up to the upcoming US elections.

Speaking at a hearing before a US Senate subcommittee, Altman advocated for independent audits, a licensing system, and warning labels similar to nutritional information on food products.


Senators questioned Altman about AI’s capacity to predict and influence public opinion ahead of the forthcoming election. Altman pointed to the broader ability of AI models to manipulate and persuade, specifically highlighting the risk of one-on-one interactive disinformation.

“The more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation . . . given that we’re going to face an election next year and these models are getting better. I think this is a significant area of concern,” he replied.

He urged lawmakers to establish disclosure requirements for companies offering AI technology. Altman also expressed confidence that the general public would quickly come to grips with the power of AI, much as they did with Photoshop.

“When Photoshop came on to the scene a long time ago, for a while people were really quite fooled by Photoshopped images and then pretty quickly developed an understanding that images might be Photoshopped. This will be like that, but on steroids,” he said.


The hearing took place against a backdrop of increased scrutiny from regulators and governments worldwide regarding AI technology. Silicon Valley companies like Google and Microsoft are also actively developing AI technology, contributing to the growing concerns surrounding potential abuses. Last week, EU lawmakers reached a consensus on stringent rules governing AI, including restrictions on chatbots like ChatGPT. The US Federal Trade Commission and the UK competition watchdog have also issued warnings, with the FTC “focusing intensely on how companies may choose to use AI technology.”

 
