US begins study of possible rules to regulate AI like ChatGPT


FILE PHOTO: ChatGPT logo is seen in this illustration taken, February 3, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

WASHINGTON (Reuters) - The Biden administration said on Tuesday it is seeking public comments on potential accountability measures for artificial intelligence (AI) systems, as questions loom about their impact on national security and education.

ChatGPT, an AI program that recently grabbed the public's attention for its ability to quickly write answers to a wide range of queries, has in particular attracted U.S. lawmakers' attention as it has become the fastest-growing consumer application in history, with more than 100 million monthly active users.

The National Telecommunications and Information Administration, a Commerce Department agency that advises the White House on telecommunications and information policy, wants input as there is "growing regulatory interest" in an AI "accountability mechanism."

The agency wants to know if there are measures that could be put in place to provide assurance "that AI systems are legal, effective, ethical, safe, and otherwise trustworthy."

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” said NTIA Administrator Alan Davidson.

President Joe Biden last week said it remained to be seen whether AI is dangerous. "Tech companies have a responsibility, in my view, to make sure their products are safe before making them public," he said.

ChatGPT, which has wowed some users with quick responses to questions and caused distress for others with inaccuracies, is made by California-based OpenAI and backed by Microsoft Corp.

NTIA plans to draft a report as it looks at "efforts to ensure AI systems work as claimed – and without causing harm" and said the effort will inform the Biden Administration's ongoing work to "ensure a cohesive and comprehensive federal government approach to AI-related risks and opportunities."

A tech ethics group, the Center for Artificial Intelligence and Digital Policy, asked the U.S. Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4, saying it was "biased, deceptive, and a risk to privacy and public safety."

(Reporting by David Shepardson and Diane Bartz; Editing by Nick Zieminski)
