UK focuses on transparency and access with new AI principles


FILE PHOTO: AI (Artificial Intelligence) letters are placed on a computer motherboard in this illustration taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

LONDON (Reuters) - Britain set out principles on Monday designed to prevent artificial intelligence (AI) models from being dominated by a handful of tech companies to the detriment of consumers and businesses, emphasising the need for accountability and transparency.

Britain's anti-trust regulator, the Competition and Markets Authority (CMA), is, like other authorities around the world, trying to control some of the potential negative consequences of AI without stifling innovation.

The seven principles it listed aim to regulate foundation models such as ChatGPT by making developers accountable, preventing Big Tech from tying up the technology within their walled platforms, and stopping anti-competitive conduct such as bundling.

CMA Chief Executive Sarah Cardell said on Monday there was real potential for the technology to turbocharge productivity and make millions of everyday tasks easier – but a positive future could not be taken for granted.

She said there was a risk that the use of AI could be dominated by a few players who exert market power in ways that prevent the full benefits from being felt across the economy.

"That's why we have today proposed these new principles and launched a broad programme of engagement to help ensure the development and use of foundation models evolves in a way that promotes competition and protects consumers," she said.

The CMA's proposed principles, which come six weeks before Britain hosts a global AI safety summit, will underpin its approach to AI when it assumes new powers in the coming months to oversee digital markets.

It said it would now seek views from leading AI developers such as Google, Meta, OpenAI, Microsoft, NVIDIA and Anthropic, as well as governments, academics and other regulators.

The proposed principles also cover access to key inputs, diversity of business models including both open and closed, and flexibility for businesses to use multiple models.

Britain in March opted to split regulatory responsibility for AI between the CMA and other bodies that oversee human rights and health and safety rather than creating a new regulator.

The United States is looking at possible rules to regulate AI, and digital ministers from the Group of Seven leading economies agreed in April to adopt "risk-based" regulation that would also preserve an open environment.

(Reporting by Paul Sandle and Sarah Young, Editing by Kylie MacLellan and David Evans)
