AI regulators ‘under-resourced’ compared to developers, committee warns


The Commons science committee has warned the £10m announced by the government to help Ofcom and other regulators respond to the growth of AI is “clearly insufficient”.


Regulators for artificial intelligence in the UK are “under-resourced” compared to developers of the technology, the Commons science committee chief has warned.

Outgoing chairman of the Science, Innovation and Technology Committee, Greg Clark, said he is “worried” by the gap between regulators and the “finance that major developers can command”.

In a report produced by the committee into the governance of AI, the group said the £10m announced by the government in February to help Ofcom and other regulators respond to the technology’s growth was “clearly insufficient”.

The next government should announce further financial support “commensurate to the scale of the task”, it added.

“It is important that the timing of the general election does not stall necessary efforts by the government, developers and deployers of AI to increase the level of public trust in a technology that has become a central part of our everyday lives,” the report states.

The committee was also concerned at suggestions the new AI Safety Institute has been unable to access some developers’ models for pre-deployment safety testing.

The next government should name developers refusing access – in contravention of the agreement at the November 2023 summit at Bletchley Park – and report their justification for refusing, it added.

Former business secretary Mr Clark said it was important to test the outputs of AI models for biases “to see if they have unacceptable consequences”, as biases “may not be detectable in the construction of models”.

“The Bletchley Park summit resulted in an agreement that developers would submit new models to the AI Safety Institute,” he said.

“We are calling for the next government to publicly name any AI developers who do not submit their models for pre-deployment safety testing.

“It is right to work through existing regulators, but the next government should stand ready to legislate quickly if it turns out that any of the many regulators lack the statutory powers to be effective.

“We are worried that UK regulators are under-resourced compared to the finance that major developers can command.”


The government and regulators should safeguard the integrity of the general election campaign by taking “stringent enforcement action” against online platforms hosting deepfake content, which “seeks to exert a malign influence on the democratic process”, the report said.

But the “most far-reaching challenge” of AI may be the way it can operate as a “black box” – in that the basis of, and reasoning for, its output may be unknowable.

The Department for Science, Innovation and Technology said the UK is taking steps to regulate AI and is upskilling regulators as part of a wider £100m funding package.
