Regulate AI Like Nuclear Power, Says UK Labour Party

A senior UK Labour Party politician has suggested that artificial intelligence technology should be regulated and require a government license, much like pharmaceutical or nuclear power companies, according to a report by the Guardian.

“That is the kind of model we should be thinking about, where you have to have a license in order to build these models,” a digital spokesperson for the Labour Party, Lucy Powell, told the publication. “These seem to me to be good examples of how this can be done.”

Powell said policymakers should focus on regulating artificial intelligence at the developmental level rather than attempting to ban the technology outright. Italy, for example, banned ChatGPT in March over privacy concerns, then lifted the ban in April after OpenAI instituted new security measures.

“My real point of concern is the lack of any regulation of the large language models that can then be applied across a range of AI tools, whether that’s governing how they are built, how they are managed, or how they are controlled,” Powell said.

Powell’s comments echo those of U.S. Senator Lindsey Graham, who said during a congressional hearing in May that there should be an agency that can grant AI developers a license and also take it away—an idea with which OpenAI CEO Sam Altman agreed.

Altman even recommended creating a federal agency to set standards and practices.

“I would form a new agency that licenses any effort above a certain scale of capabilities, and that can take that license away and ensure compliance with safety standards,” Altman said.

Invoking nuclear technology as a parallel to artificial intelligence is not new. In May, famed investor Warren Buffett likened AI to the atomic bomb.

“I know we won’t be able to uninvent it and, you know, we did invent—for very, very good reason—the atom bomb,” Buffett said.

That same month, artificial intelligence pioneer Geoffrey Hinton resigned from his position at Google so that he could speak freely about the potential dangers of AI.

Last week, the Center for AI Safety published a letter saying, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Signatories included Altman, Microsoft co-founder Bill Gates, and Stability AI CEO Emad Mostaque.

The rapid development and application of AI technology have also raised concerns about bias, discrimination, and surveillance, which Powell believes can be mitigated by requiring developers to be more open about their data.

“This technology is moving so fast that it needs an active, interventionist government approach, rather than a laissez-faire one,” Powell said.

 
