UK politicians are drawing parallels between the nuclear power industry and artificial intelligence.
Advocating for stricter regulation, a representative of Britain's opposition party has suggested that artificial intelligence (AI) developers should face licensing and oversight similar to that applied in the pharmaceutical, medical, and nuclear industries.
On June 5th, the United Kingdom's Labour Party digital spokesperson, Lucy Powell, shared her views in a discussion with The Guardian.
Powell emphasized that organizations building AI models, such as OpenAI and Google, should be required to hold a valid license to do so. Voicing her concerns, she stated:
My real point of concern is the lack of any regulation of the large language models that can then be applied across a range of AI tools, whether that’s governing how they are built, how they are managed or how they are controlled.
In her view, regulating certain technological developments is a more viable option than banning them, a point she made in reference to the European Union's prohibition of facial recognition tools.
Powell suggested that AI can lead to "a lot of unintended consequences." However, if developers were mandated to disclose their AI training models and datasets, the government could take steps to mitigate some risks.
Opposing a laissez-faire stance, Powell urged an active, interventionist government response to the rapidly advancing technology:
This technology is moving so fast that it needs an active, interventionist government approach, rather than a laissez-faire one.
Recognizing the potential impact of advanced technology on the UK economy, Powell confirmed that the Labour Party is in the final stages of defining its policies on AI and associated technologies.
On a similar note, Matt Clifford, chair of the UK government's Advanced Research and Invention Agency (ARIA), issued a stern warning, cautioning that AI could pose a threat to humans within as little as two years.
Calls for rigorous AI regulation, in line with sectors such as nuclear power and medicine, are gaining ground in the UK. The discussion makes clear that balancing innovation with safety will be paramount to the future of AI development.