The House of Commons Science, Innovation and Technology Committee (SITC) recommends that the next UK government be prepared to introduce legislation on artificial intelligence (AI) if the current regulatory framework proves insufficient to protect the public interest. The committee, which conducted an inquiry into the governance of AI in the UK, stressed the need for legislative readiness to address potential harms as the technology advances.
While the committee acknowledges the role of existing regulators in overseeing AI development within their sectors, it expressed concern that these regulators are under-resourced relative to the AI developers they oversee. To ensure accountability, the committee recommended that the next government provide additional support and funding to regulators monitoring the AI industry.
Furthermore, the SITC raised the alarm over reports that the UK’s AI Safety Institute (AISI) has faced difficulties in obtaining access to AI models for safety testing. The committee urged the government to disclose which models have undergone safety testing, the findings of those tests, and whether developers have acted on safety recommendations.
Regarding pre-deployment testing, the SITC emphasised the importance of companies providing access to unreleased AI models for safety assessments. The committee called on the government to identify any developers who have refused such access and to explain the reasons given. It also highlighted the voluntary commitments made by leading AI companies to ensure safety across all stages of the AI lifecycle.
In conclusion, the SITC underscored the need for the UK government to regulate AI proactively in order to mitigate potential risks and maintain public trust in the technology. The committee emphasised the importance of aligning regulatory measures with international standards on AI safety and urged the government to review its regulatory approach at regular intervals.