The UK government is rolling out a new artificial intelligence (AI) assurance platform. The initiative aims to help businesses across the country identify and manage the risks that come with AI technology. The UK's AI assurance market currently comprises 524 firms, employs more than 12,000 people and is worth over £1 billion, and the government sees potential for the sector to grow sixfold to £6.5 billion by 2035.
Launching on November 6, 2024, the platform will serve as a comprehensive resource for AI assurance. It will bring existing tools, services, and guidance together in one location, including introductory material on AI assurance techniques from the Department for Science, Innovation and Technology (DSIT). The platform will also set out clear steps for businesses to assess the impact of their AI systems and check for bias in their data, helping to build trust in the everyday use of AI.
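The platform itself offers guidance rather than code, but to illustrate the kind of data bias check such guidance points towards, here is a minimal, purely hypothetical sketch: it computes positive-outcome rates per demographic group and reports the demographic parity gap. The data, field names, and threshold are assumptions for illustration only.

```python
# Illustrative sketch only: a basic demographic-parity check of the kind an
# AI assurance review might include. Names and thresholds are hypothetical.
from collections import defaultdict

def positive_rates(records, group_key="group", outcome_key="outcome"):
    """Return the rate of positive outcomes per group in a list of dicts."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in records:
        g = row[group_key]
        totals[g] += 1
        positives[g] += 1 if row[outcome_key] else 0
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in positive-outcome rates between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

if __name__ == "__main__":
    # Hypothetical loan-approval outcomes labelled with a protected attribute.
    data = [
        {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
        {"group": "A", "outcome": 0}, {"group": "B", "outcome": 1},
        {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
    ]
    rates = positive_rates(data)
    print("Positive-outcome rate by group:", rates)
    print("Demographic parity gap:", round(parity_gap(rates), 3))
    # A large gap (e.g. well above 0.1) would typically prompt further review.
```

In practice an organisation would run a check like this on its own data, alongside the impact-assessment steps the platform describes, rather than relying on a toy threshold.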
Digital Secretary Peter Kyle emphasizes AI's potential to enhance public services, increase productivity, and stimulate economic recovery. He says that to fully leverage the technology, trust must be built as AI becomes an integral part of everyday life, and that the steps announced are designed to give businesses the clarity and support they need to use AI safely and responsibly.
DSIT plans to expand the platform over time with new resources such as an "AI Essentials toolkit," which aims to distil governance frameworks and standards into practical guidance for industry. The department has also launched an open consultation on a new self-assessment tool called AI Management Essentials (AIME), which will offer straightforward guidelines for organizations to develop AI ethically and responsibly and is designed to be accessible, especially for small and medium-sized enterprises (SMEs).
The insights gained from the AIME self-assessment will also help public sector buyers make informed procurement decisions about AI technology. Together, the products available through the platform are intended to help organizations get started with AI assurance and lay the foundation for a more resilient AI assurance ecosystem.
The UK government views the development of safe and responsible AI systems as central to its strategy for the technology. It plans to increase the availability of third-party AI assurance and to develop a "roadmap to trust" in partnership with industry. This effort includes a "terminology tool for responsible AI" to help assurance providers navigate international governance frameworks.
Additionally, the UK's AI Safety Institute (AISI), established by former Prime Minister Rishi Sunak ahead of the AI Safety Summit in November 2023, is launching its Systemic AI Safety Grants program, which will award up to £200,000 in funding to researchers working to make AI systems safer.
On the same day as the assurance platform's launch, the AISI announced a partnership with Singapore's AI safety institute. The collaboration will focus on joint research and on developing shared policies, standards, and guidance. Singapore's Minister for Digital Development and Information, Josephine Teo, expressed commitment to creating AI for the public good and emphasized the importance of the partnership with the UK.
Ian Hogarth, chair of the UK AISI, stated that tackling AI safety effectively requires global cooperation. The agreement with Singapore marks the first step in a shared goal to enhance AI safety science and promote best practices in the responsible development of AI systems.