Thursday, November 21, 2024

UK Government Reveals Details on AI Safety Research Funding

The UK government has kicked off a new research and funding program aimed at making AI safer, offering researchers grants of up to £200,000 each. The initiative, called the Systemic Safety Grants Programme, is delivered in partnership with the Engineering and Physical Sciences Research Council (EPSRC) and Innovate UK, both part of UK Research and Innovation (UKRI). The UK’s AI Safety Institute (AISI) will manage the program, initially funding around 20 projects with £4 million; a further £8.5 million is earmarked for later phases.

Launched ahead of the UK AI Safety Summit in November 2023, the AISI is tasked with investigating the emerging risks posed by new AI systems. It is already working with its US counterpart to improve safety-testing approaches. The focus here is to protect society from AI-related dangers such as deepfakes and misinformation, while fostering public trust in the technology.

The research will focus on critical risks tied to AI deployment in key sectors such as healthcare and energy, with the goal of developing long-term solutions to these challenges. Digital Secretary Peter Kyle emphasized the importance of AI adoption for boosting growth and improving public services, but stressed that public trust in these innovations is crucial. He described the grants program as a step toward ensuring the safe rollout of AI systems across the economy.

UK-based organizations can apply for the grants through a dedicated website. The program, which aims to build a clearer picture of the future challenges posed by AI, also encourages international collaboration: projects may include partners from abroad, strengthening global coordination on safe AI development.

The first round of proposals must be submitted by November 26, 2024, with successful applicants to be announced by the end of January 2025 and funding awarded in February. AISI Chair Ian Hogarth said the program will explore and address risks associated with AI systems, from deepfakes to the potential for AI to fail in unexpected ways. He believes that drawing together diverse research can help build a solid evidence base for understanding AI safety for the public’s benefit.

A press release from the Department for Science, Innovation and Technology (DSIT) reaffirmed Labour’s pledge to introduce targeted regulation for companies that develop powerful AI models, emphasizing a balanced regulatory approach rather than sweeping rules.

In May 2024, the AISI announced it would open its first international office, in San Francisco, to work more closely with leading AI firms such as Anthropic and OpenAI. The announcement also included the release of its first AI model safety-testing results: none of the five large language models tested could complete more complex tasks without human oversight, and all were vulnerable to basic “jailbreaks.” Some even risked producing harmful outputs without deliberate attempts to circumvent their safeguards. Nevertheless, the AISI noted that the models showed promise, successfully completing basic to intermediate cybersecurity tasks and demonstrating knowledge of chemistry and biology comparable to that of PhD-level experts.