Nearly three-quarters of people in the UK say that laws regulating artificial intelligence (AI) would make them more comfortable with its use. The finding comes from a survey of more than 3,500 UK residents that explored their awareness of, and experiences with, AI technologies.
About 72% of respondents said clear regulations would ease their concerns about AI, and roughly 90% believe that governmental or regulatory bodies should have the authority to stop harmful AI applications. More than 75% agree that AI safety should be overseen by the government or independent regulators, rather than left solely to private companies.
The survey also found that AI-related harms are common: two-thirds of respondents reported experiencing harmful effects, with misinformation, financial fraud, and deepfakes the most frequently cited. Many respondents also want recourse when AI makes decisions about them, with 65% calling for clearer procedures for challenging AI-driven decisions and 61% for greater transparency about how AI systems reach their conclusions.
Despite this widespread demand for regulation, the UK has no comprehensive laws governing AI. A report from the Ada Lovelace Institute and the Alan Turing Institute noted that the government recognizes the need to protect people from AI risks, but criticized the absence of specific strategies to realize these aims.
Octavia Field Reid from the Ada Lovelace Institute emphasized that responsible AI development must heed public expectations and experiences. The disconnect between current policies and public concerns might lead to a backlash, especially among marginalized groups who often experience the negative impacts of AI most acutely.
The survey also revealed disparities in attitudes toward AI across demographic groups. For instance, 57% of Black participants and 52% of Asian participants expressed worry about facial recognition in law enforcement, compared with 39% of the general population. People on lower incomes also tended to view AI technologies as less beneficial than those in higher income brackets did.
Data privacy remains a top concern: about 83% of the public are uneasy about public agencies sharing their data with private firms to train AI systems. Many also feel unrepresented in decisions about AI, with half saying their voices are not considered.
Experts stress that incorporating public perspectives into policymaking is essential to realizing AI's potential benefits. Helen Margetts of the Alan Turing Institute pointed to the government's commitment to giving regulators the resources they need to strengthen public trust.
The report urges policymakers to consult the public actively, capturing the range of opinions and concerns around AI so that risks and appropriate governance strategies can be identified effectively.
Despite the importance of public engagement in shaping AI systems, there are currently few effective routes for the public to weigh in on these technologies. Angela McLean, the government's chief scientific adviser, noted the absence of viable channels for public input on scientific and technological matters.
Looking globally, a UN advisory body recently stressed the need for an international framework for AI governance, highlighting the transboundary nature of AI technology. It cited the risks associated with the concentration of power and wealth that AI brings, along with the unpredictable nature of its development and deployment. The call is for a collaborative, global approach to ensure AI serves everyone fairly.