Saturday, May 24, 2025

AI in National Security: Balancing Proportionality and Privacy Concerns

A recent study released during the Centre for Emerging Technology and Security's (CETaS) annual Showcase 2025 event sheds light on public concerns about automated data processing in national security.

The research, titled “UK Public Attitudes to National Security Data Processing,” shows that awareness of what national security agencies do is quite low among the UK public. During a panel discussion, investigatory powers commissioner Brian Leveson emphasized the new challenges posed by technology. He stated, “We’re facing growing challenges. Rapid advancements, particularly in AI, are reshaping our public authorities.” Leveson pointed out that these changes affect how information is collected and processed in intelligence work, noting that AI may soon be central to investigations.

However, he warned about the risks involved: “AI could enable investigations to encompass far more individuals than ever before, raising concerns about privacy, proportionality, and collateral intrusion.”

The CETaS research, based on a Savanta poll of 3,554 adults and a 33-person citizens' panel, found more support than opposition for national security agencies processing data, even sensitive information such as identifiable medical data. While there was general backing for police use of data, regional police forces received slightly less support than national agencies.

Despite this support, the public does not want national security agencies to share personal data with political parties or commercial organizations. Marion Oswald, a co-author of the report, highlighted a key point: “Data collection without consent is always intrusive, even if the analysis is automated and unseen.”

The study indicates hesitance around the use of predictive tools, with only 10% in favor. People expressed concerns about accuracy and fairness. “Panel members were particularly worried about these issues and wanted clear safeguards,” said Oswald, stressing the need for technology oversight.

The study also found a substantial gap in public understanding of national security efforts. A significant 61% of respondents felt they understood agencies’ work “slightly” or “not at all,” while only 7% believed they understood it “a lot.” Rosamund Powell, another co-author, noted that the public’s ideas about national security are often shaped by fictional portrayals, like those in James Bond movies.

Yet as people learn more about national security practices, such as the collection of facial recognition data, their concerns grow. Powell noted that there is more support for analyzing public data, such as social media posts, than for private information, which raises additional apprehensions.