More than 80% of companies worldwide are using AI to streamline their operations. AI has become part of our everyday lives through chatbots, voice assistants, and smart search technologies. But as AI spreads, so do the risks, especially from nation-state actors who exploit it for espionage, cyber attacks, and threats to supply chains.
Recent events like February’s AI Action Summit, President Trump’s Executive Order, and the UK government’s AI Opportunities Action Plan underline two key points. First, national interests drive government AI strategies. Second, AI is now a central focus in national defense strategies. This shift raises alarms about models like DeepSeek’s R1 being used for industrial espionage.
Yet getting caught up in specific models or vendors misses the bigger picture. AI can be, and already is being, weaponized to support tactics such as cyber reconnaissance and the targeting of key industry secrets. Chief information security officers (CISOs) and security leaders are tasked with understanding how AI transforms the threat landscape and what steps to take next. Startups and tech firms face unique challenges, as they often attract the attention of nation-states targeting advanced technology. To counter AI-related threats, adjustments in people, processes, and technology within cybersecurity are essential.
Nation-state actors are increasingly employing generative AI in their cyber operations to boost efficiency and precision. Over 57 advanced persistent threat (APT) groups have been identified as using AI in their attacks. AI streamlines processes like research, translation, coding, and even malware development. One of the most alarming uses of AI is in creating convincing phishing messages, which increases both the frequency and effectiveness of cyberattacks. Large language models (LLMs) can generate personalized and highly believable messages, amplifying the risks of social engineering. A notable example is Arup, the engineering firm that lost $25 million to a deepfake impersonating its CFO, highlighting the dangers posed by AI-fueled operations.
Moreover, the risk is not just about direct cyberattacks. Nation-state actors target AI supply chains—both hardware and software. The SolarWinds Sunburst attack illustrated how sophisticated state actors can breach enterprise networks through supply chains. This threat reaches AI software too; by embedding vulnerabilities during development, attackers can exploit a vast network of targets.
Recent directives, like the US's prohibition on importing certain connected vehicle technology from selected countries, underline increasing vigilance regarding supply chain vulnerabilities. Malicious entities are even infiltrating widely used Python packages for LLMs like ChatGPT to deliver malware capable of harvesting critical data. Organizations procuring AI systems must be vigilant about the origins of the AI and the potential interactions users will have with it.
To combat AI-augmented threats from nation-states, security leaders should adopt a variety of strategies, including governance frameworks for AI, targeted employee training, strong data protection protocols, and proactive threat intelligence. Aligning AI governance with best practices, like NIST’s AI Risk Management Framework and ISO 42001, lays the groundwork for robust defenses. Clear roles, policies on acceptable use, and vigilant monitoring can deter the exposure of sensitive information.
Adapting the culture and roles within organizations is crucial. Training should start with AI literacy, helping staff understand AI’s implications on security. An inventory of AI systems used within an enterprise is a must; CISOs should have visibility into where and how AI is being applied across their organizations.
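As a concrete starting point, such an inventory can be as simple as a structured record per AI system that the organization can query. The fields and example entries below are purely illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an enterprise AI inventory (illustrative fields only)."""
    name: str                   # e.g. an internal chatbot or a vendor LLM API
    owner: str                  # accountable business owner
    vendor: str                 # supplier, or "internal" for in-house models
    data_classes: list = field(default_factory=list)  # data categories it touches
    internet_facing: bool = False

# Hypothetical inventory; a CISO can query it, e.g. for systems touching PII.
inventory = [
    AISystemRecord("support-chatbot", "cust-ops", "vendor-x", ["customer-pii"], True),
    AISystemRecord("code-assistant", "engineering", "internal", ["source-code"]),
]
sensitive = [r.name for r in inventory if "customer-pii" in r.data_classes]
print(sensitive)  # ['support-chatbot']
```

Even this minimal shape gives security teams the visibility the article calls for: which AI systems exist, who owns them, and what data they can reach.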
Implementing data access controls can also reduce the risk of proprietary information being stolen. Techniques like data segmentation can keep AI models from handling sensitive information, and privacy measures such as encryption can bolster defenses against state-sponsored attacks. Embracing principles like data minimization and purpose limitation can reinforce both security and responsible AI usage.
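One way to apply data minimization in practice is to redact sensitive tokens before any text reaches an external model. The sketch below uses two illustrative regex patterns; a production deployment would rely on a vetted DLP tool with far more comprehensive detectors:

```python
import re

# Illustrative patterns only; real systems need broader, vetted detectors.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def minimize(text: str) -> str:
    """Redact sensitive tokens before text is sent to an external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact jane.doe@example.com, SSN 123-45-6789, about the outage."
print(minimize(prompt))
# Contact [EMAIL], SSN [SSN], about the outage.
```

The same gateway can enforce purpose limitation by rejecting prompts outright when they contain data classes a given model is not approved to handle.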
Supply chain risk management is vital to avoid the infiltration of compromised AI tools. Conducting thorough security assessments of third-party AI vendors, ensuring that AI models are not reliant on unsecured foreign APIs, and maintaining documentation of software dependencies can help detect vulnerabilities.
Lastly, AI itself can be harnessed to combat AI-driven threats. Anomaly detection powered by AI can highlight unusual behavior or data loss patterns. Utilizing adversarial AI to assess system vulnerabilities and bolster monitoring for AI-generated phishing attempts can strengthen defenses. As AI-enabled attacks progress beyond human capabilities, automated monitoring and defensive strategies become essential for protecting against rapid exploitation of vulnerabilities. The need for proactive and strategic responses from security leaders is clear, as they work to build robust defenses against the escalating tide of AI-powered threats.
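At its simplest, the anomaly detection described above amounts to flagging behavior far outside a historical baseline. The sketch below uses a z-score over daily outbound transfer volumes; the data, the 3-sigma threshold, and the exfiltration framing are illustrative assumptions, and real deployments would use learned models over many signals:

```python
from statistics import mean, stdev

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    above the historical baseline (e.g. daily outbound data in GB)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observed if (x - mu) / sigma > threshold]

# Ten days of normal outbound transfer volumes, then a suspicious spike.
baseline = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3, 0.8, 1.1, 1.0, 1.2]
today = [1.1, 9.5]   # 9.5 GB could indicate bulk exfiltration
print(flag_anomalies(baseline, today))  # [9.5]
```

The point is less the statistics than the operating model: automated baselining and alerting run at machine speed, which is the only way to keep pace with AI-accelerated attackers.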
Elisabeth Mackay, a cyber security expert at PA Consulting, emphasizes this urgent call to action in addressing the increasingly complex landscape of cyber threats.