Artificial intelligence (AI) is advancing rapidly, and AI agents have emerged as a particularly transformative technology. These agents, powered by cutting-edge models from organizations such as OpenAI and Microsoft, are being integrated into a range of enterprise solutions, delivering substantial gains in automation and efficiency. Nevertheless, the rise of AI agents brings with it an array of new risks and security challenges that organizations must proactively address.
### Understanding the Unique Risks Associated with AI Agents
AI agents mark a significant evolution in how AI engages with both digital and physical environments. Unlike earlier AI models, these agents can operate autonomously or semi-autonomously, making decisions and pursuing objectives with minimal human involvement. While this autonomy presents exciting opportunities, it also considerably broadens the potential threat landscape.
Historically, risks associated with AI were largely confined to the inputs, processing, and outputs of the models, along with vulnerabilities in the software systems that support them. AI agents, however, introduce risks that extend well beyond these traditional boundaries. The vast, intricate chains of events they initiate often remain hidden from human operators, creating serious security concerns as organizations struggle to monitor and control agents’ actions in real time.
Some of the most urgent risks involve data exposure and unauthorized data transfers, which can occur at any stage of an agent-driven process. Furthermore, both benign and malicious AI agents can consume system resources uncontrollably, resulting in denial-of-service conditions. An even graver concern is the potential for inappropriate or harmful activity by misdirected autonomous agents, including “agent hijacking” by external adversaries.
The risks do not end there. Programming errors in AI agents may lead to unintended data breaches and other security vulnerabilities. Reliance on third-party libraries introduces supply chain risks that can compromise both AI-specific and broader IT environments. Moreover, the common practice of hard-coding credentials into agents, frequently seen in low-code and no-code development frameworks, compounds access management problems and makes these agents easier for attackers to exploit.
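To make the credential problem concrete, here is a minimal Python sketch of the safer alternative: resolving a secret at runtime instead of embedding it in the agent’s source. The variable name `AGENT_API_KEY` and the helper `get_agent_credential` are illustrative assumptions, not a specific framework’s API.

```python
import os

# Anti-pattern (do not do this):
# API_KEY = "sk-live-abc123"  # hard-coded credential baked into the agent

def get_agent_credential(name: str) -> str:
    """Fetch a credential from the environment; fail loudly if it is absent."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"Credential {name!r} is not set; refusing to start agent")
    return value

# Demo only: seed a value so the example runs; a real deployment would set
# this via the orchestrator or a secrets manager (AGENT_API_KEY is assumed).
os.environ.setdefault("AGENT_API_KEY", "demo-value-for-illustration")
api_key = get_agent_credential("AGENT_API_KEY")
```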
### Three Key Controls to Mitigate Risks Associated with AI Agents
To effectively manage the multifaceted risks tied to AI agents, organizations should implement robust controls. The first essential step is to create a comprehensive map of all agent activities, including processes, connections, data exposures, and information flows. This visibility is vital for detecting anomalies and ensuring that agent interactions comply with enterprise security policies. Maintaining an immutable audit trail of agent actions is also crucial to support accountability and traceability.
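As a rough illustration of what an immutable audit trail can look like, here is a minimal hash-chained log in Python. The `AuditTrail` class and its methods are hypothetical names for the pattern, not any vendor’s product; the point is that altering a past entry breaks chain verification.

```python
import hashlib
import json
import time

class AuditTrail:
    """Tamper-evident log: each entry's hash covers the previous entry's hash."""

    def __init__(self):
        self._entries = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, agent_id: str, action: str, detail: dict) -> dict:
        entry = {
            "ts": time.time(),
            "agent_id": agent_id,
            "action": action,
            "detail": detail,
            "prev_hash": self._last_hash,
        }
        serialized = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(serialized).hexdigest()
        self._entries.append(entry)
        self._last_hash = entry["hash"]
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edit to a past entry fails verification."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.record("invoice-agent-01", "read", {"resource": "crm/contacts"})
assert trail.verify()
```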
Additionally, organizations should establish a detailed dashboard that monitors how AI agents are being utilized, evaluates their performance against enterprise policies, and checks their compliance with security, privacy, and legal regulations. This dashboard should integrate with existing identity and access management (IAM) systems to enforce least privilege access and restrict unauthorized actions by AI agents.
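A deny-by-default permission check is the core of least privilege enforcement. The sketch below assumes a simple in-memory permission map; a real deployment would query the enterprise IAM system, and the agent names and permission strings here are made up for illustration.

```python
# Assumed mapping of agent identities to their granted permissions.
AGENT_PERMISSIONS = {
    "reporting-agent": {"read:sales_db"},
    "billing-agent": {"read:sales_db", "write:invoices"},
}

def authorize(agent_id: str, permission: str) -> None:
    """Deny by default: unknown agents and unlisted permissions are blocked."""
    granted = AGENT_PERMISSIONS.get(agent_id, set())
    if permission not in granted:
        raise PermissionError(f"{agent_id} lacks {permission}")

authorize("billing-agent", "write:invoices")        # allowed
try:
    authorize("reporting-agent", "write:invoices")  # blocked: least privilege
except PermissionError as e:
    print(e)
```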
Once agent activities are thoroughly mapped, organizations should set up mechanisms to identify and flag any activity that deviates from policy. With baseline behaviors established, outlier transactions become easier to spot and can then be addressed through automatic real-time remediation.
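As a toy example of baselining, the following sketch flags agent runs whose activity volume deviates sharply from historical norms. The metric (records accessed per run) and the z-score threshold are illustrative assumptions; production systems would use richer behavioral features.

```python
import statistics

# Assumed historical baseline: records accessed per agent run.
baseline = [102, 98, 110, 95, 105, 99, 101, 107]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_outlier(observed: int, z_threshold: float = 3.0) -> bool:
    """Flag a run whose volume deviates sharply from the baseline."""
    return abs(observed - mean) / stdev > z_threshold

print(is_outlier(104))   # False: within the normal range
print(is_outlier(5000))  # True: candidate for automatic remediation
```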
Given the rapid pace and high volume of AI agent interactions, human oversight alone may not suffice. Therefore, organizations should implement tools capable of automatically suspending and rectifying rogue transactions while directing unresolved issues to human operators for manual review.
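The suspend-or-escalate pattern might look like the following sketch, in which clearly rogue transactions are halted automatically and ambiguous ones are queued for human review. The `classify` rule here is a deliberately simplistic stand-in for real detection logic.

```python
from queue import Queue

review_queue: Queue = Queue()  # unresolved cases routed to human operators

def classify(txn: dict) -> str:
    """Assumed toy rule: large transfers are rogue, mid-size are ambiguous."""
    amount = txn.get("amount", 0)
    if amount > 10_000:
        return "rogue"
    if amount > 1_000:
        return "uncertain"
    return "ok"

def handle(txn: dict) -> str:
    verdict = classify(txn)
    if verdict == "rogue":
        txn["status"] = "suspended"      # halt the transaction immediately
    elif verdict == "uncertain":
        txn["status"] = "pending_review"
        review_queue.put(txn)            # defer to a human operator
    else:
        txn["status"] = "approved"
    return txn["status"]

print(handle({"id": 1, "amount": 50}))      # approved
print(handle({"id": 2, "amount": 50_000}))  # suspended
print(handle({"id": 3, "amount": 5_000}))   # pending_review
```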
The final control is the application of automatic, real-time remediation to address identified anomalies. This may include actions such as redacting sensitive information, enforcing least privilege access, and blocking access in response to policy violations. Organizations should also maintain deny lists of known threat indicators and of files that AI agents must not access. Establishing a continuous monitoring and feedback loop is essential for identifying and correcting undesirable actions caused by inaccuracies in AI agents.
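Two of the remediation actions named above, redaction and deny lists, are simple enough to sketch directly. The regex and the blocked resources below are assumptions for illustration only.

```python
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US SSN-like strings
DENY_LIST = {"hr/salaries.csv", "legal/privileged/"}

def redact(text: str) -> str:
    """Mask SSN-like values before the agent's output leaves the system."""
    return SSN_PATTERN.sub("[REDACTED]", text)

def is_denied(resource: str) -> bool:
    """Block exact matches and anything under a denied prefix."""
    return any(resource == d or resource.startswith(d) for d in DENY_LIST)

print(redact("Employee SSN: 123-45-6789"))     # Employee SSN: [REDACTED]
print(is_denied("legal/privileged/memo.txt"))  # True
```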
As AI agents become increasingly integrated into enterprise systems, the associated risks and security challenges must not be overlooked. Organizations need to deepen their understanding of these risks and implement necessary controls to mitigate them. By thoroughly mapping AI agent activities, detecting and addressing anomalies, and applying real-time remediation, businesses can leverage the advantages of AI agents while upholding strong security measures. In this fast-changing environment, proactive risk management is not just advisable; it is essential.
Avivah Litan is a Distinguished VP Analyst at Gartner. Digital risk management and building resilience against cyber threats will be discussed at the Gartner Security & Risk Management Summit 2024 in London, September 23-25.