Generative AI (GenAI) is changing how businesses operate, bringing both exciting opportunities and serious risks. For Chief Information Security Officers (CISOs), the goal is to foster innovation while protecting sensitive data and staying compliant with varying laws around the globe. If an AI tool is compromised, it could lead to data leaks, breaches of laws, or even poor decision-making based on incorrect information.
To tackle these challenges, CISOs need to rethink their cybersecurity strategies in three key areas: data use, data sovereignty, and AI safety.
### Data Use: Know What You Share
The main risk with AI isn’t malicious hackers; it’s a lack of understanding. Many organizations adopt third-party AI tools without realizing how their data is handled. Most AI platforms pull information from vast, publicly available sources, often without regard for the origin.
While major companies like Microsoft and Google are incorporating more ethical safeguards, much of their fine print remains muddled and can change unexpectedly. CISOs should treat AI tools like high-risk vendors. Before deploying, security teams must carefully review the terms of use, check where and how data might be stored or reused, and implement opt-out options when possible. Bringing in external consultants or specialists can help organizations navigate these complex agreements. Think of data shared with AI as a valuable export that needs to be carefully monitored.
### Data Sovereignty: Respecting Borders
Another significant risk in AI is the erosion of geographical boundaries concerning data. What’s legal in one country may not be in another. For multinational companies, this creates a tricky landscape filled with potential regulatory pitfalls, especially with laws like DORA, the UK Cyber Security and Resilience Bill, and frameworks such as the EU’s GDPR.
CISOs must adjust their security strategies to match regional data sovereignty laws. This involves scrutinizing where AI systems are hosted, how data moves between regions, and whether the right data transfer mechanisms are in place. If AI tools can’t comply with localization requirements, security teams should consider geofencing, data masking, or using local AI deployments. Policies should enforce data localization for sensitive datasets and include clear guidelines on cross-border data handling.
### Safety: Robust AI Protection
The final aspect of AI security involves protecting systems against manipulation, such as prompt injection attacks and model hallucinations. Prompt injection is gaining attention as a new threat where attackers can manipulate AI models to behave unexpectedly or reveal confidential information.
CISOs need a dual approach to mitigate these risks. First, adapt internal controls and conduct red-teaming exercises, like penetration testing, to test AI systems rigorously. Techniques like chaos engineering help reveal vulnerabilities before they can be exploited.
Second, the selection of vendors should prioritize AI providers that demonstrate solid testing, safety protocols, and ethical practices. Though these vendors may cost more, the risks associated with untested AI tools are far greater. CISOs should also push for contracts that clarify vendor responsibilities for operational issues or unsafe outcomes. Agreements must cover liability, incident response steps, and how to manage breaches if they occur.
### Evolving Roles for CISOs
As AI becomes integral to business operations, CISOs need to transition from being mere security gatekeepers to enablers of safe innovation. It’s crucial to update data policies, strengthen data sovereignty controls, and create multi-layered safety nets around AI tools. The key is to unlock AI’s potential while maintaining trust and compliance.
The best way to keep up with the fast-paced changes from AI is by being proactive, knowledgeable, collaborative, and focused on accountability.
Elliott Wilkes, the CTO at Advanced Cyber Defence Systems, brings over a decade of experience in digital transformation, including his time as a cybersecurity consultant for the Civil Service in both the U.S. and the U.K.