The launch of DeepSeek has sparked familiar ethical debates surrounding large language models. We often grapple with questions about data use, transparency, and bias. But when this tech comes from China, it triggers deeper geopolitical worries, as we’ve seen with TikTok. Fear about data handling quickly morphs into anxiety over state influence and national security risks.
These concerns aren’t unfounded. The AI arms race between the US and China has elevated AI to a matter of national strategy, with each country prioritizing leadership in the field. Now, whenever a new model drops, whether from the US, China, or elsewhere, it is examined not just for its capabilities but for its geopolitical implications.
On data security specifically, major US companies like OpenAI and Anthropic have faced scrutiny over their data practices, but DeepSeek adds another layer of risk. China has a well-documented record of state-sponsored corporate espionage, highlighted by incidents like the breach of the US Treasury that American officials attributed to Chinese state-backed hackers.
For Chief Information Security Officers (CISOs) and other security leaders, the emergence of this powerful AI model should sharpen the focus on data security, especially around intellectual property and the sensitive information that underpins a competitive edge. But the worry isn’t just what DeepSeek can do now; it’s how future models might be trained. Today’s models depend on massive datasets drawn from publicly available sources, but that supply won’t be enough for the next generation. There is growing concern that upcoming LLMs could be trained on data gathered through unethical means, whether state-sponsored hacks or large-scale scraping operations in murky legal territory.
This threat isn’t theoretical. "Harvest now, decrypt later" attacks, in which adversaries exfiltrate and stockpile encrypted data today in the expectation that future capabilities, most plausibly quantum computing, will let them decrypt it, are already recognized in the industry. For CISOs, this means today’s vulnerabilities are not the only concern: even securely stored encrypted data could be exposed over the long haul as business and government strategies evolve.
So, how can CISOs navigate this risk? DeepSeek’s arrival serves as a wake-up call to reassess data protection, particularly against state-level threats. Organizations must start by gaining complete visibility into what data they have, where it is, and who can access it.
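As a minimal illustration of that first visibility step, the sketch below inventories a directory tree and flags files matching simple sensitive-data patterns. The patterns and paths here are hypothetical placeholders; real data-discovery programs rely on dedicated DLP and classification tooling rather than a handful of regexes.

```python
import re
from pathlib import Path

# Hypothetical patterns a discovery pass might flag; production
# deployments use far richer classifiers than these two regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
}

def inventory(root: str) -> dict[str, list[str]]:
    """Walk a directory tree and record which files match which patterns."""
    findings: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file: skip rather than abort the scan
        hits = [name for name, rx in PATTERNS.items() if rx.search(text)]
        if hits:
            findings[str(path)] = hits
    return findings
```

The output maps each flagged file to the categories it matched, which is the raw material for the "what data, where, and who can access it" questions above.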
But visibility alone isn’t enough; security technology has to keep pace as well. Privacy-enhancing technologies, such as fully homomorphic encryption, and cryptography resilient to quantum attacks should be on any forward-thinking team’s agenda. Organizations also need to push their technology suppliers to adopt stronger encryption, both to keep up with the rapid pace of AI advancement and to prepare for a potential post-quantum world.
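One concrete pattern behind quantum-resilient encryption is hybrid key derivation: feed a classical shared secret and a post-quantum one through a single key-derivation function, so an attacker must break both exchanges to recover the session key. The sketch below implements HKDF (RFC 5869) from the standard library; the two input secrets are placeholders standing in for real outputs of, say, an ECDH exchange and a post-quantum KEM such as ML-KEM.

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869): condense input keying material into a PRK."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    """HKDF-Expand (RFC 5869): stretch the PRK into output keying material."""
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def hybrid_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    """Derive one session key from two independent shared secrets.

    If either underlying exchange remains unbroken, the derived key
    stays safe, which is the hedge against harvest-now, decrypt-later.
    """
    prk = hkdf_extract(b"hybrid-kdf-salt", classical_secret + pq_secret)
    return hkdf_expand(prk, b"session-key", 32)
```

Concatenating the two secrets before extraction mirrors the approach taken in hybrid TLS key-exchange proposals; the point for CISOs is that crypto-agile designs like this can be adopted now, ahead of any quantum breakthrough.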
There’s also a cultural shift needed. Companies must come to terms with the fact that data security threats often involve more than just hackers after a quick payday. In our geopolitically fractured landscape, data is a strategic asset. Every piece of proprietary information—from design specs to customer data—has new significance, not just for competitors but also for nation-states looking to exploit it.
The launch of DeepSeek is a prompt to recognize that the lines between innovation, economic competition, and geopolitics are blurring. For CISOs, the conversation about data protection needs a major overhaul, acknowledging that data is no longer just a business asset; it’s a target in a larger contest for economic and geopolitical dominance.
Dr. Nick New is the CEO of Optalysys. He holds a PhD in Optical Pattern Recognition from Cambridge, giving him a deep grounding in optical technology, and at Optalysys he leads work in silicon photonics and fully homomorphic encryption.