Open source AI is picking up speed among major players. DeepSeek just announced it will share parts of its model architecture and code with the community. Alibaba is also in the mix, launching a new open source multimodal model to help create cost-effective AI agents. Then there are Meta’s Llama 4 models, labeled as “semi-open,” which rank among the most powerful AI systems publicly available.
This trend toward openness brings some key advantages: it encourages transparency, promotes collaboration, and speeds up development across the AI community. But it also comes with risks. AI models are still software, complete with complex codebases and dependencies. Like any open source project, they can harbor vulnerabilities, outdated components, or even hidden backdoors, and those risks compound as adoption increases.
At their essence, AI models are still code—albeit layered with complexity. Validating traditional components is like reviewing a detailed blueprint. AI models, however, function like black boxes, built from massive datasets and complex training processes that are tough to trace. Even when datasets or tuning parameters are available, they’re often unwieldy for auditing. Harmful behaviors, whether unintentional or deliberate, can get baked into these models, and the unpredictable nature of AI makes thorough testing a challenge. The same features that make AI so effective also render it risky.
Bias is one of the trickiest dangers. Skewed training data embeds systemic flaws that can quietly reinforce harmful patterns in hiring, lending, or healthcare under the guise of objectivity. The black-box nature of these models compounds the problem: companies are deploying powerful systems without fully grasping how they function or how their results might affect individuals.
These aren’t just hypothetical threats. You can’t comb through every line of training data or foresee every possible output. Unlike with traditional software, proving that an AI model is safe or reliable is nearly impossible.
When you can’t fully test AI models or easily manage their effects, you’re left with trust. But that trust doesn’t come from wishful thinking; it requires governance. Organizations need rigorous oversight to vet models, track their origins, and monitor their behavior over time. This goes beyond technical measures; it’s strategic. Until businesses apply the same scrutiny to open source AI as they do to other software, they’ll face unseen risks with unpredictable consequences.
Securing Open Source AI: A Call to Action
Businesses should approach open source AI with at least as much rigor as they apply to the rest of their software supply chain. These models present unique risks that can’t be completely tested, so proactive oversight is crucial.
- Establish Visibility into AI Usage: Many organizations lack the tools to track AI model usage in their software. Without visibility into where models are embedded—like in applications, pipelines, or APIs—effective governance is impossible. You can’t manage what you can’t see. (A minimal inventory-scan sketch follows this list.)
- Adopt Software Supply Chain Best Practices: Treat AI models like other critical software components. Scan for known vulnerabilities, validate training data sources, and carefully manage updates to avoid regressions or new risks. (A sample artifact-verification check follows this list.)
- Implement Governance and Oversight: Many companies already have robust policies for traditional open source use. AI models deserve similar attention. Set up governance frameworks that include model approval processes, dependency tracking, and internal standards for safe AI use.
- Push for Transparency: AI doesn’t have to remain a black box. Organizations should demand clarity around model lineage: who developed it, the data used for training, modifications made, and its origins. Documentation should be standard practice. (A sample lineage record appears after this list.)
- Invest in Continuous Monitoring: The risks of AI don’t stop at deployment. Threat actors are exploring prompt injection, model manipulation, and adversarial exploits. Real-time monitoring and anomaly detection can catch issues before they escalate. (A simple monitoring sketch closes out the examples below.)
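To make the visibility point concrete, here is a minimal sketch of the kind of inventory scan an organization might run over a repository. The file suffixes and loading patterns are illustrative assumptions, not an exhaustive or authoritative list; a real inventory tool would need to cover many more frameworks, registries, and hosted APIs.

```python
import re
from pathlib import Path

# Illustrative patterns only: a real inventory would cover far more
# frameworks, registries, and hosted APIs than these few examples.
MODEL_FILE_SUFFIXES = {".safetensors", ".gguf", ".onnx", ".pt"}
LOADING_PATTERNS = re.compile(
    r"from_pretrained\(|torch\.load\(|hf_hub_download\("
)

def scan_repo(root: str) -> list[tuple[str, str]]:
    """Return (path, reason) pairs for files that appear to embed or load models."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if path.suffix in MODEL_FILE_SUFFIXES:
            findings.append((str(path), f"model artifact ({path.suffix})"))
        elif path.suffix == ".py":
            text = path.read_text(errors="ignore")
            for match in LOADING_PATTERNS.finditer(text):
                findings.append((str(path), f"model-loading call: {match.group(0)}"))
    return findings

if __name__ == "__main__":
    for path, reason in scan_repo("."):
        print(f"{path}: {reason}")
```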
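For the supply-chain item, one familiar practice carries over directly: pin and verify the digest of every model artifact before loading it, much as lock files pin traditional dependencies. The sketch below assumes a checked-in table of pinned SHA-256 digests; the file name and digest shown are placeholders.

```python
import hashlib
from pathlib import Path

# Pinned digests would live in version control next to the code that loads the
# model, like a lock file for traditional dependencies. The entry below is a
# placeholder, not a real value.
PINNED_SHA256 = {
    "example-model.safetensors": "<expected sha256 hex digest>",
}

def sha256_of(path: str) -> str:
    """Hash the artifact in chunks so multi-gigabyte model files stay manageable."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model_artifact(path: str) -> None:
    """Refuse to use a model file whose digest is unpinned or does not match."""
    name = Path(path).name
    expected = PINNED_SHA256.get(name)
    if expected is None:
        raise RuntimeError(f"No pinned digest for {name}; refusing to load an unvetted model.")
    actual = sha256_of(path)
    if actual != expected:
        raise RuntimeError(f"Digest mismatch for {name}: got {actual}, expected {expected}.")

# verify_model_artifact("models/example-model.safetensors")  # call before loading
```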
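For the transparency item, lineage documentation can be as simple as a structured record checked in alongside the code that uses the model. The sketch below shows one hypothetical shape for such a record; the field names and example values are assumptions and should be aligned with whatever model-card or provenance standard an organization adopts.

```python
import json
from dataclasses import asdict, dataclass, field

# Field names are illustrative assumptions; align them with your organization's
# chosen model-card or provenance standard.
@dataclass
class ModelLineageRecord:
    name: str
    version: str
    developer: str                      # who built or fine-tuned the model
    origin: str                         # where the weights were obtained
    training_data_sources: list[str]    # datasets used, as far as disclosed
    modifications: list[str] = field(default_factory=list)  # fine-tunes, quantization, etc.
    license: str = "unknown"
    internal_approval: str = "pending"  # governance sign-off status

record = ModelLineageRecord(
    name="example-model",
    version="1.0",
    developer="Example Lab",
    origin="downloaded from the vendor's public repository",
    training_data_sources=["publicly disclosed web corpus (details unverified)"],
    modifications=["4-bit quantization for inference"],
    license="apache-2.0",
)
print(json.dumps(asdict(record), indent=2))
```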
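Finally, for continuous monitoring, even a thin wrapper around model calls can provide the logging and basic anomaly flags that later detection builds on. The heuristics below are deliberately crude and illustrative, and `call_model` is a stand-in for whatever function actually queries the model; none of this substitutes for purpose-built prompt-injection or abuse detection.

```python
import logging
import re
from datetime import datetime, timezone
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-monitor")

# Deliberately crude heuristics: real deployments would pair logging like this
# with purpose-built detection and human review, not a single regex.
INJECTION_HINTS = re.compile(
    r"ignore (all|previous) instructions|reveal your system prompt",
    re.IGNORECASE,
)

def monitored_call(call_model: Callable[[str], str], prompt: str) -> str:
    """Wrap a model call with logging and simple anomaly flags.

    `call_model` is a stand-in for whatever function actually queries the model.
    """
    flags = []
    if INJECTION_HINTS.search(prompt):
        flags.append("possible prompt injection")
    response = call_model(prompt)
    if len(response) > 20_000:
        flags.append("unusually long response")
    log.info(
        "ts=%s prompt_len=%d response_len=%d flags=%s",
        datetime.now(timezone.utc).isoformat(),
        len(prompt),
        len(response),
        flags or "none",
    )
    return response

# Example with a stub model function:
# monitored_call(lambda p: "stub response", "Summarize this document.")
```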
DeepSeek’s decision to share parts of its model code is part of a larger trend: major players are starting to engage more with the open source AI community, despite the lack of full transparency. For businesses using these models, this increasing accessibility is both an opportunity and a responsibility. Just because a model is available doesn’t guarantee it’s reliable. Security, oversight, and governance must be prioritized to ensure these tools are safe, compliant, and aligned with business goals.
In the race to adopt AI, trust is key. Trust calls for visibility, accountability, and governance every step of the way.