Thursday, November 21, 2024

What are the Security Risks of Adopting Bring Your Own AI?

Since OpenAI launched ChatGPT in November 2022, interest in generative AI tools has skyrocketed. People use these tools for everything from drafting emails to powering chatbots, thanks to their ability to generate relevant responses to a wide range of prompts.

According to Microsoft’s latest Work Trend Index report, which surveyed over 31,000 professionals, about 75% of knowledge workers now use some form of generative AI at work, with nearly half of them having started in just the last six months. Interestingly, around 80% of these users bring their own personal AI tools to work, a trend more pronounced in smaller businesses. The adoption spans all age groups, not just tech-savvy younger users.

With the surge in information, many professionals face what’s called digital debt: the inflow of emails, messages, and meetings outpacing their capacity to process it. A prime example is email overload; the report reveals that around 85% of emails are skimmed in under 15 seconds. It’s no wonder that people are drawn to tools that simplify the tedious aspects of their jobs.

Nick Hedderman from Microsoft pointed out that this digital debt has intensified during the pandemic. He mentioned that 68% of employees feel overwhelmed by their workload, and nearly half report feeling burnt out.

Most professionals are turning to accessible generative AI tools found on smartphones or online. However, many of these tools operate outside corporate oversight. When users access free online tools, their data often becomes a product for others to exploit.

Sarah Armstrong-Smith, Microsoft’s Chief Security Advisor, cautioned that users need to treat free tools with the same wariness as social media platforms. She raised concerns about what data is used for training AI and whether user information is secure.

The challenge of using external generative AI tools boils down to data governance. Organizations are dealing with shadow IT: staff using apps not sanctioned by the IT department. Armstrong-Smith highlighted that the issues surrounding data sharing and oversight are longstanding.
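One common first step against shadow IT of this kind is simply measuring it. As a minimal sketch, assuming a CSV-style web-proxy log with `user` and `domain` columns (both the log format and the domain list here are illustrative, not a complete inventory of AI services), an IT team could count which employees are reaching public generative AI endpoints:

```python
# Illustrative sketch: flag requests to public generative AI services
# in a web-proxy log. The log format (CSV with "user" and "domain"
# columns) and the domain list are hypothetical examples.
import csv
from collections import Counter

AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

def shadow_ai_hits(log_path):
    """Count requests per user to known public AI domains."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in AI_DOMAINS:
                hits[row["user"]] += 1
    return hits
```

The output is less useful for blocking individuals than for sizing demand: a large hit count is a signal that employees need a sanctioned alternative.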

The data governance issues with external generative AI tools are quite serious. First, there’s the risk of data leakage when users paste confidential information into online tools that lack proper control. This could lead to sensitive data being misused in AI training.

Second, if you introduce unverified data from an external source into your company’s systems, you risk contaminating your own datasets. Armstrong-Smith pointed out that improper use of generative AI has already led to significant errors. For example, a lawyer once relied on ChatGPT to draft documents but was misled by generated, fictitious case references.

In a corporate context, using inaccurate information can severely impact decision-making processes. Armstrong-Smith emphasized that businesses must ensure the tools they use have proper governance and security measures in place.

If a large number of employees are using external apps, that signals strong demand for such digital tools. Identifying specific use cases can help businesses choose generative AI solutions that integrate seamlessly into existing workflows.

Using corporate generative AI instead of public tools like ChatGPT offers better data management. It keeps company data secure within corporate boundaries, minimizing risks associated with external tools. While the AI provider handles the back end, the organization remains responsible for user data and application deployment.

Training employees on available generative AI tools is essential. Raising awareness through educational programs helps familiarize staff with these tools, encouraging their use over external platforms.

Generative AI is here to stay, and its prevalence will likely increase. Armstrong-Smith mentioned a focus on “agentic generative AI,” which involves using AI agents to streamline business processes, such as scheduling or managing invoices.

Despite its potential to reduce mundane tasks, data protection remains a crucial concern. Organizations must educate employees on the risks associated with external tools and ensure they have the right resources in their networks.

As AI becomes more mainstream, there’s increased choice in the technologies available, and the cost will likely decrease, leading to broader adoption across industries.