Thursday, November 21, 2024

The Business Necessity of Responsible AI

The emergence of generative artificial intelligence (AI) has sparked extensive debates regarding potential AI-related harms, with discussions covering a wide range of concerns from training data practices to military applications and the adequacy of ethics departments within AI companies. While proponents of stringent regulation, often termed “AI doomers,” vigorously advocate for measures to counteract perceived existential risks, they have also critiqued new legislation that prioritizes competition and consumer protection over broader ethical considerations.

In this context, major AI companies like Microsoft and Google are increasingly releasing annual transparency reports detailing their AI development and testing methods. These reports emphasize a shared responsibility framework for enterprise customers, an approach reminiscent of cloud security protocols, especially critical as autonomous “agentic” AI tools emerge.

As AI systems—both generative and traditional—are already implemented in organizations and client-facing applications, it’s necessary to shift AI governance responsibilities from data science teams, often lacking expertise in ethics or business risk, to Chief Information Officers (CIOs). CIOs can address AI ethics pragmatically, taking into account risk tolerance, regulatory demands, and potential operational changes. Accenture notes that only a small fraction of companies have successfully evaluated AI risks and adopted best practices at scale.

According to Gartner analyst Frank Buytendijk, the most pressing real-world concerns include transparency deficits, bias, accuracy challenges, and ambiguous purpose boundaries. Under laws like GDPR, data gathered for one purpose cannot be repurposed without consent, a pitfall companies can stumble into easily. For instance, an insurance company that uses social media photos to infer whether applicants smoke, contradicting what those applicants declared on their insurance forms, invites both ethical and legal alarm.

In light of forthcoming AI regulations, there are compelling reasons beyond mere compliance for organizations to align their AI initiatives with core values and business goals, argues Forrester principal analyst Brandon Purcell. He notes that companies that fail to do so may face significant repercussions, while those that succeed stand to gain substantial advantages: properly aligning AI objectives with real-world outcomes can improve business performance, leading to higher profitability and efficiency.

Salesforce’s Chief Ethical and Humane Use Officer, Paula Goldman, echoes these sentiments, stating that instilling trust in AI systems not only fulfills compliance requirements but also enhances functionality and productivity.

Start with Core Principles
Diya Wynn, responsible AI lead at AWS, posits that responsible AI is both a vital business necessity and a sound practice. She favors the term “responsible AI” over “AI ethics,” as it encapsulates a broader dialogue inclusive of security, privacy, and compliance considerations essential for addressing risks and unintended consequences.

Fortunately, many companies already have structures for AI governance in place due to their GDPR compliance efforts, although they may need to integrate ethical expertise alongside the technical skills of data science teams. Responsible AI encompasses quality, safety, fairness, and reliability, and Purcell suggests that organizations should establish a set of ethical principles rooted in accountability, empathy, integrity, and transparency that resonate with their corporate culture.

Establishing a clear ethical framework can prevent conflicts between business departments, such as when AI lending tools push sales by extending loans to riskier applicants. Organizations can then implement effective controls using tools meant for both business leaders and data scientists. Buytendijk warns that AI ethics is fundamentally a human discipline, transcending mere technological considerations.

When deploying AI systems, businesses should focus on technosocial metrics rather than just technical measures of machine learning accuracy. For instance, generative AI chatbots may expedite handling simple customer calls but could inadvertently lead to increased overall call times by diverting more complex inquiries to human agents, ultimately impacting customer loyalty.
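
To make that trade-off concrete, here is a minimal arithmetic sketch with made-up numbers, showing how per-call gains on simple enquiries can still push the overall average handle time up when the remaining complex calls take longer:

# Hypothetical illustration: a chatbot that deflects simple calls can still
# raise the overall average handle time if the complex calls that reach
# human agents become slower. All figures below are invented for illustration.

def avg_handle_time(call_mix):
    """call_mix: list of (share_of_calls, minutes_per_call)."""
    return sum(share * minutes for share, minutes in call_mix)

# Before the bot: agents take every call (70% simple at 4 min, 30% complex at 12 min).
before = avg_handle_time([(0.7, 4), (0.3, 12)])   # 6.4 minutes

# After the bot: simple calls resolve in 1 min, but complex calls now arrive
# via the bot with less context and take 20 min end to end.
after = avg_handle_time([(0.7, 1), (0.3, 20)])    # 6.7 minutes

print(f"average handle time before: {before:.1f} min, after: {after:.1f} min")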

Many organizations are also interested in tailoring metrics that reflect user experiences, such as measuring the “friendliness” of AI interactions or assessing apology scores, as mentioned by Mehrnoosh Sameki, responsible AI tools lead for Azure. Emerging AI governance tools from Microsoft, Google, Salesforce, and AWS address various phases of a responsible AI process, from model selection to monitoring production systems.
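
As a rough illustration of such a metric, the sketch below computes a hypothetical "apology score" from chatbot transcripts; the phrase list and scoring are illustrative assumptions, not any vendor's implementation:

import re

# Hypothetical "apology score": the share of assistant turns containing an
# apologetic phrase. A rising score can flag conversations that are going badly.
APOLOGY_PATTERN = re.compile(r"\b(sorry|apolog(y|ies|ize|ise)|my mistake)\b", re.IGNORECASE)

def apology_score(assistant_turns):
    if not assistant_turns:
        return 0.0
    apologetic = sum(1 for turn in assistant_turns if APOLOGY_PATTERN.search(turn))
    return apologetic / len(assistant_turns)

turns = [
    "Here is your order status.",
    "Sorry, I couldn't find that account.",
    "I apologize for the confusion; let me try again.",
]
print(f"apology score: {apology_score(turns):.2f}")  # 0.67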

Establishing Safeguards for Generative AI
Generative AI models present unique challenges requiring comprehensive mitigation strategies beyond the traditional issues of fairness and transparency. Input guardrails can help keep AI systems focused, enhancing the accuracy of responses and maintaining business reputation while reducing costs associated with lengthy customer interactions.
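
A minimal sketch of an input guardrail, assuming a simple keyword-based topic check; production guardrails typically use trained classifiers rather than keyword lists, and the topic list and function names here are invented for illustration:

# Reject or redirect prompts that fall outside the assistant's intended scope
# before they ever reach the model.

ALLOWED_TOPICS = {"billing", "shipping", "returns", "product"}

def classify_topic(prompt: str) -> str:
    # Stand-in for a topic classifier; a real system would call a trained model.
    text = prompt.lower()
    for topic in ALLOWED_TOPICS:
        if topic in text:
            return topic
    return "out_of_scope"

def guarded_prompt(prompt: str):
    topic = classify_topic(prompt)
    if topic == "out_of_scope":
        return None, "I can help with billing, shipping, returns, and product questions."
    return prompt, None  # safe to forward to the model

print(guarded_prompt("Can you write my history essay?"))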

Escalating compliance needs make it essential that personally identifiable information (PII) never reaches AI models, but output guardrails, which prevent harmful or inaccurate responses, are equally crucial. Azure AI Content Safety incorporates features to filter risky content, including detecting attempts to circumvent safety measures and managing the accuracy of generated content.
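
A minimal PII-scrubbing sketch applied before a prompt reaches any model might look like the following; the regular expressions are deliberately incomplete stand-ins for the managed detection services real deployments rely on:

import re

# Illustrative, deliberately incomplete patterns for common PII types.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b(?:\+?\d[\s-]?){7,14}\d\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    # Replace each detected span with a labeled placeholder before the text
    # is ever sent to a model or logged.
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(scrub_pii("Reach me at jane.doe@example.com or 555-123-4567."))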

Hallucinations, the well-known tendency of generative AI models to produce plausible but fabricated information, aren’t the only concern; deliberate omissions in responses must also be addressed. Rather than solely filtering out hallucinations, it is more effective to ground the model in pertinent data.

Training data for generative AI raises legal and ethical issues, particularly around how training datasets are curated. A Salesforce survey revealed that 54% of employees doubt the trustworthiness of the data powering the AI systems they use, making it imperative for organizations to use their own data to improve generative AI accuracy. Techniques such as Retrieval Augmented Generation (RAG) achieve this by augmenting prompts with relevant information retrieved from proprietary sources.
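
The sketch below shows the basic RAG pattern under simplifying assumptions: retrieval is naive keyword overlap rather than vector search, and call_llm is a placeholder for whichever model API an organization actually uses:

# Ground the model in proprietary documents by retrieving the most relevant
# passages and prepending them to the prompt.

def score(query: str, passage: str) -> int:
    # Naive keyword overlap; real systems use vector embeddings.
    return len(set(query.lower().split()) & set(passage.lower().split()))

def build_grounded_prompt(query: str, passages: list, top_k: int = 2) -> str:
    ranked = sorted(passages, key=lambda p: score(query, p), reverse=True)[:top_k]
    context = "\n".join(f"- {p}" for p in ranked)
    return (
        "Answer using only the context below. If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "Refunds are issued within 14 days of a returned item being received.",
    "Premium support is available to enterprise customers on weekdays.",
    "Warranty claims require the original proof of purchase.",
]
prompt = build_grounded_prompt("How long do refunds take?", docs)
# response = call_llm(prompt)  # placeholder for the actual model call
print(prompt)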

Feedback collection is vital for refining AI systems, alongside the usual metrics of usage and expenses. For instance, Azure and Salesforce’s content safety services provide audit trails enabling organizations to analyze whether language filters are effective and to understand user frustrations, which can indicate the need for model improvements.

Developing a seamless feedback mechanism allows users to easily provide insights, ensuring that the AI-generated information is accurately presented and users can assess its reliability. Goldman emphasizes the importance of delineating when to trust AI responses and when human judgment is warranted.
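
One way to wire that up, sketched under the assumption of a simple append-only log; the storage format and field names are illustrative rather than any product's schema:

import json, time

# Log each AI response together with the user's verdict so governance teams
# have an audit trail to mine for filter gaps and recurring frustrations.

def record_feedback(log_path, prompt, response, user_verdict, reason=None):
    entry = {
        "timestamp": time.time(),
        "prompt": prompt,
        "response": response,
        "verdict": user_verdict,   # e.g. "helpful", "incorrect", "off-topic"
        "reason": reason,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_feedback("feedback.jsonl", "How long do refunds take?",
                "Refunds are issued within 14 days.", "helpful")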

Human oversight strategies may involve what Goldman refers to as “mindful friction,” ensuring that demographic attributes are not automatically selected in marketing segmentation, thus avoiding unintentional biases.
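
A small sketch of that idea, assuming an illustrative list of protected attributes and a hypothetical segmentation helper that refuses to use them without an explicit, reviewed confirmation:

# "Mindful friction": segmentation attributes tied to protected characteristics
# are never pre-selected and require a deliberate confirmation before use.

PROTECTED_ATTRIBUTES = {"age", "gender", "ethnicity", "religion", "disability"}

def build_segment(attributes, confirmed_protected=frozenset()):
    flagged = {a for a in attributes if a in PROTECTED_ATTRIBUTES}
    unconfirmed = flagged - set(confirmed_protected)
    if unconfirmed:
        raise ValueError(
            f"Protected attributes {sorted(unconfirmed)} require explicit review "
            "and confirmation before they can be used for targeting."
        )
    return {"criteria": sorted(attributes)}

try:
    build_segment({"region", "purchase_history", "age"})
except ValueError as e:
    print(e)  # the marketer must consciously confirm before "age" can be used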

Transform Training into an Advantage
The impact of AI systems on employees is a vital consideration in ethical AI deployment. Generative AI’s success in customer support does not stem from replacing human agents but from empowering them to handle intricate queries more effectively, supported by systems grounded in company data through RAG, as highlighted by Peter Guagenti, president of Tabnine.

The EU’s AI Act emphasizes fostering “AI literacy” within organizations, the recognition that, as with other technologies, trained users get the most out of AI tools. Guagenti asserts that those most familiar with the tools achieve the best outcomes, and that focusing on enhancing users’ capabilities fosters business success.

Involving employees in identifying the challenges they face provides insight for crafting effective AI solutions, Goldman notes. Achieving this necessitates collaboration between business and development teams. Implementing responsible AI governance requires a structured approach, and tools such as Azure’s impact assessment template can help CIOs connect business cases with risk considerations.

Employing a red team to evaluate models and test safeguard strategies permits thorough impact assessments, enabling CIOs to determine system readiness for deployment and ongoing monitoring for performance. Broader evaluations of AI-related risk exposure can also yield benefits by uncovering areas for improvement in PII management and access controls. Purcell highlights that the current moment presents an opportunity to address existing governance challenges, whether related to AI or beyond.