Building Trust In AI: The Case For Transparency
AI is rapidly transforming the world of business, becoming increasingly woven into the fabric of organizations and the daily lives of customers. However, the speed of this transformation carries risks, as organizations grapple with deploying AI responsibly and minimizing the potential for harm.
The Cornerstone of Responsible AI: Transparency
Transparency is a fundamental aspect of responsible AI. AI systems—including their algorithms and data sources—should be understandable so that we can comprehend how decisions are made and ensure they are fair, unbiased, and ethical. While many businesses are taking steps to ensure transparency, there have been cases where the use of AI has been worryingly opaque.
Examples of Transparent AI Done Well
Adobe Firefly: When Adobe released its Firefly generative AI toolset, it was open and transparent about the data used to train its models. Unlike the makers of many other generative AI tools, Adobe published information on the images used and reassured users that it either owned the rights to those images or that they were in the public domain. This transparency lets users trust that the tool hasn't infringed on copyrights.
Salesforce: Salesforce includes transparency as a key element of “accuracy” in its guidelines for developing trustworthy AI. In practice, this means flagging when the AI's answers are uncertain, citing sources, and highlighting areas users may want to double-check to avoid mistakes.
Microsoft’s Python SDK for Azure Machine Learning: The SDK includes a model-explainability option that is enabled by default, giving developers insight into how models arrive at their predictions and helping ensure decisions are fair and ethical.
Cognizant: Cognizant recommends creating centers of excellence to centralize AI oversight. This allows best practices around transparency to be adopted across an organization, ensuring AI is accountable and explainable.
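Explainability tooling of the kind described above typically reports how much each input feature contributed to a given prediction. As a minimal, library-agnostic sketch of that idea (the weights, feature names, and applicant data below are entirely illustrative, not from Azure's API or any real system), a linear model's score can be decomposed into per-feature contributions:

```python
# Illustrative linear credit-scoring model; weights and features are invented.
weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
bias = 0.1

def explain(features: dict) -> dict:
    """Return each feature's contribution to the model's overall score."""
    return {name: weights[name] * value for name, value in features.items()}

applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5}
contributions = explain(applicant)
score = bias + sum(contributions.values())

# Print contributions largest-magnitude first, so a reviewer can see
# which factors drove the decision.
for name, c in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name}: {c:+.2f}")
print(f"score: {score:+.2f}")
```

Even this toy breakdown shows the value of the approach: instead of a bare score, a customer or auditor sees that, for example, a high debt ratio pulled the decision down more than income pushed it up.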
Examples of Transparent AI Done Badly
OpenAI: OpenAI, creators of ChatGPT and DALL-E, has been accused of lacking transparency about the data used to train their models. This has led to lawsuits from artists and writers claiming their material was used without permission. This opacity can lead to a breakdown in trust between the AI service provider and its customers, and users might face legal action if copyright holders successfully argue that AI-generated material infringes on their IP rights.
Other Image Generators: Tools like Google’s Imagen and Midjourney have been criticized for biased outputs, such as disproportionately depicting professionals as white men or introducing historical inaccuracies. Without transparency into training data and model behavior, developers struggle to identify and fix these issues.
Banking and Insurance: In these industries, AI is used to assess risk and detect fraud. Without transparency, customers might be refused credit, have transactions blocked, or face criminal investigations without understanding why they were singled out.
Healthcare: Non-transparent AI systems in healthcare pose serious dangers. Biased data can lead to mistakes in tasks like spotting signs of cancer in medical imagery, resulting in worse patient outcomes. Without measures to ensure transparency, biased data is less likely to be identified and removed.
The Benefits of Transparent AI
Building Trust: Transparency is essential for building trust with customers. They want to know how and why decisions are being made with their data and inherently distrust “black box” machines that do not explain their processes.
Identifying and Eliminating Bias: Transparency makes it possible to identify and eliminate problems caused by biased data, because data that is open to scrutiny can be thoroughly audited and cleansed.
Regulatory Compliance: AI regulation is increasing. The EU AI Act, for example, requires AI systems in critical use cases to be transparent and explainable, and businesses relying on opaque AI could face significant fines.
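One concrete way transparency helps surface bias is auditing decision outcomes across groups, which is only possible when the decisions and the data behind them are open to inspection. A minimal sketch of such an audit, using an invented decision log (the groups and outcomes are hypothetical, for illustration only):

```python
from collections import defaultdict

# Hypothetical audit log of (group, approved) decisions.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1

# Approval rate per group, plus the gap between best and worst.
rates = {g: approvals[g] / totals[g] for g in totals}
for g, r in sorted(rates.items()):
    print(f"group {g}: approval rate {r:.0%}")

gap = max(rates.values()) - min(rates.values())
print(f"disparity between groups: {gap:.0%}")
```

A large disparity does not prove the system is unfair on its own, but it flags the model and its training data for the kind of review that opaque systems make impossible.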
Building transparency and accountability into AI systems is increasingly seen as critical for developing ethical and responsible AI. Although today’s advanced AI models are highly complex, overcoming the challenge of transparency is essential for AI to fulfill its potential for creating positive change and value.
About Alex Kouchev
🚀 Workspace Innovator: I review AI impact on Work | Connecting HR and Tech | 12+ Years Leading People & Product Initiatives | opinions expressed are my own
For over a decade, guided by the principle that "People Are People, Not Human Resources," I've immersed myself in the evolving landscape of work trends, HR technology, and organizational dynamics.
My mission is clear: to ensure that in the age of AI and Digital Transformation, we create workplaces where human intelligence and machine capabilities harmoniously co-exist. I focus on designing ethical, innovative solutions that not only drive organizational performance but also elevate the work experience for every associate.
With over 12 years of experience in International HR and Product Management, I’ve pioneered the development of human-centric solutions that deliver organizational efficiencies and boost employee satisfaction. My unique background empowers me to bridge the gap between functional and technical stakeholders, thus accelerating digital transformation across the enterprise.