Scaling AI: 3 Reasons Why Explainability Matters
In highly regulated industries, explainable AI is increasingly essential for leaders to ensure trust in, and govern, their enterprise AI applications.
As artificial intelligence and machine learning-based systems become more ubiquitous in decision-making, should we expect to place the same confidence in their outcomes that we place in decisions made by people? When humans make decisions, we’re able to rationalize the outcomes through inquiry and conversation about how expert judgment, experience and use of available information led to the decision. Unfortunately, engaging in a similar conversation with a machine isn’t possible yet. To borrow the words of former Secretary of Defense Ash Carter, speaking at a 2019 SXSW panel about the post-hoc analysis of an AI-enabled decision, “'the machine did it' won’t fly.”
As human and machine collaboration evolves, establishing trust, transparency and accountability at the outset of decision support system and algorithm design is paramount. Without that foundation, people may hesitate to trust AI recommendations because they cannot see how the machine reached its conclusion. Fortunately, we’re beginning to see some light at the end of the “AI is a black box” tunnel as research on AI explainability is commercialized. These early-stage AI explainability solutions are essential for leaders to ensure trust in, and to govern and scale, their AI applications across the enterprise.
Defining Explainable AI
As AI becomes more embedded in our lives, a crucial part of the adoption journey is our ability to understand how AI makes decisions, why it reaches certain conclusions, and how to be confident in the results. That’s explainability in a nutshell: a new “-ility” added to the quality attributes used to evaluate a system’s performance. Current explainable AI solutions attempt to show how AI models make decisions in terms humans understand, translating the process by which AI turns data into insight and value.
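To make that concrete, the short sketch below shows one common flavor of post-hoc explanation: ranking which input features most influence a trained model’s predictions. It uses scikit-learn’s permutation importance on a sample dataset; the library, dataset and model choices are illustrative assumptions, not a description of any particular commercial tool.

```python
# Illustrative sketch: ranking feature influence with permutation importance.
# Dataset, model and parameters are assumptions chosen for brevity.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True)[:5]:
    print(f"{name}: {score:.3f}")
```

The output is a ranked list a non-specialist can read: which inputs the model actually relied on, and by how much.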
Explainability is not a new phenomenon. IT systems developers have grappled with it for decades, finding ways to explain complex topics so that non-technical stakeholders can understand them. Likewise, AI developers grapple with how to describe complex machine learning and AI models in a way that resonates with a wide range of audiences. In fact, there is no consensus on how to define explainable AI; most people agree, however, that there should be a way to examine and trace the provenance of an AI decision.
As AI permeates everyday life, explainability is becoming increasingly vital. Here are three reasons why explainability matters:
- People need to understand not just outcomes, but details behind the model (data provenance, performance metrics, etc.) to have confidence in the outputs.
- Once people trust how AI makes decisions, they can implement the governance and accountability procedures needed to comply with the requirements of their industries.
- Rather than accepting “black box” AI, people should be empowered to make decisions about what level of explainability is acceptable for their AI applications.
Explainability emerged as one response to the problem of “black box” AI models, whose decisions are difficult to trace or justify. In highly regulated industries, stakeholders are legally required to show their work and prove how an AI system reached a decision. Explainability becomes a proxy for model interpretability and auditability for non-data scientists.
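Where auditability is the priority, one option, sketched below as an assumption rather than a prescription, is to favor an intrinsically interpretable model whose learned weights can be read and recorded directly, instead of explaining a black box after the fact.

```python
# Illustrative sketch: an intrinsically interpretable model whose learned
# weights can be inspected and logged for audit purposes.
# Dataset and model choice are assumptions for illustration only.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

# Each coefficient states how strongly a (standardized) feature pushes the
# prediction toward the positive class, a record that can be reviewed later.
coefs = clf.named_steps["logisticregression"].coef_[0]
for name, weight in sorted(zip(X.columns, coefs),
                           key=lambda pair: abs(pair[1]), reverse=True)[:5]:
    print(f"{name}: {weight:+.2f}")
```

The tradeoff, of course, is that simpler models may give up some accuracy in exchange for that transparency.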
Getting AI Models Ready for Prime Time
We live in an era in which data availability drives demand for trusted facts and informed decision-making. As we scale solutions and comply with regulations like the European Union’s General Data Protection Regulation, the AI community will be pressed to make AI systems easy to use, explain and defend. Stakeholders need to understand how a model generates its results, and when to question those results, before they can confidently deploy it at scale.
Not all models can be explained, nor should they be. Different AI applications carry different requirements for explainability, shaped by everything from privacy concerns to risk tolerance. For example, we don’t require our GPS systems to "explain" driving directions, but we will likely hold systems used in medical diagnoses or treatment recommendations to a higher standard. In the absence of full explainability, software developers look to other methods to validate AI model performance.
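A minimal sketch of one such method appears below: checking a hard-to-explain model’s behavior empirically with cross-validation rather than explaining its internals. The dataset, model and metric are assumptions chosen for illustration.

```python
# Illustrative sketch: validating an opaque model empirically with
# cross-validation instead of relying on an explanation of its internals.
# Dataset, model and metric are assumptions for illustration only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier(random_state=0)

# Repeatedly hold out part of the data and check the model's accuracy on it;
# stable scores build confidence even when the model itself is hard to explain.
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```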
The Beginning Is a Great Place to Start
Explainable processes and outcomes are the foundation for crucial elements of trustworthy AI, such as transparency and governance. Data scientists and software engineers should build explainability features into their workflows from the beginning.
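One lightweight way to start, sketched below under assumed file names and fields, is to record basic provenance, such as the training data, parameters and validation metrics, alongside every trained model so its outputs can be traced back to their inputs.

```python
# Illustrative sketch: recording basic model provenance at training time so
# later questions about a prediction can be traced back to its inputs.
# The file name and recorded fields are assumptions for illustration.
import json
from datetime import datetime, timezone

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
params = {"n_estimators": 100, "random_state": 0}
model = RandomForestClassifier(**params).fit(X, y)

record = {
    "trained_at": datetime.now(timezone.utc).isoformat(),
    "dataset": "sklearn breast_cancer",
    "n_samples": int(X.shape[0]),
    "parameters": params,
    "cv_accuracy": float(cross_val_score(model, X, y, cv=5).mean()),
}
with open("model_provenance.json", "w") as f:
    json.dump(record, f, indent=2)
```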
From there, governance and accountability mechanisms can be established to ensure that your organization complies with the regulations for your industry. These measures also safeguard against some of the stickier issues in deploying and managing AI, such as amplifying bias in data or having a disproportionate impact on marginalized populations. Organizations must be empowered to strike the right balance among accuracy, transparency and risk in their AI, as there will always be tradeoffs.
It's all part of the process of developing reliable solutions that deliver real value.
Josh Elliot is head of operations at Modzy.