Software makers are offering more transparent machine-learning tools, but there is a trade-off. Artificial intelligence (AI) software can learn from data or experience to make predictions, but this machine learning has its limits.
A computer programmer specifies the data from which the software should learn and writes a set of instructions, known as an algorithm, for how the software should do that, but does not dictate exactly what it should learn. This is what gives AI much of its power: the ability to discover connections in the data more complicated or nuanced than those a human would find.
But this complexity also means that the reason the software reaches any particular conclusion is often largely opaque, even to its own creators. For software makers hoping to sell AI systems, this lack of clarity can be bad for business. It’s hard for humans to trust a system they can’t understand—and without trust, organizations won’t “pony up big bucks for AI software.”
Regulation is also driving companies to ask for more explainable AI. In the U.S., insurance laws require that companies be able to explain why they denied someone coverage or charged them a higher premium than their neighbor. In Europe, the General Data Protection Regulation (GDPR), effective in May 2018, gives citizens of the EU a “right to human review” of any algorithmic decision affecting them, such as bank loan applications. [For further discussion on the GDPR and data privacy, see our January 2019 Regulatory Corner].
Software vendors and IT systems integrators have responded by touting their ability to give customers further insight into how their AI programs think, a quality known as "explainability," according to an August 2018 study from IBM.
However, there is a trade-off between the transparency of an AI algorithm's decision-making and its effectiveness: attempts to reduce complex decisions to a handful of explainable factors may yield a less nuanced, and therefore less accurate, model.