TU/e tools reveal the ‘secret’ of the AI model

New software tools have been developed at Eindhoven University of Technology to explain the output of artificial intelligence (AI) models. These interactive visualization tools provide insight into the ‘thought processes’ of such models and help unlock the ‘secrets’ of machine learning systems. The research was carried out by PhD student Dennis Collaris.

Collaris recently received his PhD from TU/e for his research into new approaches to interpreting machine learning models from different perspectives: from local explanations of individual predictions to global explanations of the entire model. The researcher showed that predictions from machine learning models can indeed be explained.

‘Explainable AI’

“Regulators Eager for Explainable AI”

There is a great need for ‘explainable AI’, which is why such studies attract a lot of attention. Steven Maijoor, who as director of De Nederlandsche Bank is tasked with overseeing the Dutch financial sector, recently emphasized the need for a solid practical framework that explains how AI systems arrive at certain conclusions.

Supervisors desperately need such a framework, especially now that ‘open finance’ is on the way. Open finance goes beyond the data and services currently available at banks: it concerns not only payment data, but also data about investments, savings, loans and insurance, such as claims history. Such data can help financial institutions develop new products and assess risks more accurately. Before the step towards open finance can be taken, however, basic questions must first be answered, partly about how AI systems function.

Clarity

PhD candidate Collaris also shows that the parameters of explanation techniques must be chosen carefully and that their uncertainty must be clearly communicated. The Dutch benefits scandal shows that extreme caution is warranted: an incorrect prediction by an AI system can have far-reaching consequences.

The General Data Protection Regulation (GDPR) therefore states that it must be possible to explain how a model reaches a certain conclusion. According to Collaris, however, this is quite difficult for self-learning AI systems: fed with a mountain of data, the proverbial black box spits out an answer, and it is not easy to figure out how the model arrived at it.

The researcher explains that such a computer model does not follow a clearly defined, step-by-step plan. Gradually, the model discovers which characteristics of, for example, potential customers of an insurance company indicate a risk of fraud. Machine learning models can demonstrably provide useful recommendations; the problem is that they do not provide a justification. And that justification is needed, for example, when someone is denied insurance or a fraud investigation is started. During his work with insurer Achmea, Collaris noticed how much effort it takes data scientists to explain their predictive models.

Insight into the soul

“What features does a machine learning model use to make a prediction?”

To find out which strategy a computer model follows, a clear overview of the data it uses and how it processes that data is essential. To this end, Collaris developed two interactive software tools, ‘ExplainExplore’ and ‘StrategyAtlas’, which give users insight into the ‘soul’ of machine learning models.

ExplainExplore shows which features a machine learning model uses to make a prediction. The tool indicates how strongly each feature is taken into account when determining a prediction; Collaris calls this the ‘feature contribution’. ExplainExplore is an interactive explanation system for exploring explanations of individual predictions, and it provides context for each explanation by presenting similar predictions and by showing the impact of small perturbations of the input.
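By way of illustration only (this is a simplified sketch, not Collaris’s ExplainExplore implementation), the snippet below approximates such feature contributions for a single prediction of a hypothetical fraud model: each feature’s contribution is taken as the change in predicted probability when that feature is replaced by its dataset average, and a small input perturbation shows how sensitive the prediction is. The dataset, model and feature names are made up.

```python
# Illustrative sketch only: not Collaris's ExplainExplore implementation.
# A "feature contribution" is approximated here as the change in predicted
# fraud probability when one feature is replaced by its dataset average;
# a small input perturbation shows how sensitive the prediction is.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["claim_amount", "num_prior_claims", "customer_age"]  # hypothetical features

# Hypothetical training data: 1,000 insurance claims with a binary fraud label.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def feature_contributions(x):
    """Contribution per feature: change in P(fraud) when that feature is averaged out."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    contrib = {}
    for i, name in enumerate(feature_names):
        x_masked = x.copy()
        x_masked[i] = X[:, i].mean()  # replace the feature by its mean value
        contrib[name] = base - model.predict_proba(x_masked.reshape(1, -1))[0, 1]
    return base, contrib

x = X[0]
prob, contrib = feature_contributions(x)
print(f"P(fraud) = {prob:.2f}")
for name, value in sorted(contrib.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:>17}: {value:+.2f}")

# Effect of a small perturbation of the input, the kind of "what if" that
# ExplainExplore lets users explore interactively.
x_perturbed = x.copy()
x_perturbed[0] += 0.1  # nudge 'claim_amount' slightly
print(f"P(fraud) after perturbation = {model.predict_proba(x_perturbed.reshape(1, -1))[0, 1]:.2f}")
```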

Second, Collaris introduces StrategyAtlas. This visual analytics approach enables a global understanding of complex machine learning models by identifying and interpreting different model strategies.

These model strategies are identified in a projection-based strategy map visualization. Data scientists can determine the validity of these strategies by analyzing feature values and contributions using heat maps, density plots, and decision tree abstractions.
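As a rough illustration of this strategy-map idea (again a sketch under simplified assumptions, not the actual StrategyAtlas tool), the snippet below computes contribution vectors for many instances, projects them to two dimensions, clusters them into candidate strategies and summarizes each cluster, which is roughly the kind of information StrategyAtlas presents visually.

```python
# Illustrative sketch of the strategy-map idea, not the actual StrategyAtlas tool.
# Contribution vectors are computed for many instances, projected to 2D and
# clustered into candidate "model strategies", which are then summarized.
# The data, model and contribution method are simplified assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def contribution_matrix(X):
    """Rows = instances, columns = mean-replacement contribution per feature."""
    base = model.predict_proba(X)[:, 1]
    C = np.zeros_like(X)
    for i in range(X.shape[1]):
        X_masked = X.copy()
        X_masked[:, i] = X[:, i].mean()
        C[:, i] = base - model.predict_proba(X_masked)[:, 1]
    return C

C = contribution_matrix(X)

# The 2D "strategy map": project the contribution vectors and cluster them.
coords = PCA(n_components=2).fit_transform(C)
strategies = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(C)

# Summarize each candidate strategy: where it sits on the map, plus its average
# contributions and feature values (information StrategyAtlas shows with
# heat maps and density plots).
for s in range(3):
    mask = strategies == s
    print(f"strategy {s}: {mask.sum():4d} instances, "
          f"map centroid = {coords[mask].mean(axis=0).round(2)}, "
          f"mean contributions = {C[mask].mean(axis=0).round(2)}, "
          f"mean feature values = {X[mask].mean(axis=0).round(2)}")
```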
