Luc Van Nerom

Deputy Managing Director PSI Metals


Managing Director PSI Metals Belgium

Biography:

Luc Van Nerom studied mathematics and computer science and started as a researcher at the AI Lab of the Vrije Universiteit Brussel. In 1986 he created a spin-off company, ‘Artificial Intelligence Systems’, bringing AI and optimization technology to the metals industry. After a merger, the products of this company were embedded in the production management software of PSI Metals. Today, Luc focuses on product innovation management and industrial intelligence at PSI and PSI Metals.

Abstract:

Breaking the black-box nature of predictive models, by Luc Van Nerom, Deputy Managing Director PSI Metals and Managing Director PSI Metals Belgium

Machine learning is rapidly being adopted across several industrial sectors, including the metals industry. Adopting machine learning techniques in a real-life industrial environment requires insight into the results produced by predictive models. The maturity of a model should be confirmed by thorough inspection using machine learning interpretability techniques. These techniques facilitate a deep understanding of predictive model behaviour under a variety of circumstances and, combined with domain knowledge, provide an efficient tool for root cause analysis. In this presentation we aim to show how to apply these techniques to the problem of defect detection in the metals industry. In our use case, process data gathered during coil production was used to create machine learning models whose predictive target was the occurrence of a defect. These models were subsequently analysed with machine learning interpretability techniques. We will show how these techniques are applied and, in particular, how to extract business value from specific aspects of machine learning interpretations. Special emphasis will be placed on the prediction breakdown, which decomposes a single prediction into the contributions of all involved predictors. This serves as a measure of their importance and provides precise information about the impact of a given data/process feature in the context of a particular prediction. All of these techniques are applied to understand the decisions proposed by the models, building confidence and trust that the predictions are fair and based on clear premises.
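
To make the idea of a prediction breakdown concrete, the sketch below decomposes a single defect prediction into per-feature contributions. It is a minimal illustration only: the classifier, the coil features (rolling_temperature, coiling_speed, strip_thickness, cooling_rate) and the data are synthetic assumptions, not the models or process data discussed in the presentation, and the sequential break-down shown is just one common way to compute such a decomposition.

```python
# Minimal sketch of a per-prediction breakdown for a defect classifier.
# All features, values and the model below are hypothetical illustrations.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-ins for coil process features (not real process data).
X = pd.DataFrame({
    "rolling_temperature": rng.normal(900, 25, n),
    "coiling_speed": rng.normal(8.0, 1.2, n),
    "strip_thickness": rng.normal(2.5, 0.3, n),
    "cooling_rate": rng.normal(30, 5, n),
})
# Synthetic defect label: more likely at high temperature and low cooling rate.
logit = 0.04 * (X["rolling_temperature"] - 900) - 0.1 * (X["cooling_rate"] - 30)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def prediction_breakdown(model, background, coil):
    """Decompose one predicted defect probability into per-feature contributions.

    Features are fixed one by one (in column order) to the values of the coil
    being explained; the change in the mean predicted probability over the
    background data is attributed to the feature that was just fixed.
    """
    current = background.copy()
    baseline = model.predict_proba(current)[:, 1].mean()
    contributions, previous = {}, baseline
    for feature in coil.columns:
        current[feature] = coil[feature].iloc[0]
        mean_pred = model.predict_proba(current)[:, 1].mean()
        contributions[feature] = mean_pred - previous
        previous = mean_pred
    return baseline, pd.Series(contributions)

coil = X.iloc[[0]]  # the single coil whose prediction we explain
baseline, contrib = prediction_breakdown(model, X, coil)

print(f"Baseline (mean) defect probability: {baseline:.3f}")
print("Per-feature contributions, largest impact first:")
print(contrib.sort_values(key=np.abs, ascending=False))
print(f"Baseline + contributions: {baseline + contrib.sum():.3f}")
print(f"Model prediction:         {model.predict_proba(coil)[0, 1]:.3f}")
```

Because every feature ends up fixed to the values of the coil being explained, the mean prediction moves from the population baseline to the model's prediction for that coil, so the contributions sum exactly to the difference between the two. With correlated process features the attribution can depend on the order in which features are fixed, which is why production-grade interpretability tooling typically averages over orderings or uses Shapley-value-based methods.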