Explain, explain, explain...: Interpretability of black-box machine learning models - how to build trust in AI

14:00 - 14:20, 9th of May (Thursday) 2019 / DevTrends

Modern machine learning and deep learning models, while achieving very high accuracy, are complex enough that even experts cannot interpret their underlying decision process. As understanding why a model makes certain predictions is becoming a business (and legal, e.g. GDPR) necessity, various methods such as LIME and SHAP have been proposed to help interpret black-box models. The talk will present methods for making black-box models interpretable without sacrificing accuracy. In addition: a few words about disentanglement and interpretable hidden representations.
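To give a flavour of the idea behind methods like LIME, here is a minimal, self-contained sketch (not the actual LIME library): we probe a black-box function with random perturbations around one input, weight the samples by proximity, and fit a local weighted linear surrogate whose coefficients act as local feature importances. The `black_box` function and all parameter values are illustrative assumptions, not part of the talk.

```python
import math
import random

def black_box(x1, x2):
    # Stand-in for an opaque model: a nonlinear scoring function.
    return math.tanh(2.0 * x1) + 0.1 * x2 * x2

def lime_style_explanation(f, x0, n_samples=500, width=0.5, seed=0):
    """Fit a locally weighted linear surrogate of f around x0.

    Returns (intercept, coefficients); the coefficients approximate
    the local importance of each feature near x0.
    """
    rng = random.Random(seed)
    X, y, w = [], [], []
    for _ in range(n_samples):
        # Perturb the instance and query the black box.
        z = [xi + rng.gauss(0.0, width) for xi in x0]
        dist2 = sum((a - b) ** 2 for a, b in zip(z, x0))
        X.append([1.0] + z)                     # bias term + features
        y.append(f(*z))
        w.append(math.exp(-dist2 / (2 * width * width)))  # proximity kernel
    # Solve weighted least squares: (X^T W X) beta = X^T W y.
    k = len(X[0])
    A = [[sum(wi * xi[r] * xi[c] for wi, xi in zip(w, X)) for c in range(k)]
         for r in range(k)]
    b = [sum(wi * xi[r] * yi for wi, xi, yi in zip(w, X, y)) for r in range(k)]
    for i in range(k):                           # Gaussian elimination with pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            m = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= m * A[i][c]
            b[r] -= m * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][c] * beta[c] for c in range(i + 1, k))) / A[i][i]
    return beta[0], beta[1:]

intercept, coefs = lime_style_explanation(black_box, [0.0, 1.0])
# Near (0, 1) the first feature dominates the local behaviour, so we expect
# coefs[0] to be clearly larger than coefs[1].
```

Real tooling (the `lime` and `shap` packages) adds interpretable feature representations, regularised surrogates, and principled sample weighting on top of this basic recipe.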

TOPICS:
AI DeepTech DevTrends ML/DL Tech

Adam Kaczmarek

NBC IT Outsourcing