Hanan is a senior data science team leader at Playtika, where he uses machine learning to model user behavior and marketing. He is an alumnus of successful startups such as BioCatch.com and Gong.io, where he delivered proofs of concept and built the data science teams from scratch. He is also an alumnus of corporations such as Microsoft, where he was a senior data scientist. During his army service (IDF), Hanan was a signal processing and digital communication team leader. Hanan holds a PhD in computational neuroscience (Hebrew University), specializing in computational modeling of behavior and neural activity. He also holds a B.Sc. [cum laude] in Physics, a B.Sc. [summa cum laude] and an M.Sc. [cum laude] in Electrical Engineering (Tel Aviv University).

Workshop description:

You have built a fraud-detection system with terrific performance. Everyone in data science is happy. However, the human analyst who needs to approve the account closure is unsatisfied: she wants to know why your model is so sure that user X is a fraudster. This scenario repeats in many business and scientific settings, for example, the need to show a salesperson what he said that probably caused the deal to fail. In other words, the predictions of a model are not enough; it is also necessary to explain what caused the model to make the current decision. It is important not to confuse this with feature importance, whose aim is to find the overall important features across the entire dataset. Here we are interested in the features that most influenced the decision on the *current* sample.

We will have two teaching assistants (Gal and Nadav) to help the audience with technical problems, plus possibly an additional speaker (Yigal) – all CC.

1. Demo dataset and an example of the problem
2. Solution when the model is linear (e.g., weights * features)
3. Solution by naïve perturbation – see my patent: https://patents.google.com/patent/US20160379133A1/en
4. Solution using LIME – see https://github.com/marcotcr/lime

Hands-on will be 30% of the time.
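For item 2 of the outline, the per-sample explanation of a linear model is simply the element-wise product of the learned weights and the sample's feature values. A minimal sketch (the weights, feature names, and sample values below are made up for illustration):

```python
import numpy as np

# Hypothetical trained linear model: the contribution of feature i to this
# sample's score is weight_i * x_i.
weights = np.array([1.5, -2.0, 0.5])               # learned coefficients
feature_names = ["n_logins", "account_age", "avg_amount"]
x = np.array([3.0, 0.5, 4.0])                      # one (standardized) sample

contributions = weights * x                        # per-feature contribution
order = np.argsort(-np.abs(contributions))         # most influential first
for i in order:
    print(f"{feature_names[i]}: {contributions[i]:+.2f}")
```

Sorting by absolute contribution gives the analyst a ranked answer to "why was *this* user flagged?", which global feature importance cannot provide.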
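For item 3, one simple perturbation scheme (a generic sketch, not the exact method in the patent) is to replace each feature of the sample with a baseline value, such as the dataset mean, and record how much the prediction moves. The toy model and values here are illustrative:

```python
import numpy as np

def perturbation_importance(predict, x, baseline):
    """Score each feature by how much replacing it with a baseline value
    changes the model's prediction for this single sample."""
    base_pred = predict(x)
    deltas = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = baseline[i]            # knock out feature i
        deltas.append(base_pred - predict(x_pert))
    return np.array(deltas)

# Toy nonlinear black-box model, purely for illustration
predict = lambda z: float(z[0] * z[1] + np.tanh(z[2]))
x = np.array([2.0, 3.0, 0.1])
baseline = np.zeros(3)                     # e.g. dataset means
print(perturbation_importance(predict, x, baseline))
```

This works with any black-box model but ignores feature interactions: perturbing one feature at a time can miss effects that only appear when several features change together.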
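For item 4, the core idea behind LIME can be sketched in a few lines: sample perturbed points around the instance, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients are the local explanation. This is a bare-bones illustration of the idea only; the real library at https://github.com/marcotcr/lime adds sampling schemes, discretization, and feature selection:

```python
import numpy as np

rng = np.random.default_rng(0)

def lime_sketch(predict, x, n_samples=500, width=0.75):
    """Fit a locally weighted linear surrogate around x (the LIME idea)."""
    X = x + rng.normal(scale=0.5, size=(n_samples, len(x)))   # perturb x
    y = np.array([predict(z) for z in X])                     # query model
    d = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(d ** 2) / width ** 2)                        # proximity kernel
    Xb = np.hstack([X, np.ones((n_samples, 1))])              # add intercept
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(Xb * W, (y * np.sqrt(w)), rcond=None)
    return coef[:-1]                                          # local weights

predict = lambda z: z[0] ** 2 + 3 * z[1]     # toy black-box model
x = np.array([1.0, 2.0])
print(lime_sketch(predict, x))               # close to the local gradient [2, 3]
```

The surrogate's coefficients approximate the model's local behavior around the sample, which is exactly the per-sample explanation the analyst asked for.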