Live on 13th June 2021 - Explainable AI [ LIME & SHAP ]

**Upcoming Live Session: Explaining AI**

Our next live session for all enrolled students will take place on Sunday, June 13th at 7 pm. This 2-hour session will be accessible via our desktop app, and, as usual, we will use the Slack miscellaneous channel for all discussion during the session.

**Topic: Explainable AI**

The topic of this week's live session is explainable AI, an important and fast-evolving field. It is becoming ever more crucial as the complexity of advanced deep learning models grows: we need to understand what is happening inside a model so that we can explain why it produces a specific result for a given input.

**LIME and SHAP: Model Agnostic Interpretability Systems**

We will cover techniques like LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations). These are known as model-agnostic interpretability techniques: they let you interpret a model, that is, understand why it gives a specific result for a given input. This is the core essence of explainable AI: being able to explain what is happening inside complex AI systems.
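To make the LIME recipe concrete before the session, here is a minimal from-scratch sketch of its core idea: perturb the instance, query the black box, weight the perturbed samples by proximity, and fit a weighted linear surrogate whose coefficients serve as the local explanation. The `black_box` function and all parameter choices (noise scale, kernel width, sample count) are illustrative assumptions, not the actual `lime` package API.

```python
import numpy as np

def black_box(X):
    # Stand-in for any opaque model: we can only query predictions.
    return 1.0 / (1.0 + np.exp(-(3 * X[:, 0] - 2 * X[:, 1])))

def lime_explain(f, x, n_samples=5000, width=0.75, seed=0):
    """LIME-style local explanation: fit a proximity-weighted
    linear surrogate to f in a neighbourhood of the instance x."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=1.0, size=(n_samples, x.size))
    # 2. Query the black box on the perturbed samples.
    y = f(Z)
    # 3. Weight samples by proximity to x (exponential kernel).
    d2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / width ** 2)
    # 4. Fit a weighted least-squares linear surrogate.
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add an intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef[:-1]  # local feature weights (intercept dropped)

x0 = np.array([0.5, -0.5])
weights = lime_explain(black_box, x0)
# The signs mirror the hidden coefficients: feature 0 pushes the
# prediction up, feature 1 pushes it down.
print(weights)
```

The surrogate never looks inside `black_box`; it only queries predictions, which is exactly what model agnosticism means in practice.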

**Model Agnosticism**

Model agnosticism means these techniques can work with any, or at least most, black-box models. Given a black-box model, which could be arbitrarily complex, all we know is that for an input x_i it returns an output ŷ_i; we know nothing about its inner workings beyond that. Even so, these techniques can extract valuable insights from such black-box systems through interpretability and explainability.

**Mathematics and Geometric Intuition**

In the live session, we will dive as deep as possible into the underlying mathematics of LIME and SHAP. We will also build geometric intuition so that you understand these techniques from first principles, breaking the concepts down in a way that is easy to grasp.
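As a small taste of the mathematics behind SHAP, here is a sketch that computes exact Shapley values by enumerating every coalition of features, using the classical weight |S|!(n−|S|−1)!/n! on each marginal contribution. The toy additive `value_fn` is a made-up example chosen because, for additive games, the Shapley values provably recover each feature's contribution; real SHAP implementations use clever approximations since this enumeration is exponential in the number of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values by enumerating all coalitions.
    value_fn(S) returns the model's value for the feature subset S.
    Feasible only for small n, since there are 2^n coalitions."""
    phi = []
    players = range(n_features)
    for i in players:
        others = [j for j in players if j != i]
        total = 0.0
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Classical Shapley weight for a coalition of size |S|.
                weight = (factorial(len(S)) *
                          factorial(n_features - len(S) - 1) /
                          factorial(n_features))
                total += weight * (value_fn(S + (i,)) - value_fn(S))
        phi.append(total)
    return phi

# Toy "model": the value of a coalition is the sum of the present
# features' contributions (3, -2, 1).
contrib = [3.0, -2.0, 1.0]
v = lambda S: sum(contrib[j] for j in S)
phi = shapley_values(v, 3)
print(phi)  # approximately [3.0, -2.0, 1.0], up to float rounding
```

Because the weights over all coalitions sum to 1 for each feature, an additive game returns each feature's own contribution, which is a useful sanity check before moving to real models.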

**Pros and Cons, Applications, and Code Snippets**

We will discuss the pros and cons of these techniques, where they can be applied, and where they might be slightly harder to apply. We'll also provide code snippets so that you learn how to apply these techniques to actual real-world problems. This means you'll have hands-on experience with implementing explainable AI in your own projects.

**Join Us on June 13th**

So, mark your calendars for Sunday, June 13th at 7 pm. Join us as we explore the exciting world of explainable AI and how LIME and SHAP can help you understand complex models better. We'll be using our desktop app and Slack miscellaneous channel for discussion during the live session.

**Conclusion**

Explainable AI is becoming more and more important, and it's essential for modern advanced deep learning systems. By understanding what's happening within these complex models, we can provide valuable insights into why they're making certain predictions or decisions. Join us on June 13th to learn more about LIME and SHAP, and how you can apply explainable AI in your own projects.
