Inferential vs Descriptive Statistics, Discriminative vs Generative Models, Predictive Modeling, and Topic Models

So this is a very interesting question from one of our students. She is confused by some of the terms she comes across when she reads data science and machine learning blogs or watches videos, and this is a very common confusion for somebody who is new to the field, because so many terms are thrown around with different meanings and interpretations: some come from statistics, some come from machine learning, and some come from business use cases. So I will try to break each of these terms down and explain them in simple terms, so that it becomes easier to understand the differences between them.

The first two terms are from statistics: inferential statistics and descriptive statistics. In our course we actually learned most of the important techniques in both, without explicitly using these textbook-ish terms. Let's take descriptive statistics first: what does the word "descriptive" mean here? To put it simply, descriptive statistics, as the word says, describe the key properties of the data. If somebody gives you a sample of data, descriptive statistics help us compute its key properties. The mean tells me the average value, the median tells me the central value of the whole data, and the standard deviation (or the variance, for that matter) tells me the spread of the data, as we discussed in lots of detail in the course. Similarly, kurtosis and skewness, the minimum value, the maximum value, and the percentiles (the 10th, 90th or 95th percentile, for example) all tell me about properties of the data. All of these are descriptive statistics because, in a nutshell, they describe key aspects or properties of your data.
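
To make these descriptive statistics concrete, here is a minimal Python sketch. The NumPy/SciPy calls and the made-up height sample are my own choices for illustration, not something from the original discussion.

```python
import numpy as np
from scipy import stats

# A made-up sample of 1000 observations, e.g., heights in cm.
rng = np.random.default_rng(0)
sample = rng.normal(loc=170, scale=8, size=1000)

# Central tendency and spread.
print("mean:   ", np.mean(sample))
print("median: ", np.median(sample))
print("std dev:", np.std(sample))              # spread around the mean
print("min/max:", np.min(sample), np.max(sample))

# Percentiles (10th, 90th, 95th).
print("percentiles:", np.percentile(sample, [10, 90, 95]))

# Shape of the distribution.
print("skewness:", stats.skew(sample))
print("kurtosis:", stats.kurtosis(sample))
```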

The second term is inferential statistics. What does the word "inference" mean in English? Inference is the process of reaching a conclusion about some physical process. One of the most important parts of inferential statistics is hypothesis testing, which again we discussed in lots of detail in the course. For example, suppose I want to know whether students from two different classes have different heights. If I want to infer that fact statistically, I use a null hypothesis, an alternative hypothesis, permutation testing, resampling: all the techniques we discussed under hypothesis testing. A lot of statistical tests live here as well; the KS test, which we have seen, can be used to infer whether data follows a Gaussian distribution or not. All of these are inferential statistics because I am not trying to describe the data; I am trying to infer a property of the data, or infer a fact, using statistical means.

In a nutshell, descriptive statistics covers your percentiles, quantiles, median, mean, standard deviation, kurtosis and skewness, while inferential statistics fundamentally covers your tests: hypothesis testing, the t-test, the z-test, the KS test, the Anderson-Darling test, and so on. These are two terms from statistics, and it is important to understand them. We did not use these terms explicitly in the course because they are typically textbook-ish terms, but remember: descriptive means describing something, while inferential means reaching a conclusion, or inferring something, at the end of the analysis.
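
Here is a small, hedged sketch of the two-class height example and the normality tests mentioned above; the synthetic samples and the choice of SciPy's `ttest_ind`, `kstest` and `anderson` are my own, not prescribed by the discussion.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical height samples (in cm) from two different classes.
class_a = rng.normal(loc=168, scale=7, size=50)
class_b = rng.normal(loc=172, scale=7, size=50)

# Two-sample t-test: null hypothesis = both classes have the same mean height.
t_stat, p_value = stats.ttest_ind(class_a, class_b)
print("t-test p-value:", p_value)   # a small p-value lets us reject the null hypothesis

# KS test: does class_a look Gaussian? (Parameters are estimated from the sample,
# which is a simplification; a stricter analysis would account for that.)
ks_stat, ks_p = stats.kstest(class_a, "norm", args=(class_a.mean(), class_a.std()))
print("KS test p-value:", ks_p)

# Anderson-Darling test for normality.
print(stats.anderson(class_a, dist="norm"))
```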

The next important concept is discriminative models versus generative models. This is a very important distinction in machine learning (not in statistics). What does the word "discriminative" mean? It means to discriminate between two different sets of points. Discriminative models, to understand them very simply, are models that build a boundary, that discriminate, between your positive-labeled points and your negative-labeled points, as we discussed in the course. For example, logistic regression finds a hyperplane that separates positive points from negative points. Similarly, an SVM discriminates between your positive points and negative points by building a hard boundary, or a soft boundary in the soft-SVM setting. Again, all these models we have discussed in lots of detail in the course itself. That is what discriminative models are.
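
A minimal sketch of these two discriminative models on a synthetic two-class dataset; the scikit-learn estimators and the toy data are assumptions made for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# A toy binary classification problem: 200 points with positive/negative labels.
X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)

# Logistic regression learns a separating hyperplane w.x + b = 0.
logreg = LogisticRegression().fit(X, y)
print("logistic regression w, b:", logreg.coef_, logreg.intercept_)

# A linear SVM also learns a separating hyperplane, but by maximizing the margin
# (a soft margin by default, controlled by C).
svm = SVC(kernel="linear", C=1.0).fit(X, y)
print("linear SVM w, b:", svm.coef_, svm.intercept_)

# Both models discriminate: they assign a class label to a new point.
print(logreg.predict([[0.5, -1.0]]), svm.predict([[0.5, -1.0]]))
```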

Generative models, on the other hand, are very interesting and also very powerful. Generative models try to model the distribution of the data. Suppose you have a bunch of features, represented by capital X, and an output variable Y; your total data is both X and Y together. If you can somehow build a probability distribution for this data set, that is, a distribution over (X, Y), it is often called the joint distribution. And if you have a probability distribution for your data set, you can generate more points from it. Imagine I know the probability distribution of a single scalar variable, say a Gaussian with mean 0 and variance 1: I can generate more points from that distribution. That is why it is called a generative process; if I know the distribution and its parameters, I can generate data from it. Similarly, instead of one scalar variable being Gaussian distributed, imagine I know the distribution of my whole data set D, features and output variable included. If I know the whole distribution, I can generate more data from it.

So if you are building a probabilistic model where you estimate the distribution of the data, such a model is called a generative model, and it is called generative because once I know the distribution I can generate more points from it. Take naive Bayes, a very popular generative model that we discussed in the course. While naive Bayes can be used to classify points as positive or negative (that is one of the tasks we do with it), it achieves that by modeling the distribution of the data, so naive Bayes is one of the most popular and simple generative models we come across. Similarly, there are Gaussian mixture models, a probabilistic, generative method related to k-means clustering, which we discussed in lots of detail in the course; textbooks typically cover Gaussian mixture models in lots of detail as a generative model. To put it simply: discriminative models build boundaries between positive-labeled and negative-labeled data, while generative models model the distribution of the data so that you can generate more points from it if you want. I hope this is clear.
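
To illustrate the "model the distribution, then generate from it" idea, here is a minimal sketch using a Gaussian mixture model and naive Bayes from scikit-learn; the toy blob data and the particular estimators are my own choices, not part of the original discussion.

```python
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
from sklearn.naive_bayes import GaussianNB

# Toy data drawn from two clusters, standing in for "the distribution of the data".
X, y = make_blobs(n_samples=300, centers=2, random_state=0)

# A Gaussian mixture model estimates the data distribution as a mixture of Gaussians.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Because the distribution is modeled explicitly, we can generate new points from it.
X_new, component_labels = gmm.sample(5)
print("generated points:\n", X_new)

# Naive Bayes classifies by modeling per-class feature distributions P(x|y) and the
# prior P(y), i.e., it is a simple generative model used for classification.
nb = GaussianNB().fit(X, y)
print("naive Bayes prediction for a new point:", nb.predict([[0.0, 2.0]]))
```

The key line is `gmm.sample`: because the model is an explicit distribution over the data, drawing new points is a one-line operation, whereas a purely discriminative model has no analogous operation.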

The next term, which is often used in business blogs, is predictive modeling; it is basically a business term, and a very, very broad one. Predictive modeling simply means building a model that can predict future outcomes, and that could mean your classification techniques, your regression techniques, or your time series analysis techniques. For example, if I build a classification model to detect, let's say, cancerous cells, then when a new patient arrives in the future with some tests conducted, I can determine with reasonable certainty whether the patient has cancer or not, so it becomes a predictive model. In business language, predictive modeling boils down to building models that can predict outcomes, and classification, regression and time series analysis all fit into that framework.

The next term the student asked about is the topic model. Topic modeling is a very interesting subtopic of natural language processing. A topic model basically asks: given a paragraph of text, a page of text, or an article, can I find out which topics this text belongs to? For example, given a news article, I may want to determine the probability that this text is about politics, the probability that it is about sports, the probability that it is about science and technology, and the probability that it is about business. Business, science and technology, politics and sports are all topics, and given an article you can map it to the topics it talks about: either map it to the one major topic it discusses, or break it up and say, for instance, that 90% of this article is about politics and 10% is about business. So topic modeling effectively means being able to assign the topics that are related to a given article, all within the natural language or text processing paradigm.
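
The discussion above does not name a specific topic-modeling algorithm; Latent Dirichlet Allocation (LDA) is one common choice. Here is a minimal sketch with scikit-learn on a few made-up documents.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# A few made-up documents, standing in for news articles.
docs = [
    "the minister announced new election campaign policies",
    "the team won the championship match last night",
    "the company reported strong quarterly profits and revenue growth",
    "parliament debated the new tax bill and government spending",
    "the striker scored twice as fans celebrated the tournament victory",
]

# Bag-of-words counts, then LDA with a chosen number of topics.
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Per-document topic proportions, i.e., the "90% politics, 10% business" breakdown.
doc_topics = lda.transform(counts)
print(doc_topics.round(2))
```

Each row of `doc_topics` is one article's topic mixture, which is exactly the kind of breakdown described above.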

So I hope these definitions are now clear to our students with these examples. All of us get confused by these terms, because we may not use them every day. The way I remember them is to take the key word in each term. Generative: I should be able to generate data, and to generate data I need to know its distribution. Discriminative: to discriminate means to separate two things. Inferential: I want to infer some property of the data, through null hypotheses and hypothesis testing. Descriptive: it describes the data. Topic modeling: I want to find the topics in the data. And predictive modeling is a business term for building models that can make predictions about the future. Simple definitions, nothing fancy here.