
Tuesday, May 8, 2018

In probability and statistics, a generative model is a model for generating all values of a phenomenon: both those that can be observed in the world and "target" variables that can only be computed from the observed values. By contrast, discriminative models provide a model only for the target variable(s), generating them by analyzing the observed variables. In simple terms, discriminative models infer outputs from inputs, while generative models generate both inputs and outputs, typically given some hidden parameters.

Generative models are used in machine learning for either modeling data directly (i.e., modeling observations drawn from a probability density function), or as an intermediate step to forming a conditional probability density function. Generative models are typically probabilistic, specifying a joint probability distribution over observation and target (label) values. A conditional distribution can be formed from a generative model through Bayes' rule.
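
As a rough illustration of this last point, here is a minimal Python sketch, not taken from any particular source, of forming a conditional distribution from a generative model via Bayes' rule; the priors and class-conditional probabilities are made-up numbers.

    # Minimal sketch (made-up numbers): form p(y|x) from a generative model
    # specified by a prior p(y) and class-conditional probabilities p(x|y),
    # using Bayes' rule: p(y|x) = p(x|y) * p(y) / p(x).

    priors = {0: 0.6, 1: 0.4}            # p(y), assumed known
    likelihood = {                       # p(x|y) for a feature x in {1, 2}
        (1, 0): 0.2, (2, 0): 0.8,
        (1, 1): 0.7, (2, 1): 0.3,
    }

    def posterior(x):
        """Return p(y|x) for every label y."""
        joint = {y: likelihood[(x, y)] * priors[y] for y in priors}  # p(x, y)
        evidence = sum(joint.values())                               # p(x)
        return {y: joint[y] / evidence for y in joint}

    print(posterior(1))   # {0: 0.3, 1: 0.7}, up to floating-point rounding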

Shannon (1948) gives an example in which a table of frequencies of English word pairs is used to generate a sentence beginning "representing and speedily is an good". This is not proper English, but it approximates English more and more closely as the table moves from word pairs to word triplets and so on.
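
The idea behind Shannon's example can be sketched in a few lines of Python. The tiny corpus below is invented for illustration and is not Shannon's original frequency table.

    import random
    from collections import defaultdict

    # Minimal bigram sketch of Shannon's word-pair idea (toy corpus, not his data):
    # record, for each word, every word that follows it, then sample a sentence.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    bigrams = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev].append(nxt)            # duplicates encode frequency

    word = random.choice(corpus)             # arbitrary starting word
    sentence = [word]
    for _ in range(8):
        if word not in bigrams:              # no recorded successor: stop
            break
        word = random.choice(bigrams[word])  # sample the next word by frequency
        sentence.append(word)

    print(" ".join(sentence))

Moving from word pairs to word triplets would simply condition each choice on the two preceding words instead of one.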

Although discriminative models do not need to model the distribution of the observed variables, they cannot generally express complex relationships between the observed and target variables, and they do not necessarily perform better than generative models at classification and regression tasks. The two classes are often seen as complementary, or as different views of the same procedure.

Types

Generative models

Types of generative models are:

  • Gaussian mixture model (and other types of mixture model)
  • Hidden Markov model
  • Probabilistic context-free grammar
  • Naive Bayes
  • Averaged one-dependence estimators
  • Latent Dirichlet allocation
  • Restricted Boltzmann machine
  • Generative adversarial networks
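
As a concrete illustration of the first item in the list above, the following sketch fits a two-component Gaussian mixture model to synthetic data and then samples new points from the fitted model; it assumes NumPy and scikit-learn are available.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Minimal sketch (synthetic data): fit a generative model of p(x)
    # as a mixture of two Gaussians, then draw new samples from it.
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(-2.0, 0.5, 200),
                           rng.normal(3.0, 1.0, 200)]).reshape(-1, 1)

    gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
    print(gmm.means_.ravel())       # estimated component means, near -2 and 3

    new_points, _ = gmm.sample(5)   # generate new data from the learned p(x)
    print(new_points.ravel())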

If the observed data are truly sampled from the generative model, then fitting the parameters of the generative model to maximize the data likelihood is a common method. However, since most statistical models are only approximations to the true distribution, if the model's application is to infer about a subset of variables conditional on known values of others, then it can be argued that the approximation makes more assumptions than are necessary to solve the problem at hand. In such cases, it can be more accurate to model the conditional density functions directly using a discriminative model (see below), although application-specific details will ultimately dictate which approach is most suitable in any particular case.
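
For a simple generative classifier with Gaussian class-conditionals, this maximum-likelihood fitting amounts to little more than counting and averaging; a minimal sketch with invented toy data:

    import numpy as np

    # Minimal sketch (toy data): maximum-likelihood estimates of the class
    # prior p(y) and of a Gaussian class-conditional p(x|y) for each label.
    x = np.array([1.0, 1.2, 0.8, 3.1, 2.9, 3.3])
    y = np.array([0,   0,   0,   1,   1,   1  ])

    params = {}
    for label in np.unique(y):
        xs = x[y == label]
        params[int(label)] = {
            "prior": xs.size / x.size,   # MLE of p(y = label)
            "mean": xs.mean(),           # MLE of the class mean
            "var": xs.var(),             # MLE of the class variance
        }

    print(params)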

Discriminative models

  • Logistic regression
  • Support vector machines
  • Maximum-entropy Markov models
  • Conditional random fields
  • Neural networks
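
A discriminative model such as logistic regression learns the probability of the label given the input directly, without modeling how the inputs themselves are distributed. A minimal sketch, assuming scikit-learn is available and using the same kind of invented toy data as the earlier example:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Minimal sketch (toy data): a discriminative classifier that models
    # only the label given the input, not the distribution of the inputs.
    X = np.array([[1.0], [1.2], [0.8], [3.1], [2.9], [3.3]])
    y = np.array([0, 0, 0, 1, 1, 1])

    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[2.0]]))          # predicted label for a new input
    print(clf.predict_proba([[2.0]]))    # estimated probability of each label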

Machine learning



A generative algorithm models how the data was generated in order to categorize a signal. It asks: based on my generation assumptions, which category is most likely to have generated this signal? A discriminative algorithm does not care about how the data was generated; it simply categorizes a given signal.

Suppose the input data is x ∈ {1, 2} and the set of labels for x is y ∈ {0, 1}. A generative model learns the joint probability distribution p(x, y), while a discriminative model learns the conditional probability distribution p(y | x), the "probability of y given x".

Let's try to understand this with an example. Consider the following four data points: (x, y) = {(1, 0), (1, 0), (2, 0), (2, 1)}

For the above data, p(x, y) is the following:

            y = 0    y = 1
    x = 1    1/2      0
    x = 2    1/4      1/4

while p(y | x) is the following:

            y = 0    y = 1
    x = 1    1        0
    x = 2    1/2      1/2

So, discriminative algorithms try to learn p(y | x) directly from the data and then use it to classify new data. Generative algorithms, on the other hand, try to learn p(x, y), which can later be transformed into p(y | x) in order to classify the data. One advantage of generative algorithms is that p(x, y) can also be used to generate new data similar to the existing data; discriminative algorithms, however, generally give better performance on classification tasks.
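
The numbers in this example can be reproduced by counting, as in the following sketch, which estimates both distributions from the four data points above:

    from collections import Counter

    # Estimate p(x, y) and p(y|x) from the four data points by counting.
    data = [(1, 0), (1, 0), (2, 0), (2, 1)]

    joint = {pair: count / len(data) for pair, count in Counter(data).items()}
    print(joint)            # {(1, 0): 0.5, (2, 0): 0.25, (2, 1): 0.25}

    conditional = {}
    for (x, y), p_xy in joint.items():
        p_x = sum(p for (xi, _), p in joint.items() if xi == x)   # marginal p(x)
        conditional[(y, x)] = p_xy / p_x                          # p(y | x)
    print(conditional)      # {(0, 1): 1.0, (0, 2): 0.5, (1, 2): 0.5}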

See also



  • Discriminative model
  • Graphical model


Sources



  • Shannon, C.E. (1948) "A Mathematical Theory of Communication", Bell System Technical Journal, vol. 27, pp. 379–423, 623–656, July, October 1948
