Generative vs Discriminative Model

Tags
Computer Science
Deep Learning
Generative Models
Discriminative Models
Language Models
Machine Learning
Published
May 5, 2024

The Origin of the Naming of Generative and Discriminative

Generative and Discriminative models are named based on the type of learning they perform. Generative models learn the joint probability distribution of the input and output, and can generate new data points. Discriminative models, on the other hand, learn the conditional probability of the output given the input, and are mainly used for classification tasks.
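To make the contrast concrete, here is a minimal sketch, assuming scikit-learn and NumPy (neither is required by anything above): Gaussian Naive Bayes is a generative classifier that fits per-class densities, and can therefore also sample new feature vectors, while logistic regression fits P(y | x) directly and can only assign labels.

```python
# A minimal sketch (assuming scikit-learn and NumPy) contrasting a generative
# classifier, which models the joint P(x, y), with a discriminative
# classifier, which models P(y | x) directly.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB            # generative: fits P(x | y) and P(y)
from sklearn.linear_model import LogisticRegression   # discriminative: fits P(y | x)

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

generative = GaussianNB().fit(X, y)
discriminative = LogisticRegression(max_iter=1000).fit(X, y)

# Both can answer "which class?" ...
print(generative.predict(X[:3]), discriminative.predict(X[:3]))

# ... but only the generative model holds a density for each class, so it can
# also sample a new feature vector x for a chosen class label.
cls = 0
new_x = np.random.normal(loc=generative.theta_[cls],
                         scale=np.sqrt(generative.var_[cls]))
print("synthetic example for class 0:", new_x)
```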

Are Deep Learning Models Generative or Discriminative?

Deep Learning models such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), when trained as classifiers, are typically considered discriminative models: they are designed to separate the input data into specific classes. (Deep architectures can also be generative, as the GPT models discussed below show.)
On the other hand, generative models like Naive Bayes, Gaussian Mixture Models, and some types of Autoencoders are designed to learn the joint probability distribution of the input data and the labels. They capture the distribution of individual classes and use this knowledge to generate new data that is similar to the ones they've seen during training.
To summarize, while discriminative models focus on the boundary between classes, generative models pay attention to how the data in each class is distributed. Both types have their uses and are chosen based on the specific requirements of the task at hand.
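To illustrate the discriminative side, here is a minimal sketch, assuming PyTorch (not something the post relies on), of a tiny CNN classifier: it maps an image straight to class logits, i.e. it parameterizes P(y | x), and contains nothing that could sample new images.

```python
# A minimal sketch of a discriminative CNN classifier, assuming PyTorch.
# It maps an input image straight to class logits, i.e. it parameterizes
# P(y | x); nothing in it models how the images themselves are distributed.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # -> 32x1x1
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)                  # unnormalized log P(y | x)

model = TinyCNN()
logits = model(torch.randn(4, 1, 28, 28))          # a batch of 4 fake images
probs = logits.softmax(dim=-1)                     # conditional class probabilities
print(probs.shape)                                 # torch.Size([4, 10])
```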

Are Language Models Generative or Discriminative?

Language Models can be both generative and discriminative. For instance, a traditional n-gram model is a type of generative model as it generates text based on the probability distribution it learned from the training data. On the other hand, a model like BERT, which predicts missing words in a sentence, can be seen as a discriminative model as it discriminates among all possible words to find the most likely one.
GPT, or Generative Pre-trained Transformer, is a type of generative model. It is trained on a large corpus of text data and learns to predict the next word in a sentence. It then generates text by starting with an initial input (or "prompt") and repeatedly predicting the next word until it reaches a specified length or end token.
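The generation loop itself is simple. In the toy sketch below, a hand-written bigram table with invented probabilities stands in for the learned model (and it conditions only on the previous word, rather than on the full prefix as GPT does), but the "predict the next token, append it, repeat" procedure is the same.

```python
# A toy sketch of autoregressive generation. A hand-written bigram table
# stands in for a learned next-token distribution; the sampling loop is the
# same "predict next token, append, repeat" procedure described above.
import random

random.seed(0)

# P(next word | current word) -- invented numbers for illustration only.
bigram = {
    "<s>": {"the": 0.7, "a": 0.3},
    "the": {"cat": 0.5, "dog": 0.5},
    "a":   {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.8, "</s>": 0.2},
    "dog": {"sat": 0.6, "</s>": 0.4},
    "sat": {"</s>": 1.0},
}

def generate(prompt="<s>", max_len=10):
    tokens = [prompt]
    while len(tokens) < max_len and tokens[-1] != "</s>":
        dist = bigram[tokens[-1]]
        next_tok = random.choices(list(dist), weights=list(dist.values()))[0]
        tokens.append(next_tok)
    return " ".join(tokens)

print(generate())   # e.g. "<s> the dog sat </s>"
```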
The main architectural difference between BERT and GPT is that BERT uses the encoder part of the Transformer, while GPT uses the decoder part.
This distinction between the encoder and decoder parts of the Transformer architecture is a key reason why GPT is considered a generative model and BERT is considered a discriminative model.
GPT, as a generative model, uses the decoder part of the Transformer. It is trained to predict the next word in a sequence, which involves generating new output data based on the learned probability distribution.
On the other hand, BERT, as a discriminative model, uses the encoder part of the Transformer. It is trained to understand the context of a given input and does not generate new data but rather classifies or predicts existing data.
However, it's important to note that these designations are not solely due to the use of encoder or decoder parts but also depend on the training objective of the models.
The distinction between GPT (Generative Pre-trained Transformer) and BERT (Bidirectional Encoder Representations from Transformers) also maps onto the distinction between joint and conditional probability.
GPT, a generative model, is trained to maximize the joint probability of a sequence of tokens (words), or P(X1, X2, ..., Xn). It learns to predict the next word in a sequence, which involves generating new output data based on the learned joint probability distribution.
On the other hand, BERT, a discriminative model, is trained to maximize the conditional probability of a token given its context in a sequence, or P(Xi | X1, X2, ..., Xi-1, Xi+1, ..., Xn). It is trained to understand the context of a given input and does not generate new data but rather classifies or predicts existing data based on the learned conditional probabilities.
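As a schematic illustration of the two objectives, the sketch below assumes PyTorch and uses random tensors standing in for real model outputs; it is not how either model is actually implemented, only how the two losses are assembled from token IDs.

```python
# A schematic sketch (PyTorch assumed, toy tensors in place of real models)
# of the two training objectives discussed above.
import torch
import torch.nn.functional as F

vocab, seq_len = 100, 8
tokens = torch.randint(0, vocab, (1, seq_len))     # one toy sequence

# GPT-style causal LM: at every position, predict the NEXT token given only
# the tokens to its left; summing these losses maximizes the joint
# log P(x1, ..., xn) via the chain rule.
causal_logits = torch.randn(1, seq_len, vocab)     # stand-in for model output
causal_loss = F.cross_entropy(
    causal_logits[:, :-1].reshape(-1, vocab),      # predictions for positions 1..n-1
    tokens[:, 1:].reshape(-1),                     # targets shifted one step left
)

# BERT-style masked LM: hide a few positions, predict each hidden token from
# the full remaining context, i.e. a conditional P(xi | everything else).
masked_positions = torch.tensor([2, 5])
masked_logits = torch.randn(1, seq_len, vocab)     # stand-in for model output
masked_loss = F.cross_entropy(
    masked_logits[0, masked_positions],            # predictions at masked slots only
    tokens[0, masked_positions],                   # the original tokens there
)
print(causal_loss.item(), masked_loss.item())
```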

Why on earth is the decoder modeling a joint probability distribution?

It is absolutely right that GPT (or any similar decoder-only Transformer) technically models conditional probabilities of the form P(xi | x1, x2, ..., xi-1), where xi is a token and x1, x2, ..., xi-1 are the tokens that precede it. This is indeed a series of conditional probabilities. The core training process, particularly for autoregressive models like GPT, is based on maximizing the likelihood of each token given the previous tokens in the sequence, which inherently means maximizing conditional probabilities. However, this process indirectly maximizes the joint probability distribution of the sequence as a whole, because of how probabilities decompose via the chain rule.
When we consider the model as a whole and its function across an entire sequence, it effectively learns the joint probability distribution of the sequence through these conditional probabilities. According to the chain rule of probability, the joint probability of a sequence of tokens can be expressed as the product of conditional probabilities:
P(x1, x2, ..., xn) = P(x1) · P(x2 | x1) · P(x3 | x1, x2) · ... · P(xn | x1, x2, ..., xn-1)
In practice, GPT and similar models are trained to maximize the likelihood of this product (the joint probability) over many sequences. Each prediction step P(xi | x1, x2, ..., xi-1) is a conditional probability, but collectively these steps model the entire sequence's probability distribution.
Thus, while the training process indeed focuses on learning to predict each next token given all previous tokens (a conditional probability), the overarching goal and result of this training is to capture the joint probability distribution across sequences of tokens. This generative capacity enables the model to produce coherent and contextually appropriate continuations from any given starting sequence of tokens, simulating the joint distribution of sequences in the model's training corpus.
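A tiny numeric sketch (made-up probabilities, no model involved) shows the decomposition: multiplying the per-step conditionals gives the joint probability of the sequence, and summing their logs gives its log-probability, which is exactly what the training loss accumulates.

```python
# A numeric sketch of the chain rule: the joint probability of a sequence is
# the product of per-step conditional probabilities (toy numbers, not learned).
import math

# Suppose a model assigned these conditionals while scoring "the cat sat":
conditionals = {
    "P(the)":            0.20,
    "P(cat | the)":      0.10,
    "P(sat | the, cat)": 0.40,
}

joint = math.prod(conditionals.values())
log_joint = sum(math.log(p) for p in conditionals.values())

print(f"joint P(the, cat, sat)  = {joint:.4f}")      # 0.0080
print(f"sum of log-conditionals = {log_joint:.4f}")  # equals log(0.0080)
print(f"log of joint            = {math.log(joint):.4f}")
```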

How does a Generative Model generate examples in practice?

In practice, generative models generate examples by first learning the underlying data distribution during their training phase. Once trained, they can generate new data instances that are likely under this learned distribution, essentially creating new examples that mirror the patterns and structures found in the training data.
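As a concrete sketch, assuming scikit-learn (chosen only for brevity), a Gaussian Mixture Model can stand in for the generative model: it fits a density to the training data and then draws brand-new points from that learned density.

```python
# A minimal sketch of "learn the distribution, then sample from it",
# assuming scikit-learn; a Gaussian Mixture Model stands in for the
# generative model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Training data: two blobs standing in for two modes of the real distribution.
X = np.vstack([rng.normal(loc=[0, 0], scale=0.5, size=(200, 2)),
               rng.normal(loc=[4, 4], scale=0.5, size=(200, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# New examples that were never in the training set, drawn from the learned density.
new_samples, components = gmm.sample(5)
print(new_samples)
```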

Can we use a Discriminative Model to generate examples?

Typically, discriminative models are not used to generate new examples, since they are designed to discriminate between different classes of data rather than to model how the data itself is distributed. In some specific use cases, such as image-to-image translation or text-to-text translation, they can produce outputs that might be considered 'new examples', but those outputs are conditioned on an input rather than sampled from a learned data distribution as with generative models.
