Why Generative Models Don’t Work Well for Classification Tasks
Author(s): Khanh Vy Nguyen
Originally published on Towards AI.
We have all heard about generative models, and some of them achieve incredible results, such as GANs (Generative Adversarial Networks) and GPT (Generative Pre-trained Transformer). While they are gaining attention from researchers in NLP and computer vision, do generative models actually work well on simpler machine learning tasks such as text classification?
First, let’s see which models belong to the generative family and how they work.
1. Generative Models
There are quite a few popular models in this family:
- Naive Bayes: if you have just started learning NLP, you will see Naive Bayes classification as the classic example
- Hidden Markov Models: a probabilistic model commonly used for time series. The most recent package implementing Hidden Markov Models that I have used is PyMC
- Autoencoder: a vanilla autoencoder consists of an encoder and a decoder, both of which are usually neural networks
- Variational Autoencoders / Generative Adversarial Networks: used for generating new samples
While these generative models are built on different techniques, they share the same generative mechanism: they learn the distribution of the samples and can generate new data from it, rather than directly learning which feature values separate one class from another.
We can see this mechanism in everything from Naive Bayes to GANs. Naive Bayes is the clearest example: it learns the joint probability P(x, y) of features and labels, as the formula in the next section shows.
Now, let’s look at generative models for classification and see why they usually don’t work as well as discriminative models.
2. Generative model for text classification
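For reference, here is the standard Naive Bayes formulation that the points below refer to: Bayes’ rule applied to a label y and features x1, …, xn, with the likelihood factored under the “naive” conditional-independence assumption:

```latex
P(y \mid x_1, \dots, x_n)
  = \frac{P(y)\, P(x_1, \dots, x_n \mid y)}{P(x_1, \dots, x_n)}
  \approx \frac{P(y) \prod_{i=1}^{n} P(x_i \mid y)}{P(x_1, \dots, x_n)}
```

The numerator P(y) P(x | y) is exactly the joint probability P(x, y), which is what a generative classifier estimates from the training data.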
- Based on the formula, we can see that the model learns the joint probability P(x, y) by factoring it as P(y) P(x | y), and the “naive” part is that all features are assumed to be conditionally independent of each other given the label, an assumption that often does not hold for real-world datasets.
- Then P(x) is the probability of the features occurring across all labels. What if a word in the test set never occurs with a particular label in the training set, which happens often in NLP classification because of the limits of a bag of words? Naive Bayes assigns it a zero probability, and the whole class score collapses to zero. To fix that, we add a “smoothing” hyperparameter, usually called alpha, which gives every word a small pseudo-count; this keeps the probabilities finite, but it can also distort the estimates for rare words (a short scikit-learn sketch of this appears after this list).
- In NLP, one word can have multiple meanings. For example, the word “plant” can refer to a living plant or to a factory, so the same sentence fits both senses:
This is a plant
This is also a plant
Classification without context can be a nightmare for Naive Bayes or any generative model of this kind, because they learn from the distribution of words, not from their context.
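To make the smoothing point concrete, here is a minimal sketch using MultinomialNB from scikit-learn, whose alpha parameter plays the role of the smoothing hyperparameter described above; the sentences, labels, and test phrase are toy data made up purely for illustration:

```python
# Minimal sketch of add-alpha (Laplace) smoothing in a Naive Bayes text classifier.
# All data below is invented toy data for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_docs = ["cheap pills buy now", "buy cheap watches",
              "meeting at noon tomorrow", "project meeting rescheduled"]
train_labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_docs)  # bag-of-words counts

# alpha adds a pseudo-count for every word in every class, so a word that never
# co-occurred with a class in training (e.g., "cheap" never appears in a ham
# document) no longer forces P(word | class) = 0 and wipes out that class.
clf = MultinomialNB(alpha=1.0)
clf.fit(X_train, train_labels)

# "cheap meeting tomorrow" mixes spam-only and ham-only words; without smoothing,
# both class scores would collapse to zero.
X_test = vectorizer.transform(["cheap meeting tomorrow"])
print(clf.predict(X_test))
print(clf.predict_proba(X_test))
```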
This is where discriminative models come in handy.
3. Classification on Images
Let’s say cats have pointy ears and short fur, while dogs have round ears and long fur. A discriminative model takes these features and tries to determine whether the image shows a dog or a cat. In other words, it models the probability of the class Y given a set of features X, that is, P(Y|X).
Generative models, on the other hand, learn the probability of the features given the label, which can be written as P(X|Y). They try to learn what a realistic example of a class looks like by modeling the distribution of features within class Y.
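As a small illustration of the two views, here is a sketch that fits a generative classifier (GaussianNB, which models P(X|Y) for each class) and a discriminative one (LogisticRegression, which models P(Y|X) directly) on the same toy features; the numeric encoding of “ear pointiness” and “fur length” and all the values are invented for illustration:

```python
# Toy comparison: a generative vs. a discriminative classifier on the same features.
# The features use an invented encoding: [ear pointiness, fur length].
import numpy as np
from sklearn.naive_bayes import GaussianNB              # generative: models P(X | Y) per class
from sklearn.linear_model import LogisticRegression     # discriminative: models P(Y | X) directly

X = np.array([[0.9, 0.2], [0.8, 0.3],   # pointy ears, short fur -> "cat"
              [0.2, 0.8], [0.3, 0.9]])  # round ears, long fur  -> "dog"
y = np.array(["cat", "cat", "dog", "dog"])

generative = GaussianNB().fit(X, y)               # learns a Gaussian over features per class
discriminative = LogisticRegression().fit(X, y)   # learns a boundary between the classes

x_new = np.array([[0.7, 0.4]])
print(generative.predict_proba(x_new))      # P(Y | X) recovered via Bayes' rule from P(X | Y) P(Y)
print(discriminative.predict_proba(x_new))  # P(Y | X) modeled directly
```

Both print a probability for “cat” and “dog”, but they arrive at it differently: the generative model goes through the class-conditional feature distributions, while the discriminative model never models the features themselves at all.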
Because of this, a GAN can still be useful for classification when used as a feature extractor: its learned representation is fed into a discriminative classifier, with the hope that a good GAN has captured the underlying distribution of features and that those features help the classification task.
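The sketch below shows the shape of that idea rather than a working pipeline: a small PyTorch discriminator network stands in for a trained GAN discriminator, its hidden layers are used as a feature extractor, and a simple discriminative classifier is fit on top. The Discriminator architecture, the random stand-in images, and the labels are all assumptions made for illustration; in practice the discriminator weights would come from an actual trained GAN.

```python
# Sketch: reuse a GAN discriminator's hidden layers as a feature extractor,
# then train a simple discriminative classifier on top of those features.
# The network, images, and labels below are stand-ins for illustration only.
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(                                  # convolutional feature extractor
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
        )
        self.real_or_fake = nn.Linear(32 * 7 * 7, 1)                    # head used during GAN training

    def forward(self, x):
        return self.real_or_fake(self.features(x))

disc = Discriminator().eval()                    # in practice: load weights from a trained GAN

images = torch.randn(64, 1, 28, 28)              # stand-in for real 28x28 grayscale images
labels = torch.randint(0, 2, (64,)).numpy()      # stand-in class labels

with torch.no_grad():
    feats = disc.features(images).numpy()        # extract the learned representation

clf = LogisticRegression(max_iter=1000).fit(feats, labels)   # discriminative final step
print(clf.score(feats, labels))
```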
In conclusion, text classification, given the semantics of language, can be a challenge for generative models. They can be useful as feature extractors for image classification; however, at the final step, we still need a discriminative model.