
GANs using MNIST Dataset

Last Updated on July 24, 2023 by Editorial Team

Author(s): Aman Sawarn

Originally published on Towards AI.

Generating MNIST-like character images using the Keras API


“Generative Adversarial Networks is the most interesting idea in the last 10 years in Machine Learning” – Yann LeCun

Generative Adversarial Networks (or GANs) have become extremely popular since early 2018. GANs are all about creating, styling, and manipulating images that are similar to the dataset images but not exactly the same.

GANs have two components:

  1. Generator
  2. Discriminator

It is mainly an unsupervised computer vision network in which the output of the generator is pitted against the discriminator. Once the entire network has been trained and evaluated, we use only the generator block to generate new images.

Simple Intuition

The generator is like a peddler trying to create new wine samples, while the discriminator works as a team of wine tasters trying to spot the created ones. Both the generator and the discriminator try to outperform each other. Also, both the generator and the discriminator have multi-layer perceptron (MLP) architectures.

Generative Adversarial Network

The job of the generator is to create new images, similar to those in the dataset, that are indistinguishable to the discriminator network.

The discriminator network considers two sources of input, i.e., images from the real dataset and images from the generator network. It works as a binary classifier and classifies whether a given image is a generated one or a real one.

Training a GAN

For this example blog, we will use the MNIST dataset and create new character images. We will go through the details and intricacies of the model one by one. Training a GAN involves the following steps:

Step 1: Loading the Dataset

In this step, we load our dataset. For this blog, we use the MNIST dataset, in which every data point is a (28, 28) image. The images are flattened into 784-dimensional vectors. There are 60,000 images in the dataset in total.
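A minimal sketch of this step, assuming the standard Keras MNIST loader; scaling pixels to [-1, 1] is an assumption, chosen to match the tanh generator output described in Step 3.

```python
import numpy as np
from tensorflow.keras.datasets import mnist

# Load the 60,000 training images, each of shape (28, 28).
(X_train, _), (_, _) = mnist.load_data()

# Scale pixels to [-1, 1] (an assumption, matching the tanh output of
# the generator) and flatten each image to a 784-dimensional vector.
X_train = (X_train.astype("float32") - 127.5) / 127.5
X_train = X_train.reshape(-1, 784)
```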

Step 2: Defining the Optimizer Parameters

We define our Adam optimizer with the following parameter:

Learning rate=0.02
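A short sketch using the learning rate stated above; the post does not give the remaining Adam hyperparameters, so Keras defaults are assumed. A helper is used so that each model below gets its own optimizer instance (a Keras optimizer tracks one model's weights).

```python
from tensorflow.keras.optimizers import Adam

# Adam with the learning rate from the post; all other hyperparameters
# are left at Keras defaults (an assumption).
def make_adam():
    return Adam(learning_rate=0.02)
```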

Step 3: Defining the Generator model Architecture

The generator model is an MLP architecture with one layer stacked over another. It takes 100-dimensional random noise and returns a 784-dimensional output vector. It must be noted that the final output layer has a ‘tanh’ activation, not a ‘sigmoid’ activation. (In short, tanh produces outputs in [-1, 1], which matches images scaled to that range; a fuller justification is beyond the scope of this blog.)

Generator Network
Summary of Generator Network
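A sketch of such a generator. The post confirms only the 100-dimensional input, the 784-dimensional tanh output, and the MLP structure; the hidden-layer widths (256/512/1024) and the LeakyReLU activations are assumptions typical of MNIST GAN tutorials.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LeakyReLU

def build_generator():
    # Maps 100-dimensional random noise to a 784-dimensional image vector.
    model = Sequential([
        Dense(256, input_dim=100),   # hidden widths are assumptions
        LeakyReLU(0.2),
        Dense(512),
        LeakyReLU(0.2),
        Dense(1024),
        LeakyReLU(0.2),
        Dense(784, activation="tanh"),  # tanh output layer, as noted above
    ])
    return model

generator = build_generator()
generator.summary()
```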

Step 4: Defining the Discriminator model

Like the generator model, the discriminator model is also an MLP architecture. It takes a 784-dimensional input, either from the real data or from the generator network. It returns a single output value: the probability score used to classify an image as generated or real. Unlike the generator, it has a sigmoid activation in the final output layer rather than tanh.

Discriminator Network
Summary of Discriminator Network
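A matching discriminator sketch. Only the 784-dimensional input, the single sigmoid output, and the MLP structure come from the post; the layer widths, the binary cross-entropy loss, and the reuse of the make_adam helper from Step 2 are assumptions.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LeakyReLU

def build_discriminator():
    # Binary classifier: outputs the probability that an image is real.
    model = Sequential([
        Dense(1024, input_dim=784),  # hidden widths are assumptions
        LeakyReLU(0.2),
        Dense(512),
        LeakyReLU(0.2),
        Dense(256),
        LeakyReLU(0.2),
        Dense(1, activation="sigmoid"),  # sigmoid, not tanh, in the final layer
    ])
    model.compile(loss="binary_crossentropy", optimizer=make_adam())
    return model

discriminator = build_discriminator()
discriminator.summary()
```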

Step 5: Defining the GAN model

So far, we have loaded the MNIST dataset and defined the generator and the discriminator networks. Now, we combine the generator and discriminator models to define the GAN model.

We feed 100-dimensional random noise to the generator network, and its output is fed into the discriminator network. It is difficult to train the discriminator and the generator simultaneously; in neural network terms, the challenge of training two networks at the same time is that they may fail to converge.

GAN
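A sketch of the combined model. Freezing the discriminator before compiling the stacked model is the standard Keras pattern for alternating training: updates through this model touch only the generator's weights. The loss choice is an assumption consistent with the discriminator above.

```python
from tensorflow.keras.models import Sequential

def build_gan(generator, discriminator):
    # Freeze the discriminator inside the combined model so that
    # training the GAN updates only the generator's weights.
    discriminator.trainable = False
    model = Sequential([generator, discriminator])
    model.compile(loss="binary_crossentropy", optimizer=make_adam())
    return model

gan = build_gan(generator, discriminator)
```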

Step 6: Defining function to create images from the generator output

By now, it should be clear that the output of the generator network is a 784-dimensional vector, while the images in the MNIST dataset have a size of (28, 28). So, once the generator model has made its prediction, the output is reshaped into a (28, 28) matrix.
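A sketch of such a helper, assuming matplotlib for display; the grid size and the standard-normal noise distribution are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_generated_images(generator, examples=25, grid=(5, 5)):
    # Sample random noise, generate images, and reshape each
    # 784-dimensional vector back into a (28, 28) matrix.
    noise = np.random.normal(0, 1, size=(examples, 100))
    images = generator.predict(noise, verbose=0).reshape(examples, 28, 28)

    plt.figure(figsize=(5, 5))
    for i in range(examples):
        plt.subplot(grid[0], grid[1], i + 1)
        plt.imshow(images[i], cmap="gray_r")
        plt.axis("off")
    plt.tight_layout()
    plt.show()
```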

Step 7: Training the Network

In this step, we define a batch size (say batch_size = 128). Some random noise is fed to the generator network, from which it predicts an output. Batches of data from the real dataset and the generated dataset are then given to the discriminator. We make the discriminator weights trainable while the generator weights are frozen, and then reverse this: the GAN is trained by alternately freezing the weights of the generator and discriminator models, as sketched below.

Training on a batch size of 128 for 500 epochs
This is what training looks like in the initial epochs
This is what training looks like in the final epochs.
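A sketch of the alternating loop described above, using batch_size = 128 and 500 epochs as stated. It assumes the names from the earlier sketches (X_train, generator, discriminator, gan, plot_generated_images), treats each "epoch" as one batch update, and labels real images 1 and generated images 0. The discriminator trains on its own compiled model; the combined GAN model, whose discriminator was frozen at compile time in Step 5, trains only the generator.

```python
import numpy as np

batch_size, epochs = 128, 500

for epoch in range(1, epochs + 1):
    # 1) Train the discriminator: half real images (label 1),
    #    half generated images (label 0).
    noise = np.random.normal(0, 1, size=(batch_size, 100))
    generated = generator.predict(noise, verbose=0)
    real = X_train[np.random.randint(0, X_train.shape[0], batch_size)]

    X = np.concatenate([real, generated])
    y = np.concatenate([np.ones(batch_size), np.zeros(batch_size)])
    d_loss = discriminator.train_on_batch(X, y)

    # 2) Train the generator through the combined model. The
    #    discriminator is frozen there, so only the generator's weights
    #    are updated; the generator tries to make the discriminator
    #    output 1 ("real") for its samples.
    noise = np.random.normal(0, 1, size=(batch_size, 100))
    g_loss = gan.train_on_batch(noise, np.ones(batch_size))

    if epoch % 100 == 0:
        print(f"epoch {epoch}: d_loss={d_loss:.3f}, g_loss={g_loss:.3f}")
        plot_generated_images(generator)
```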

For further reference, try this Colab notebook or this GitHub link.

Outputs

Initial epochs (fewer than 20)

Final epochs (between 450 and 500)

Further reading: “GAN – What is Generative Adversarial Networks GAN?” on medium.com


Published via Towards AI
