Busy? This Is Your Quick Guide to Opening the Diffusion Models Black Box
Last Updated on August 18, 2023 by Editorial Team
Author(s): Paul Iusztin
Originally published on Towards AI.
Decode Stable Diffusion: Train, Generate New Images, & Control Using a Given Context
Prompt: "An oil pastel drawing of a funny cat sleeping in a weird position" [Image by the Author, generated using DALL-E]
If you opened this article, you have probably used a text-to-image model from a service such as DALL-E, Midjourney, or Stability AI.
Well, all of them are based on diffusion models.
Even if you want to treat them as a magic black box, having an intuition about how they work under the hood will help you generate better art.
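To make that concrete before we dive in, here is a minimal sketch of generating an image from a text prompt with a pretrained diffusion model using the open-source Hugging Face diffusers library. The checkpoint name, sampling settings, and GPU assumption below are illustrative choices, not details from this article:

```python
# Minimal sketch: text-to-image generation with a pretrained diffusion model.
# The checkpoint ID and GPU usage are assumptions for illustration.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained Stable Diffusion checkpoint (weights download on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint; any compatible one works
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

prompt = "An oil pastel drawing of a funny cat sleeping in a weird position"
# The pipeline runs the iterative denoising loop for us; more steps and a higher
# guidance scale generally trade speed for fidelity to the prompt.
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("funny_cat.png")
```

Everything interesting happens inside that single pipeline call, which is exactly the black box the rest of this article opens up.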
This article aims to give you an intuition on how diffusion models generate new…