Stable Diffusion: New Open Source AI Image Generator
Just like Dall-E and Midjourney, Stable Diffusion is rising in popularity, but unlike them it is free and open source. With Dall-E and Midjourney we don't have access to the source code, whereas with Stable Diffusion both the model weights and the full source code are available to the public.
"Stable Diffusion is a text-to-image model that will empower billions of people to create stunning art within seconds. It is a breakthrough in speed and quality, meaning that it can run on consumer GPUs. You can see some of the amazing output that has been created by this model without pre- or post-processing on this page." - Stable Diffusion
By the way, a public demonstration space can be found here: https://huggingface.co/spaces/stabilityai/stable-diffusion
We can now adjust the internal parameters in a way we cannot with closed solutions like Dall-E and Midjourney. Let's have a look at some of the things it can do!
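To make this concrete, here is a minimal sketch of generating an image with the open weights via Hugging Face's `diffusers` library. The knobs that closed services hide (sampling steps, guidance strength, the random seed) are ordinary function arguments here. The prompt, filename, and parameter values are illustrative assumptions, and the snippet assumes a CUDA GPU and that you have accepted the model license on Hugging Face.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the publicly released weights (assumes a prior
# `huggingface-cli login` and an accepted model license).
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# A fixed seed makes the output reproducible.
generator = torch.Generator("cuda").manual_seed(42)

image = pipe(
    "a castle floating above the clouds, fantasy art",  # illustrative prompt
    num_inference_steps=50,  # more steps: slower, usually cleaner
    guidance_scale=7.5,      # how strongly to follow the prompt
    generator=generator,
).images[0]
image.save("castle.png")
```

Re-running with only the seed or guidance scale changed is exactly the kind of controlled tweak that closed APIs don't let you script.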
Since the internal parameters are now exposed, we can make small changes to them, generate a series of outputs, and stitch them together into a video.
We can create a beautiful visual novel by entering several prompts. The result of one prompt can then be morphed into the next, creating amazing transitions.
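These morphs are commonly built by interpolating between the Gaussian latent noise vectors that seed two generations, then decoding each intermediate latent into a frame. A minimal sketch using NumPy is below; the `slerp` helper (spherical linear interpolation, often preferred over plain lerp for Gaussian latents) and the toy latent shapes are illustrative, not part of the official Stable Diffusion code.

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray) -> np.ndarray:
    """Spherical linear interpolation between two latent tensors."""
    # Angle between the two tensors, treated as flat vectors.
    u0 = v0 / np.linalg.norm(v0)
    u1 = v1 / np.linalg.norm(v1)
    dot = np.clip(np.sum(u0 * u1), -1.0, 1.0)
    theta = np.arccos(dot)
    if np.isclose(theta, 0.0):
        # Nearly parallel: fall back to plain linear interpolation.
        return (1.0 - t) * v0 + t * v1
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Two random Gaussian latents standing in for the start noise of two prompts.
rng = np.random.default_rng(0)
a = rng.standard_normal((4, 64, 64))
b = rng.standard_normal((4, 64, 64))

# Decoding each intermediate latent through the diffusion model
# would yield one frame of the morph video.
frames = [slerp(t, a, b) for t in np.linspace(0.0, 1.0, 24)]
```

Stitching the decoded frames together with any video tool then produces the smooth transitions seen in these clips.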
Stable Diffusion is remarkably good at creating fantasy art like this:
This video was made with Stable Diffusion (https://github.com/CompVis/stable-dif...), combining 36 prompts into a single, seamless video morph that takes you on a trip through evolution.
Now that a tool like this is finally open to the public, I think we will see an insane amount of creativity over the coming years.