DreamFusion: Generate 3D Models From Text
AI-generated 3D models are coming. DreamFusion: Text-to-3D using 2D Diffusion.
We could once only dream of text-to-image technology, which eventually arrived with DALL-E, Midjourney, and Stable Diffusion. Blender, the open-source 3D software, now has a Stable Diffusion implementation for creating seamless textures.
We all knew it was only a matter of time before text-to-3D technology arrived. AI is definitely changing how 3D apps are made, now and in the future.
Adapting the text-to-image approach to 3D synthesis would require large-scale datasets of labeled 3D assets and efficient architectures for denoising 3D data, neither of which currently exists. DreamFusion works around these limitations by using a pretrained 2D text-to-image diffusion model to perform text-to-3D synthesis: it optimizes a 3D scene so that its 2D renderings, from any viewpoint, look like good samples to the frozen diffusion model.
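To make that loop concrete, here is a toy numpy sketch of the optimization idea (the paper calls it Score Distillation Sampling). Everything here is a stand-in: `render` is a placeholder for a real NeRF renderer, and `predict_noise` fakes a pretrained text-conditioned diffusion model by pulling toward a known target image, just so the example runs self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "3D scene": a parameter grid that "renders" to itself.
# A real system would run differentiable volumetric (NeRF) rendering here.
def render(theta):
    return theta

# Pretend this image is what the text prompt describes.
target = np.full((8, 8), 0.5)

# Stand-in "pretrained 2D diffusion model": given a noised image, predict
# the noise that was added. A real model is learned; this fake one knows
# the target so the example stays self-contained.
def predict_noise(noisy_image, noise_level):
    return (noisy_image - target) / max(noise_level, 1e-8)

theta = rng.normal(size=(8, 8))  # scene parameters, randomly initialized
lr = 0.05
for step in range(200):
    noise_level = rng.uniform(0.2, 0.8)   # random diffusion timestep
    eps = rng.normal(size=theta.shape)    # injected noise
    image = render(theta)
    noisy = image + noise_level * eps
    # Score-distillation-style gradient: (predicted noise - injected noise),
    # pushed back through the renderer (identity here, so it passes through).
    grad = predict_noise(noisy, noise_level) - eps
    theta -= lr * grad

# After optimization, the rendering should be close to the target.
print(float(np.abs(render(theta) - target).mean()))
```

The key property this mimics is that no 3D data is ever needed: the only supervision is a 2D model's opinion of rendered images, averaged over random noise levels.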
Currently, none of the models are downloadable. The point of these samples is to push the limits of what AI can create, and I think this is the best showcase of AI-generated 3D models so far. The AI generates sculpted geometry, and the project gallery has plenty of impressive examples. It's interesting to see how the results turn out and how the AI turns a text prompt into a 3D model.