How I trained Stable Diffusion to generate pictures of myself

<p>This tutorial describes how I used&nbsp;<a href="https://dreambooth.github.io/" rel="noopener ugc nofollow" target="_blank"><strong>DreamBooth</strong></a>, from the Google Research paper&nbsp;<em>DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation</em>, to train a Generative AI model able to create images of myself.</p> <p>The paper introduces itself like this:</p> <blockquote> <p>It&rsquo;s like a photo booth, but once the subject is captured, it can be synthesized wherever your dreams take you&hellip;</p> </blockquote> <p><img alt="Examples of images generated by DreamBooth" src="https://miro.medium.com/v2/resize:fit:700/1*WX49fu6ndOu9bYfKDuPjBA.jpeg" style="height:197px; width:700px" /></p> <p>Examples of images generated by DreamBooth</p> <p>Hugging Face, known for making machine learning models accessible, incorporated&nbsp;<a href="https://github.com/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb" rel="noopener ugc nofollow" target="_blank">DreamBooth into their ecosystem</a>&nbsp;in September this year. That was a game changer for DreamBooth&rsquo;s usability.</p> <h1>Steps for training the model</h1> <p>The whole process was quite straightforward once I found a notebook that worked. The Generative AI field has been evolving fast over the last couple of months, and because of that, many implementations are outdated or simply broken due to dependencies that have changed.</p> <p><a href="https://medium.com/@thiagoalves/how-i-trained-stable-diffusion-to-generate-pictures-of-myself-3412814c952e"><strong>Read More</strong></a></p>
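<p>For readers who prefer a script over a notebook, the same kind of fine-tuning can be launched with the&nbsp;<code>train_dreambooth.py</code> example from the Hugging Face <code>diffusers</code> repository. The sketch below is illustrative rather than the exact command used for this article: the data and output paths, the <code>sks</code> subject identifier, and the hyperparameter values are placeholders you would adapt to your own photos and hardware.</p>
<pre><code># Sketch: fine-tune Stable Diffusion on a folder of photos of one subject
# using the diffusers DreamBooth example script. Paths, the "sks" token,
# and hyperparameters are placeholder assumptions, not values from the article.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" \
  --instance_data_dir="./my-photos" \
  --output_dir="./dreambooth-model" \
  --instance_prompt="a photo of sks person" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --max_train_steps=800
</code></pre>
<p>The rare token (here <code>sks</code>) binds the subject to a prompt the model has no prior associations with, which is the core DreamBooth idea: after training, prompts like &ldquo;a photo of sks person on a mountain&rdquo; generate the subject in new settings.</p>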