

DreamBooth training example

DreamBooth is a method to personalize text2image models like Stable Diffusion given just a few (3-5) images of a subject. The script shows how to implement the training procedure and adapt it for Stable Diffusion.

Running locally with PyTorch

Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:
To make sure you can successfully run the latest versions of the example scripts, we highly recommend installing from source and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
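A typical setup in a fresh virtual environment is an editable install of the repo (the clone location is up to you):

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```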
Then cd into the example folder and run
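Assuming the DreamBooth example keeps its usual requirements file, that is:

```bash
cd examples/dreambooth
pip install -r requirements.txt
```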
And initialize an Accelerate environment with:
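That is, using the Accelerate CLI:

```bash
accelerate config
```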
Or for a default accelerate configuration without answering questions about your environment
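The Accelerate CLI provides a non-interactive default for this:

```bash
accelerate config default
```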
Or if your environment doesn't support an interactive shell (e.g., a notebook)
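A minimal way to do this from Python is with Accelerate's helper:

```python
from accelerate.utils import write_basic_config

# Writes a default Accelerate config file without any interactive prompts
write_basic_config()
```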

Dog toy example

Now let's get our dataset. Download images from here and save them in a directory. This will be our training data.
And launch the training using
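A launch command along the following lines should work; the model id, directories, and hyperparameters below are illustrative placeholders rather than the only valid choices:

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path-to-instance-images"
export OUTPUT_DIR="path-to-save-model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=400
```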
Note: Change the `--resolution` to 768 if you are using the stable-diffusion-2 768x768 model.

Training with prior-preservation loss

Prior preservation is used to avoid overfitting and language drift. Refer to the paper to learn more about it. For prior preservation we first generate images using the model with a class prompt and then use those during training along with our data. According to the paper, 200-300 such images work well for most cases. The `--num_class_images` flag sets the number of images to generate with the class prompt. You can place existing images in the directory passed via `--class_data_dir`, and the training script will generate any additional images needed so that `--num_class_images` images are present there at training time.
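A sketch of the prior-preservation variant of the launch command; the class prompt, prior loss weight, and step counts below are illustrative:

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path-to-instance-images"
export CLASS_DIR="path-to-class-images"
export OUTPUT_DIR="path-to-save-model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800
```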

Training on a 16GB GPU:

With the help of gradient checkpointing and the 8-bit optimizer from bitsandbytes it's possible to train DreamBooth on a 16GB GPU.
To install bitsandbytes, please refer to this readme.
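The key additions are `--gradient_checkpointing` and `--use_8bit_adam`; the rest of the command mirrors the prior-preservation example above (values illustrative):

```bash
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=2 --gradient_checkpointing \
  --use_8bit_adam \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800
```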

Training on an 8 GB GPU:

By using DeepSpeed it's possible to offload some tensors from VRAM to either CPU or NVMe, allowing training with less VRAM.
DeepSpeed needs to be enabled with `accelerate config`. During configuration, answer yes to "Do you want to use DeepSpeed?". With DeepSpeed stage 2, fp16 mixed precision, and offloading both parameters and optimizer state to CPU, it's possible to train on under 8 GB of VRAM, with the drawback of requiring significantly more system RAM (about 25 GB). See the documentation for more DeepSpeed configuration options.
Changing the default Adam optimizer to DeepSpeed's special version of Adam, `deepspeed.ops.adam.DeepSpeedCPUAdam`, gives a substantial speedup, but enabling it requires a CUDA toolchain with the same version as PyTorch. The 8-bit optimizer does not seem to be compatible with DeepSpeed at the moment.
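Under those assumptions (ZeRO stage 2, fp16, CPU offload configured via `accelerate config`), the launch command could look like the following; paths and hyperparameters are again illustrative:

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path-to-instance-images"
export OUTPUT_DIR="path-to-save-model"

accelerate launch --mixed_precision="fp16" train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --sample_batch_size=1 \
  --gradient_accumulation_steps=1 --gradient_checkpointing \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=800
```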

Fine-tune text encoder with the UNet.

The script also allows you to fine-tune the `text_encoder` along with the `unet`. It's been observed experimentally that fine-tuning the `text_encoder` gives much better results, especially on faces. Pass the `--train_text_encoder` argument to the script to enable training the `text_encoder`.
Note: Training the text encoder requires more memory; with this option the training won't fit on a 16GB GPU. It needs at least 24GB VRAM.
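The flag is simply added to the launch command; for example, combined with prior preservation and the memory savers from above (hyperparameters illustrative):

```bash
accelerate launch train_dreambooth.py \
  --train_text_encoder \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --use_8bit_adam \
  --gradient_checkpointing \
  --learning_rate=2e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800
```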

Using DreamBooth for pipelines other than Stable Diffusion

AltDiffusion also supports DreamBooth now; the running command is basically the same as above. All you need to do is change the `pretrained_model_name_or_path` to point at another architecture such as AltDiffusion.
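For example, switching the exported model id (the exact AltDiffusion repo ids on the Hub should be double-checked):

```bash
# Stable Diffusion
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
# AltDiffusion (assumed Hub ids)
export MODEL_NAME="BAAI/AltDiffusion-m9"
# or
export MODEL_NAME="BAAI/AltDiffusion"
```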


Inference

Once you have trained a model using the above command, inference can be done simply with the `StableDiffusionPipeline`. Make sure to include the identifier (e.g. sks in the above example) in your prompt.
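A minimal inference sketch, assuming the trained pipeline was saved to the output directory used during training:

```python
import torch
from diffusers import StableDiffusionPipeline

# Path (or Hub id) of the DreamBooth-trained model saved by the training script
model_id = "path-to-save-model"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# Include the identifier (here "sks") in the prompt
prompt = "A photo of sks dog in a bucket"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]

image.save("dog-bucket.png")
```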

Inference from a training checkpoint

You can also perform inference from one of the checkpoints saved during the training process, if you used the `--checkpointing_steps` argument. Please refer to the documentation to see how to do it.

Training with Flax/JAX

For faster training on TPUs and GPUs you can leverage the flax training example. Follow the instructions above to get the model and dataset before running the script.
Note: The Flax example doesn't yet support features like gradient checkpointing and gradient accumulation, so to use Flax for faster training you will need a card with more than 30GB of memory.
Before running the scripts, make sure to install the library's training dependencies:
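For the Flax example, the dependency file in this folder is the Flax-specific one:

```bash
pip install -U -r requirements_flax.txt
```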

Training without prior preservation loss
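A sketch of the basic Flax launch; the script is run with plain python, and `MODEL_NAME` must point at a checkpoint that ships Flax weights (placeholder below):

```bash
export MODEL_NAME="path-or-hub-id-with-flax-weights"
export INSTANCE_DIR="path-to-instance-images"
export OUTPUT_DIR="path-to-save-model"

python train_dreambooth_flax.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=5e-6 \
  --max_train_steps=400
```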

Training with prior preservation loss
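As in the PyTorch version, prior preservation adds the class-image arguments; the values below are illustrative:

```bash
export CLASS_DIR="path-to-class-images"

python train_dreambooth_flax.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=5e-6 \
  --num_class_images=200 \
  --max_train_steps=800
```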

Fine-tune text encoder with the UNet.
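Assuming the Flax script exposes the same `--train_text_encoder` flag as the PyTorch one, the flag is simply appended:

```bash
python train_dreambooth_flax.py \
  --train_text_encoder \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=2e-6 \
  --num_class_images=200 \
  --max_train_steps=800
```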

Training with xformers:

You can enable memory efficient attention by installing xFormers and passing the `--enable_xformers_memory_efficient_attention` argument to the script. This is not available with the Flax/JAX implementation.
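xFormers itself is installed from PyPI (a wheel matching your CUDA/PyTorch versions is assumed to be available):

```bash
pip install xformers
```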
You can also use DreamBooth to train the specialized in-painting model. See the script in the research folder for details.