Update README.md to include our blog post (#1998)
* Update README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
@@ -321,3 +321,6 @@ python train_dreambooth_flax.py \
You can enable memory-efficient attention by [installing xFormers](https://github.com/facebookresearch/xformers#installing-xformers) and passing the `--enable_xformers_memory_efficient_attention` argument to the script. This is not available with the Flax/JAX implementation.
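As a minimal sketch, the flag slots into the PyTorch training command like this. Only the flag itself comes from the text above; the checkpoint, directories, prompt, and hyperparameters are illustrative placeholders:

```bash
# Install xFormers first (see the link above for prebuilt wheels):
pip install xformers

# Then append the flag to the usual PyTorch DreamBooth command
# (all other values below are placeholders, not recommendations):
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./my_subject_images" \
  --output_dir="./dreambooth-model" \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --max_train_steps=400 \
  --enable_xformers_memory_efficient_attention
```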
You can also use DreamBooth to train a specialized in-painting model. See [the script in the research folder for details](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/dreambooth_inpaint), along with the sketch below.
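A hypothetical invocation might look like the following, assuming the research script (`train_dreambooth_inpaint.py`) accepts the same core arguments as the main DreamBooth script and is given an in-painting base checkpoint; check the research folder's own README for the actual interface:

```bash
# Illustrative only: script arguments and values are assumptions,
# not taken from this diff.
accelerate launch train_dreambooth_inpaint.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-inpainting" \
  --instance_data_dir="./my_subject_images" \
  --output_dir="./dreambooth-inpaint-model" \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --max_train_steps=400
```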
### Experimental results
You can refer to [this blog post](https://huggingface.co/blog/dreambooth), which discusses some of our DreamBooth experiments in detail. In particular, it recommends a set of DreamBooth-specific tips and tricks that we have found to work well for a variety of subjects.