[doc] add link to training script (#3271)
add link to training script

Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
examples/controlnet/README.md

@@ -33,7 +33,12 @@ cd diffusers
 pip install -e .
 ```
 
-Then navigate into the example folder and run:
+Then navigate into the [example folder](https://github.com/huggingface/diffusers/tree/main/examples/controlnet)
+```bash
+cd examples/controlnet
+```
+
+Now run:
 ```bash
 pip install -r requirements.txt
 ```
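Taken together, the new controlnet instructions boil down to the sequence below. This is only a consolidated sketch for review, assuming a working Python environment with pip; the `git clone` step is inferred from the part of the README above this hunk (the hunk context already shows `cd diffusers`).

```bash
# Consolidated setup sketch (not part of the diff): the clone step is assumed
# from the README section above this hunk; everything else mirrors the updated text.
git clone https://github.com/huggingface/diffusers.git
cd diffusers
pip install -e .                  # editable install of diffusers
cd examples/controlnet            # the folder the new link points to
pip install -r requirements.txt   # example-specific dependencies
```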
examples/custom_diffusion/README.md

@@ -33,7 +33,13 @@ cd diffusers
 pip install -e .
 ```
 
-Then cd in the example folder and run
+Then cd into the [example folder](https://github.com/huggingface/diffusers/tree/main/examples/custom_diffusion)
+
+```
+cd examples/custom_diffusion
+```
+
+Now run
 ```bash
 pip install -r requirements.txt
 ```
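Between the editable install and the example requirements, an optional sanity check (not part of this change) is to confirm that `diffusers` imports cleanly; a minimal sketch:

```bash
# Optional sanity check, not in the README change: verify the editable install
# of diffusers is importable before installing the custom_diffusion requirements.
python -c "import diffusers; print(diffusers.__version__)"
```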
examples/instruct_pix2pix/README.md

@@ -24,7 +24,7 @@ The output is an "edited" image that reflects the edit instruction applied on th
 <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/output-gs%407-igs%401-steps%4050.png" alt="instructpix2pix-output" width=600/>
 </p>
 
-The `train_instruct_pix2pix.py` script shows how to implement the training procedure and adapt it for Stable Diffusion.
+The `train_instruct_pix2pix.py` script (you can find it [here](https://github.com/huggingface/diffusers/blob/main/examples/instruct_pix2pix/train_instruct_pix2pix.py)) shows how to implement the training procedure and adapt it for Stable Diffusion.
 
 ***Disclaimer: Even though `train_instruct_pix2pix.py` implements the InstructPix2Pix
 training procedure while being faithful to the [original implementation](https://github.com/timothybrooks/instruct-pix2pix), we have only tested it on a [small-scale dataset](https://huggingface.co/datasets/fusing/instructpix2pix-1000-samples). This can impact the end results. For better results, we recommend longer training runs with a larger dataset. [Here](https://huggingface.co/datasets/timbrooks/instructpix2pix-clip-filtered) you can find a large dataset for InstructPix2Pix training.***
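For anyone who wants to peek at the small-scale dataset the disclaimer refers to, a minimal sketch, assuming the 🤗 `datasets` library (pulled in by the example's requirements) is installed:

```bash
# Quick look at the small-scale test dataset named in the disclaimer; assumes
# the `datasets` library is installed in the current environment.
python -c "from datasets import load_dataset; print(load_dataset('fusing/instructpix2pix-1000-samples'))"
```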
examples/instruct_pix2pix/README.md

@@ -44,7 +44,12 @@ cd diffusers
 pip install -e .
 ```
 
-Then cd in the example folder and run
+Then cd into the example folder
+```bash
+cd examples/instruct_pix2pix
+```
+
+Now run
 ```bash
 pip install -r requirements.txt
 ```
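Once the requirements are installed, a low-risk way to explore the script the documentation now links to is to print its command-line options. This assumes `train_instruct_pix2pix.py` exposes an argparse-style CLI like the other diffusers example scripts, which is an assumption on our part rather than something shown in this diff.

```bash
# Assumes an argparse-style CLI; --help should only print the available
# training options and exit. Run from inside examples/instruct_pix2pix.
python train_instruct_pix2pix.py --help
```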