Fix typos and add Typo check GitHub Action (#483)
* Fix typos
* Add a typo check action
* Fix a bug
* Changed to manual typo check currently
  Ref: https://github.com/huggingface/diffusers/pull/483#pullrequestreview-1104468010
  Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* Removed a confusing message
* Renamed "nin_shortcut" to "in_shortcut"
* Add memo about NIN

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
@@ -44,7 +44,7 @@ To this end, the design of schedulers is such that:
 The core API for any new scheduler must follow a limited structure.
 - Schedulers should provide one or more `def step(...)` functions that should be called to update the generated sample iteratively.
 - Schedulers should provide a `set_timesteps(...)` method that configures the parameters of a schedule function for a specific inference task.
-- Schedulers should be framework-agonstic, but provide a simple functionality to convert the scheduler into a specific framework, such as PyTorch
+- Schedulers should be framework-agnostic, but provide a simple functionality to convert the scheduler into a specific framework, such as PyTorch
 with a `set_format(...)` method.
 
 The base class [`SchedulerMixin`] implements low level utilities used by multiple schedulers.
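In miniature, the API contract this hunk documents looks like the sketch below. It is a hypothetical toy scheduler, not diffusers' actual implementation; the class name, the decay rule, and the constants are invented purely to show the `set_timesteps(...)` / `step(...)` surface the docs describe.

```python
import numpy as np

class ToyScheduler:
    """Hypothetical minimal scheduler following the documented API surface."""

    def set_timesteps(self, num_inference_steps: int):
        # Configure the schedule parameters for a specific inference task.
        self.timesteps = np.arange(num_inference_steps)[::-1]

    def step(self, model_output: np.ndarray, timestep: int, sample: np.ndarray) -> np.ndarray:
        # Update the generated sample iteratively from the model's prediction.
        # This placeholder update rule stands in for a real diffusion step.
        return sample - 0.1 * model_output

scheduler = ToyScheduler()
scheduler.set_timesteps(50)
sample = np.random.randn(4)
for t in scheduler.timesteps:
    sample = scheduler.step(model_output=np.ones(4), timestep=int(t), sample=sample)
```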
@@ -53,7 +53,7 @@ The base class [`SchedulerMixin`] implements low level utilities used by multipl
 [[autodoc]] SchedulerMixin
 
 ### SchedulerOutput
-The class [`SchedulerOutput`] contains the ouputs from any schedulers `step(...)` call.
+The class [`SchedulerOutput`] contains the outputs from any schedulers `step(...)` call.
 
 [[autodoc]] schedulers.scheduling_utils.SchedulerOutput
 
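The sentence being fixed refers to the pattern where `step(...)` returns a [`SchedulerOutput`]-style dataclass rather than a bare tensor. A self-contained sketch of that pattern (the `prev_sample` field name follows diffusers' convention; everything else here is illustrative):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SchedulerOutput:
    # The sample computed for the previous timestep, fed into the next step call.
    prev_sample: np.ndarray

# Callers read the named field instead of unpacking a raw return value:
output = SchedulerOutput(prev_sample=np.zeros(4))
sample = output.prev_sample
```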
@@ -71,7 +71,7 @@ Original paper can be found [here](https://arxiv.org/abs/2010.02502).
 
 [[autodoc]] DDPMScheduler
 
-#### Varience exploding, stochastic sampling from Karras et. al
+#### Variance exploding, stochastic sampling from Karras et. al
 
 Original paper can be found [here](https://arxiv.org/abs/2006.11239).
 
@@ -86,11 +86,11 @@ just like we did before only that now you need to pass your `AUTH_TOKEN`:
 >>> generator = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_auth_token=AUTH_TOKEN)
 ```
 
-If you do not pass your authentification token you will see that the diffusion system will not be correctly
-downloaded. Forcing the user to pass an authentification token ensures that it can be verified that the
+If you do not pass your authentication token you will see that the diffusion system will not be correctly
+downloaded. Forcing the user to pass an authentication token ensures that it can be verified that the
 user has indeed read and accepted the license, which also means that an internet connection is required.
 
-**Note**: If you do not want to be forced to pass an authentification token, you can also simply download
+**Note**: If you do not want to be forced to pass an authentification token, you can also simply download
 the weights locally via:
 
 ```
@@ -98,7 +98,7 @@ git lfs install
 git clone https://huggingface.co/CompVis/stable-diffusion-v1-4
 ```
 
-and then load locally saved weights into the pipeline. This way, you do not need to pass an authentification
+and then load locally saved weights into the pipeline. This way, you do not need to pass an authentication
 token. Assuming that `"./stable-diffusion-v1-4"` is the local path to the cloned stable-diffusion-v1-4 repo,
 you can also load the pipeline as follows:
 
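The hunk cuts off just before the snippet that "as follows" points to. It is presumably along these lines, mirroring the earlier `from_pretrained` call in this commit with a local path in place of the Hub id, so no token is needed (the variable name is ours):

```python
from diffusers import DiffusionPipeline

# Point from_pretrained at the locally cloned repo; no auth token required.
pipe = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-4")
```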
@@ -49,7 +49,7 @@ The `textual_inversion.py` script [here](https://github.com/huggingface/diffuser
 
 ### Installing the dependencies
 
-Before running the scipts, make sure to install the library's training dependencies:
+Before running the scripts, make sure to install the library's training dependencies:
 
 ```bash
 pip install diffusers[training] accelerate transformers
@@ -68,7 +68,7 @@ You need to accept the model license before downloading or using the weights. In
 
 You have to be a registered user in 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens).
 
-Run the following command to autheticate your token
+Run the following command to authenticate your token
 
 ```bash
 huggingface-cli login
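Besides the `huggingface-cli login` shown in the hunk, recent versions of `huggingface_hub` expose the same authentication flow from Python; a sketch, assuming `huggingface_hub` is installed:

```python
from huggingface_hub import login

# Prompts for (or accepts) the access token and stores it for later Hub calls.
login()
```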
@@ -18,7 +18,7 @@ distribution.
 
 ## Installing the dependencies
 
-Before running the scipts, make sure to install the library's training dependencies:
+Before running the scripts, make sure to install the library's training dependencies:
 
 ```bash
 pip install diffusers[training] accelerate datasets
@@ -117,7 +117,7 @@ from datasets import load_dataset
 # example 1: local folder
 dataset = load_dataset("imagefolder", data_dir="path_to_your_folder")
 
-# example 2: local files (suppoted formats are tar, gzip, zip, xz, rar, zstd)
+# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd)
 dataset = load_dataset("imagefolder", data_files="path_to_zip_file")
 
 # example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd)
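The hunk truncates before the `load_dataset` call belonging to example 3; it presumably mirrors example 2 with a URL instead of a local path. A sketch with a placeholder URL (not from the original docs):

```python
from datasets import load_dataset

# example 3: remote files; the URL below is a placeholder
dataset = load_dataset("imagefolder", data_files="https://example.com/images.zip")
```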