* add: LoRA text encoder support for DreamBooth example.
* fix initialization.
* fix: modification call.
* add: entry in the readme.
* use dog dataset from hub.
* fix: params to clip (see the sketch after this list).
* add entry to the LoRA doc.
* add: tests for lora.
* remove unnecessary list comprehension.
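A minimal sketch of the "params to clip" fix, assuming the usual DreamBooth LoRA setup where LoRA layers are attached to the UNet and, optionally, the text encoder; `unet_lora_layers`, `text_encoder_lora_layers`, `args` and `accelerator` are illustrative names, not the exact script code:

```python
import itertools

# Fragment from the training loop: gradient clipping must cover every trainable
# LoRA parameter, so the text encoder's LoRA parameters are chained in whenever
# the text encoder is trained as well.
params_to_clip = (
    itertools.chain(unet_lora_layers.parameters(), text_encoder_lora_layers.parameters())
    if args.train_text_encoder
    else unet_lora_layers.parameters()
)
accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
```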
* Added distillation for quantization example on textual inversion.
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
* refined readme and code style.
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
* Update text2images.py
* refined code of model load and added compatibility check.
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
* fixed code style.
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
* fix C403 [*] Unnecessary `list` comprehension (rewrite as a `set` comprehension)
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
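For reference, the C403 rewrite is mechanical; a self-contained sketch:

```python
# Ruff C403 flags wrapping a list comprehension in set(); building the set
# directly avoids the intermediate list.
names = ["unet", "vae", "unet"]

flagged = set([n.upper() for n in names])   # C403: unnecessary list comprehension
fixed = {n.upper() for n in names}          # equivalent set comprehension
assert flagged == fixed
```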
---------
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
controlnet training: center crop input images to a multiple of 8
The pipeline code resizes inputs to multiples of 8.
Skipping that resizing in the training script caused the encoded image
to have different height/width dimensions than the encoded conditioning
image (which uses a separate encoder that is part of the ControlNet model).
We resize and center crop the inputs to make sure they are the same
size (as well as all other images in the batch). We also check that the
initial resolution is a multiple of 8.
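A minimal sketch of the preprocessing described above, assuming the usual torchvision transform pipeline used by the training scripts (names such as `resolution` are illustrative):

```python
from torchvision import transforms

resolution = 512  # value of --resolution

# The latent encoders downsample by a factor of 8, so the training resolution
# must be divisible by 8 for the encoded image and the encoded conditioning
# image to share the same latent size.
if resolution % 8 != 0:
    raise ValueError("resolution must be divisible by 8")

image_transforms = transforms.Compose(
    [
        transforms.Resize(resolution, interpolation=transforms.InterpolationMode.BILINEAR),
        transforms.CenterCrop(resolution),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),
    ]
)
```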
* Add SD/txt2img Community Pipeline to diffusers along with TensorRT utils
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
* update installation command
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
* update tensorrt installation
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
* changes
1. Update setting of cache directory
2. Address comments: merge utils and pipeline code.
3. Address comments: Add section in README
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
* apply make style
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
---------
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
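A hedged usage sketch for the community pipeline added in this PR; the `custom_pipeline` identifier is assumed from the example's filename, and the real pipeline has additional TensorRT-specific requirements (engine building, a CUDA device), so treat this as an outline rather than a verified recipe:

```python
import torch
from diffusers import DiffusionPipeline

# Community pipelines are loaded by name via `custom_pipeline`.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="stable_diffusion_tensorrt_txt2img",  # assumed identifier
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```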
* WIP controlnet training
- bugfix --streaming
- bugfix running report_to!='wandb'
- adds memory profile before validation
* Adds final logging statement.
* Sets train epochs to 11.
Looking at a longer ~16-epoch run, we only see good validation images
after ~11 epochs:
https://wandb.ai/andsteing/controlnet_fill50k/runs/3j2hx6n8
* Removes --logging_dir (it's not used).
* Adds --profile flags.
* Updates --output_dir=runs/fill-circle-{timestamp}.
* Compute mean of `train_metrics`.
Previously `train_metrics[-1]` was logged, resulting in very bumpy train
metrics.
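A sketch of what "mean of `train_metrics`" might look like in the Flax loop, assuming `train_metrics` is a list of per-step metric dicts:

```python
import jax
import jax.numpy as jnp

train_metrics = [{"loss": jnp.array(0.9)}, {"loss": jnp.array(0.5)}]

# Stack the per-step dicts leaf-wise and average across steps, instead of
# reporting only train_metrics[-1].
stacked = jax.tree_util.tree_map(lambda *leaves: jnp.stack(leaves), *train_metrics)
train_metric = jax.tree_util.tree_map(jnp.mean, stacked)  # {"loss": 0.7}
```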
* Improves logging a bit.
- adds l2_grads gradient norm logging
- adds steps_per_sec
- sets walltime as x coordinate of train/step
- logs controlnet_params config
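The `l2_grads` metric is presumably the global L2 norm of the gradient pytree; a small sketch of how that can be computed in JAX (not the exact training-script code):

```python
import jax
import jax.numpy as jnp

def global_l2_norm(grads):
    """L2 norm over all leaves of a gradient pytree (logged as `l2_grads`)."""
    leaves = jax.tree_util.tree_leaves(grads)
    return jnp.sqrt(sum(jnp.sum(jnp.square(g)) for g in leaves))
```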
* Adds --ccache (doesn't really help though).
* minor fix in controlnet flax example (#2986)
* fix the error when push_to_hub but not log validation
* controlnet_from_pt & controlnet_revision
* add intermediate checkpointing to the guide
* Bugfix --profile_steps
* Sets `TRACKER_PROJECT_NAME='controlnet_fill50k'`.
* Logs fractional epoch.
* Adds relative `walltime` metric.
* Adds `StepTraceAnnotation` and uses `global_step` instead of `step`.
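The `StepTraceAnnotation` change presumably wraps each training step so it shows up per-step in the profiler trace; a minimal sketch where `train_step`, `state`, `batch` and `global_step` stand in for the script's own names:

```python
import jax

# Annotate one optimizer step; using global_step keeps the step numbering
# consistent across the whole run.
with jax.profiler.StepTraceAnnotation("train", step_num=global_step):
    state, train_metric = train_step(state, batch)
```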
* Applied `black`.
* Streamlines commands in README a bit.
* Removes `--ccache`.
This makes only a very small difference (~1 min) with this model size, so the
option introduced in cdb3cc is removed.
* Re-ran `black`.
* Update examples/controlnet/README.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Converts spaces to tabs.
* Removes repeated args.
* Skips first step (compilation) in profiling
* Updates README with profiling instructions.
* Unifies tabs/spaces in README.
* Re-ran style & quality.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* fix: norm group test for UNet3D.
* fix: unet rejig.
* fix: unwrapping when running validation inputs.
* unwrapping the unet too.
* fix: device.
* better unwrapping.
* unwrapping before ema.
* unwrapping.
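A hedged sketch of the pattern these unwrapping commits converge on, assuming an accelerate-wrapped UNet and the usual EMA/validation helpers (all names here are illustrative):

```python
# Unwrap the accelerate-wrapped model once and reuse it for both the EMA update
# and the validation pipeline, instead of passing the wrapped module around.
unwrapped_unet = accelerator.unwrap_model(unet)

if args.use_ema:
    ema_unet.step(unwrapped_unet.parameters())

if accelerator.is_main_process and global_step % args.validation_steps == 0:
    log_validation(
        vae, text_encoder, tokenizer, unwrapped_unet, args, accelerator, weight_dtype, global_step
    )
```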
* ⚙️chore(train_controlnet) fix typo in logger message
* ⚙️chore(models) refactor modules order; make them the same as calling order
When printing the BasicTransformerBlock to stdout, I think it's important that the attributes are shown in the order they are called. Also, the "3. Feed Forward" comment previously did not make sense: it should have been next to self.ff, but it sat next to self.norm3 instead.
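A toy illustration of the idea (not the real BasicTransformerBlock code): declaring sub-modules in the order they are called keeps the printed module listing and the numbered comments aligned:

```python
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # 1. Self-Attention
        self.norm1 = nn.LayerNorm(dim)
        self.attn1 = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # 2. Cross-Attention
        self.norm2 = nn.LayerNorm(dim)
        self.attn2 = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # 3. Feed Forward -- the comment now sits next to self.ff
        self.norm3 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
```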
* correct many tests
* remove bogus file
* make style
* correct more tests
* finish tests
* fix one more
* make style
* make unclip deterministic
* ⚙️chore(models/attention) reorganize comments in BasicTransformerBlock class
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [Config] Fix config prints and save/load
* Only use potential nn.Modules for dtype and device
* Correct vae image processor
* make sure in_channels is not accessed directly
* make sure in_channels is only accessed via config
* Make sure schedulers only access config attributes
* Make sure to access config in SAG
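The config-access rule these commits enforce, in a short sketch (the model id is just an example checkpoint):

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

# Preferred: init arguments are read from the frozen config.
in_channels = unet.config.in_channels

# Discouraged (what these commits clean up): reading the value as a plain
# attribute, e.g. `unet.in_channels`.
```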
* Fix vae processor and make style
* add tests
* uP
* make style
* Fix more naming issues
* Final fix with vae config
* change more
* ensure validation image RGB not RGBA
* ensure validation image RGB not RGBA
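A minimal sketch of the RGB fix, assuming the validation/conditioning images are loaded with PIL:

```python
from PIL import Image

# PNGs can carry an alpha channel (RGBA); convert so the pipeline always
# receives three-channel RGB input.
validation_image = Image.open("conditioning_image.png").convert("RGB")
```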
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>