## Overview

These examples show how to run [Diffuser](https://arxiv.org/abs/2205.09991) in Diffusers. There are two ways to use the single script, `run_diffuser_locomotion.py`, controlled by the variable described below.

The key option is the variable `n_guide_steps`. When `n_guide_steps=0`, the trajectories are sampled from the diffusion model but are not fine-tuned to maximize reward in the environment. By default, `n_guide_steps=2` to match the original implementation.
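
For reference, the sketch below shows roughly how `n_guide_steps` could be passed to the experimental value-guided sampling pipeline. It is a minimal sketch, not the full example script: the environment id, the `bglick13/hopper-medium-v2-value-function-hor32` checkpoint name, and the call arguments are assumptions based on the locomotion example and may differ between versions. It also requires the RL dependencies listed below.

```python
# Minimal sketch (assumptions noted above): value-guided planning on a D4RL
# hopper task with the experimental ValueGuidedRLPipeline.
import d4rl  # noqa: F401  -- importing d4rl registers the offline-RL environments in gym
import gym

from diffusers.experimental import ValueGuidedRLPipeline

env = gym.make("hopper-medium-v2")
pipeline = ValueGuidedRLPipeline.from_pretrained(
    "bglick13/hopper-medium-v2-value-function-hor32",  # assumed checkpoint name
    env=env,
)

obs = env.reset()
for _ in range(100):
    # n_guide_steps=0: sample trajectories from the diffusion model without
    # reward guidance; n_guide_steps=2 matches the original Diffuser setup.
    action = pipeline(obs, planning_horizon=32, n_guide_steps=2, scale=0.1)
    obs, reward, done, _ = env.step(action)
    if done:
        break
env.close()
```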

You will need some RL-specific requirements to run the examples:

```sh
pip install -f https://download.pytorch.org/whl/torch_stable.html \
                free-mujoco-py \
                einops \
                gym==0.24.1 \
                protobuf==3.20.1 \
                git+https://github.com/rail-berkeley/d4rl.git \
                mediapy \
                Pillow==9.0.0
```
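
Once these dependencies are installed, the example can be launched with `python run_diffuser_locomotion.py` from this directory (the exact invocation may vary with your setup).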