From 8a812e4e147a1ad3aa418c35015715cb7c9b0e36 Mon Sep 17 00:00:00 2001
From: Parth38 <58384863+Parth38@users.noreply.github.com>
Date: Sun, 3 Dec 2023 23:06:25 -0600
Subject: [PATCH] Update value_guided_sampling.py (#6027)

* Update value_guided_sampling.py

Changed the scheduler step call, as the predict_epsilon parameter is no longer present in the latest DDPMScheduler.

* Update value_guided_sampling.md

Updated a link to a working notebook.

---------

Co-authored-by: Sayak Paul
---
 docs/source/en/api/pipelines/value_guided_sampling.md  | 2 +-
 src/diffusers/experimental/rl/value_guided_sampling.py | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/en/api/pipelines/value_guided_sampling.md b/docs/source/en/api/pipelines/value_guided_sampling.md
index 01b7717f49..3c7e4977a6 100644
--- a/docs/source/en/api/pipelines/value_guided_sampling.md
+++ b/docs/source/en/api/pipelines/value_guided_sampling.md
@@ -24,7 +24,7 @@ The abstract from the paper is:
 
 *Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility.*
 
-You can find additional information about the model on the [project page](https://diffusion-planning.github.io/), the [original codebase](https://github.com/jannerm/diffuser), or try it out in a demo [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/reinforcement_learning_with_diffusers.ipynb).
+You can find additional information about the model on the [project page](https://diffusion-planning.github.io/), the [original codebase](https://github.com/jannerm/diffuser), or try it out in a demo [notebook](https://colab.research.google.com/drive/1rXm8CX4ZdN5qivjJ2lhwhkOmt_m0CvU0#scrollTo=6HXJvhyqcITc&uniqifier=1).
 
 The script to run the model is available [here](https://github.com/huggingface/diffusers/tree/main/examples/reinforcement_learning).
 
diff --git a/src/diffusers/experimental/rl/value_guided_sampling.py b/src/diffusers/experimental/rl/value_guided_sampling.py
index dfb27587d7..f46d3ac98b 100644
--- a/src/diffusers/experimental/rl/value_guided_sampling.py
+++ b/src/diffusers/experimental/rl/value_guided_sampling.py
@@ -113,7 +113,7 @@ class ValueGuidedRLPipeline(DiffusionPipeline):
             prev_x = self.unet(x.permute(0, 2, 1), timesteps).sample.permute(0, 2, 1)
 
             # TODO: verify deprecation of this kwarg
-            x = self.scheduler.step(prev_x, i, x, predict_epsilon=False)["prev_sample"]
+            x = self.scheduler.step(prev_x, i, x)["prev_sample"]
 
             # apply conditions to the trajectory (set the initial state)
             x = self.reset_x0(x, conditions, self.action_dim)
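For context on why dropping `predict_epsilon=False` is safe, here is a minimal sketch, not part of the patch, of how the epsilon-vs-sample choice is expressed in recent diffusers releases: it lives in the scheduler's `prediction_type` config rather than in a per-call `predict_epsilon` argument to `step()`. The scheduler settings and tensor shapes below are illustrative assumptions, not values taken from the value-guided pipeline's checkpoint.

```python
# Minimal sketch (not part of the patch): the epsilon-vs-sample choice is a
# scheduler config option (`prediction_type`) instead of a per-call
# `predict_epsilon` kwarg to `scheduler.step()`.
# Scheduler settings and tensor shapes here are illustrative only.
import torch

from diffusers import DDPMScheduler

# `prediction_type="sample"` covers what `predict_epsilon=False` used to express:
# the model output is interpreted as the predicted clean sample x_0.
scheduler = DDPMScheduler(num_train_timesteps=1000, prediction_type="sample")
scheduler.set_timesteps(20)

x = torch.randn(1, 32, 14)  # dummy (batch, horizon, transition_dim) trajectory
for t in scheduler.timesteps:
    model_output = torch.zeros_like(x)  # stand-in for the UNet's predicted sample
    # No `predict_epsilon` here; the behavior comes from the scheduler's config.
    x = scheduler.step(model_output, t, x).prev_sample
```

If a checkpoint's scheduler config already stores `prediction_type`, loading it with `DDPMScheduler.from_pretrained(...)` picks the setting up automatically, which is presumably why the pipeline can call `step()` without the extra kwarg.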