EDMEulerScheduler
The Karras formulation of the Euler scheduler (Algorithm 2) from the Elucidating the Design Space of Diffusion-Based Generative Models paper by Karras et al. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original k-diffusion implementation by Katherine Crowson.
EDMEulerScheduler
class diffusers.EDMEulerScheduler
< source >( sigma_min: float = 0.002 sigma_max: float = 80.0 sigma_data: float = 0.5 sigma_schedule: typing.Literal['karras', 'exponential'] = 'karras' num_train_timesteps: int = 1000 prediction_type: typing.Literal['epsilon', 'v_prediction'] = 'epsilon' rho: float = 7.0 final_sigmas_type: typing.Literal['zero', 'sigma_min'] = 'zero' )
Parameters
- sigma_min (`float`, optional, defaults to `0.002`) — Minimum noise magnitude in the sigma schedule. This was set to 0.002 in the EDM paper [1]; a reasonable range is [0, 10].
- sigma_max (`float`, optional, defaults to `80.0`) — Maximum noise magnitude in the sigma schedule. This was set to 80.0 in the EDM paper [1]; a reasonable range is [0.2, 80.0].
- sigma_data (`float`, optional, defaults to `0.5`) — The standard deviation of the data distribution. This is set to 0.5 in the EDM paper [1].
- sigma_schedule (`Literal["karras", "exponential"]`, optional, defaults to `"karras"`) — Sigma schedule used to compute the `sigmas`. By default, the schedule introduced in the EDM paper (https://huggingface.co/papers/2206.00364) is used. The `"exponential"` schedule was incorporated in this model: https://huggingface.co/stabilityai/cosxl.
- num_train_timesteps (`int`, optional, defaults to `1000`) — The number of diffusion steps used to train the model.
- prediction_type (`Literal["epsilon", "v_prediction"]`, optional, defaults to `"epsilon"`) — Prediction type of the scheduler function; `"epsilon"` predicts the noise of the diffusion process, and `"v_prediction"` predicts the velocity (see section 2.4 of the Imagen Video paper).
- rho (`float`, optional, defaults to `7.0`) — The rho parameter used for calculating the Karras sigma schedule, set to 7.0 in the EDM paper [1].
- final_sigmas_type (`Literal["zero", "sigma_min"]`, optional, defaults to `"zero"`) — The final `sigma` value for the noise schedule during the sampling process. If `"sigma_min"`, the final sigma is the same as the last sigma in the training schedule. If `"zero"`, the final sigma is set to 0.
Implements the Euler scheduler in EDM formulation as presented in Karras et al. 2022 [1].
[1] Karras, Tero, et al. “Elucidating the Design Space of Diffusion-Based Generative Models.” https://huggingface.co/papers/2206.00364
This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic methods the library implements for all schedulers such as loading and saving.
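A short construction sketch; the overridden values below are only illustrations of the config options listed above, not recommendations:

```python
from diffusers import EDMEulerScheduler

# Defaults follow the EDM paper (sigma_min=0.002, sigma_max=80.0, sigma_data=0.5, rho=7.0).
scheduler = EDMEulerScheduler()

# Individual config values can be overridden, e.g. to use the exponential sigma schedule.
scheduler = EDMEulerScheduler(sigma_schedule="exponential", prediction_type="v_prediction")
print(scheduler.config)
```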
add_noise
< source >( original_samples: Tensor noise: Tensor timesteps: Tensor ) → torch.Tensor
Parameters
- original_samples (`torch.Tensor`) — The original samples to which noise will be added.
- noise (`torch.Tensor`) — The noise tensor to add to the original samples.
- timesteps (`torch.Tensor`) — The timesteps at which to add noise, determining the noise level from the schedule.
Returns
torch.Tensor
The noisy samples with added noise scaled according to the timestep schedule.
Add noise to the original samples according to the noise schedule at the specified timesteps.
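A minimal sketch of noising a sample to a chosen level of the schedule (the tensor shapes and the step index are arbitrary placeholders):

```python
import torch
from diffusers import EDMEulerScheduler

scheduler = EDMEulerScheduler()
scheduler.set_timesteps(num_inference_steps=30)

# Placeholder "clean" latents and matching Gaussian noise.
original_samples = torch.randn(1, 4, 64, 64)
noise = torch.randn_like(original_samples)

# Noise the clean sample to the level of the 10th inference timestep,
# e.g. to start an image-to-image style run partway through the schedule.
timesteps = scheduler.timesteps[10:11]
noisy = scheduler.add_noise(original_samples, noise, timesteps)
```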
index_for_timestep
< source >( timestep: typing.Union[float, torch.Tensor] schedule_timesteps: typing.Optional[torch.Tensor] = None ) → int
Parameters
- timestep (`float` or `torch.Tensor`) — The timestep value to find in the schedule.
- schedule_timesteps (`torch.Tensor`, optional) — The timestep schedule to search in. If `None`, uses `self.timesteps`.
Returns
int
The index of the timestep in the schedule. For the very first step, returns the second index if multiple matches exist to avoid skipping a sigma when starting mid-schedule (e.g., for image-to-image).
Find the index of a given timestep in the timestep schedule.
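A small lookup sketch (the step count and index are arbitrary):

```python
from diffusers import EDMEulerScheduler

scheduler = EDMEulerScheduler()
scheduler.set_timesteps(num_inference_steps=30)

# Look up where a timestep from the schedule sits in `self.timesteps`.
idx = scheduler.index_for_timestep(scheduler.timesteps[5])
print(idx)  # 5
```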
precondition_inputs
< source >( sample: Tensor sigma: typing.Union[float, torch.Tensor] ) → torch.Tensor
Precondition the input sample by scaling it according to the EDM formulation.
precondition_noise
< source >( sigma: typing.Union[float, torch.Tensor] ) → torch.Tensor
Precondition the noise level by applying a logarithmic transformation.
precondition_outputs
< source >( sample: Tensor model_output: Tensor sigma: typing.Union[float, torch.Tensor] ) → torch.Tensor
Parameters
- sample (`torch.Tensor`) — The input sample tensor.
- model_output (`torch.Tensor`) — The direct output from the learned diffusion model.
- sigma (`float` or `torch.Tensor`) — The current sigma (noise level) value.
Returns
torch.Tensor
The denoised sample computed by combining the skip connection and output scaling.
Precondition the model outputs according to the EDM formulation.
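Together, precondition_inputs, precondition_noise, and precondition_outputs implement the EDM preconditioning from Table 1 of Karras et al. [1]. The sketch below reproduces the epsilon-prediction formulas in plain PyTorch for reference; it illustrates the math rather than the library's exact implementation, and assumes a scalar sigma for simplicity:

```python
import math
import torch

def edm_precondition(sample: torch.Tensor, model_output: torch.Tensor,
                     sigma: float, sigma_data: float = 0.5):
    # precondition_inputs: scale the noisy sample by c_in(sigma).
    c_in = 1.0 / math.sqrt(sigma**2 + sigma_data**2)
    scaled_sample = c_in * sample

    # precondition_noise: map sigma to the network's noise conditioning,
    # c_noise(sigma) = ln(sigma) / 4.
    c_noise = 0.25 * math.log(sigma)

    # precondition_outputs (epsilon prediction): combine the skip connection
    # and the scaled model output into the denoised estimate.
    c_skip = sigma_data**2 / (sigma**2 + sigma_data**2)
    c_out = sigma * sigma_data / math.sqrt(sigma**2 + sigma_data**2)
    denoised = c_skip * sample + c_out * model_output

    return scaled_sample, c_noise, denoised
```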
scale_model_input
< source >( sample: Tensor timestep: typing.Union[float, torch.Tensor] ) → torch.Tensor
Scale the denoising model input to match the Euler algorithm. Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
set_begin_index
< source >( begin_index: int = 0 )
Sets the begin index for the scheduler. This function should be run by the pipeline before inference.
set_timesteps
< source >( num_inference_steps: typing.Optional[int] = None device: typing.Union[str, torch.device, NoneType] = None sigmas: typing.Union[torch.Tensor, typing.List[float], NoneType] = None )
Parameters
- num_inference_steps (`int`, optional) — The number of diffusion steps used when generating samples with a pre-trained model.
- device (`str` or `torch.device`, optional) — The device to which the timesteps should be moved. If `None`, the timesteps are not moved.
- sigmas (`torch.Tensor` or `List[float]`, optional) — Custom sigmas to use for the denoising process. If not defined, the default behavior when `num_inference_steps` is passed will be used.
Sets the discrete timesteps used for the diffusion chain (to be run before inference).
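A minimal sketch of preparing the schedule before a denoising loop (the step count and device are placeholders):

```python
from diffusers import EDMEulerScheduler

scheduler = EDMEulerScheduler()

# Build the discrete schedule for 25 denoising steps on the CPU.
scheduler.set_timesteps(num_inference_steps=25, device="cpu")

# The preconditioned timesteps are now available for the sampling loop.
print(scheduler.timesteps.shape)
```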
step
< source >( model_output: Tensor timestep: typing.Union[float, torch.Tensor] sample: Tensor s_churn: float = 0.0 s_tmin: float = 0.0 s_tmax: float = inf s_noise: float = 1.0 generator: typing.Optional[torch._C.Generator] = None return_dict: bool = True pred_original_sample: typing.Optional[torch.Tensor] = None ) → EDMEulerSchedulerOutput or tuple
Parameters
- model_output (`torch.Tensor`) — The direct output from the learned diffusion model.
- timestep (`float` or `torch.Tensor`) — The current discrete timestep in the diffusion chain.
- sample (`torch.Tensor`) — A current instance of a sample created by the diffusion process.
- s_churn (`float`, optional, defaults to `0.0`) — The amount of stochasticity to add at each step. Higher values add more noise.
- s_tmin (`float`, optional, defaults to `0.0`) — The minimum sigma threshold below which no noise is added.
- s_tmax (`float`, optional, defaults to `float("inf")`) — The maximum sigma threshold above which no noise is added.
- s_noise (`float`, optional, defaults to `1.0`) — Scaling factor for the noise added to the sample.
- generator (`torch.Generator`, optional) — A random number generator for reproducibility.
- return_dict (`bool`, optional, defaults to `True`) — Whether or not to return an EDMEulerSchedulerOutput or tuple.
- pred_original_sample (`torch.Tensor`, optional) — The predicted denoised sample from a previous step. If provided, skips recomputation.
Returns
EDMEulerSchedulerOutput or tuple
If `return_dict` is `True`, an EDMEulerSchedulerOutput is returned; otherwise, a tuple is returned where the first element is the previous sample tensor and the second element is the predicted original sample tensor.
Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion process from the learned model outputs (most often the predicted noise).
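A condensed denoising-loop sketch showing how set_timesteps(), scale_model_input(), and step() fit together. The denoiser is a dummy stand-in for an EDM-trained network, the latent shape and step count are placeholders, and `init_noise_sigma` is the initial-noise scale exposed by Euler-style schedulers:

```python
import torch
from diffusers import EDMEulerScheduler

scheduler = EDMEulerScheduler()
scheduler.set_timesteps(num_inference_steps=30)

def denoiser(x, t):
    # Dummy model standing in for a real EDM-trained network.
    return torch.zeros_like(x)

# Start from pure noise scaled to the initial noise level.
sample = torch.randn(1, 4, 64, 64) * scheduler.init_noise_sigma

for t in scheduler.timesteps:
    # Scale the model input as required by the EDM/Euler formulation.
    model_input = scheduler.scale_model_input(sample, t)
    model_output = denoiser(model_input, t)
    # One Euler step from the current sigma down to the next one.
    sample = scheduler.step(model_output, t, sample).prev_sample
```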
EDMEulerSchedulerOutput
class diffusers.schedulers.scheduling_edm_euler.EDMEulerSchedulerOutput
< source >( prev_sample: Tensor pred_original_sample: typing.Optional[torch.Tensor] = None )
Parameters
- prev_sample (
torch.Tensorof shape(batch_size, num_channels, height, width)for images) — Computed sample(x_{t-1})of previous timestep.prev_sampleshould be used as next model input in the denoising loop. - pred_original_sample (
torch.Tensorof shape(batch_size, num_channels, height, width)for images) — The predicted denoised sample(x_{0})based on the model output from the current timestep.pred_original_samplecan be used to preview progress or for guidance.
Output class for the scheduler’s step function output.