[community pipeline] Add RF-inversion Flux pipeline by linoytsaban · Pull Request #9816 · huggingface/diffusers
This PR adds an RF-Inversion Flux pipeline to support real-image editing & zero-shot stylization with Flux.
Thanks to community contributions 🙌🏻 - this PR is adapted from the diffusers implementation https://github.com/raven38/rf_inversion by @raven38, as well as https://github.com/DarkMnDragon/rf-inversion-diffuser by @DarkMnDragon.
Note - start_timestep & stop_timestep were converted (from the original implementation) to floats representing the fraction of the total number of inference steps.
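Roughly, the fractions should map to discrete step indices along these lines (a minimal sketch of the conversion, not the pipeline's exact code):

# sketch (assumption): fractional start/stop timesteps -> discrete step indices
num_inference_steps = 28
start_step = int(0.0 * num_inference_steps)                             # start_timestep=0    -> step 0
stop_step = min(int(0.38 * num_inference_steps), num_inference_steps)   # stop_timestep=0.38  -> step 10
# the image-guided part of the edit runs between start_step and stop_step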
Example:
- get pipeline
# download the pipeline file from the PR branch (once merged, use the main-branch URL below instead)
!wget https://raw.githubusercontent.com/linoytsaban/diffusers/rf-inversion/examples/community/pipeline_flux_rf_inversion.py
# !wget https://raw.githubusercontent.com/huggingface/diffusers/main/examples/community/pipeline_flux_rf_inversion.py
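Once this is merged, downloading the file manually shouldn't be necessary; loading it as a community pipeline along the lines of the sketch below should also work (assuming the custom pipeline id matches the file name, pipeline_flux_rf_inversion):

import torch
from diffusers import DiffusionPipeline

# assumption: after the merge, the community pipeline can be pulled by name
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    custom_pipeline="pipeline_flux_rf_inversion",
    torch_dtype=torch.bfloat16,
)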
- load model
import torch
from pipeline_flux_rf_inversion import RFInversionFluxPipeline

pipe = RFInversionFluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")
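If the model doesn't fit on the GPU, diffusers' model CPU offloading can be used instead of moving the whole pipeline to CUDA (optional, shown here as an alternative to pipe.to("cuda")):

# optional: keep submodules on CPU and move them to the GPU only when they are used
pipe.enable_model_cpu_offload()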
- load & invert image
from PIL import Image

# load the input image and invert it into latents that are later used to guide the edit
image = Image.open("./example/cat.png").resize((1024, 1024))
inverted_latents, image_latents, latent_image_ids = pipe.invert(
    image=image,
    num_inversion_steps=28,
    gamma=0.5,
)
- edit
pipe("portrait of a tiger",
inverted_latents=inverted_latents,
image_latents=image_latents,
latent_image_ids=latent_image_ids,
start_timestep=0,
stop_timestep=.38,
num_inference_steps=28,
eta=0.9,
).images[0]
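Since the inversion only has to run once per image, the same latents can presumably be reused for further edits with a different prompt or different guidance settings, e.g.:

# reuse the latents from the single invert() call above for another edit
image_2 = pipe(
    "portrait of a lion",
    inverted_latents=inverted_latents,
    image_latents=image_latents,
    latent_image_ids=latent_image_ids,
    start_timestep=0,
    stop_timestep=0.5,  # apply the image guidance over a larger fraction of the steps
    num_inference_steps=28,
    eta=0.9,
).images[0]

image.save("tiger.png")
image_2.save("lion.png")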
