OneDiff 1.0 is out! (Acceleration of SD & SVD with one line of code)
This is an automated archive made by the Lemmit Bot.
The original was posted on /r/stablediffusion by /u/Just0by on 2024-04-16 16:05:04.
Hello everyone!
OneDiff 1.0 accelerates Stable Diffusion and Stable Video Diffusion models (UNet/VAE/CLIP based). We have received a lot of support and feedback from the community, big thanks!
The later version 2.0 will focus on DiT/Sora-like models.
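For readers new to OneDiff, the "one line of code" in the title refers to wrapping an existing diffusers pipeline with OneDiff's compiler. A minimal sketch, assuming the `onediffx.compile_pipe` entry point from the OneDiff repo; the fallback wrapper here is our own illustration, not part of OneDiff:

```python
# Hypothetical helper around OneDiff's advertised entry point. It falls back
# to the unmodified pipeline when OneDiff is not installed.
from importlib import util


def accelerate(pipe):
    """Return a OneDiff-compiled pipeline if onediffx is available, else `pipe` unchanged."""
    if util.find_spec("onediffx") is None:
        # OneDiff not installed: run the original pipeline as-is.
        return pipe
    from onediffx import compile_pipe  # the "one line of code" entry point
    return compile_pipe(pipe)


# Typical usage with diffusers (requires torch, diffusers, and a CUDA GPU):
#
#   from diffusers import StableDiffusionXLPipeline
#   pipe = StableDiffusionXLPipeline.from_pretrained(
#       "stabilityai/stable-diffusion-xl-base-1.0"
#   ).to("cuda")
#   pipe = accelerate(pipe)  # the one line in question
#   image = pipe(prompt="a photo of a cat").images[0]
```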
OneDiff 1.0's updates mainly address the issues in milestone v0.13, which includes the following new features and several bug fixes:
- OneDiff Quality Evaluation
- Reuse compiled graph
- Refine support for Playground v2.5
- Support ComfyUI-AnimateDiff-Evolved
- Support ComfyUI_IPAdapter_plus
- Support Stable Cascade
- Improvements
- Quantization tools for the enterprise edition
- SD-WebUI supports offline quantized models
State-of-the-art performance
SDXL E2E time
- Model stabilityai/stable-diffusion-xl-base-1.0
- Image size 1024×1024, batch size 1, steps 30
- NVIDIA A100 80G SXM4
SVD E2E time
- Model stabilityai/stable-video-diffusion-img2vid-xt
- Image size 576×1024, batch size 1, steps 25, decoder chunk size 5
- NVIDIA A100 80G SXM4
More intro about OneDiff: https://github.com/siliconflow/onediff?tab=readme-ov-file#about-onediff
Looking forward to your feedback!