ComfyUI 31 Lightning Checkpoints Compared Against Dreamshaper Turbo, Stable Diffusion
- Published 15 Sep 2024
- With Stable Diffusion AI image generation, the Dreamshaper Turbo checkpoint has been my model of choice since its release. It needs just 6 sampling steps, which makes it fast, and I prefer its images even over those from non-Turbo checkpoints. A new trend is 'Lightning' checkpoints, which promise nice images in just 4 steps. In this video I compare 6 checkpoints:
- Dreamshaper Turbo 2.1
- RealitiesEdge Turbo 7
- Dreamshaper Lightning
- RealitiesEdge Lightning 7
- Juggernaut Lightning 9
- Realvis Lightning 4
Nice comparison, thanks Rudy. In my experience, Lightning models can give you a punchier, more contrasty look at that speed, but they can also create more artifacts. Dreamshaper Turbo is more consistent, slightly lacking in contrast and slightly less saturated, but good overall. I mostly use Fooocus, sometimes ComfyUI and Forge. Before I upscale any portrait photo, I do a quick skin retouch in Photoshop or Lightroom: usually opening up the shadows and adding some texture and grain. After upscaling, the skin then looks less plasticky and more natural.
Yes, I bet editing the image in Photoshop before upscaling can make a huge improvement toward a more natural look of skin, which is still an issue with AI attempts at realistic photography.
Turbo seems to be much better at non-realistic images, especially if you want finer detail such as expressive brushwork. I think Lightning is good for quick experimentation, mostly with semi-realistic or realistic images.
In the first image, the girl on the right has two feet inside one sneaker, which leads me to believe she has lots of trouble getting to school/work.
I'd be curious to see you do a similar comparison with LCM, either with LCM versions of those models or with the regular versions using the LCM LoRA.
I'll take a new look now that several checkpoints have probably been updated, but my first tries with LCM were disappointing, which made me stick with Dreamshaper Turbo.
@rudyshobbychannel Thanks, I've had good luck with the LCM LoRA, especially with SD 1.5 models. Most models seem to work well with it, although I've seen a few that produced terrible results. The nice thing is that I can use your excellent tips on the Kohya Deep Shrink and Self-Attention Guidance nodes, combine them with the LCM LoRA, and get a perfect 1920x1080 image in 6-12 steps in one sampling pass. Your tips have really transformed how I use Comfy.
Hello! Thank you very much, man! I love your thumbnail image and the one shown at 14:42 so much. If you still have a copy of that original image "PNG with workflow", could you please upload it somewhere and share it? Thank you in advance! Subscribed!
You're lucky, I found them in my Recycle Bin. :) I placed them (and a couple more) in a folder called Workflow Images in this GDrive: drive.google.com/drive/folders/1C4hnb__HQB2Pkig9pH7NWxQ05LJYBd7D?usp=drive_link
They are the 1024 px images after the first sampler; they still need to be sampled a second time and upscaled to become 'watchable'.
@rudyshobbychannel Oh my!!! Thank you so much Rudy!!! Have a nice day, sir!!!