Detailed text guide (Automatic1111 and ComfyUI) for Patreon subscribers here: www.patreon.com/posts/sdxl-turbo-guide-94305599
The live painting workflow is literally a glimpse into the future. I imagine at some point we'll be creating everything with this "instant refresh" technique, even inpainting. Zoom in on an image that has mutated hands, paint over them a few times until they look right, retouch the hair, retouch a few things in the background, and finally put it through a very high quality upscaler like Magnific.
I was surprised to learn that adding LCM to my model not only sped up generation inference, but actually helped it reach near-SDXL quality at higher resolutions, and even fixed hands and a few other things. LCM with the LCM sampler is godlike.
The live painting is incredible it's literally like translating thoughts into images
Not even close.
Try to really imagine an image.
Then try to generate your REAL thought as an image. GOOD LUCK!
where can i get the "painter node"?? It doesn't appear when I search for it within the nodes in Comfyui
AlekPet/ComfyUI_Custom_Nodes_AlekPet
search for alekpet I think, it's part of a suite
I actually prefer Krita for this, it can even take my scribbles and make them amazing, plus it has layers, lots of tools, etc. It will hook into ComfyUI and either generate standard images, or you can use it in a live mode for both SDXL/1.5, and it supports turbo models.
I agree. I've been using Epic Realism in Krita for live landscape concepting for my Twin Peaks-inspired game, and it generates an image about every 4 seconds on my RTX 2060 Super. I haven't tried SDXL Turbo in there but will soon, since the speed seems nuts here. And being able to add pose vector layers to add people and pose them is great in Krita.
PS. How do you keep it from "muting" the colors? I put in vivid ones, and the result is muted.
@@westingtyler1 did you check your vae is loading?
@@sznikers hm not sure how to check if it loaded. but I'll check the settings to ensure it's part of the profile. but also, I know in a1111 there's a tickbox that says "do color correction to keep original colors in image to image", and I wonder if there is something like that inside krita.
@@westingtyler1 try to force your own VAE in a1111, maybe your model doesn't have one baked in.
@@westingtyler1 I haven't had any issues with muted colours, although I tend to stick to a couple of models: RealisticVision, Dreamscaper 8/SDX, Juggernaut, and now a couple of Turbo models (I particularly like PixelwaveTurbo). I would try different VAEs: the 840000-ema-pruned, Anything-v3, and the SDXL VAE.
WOWW, thanks sir Sebastian. You always bring the best videos for AI. I'm glad to be subbed to you. Best content.
Great guide
Thanks, you superstar you!
Another great vid, thank you! I am having a bit of trouble with the workflow... I installed the Inpaint custom nodes, but these nodes are missing: "seed" and "image scale side to side"... I'd really appreciate a few pointers. Thank you again, and keep it up!!
Did you find anything? I have the same problem.
Nice vid on the live painting. However, the pics keep filling the drive because it saves every picture... how can we disable the auto-saving in ComfyUI?
ah i see, use preview node instead of image save node
so how do we control how much weight the drawing has on the output image? it doesn't seem to affect it enough
My daughter just told me the hailing taxis joke last night. So funny to hear you say it today.
Hah, she must have amazing taste in jokes!
@sebastian kamph for the live painting mode, what's needed is for the software to write the result to the same filename until you press a button. That way turbo live paint wouldn't fill your hard drive with infinite pictures, but would only save the results the user actually wants.
You could just have the image result as a preview, then it won't write to disk, then you can just right click on it and save it. Or have a switch and pass it on to an upscaler
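The overwrite-one-file idea above can be sketched in a few lines. This is a minimal illustration of the pattern only, with hypothetical file names and a stand-in for the generated image bytes, not actual ComfyUI code:

```python
import shutil
import time
from pathlib import Path

PREVIEW = Path("live_preview.png")  # hypothetical fixed preview path

def show(result: bytes) -> None:
    """Overwrite a single preview file instead of saving every generation."""
    PREVIEW.write_bytes(result)

def keep() -> Path:
    """Promote the current preview to a timestamped, permanent file."""
    saved = PREVIEW.with_name(f"kept_{int(time.time())}.png")
    shutil.copy(PREVIEW, saved)
    return saved

# Simulate a few live-paint refreshes, then keep only the last result.
for frame in (b"v1", b"v2", b"v3"):
    show(frame)
saved = keep()
print(PREVIEW.read_bytes() == saved.read_bytes() == b"v3")  # True
```

Only one preview file ever exists on disk, and an explicit "keep" action is what creates a permanent copy, which is exactly the behavior the comment asks for.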
Excellent video! Thanks a lot Sebastian, great as usual, but where is the live painting workflow?
Hi, I added it in the description now.
Much appreciated !@@sebastiankamph
Is live painting available for A1111?
you have not linked the turbo workflow!
Ok, I used the same prompt, the same checkpoint, and the settings were also the same as in your video (Automatic1111). While generation was at 50% the image looked good, just blurry, but when it finishes it is always a total mess. How come the quality of your frog is muuuch better than mine?
You had me at Automatic 1111. So glad you haven't gone fully to the dark side of Comfy like other youtubers. Appreciate you trying to be neutral
yeah comfy is not comfy at all
yeah I dont get the appeal of comfyui unless someone is doing some unique complex chain. comfy DOES seem to render images a bit faster than a1111 though in my experience.
It's a pain to switch, but once you've done your charts it's way quicker.
@@sznikers It really isn't that much quicker. You can do all the same presets in Auto1111, and the PNG Info tab also allows you to drag in a photo generated with Auto1111 and auto-fill the settings, just like ComfyUI.
@@TPCDAZ Well, I don't have to do anything in Comfy once I have my workflow done. That's the whole point of Comfy: you don't have to sit and click things like in A1111. You start the workflow, and after many automatically executed steps, all of which you've adjusted to your needs, you get your end result.
What sampler, steps and cfg did you recommend for sdxl turbo?
Nvm, I think we just stick with no negative prompt, at 1 CFG and 2 steps. xd
Does this thing need the LCM thing you mentioned previously? Does this use way less ram then?
This does not require lcm
@@sebastiankamph but it still uses way less RAM, right?
It'll still need enough to be able to run SDXL, but yes, a little less over time due to it needing fewer steps. @@doords
"I'm just gonna stop this so my hard drive isn't full with images of bottles with sunrises in them" :D I don't know why, but that was funny! Nice work, Seb! I already saw a similar way of live painting using a plugin for Krita. You can use SD for generative fills as well, similar to Photoshop! Although in Krita you have more control (like a pose editor etc.), this workflow is simpler to use!
Glad you enjoyed it!
So I'm trying to load these workflows.... unless I'm mistaken you've only uploaded them as png images? Why not the json file of the workflow template?
Turbo is absolutely amazing.
My A1111 is up to date, but I do not have the R-ESRGAN 4x+ upscaler, only ESRGAN_4x. I'm guessing they are not the same, since the results I'm getting are not good at all: a bunch of fused faces.
Strangely enough, I've set up exactly as the ComfyUI guide shows, which I can see you're doing as well, but for some reason my generations take minutes. I'm trying to discover what is going wrong.
Very informative! And thanks for covering *both* Comfy and A1111. I'm sure, like many, I use both. I don't like seeing development/support for A1111 decline, and I think a lot of that is coming from some kind of elitism thing that's started, where if you don't use Comfy you're not doing it right. Eh, you can use either... sometimes you don't want overcomplexity to make a run of images. Cheers.
Absolutely! I totally agree with you. I prefer A1111 but tried Comfyui. Both are interesting.
comfy is cleaner, it allows you to organize the nodes how you like and be creative in the mix and match of nodes in new interesting ways. A1111 is limited in that sense.
@@bandinopla But sometimes, I must say, the A1111 UI was more comfortable than ComfyUI, because node-based things always felt daunting, even Blender's nodes. And I'm not kidding: even though I'm okay with nodes, my friends just say nah to that thing.
What are the drawbacks of a turbo model?
this. I would assume it's quality but an answer would be nice.
Not good enough quality for people.
The quality is not as great, but compared to LCM and other stuff, the quality-vs-speed tradeoff is still very, very good. Especially with custom SDXL Turbo models. For some output images I couldn't tell if it was turbo or not.
You are my hero! :O Thanks (cfg 1.0 my god....... thanks)
I'm using an Nvidia Tesla card on Azure and it does not seem to work: at 5 steps I get an image, but 1 step does not.
really useful, thank you :)
You're welcome!
I'm using the DirectML version because I have an AMD card, so I have to use my CPU, and it's PAINFULLY slow. Will this help with that? Or is it only for those using GPUs?
I actually have a really decent GPU (RX 5700 XT) but I sadly can't use it since SD hardly supports AMD.
very nice. I’ll try that 🎉
Awesome
I'm puzzled by this. I am using the exact same settings: CFG scale 1, sampling steps 1, 512x512 with Hires. fix 2x, same sampler, same upscaler. All the images have the symptoms of using 712 or 1024 back in the days of SD 1.5: multiple eyes, cascading limbs, etc. I'm truly confused. Classic SDXL works just fine. Using AUTO1111. Any ideas what I'm missing?
Hey Seb, could you do a series of videos on animation with Stable Diffusion A1111? Or ComfyUI perhaps? I've been looking into doing animation, but I'm too distracted to compile all the info, plus everybody seems to have a different way of doing it and it's quite confusing. Would love your help with making a nice animation.
I got a few animation videos but will probably do more over time 🌟
Could you update the install guide for 2023? There has been so much new content added, and following the tutorials, your Stable Diffusion UI is different from mine, despite me using the latest version.
Having issues getting my manager to look like yours; also, it won't load the image scale node. Is there a git repo for that one?
You might have to go to the custom nodes folder, then into the ComfyUI Manager folder, and do a git pull from the command line to update the manager.
wow that's very useful
Can it be used with img2img and controlnet?
Do they make any for non sdxl mondel Sabastian
Weird, I've been using Turbo in A1111 with standard SDXL resolutions (1024x1024, 1152x896, etc), and I didn't have any problems. Not sure what I'm doing different 🤷♂
EDIT: I see what I'm doing different. I'm using 10 samples and a CFG of 4.
How do I install it in Forge? It's lagging in Forge and generation time is too long. Please make a tutorial on it.
It's SDXL, but the base latent is 512x512?
Question!! My son is doing a project on "The Mesolithic period", and he wants to use some examples of AI art for his talk on the subject. The problem is, all my attempts using ComfyUI are a mutated group of cavemen. He's nervous as it is, and he said my AI art will make him the laughing stock. So, are there any simple PROMPTS that I could use to produce good results? Cheers.
I got prompt styles on my Patreon. But for easy quick images, check out Fooocus.
my lcm lora is not working for sdxl :(
Forget lcm! 😊
Can it work as fast with inpainting?
It would be nice if there was a way for Auto-queue to just overwrite the last image so it doesn't fill up your hard drive.
Cant you just use the preview node (not save node)?, maybe have a disable save node for ones you want to keep, when you connect it it will only save one image unless you make changes upstream
@@fretts8888 That's a good idea
cool stuff!
What graphics card are you using?
Rtx 4090
Can you please provide a specification for a $2400 PC build that will run AI models locally as fast as possible at that price? What should we consider when building a PC solely for running AI models locally, with only occasional gaming? What really helps run these models fastest locally? Please provide related information as well. Thank you.
An Nvidia GPU with as many GB of VRAM as possible. The rest is irrelevant currently.
thank you so much.@@sebastiankamph
Can you please provide the minimum VRAM requirement? @@sebastiankamph
despite video title, there's no live painting in A1111 -_-
I was watching this 2 months ago and I was thinking - he has 4090, that's why it's so fast. And only today I installed Turbo model. It's 1 sampling step generation! WTF? O.o
I really just come here for the jokes, the learning is a bonus
Superstar you!
But weren't SDXL models trained on 1024x1024 datasets? That's confusing.
I agree, very confusing indeed. Apparently it wasn't the case for turbo
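The confusion above is easier to see in numbers: SD and SDXL VAEs downsample images by a factor of 8 per dimension, so a model's training resolution determines the latent size the U-Net actually denoises. A quick sketch of that arithmetic (the factor of 8 holds for both model families):

```python
VAE_FACTOR = 8  # SD 1.5 and SDXL VAEs downsample 8x in each dimension

def latent_size(width: int, height: int) -> tuple:
    """Spatial size of the latent that the U-Net denoises."""
    return (width // VAE_FACTOR, height // VAE_FACTOR)

print(latent_size(512, 512))    # (64, 64): SDXL Turbo's native resolution
print(latent_size(1024, 1024))  # (128, 128): standard SDXL
```

Generating far outside a model's native latent size is what produces the duplicated-limb artifacts several commenters describe.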
A potato computer would not have the VRAM for SDXL. Or?
👍👍👍👍
👍
My 3060 Ti took 3 times longer with this model than with standard 1.5.
I come for the Dad joke but stay for the AI
The real mvp
Dragging and dropping PNG images to load workflows... I don't know how it works, but it's legit.
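For anyone curious how that trick works: ComfyUI writes the workflow graph as JSON into the PNG's text metadata, so the image file carries its own recipe. A stdlib-only sketch of reading those tEXt chunks back; a minimal stand-in PNG is built here instead of a real render, and while the "workflow" keyword matches what ComfyUI uses, treat the details as an assumption:

```python
import json
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def read_png_text(blob: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG byte string."""
    assert blob[:8] == PNG_SIG, "not a PNG"
    out, pos = {}, 8
    while pos < len(blob):
        length, = struct.unpack(">I", blob[pos:pos + 4])
        ctype = blob[pos + 4:pos + 8]
        data = blob[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":  # keyword, NUL separator, latin-1 text
            key, _, text = data.partition(b"\x00")
            out[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return out

# Build a minimal stand-in PNG carrying a "workflow" tEXt chunk,
# mimicking how ComfyUI embeds the graph in its output images.
workflow = json.dumps({"nodes": [], "links": []})
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1 grayscale header
blob = (PNG_SIG + png_chunk(b"IHDR", ihdr)
        + png_chunk(b"tEXt", b"workflow\x00" + workflow.encode("latin-1"))
        + png_chunk(b"IEND", b""))

print(read_png_text(blob)["workflow"])
```

Dropping a PNG onto the canvas just runs the reverse of this: parse the chunks, pull out the JSON, and rebuild the node graph from it.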
I can't control the denoising on that workflow
Yeah, seriously, it's kind of an issue. Not really a usable workflow without that.
Are you American or something?
I thought the Swedish accent gave it away.
SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)..... Yay :/
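That error comes from the browser's JSON parser: whatever was dropped or pasted stopped being valid JSON a few characters in, which usually means it wasn't a workflow file at all (for example, an HTML error page saved by mistake). A hedged Python analogue of the check; the function name and the "nodes" heuristic are illustrative, not ComfyUI's actual validation:

```python
import json

def looks_like_workflow(text: str):
    """Return (workflow, reason): the parsed dict on success, else None plus a hint."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError as e:
        # e.pos is the 0-based offset of the first bad character,
        # the Python counterpart of the browser's "at position N".
        return None, f"not JSON at offset {e.pos}: {e.msg}"
    if not isinstance(data, dict) or "nodes" not in data:
        return None, "valid JSON, but no 'nodes' key, so not a workflow graph"
    return data, "ok"

print(looks_like_workflow('{"nodes": []}'))
print(looks_like_workflow("<!DOCTYPE html><body>error page</body>"))
```

Opening the suspect file in a text editor and checking that it starts with `{` is usually the fastest diagnosis.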
Honestly, I am not impressed. 1.5 models are much better than SDXL Turbo, and standard SDXL is better than 1.5. I have a powerful GPU, so I'd rather wait a little longer for a much better result.
Well, that might not be the point of this model. The point might be to get a lot of concepts quickly that you can then take into image-to-image for higher-res versions. Honestly, I'd rather use a model like this to get hundreds of base images so I can choose a few good ones to upscale.
please blink more often and dont do the weird slow zoom-ins on your face
Can we just stop using Comfy? It's so dumb and overly complicated, without any benefits.
I would love to use a1111 even more as that has been my goto for a long time, however all the new tech and advanced workflows gets released to Comfy first 🥲❤️
To answer your question... "Yes", yes you can stop using it... it's not compulsory at all. Also, if you find something complicated, I wouldn't jump to the conclusion that *it's* dumb.
Can you use medvram and xformers with it?
Sure, go for it