Free guide and workflow here www.patreon.com/posts/114035831
is there no flux anime yet?
The best and easiest Flux ControlNet I've seen so far! And you're offering the tutorial and workflow for free too-thank you!
You're very welcome! Share it with a friend 😊💫
I learned more about comfyUI in this video than in 50 other tutorials. Thanks!
Quick tip: If you have Rgthree nodes enabled (which you will have if you download this workflow), go into the rgthree node settings and enable "Show fast toggles in group headers." This will give you little bypass/mute icons in each group header, eliminating the need to use the Fast Bypasser nodes. Trust me, it's better.
Great video Sebastian.
Great tip!
@@sebastiankamph Another idea is to use Rgthree's "Power Lora Loader" to clean up the nodes. (I also like to use the anything anywhere nodes to eliminate a lot of noodles)
nice! thanks!
the fast bypasser trick is very clever.
Thank you, much appreciated and awesome how you took the time to explain every detail ❤
You are so welcome! ☺️💫
Thank you, Sebastian, you are an awesome tutor! I am just starting and learning a lot.
Thx! It's simple, clear and understandable.
Glad you liked it!
Thanks for the vid 😁.
I wonder when we're going to get a functional CN for ForgeUI.
Does this work with the new Flux tools? Should I use the LoRA or the checkpoint version? Should these be placed in the controlnet folder?
C:\Stable DIffusion\ComfyUI_windows_portable\ComfyUI\models\xlabs\controlnets
Why wouldn't I see ControlNet Union Pro in the manager? I haven't looked yet and I'm sure I can find it manually, but your instructions mention it so I thought I'd ask.
Question around 5:00 when choosing which model to download. Can you briefly elaborate on what constitutes "beefy" in this case? I.e., I have an OK processor and a 3070 Ti, which some might consider beefy. However, I believe VRAM is what matters most on GPUs for AI workloads, correct? So are you referring mainly to the amount of VRAM? E.g., though my 3070 Ti might outperform a 3060 in games, the 3060 has more VRAM and as such may be more viable/beefy?
Tried a few strength values, but the resulting output doesn't seem to follow the pose of the source image at all. Thoughts? I just implemented this, made sure all the pieces were linked, uploaded a pose of a person and ran it; the output was totally different.
EDIT: SoftEdge seems to work better than LineArt. Though if I enable one of my LoRAs it goes back to ignoring the reference image entirely.
First 😁 Finally a good video for Flux CN!!!
Thanks! I think so at least. The way it's set up it works really well for me.
Great vid - again 🙂 Is there a reason why you don't use the AIO Aux Preprocessor instead of making all these groups? In the AIO you can just pick whatever preprocessor you want, and it will download automatically.
Great tutorial! Thank you.
BTW, can the same workflow be used for videos instead of images (vid2vid projects)? If yes, how?
Clear and friendly as ever! I don't see the point of ControlNet for Flux: use a denoise of 0.08, for instance, and base_shift 0.5 - the trick is the max_shift! Fluxuate it between 0.7 and up to 5 or more, depending... max_shift is the agent of change (the flux, if you will)! It's like Flux has this built in already 😅. Granted, the original image colors will persist this way... but hey. And as always.
Interesting, so you're saying it's like an img2img with a ControlNet light functionality built in?
@@sebastiankamph Yeah! 😄 - One must try it to see!
Almost no denoise (even 0.001 worked) - use max_shift as the agent of change.
The image can retain much of the original if max_shift is low (like 1.5)
and 'dream' up much more change (as the prompt says) if max_shift is high (like 5).
(With sampler ddim + scheduler ddim_uniform it retains the most, but I think it will work with Euler/simple too.) Do try! And as always.
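For anyone who wants to try this, a rough sketch of the settings described above, assuming the standard ModelSamplingFlux node feeding a regular KSampler (node names and exact values are just my reading of the comment, not Sebastian's workflow):
ModelSamplingFlux: base_shift = 0.5, max_shift = 1.5 (stays close to the original) up to 5 (much more change)
KSampler: denoise = 0.001-0.08, sampler = ddim, scheduler = ddim_uniform (Euler + simple reportedly works too)
It behaves like a very light img2img where max_shift, not denoise, controls how far the result drifts from the source.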
Great video! Which workflow do you recommend if you are using 3D software to obtain the depth map and toon shader (similar to line art/canny)?
I'm running on an RTX4070 with Q5 and two loras and it just freezes at KSampler :|
Sure you didn't mistake it for long load times? XD
Can we use a LoRA on our own portrait? Like my portrait in some fantasy style or something. I've tried a lot, but the face structure always changes a bit.
You need a LoRA of the person plus the style LoRA you want to use - two LoRAs.
How would you add a reference image to guide the result style alongside the prompts?
If we have a trained LoRA, can that be used in this workflow? I assume yes, but I'm just trying to put all the pieces together.
What is the button for disabling or enabling the section?
What is the best model setup for my laptop with an RTX 3060 6GB?
Great, thank you for the tutorial! I work with Forge but tested my Comfy with this CN+Flux workflow.
But I got this error:
UnetLoaderGGUF
`newbyteorder` was removed from the ndarray class in NumPy 2.0. Use `arr.view(arr.dtype.newbyteorder(order))` instead.
Can you suggest any solution, please?
It's because the node pack that provides UnetLoaderGGUF was written against NumPy 1.x, and NumPy 2.0 removed `newbyteorder` from the ndarray class.
@@b4gu3tt3 It did not
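If anyone else hits this: one thing worth trying (assuming the standard portable ComfyUI install - adjust the path otherwise) is pinning NumPy below 2.0 with the embedded Python and then restarting ComfyUI:
python_embeded\python.exe -m pip install "numpy<2"
Updating the GGUF custom node pack may also help, since newer versions are supposed to handle NumPy 2.x.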
Hello !
Thanks for the video, super helpful as usual.
What if I want to change the pose of an existing image using ControlNet? It would help a lot for my comic book!
thanks :)
Hey Sebastian, I'm just starting out because about 1.5 years ago my graphics card couldn't handle this. Back then you used Stable Diffusion with Automatic1111. What would you recommend now - ComfyUI, or is there something better to start with?
I keep getting some unknown error when it progresses to the DualCLIPLoader, despite having all the correct models in the correct places.
I get this error all the time:
UnetLoaderGGUF
`newbyteorder` was removed from the ndarray class in NumPy 2.0. Use `arr.view(arr.dtype.newbyteorder(order))` instead.
I loveeeeeeeeeeeeee thisssssss
great video
Thank you :)
Where are the ControlNets for Flux for Forge?
I don't quite understand what the difference is between Flux and SDXL - I think that's what the alternative is called.
Flux is a different model entirely. SD and SDXL were released by Stability AI. Flux is from Black Forest Labs, which was started by people who left Stability AI.
@@arothmanmusic which do you use?
Also I have a slightly unrelated question, if you have the time to help me.
So I ran a LoRA training node/workflow and the output_dir is models/loras, but I cannot find the file there - any suggestions? The data path (text files for the pictures) I can find, and those are in the right folder, but I'm lost finding the actual LoRA model. I am running ComfyUI with an SDXL checkpoint for the LoRA training.
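Not sure this is the cause, but output_dir is often resolved relative to wherever ComfyUI was launched from rather than the models folder you expect, so the file may have landed somewhere else. A quick way to hunt it down on Windows is to search the whole install for safetensors files, e.g.:
dir /s /b C:\ComfyUI_windows_portable\*.safetensors
(that path is just an example - point it at your own ComfyUI folder and look for the newest file).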
The Manager button just disappeared!!! What???
Anyone here getting a blank image output? Any fix?
I just want ForgeUI.
😀👏
Is it possible to keep a girl's face from changing much?
Remember that Stable Diffusion A1111 is our true daddy.
ComfyUI is so much better and gets all the cool stuff on release day.
How do I get the model to look exactly like me? I subbed to your Patreon and I'm still lost lol.
Thank you for the workflow. All of the ControlNets worked for me except DWPose. I'm getting this error: "DWPreprocessor: 'NoneType' object has no attribute 'get_providers'".
EDIT: I'm trying to dig in... I'm seeing this in the terminal:
.... wholebody.py", line 41, in __init__
print(f"Failed to load onnxruntime with {self.det.get_providers()}. Please change EP_list in the config.yaml and restart ComfyUI")
EDIT2: When I set the detector and estimator to *.torchscript.pt it works. Not sure what's going on. (shrug)
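In case it helps anyone: that 'NoneType' object has no attribute 'get_providers' error usually means the onnxruntime import or session creation failed, so the ONNX detector was never built - which is also why switching to the *.torchscript.pt models works, since those bypass onnxruntime entirely. One thing worth trying (assuming the portable install) is reinstalling onnxruntime with the embedded Python and restarting ComfyUI:
python_embeded\python.exe -m pip install --force-reinstall onnxruntime
(or onnxruntime-gpu if you want the detectors running on the GPU).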
I can never get past this. I've researched it and tried several fixes, but I can't get past it: [Errno 2] No such file or directory: 'C:\\ComfyUI_windows_portable_nvidia\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-tbox\\..\\..\\models\\annotator\\LiheYoung\\Depth-Anything\\.cache\\huggingface\\download\\checkpoints\\depth_anything_vitl14.pth.6c6a383e33e51c5fdfbf31e7ebcda943973a9e6a1cbef1564afe58d7f2e8fe63.incomplete'
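Not a guaranteed fix, but the trailing '.incomplete' means a Hugging Face download of depth_anything_vitl14.pth was interrupted and the node keeps tripping over the partial file. It may be worth deleting the cache folder the error points at and rerunning the workflow so it downloads fresh, e.g.:
rmdir /s /q "C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\annotator\LiheYoung\Depth-Anything\.cache"
If the automatic download keeps failing, downloading depth_anything_vitl14.pth manually and placing it in that Depth-Anything annotator folder is another option worth trying.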