Keep them coming, these tutorials are way better than what is currently available.
Thanks a lot, I will keep making better and better tutorials.
BEST TUTORIAL OF CONTROL NET BY FAR 1000/1000, THANKS A LOT!
How do I display the stepping preview on the KSampler?
Thank you for explaining one by one, you make me really understand the workflow, hoping you continue to make more videos about AI!
Great tutorial, I was struggling trying to understand how ControlNet works but now is very clear. Thanks for sharing your knowledge!
I stumbled upon your channel and found your tutorials to be very good at explaining different methods for accomplishing tasks in ComfyUI and why some are better than others. Keep up the good work.
You are the best at these, my man. Thank you
This is the best content for comfyui on the internet. Please keep it up.
Thx a lot
Great explanation! Thanks for sharing!
In the clip I suggested the wrong ControlNet Preprocessor node.
This one is better (same principle):
(WIP) ComfyUI's ControlNet Preprocessors auxiliary models
github.com/Fannovel16/comfyui_controlnet_aux
Thanks to @puoiripetere for the suggestion
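For anyone who wants to grab that pack by hand instead of through the Manager, a minimal sketch (assuming a standard ComfyUI checkout with git and pip on PATH):

```shell
# From the ComfyUI root, clone the auxiliary preprocessor pack
# into the custom_nodes folder
cd custom_nodes
git clone https://github.com/Fannovel16/comfyui_controlnet_aux
# Install its Python dependencies, then restart ComfyUI
cd comfyui_controlnet_aux
pip install -r requirements.txt
```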
Thank you for the wonderful guides, clear and comprehensive. Keep it up 🎉
Which ones do we use? I don't understand this at all.
Very clearly explains several topics at once, thank you very much. Subscribed right away.
Thanks for your clear explanation. I was searching for this desperately. More ComfyUI tutorials, please.
In the first example, what if I want to do image-to-image using ControlNet, e.g. to recreate the same or a similar image but with a different pose? How would I do that? Please help.
Thank you for finally showing me how to apply two nets. This is the foot in the door I needed.
Simple and effective, thanks. Question: could the detect_face parameter in the OpenPose preprocessor node be used like an approximate face swap? I thought of this when you had to turn it off at 8:50.
Wow, so easy to understand, so straightforward. Thanks a lot!
Thank you, I was looking for a multi-ControlNet pipeline and this is really helpful.
Great video! Excellent explanation. Keep making videos like this.
Excellent tutorial. Easy to follow and understand. Thank you!
Brother, you helped me a lot. I just started and I already know most of the basics because of this video 🤩
I have the latest version but can't find the preprocessors.
You're the best, man. Thanks a ton.
I'm always thankful. I am a ComfyUI beginner. I can't get that skeleton shape out of the OpenPose node. What should I do?
This is superb!! I'd like to give you 10 likes, but YouTube limits it to one. Keep up the good work. Sir, can you let me know your hardware specification for this type of work?
Amazing tutorial, thank you so much!
Here for more knowledge again!
Absolutely awesome video! Thank you!
That was amazing!!! BIGLOVE!!
What are you using to show a preview of what's processing (under the KSampler, for example)?
Amazing info my friend. subscribed. Thanks a lot.
Can you use this in a workflow where a face has already been generated, and then apply this workflow to it?
I'm always confused about how ControlNet works. Is it for generating a desired pose?
object of type 'ControlNet' has no len()
I don't know which part I messed up. It just doesn't work.
I have been watching several ComfyUI guides, and in them the KSampler process seems so quick. I have an old late-2013 iMac (3.4 GHz quad-core Intel Core i5, NVIDIA GeForce GTX 775M 2 GB, 24 GB memory), but the KSampler runs very slowly. Does anyone have a way to speed it up? I'd appreciate any solution 🙂
Get a better GPU.
Oh, hi, nice work on the tutorial! :D
Hi, my teacher!
thanks, you helped me a lot :)
The best! 👏
Where is Apply ControlNet? I can't find it.
This is my favorite tutorial series for ComfyUI, but I can't follow the Ep 6 tutorial because the OpenPose Preprocessor does not detect the hands and face of the model. I use a different checkpoint to generate an anime image of the model, but it can only detect the body pose. Could you help me out? Thanks.
Sometimes OpenPose does not work well with anime poses. You can use a realistic image as the reference and still generate an anime-style result.
Thank you.
Will the next clip be inpainting together with ControlNet, teacher?
Next up is LoRA and the Detailers first.
Do you offer any courses?
How did you use multiple ControlNets? It's really hard!
May I ask: what if my machine cannot connect to the internet? How can I manually install custom nodes? I tried downloading the node package, but it did not take effect.
You can download the files and put them in the custom_nodes folder, but some custom nodes also need some additional Python library requirements installed.
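To flesh out that reply, here is a rough offline-install sketch. The ZIP and folder names are examples, and the exact requirements file depends on the node pack:

```shell
# On a machine WITH internet: download the node pack as a ZIP from GitHub,
# plus its Python dependencies as wheel files
pip download -r requirements.txt -d ./wheels

# On the offline machine: unpack the ZIP into ComfyUI's custom_nodes folder
unzip node_pack.zip -d ComfyUI/custom_nodes/
# Install the dependencies from the copied wheels, without touching the network
pip install --no-index --find-links=./wheels -r ComfyUI/custom_nodes/node_pack/requirements.txt
# Restart ComfyUI afterwards so the nodes load
```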
Hi, I noticed that if the ControlNet image is larger than the latent image, the image gets cut off... Is there a solution, like in Automatic1111, to resize the image correctly before it goes to the latent?
1000+++/1000. Excellent tutorial, thank you a lot. Really great, I like it a lot. Subscribed!
Hi, nice video. Isn't it preferable to install the auxiliary version for ControlNet?
Oh, you're right! I wasn't aware of that one.
Hi, I installed the WAS utilities and this OpenPose setup, all working fine... Then I installed the ReActor face-swap node, but now I cannot see the nodes installed earlier, even though they say they're installed... Any reason why? Thanks. I see a lot of "cannot import" errors, e.g. 'cv2.gapi.wip.draw' has no attribute 'Text', right after installing the ReActor face swap.
Actually, if I just move the ReActor folder out of custom_nodes, everything works fine again. The ReActor folder is a bit of a bomb for the other folders around it...
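For what it's worth, that 'cv2.gapi.wip.draw' has no attribute 'Text' error is typically a symptom of several conflicting OpenCV builds installed side by side (different node packs each pulling in their own variant). A possible cleanup, offered as a guess rather than an official ReActor fix:

```shell
# Run with the same Python that ComfyUI uses (for the Windows portable
# build that is python_embeded\python.exe -m pip, not plain pip).
# Remove every OpenCV variant so only one build remains...
pip uninstall -y opencv-python opencv-contrib-python opencv-python-headless opencv-contrib-python-headless
# ...then reinstall a single consistent one; the contrib build includes
# the extra modules most node packs expect
pip install opencv-contrib-python
```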
Excellent tutorial, thank you. I am getting an error; can you suggest a resolution, please?
Error occurred when executing ControlNetLoader:
module 'comfy.sd' has no attribute 'ModelPatcher'
File "F:\SDXL\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\SDXL\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\SDXL\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "F:\SDXL\ComfyUI_windows_portable\ComfyUI\nodes.py", line 577, in load_controlnet
controlnet = comfy.controlnet.load_controlnet(controlnet_path)
File "F:\SDXL\ComfyUI_windows_portable\ComfyUI\comfy\controlnet.py", line 394, in load_controlnet
control = ControlNet(control_model, global_average_pooling=global_average_pooling)
File "F:\SDXL\ComfyUI_windows_portable\ComfyUI\custom_nodes\AIT\AITemplate\AITemplate.py", line 347, in __init__
self.control_model_wrapped = comfy.sd.ModelPatcher(self.control_model, load_device=comfy.model_management.get_torch_device(), offload_device=comfy.model_management.unet_offload_device())
Please try to update both ComfyUI and the custom nodes to the latest version.
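In case it helps, here is what "update both" can look like for a git-based install. The Windows portable build ships its own update script instead; script names may differ between releases:

```shell
# Update ComfyUI itself
cd ComfyUI
git pull
# Update every custom node pack that is a git checkout
for d in custom_nodes/*/; do
  if [ -d "$d/.git" ]; then (cd "$d" && git pull); fi
done
```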
Can you do this in SD as well?
Please come back, you haven't posted in a long time, bro.
I will be back soon. Thanks for thinking of me.
Use SAM detection for better inpainting.
Thanks a lot.
You are very good. Can you do a tutorial on outpainting?
thank you so much
thanks!
Thank you very much.
Thank you very much, teacher, for the knowledge you keep sharing with us.
Wonderful!!! Thank you for sharing your knowledge and workflows!!! SUBBED!!!
When I attempt to use your method, the face in the photo transforms into another person. How can I capture the same face with different angles and expressions?
You have to use a LoRA to help with that. Please watch EP07 first.
How can I turn on the progress image on the KSampler node (box)?
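Since this question comes up a few times in the thread: as far as I know, the live preview under the KSampler is controlled by ComfyUI's preview method, which can be set with a launch flag (ComfyUI-Manager also exposes a "Preview method" setting in its menu):

```shell
# Launch ComfyUI with step previews enabled; "auto" picks the best
# available decoder (TAESD if its models are installed, else latent2rgb)
python main.py --preview-method auto
```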