Topaz Video AI: topazlabs.com/ref/2377
Ohhh snapppp part 2 here we gooooooo!
"Controlnets... I aint teachin you that" LOOOL
😂
THANK YOU for putting together all the resources in a clean document, and thank you for a great workflow! One thing I noticed is that the iterative upscaler definitely adds details or extra elements to a render that can disrupt the original composition. The quality is fantastic, but I'm wondering if there's a way to keep the upscale quality without adding extras?
No problem. Regarding your question, maybe try reducing the CFG or denoise in the upscale KSampler?
high quality tutorial, thanks bro
Please don't stop! Great tutorials!
Thanks for checking it out. Hope it helps.
So cool, can't wait for your next video!!
It's really incredible how good you are at explaining and handling these tools, thanks, you're a legend!
You're welcome 👍🏼👍🏼
Hello, thank you for the tutorial! I downloaded the workflow to get familiar with AnimateDiff, and I'm getting an error on the IPAdapterApply node: it says it's not found, even though I downloaded everything. The manager shows no missing nodes, yet this one is still missing. How do I fix this? Thank you!
Thank you for sharing this! I'm a 3D artist who's been waiting for AI to get to this point, so I'm super excited to try this out. I am curious about OpenPose: is there no option to use an exported rig directly from your 3D software? You already have the camera and rig in Blender, so you should be able to export that info somehow so OpenPose doesn't have to guess with depth or soft edges, which would ideally solve the issue of it mixing up which way the character is facing. I'll investigate on my own as well, but I figured I'd at least ask first.
I don't know of a way to do what you're describing. The closest thing I've seen is someone who created a rig and model designed like the OpenPose skeleton, but I haven't tested that. If you do find anything out, let me know, I would love to learn about it. Thank you!
@@enigmatic_e I bet if you were to render the wireframe as a separate mp4 that mirrors your 3D video, you could use it as the input for OpenPose, then send the output to intercept the latent of the 3D video. Not sure how the node tree would look, but I bet it's possible.
@@calvinherbst304 Yeah, I'm sure there's a way to do that. That's the great thing about ComfyUI, there are so many possibilities.
Hi! Beginner's question: if I run software like ComfyUI locally, does that mean that all the AI art, music, and other works I generate are free to use for commercial purposes, or am I violating copyright terms? I've been searching for more info about this, but I get confused. Thanks in advance!
Hi, I used your installation guide and set the base path to my A1111 install. Where do I drop the LoRAs and embeddings, and how do I install the IPAdapter? Love your videos, thanks for your efforts to educate us!
You drop the LoRAs and embeddings in the A1111 folders. I can't remember exactly where those folders are, but check the models folder; they're not hard to find if you explore a little bit. The IPAdapter can be installed by going to the Manager and running Install Missing Custom Nodes. Let me know if you still run into issues.
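For reference, here's a minimal sketch of ComfyUI's extra_model_paths.yaml pointed at an A1111 install, which is how the "base path" mentioned above is usually wired up (this mirrors the extra_model_paths.yaml.example shipped in the ComfyUI repo; the base_path below is a placeholder for your own stable-diffusion-webui folder):

```yaml
# Sketch of ComfyUI's extra_model_paths.yaml for sharing models with A1111.
# base_path is a placeholder -- set it to your own stable-diffusion-webui root.
a111:
    base_path: C:/path/to/stable-diffusion-webui/

    checkpoints: models/Stable-diffusion   # main SD checkpoints
    vae: models/VAE
    loras: |
        models/Lora
        models/LyCORIS
    embeddings: embeddings                 # textual inversion embeddings
    upscale_models: |
        models/ESRGAN
        models/RealESRGAN
        models/SwinIR
    controlnet: models/ControlNet
```

With that in place, LoRAs dropped into A1111's models/Lora folder and embeddings dropped into its embeddings folder should show up in ComfyUI after a restart.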
😍😍😍
How did you create the ComfyUI workflow? Where is it?
HELP PLS :/
All my Video Combine nodes are red :/
Failed to validate prompt for output 281:
* (prompt):
- Return type mismatch between linked nodes: frame_rate, INT != FLOAT
* VHS_VideoCombine 281
Hi, this is great! Thanks for sharing :) Btw, how can I make the wires straight lines in the workflow? I really like that setting, it looks cleaner than the curved wires... Thanks!
Yeah, just go to the settings in the manager window and change the link style from spline to straight, I believe.
Tysm! Got everything working except the last node group: FaceRestoreModelLoader & Upscale Model Loader. Which two models do you recommend installing there so I can finalise my renders?
I would look at the model names it shows when you first load the workflow. You might be able to find them through the Manager, under Install Models.
@@enigmatic_e thanks, will do
Hi there! I have the same problem, did you find the model for FaceRestoreModelLoader? Thanks in advance!
Any clue how to fix the frame rate issue? All nodes connected to the initial Frame Rate node have a red circle around the frame_rate input.
Try a different Video Combine node.
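If it keeps happening, here's a minimal sketch for tracking down which connection is mismatched, assuming a workflow JSON saved from the ComfyUI UI (top-level "nodes" and "links" arrays, with each link shaped like [id, from_node, from_slot, to_node, to_slot, type]); the filename is a placeholder:

```python
import json

# Minimal sketch: list FLOAT-typed links feeding VHS_VideoCombine nodes,
# which is what produces "frame_rate, INT != FLOAT" when the saved workflow
# and the installed Video Helper Suite disagree on the input type.
# "workflow.json" is a placeholder -- point it at your own saved workflow.
with open("workflow.json", encoding="utf-8") as f:
    graph = json.load(f)

combine_ids = {n["id"] for n in graph["nodes"] if n["type"] == "VHS_VideoCombine"}

for link_id, src, src_slot, dst, dst_slot, link_type in graph["links"]:
    if dst in combine_ids and link_type == "FLOAT":
        print(f"link {link_id}: node {src} feeds FLOAT into VHS_VideoCombine "
              f"{dst} (input slot {dst_slot})")
```

Once the offending link is found, deleting and re-adding that node, or updating the Video Helper Suite custom nodes so the input types refresh, should clear the red circles.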
Hi, I downloaded all the files from your PDF, and when I try to generate a video I get this error in the KSampler in the "Output" section:
AttributeError: type object 'GroupNorm' has no attribute 'forward_comfy_cast_weights'
Can somebody help me figure out what I'm doing wrong? :S
Where can I find and install the LineartStandardPreprocessor node?
ERROR: ComfyUI: When loading the graph, the following node types were not found: LineartStandardPreprocessor. Nodes that have failed to load will show as red on the graph.
FIX: If you stumble across this after already installing the preprocessor node, just uninstall and reinstall the node and you'll be fixed.
Have you tried KSampler RAVE? It seems to work pretty well. I'd be curious to hear whether it helps even more in this specific workflow or not.
Hmm I don’t think I’ve used it. What does it do differently?
Why do you upload the info to MEGA? T.T MEGA takes forever to load and doesn't give me the file.
Never had any complaints about it but what would you recommend?
Hmm, I'm getting a purple outline on my KSampler, so everything before it seems to load and work well. Plus, I bypassed everything after it, such as the Iterative Upscale and Face Detailer sections. I get the errors below. If I figure it out, I'll update with a comment.
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
I'm taking a wild guess, but it might have to do with the IPAdapter or AnimateDiff. Which models are you using there?
Ah, so I fixed that by bypassing the SoftEdge ControlNet section, since I had control-lora-depth-rank running in that slot. Whoops!
@@enigmatic_e Load IPAdapter Model = ip-adapter-plus_sd15.safetensors
AnimateDiff Loader = v3_sd15_mm.ckpt
It's actually running fine now. I had the wrong ControlNet model running in the SoftEdge section, I had a control-lora-depth-rank128 in there. I only have the OpenPose section running right now (all others are bypassed).
The volume on your videos is very low compared to any other YouTube video I watch. Just letting you know.
Do you feel that way about multiple videos or just this one? I'll try to keep a closer eye on it. I typically keep the voiceover levels at what's considered the industry standard, but I'll double-check this video. Thanks for the feedback.
@@enigmatic_e Yes, I have watched a bunch of your videos with low volume. Part 1 of this one seems louder. You should try to hit close to 0 dB when editing. Industry standards might be different from YouTube's, since everyone is watching on different devices with different volume output levels. I notice I have to turn my volume up by 30%+ when switching to your videos from someone else's on my studio monitors. Nevertheless, you have some great tutorials on your channel. Keep up the good content!
The render takes more than 30 minutes for me. I don't understand, I have an RTX 4060 Ti 16GB.
Depends on how high your resolution is.
I really, really, really hope you can get it to work with Automatic1111! I love using Automatic1111's UI.