Can't thank you enough for your contributions to the field. You are truly a genius!
one "thank" is enough :)
This is history in the making.
damn, I'm so old already?!
Although you'll still see flicker and issues at higher detail resolutions. These are very simple examples.
@@RetzyWilliams it's still really great groundwork for the potential of what others can do with this tool
Someone should be paying this man.
LOL, I agree! :D
Thank you for all your hard work, Brother! Your contributions to this community have helped to elevate my content so much. I can't thank you enough.
Amazing work Matteo as always.
Proud to share italian roots with some talented guys like you.
I've never clicked so fast on a UA-cam thumbnail !
🤣
This looks totally mind blowing! Thanks for sharing! Would love to watch a breakdown that is suited more for beginners, especially for the later part.
Thanks!
An absolutely awesome masterclass from Maestro Latente!!... so many great tips that I cannot thank you enough!!
Tonight, playing with a workflow, I found I could get someone to (kind of) walk by putting images in the right order, which kind of baffled me. Then I sit down, put the TV on, and see this. Thank you so much for showing me what my workflow is telling me is possible. Many thanks for all your contributions.
Thanks Matteo for this great topic, which I am not at all ready for (yet!)
This is super, Matteo. Why are you so good at this?
I love you SO SO MUCH! Been waiting for this tutorial since I saw your post last week hahaha Thank you thank you
I keep getting this error: "Prompt outputs failed validation - IPAdapterBatch:
- Exception when validating inner node: tuple index out of range"
EDIT: I did an update all and now this error is gone, but got a new one.
"Error occurred when executing IPAdapterBatch:
cannot access local variable 'face_image' where it is not associated with a value"
If I bypass the 2nd IPAdapter node, it works, so there's something it doesn't like with that node.
EDIT:
The problem was the IPAdapter Weights node was set to "full batch" instead of alternate, so it wasn't getting any images for the 2nd IPAdapter.
same
same problem with "face_image" error. Thanks for the solutions in edit.
Thanks for the solution!
Legend thank you I had this same issue, thanks for posting the solution!
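For anyone curious why the "full batch" setting starved the 2nd IPAdapter, here is a toy Python sketch of the assumed behavior; the function name and the exact split logic are illustrative, not the node's actual code:

```python
def distribute_images(images, mode):
    """Toy model of the assumed behavior: 'alternate batches' feeds
    even-indexed images to the 1st IPAdapter and odd-indexed images
    to the 2nd; 'full batch' feeds everything to the 1st."""
    if mode == "alternate batches":
        return images[0::2], images[1::2]
    return images, []  # the 2nd IPAdapter receives no images at all

first, second = distribute_images(["img0", "img1", "img2", "img3"], "full batch")
# second is empty, so the 2nd IPAdapter has nothing to work with,
# which matches the 'face_image' not associated with a value error.
```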
Seems like a great base to use when upscaling video. Upscale the key frames but also utilize the original animation for controlling pose or whatever. Very cool technique
This is a great tutorial. As a newbie to ComfyUI, I found there were a lot of additional things I needed to download that weren't mentioned, such as CLIP Vision 😉
Thank you, I learn new stuff from here, all the love for you
When Matteo speaks, I listen👌👌
Kindly make one about clothes and garments on models
Meanwhile I'm overwhelmed by all the nodes and how the man knows how to use them
Matteo: this is only the tip of the iceberg 😳
Great stuff. Thank you very much for your knowledge. Have a good day!
OMG, This is a great job , thank you so much
This is wonderful. Thank you always. This only works for SD1.5 models, correct?
there are a couple of SDXL models for AnimateDiff, but they don't work very well
Cool! I need a lot of animation frames, so image cherry picking and manual keyframing just doesn't cut it, but this method works great for shorter and detailed animations.
Suggestion: color-code the nodes so they're easier to follow. With all grey nodes it's hard to tell them apart, especially on a mobile phone.
I hope we will see more animation stuff soon ;)
The URL in the notes for the GIF controlnet model does not lead to that model, unless these other motion models are the same thing by a different name.
just rename it
These videos are great!
thank you for this helpful tutorial
The best channel for learning comfy.
Nice, thanks! Do you think vid2vid is coming soon?
Great! Thanks also for sharing your files. Now I am waiting for an IPAdapter which can handle higher resolution, and also for more context length with AnimateDiff.
Thank you for the in-depth video! But where do I get the ControlGIF model for the ControlNet node?
Hey Matteo, thanks so much for this. Is there a workflow for creating such consistent character images like you did with the blonde girl?
as I said in the video, it's mostly prompting, but if you add an IPAdapter of the first generation, the subsequent ones will be very close to it
@@latentvision Thank you, I should have watched the video before asking the question :) Your videos and time developing these nodes is of huge benefit to the open AI community, thank you!
6:54 Hey Matteo, I extracted frames from a video and placed them into a folder. Instead of using the 'Load Image' node one by one, is there a node that automatically loads images from a folder in order? The file names are in order, so it could load them up automatically. Thank you always.
check the node "load images path"
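If you'd rather script the same idea, a minimal Python sketch (assuming the file names sort lexicographically in frame order, e.g. frame_0001.png, frame_0002.png):

```python
import os

def list_frames_in_order(folder, exts=(".png", ".jpg", ".jpeg")):
    """Return image file paths from `folder`, sorted by file name.

    os.listdir gives no ordering guarantee, so we sort explicitly."""
    names = [n for n in os.listdir(folder) if n.lower().endswith(exts)]
    return [os.path.join(folder, n) for n in sorted(names)]
```

Zero-padded names matter: without padding, frame_10 would sort before frame_2.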
Very informative tutorial. When I run the fire/water workflow I get an error from the prompt scheduler: missing 4 required positional arguments: pw_a, pw_b, pw_c and pw_d. Please suggest a solution. Thanks.
My God... Matteo, my master, eternal respect to you. I am shocked by your knowledge.
I just hope my 1080Ti can handle this xD
Thanks once again!
Hey Matteo, sorry, another annoying question from me. Your workflow works like a charm and I'm having great results with the typography workflow. I've been trying to create a moment at the beginning before the first word comes in. I can do this by adding a black image in the Images Batch Multiple node before the first word, but the result is that there is no 'die off' after the second word. I've tried many things: adding 2 black frames at the end, repeating the second prompt 3 times in the Prompt Schedule From Weights Strategy node, adding more frames in the IPAdapter Weights node, but nothing seems to work. Any thoughts would be helpful. I know you're not getting paid for this, so I appreciate any help at all.
hard to say without seeing your workflow, but generally speaking you need to add a "fire" frame at the beginning (so the animation starts with 2 fire images basically) and then a black frame for the controlnet
It's amazing! Very useful video, thank you
I was the 1337 view. Must be a sign!
(Thanks Matteo, for your great work to the community!)
Incredible! ThankU❤
Hello Matteo, thank you for the great tool and tutorials! I have a question. I am unable to use this technique while maintaining the characteristics of the image I am using. For some reason the result comes out different from the input I created. What is the parameter that controls how much of the input image is used? Can I force it to just follow it? Cheers
Also, I cannot find the controlGIF.ckpt file
any luck with it?
were you able to find the .ckpt file?
I love this guy!
Hey Matteo, thanks for the amazing job you're doing.
Following this workflow i get an error: "only integer tensors of a single element can be converted to an index".
This happens when I turn the IPAdapter Batch nodes' "weight" widgets into inputs and connect them to the IPAdapter Weights node output.
Somehow, if I turn those weight inputs back into widgets, the Sampler is able to process them, but of course I don't get the desired result.
Do you know what this might be related to?
please post an issue on the official repository, adding the workflow and the complete error message
Oh man, Matteo, thank you so much, this is what I have been looking for! Could I possibly apply batches of masks to make an animation? Like, I get a sequence of water movement and get masks of the sequence, then connect the masks to the attention mask to create other objects moving that mimic the water movement.
yeah that would work too
What do you think about adding an image interrogator to the last Images Batch Multiple and connecting it to the Prompt Schedule? It would require string formatting, but I guess it could work...
Not sure if this was mentioned, but for the life of me I couldn't find the Images Batch Multiple node. It took a bit of searching (the Manager was quite unhelpful here) until I found it was part of the ComfyUI Essentials pack. Hope this helps someone.
This is brilliant--thank you for sharing!
Is it possible to apply a style LoRA in the workflow? The IPAdapter gets the look pretty close, but if a custom style LoRA could be applied in conjunction with the IPAdapter, that would push things to a whole new level.
Is controlGIF "motion_checkpoint_less_motion" or "motion_checkpoint_more_motion" from "crishhh/animatediff_controlnet"?
normal motion :D
@@latentvision Most likely it's controlnet_checkpoint.ckpt from "crishhh/animatediff_controlnet"
I believe I put the link inside the workflow in a note node
@@latentvision You're right. It turns out I was let down by my habit of repeating what I saw in your videos instead of using the ready-made workflows :).
@@latentvision I don't understand your conversation. Which model is controlGIF?
Hi Matteo!
Your new video is so great! I want to ask, what are your PC specs (CPU, GPU, RAM)?
Thanks a lot for these videos, I learned a lot!
AMD 59xx, 64GB RAM, Nvidia 4090, running on Linux
Hey Matteo, I can't seem to find the "lcm-lora-sd15.safetensors" file anywhere online. I've followed the links in the description, but they bring me to .ckpt files, so I'm a bit confused. Can you please help? Thanks a lot for your time.
search LCM LORA on huggingface
couldn't these setups be packaged into the program so we just change the variables, instead of facing such a steep learning curve?
they could, yes
I'm getting a 'TypeError: can't multiply sequence by non-int of type float' when I try your workflow?
I've got a question about the IPAdapter Weights node. If you want to "hold" one of the input images for a while instead of constantly evolving, how would one approach this? You can increase the number of frames used, but it still moves forward to the next input image. Could you somehow freeze it for a few frames? Or am I asking too much now, haha.
the easiest is to repeat the frame twice
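In other words, duplicating an entry in the image batch makes the interpolation dwell on it. A sketch with a plain Python list standing in for the batch:

```python
images = ["water", "fire", "smoke"]

# Repeating "fire" gives it two consecutive keyframe slots, so the
# animation holds on it for an extra segment before moving on.
held = [images[0], images[1], images[1], images[2]]
# held is ["water", "fire", "fire", "smoke"]
```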
Does this work for images without people? For example, for making a video of flowing clouds.
sure, it works with anything
Hello Sir, can you please help me out? IPAdapter FaceID suddenly got extremely slow and I have no idea how to fix it. It did not use to be that slow. Do you have any idea what I could do?
please join my Discord or post an issue on GitHub, it's hard to escalate in a YouTube comment
@@latentvision I understand that. You are right. I will join the Discord and post it as an issue. Thank you for your work.
Thank you!!
I love the workflow! Is there any chance to get less movement in your second example? Like, can I tell the AnimateDiff node to decrease the movement from frame to frame?
you can run it slower by increasing the number of frames
very good tutorial, thanks for all (:
There needs to be a frame that is perfectly from behind. Otherwise you'll get that crazy Popeye-jaw.
This tutorial is really great! Very practical! (sponsored!) But I have a small question: if I don't want the original image to change, which parameters do I need to adjust? I tried ControlNet, but it doesn't seem to work.
with AnimateDiff the original image will always change to a certain degree. You can use video2video or controlnets, but it's not like SVD, for example, which starts from a given frame and reiterates on it
Hey Master Matteo! Trying here on Apple Silicon...
At the end of the script, I see this error:
"RuntimeError: MPS: Unsupported Border padding mode"
Probably a Mac error? :(
please report the error on github, posting the full backtrace. thanks
Hmm, when trying to use your workflow I'm getting this error
When loading the graph, the following node types were not found:
IPAdapterBatch
IPAdapterUnifiedLoader
IPAdapterWeights
IPAdapterNoise
Nodes that have failed to load will show as red on the graph.
I've updated ComfyUI_IPAdapter_plus, deleted and re-cloned, and deleted and re-downloaded through the Manager, and I continue to get the same error each time.
No module named "node helpers" is why it fails to import.
Is your ComfyUI up to date? That sometimes messes things up for me. You can try a git pull inside the ComfyUI folder and after that try to update IPA again.
@@elowine I'll give that a try, I haven't updated in a few weeks
What is up with the Shutterstock watermark in the final image?
I don't have sgm_uniform as a scheduler. Can someone point out how to get it?
Thanks as always... I have a question: can we make it loop the video?
there's a way to make kinda-looping videos in AnimateDiff, check the main repository
Getting this error, any idea why? Required input is missing: encode_batch_size
you probably just need to refresh the page
Does anyone know the packs for his "Images Batch Multiple" and "IPAdapter Weights" nodes? Thank you
ComfyUI Essentials and IPAdapter, of course
@@latentvision Thank you for the answer and your work.
Another question... why don't you use the "everywhere" node? Did you encounter trouble with it?
@@seminole3001 it makes the workflow very difficult to follow especially when teaching. In a node system like comfy it's considered an "anti-pattern"
@@latentvision Last question: the animateGIF model? Did you rename it? I can't find a link to download it...
Not gonna lie, a lot of this just went straight over my head, lol. How the hell did you get so good with Comfy?
practice I guess :)
Matt30! Many thanks!
You mention a Discord channel for animation (Bannadoku or something; it's hard to hear). Can you provide a link or the correct name?
banodoco, see you there :D
@@elowine When searching Discord communities for banadoco I get zero hits. Do you need an invite link to find it?
Can someone explain how the weights strategy parameter works?
If you hook it up to a Display Any node, you'll see what the outputs are. It looks like it's a list of parameters specific to Matteo's nodes in order to generate the appropriate keyframes. Essentially it's a parametric way of calculating the keyframes, that way you can add or remove images and it will automatically adjust the keyframes accordingly. This replaces the need to use something like Batch Prompt Schedule or Batch Value Schedule nodes to manually enter in keyframe values.
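As a rough illustration of that parametric idea (not Matteo's actual implementation), evenly spacing N keyframes over F frames might look like:

```python
def keyframe_positions(num_images, total_frames):
    """Place num_images keyframes evenly across total_frames,
    with the first at frame 0 and the last at the final frame."""
    if num_images == 1:
        return [0]
    step = (total_frames - 1) / (num_images - 1)
    return [round(i * step) for i in range(num_images)]

# Adding or removing images rebalances the keyframes automatically:
keyframe_positions(4, 97)  # -> [0, 32, 64, 96]
keyframe_positions(5, 97)  # -> [0, 24, 48, 72, 96]
```

This is what makes the weights-strategy approach more convenient than typing keyframe values into a Batch Prompt Schedule by hand.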
In the first pass, how did you get consistent characters?
will try this with my drawings🔥
Why sgm_uniform? Is Karras worse?
If I remember correctly, it is recommended with the LCM sampler
What is the software called that you were using to refine the images?
it's an open source software called GIMP
I'm not sure, but I think Matteo mentioned GIMP in one of his earlier videos
Photopea and Krita are also good open source tools 😉
Ugghh i love your brain sir ...
I knew it! the zombie apocalypse has started!
How do we find that Discord server you mentioned at the beginning?
try this discord.gg/WdpGf2tx
Matteo is the best!
My utmost gratitude man, what you're doing is insane!
I can stop F5-ing now 😄 I'm 300 images in, and still no back-of-the-head image. I love the tech, I hate the prompting 😅
use the composition IPadapter
Hello author, where can I download the "Embed group ipadpt" file?
Awesome!! I enjoy all of your videos
you are CRAZY(in the good way), OMG
❤
Outstanding as usual, thanks for the great work!
Wonderful, thx
Great as always!!! 🎉
Cool
This is INCREDIBLE. Thank you!
Always amazing!
I LOVE YOUR WORK MAN
🐐🐐🐐🐐🐐🐐
Awesome work
just doing my part
you are geniuses
Great update! Banodoco is indeed amazing!
Fantastic! 🎉 wonderful video
Maestro! ❤
Looks like Kara from Detroit: Become Human :)
😇😇😇
Super cool!! keep going 👍
Awesome !!
Strange, I followed this video exactly and got a completely different result. All the models are the same, but the output is an ugly video :(
Does this method no longer work?