Love it. Was waiting for this tut since I saw your video on X. Super helpful.
👍🏽
Broski ❤ great as always😊
🙏🏽
Always a pleasure to learn from your videos: innovative concepts, well explained. Thanks a lot from France!
Once I saw this on Reddit, I knew this would be a good one. Thank you enigmatic E.
You are still the best Eric! Thanks for this great tutorial
Man thank you so much! 🙏🏽
Thanks! I learned a lot of things here! Props to YOU
No problem 👍🏽
Thank you, it is very good. Thank you again!
always love my dude's work !!!!!
🙏🏽🙏🏽
Cool workflow E! Great tut as always.
Appreciate it!
Let's goooooooooo! Can't wait!
🎉🎉🎉
interesting question for ya:
To transform a real-world video into a consistent-style anime video, which workflow is better:
1. Take the human from the video (removing the background) and translate their movements to a 3D avatar in Blender (using a motion-tracking algorithm), then feed the 3D avatar's movements into ComfyUI & AnimateDiff?
2. Just take the video, feed it into ComfyUI & AnimateDiff, and use ControlNet to take care of that?
The end goal is to have only the human converted using AnimateDiff, while minimizing artifacts. The background would be a still image (to minimize AI artifacts). It'd be like an anime.
Cool workflow! Learned a lot, thank you so much. I'd like to ask how to set up the keyframe parameters for the camera in Blender at the end. I've tried some adjustments, but the movements feel quite stiff, not as smooth as yours. Also, the video generated in my ComfyUI seems to diverge significantly from the input video. I'm hoping to preserve the essence of the original video while applying a subtle touch of style. Could you please advise on which nodes or parameters I should adjust to achieve this?
When it comes to style, models, LoRAs, and IPAdapters are big factors. I go over camera movements in part one.
Yes, backgrounds would be nice. Thank you for everything!
This is what I want! Thank you for the tutorial!
If Alt-Z doesn't work, make sure to disable the in-game overlay in GeForce Experience if you have an Nvidia card, because it uses the same shortcut, so you can't use it in both Blender and GeForce Experience.
Yes, great point. I had to do this myself but forgot about this detail. Thank you!
Yes, we want background and effects, sir 😘
Bravo
How do you find all these things? Incredible!
How did you do it in ComfyUI? I did not understand.
Go watch part two of this series; this is part three.
Did anyone combine this with mocap already? For some homebrewing, maybe one could use a Quest 3, as it features full-body capture AFAIK.
I haven’t but it would be a great thing to use for something like this.
Thanks for another great video!
I've been making stuff for fun with AI programs for a few years, and it's crazy how much better it is now. But every time I look at ComfyUI examples, with all those "noodles" going all over the place between the nodes, I just can't bring myself to learn it. Especially since things change so quickly. I've learned so many different bits of software to do AI stuff. I keep hoping there will be something just as good or better than ComfyUI but less complicated.
Are there any other software suggestions for doing text to video, or video to video?
I built a good computer with a lot of RAM and a 3090 a couple of years ago, so it should be able to run most, if not everything, out there.
Have you tried Automatic1111? I might try to make some more videos on it since it’s less intimidating.
I've been using A1111 since a few weeks after it became available; getting it installed was harder than using it, so I'm happy to keep using that program if need be. 🤣
I'm not seeing any good video-to-video tutorials for it. If it can be done in A1111, please do a tutorial for it, if you don't mind. @enigmatic_e
🙏🔥🔥
Can you do a tutorial for 2D rigging + AnimateDiff? If you don't want to, I'm still curious to hear how you would approach it and which tools you might use.
Very interesting idea. I'll def play around and see if I can come up with something good. 👍🏽
@enigmatic_e Thanks! Looking forward to hearing your thoughts on that 😀
Can you make an animation of a four-armed person??
I don't think that's possible through Mixamo, but definitely if you rigged it yourself.
I'm already pretty knowledgeable with Blender, but I use Automatic1111. Should I learn ComfyUI? Is it worth getting into?
Many have made the switch, so I would definitely recommend it. In some ways A1111 is easier to use, but it's always harder to set up AnimateDiff with it.
@enigmatic_e Alright, will do!
Excellent! Now... gotta stop being scared and learn Blender... 😆
You can animate it with mocap 😮
Facts!
Past me: is it still free?
I think it is