You consistently put out cutting-edge, incredibly high quality content. Keep it up!
We want more videos! Thanks for the effort on doing the work and video creation.
Thank you so much for this. I would love to support you on Patreon so that I can see you dive even deeper into the topics you've been covering.
thanks! I don't have a Patreon, mostly because I don't have the time to do more in-depth, subscriber-only content (I do this for a living, and I'm either bound by NDA on the more in-depth stuff, or I straight up don't have the time to create more content), and because I try to share everything I can share openly, without subscriptions.
I do have a ko-fi page in the video descriptions, but that's just for donations, there's no content locked behind it.
I forgot that you had a ko-fi page. I’ll sign up over there then. Yeah, I kinda guessed you were a bit too busy for Patreon. Anyway, thanks for sharing openly and for consistently bringing some clarity to these topics.
very interesting
Interesting, I'll try it.
I love the workflow, but one issue I keep running into is that even though the Liquid AF is good, it's still not usable for a lot of things: if you leave it on for more than a quick transition, its instability and flaws become pretty noticeable. I'm trying to figure out how to make it more stable.
I'm definitely not an expert on AnimateDiff, there might be better LoRAs out there, but at the stage I'm at with these workflows right now I'm focusing on finding the limitations and the areas of applicability of the tech, so I haven't shopped around for better solutions yet.
I'm also very lacking in knowledge about how AnimateDiff came to be, and how it's evolving, and that's something I'll need to look into sometime in the future. Otherwise I'll just keep kicking the can down the road and settle for other people's solutions to issues I don't really have a proper understanding of.
hi there, I'm having this issue: "module 'torch' has no attribute 'float8_e5m2'". thanks for all
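That AttributeError usually means the installed PyTorch build predates the float8_e5m2 dtype (it was added around torch 2.1), so any node that references it fails. A minimal sanity check, assuming a standard Python environment for ComfyUI:

    import torch
    print(torch.__version__)                # float8_e5m2 ships with torch 2.1 and later
    print(hasattr(torch, "float8_e5m2"))    # False on older builds

If that prints False, upgrading PyTorch (with a build matching your CUDA version) should clear the error.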
Andrea, we are waiting.
I don't understand, what would you use this for though? The clips at the beginning are... questionable?
are they questionable in the sense that they're the first tests out of a bit of tech that came out a couple of weeks ago, raw and unpolished? yes
are they questionable in the sense that they show there's no potential at all for motion graphics animation with these nodes? I don't think so.
is it better to do motion graphics, right now, with anything but stable diffusion? probably, yes.
my reasoning in pursuing this is that it's a neat piece of tech that makes it possible to do something that a) has a commercial application, and b) was not possible with SD before now. whether or not it's actually something that will eventually be commercially viable is a completely different matter.
on the first video I published about product relighting, there were a ton of people questioning color shifts and detail loss. over the course of the following months, we fixed that, little by little. I'm trying to do the same thing here for motion graphics.
@@risunobushi_ai Thank you for clarifying. I'm not against it, it just seems like a lot of work for someone who hasn't learned Comfy yet. :) I've fiddled with it, but what you guys do is way out there. Like might as well learn Houdini if I'm trying this niche :D
It's nice you get to experiment though.
oh definitely! right now, learning houdini would be the smartest choice for this kind of stuff, but then again I do like experimenting with new, unconventional ways of doing the same things that other software does better - you never know what could happen down the line.
but yeah, I was actually on the verge of cataloguing this as part of my Stable Diffusion Experimental series, but the existing commercial applications in the motion graphics sector pushed it over the edge, even if it's really raw tech right now.
How weird.
Earlier today I was watching some Blender Motion Graphics videos and wondered whether there were any ComfyUI Motion Graphics videos; nothing came up in the titles when I searched.
But later in the day, your video popped up in my Subscriptions feed.
Thanks!
I might be able to read minds, beware
wooooow cool
can't I just first make a mask in DaVinci, for example, and then upload the masked video into your workflow? seems much easier to me, I think.