This is very cool. That is how I imagine working with AI. Writing a prompt and hoping for the best doesn't excite me. Directing the AI is where it is at! Thanks for the video and resources!
Hmmmm.. do I bookmark this under "AI" or under "Blender"?
ahh 😂
Ha ha! The same for me!
Both! ;)
YES!!😂
The answer is Yes
Insert “Why not both?” Meme
Wow very cool. Also love the way you just did a separate render with flat emissive materials to roll your own version of Cryptomatte!!! Brilliant
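For anyone who wants to script that DIY-Cryptomatte pass, here is a minimal sketch of the idea (the object names and mask colors below are made up, not taken from the video):

```python
# Minimal sketch: assign flat emission materials per object so a plain render
# produces solid RGB mattes, a DIY stand-in for Cryptomatte.
import bpy

# Hypothetical mapping from object name to a flat mask color (R, G, B, A)
MASK_COLORS = {
    "Character": (1.0, 0.0, 0.0, 1.0),
    "Ground":    (0.0, 1.0, 0.0, 1.0),
    "Trees":     (0.0, 0.0, 1.0, 1.0),
}

for obj_name, color in MASK_COLORS.items():
    obj = bpy.data.objects.get(obj_name)
    if obj is None:
        continue
    mat = bpy.data.materials.new(name=f"MASK_{obj_name}")
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    nodes.clear()
    emission = nodes.new("ShaderNodeEmission")
    emission.inputs["Color"].default_value = color
    output = nodes.new("ShaderNodeOutputMaterial")
    mat.node_tree.links.new(emission.outputs["Emission"], output.inputs["Surface"])
    obj.data.materials.clear()
    obj.data.materials.append(mat)
```

Run it from Blender's Text Editor before rendering the mask sequence; a plain render then gives one clean flat color per object.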
This channel is an absolute gem!
I went to do this when I realized you could put 'None' for the preprocessor and use your own depth images, but was a little surprised how hard it was to get a depth image from Blender. The Map Range node idea is new, as well as the Freestyle mode for Canny nodes, which I've never seen before in any tut. Thank you for creating such useful SD x Blender content! I laughed out loud when you said ComfyUI was easy to install. Maybe easy for a genius like yourself... Then again, it's probably easier to use than Deforum, but I like Blender -> Deforum -> Video AI -> Resolve for this sort of thing, as Deforum has tools for controlling keyframed changes across time, interpolation, and diffusion from frame to frame, like color coherence and frame blending. Also, the new Forge version is super fast.
Should I use Deforum for Cycles animation passes? I totally failed with Eevee and could not even manage to add the missing nodes in ComfyUI 😅
This is the way - the tricky part is temporal consistency. You can obtain it by breaking down your movement into parts, training motion LoRAs, and matching them to the final output using a schedule.
is there a particular tutorial that shows this method?
This really seems like the future of 3D workflows
If it ever learns to finally stabilize and actually concentrate on keeping a real, consistent shot without so much warping of the designs, it might finally be of some use as a tool for indie creators.
It's so crazy how advanced this stuff is getting.
If you're having trouble finding Freestyle, go back to the setting he set to Standard just before that, at 3:06. Make sure the Freestyle option is checked above that. Then the Freestyle options he uses later will show up.
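If you would rather skip the UI hunt entirely, Freestyle can also be switched on from Blender's Python console; this is just a sketch of that route, not something shown in the video:

```python
# Sketch: enable Freestyle for the current scene and view layer from the
# Python console, so the Freestyle line set options appear in the settings.
import bpy

scene = bpy.context.scene
scene.render.use_freestyle = True      # the checkbox mentioned above
scene.render.line_thickness = 1.0      # line thickness in pixels

view_layer = bpy.context.view_layer
view_layer.use_freestyle = True
if not view_layer.freestyle_settings.linesets:
    view_layer.freestyle_settings.linesets.new("LineSet")
```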
one of those great videos that you find once in a decade on YouTube
Absolutely. Mind blowing. Bravo. Man. Bravo.
Absolutely incredible video. I've never subscribed to someone's Patreon faster. Great job
The biggest observation I made was how well you transition from a singular to an inclusive perspective. It really makes me feel like I am a part of a Community.
Unbelievable!! :D So, so nice, thank you for your videos and the great inspiration!
Guess I have to learn this if I am going to be able to live and keep my job.
Well, you'll need to learn SOMETHING new. This is going to be old news pretty soon I'd think.
I wish they would keep AI out of art too...
Of course yes, it applies to literally everything and it's inevitable. But... the sooner you start learning and embracing it, the bigger advantage you will have :)
@@MrFrost-xh6rf Seemed like the first thing they went for was art, and I'm surprised how quickly it got good at advanced 3D art. I admit I used Meshy to create a strawberry shortcake model for me when I needed a food model. Can you imagine drawing realistic food in Blender?
I hope you understand that this is all just alpha-stage software and workflows, and you cannot keep up with the pace of AI advancements in the end.
Incredibly great work, dear Mick! Your video edits alone are evidence of detailed diligence and precision. One of the most amazing channels in the YouTube ocean - fun to follow your workflows. Unfortunately - super ambitious and probably still too time-consuming for most enthusiasts. But - on a weekly basis - technology is evolving. The question is probably the skillful use and intellectual penetration of this potential. My sincere admiration for your perseverance, your diligence and the excellent presentation of your results!
Ultimately, that is what it is: the optimization of the interface between "imagination" and the "digital world".
Thank you!!
This method seems great for creating some NPR-style sequences. That last one with the forest and the moss looks amazing at 9:59
Been working very hard to learn Blender and understand it. As a 3D artist, as a short film writer, as an animator… then all this “AI does it for you and does it better” BS comes along. Feels like a huge slap in the face. Worst part is that I don’t understand it. Like… at all. Reply 💯 if you feel me 😔
hi
I can't tell you how to feel, but my suggestion is that you add one more hat: director. AI is only as good as its training data and the person using it.
I feel that too now, and this shit is confusing lmao
Everything I'm seeing needs to be fixed by humans in the final product, and all of it has that 'dream state' A.I. look about it. Ultimately this video shows me the power of A.I. speeding up the traditional workflow for conceptualization; however, it doesn't look like we are anywhere near the point where this is going to replace traditional artists. I say, stick with it. Learn the traditional methods, as Blender's long-term plan is to start implementing a lot of this stuff natively to help artists create more art faster. In the end, it will probably just end up being another tool in the toolbox that lets you speed-ball ideas quickly, so you can eliminate bad ideas quickly, but ultimately the hard work to finalize everything will still need to be done by humans, and that might be true for a long, long time.
@@reasonsreasonably Last time I post anything like that on YouTube. But thankfully I found out how limited AI is in its current state when it comes to these sorts of things. I posted this a while ago and my current animations make this one look like a 3rd grader did it. Yeah, I get that you were trolling, but still, what you said holds some truth. AI will eventually get to that point and I will eventually have to learn it. But as I revisit this video and your post I’m quite proud of the fact that my work now looks rather professional and this looks like another “choppy, hard on the eyes AI driven mess”. AI is good for some things, but animation isn’t one of them yet. I’m nearly done producing a web series that I will happily give you the link to when completed. Have a good day, friend 👍
Awesome video 😃👍
I am very excited to try this out! 😃
It would be nice to have a 'temporal coherence' checkbox in comfy that would force it to keep things consistent over the length of the video and not be so random.
That’s exactly what AnimateDiff tries to do… it’s improving a lot lately
Yah not quite mind blowing yet
Amazing 👌
Thanks!
very powerful tips and workflow
omg that is friggen awesome. :)
Wow, nice workflow.
Just a minor suggestion on your workflow… I would pass the end result as a latent through another KSampler with really low denoising to improve the final comp…
Also, maybe SD 1.5 AnimateDiff with LCM is a more interesting approach for your followers, since it's lower bandwidth, with better consistency, better ControlNets, etc.
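Outside ComfyUI, that "second pass with very low denoise" idea looks roughly like this in diffusers; a sketch only, with the model choice, prompt, and strength value as placeholders rather than anything from the video:

```python
# Sketch of a low-denoise refinement pass (analogous to feeding the result
# into a second KSampler with a small denoise value).
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",  # placeholder model choice
    torch_dtype=torch.float16,
).to("cuda")

first_pass = Image.open("frame_0001.png")  # output of the first sampling pass

refined = pipe(
    prompt="cinematic forest, moss, volumetric light",  # reuse the original prompt here
    image=first_pass,
    strength=0.2,              # low "denoise": keeps the composition, cleans up details
    num_inference_steps=30,
).images[0]
refined.save("frame_0001_refined.png")
```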
super cool, amazing work :)
Congrats! nice workflow
Thanks for sharing
This was fantastic! Super inspiring. I don't use Blender, but the CONCEPT is what I needed. I'm sure there's got to be a way to make those render pass videos from within comfy from one video and not have to use image sequences. The segmentation of the elements and being able to prompt each of them separately is what I've been trying to find a simple demonstration of! Thanks!
There is a node called Oneformer Coco Segmenter, which colors the different objects in the video, and then you can use a mask-by-color node
That's what I also thought about... AI with control 😊 Thanks for sharing!
This is amazing!
Wow! It looks very promising!
Amazing work!
You're a wizard, Harry!
Bruh... this is godlike
Amazing work.
Amazing work, thank you for sharing! Does this workflow exist for Mac or only PC?
Amazing! Thank you!
I've been saying this since last year- this is going to be the render pipeline of the future.
This is amazing. Keep it up!!!
I luv you , u just gave me the fire
It's amazing but scary, this might end up putting a lot of lighting artists, comp artists and rendering artists out of work very soon.
90% will not be needed anymore, because 98% of people don't care if an artist made it or AI. Apple, Google, Amazon will all come out with an AI cloud thing where you can make games, movies, images, whatever, in a single click.
The only jobs not in danger from AI are on the forefront of science... Everybody else is going to get wrecked.
epic stuff ❤
The short really captures the feeling of being chased in a nightmare
Thanks, this is amazing!
OK lookdev, lighting, texturing, compositing: you are fired!!!
Awesome workflow!
great job, well done :)
I must try this!! Every video is a hit!
Wow! This is amazing!
Really very cool 🎉
not quite ready for prime time but can't wait to see when it gets there
yo, thats dope
Really like this... good work.
Awesome! I already thought about this, but you have finally done it ...
It works quite well, but the flickering is still there. You may have to integrate an animation workflow like AnimateDiff or something.
The next step is to ask an LLM to code a Blender script or add-on to automate the export of frames to a ComfyUI workflow ;)
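As a rough starting point for that kind of export script, something like the sketch below could work; the folder names and the one-view-layer-per-pass setup are assumptions, not the setup from the video:

```python
# Rough sketch of a Blender script that renders the frame sequences ComfyUI expects
# (depth / lineart / color-mask passes saved into separate folders).
import bpy
import os

OUTPUT_ROOT = "C:/tmp"                   # same root used later in the Load Images (Path) nodes
PASSES = ["depth", "lineart", "color"]   # hypothetical: one pre-configured view layer per pass

scene = bpy.context.scene
scene.frame_start, scene.frame_end = 1, 100

for pass_name in PASSES:
    layer = scene.view_layers.get(pass_name)
    if layer is None:
        continue
    # Render only this view layer by disabling the others
    for other in scene.view_layers:
        other.use = (other == layer)
    scene.render.filepath = os.path.join(OUTPUT_ROOT, pass_name, "frame_")
    bpy.ops.render.render(animation=True)
```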
Maybe also create a LoRA (or whatever, I'm new to this) for more consistent subjects
@@sams3493 This won't be enough. LoRAs are good for several still images and for character (or anything) consistency, but you can achieve this with IPAdapters too nowadays...
Here the problem is temporal coherence between frames, and a video model is needed (AnimateDiff, SVD, ...), but I don't know exactly how to plug this in with frames coming from Blender
Amazing !!
really cool!
I like the speed and versatility that this Workflow using AI offers, but knowing and being able to do everything myself is more satisfying.
This method can be used more in an abstract sense, in that it doesn't matter as long as it looks good, but it can't be used to get very specific results. Well, it can, but it can take a long time to get the exact “shader” that we need instead of using a procedural or even hand-painted one.
Furthermore, AI art is not really art and before the haters start commenting, think about this argument:
If you go to a pizzeria, order a pizza and start taking out ingredients or adding extras, does that make you the cook?
No, and by that same logic, going to a website or an app and asking for or removing “ingredients” that you want or don't want to see in an image doesn't make you an artist and generating images with AI shouldn't be called art.
AI is a good aid tool and should not be used to overload the market and artists' websites such as ArtStation; there was a huge protest some time ago on this exact subject.
Art isn't a pizza.
Art (from the American Heritage Dictionary): "The conscious use of the imagination in the production of objects intended to be contemplated or appreciated as beautiful, as in the arrangement of forms, sounds, or words."
Does that definition apply to AI art? I would say so. You can try to redefine the term "art" all you want, but at the end of the day, my definition of art comes from the dictionary, and yours exists in your head.
It's very true that AI art is not completely original. But the thing many fail to realize is that so is just about everything else. Hell, one of the most famous paintings of all time is a can of soup.
At the end of the day, all you're doing is shooting yourself in the foot. AI is here to stay, and could probably be extremely useful to artists if they let it. But instead, they've chosen to throw a hissy fit. It's really a shame.
"AI is hurting arists ability to..." yeah, until a "real artist" is using AI, and then the art community starts attacking them. It's honestly sad.
@@bbrainstormer2036"imagination", therefore no, under the narrow definition you posited, Ai generated images do not fit it. They have no living author, you can't reduce the creative process to the abstractions of gradient descent and backpropagation. It ain't no tool by default: it can be used as one, but it can also be used as automation. Within the market of capital and attention, the later has infinitely more of an incentive advantage than the former, so people using it as a "tool" will find the web scape completely saturated with fully automated content. Read Heidegger on technology. And remember "technology is a useful tool but a dangerous master", considering the lack of meaningful input in ai generated content, it falls more into the later, the user is far more used than he uses.
@@BinaryDood Just because the latter, to use your explanation, exists, doesn't mean that the former somehow doesn't or should be ignored. And your point about "incentives" is confusing, when lots of people don't make art in the hopes of some external reward, instead making it simply because they want to be creative.
@@bbrainstormer2036 Indeed, and those people share the same world as those with extrinsic incentives, including resources. Stuff the individual can never be completely separate from. And since AI targets every field, it's not unfathomable to think too many will be left destitute, their own willingness to create genuinely being what deprives them of what is deemed productive. This does not create an environment for people to be educated on intrinsic motivation and positive liberty; hence, they are molded by a socioeconomic sphere of narrow and short-term reward functions: not the stuff which feeds creativity, but which clogs it.
@@BinaryDood At this point, it's a red herring. Whether AI will have a positive effect on art as a whole (which seems to be the point you're making here) is very different from claiming that the use of AI disqualifies a work from being art.
I'd also push back against the arguments you've made, but honestly, I don't feel like spending this much time on a red herring
Great work!
Amazing gift, my friend, thanks a lot. I find if you use the viewport render from the wireframe view and change the color setting, you save time (also I didn't manage to succeed with the Freestyle/line render on 3.6). Thank you a lot, Mr Mumpitz !!
ComfyUI Manager wasn't working until I put the extracted folder ComfyUI-Manager-main into the folder ComfyUI_windows_portable/ComfyUI/custom_nodes. I think it might be good to adjust the instructions on this point. Looking forward to playing with it!
With the same amount of time I’d create a seriously good render. It can be useful as a concept generator.
Did you use the OpenAI API? I can't understand why I get terrible results
The knowledge you build with Blender might actually still be around and functional in a decade or two
nicely done
If Blender used ready-made AI models instead of calculating the physical behavior of lights and surfaces, it would certainly speed up rendering.
I could see this being used for independent artists music videos. Or media where less control is needed.
Great content! Thanks for the time and effort! Subscribed!
Wow great knowledge and content, this is sooo good
When he used Blender to extract canny and depth to create the image: oh
When he created an animation on top of it: ohhhhhhh
Wow, this is amazing. So it doesn't mind coloring outside the lines (so to speak); I noticed the generation doesn't exactly match the alpha of the RGB mattes. Good to know! Keep it up!
2:50 The correct way of outputting passes like Z-depth is to render the image in linear format, not sRGB! You set it in the Color Management rollout panel. Thanks for the tutorial!
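For anyone who prefers to set that from the Python console, a quick sketch (the exact option names can vary slightly between Blender versions):

```python
# Sketch: switch color management so depth/utility passes are written out linear
# instead of being run through Filmic/sRGB tone mapping.
import bpy

scene = bpy.context.scene
scene.view_settings.view_transform = "Raw"       # no tone mapping applied to the output
scene.view_settings.look = "None"
scene.display_settings.display_device = "sRGB"   # display only; saved pixel values stay untouched

# Alternatively, render to OpenEXR, which always stores linear data
scene.render.image_settings.file_format = "OPEN_EXR"
```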
Very nice video! Thanks for sharing! ❤
An error message appears when importing color mask and depth map sequence frames:
Error occurred when executing VHS_LoadImagesPath:
No files in directory 'C:\tmp\color'.
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
What might I have done wrong?
You need to change the values in every "Load Images (Path)" node:
skip_first_images - "0" (this count means that you start generation from the first frame in your sequence; it can start from any frame you want)
select_every_nth - "1"
Also set your frame count in the "ImageLoadCap" setting - at first to "20", for a test. And change the resolution: the higher the resolution, the longer the wait.
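If the "No files in directory" error still shows up after that, it usually just means the node sees an empty or wrong path; a quick check in plain Python (run outside ComfyUI, path taken from the error message) confirms what is actually there:

```python
# Sanity check: list what the Load Images (Path) node would find in the folder.
import os

path = r"C:\tmp\color"   # the directory from the error message
exts = (".png", ".jpg", ".jpeg", ".webp")

if not os.path.isdir(path):
    print("Directory does not exist:", path)
else:
    frames = sorted(f for f in os.listdir(path) if f.lower().endswith(exts))
    print(f"Found {len(frames)} frames")
    print(frames[:5])
```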
Have you picked up on the excitement surrounding VideoGPT? It's reshaping the landscape of video creativity.
Have you tried the native ComfyUI add-on for Blender? I wonder if that could speed up this process.
Wow! Amazing workflow that you worked out here! Thanks for all the inspiration!
Love your video. BTW you can use LooseControl, which is able to turn cubes into various objects :D
This is wonderful. I’m actually an illustrator who would like to aggressively speed up my workflow by putting my own designs in and using 3D models to generate (largely finished, line art only) images that I can then go on and tweak by hand for finishes.
How would I, instead of pulling these prompts from just the general AI system, use my own artwork as the… LoRA? I think it’s called? Do you have a tutorial on that using this same workflow?
The workflow is perfect, but I just need an ability to upload my own pencils and inks. Thanks!
Subscribed 💥
I'd like to go just one day without hearing about goddamn AI.
maybe build a time machine and go back to the 90s
@@denzelcanvasYT That would be nice
Then stop clicking on the videos about AI
@@MikeWazowski-x7k That wouldn’t change anything.
Was gonna say, you'll have to get AGI to help you build a time machine then, 'cause the cat's not going back in the bag.
My ComfyUI does not run the mask preview node, for some unknown reason.
I'm going to turn off the preview node and try to keep working with it.
Thanks for sharing such a nice tutorial.
Supporting you on Patreon. Is there also a file where there are more than 4 masks? Sorry, I am thinking a bit crazy, like 40 for a complex scene 🙈 Probably I could take some time to set it up...
My initial thought: it would be very cool to do this per object, with the AI style acting kind of like the material per object.
super, thanks
Thanks!!!
Very cool, would the addition of normal passes help with the consistency of the render? Would adding in a lighting pass work as well?
Oh yes, a normal pass would work fantastically. I have actually tested this with SD 1.5. The problem is that there is no "normal" control net for Stable Diffusion XL yet. The videos generated by SD1.5 were extremely coherent, but unfortunately also pretty ugly. So I gave up on it and focused on SDXL for this video. But as soon as there is a control net for it, this workflow should work even better
AI graphics gives me migraine.
Kinda
@@ghfgh_ Not kinda. It does.
@@CivilProtection5562 Literally the only purpose for it right now is ideation, and even that is kinda narrow, especially when you actually have an idea in mind; it's only really good for exploring ideas you would never have thought of otherwise. The funny thing is non-artists don't even know how to use it, and I don't blame them: it's pretty hard to control, and they have no idea how to construct an image in the first place and have never thought about the variables, so they basically just spray and pray and cherry-pick whichever ones it spits out that they like
@@cranberrycanvas Do you really think non-artists don't know how to use AI art? I mean, it's not that complicated
@@SwoleKitchen Everyone knows how to use AI to generate images. The problem is not that they don't know how to use "AI art". It's that they don't know how to use art. These people are not needed by the industry and no one cares about their brainfarts. I mean prompts.
If I’m not mistaken, the blender bit can be removed entirely, no? Can’t you do the same separation in photoshop with an image? Or after effects with a video?
amazing!!!!
Man, if you could get consistent characters I would be sold. I managed to do the same exact thing just using Pika and Runway AI and animating models in Unity.
But you can! Use the same prompt for the character in every scene. Create one image of your character (full body or face). Load it into the ipAdapter and connect the mask for your character into the "attn mask" input in your ipAdapter. ...I should make a video about this haha
@@mickmumpitz If you could, that would be great, because the one thing I need is consistency in the character models.
One other thing I managed to do was take a sub-par 3D model and pass it through Krea AI, which makes the character look super nice, then combine frames from Unity into that to output a nicer image
Great video as usual, could you mention your PC specs?
Lowest setup I tested was 16 GB RAM and an RTX 2070 Super with 8 GB VRAM. It works but takes a long time.
Hey Mick, great video! I've been learning Blender for a while and I'm interested in trying your approach of combining it with AI. I'm planning to build a new PC for this purpose. Do you mind sharing your PC's specifications? It would be helpful for me to know what kind of hardware can handle this type of workflow.
Hi ! Amazing tutorial but I get stuck when I need to find "ComfyUI Manager -> Install missing Custom Nodes" to install the required nodes
Hi! Hello! Could you help me? Why doesn't the color mask work like yours? I followed the same steps! Thanks!!
I'm going to accomplish an awesome video with this by Morning 🌄 🙌 TY TY TY!!!!
the "Load Images (Path) MASKS and other dont work, when you search for upload it simply doesnt find the images in the folder, how you select the images to upload?
I've been using a similar workflow, although for shorter sequences. I want to build a product for this; integrated straight into Blender it would be insane!
Where can we find the link to the free video workflow? Also, it would be great to have your initial input images for testing. Thanks so much!!
Does this work with 2d grease pencil?
Nice workflow ... also, after watching your new 3d_to_anim workflow and seeing 6:15 // "after a few seconds":
my 3060 (12 GB) / i7 PC takes about 10 minutes to render ... why? I am using "Mickmumpitz_AI-RENDERING_SDXL_IMG_FREE_v06" and all your recommended models ...
thanks in advance for any help
Solved: it's running using a SamplerCustom; in the KSampler select dpmpp_2m and an SD Turbo scheduler ... + the SDXL Turbo fp16 safetensors file ... thx
Hello, after I drag the workflow to Comfy, I go to "ComfyUI Manager -> Install missing Custom Nodes." but do not see an option to install missing Custom Nodes. Any suggestions? THANKS.