That's amazing! One more interesting thing would be to swap the default bot for another 3D model to extract the lines and the 3D model stuff.
Closer and closer to full consistency.
Wow. Thank you SO much for everything. This was amazing.
Excellent tutorial!
thank you! Excellent tutorial :)
This is great!! 🎉😊
We used a similar technique for a big institutional video last month. We managed to make a good animation of a girl doing a backflip, which is a nightmare for any AI model. However, you're missing an important step here that we always use in VFX and that people forget to use with AI.
Tried it and have some questions I would like to ask: 1. When pushing the retarget button, why did Blender become unresponsive for about 10+ minutes, and why is the retargeting not right? 2. How do I keep the background consistent? I successfully ran the whole process, but in Comfy I couldn't maintain the background, it's constantly flickering (I imported a standalone background into Blender to start with). Thank you very much for sharing again!
Retargeting can take quite a while depending on your hardware configuration, but I've never had any issues with the result. Are you using AutoRig Pro? The flickering background is a general issue with StableDiffusion, though I believe it can be minimized with the right AnimateDiff settings. You could also separate the background from the person with a SEGS mask and render them separately, keeping the background rendering at a very low denoising value so it stays more stable. If you don't have much camera movement, you could also render a single background image and feed it into the segmented scene as a static background. I'm currently working on a showcase that does exactly that; it should be out in the coming week.
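To make the static-background idea a bit more concrete, here's a rough Python sketch of just the compositing step (this is not from the actual workflow, and the folder and file names are only placeholders). It assumes you've already exported the rendered frames plus matching black-and-white person masks, for example from a SEGS pass in Comfy, and it pastes each frame's person over one fixed background image with Pillow:

# Minimal sketch: composite a single static background behind per-frame
# person masks so the background cannot flicker.
# Assumes frames and masks were already exported; names are hypothetical.
from pathlib import Path
from PIL import Image

frames_dir = Path("frames")        # rendered character frames, e.g. frame_0001.png
masks_dir = Path("masks")          # white = person, black = background
background = Image.open("background.png").convert("RGB")
out_dir = Path("composited")
out_dir.mkdir(exist_ok=True)

for frame_path in sorted(frames_dir.glob("*.png")):
    frame = Image.open(frame_path).convert("RGB")
    mask = Image.open(masks_dir / frame_path.name).convert("L")
    bg = background.resize(frame.size)
    # Keep the frame where the mask is white (the person), the static
    # background everywhere else.
    composite = Image.composite(frame, bg, mask)
    composite.save(out_dir / frame_path.name)

Because the background image itself never changes, only the masked person area can differ from frame to frame, so the background stays rock solid.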
Thank you for your response! Yes, I was using AutoRig Pro, but I don't know why it took so long to rig them. And I got your advice on how to maintain the background. Looking forward to your next video. @-RenderRealm-
Thanks so much.
Aye, you just described the future of a whole new animation method.
Not new, it's already been used in big productions since October.
How can we do the opposite? I want to take an OpenPose from a still image and use it to retarget the OpenPose Blender rig with another model.
The author has a video showing how to retarget with another model.
Let's make it simpler with mocap for Blender, so it's easier to do motion capture rather than being limited to Mixamo. Would you mind making a video about it? toast
Great job, but I can only find the images where you were supposed to share the workflow with us. Is this correct?
The images contain the workflows. Just download them, start ComfyUI, and drag the image onto your ComfyUI browser window; the workflow should then be loaded.
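By the way, if drag-and-drop does nothing, you can quickly check whether the image you grabbed really has a workflow embedded. As far as I know ComfyUI stores the workflow as JSON in the PNG text metadata under a "workflow" key; this is just a rough check script and the file name is only an example:

# Quick sketch: check whether a downloaded image has a ComfyUI workflow embedded.
import json
from PIL import Image

img = Image.open("workflow_example.png")   # the image downloaded from the post
workflow_json = img.info.get("workflow")   # None if no workflow metadata exists

if workflow_json:
    workflow = json.loads(workflow_json)
    print("Embedded workflow found, top-level keys:", list(workflow.keys()))
else:
    print("No workflow metadata in this image, drag-and-drop won't load anything")

If you only get the "no workflow" message, you probably grabbed a re-compressed copy (many sites strip PNG metadata), so download the original file instead.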
Great!
It seems great!! I couldn't test it because the Manager doesn't give me access to "IPAdapterApplyEncoded", it's missing for me...
The workflow is a bit old; "IPAdapterApplyEncoded" was replaced by the new "IPAdapter Advanced". Just replace the node and reconnect it.
First 🎉🎉🎉🎉🎉
Fucking awesome video! You are a life saver!!!