Great video, I have added it to the project links. If you encounter any problems, please feel free to raise them in an ISSUE. (Blender YES~)
Thanks for the mention and for the support
This looks awesome, will try it out, thank u
Oh boy, this is exciting. Wonder if you can import an existing workflow in there, and if it's compatible with any custom nodes. Being able to edit the composition of your generation like this saves time and opens up potential.
Hi, for the import feature, I don't think there is currently a way to reproduce an existing workflow; generally you need to start from scratch.
After this video I also tested a few things myself, like LoRA and SDXL models, and they work well, though they require different nodes.
There is also the possibility of image-to-3D, but I haven't tried it yet, and it seems AnimateDiff for consistent animations should be supported too.
To be sure the add-on is compatible with a specific model, check this list; these are all the working nodes tested by the developer:
github.com/AIGODLIKE/ComfyUI-BlenderAI-node?tab=readme-ov-file#tested-nodes
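For anyone curious what a from-scratch workflow looks like under the hood: ComfyUI (and therefore this add-on) describes a workflow as a graph of nodes, and its API format is plain JSON. Below is a minimal sketch of such a graph as a Python dict; the checkpoint filename and node ids are placeholders I made up for illustration, not something from the video:

```python
import json

# Minimal text-to-image graph in ComfyUI's API (JSON) format.
# Each ["1", 0] pair is a wire: source node id + output slot index.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},  # placeholder file
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "a stylized render of a cube"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "blurry, low quality"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 0, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "blender_test"}},
}

# ComfyUI's /prompt endpoint expects the graph wrapped as {"prompt": ...}:
payload = json.dumps({"prompt": workflow})
```

The add-on builds this same structure visually inside Blender, which is why the tested-nodes list above matters: any node you drop in has to exist as a `class_type` on the ComfyUI side.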
@@Gioxyer Thanks for the info! Nice to see it already supports a handful of custom nodes, hope the project gets the attention it needs to keep going 👍
You're welcome. Considering that it can already run inside Blender, it's a good option for anyone who wants to experiment more deeply with the Viewport.
INPUT IMAGE NOT SHOWING: after installing and connecting to ComfyUI, generate an image, then restart ComfyUI and Blender. The Input Image node should now be available.
I can't find the Input Image node, please help. Did you install any custom nodes?
All working great, all nodes are fine, but it gets stuck executing the VAE Decode step when I try the cube.
Since it is not said clearly I would kindly like to ask two questions:
1) Is it realistic to use this ComfyUI plugin to generate AI output that is ANIMATED?
2) Also, is it realistic to do it on a freeware basis?
I always get ('Invalid Node Type: 输入图像.001',) with the Blender viewport. (输入图像 is the Chinese name of the Input Image node.)
Hello, I am using Blender 4.2.3. I can install it, however when I load the model I can't see the ckc.
Which error does it give you?
Hey there, I've only used Automatic1111, but does this add-on include ControlNet and inpainting?
Also, can it use SD 1.5 and XL checkpoints and LoRAs?
Hehe, I started watching the video a bit and you already started answering this stuff xd
Do we need a specific graphics card and does this really have consistent memory for a full movie, or will we see glitches?
I suggest using an Nvidia graphics card with at least 8 GB of VRAM, because a lot of nodes use CUDA cores. Alternatively, if you'd like to test on cloud GPUs, there are some free and paid options; you then upload your render/animation there.
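As a small aside on the 8 GB suggestion: you can query your card's memory with `nvidia-smi --query-gpu=memory.total --format=csv,noheader` and check the result in a couple of lines. The helper below only parses that command's output format; the sample strings are made up, not from a real card:

```python
def parse_vram_mib(smi_line: str) -> int:
    """Turn an nvidia-smi line like '8192 MiB' into an integer MiB value."""
    return int(smi_line.strip().split()[0])

def has_enough_vram(smi_line: str, minimum_mib: int = 8192) -> bool:
    """True if the reported total memory meets the suggested 8 GB minimum."""
    return parse_vram_mib(smi_line) >= minimum_mib

# Example with a hypothetical 8 GB card:
ok = has_enough_vram("8192 MiB")
```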
@@Gioxyer Thank you! I'll probably see if I can find a free cloud GPU. Times are tough with these prices. From what you've noticed, did the character stay consistent with the 3D model, just in a different style? Or does it change slightly (or more) the more you move the bones or the camera?
I generally use Salt AI, which doesn't use any credits; I made a new video about it.
To keep the character consistent, I advise using OpenPose, which lets you use ControlNet to set the pose more precisely, and also keeping the denoise at the same level as the reference (image or viewport).
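To make the denoise advice concrete in ComfyUI terms: when sampling from a latent encoded from your reference, you lower the KSampler's `denoise` input so the result stays anchored to the viewport capture, and you route the positive conditioning through an OpenPose ControlNet. The sketch below uses hypothetical node ids and a placeholder ControlNet filename, purely for illustration:

```python
def make_sampler_inputs(denoise: float) -> dict:
    """KSampler inputs wired to a reference latent; node ids are hypothetical."""
    return {
        "model": ["1", 0],
        "positive": ["10", 0],     # conditioning coming out of ControlNetApply
        "negative": ["3", 0],
        "latent_image": ["8", 0],  # latent encoded from the reference image/viewport
        "seed": 42, "steps": 20, "cfg": 7.0,
        "sampler_name": "euler", "scheduler": "normal",
        "denoise": denoise,
    }

# OpenPose ControlNet nodes feeding the sampler's positive conditioning:
controlnet_nodes = {
    "9": {"class_type": "ControlNetLoader",
          "inputs": {"control_net_name": "control_openpose.safetensors"}},  # placeholder
    "10": {"class_type": "ControlNetApply",
           "inputs": {"conditioning": ["2", 0], "control_net": ["9", 0],
                      "image": ["7", 0], "strength": 1.0}},
}

# A denoise around 0.5-0.6 keeps the generation close to the reference,
# while denoise=1.0 ignores the reference latent entirely:
sampler = make_sampler_inputs(denoise=0.55)
```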
Don't use the Grease Pencil... it doesn't add anything to your presentation!
Make sure that you use the nodes correctly:
1) Set Input Image to Viewport (the last button)
2) Use a Blank Grease Pencil in Draw Mode
3) Test different poses
If you encounter any issues, you can report them on GitHub or email the creator directly:
github.com/AIGODLIKE/ComfyUI-BlenderAI-node/issues
@@Gioxyer 🤣🤣