Great video, great workflow and thank you for including the graph.
cool workflow, thanks!
So cool! Thanks so much! ♡
I have a question: can face fix & body fix work on a single-person image?
Yes it can work
@@cgpixel6745 thank you! I'll try it!
Thank you so much for the detailed video....
You're welcome, I am happy that you like it.
I think you can add an Ultimate SD Upscale node to the last step and improve the quality tenfold. Thanks for sharing.
Yes, you can do that. I had the same idea, but since it takes too much time for my PC to handle, I did not include it.
Hi thanks and really great video, but which part or node exactly I can replace to allow me use my image character to create consistent character sheet too, and also if can give me your contact to speak about some work?
Hey bro! Thanks for sharing! I have an issue: the save image node inside the character sheet part of the workflow always outputs a black image (no image generated). Any thoughts?
Make sure to use the right VAE for the VAE decode; don't plug the SDXL VAE that comes with the SDXL model into the Flux nodes.
Is it possible to start with an image that has the full-body poses but also the head facing left, right, front, and back? Thinking it would be good to train a LoRA with body and head like this.
Yes, it is possible to do that by replacing the first part of the workflow with load image nodes. It is a good way to train a LoRA using these multi-pose images.
@@cgpixel6745 Solid, thanks for the reply. I am a bit of a hack with all of this but will have a crack.
Thanks!
Thanks for the great video and workflow. Can we apply an image prompt instead of a text prompt in order to generate the character sheet of an existing character? I have looked at workflows from other channels as well but could not find it. I don't want to go the old way of Fooocus because no matter the weights, it still changes the character. I want the consistency; that is why I am asking. Thank you for your reply in advance :)
Yes, you can do that by replacing the first part of the workflow with a load image node.
@ thanks a lot 😄
I assume we can use the final rendered image to create a LoRA with FluxGym, to then be able to generate a consistent character with Flux in any pose or expression, right? I have never tried creating a LoRA myself, so I am not sure if this is enough for it.
Yes, from what I know it can work, but you will need a minimum of 20 images of your subject.
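For anyone who wants to turn the final render into a LoRA training set, here is a minimal sketch (not from the video) that splits a character sheet into separate images with Pillow. The file name character_sheet.png and the 3×3 grid layout are assumptions; adjust them to match your own sheet.

```python
# Minimal sketch: split a character sheet into individual images for LoRA training.
# Assumes the sheet is a regular grid of poses; adjust COLS/ROWS and the file name.
from pathlib import Path
from PIL import Image

SHEET = Path("character_sheet.png")  # hypothetical name of the final render
OUT_DIR = Path("lora_dataset")
COLS, ROWS = 3, 3                    # assumed grid layout of the sheet

OUT_DIR.mkdir(exist_ok=True)
sheet = Image.open(SHEET)
tile_w, tile_h = sheet.width // COLS, sheet.height // ROWS

for row in range(ROWS):
    for col in range(COLS):
        box = (col * tile_w, row * tile_h, (col + 1) * tile_w, (row + 1) * tile_h)
        sheet.crop(box).save(OUT_DIR / f"pose_{row * COLS + col:02d}.png")
```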
I continually get a no-VAE-connected error: "this controlnet needs a vae". If I connect the VAE from the load checkpoint "juggernautxl", I then get another error, "Mat1 and Mat2 shapes cannot be multiplied". Not sure where to go with this now.
Then just plug in the VAE from the checkpoint loader of the SDXL model; it should fix it.
@@cgpixel6745 Every SDXL model I tried from the checkpoint loader gives a "Mat1 and Mat2 shapes cannot be multiplied" error. I was using the exact model you have in your workflow, and it asks for a VAE connection. If I connect it from the checkpoint loader to the VAE input on the Apply ControlNet node, I get the same "Mat1 and Mat2 shapes cannot be multiplied" error.
@@loquacious1956 I encountered a similar issue, and my solution was: in the first work area under the ControlNet model, use the file recommended by the author instead of using Flux's ControlNet.
great
thanks
The images don't have much quality. Can it be fixed?
For SDXL you can use a turbo or lightning model. As for the final results, they should be good since we are using the Flux model combined with a realism LoRA.
This isn’t really a tutorial. You’re just explaining the workflow. Do you have any videos that teach how everything works and fits together?
Yeah, and when you understand how the workflow works, you just have to click queue prompt and it will run automatically, so it is a tutorial. If you want to understand how each node works, it is going to be very difficult for you.
This is more like a walkthrough of a workflow. No one learns anything here. Anyway, no need to be defensive. Take the feedback or ignore it. Up to you.
@@kevinbatdorf This video allows you to start from nothing and get an image with multiple angles and consistent results, so for me it's enough. Edit: don't bother responding, since I am very convinced of my ideas.
Very good video, however I'm getting this error "ValueError: Fast download using 'hf_transfer' is enabled (HF_HUB_ENABLE_HF_TRANSFER=1) but 'hf_transfer' package is not available in your environment. Try `pip install hf_transfer`." Do you have any tips?
From what I know, it is related to your internet connection. You can check this link for more info: github.com/huggingface/huggingface_hub/issues/1831
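For anyone hitting the same error, here is a rough sketch of the two usual fixes suggested by the error message itself (an assumption, not something from the video): either install the missing package with `pip install hf_transfer`, or turn the fast-download flag off before ComfyUI imports huggingface_hub.

```python
# Rough sketch: disable the hf_transfer fast-download path when the package is missing.
# Put this at the very top of the launch script, before huggingface_hub is imported,
# because the HF_HUB_ENABLE_HF_TRANSFER flag is read at import time.
import importlib.util
import os

if importlib.util.find_spec("hf_transfer") is None:
    os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "0"  # fall back to the standard downloader
    print("hf_transfer not installed; alternatively run: pip install hf_transfer")
```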
How much RAM is needed?
[Errno 2] No such file or directory: 'D:\\ComfyUI_windows_portable_nvidia\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-tbox\\..\\..\\models\\annotator\\lllyasviel\\Annotators\\.cache\\huggingface\\download\\body_pose_model.pth.25a948c16078b0f08e236bda51a385d855ef4c153598947c28c0d47ed94bb746.incomplete