this is brilliant. seems like a way to create consistent characters with just a single image.
Bro, it's like you're reading my mind! Every time I run into a new issue or have a new objective, I just check YT and I see your latest video covering it. Keep on crushing it!
Great Video Bro thanks a lot
Thank you for the workflow, it works very well.
Crazy good stuff man!
You're a truly great person. Thank you for sharing your best practices with us. I'll keep an eye on your progress.
Thank you so much! I have just one question - how can I add a scarf? Segmentation has no option for scarfs. It thinks that scarf is pants and puts it on like pants
thanks for the great work man!!!
I am trying to adjust your workflow so it will work with Dev and not GGUF, but getting errors in StyleModelApply about the dimensions. do you have a working workflow?
Can you please mention the exact error?
@@xclbrxtra StyleModelApply
Sizes of tensors must match except in dimension 1. Expected size 2048 but got size 4096 for tensor number 1 in the list.
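For context, here is a minimal sketch (plain Python, no torch required) of the shape rule behind that message: concatenation requires every dimension except the concat axis to match. 4096 is Flux's conditioning width, so a 2048-wide tensor next to it usually points to conditioning from a non-Flux text-encoder setup; that diagnosis is an assumption, and the token counts below are illustrative.

```python
# Mimic torch.cat's shape validation: all dimensions except `dim`
# must match across tensors, which is exactly what StyleModelApply
# trips over when it appends Redux image tokens to text conditioning.

def check_cat_shapes(shapes, dim):
    base = shapes[0]
    for i, shape in enumerate(shapes[1:], start=1):
        for d, (expected, got) in enumerate(zip(base, shape)):
            if d != dim and expected != got:
                raise ValueError(
                    f"Sizes of tensors must match except in dimension {dim}. "
                    f"Expected size {expected} but got size {got} "
                    f"for tensor number {i} in the list."
                )

# Illustrative: 2048-wide text conditioning vs 4096-wide Redux tokens.
try:
    check_cat_shapes([(1, 256, 2048), (1, 729, 4096)], dim=1)
except ValueError as err:
    print(err)
```

If the widths match (both 4096), the same check passes, which is why swapping in the matching Flux text-encoder setup is the usual fix.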
Hi bro, great content. Can we have a how-to video for it? Thanks
Great Tutorial and Workflow
For me it's changing the model's face too; the output face doesn't match the original.
Is it better than CatVTON plus a Flux refiner?
Nice trick, but won't the output resolution/detail be limited, since we're concatenating two images and processing them at once? How big can the concatenated image be with this model?
Hi, Mac user here, got this error: DownloadAndLoadSAM2Model
Torch not compiled with CUDA enabled
What can I do about it?
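A small sketch of the usual fix: nodes that hard-code CUDA need to fall back to Apple's MPS backend (or CPU) on a Mac. The function below only shows the selection logic; with real torch you'd feed it `torch.cuda.is_available()` and `torch.backends.mps.is_available()`.

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the device string a Mac-friendly loader should use."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"  # Apple Silicon GPU via Metal
    return "cpu"

# On an Apple Silicon Mac with no CUDA:
print(pick_device(False, True))  # mps
```

Some SAM2 loader nodes also expose a device dropdown; if yours does, switching it from cuda to mps or cpu achieves the same thing (an assumption about that node's options).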
Hey, thanks for the tutorial. I have the following problem. Do you have any idea how to fix it?
mat1 and mat2 shapes cannot be multiplied (577x1024 and 1152x12288)
That usually means something has the wrong dimensions. Which node is red in your workflow?
I have the same issue
Are you guys getting this error: CLIPVisionLoader
Error(s) in loading state_dict for CLIPVisionModelProjection:
size mismatch for vision_model.embeddings.patch_embedding.weight: copying a param with shape torch.Size([1152, 3, 14, 14]) from checkpoint, the shape in current model is torch.Size([1024, 3, 14, 14]).??
Anyway, it worked after updating ComfyUI.
Getting the same error
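For reference, the 1152-vs-1024 mismatch is diagnostic: the checkpoint's patch-embedding hidden size identifies the encoder family. Redux needs the SigLIP vision encoder (hidden size 1152), while older ComfyUI builds only constructed a CLIP-ViT-sized model (1024), which is why updating ComfyUI resolves the load. A rough sketch of that mapping (the family labels are my assumption):

```python
def identify_vision_encoder(patch_embedding_shape):
    """Guess the vision-encoder family from patch_embedding.weight's shape."""
    hidden_size = patch_embedding_shape[0]
    if hidden_size == 1152:
        return "SigLIP"      # what the Redux workflow expects
    if hidden_size == 1024:
        return "CLIP-ViT-L"  # what older ComfyUI builds constructed
    return "unknown"

# Shape reported in the error above:
print(identify_vision_encoder((1152, 3, 14, 14)))  # SigLIP
```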
Could you also try mockup generator?
Is it possible to change the bottom wear too, or to swap the body instead of the clothes? Like using a body reference instead of a clothes reference.
The clothes are long-sleeved, but the model is wearing short-sleeved clothes. The resulting image shows the model wearing a short-sleeved shirt.
If you inpaint the whole arm, then within 1-2 tries it will generate the long-sleeved outfit.
Can you please leave a link for SigLIP?
Hello, I got this error, any tips on how to resolve it?
Failed to validate prompt for output 220:
* MaskToImage 229:
- Return type mismatch between linked nodes: mask, received_type(*) mismatch input_type(MASK)
Where do the models go in ComfyUI? Maybe do an installation tutorial? Thanks
In the ComfyUI folder, find the models folder.
In the models folder, the Flux Fill model goes in the unet folder, and the Redux file goes in the style_models folder (create the style_models folder if it's not there).
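The placement above can be sketched in a few lines of Python; the root path and the exact file names are assumptions for a typical portable install.

```python
from pathlib import Path

models = Path("ComfyUI/models")

# Flux Fill goes in unet/, Redux in style_models/ (created if missing).
(models / "unet").mkdir(parents=True, exist_ok=True)
(models / "style_models").mkdir(parents=True, exist_ok=True)

# Then copy the downloaded files, e.g.:
#   flux1-fill-dev.safetensors  -> ComfyUI/models/unet/
#   flux1-redux-dev.safetensors -> ComfyUI/models/style_models/
print((models / "style_models").is_dir())  # True
```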
@@xclbrxtra thank you!
Can it work with cartoons? or other image styles?
Is the flux redux model a style model? I am getting this error "invalid style model C:\ComfyUI\ComfyUI_windows_portable\ComfyUI\models\style_models\flux1-redux-dev.safetensors"
Can this workflow work for products as well?
I am getting this error: CLIPVisionLoader
Error(s) in loading state_dict for CLIPVisionModelProjection:
size mismatch for vision_model.embeddings.patch_embedding.weight: copying a param with shape torch.Size([1152, 3, 14, 14]) from checkpoint, the shape in current model is torch.Size([1024, 3, 14, 14]).
Anyway, it worked after updating ComfyUI.
Bro, how do I handle the error message "UnetLoaderGGUF 'conv_in.weight'" reported by Unet Loader (GGUF)?
Unfortunately, I can't import the ComfyUI-Florence2 custom nodes (IMPORT FAILED). T_T
Did you fix it? I have the same problem.
I am getting this error even though I have the correct model. Please help. FYI, I already updated ComfyUI and everything else.
"StyleModelLoader
invalid style model D:\ComfyUI_windows_portable\ComfyUI\models\style_models\flux1-redux-dev-1.safetensors"
Is this working with jewelry?
It should, but I'd suggest painting the mask manually for it, as auto-detection won't be able to trace the intricate edges.
It doesn't run at all on my RTX 3060.
What's the VRAM? If it's 12 GB then it should work, as I'm running this on 8 GB VRAM.
@@xclbrxtra Yes, mine is 12GB, I think other factors affected it yesterday, I used it again today and it runs very well, it just takes 5-6 minutes. Be that as it may, thank you so much for the tutorial, it's fantastic!
CLIPVisionEncode
'NoneType' object has no attribute 'encode_image'
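That 'NoneType' error means the CLIP vision model never actually loaded, so CLIPVisionEncode receives None and then calls .encode_image on it. A hypothetical guard makes the real cause visible (the helper name is mine, not a ComfyUI API):

```python
def encode_with_guard(clip_vision, image):
    """Fail loudly when the upstream CLIPVisionLoader produced nothing."""
    if clip_vision is None:
        raise RuntimeError(
            "CLIP vision model is None - re-check that the CLIPVisionLoader "
            "points at a valid SigLIP checkpoint and loaded without errors."
        )
    return clip_vision.encode_image(image)
```

In practice the fix is upstream: resolve the CLIPVisionLoader error (wrong or missing checkpoint) and this one disappears with it.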