Please tell me how to fix this error: Trying to set a tensor of shape torch.Size([4098, 1024]) in "weight" (which has shape torch.Size([1026, 1024])), this looks incorrect.
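For anyone hitting this: a shape mismatch like this usually means the checkpoint file belongs to a different model variant than the one the node expects (see the "diffusion_pytorch_model" vs. Flux LoRA confusion mentioned below). A minimal, hypothetical sketch of the shape check a weight loader performs before copying a tensor (function name is mine, not ComfyUI's):

```python
def load_weight(param_shape, checkpoint_shape):
    """Mimic the shape check a loader performs before copying weights:
    a state-dict entry's shape must match the target parameter's shape."""
    if param_shape != checkpoint_shape:
        raise ValueError(
            f'Trying to set a tensor of shape {checkpoint_shape} in "weight" '
            f"(which has shape {param_shape}), this looks incorrect."
        )
    return "ok"

# The loaded model expects a (1026, 1024) weight; a checkpoint for a
# different variant provides (4098, 1024) -> wrong file for this model.
print(load_weight((1026, 1024), (1026, 1024)))  # ok
```

In practice the fix is to re-download the exact file linked for the workflow rather than a similarly named one.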
Thank you for the excellent video! I tried this workflow as well as the one with manual masking, but in both cases, I'm getting gray noise in the mask area. Do you have any idea what I might have missed?
In your video you use the flux lora model for the LoraLoaderModelOnly node, but the file you linked and downloaded in the video is called "diffusion..." (2:00). I found the flux lora model on the internet, and the file size is exactly the same. So people who get an error have to download the "other" file or rename it. It is a little bit confusing, but OK. 😄
After installing the missing nodes, I tried your workflow. At the beginning it downloads the Florence 2 model while the workflow is queued. It took a while, but that is fine for somebody who knows what is happening and doesn't abort the queue (unlike me the first time). After aborting there is an error, but after deleting the Florence files I was able to download it again.
In the end, your workflow works well. I tried it with faces, hair, clothes, and so on. I also tried sunglasses, but when somebody isn't wearing any glasses, it can't put sunglasses on them. Sometimes it generates slightly different faces, and also different eyes for people who wear glasses, because it includes the eyes in the mask.
But your workflow will help with most things. Thanks!
Thank you for the excellent video! The workflow uses the Florence2Flux model; could you please let me know which folder this model should go into?
Perfect! 💙
Thank you for the great tutorial! May I ask what graphics card you use? I'm also curious about its VRAM.
Awesome 👌
Thanks 🤗
Hello, I'm getting an error for the DualCLIPLoader (GGUF): clip_name1 and clip_name2 are undefined. Also, the lora model is diffusion_pytorch_model instead of the one you have, which is Flux.1-turbo. I'm not sure why :(
Hi, your tutorials are very nice. Would you consider reviewing the Nvidia Jetson Orin Nano? It could be useful for people who don't have a powerful PC, or helpful for running ComfyUI from our laptops.
Firstly, I really like your tutorials! Can you please make tutorials about commercial AI ads like you did before, maybe something like a product LoRA + model LoRA + style LoRA with realistic generated images (and animating them)?
Thanks, nice work! Can I ask what strength number and strength_type you put in the Apply Style Model node? It is not expanded in the video. Thank you. My image looks too strong in the outcome.
Sorry, got it: I fixed it by loading the image at a large 1024x1024 size, and everything is perfect. Thanks again.
Yes, you need to leave all settings at their defaults. You can get better results using a higher image resolution. Thank you.
I fixed the clip archives, but now I'm getting a black image as result, do you know why?
Did you set the DualCLIPLoader to use the same clip for both?
@@aidan6536 yep, the same as in the video
Try updating your ComfyUI to the latest version; this should fix it.
I've got an error in the CLIP Text Encode node: 'NoneType' object has no attribute 'device'. Please, how can I fix that?
Check your DualCLIPLoader to see if you have the appropriate CLIP files. If not, download them and put them in the clip folder under models.
@@AiMotionStudio Hi, thank you for your tutorial. It is exactly what I needed; however, I have two unresolved problems and I wonder if you can help me: LoraLoaderModelOnly #81 and DualCLIPLoader #41 (GGUF) show red borders. I have updated everything, installed the missing nodes, and nothing seems to work. I would appreciate your assistance.
Is it possible to use the openpose controlnet with Flux Fill?
Yes, it is possible. Openpose, depth map, and canny are part of the Flux tools that were released. I will look into the workflow and maybe do a video about it in the future.
Is it possible to upload here your own mask made in Photoshop?
I will release the Pt. 2 tutorial, which includes manual masking in ComfyUI; however, it is done directly in ComfyUI and not Photoshop.
I want something with canny, inpainting, and redux at the same time, to be able to make half-human, half-robot characters.
🎉🎉🎉🎉
Unfortunately, it doesn't work with t5xxl_fp8. And t5xxl_fp16 is too big for my GPU to handle.
If you are already using a Flux model, just use a clip that your GPU can handle and leave the remaining models at their defaults.
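Some rough arithmetic on why the fp16 text encoder is so heavy: the T5-XXL encoder has on the order of 4.7B parameters (an approximation), and the weights alone cost parameter count times bytes per weight, before any activations or the diffusion model itself:

```python
def model_weight_gb(num_params, bytes_per_param):
    """Approximate size of the weights alone, in gigabytes (GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

t5xxl_params = 4.7e9  # approximate parameter count of the T5-XXL encoder

print(round(model_weight_gb(t5xxl_params, 2), 1))  # fp16 (2 bytes): 9.4
print(round(model_weight_gb(t5xxl_params, 1), 1))  # fp8  (1 byte):  4.7
```

This is why dropping to fp8 (or a GGUF-quantized clip) roughly halves the encoder's memory footprint.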
You can offload the clip to the CPU with launch flags, I believe.