Ultimate Flux Fill Inpainting + Flux Redux Model Workflow | ComfyUI Tutorial Pt. 1

  • Published 2 Feb 2025

COMMENTS • 32

  • @dmitriy_fatio9992 3 days ago

    Please tell me how to fix this error: Trying to set a tensor of shape torch.Size([4098, 1024]) in "weight" (which has shape torch.Size([1026, 1024])), this looks incorrect.
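(Editor's note: this kind of message usually means the loaded checkpoint does not match the architecture the node expects, e.g. a wrong CLIP or model file was selected. A minimal sketch of the error class, with hypothetical shapes taken from the comment above:)

```python
import torch

# A module that expects a [1026, 1024] weight, fed a checkpoint tensor
# of shape [4098, 1024] (i.e. a different model variant) will fail.
emb = torch.nn.Embedding(1026, 1024)
ckpt_weight = torch.zeros(4098, 1024)

try:
    emb.weight.data.copy_(ckpt_weight)  # incompatible shapes
except RuntimeError as err:
    print(f"shape mismatch: {err}")
```

The usual fix is to double-check that the file selected in the loader node is the one the workflow actually calls for, not a similarly named variant.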

  • @carlosfelipe20 26 days ago +1

    Thank you for the excellent video! I tried this workflow as well as the one with manual masking, but in both cases, I'm getting gray noise in the mask area. Do you have any idea what I might have missed?

  • @Datenkralle 10 days ago

    In your video you use the Flux LoRA model for the LoraLoaderModelOnly node, but the file you linked and downloaded in the video is called "diffusion..." (2:00). I found the Flux LoRA model on the internet, and the file size is exactly the same. So people who get an error have to download the "other" file, or rename it. It's a little confusing, but OK. 😄
    After installing the missing nodes, I tried your workflow. At the beginning it downloads the Florence 2 model during the workflow queue. It took a while, but that's fine for somebody who knows what is happening and won't abort the queue (like I did the first time). After aborting there was an error, but after deleting the Florence files I was able to download them again.
    In the end, your workflow works well. I tried it with faces, hair, clothes and so on. I also tried sunglasses, but if somebody isn't wearing any glasses, it can't put sunglasses on them. Sometimes it generates slightly different faces, and different eyes for people who wear glasses, because it also marks the eyes in the mask.
    But your workflow will help with most things. Thanks!

  • @zodiacfengshuisecrets 16 days ago

    Thank you for the excellent video! The workflow uses the Florence2Flux model; could you please let me know which folder this model should go into?

  • @ruslopuslo 1 month ago

    Perfect! 💙

  • @Bookmark_Design-qs7hj 1 month ago

    Thank you for the great tutorial! May I ask what graphics card you use? I'm also curious about its VRAM.

  • @katejackson1682 1 month ago +1

    Awesome 👌

  • @Philip-Chan12 15 days ago

    Hello, I'm getting an error for the DualCLIPLoader (GGUF): clip_name1 and clip_name2 are undefined. Also, the LoRA model is diffusion_pytorch_model instead of the one you have, which is Flux.1-turbo. I'm not sure why :(

  • @ashishjumade4882 1 month ago

    Hi, your tutorials are very nice. Would you review whether the Nvidia Jetson Orin Nano can be useful for people who don't have a powerful PC, or whether it can help run ComfyUI on our laptops?

  • @nbholai 22 days ago

    Firstly, I really like your tutorials! Can you please make tutorials about commercial AI ads like you did before, maybe something like a product LoRA + model LoRA + style LoRA with realistic generated images (and animating them)?

  • @JemmyWong79 1 month ago

    Thanks, nice work! Can I ask what strength number and strength_type you set in the Apply Style Model node? It isn't expanded, and my image looks too strong in the outcome.
    Sorry, got it: loading the image at a large 1024x1024 size fixes it and everything is perfect. Thanks again.

    • @AiMotionStudio 1 month ago

      Yes, you need to leave all settings at default; you can get better results using a higher image resolution. Thank you.

  • 1 month ago

    I fixed the clip archives, but now I'm getting a black image as a result. Do you know why?

    • @aidan6536 1 month ago

      Did you set the DualCLIPLoader to use the same CLIP for both?

    • 1 month ago

      @@aidan6536 yep, the same as in the video

    • @AiMotionStudio 1 month ago

      Try updating your ComfyUI to the latest version; this should fix it.

  • 1 month ago

    I've got an error in the CLIP Text Encode node: 'NoneType' object has no attribute 'device'. Please, how can I fix that?

    • @AiMotionStudio 1 month ago +2

      Check your DualCLIPLoader to see if you have the appropriate CLIP files; if not, download them and put them in the clip folder under models.
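(Editor's note: for reference, the layout described above usually looks like the following; the filenames are examples and your exact files may differ.)

```
ComfyUI/
└── models/
    └── clip/
        ├── clip_l.safetensors
        └── t5xxl_fp16.safetensors   (or a t5xxl fp8 variant)
```

After placing the files, refresh or restart ComfyUI so the loader's dropdown picks them up.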

    • @La-verdad-aunque-duela 25 days ago

      @@AiMotionStudio Hi, thank you for your tutorial. It is exactly what I needed; however, I have two unresolved problems and I wonder if you can help me: LoraLoaderModelOnly #81 and DualCLIPLoader (GGUF) #41 show red borders. I have updated everything, installed the missing nodes, and nothing seems to work. I would appreciate your assistance.

  • @arjuneswarrajendran 1 month ago

    Is it possible to use the OpenPose ControlNet with Flux Fill?

    • @AiMotionStudio 1 month ago +1

      Yes, it is possible. OpenPose, depth map and canny are part of the Flux tools that were released. I will look into the workflow and maybe do a video about it in the future.

  • @Djonsing 1 month ago

    Is it possible to upload your own mask made in Photoshop here?

    • @AiMotionStudio 1 month ago +1

      I will release the Pt. 2 tutorial, which includes manual masking; however, it is done directly in ComfyUI and not Photoshop.

  • @mostafamostafa-fi7kr 14 days ago

    I want something with Canny, inpainting and Redux at the same time, to be able to make half-human, half-robot characters.

  • @ajokesmith124 1 month ago

    🎉🎉🎉🎉

  • @xxx-zw2hy 1 month ago

    Unfortunately, it doesn't work with t5xxl_fp8. And t5xxl_fp16 is too big for my GPU to handle.

    • @AiMotionStudio 1 month ago +1

      If you are already using a Flux model, just use a CLIP that your GPU can handle and leave the remaining models at default.

    • @charimuvilla8693 1 day ago

      You can offload CLIP to the CPU with launch flags, I believe.
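(Editor's note: a rough back-of-the-envelope sketch of why the fp8 T5-XXL encoder fits where the fp16 one doesn't. The ~4.7B parameter count is an approximation; exact figures vary by checkpoint, and activations add overhead on top of the weights.)

```python
# Approximate weight-only memory for the T5-XXL text encoder,
# assuming roughly 4.7 billion parameters.
PARAMS = 4.7e9

def weights_gb(bytes_per_param: float) -> float:
    """Size of the weights alone, in gigabytes (GiB)."""
    return PARAMS * bytes_per_param / 1024**3

fp16 = weights_gb(2)  # 2 bytes per parameter -> roughly 8.8 GiB
fp8 = weights_gb(1)   # 1 byte per parameter  -> roughly 4.4 GiB
print(f"fp16: {fp16:.1f} GiB, fp8: {fp8:.1f} GiB")
```

As for the flags mentioned above: ComfyUI's `--lowvram` launch option is one way to have models offloaded from the GPU when memory is tight.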