Mastering Inpainting with Invoke to Add, Remove, and Transform Objects In Your Images

  • Published Jan 26, 2025

COMMENTS • 45

  • @bkcottman
    @bkcottman 1 month ago +1

    Fantastic tutorial. Invoke is so incredibly powerful.

  • @JosamaBinBiden
    @JosamaBinBiden 1 month ago

    This is very well done. Good job my dude.

  • @MujaidinilTalebbanoNapoletano
    @MujaidinilTalebbanoNapoletano 2 months ago

    I followed it from start to finish and it was interesting and constructive, thanks for the work you do!

  • @VirtusRex48
    @VirtusRex48 2 months ago

    Can't wait to see the training process!!!

  • @johnmcaleer6917
    @johnmcaleer6917 1 month ago

    Great vid, lots of learning in that for me...thank you..

  • @ExplorewithZac
    @ExplorewithZac 2 months ago +2

    I'm currently in the process of building an inference UI very similar to InvokeAI... But the truth is that InvokeAI is so good that sometimes I wonder why I'm even building this... lol The main way mine differs is that it automates a lot of these features, and it has Compose, Transform, and Enhance tabs. I also have my Models & LoRAs dropdown on the right panel above the Image Gallery, because I've found that the right panel tends to have very little content in it... Having models on the right, with images, feels intuitive because they are both assets managed by the user.

    • @ExplorewithZac
      @ExplorewithZac 2 months ago +1

      Compose Workspace = txt2img and img2img
      Transform Workspace = inpainting, outpainting, live drawing
      Enhance Workspace = upscaling; exposure, contrast, and white balance correction.

  • @johnwilson7680
    @johnwilson7680 1 month ago

    Love your videos, Liked and Subscribed. Please consider 4K for future videos; with all the small text in the interface, it would really help. Thanks!

  • @ПРОСМЕЛЫХИБОЛЬШИХЛЮДЕЙ

    Extremely interesting. It didn't work out at first; I'm learning through translation. This is the first of your lessons where something has finally started to work. That's great, thank you. It's all fascinating, just a miracle. One more thing, maybe you can tell me: is it possible to install models manually, and into which folder? Everything here is different from Automatic1111, which I had already started to get used to.

  • @autonomousreviews2521
    @autonomousreviews2521 2 months ago

    Fantastic share - thank you :)

  • @NB-ec9wc
    @NB-ec9wc 2 months ago +2

    Where can I download the CustomIllusionXL model, please?

  • @gamalfarag
    @gamalfarag 2 months ago +2

    Where can I download the CustomIllusionXL model?

  • @AndreyJulpa
    @AndreyJulpa 2 months ago

    So much useful information, thank you! I have a question: I have a 3D render of an interior room. Can I somehow change the lighting to night/morning etc. without changing the geometry and textures of the furniture and other details?

  • @MagicBurnAI
    @MagicBurnAI 2 months ago +1

    Please fix the issue where, with each update, I have to re-download the same model every time.

  • @Larimuss
    @Larimuss 1 month ago

    Great tutorial, thanks! Love it. But I'm wondering where these models are from? CustomIllusion etc. Did Invoke train these? Why not include them in the app's starter models?
    What if we wanted, say, an exact house as guidance? Can we guide with an existing image? Also, for inpainting, wouldn't it be better to try the same seed first? Or does it really not matter?
    For inpainting, I assume weight is the same as prompt priority or coherence.

    • @invokeai
      @invokeai 14 days ago

      This is the model used: civitai.com/models/719084/customxl

  • @laustchylde
    @laustchylde 1 month ago

    Do you have a list of model behaviors that we can reference? You change models often under 'Generation' with an obvious understanding of what they do, so it would be very helpful to have a list we can reference somewhere, to the effect of "Model X = good for doing this" or "Model Y = will do that". Thanks

  • @Pawel_Mrozek
    @Pawel_Mrozek 2 months ago +1

    Damn. I was looking for where the denoising strength is in the new version for the better part of an hour ;) Why it's there, no one knows.

  • @ПРОСМЕЛЫХИБОЛЬШИХЛЮДЕЙ

    I downloaded and installed the new free version from the website, but for some reason I don't have the Denoising Strength slider in the interface. Either it's not there because I'm doing something wrong, or because it's the free version.

  • @entoincognito6597
    @entoincognito6597 2 months ago

    Hi, thank you for this very interesting tutorial. I'm new so I have a silly question, but is it possible to simply remove the background from an image and export the image as a PNG?

    • @invokeai
      @invokeai 2 months ago

      You can't export as a transparent PNG yet.

  • @garry3989
    @garry3989 2 months ago

    Can I ask if this is version dependent? The reason I'm asking is that I'm running 5.4.1rc1 and there is no denoising slider on the Generation tab.

    • @garry3989
      @garry3989 2 months ago

      Ignore that, I found it :)

  • @Kentel_AI
    @Kentel_AI 2 months ago

    Thanks :)

  • @joechip4822
    @joechip4822 2 months ago

    What I don't understand... when you created the images for infill and outpainting right at the start, why wasn't the night-scene prompt applied any more? The prompt was still there!

    • @invokeai
      @invokeai 2 months ago

      Similar to how the lower denoising strength in the "Nighttime" example didn't change the image to a nighttime scene (that only happened at a strength of 1), infilling/outpainting uses the color from the existing image as context.
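
      For readers puzzled by the denoising-strength discussion in this thread, here is a minimal pure-Python sketch of the underlying idea. This is a simplified stand-in (a linear blend with Gaussian noise), not Invoke's actual implementation; real pipelines noise the image through the scheduler instead. It shows why a strength of 1.0 fully discards the source image while lower values preserve part of it:

      ```python
      import random

      def noised_start(pixels, strength, seed=0):
          """Illustrative only: img2img-style generation starts from the
          source image blended with noise in proportion to denoising
          strength. At strength 1.0 the starting point is pure noise, so
          the original contributes nothing; at 0.0 it passes through
          untouched."""
          rng = random.Random(seed)
          # A linear blend stands in for the scheduler's real noising step.
          return [(1.0 - strength) * p + strength * rng.gauss(0.0, 1.0)
                  for p in pixels]

      image = [0.5] * 8                 # tiny stand-in "existing image"
      kept = noised_start(image, 0.0)   # identical to the source
      gone = noised_start(image, 1.0)   # pure noise; source fully destroyed
      ```

      At intermediate strengths the start point still carries the source's colors and large-scale structure, which is why infill and outpaint regions inherit the existing image's palette even when the prompt says otherwise.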

  • @TheNjordy
    @TheNjordy 2 months ago

    Wait, so 1.0 denoise still leaves some "meaning" on the canvas? The previous image is not completely gone?

  • @Avalon1951
    @Avalon1951 2 months ago

    Can you guys get Playground 2.5 to work in Invoke?

    • @invokeai
      @invokeai 2 months ago

      We likely won't be incorporating it, but contributors are welcome to add it.

    • @Avalon1951
      @Avalon1951 2 months ago

      @@invokeai Why would they add it if it doesn't work in Invoke? That's too bad, because personally I think it's one of the best models out there.

  • @GeoBook
    @GeoBook 26 days ago

    I'm doing the same thing as you, but the result is completely different... The image ends up looking like a sand painting. However, I'm using different models because, honestly, what even is this CustomIllusionXL?

    • @invokeai
      @invokeai 14 days ago

      This is the model used: civitai.com/models/719084/customxl

    • @GeoBook
      @GeoBook 14 days ago

      @@invokeai Thank you for your response!

  • @Taladar2003
    @Taladar2003 2 months ago +3

    I think the sadness might have come out better if you had included the ears in the inpaint mask; that expression in animals is often portrayed with droopy ears.

    • @invokeai
      @invokeai 2 months ago +1

      Great point!

  • @HyewonAn-q7r
    @HyewonAn-q7r 2 months ago +2

    Wow, this is insane..

    • @simjounhax
      @simjounhax 2 months ago

      Me too... whoa

  • @DezorianGuy
    @DezorianGuy 2 months ago +1

    Could you please use examples with human characters? You only show landscapes, and that's maybe not what people mostly generate. Or is Invoke only good at architecture and landscapes?

    • @invokeai
      @invokeai 2 months ago +1

      We typically ask our live audience for direction and this is what they typically ask for, so feel free to join a future live stream and make your request!

    • @DezorianGuy
      @DezorianGuy 2 months ago +1

      @@invokeai I don't know your audience, but I'm quite sure you should include a humanoid example in each of your future videos. Or are there Invoke tutorial channels focusing on real-life generation scenarios?

  • @TheNjordy
    @TheNjordy 2 months ago +1

    I'm not sure I like the idea of moving the denoising slider to the layers area. It's still a parameter for the generation process. Perhaps it would be better to place this slider in one of the corners of the canvas box, and if you move the slider, it would be nice if the image inside the canvas box would show increased noise. Why? First, this would make it very easy to understand for a newbie who might not grasp what it does without a lengthy tutorial explanation. Secondly, it's an additional visual aid for the user to see how much information is being approximately destroyed. We are visual, not digital creatures, and it's not easy to imagine adding noise to an image by 11%.