Inpainting Tutorial - Stable Diffusion

  • Published 19 Dec 2024

COMMENTS • 312

  • @sebastiankamph
    @sebastiankamph  1 year ago +2

    Early access to videos as a Patreon supporter www.patreon.com/sebastiankamph

  • @nightai
    @nightai 1 year ago +9

    I immediately scrolled to the comments after the coffee cup fiasco, and I wasn't disappointed. This community is so great; you can learn so much, so fast.

  • @bellsTheorem1138
    @bellsTheorem1138 1 year ago +164

    A much better solution when using "Masked Only" is to place a tiny dot of masking on or near content of the image that gives your masked region and prompt some context. What happens with Masked Only is that the image is cropped to just the masked part, so it often loses the context needed to fit the new generation in. So if you want to inpaint a hand, add a dot of mask further up the arm so it knows how the hand should be positioned and sized to match the rest of the arm. In your example, adding a tiny dot of mask to the other coffee cup would have produced a better result, simply because the crop will then include that contextual information. You have to leave in enough for the AI to work with.
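
A minimal sketch of the cropping behavior described above (an illustration of the idea, not A1111's actual code; Pillow assumed, coordinates made up): "Only masked" crops to the bounding box of everything you mask, so even a tiny context dot enlarges what the model gets to see.

```python
from PIL import Image, ImageDraw

# Stand-in for the real photo; in practice this is the image loaded in img2img.
img = Image.new("RGB", (1024, 768), "white")
mask = Image.new("L", img.size, 0)               # black = keep, white = repaint
draw = ImageDraw.Draw(mask)

draw.ellipse((600, 420, 700, 520), fill=255)     # main mask (the area to fix)
print("crop without context:", mask.getbbox())   # tight box around that area only

draw.ellipse((300, 380, 306, 386), fill=255)     # tiny "context dot" near the reference cup
print("crop with context dot:", mask.getbbox())  # box now spans both regions

# "Only masked" crops image and mask to this (padded) box, inpaints at the chosen
# resolution, and pastes the result back; the dot barely changes any pixels,
# but it pulls the surrounding context into what the model can see.
context_crop = img.crop(mask.getbbox())
```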

    • @brandonaraya7598
      @brandonaraya7598 1 year ago +6

      I do the same: just mask only, and add 2 points around the part that I want to change so the AI has something to work with.

    • @crobinso2010
      @crobinso2010 1 year ago +7

      Dot Contexting would make a good topic for an instruction video.

    • @loveutube04
      @loveutube04 1 year ago +2

      How do I remove the clothes? It is not working for me.

    • @nirl6171
      @nirl6171 1 year ago +2

      @@loveutube04 😂

    • @DerXavia
      @DerXavia 1 year ago +2

      @@loveutube04 Get a cloth adjuster LoRA, works like a charm :)

  • @zoybean
    @zoybean 1 year ago +190

    Hey Seb, just a little tip. Once you're done inpainting, you can put the final image into img2img with very low denoise to remove inpainting blurs, shadows, etc. for a smoother result.
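
A rough sketch of the same trick outside the UI, for anyone scripting this: run the finished inpaint through an img2img pass at very low strength. This assumes the Hugging Face diffusers library and an SD 1.5 checkpoint; the model id, prompt, and file names are placeholders.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = Image.open("inpainted_result.png").convert("RGB")

# strength ~0.1-0.2: a light re-noise pass, enough to blend seams and blur
# without changing the composition.
smoothed = pipe(
    prompt="photo of a cafe table with two coffee cups",  # describe the whole image
    image=image,
    strength=0.15,
    guidance_scale=7.0,
).images[0]
smoothed.save("smoothed.png")
```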

    • @doingtime20
      @doingtime20 1 year ago +3

      I thought of the same thing, as I was having trouble with an image that is a painting. Even with very little denoising strength, img2img still manages to change small crucial details like the eyes. I wish there was a better solution to this, since the "only mask" option leaves a much smoother surface than the rest of the image and makes it look out of place. It's not so much trouble when it's a photo, but in other kinds of images, like paintings, the "blur" of the fix stands out more. :S

    • @spanko685
      @spanko685 1 year ago

      How do you use this interface, the one he's showing in the video?

    • @tannhausergate7162
      @tannhausergate7162 1 year ago +5

      @@spanko685 It's Automatic1111, which is relatively easy to install. Just make sure you follow the steps exactly, especially when it comes to the Python version stated. Too recent is just as bad as too old, because some of the components are very finicky. There are tons of videos and websites covering it specifically, as it's the most popular and probably still the most feature rich UI.

    • @spanko685
      @spanko685 1 year ago

      @@tannhausergate7162 thx so much

    • @abetuna2707
      @abetuna2707 1 year ago

      How? Can you elaborate, please?

  • @Antalion20
    @Antalion20 1 year ago +124

    The reason the coffee cup doesn't fit well within the image is that the render box for the inpaint area is such a small part of the image - anything generated is done only within the context of a) what is not denoised, i.e. the original image, and b) what SD can actually see (within the render box). For high denoising strength you generally want a larger render box, otherwise it's easy to lose context.
    But what if you only want to change a small area? No problem! The render area is created as a bounding box that contains all the inpaint area you've selected - so you can increase it by adding tiny dots of inpaint area to the scene. If you make them very small, then whatever is behind them will generally be unchanged once rendered, so only the main area you've selected will be altered - but the dots will still count towards the bounding box. In the example given, I would put one dot above and to the left of the first coffee cup, and one at the bottom and to the right of the table. That way, what is rendered will probably adhere more closely to both the focus (the blur on the coffee cup) and the orientation and size of the table.
    For lower denoising strength (0.5 or below, I'd say) it will generally be able to glean the context from what remains of the original image, but for anything higher I get much better results with this method.
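
For comparison with the padding approach discussed in the replies below, here is a hedged sketch of what "Only masked padding, pixels" amounts to: the mask's bounding box is simply grown by a fixed margin before cropping (Pillow assumed, numbers illustrative).

```python
from PIL import Image, ImageDraw

def padded_crop_box(mask, padding):
    """Bounding box of the white mask area, grown by `padding` px and clamped to the image."""
    left, top, right, bottom = mask.getbbox()
    w, h = mask.size
    return (max(0, left - padding), max(0, top - padding),
            min(w, right + padding), min(h, bottom + padding))

mask = Image.new("L", (1024, 768), 0)            # stand-in for the drawn mask
ImageDraw.Draw(mask).ellipse((600, 420, 700, 520), fill=255)

print(padded_crop_box(mask, padding=32))   # roughly what the model sees at padding 32
print(padded_crop_box(mask, padding=256))  # much more surrounding context
```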

    • @sebastiankamph
      @sebastiankamph  1 year ago +24

      Clever! Nice tip 🌟

    • @EmperorZ19
      @EmperorZ19 1 year ago +9

      Is this different from simply increasing the mask padding to include that larger area?

    • @morganandreason
      @morganandreason 1 year ago +5

      @@EmperorZ19 That was my thought exactly. Isn't that exactly what the "Only masked padding, pixels" slider is for?

    • @blisterfingers8169
      @blisterfingers8169 1 year ago +5

      @@morganandreason Mask padding is exactly what this is for but it only goes so big. InvokeAI has a much better solution with an actual bounding box you can move around.

    • @morganandreason
      @morganandreason 1 year ago +2

      @@blisterfingers8169 I can see how a movable bounding box is a lot better, yes. Hope it comes to Auto1111.

  • @zvit
    @zvit 1 year ago +35

    The reason you were struggling with the coffee cup is that "Whole picture" is not just there to keep the inpainted part at the same resolution; it tells the inpaint engine to look at the entire picture when drawing. So you will get a perfectly sized cup, and correct sunlight on the cup coming from the window, for example. But with "Only masked", it only looks at the area of the cup, and that's why it won't fit the scene as well.

  • @MariyamJayne
    @MariyamJayne 1 year ago +1

    I love you! In a week you have turned me from someone with zero experience using AI tools into a pro at using Stable Diffusion. Thank you for all the amazing tutorials!

  • @runebinder
    @runebinder 11 months ago

    Hi, I only got into Stable Diffusion a couple of weeks ago and hadn't had much luck with inpainting. This tutorial made a lot of sense, and I got much better results with my first inpaint after watching it. Thanks :)

  • @pizzaluvah
    @pizzaluvah 1 year ago

    I'm all for self-deprecating humo(u)r, but in all seriousness, you are a very capable teacher. Thank you.

  • @Minimalici0us
    @Minimalici0us 1 year ago +1

    1:09 - The scroll works by holding down Shift and scrolling the mouse wheel

  • @AIKnowledge2Go
    @AIKnowledge2Go 1 year ago +3

    Awesome tutorial. Understanding the difference between the masked content options "original" and "latent noise" helped me so much. As others have already mentioned, making the inpaint area bigger also helps. When I inpaint body parts like the upper torso, I often mask the start of the arms and the neck as well, so that inpaint understands in which direction the body is moving.

  • @MaHo-c9m
    @MaHo-c9m 1 year ago +16

    Thank you for these tutorials and sharing your process. They've been a huge help for me as a beginner to these tools.

    • @sebastiankamph
      @sebastiankamph  1 year ago +2

      You're very welcome! 😊😊

    • @TheGalacticIndian
      @TheGalacticIndian 1 year ago

      @@sebastiankamph I would love to see an OUTpainting tutorial in Automatic1111 too 🐣

  • @ovalshrimp
    @ovalshrimp 1 year ago +4

    This was helpful Seb, thanks. Inpainting has always been a bit of a mystery.

  • @user-jk9zr3sc5h
    @user-jk9zr3sc5h 1 year ago +3

    Alright, I've subbed? Joined? I don't know what the term is.
    I'm giving myself two full-time weeks to really pick up on Stable Diffusion, and you're helping to launch me. Thank you.

    • @sebastiankamph
      @sebastiankamph  1 year ago +2

      Thank you kindly for your support 😘 After binging my content you should be well and ready to compete with almost anyone in SD! 🌟

    • @user-jk9zr3sc5h
      @user-jk9zr3sc5h 1 year ago

      @@sebastiankamph The smallish content sizes really help with being able to scrub back and forth in a video. Thank you :)

  • @coda514
    @coda514 1 year ago +11

    Thank you for the knowledge. BTW, Did you hear about the artist who took things too far? Guess he didn't know where to draw the line.

  • @aliceleblanc7318
    @aliceleblanc7318 1 year ago +4

    I am new to the whole inpainting topic, and this video helped me a lot to get an overview of the possibilities. Many thanks ❤

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Glad it was helpful! Thank you for the kind words 🌟

  • @BrettArt-Channel
    @BrettArt-Channel 5 months ago +1

    Cool, thanks for the simple, perfect instructions 💪💪

  • @Smudgie
    @Smudgie 1 year ago +1

    It's like ASMR for SD. Thank you!

  • @Remowylliams
    @Remowylliams 1 year ago +2

    This is a great tutorial. While I worked through the pain of learning this myself, I know this would have helped me hugely. Bonus: I did learn a bit more about the blur settings. Thank you very much.

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      You're very welcome! Happy you learned something new 😊

  • @limajacques
    @limajacques 1 year ago

    Fine, I'll leave a comment after watching like a half dozen of your videos. Well paced, thorough explanations, technical expertise on the matter. Ok, then, I guess I have to thank you.

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Appreciate it! Community engagement helps more people see my videos, which in turn helps me 😊

  • @bentp4891
    @bentp4891 1 year ago

    I've always found inpainting to be a bit hit and miss. That was useful. Thanks.

  • @Fabstron
    @Fabstron 1 year ago

    Seb I could listen to your voice for hours

  • @JorshusPrime
    @JorshusPrime 1 year ago

    I never imagined inpainting would be so simple, which is why I'm here. I was like, nice, I can draw black lines on stuff, how does that help me? I genuinely thought you had to be fluent in digital painting for this. Thanks for the tutorial, dude.

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Happy to help, glad you liked it! Tell a friend 🌟😊

    • @JorshusPrime
      @JorshusPrime 1 year ago

      @@sebastiankamph I do every time I learn something new in SD :)

  • @interestedinstuff
    @interestedinstuff 1 year ago

    Very useful video as usual. I've been struggling to get this to work. Getting the hang of it now, including help from the comments in this section. Thanks, folks.

  • @danielwooten4509
    @danielwooten4509 10 months ago

    Bro doing the lord's work. Thanks for your awesome tutorials

  • @MrPlasmo
    @MrPlasmo 1 year ago +1

    Damn, thanks for the zooooom! Was wondering when we were gonna get it.

  • @alexkatzfey
    @alexkatzfey 1 year ago +1

    Awesome stuff! I was wondering what all the different settings for inpainting were for. I've just started messing around with Stable Diffusion and watching your videos has definitely helped me to start figuring out what's possible. Thanks again!

  • @p_p
    @p_p 1 year ago +1

    9:25 Switch to "Whole picture" instead of "Only masked" for that particular task.

  • @Roughneck7712
    @Roughneck7712 1 year ago

    Very nice video. Inpainting in SD is not intuitive but your simple, to the point instructions helped to explain some issues I've been having with it. Thank you!

  • @MultiOmega1911
    @MultiOmega1911 1 year ago +3

    Marvelous tutorials as usual, keep up the good work!

  • @darkbelg
    @darkbelg 1 year ago

    This was a really good and condensed tutorial, thanks.

  • @itsban
    @itsban 7 months ago

    Thanks!

    • @sebastiankamph
      @sebastiankamph  7 months ago

      Happy to help! Thank you so much for your support :)

  • @cyberprompt
    @cyberprompt 1 year ago

    Bookmarked, liked, and subscribed. Mastering these parts is so important, and there's so much to try and fail at!

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Welcome aboard! You'll find lots of valuable resources here, I hope 😉

  • @michaelleue7594
    @michaelleue7594 1 year ago +11

    I wish there was a tool for drawing, like, a heat map on an image, that allowed you to highlight areas where major changes are required and areas where minor changes would be better, in the context of the whole image. Inpainting with a mask is a great tool, but it will always fail to account for image context. There's basically no way to get SD to make a second cup like the first one, for example (short of exporting to Photoshop or something). There's a tool in DALL-E where you can choose different mask colors and assign each color to a specific idea, and I feel like that would be really useful if you could also assign weights to those colors.

    • @KDawg5000
      @KDawg5000 1 year ago +4

      I thought maybe the value of the mask would affect how much is changed, so I did some experimenting. I created masks with values going from black to gray to white. In the end, it didn't matter. At some point stable diffusion switches from off to on, so there's no middle ground as far as I could tell.
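
One hedged workaround for the graded-strength idea, since the mask is effectively binarized: blend the inpainted result back over the original with a grayscale weight map as pure post-processing. numpy/Pillow assumed; the file names are placeholders, and both images must be the same size.

```python
import numpy as np
from PIL import Image

original  = np.asarray(Image.open("original.png").convert("RGB"), dtype=np.float32)
inpainted = np.asarray(Image.open("inpainted.png").convert("RGB"), dtype=np.float32)
weight    = np.asarray(Image.open("heatmap.png").convert("L"), dtype=np.float32) / 255.0
# weight: 1.0 = take the inpainted pixels fully, 0.0 = keep the original;
# values in between blend, so "minor change" areas get only a fraction of the edit.
blended = original * (1.0 - weight[..., None]) + inpainted * weight[..., None]
Image.fromarray(blended.astype(np.uint8)).save("blended.png")
```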

    • @DejayClayton
      @DejayClayton 1 year ago

      So basically, a heatmap-driven denoise parameter

    • @TheOriginalOrkdoop
      @TheOriginalOrkdoop 11 months ago

      Doesn't "ControlNet" or "segment anything" used with "inpaint anything" extensions do this?

  • @ThePenguinMejia
    @ThePenguinMejia 1 year ago

    Thanks for the guide, and thanks to the commenters for the extra tips.

  • @ish694
    @ish694 1 year ago

    This is soooo good. Loving the tutorials and the dad jokes, man!!

  • @loszhor
    @loszhor 1 year ago +1

    Thank you for the information.

  • @jubb1984
    @jubb1984 1 year ago

    I thought Sebastian was a bad teacher at first, but then he told a dad joke...
    That's it, that's all I've got.
    Thanks again for a good tutorial 😁👍 I have also been using the regular model for inpainting; it feels like more often than not the corresponding inpaint model is too truncated or seems to do the exact same thing as the regular one (or I'm just using it wrong).

  • @void2258
    @void2258 1 year ago +1

    I use InvokeAI when I need inpainting or outpainting, if at all possible (LoRA support coming in a few days means a lot of the cases I can't handle right now soon will be possible). Much better interface. Hoping I can switch over entirely once the migration to nodes makes add-ons possible.

  • @RamianP
    @RamianP 1 year ago

    Nice video. Finally worked, thanks!

  • @KissesLoveKawaii
    @KissesLoveKawaii 1 year ago +3

    The padding is useful to give the AI information about the outside when it should only change the inside. A balance of denoise and padding is needed when you find that the AI inpaints a whole body inside the face mask, especially at higher resolutions. I set mine at about 188 when generating at 1072 x 1400.

  • @hatuey6326
    @hatuey6326 1 year ago +1

    Great tutorial: when I use inpaint on my own face, trained with DreamBooth and the Protogen Infinity model, I usually inpaint after the first result with a 0.45 denoise.

  • @zz-sf1bm
    @zz-sf1bm 1 year ago

    love learning these skills, thx for the vid.

  • @titusfx
    @titusfx 1 year ago +2

    🎯 Key Takeaways for quick navigation:
    00:00 🎨 Inpainting is a key technique in Stable Diffusion for enhancing image quality and fixing specific areas.
    01:22 🖼️ When using inpainting in Stable Diffusion, choose the right settings.
    03:35 ⚙️ Adjust settings for inpainting, such as resolution and sampling methods.
    06:11 🧩 Inpainting allows you to add or modify objects in an image.
    08:55 🎨 Detailed inpainting can be achieved by iteratively refining the image.
    12:04 👍 Inpainting in Stable Diffusion becomes more effective with practice and fine-tuning settings.

  • @joeduffy52
    @joeduffy52 1 year ago

    A tutorial for inpainting in ComfyUI would be good 😉 Your SDXL Workflow file is the best I've tried so far

  • @nocturne.nocturnal
    @nocturne.nocturnal 1 year ago

    Fantastic tutorial man, informative and simple to follow

  • @cassiohol
    @cassiohol 1 year ago

    Great tutorial! Thank you

  • @devnull_
    @devnull_ 1 year ago +1

    Thanks. BTW - I've used inpaint sketch very little. The few times I tried, it was VERY laggy. It can't be the GPU, and I've got the latest drivers too. For you it seemed to be working OK.

  • @AlexSuns
    @AlexSuns 1 year ago

    thank you for making these videos! You are a great teacher!

  • @TheGalacticIndian
    @TheGalacticIndian 1 year ago

    Cool stuff, you're a great teacher!🎖

  • @DerXavia
    @DerXavia 1 year ago

    I did this before but always fucked it up, which was mainly because I still described the entire picture and didn't fit the resolution to the area. Your video helped a lot, thank you.

  • @ryan_cha190
    @ryan_cha190 1 year ago

    This is exactly what I need. Thank you.

  • @Aisaaax
    @Aisaaax 1 year ago

    I think that using something like photoshop in-between the inpainting steps is key to getting great results. 😮

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Yes, it surely helps a lot. Photobashing can yield great images.

  • @Jack-ok5uc
    @Jack-ok5uc 1 year ago +2

    I made an image of a man in a blue suit; it's like a portrait. I then went into img2img > inpaint and made his entire suit black with the marker tool. I have most of the settings the same as Seb's, including "original" for masked content, and the inpaint area is "only masked". Sampling steps at 25, Euler a, batch count 5, CFG scale 10, denoising 0.8. The positive prompt is "red suit" now. I click generate and I get 5 images on the right that are exactly the same as the original image on the left. I mean absolutely identical.
    What am I doing wrong?

    • @S4SA93
      @S4SA93 1 year ago +2

      Same problem, no matter what I do I get the exact same output. Anyone care to help?

  • @servbotz
    @servbotz 1 year ago +2

    Inpainting is awesome but super finicky. It really takes a lot of iterations. But one thing I like using inpainting for is different facial expressions for a video game or visual novel.

  • @vitezslavackermannferko7163

    7:35 This is so like Bob Ross' happy little accidents 😂

  • @ddrguy3008
    @ddrguy3008 1 year ago +1

    Is there a way to manage skin tone fixes? For example, in yours, the face is slightly off in tone compared to the rest/untouched parts. How do you go about fixing tone/brightness/contrast/etc. when you like what the seed produced, but the finer details keep it from blending together as seamlessly as you're trying for?
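
A hedged sketch of one generic way to tackle this kind of tone mismatch (a standard color-transfer trick, not something from the video): shift the inpainted region's per-channel mean and standard deviation toward the untouched pixels. numpy/Pillow assumed; file names are placeholders, and ideally you would sample the reference statistics from a band around the mask rather than the whole image.

```python
import numpy as np
from PIL import Image

img  = np.asarray(Image.open("inpainted.png").convert("RGB"), dtype=np.float32)
mask = np.asarray(Image.open("face_mask.png").convert("L")) > 127   # True = inpainted area

inside, outside = img[mask], img[~mask]
# Normalize the inpainted pixels, then rescale them to the untouched statistics.
matched = (inside - inside.mean(axis=0)) / (inside.std(axis=0) + 1e-6)
matched = matched * outside.std(axis=0) + outside.mean(axis=0)

out = img.copy()
out[mask] = np.clip(matched, 0, 255)
Image.fromarray(out.astype(np.uint8)).save("tone_matched.png")
```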

    • @ZeroCool22
      @ZeroCool22 1 year ago +1

      That's because you should use inpainting models; they not only ensure the edges/lines match, they also get the light/colors right.

    • @ddrguy3008
      @ddrguy3008 1 year ago

      What good inpainting models are there? I only see like 5 on Civitai.

    • @ZeroCool22
      @ZeroCool22 1 year ago

      @@ddrguy3008 You can create your own inpaint models too, using the Checkpoint Merger in Auto's GUI.

  • @ph6560
    @ph6560 1 year ago +1

    *My favorite Swedish Stable Diffusion Oracle,* I have two humble questions:
    *[1]* Is there any good deepfake face-swap plug-in (extension) for video in Stable Diffusion?
    *[2]* This video concerns inpainting for images. Are there inpainting extensions for video as well in SD?
    Now, I'm still a newbie, so my questions might have obvious answers; nonetheless, I trust the Oracle to guide me.

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Man, I wish I could help you. I'd love to be the Swedish oracle but I don't know of anything that can help you. Let me know if you find it.

  • @BobDoyleMedia
    @BobDoyleMedia 1 year ago

    Finally! Canvas Zoom!!

  • @bonecast6294
    @bonecast6294 1 year ago

    Thank you so much for this tutorial and for going over many important things here. I have a question, hope you don't mind.
    At 10:00, this is exactly how I would like to work; the sketch tools are extremely simple. How would the process you mention at 10:00 go? I am a pretty good artist, and I would love to draw in an almost-good cup there and then have the AI merge/blend it in better with the rest of the image. Or, if I am lazy, I would love to be able to concept-bash some stuff together and have the AI fix it up.

  • @Tcgtrainer
    @Tcgtrainer 1 year ago +1

    Do you know how to use ControlNet with inpainting? I can't get good results when also using ControlNet at the same time.

  • @20xd6
    @20xd6 1 year ago +1

    This was cool! Thank you.
    Do you have a vid that is focused on img2img (not inpainting)?
    Or a vid on outpainting would be very cool, too!
    One last vid I would like to see is how to craft txt2img prompts that avoid common defects, such as cropped heads, doubled body parts, heterochromia, crossed eyes, and other Stable Diffusion oddities.

  • @MrSongib
    @MrSongib 1 year ago

    I use inpaint sketch and got some halo around the inpaint area; are there settings that I messed up or something?
    Can you make some tutorials on that, sir?
    Edit: Ahh, it was the blur setting, I think. I'm still confused by it - is it for the mask or for the finished painting? idk xd
    5:52 Man, this warning is what cost me a lot of time; they need to fix this bug.
    6:00 onwards: one thing that I think you know, but in case you don't - if you want to inpaint something that already exists in the picture, it's better to use "Whole picture" so it can pick up colors and existing concepts that are already there, making it more cohesive in terms of color and aesthetics. Afterwards, you can refine the details with "Only masked". Even for something new, it's better to do the whole picture first and then work on the details in the "Only masked" area.

  • @jankvis
    @jankvis 1 year ago

    Thx again Seb, very useful :))

  • @raynnusvettmore
    @raynnusvettmore 1 year ago

    I have done inpainting many times but never understood the options. Thank you.

  • @EmperorZ19
    @EmperorZ19 1 year ago +1

    I'm not sure I understand the purpose of latent noise. If you're setting denoising strength to 1.0, doesn't that mean that the resulting image has nothing in common with what was inside of the mask, and it wouldn't matter which mask fill option you chose?

  • @elprimomelt
    @elprimomelt 1 year ago

    Thank u! amazing tutorial

  • @RADKIT
    @RADKIT 11 months ago

    Amazing explanation and video, and also a masterclass in efficient teaching! Thanks.
    I am only facing an issue with inpaint sketch where the whole UI gets laggy and slow, only with the sketch option. Do any potential causes cross your mind?

  • @awais6044
    @awais6044 1 year ago

    Is there any way to make Automatic detect the clothing area and, using a prompt, change only the clothes and not the real face in the image?

  • @abhirajmishra892
    @abhirajmishra892 1 year ago

    What is the checkpoint name at 0:36?

  • @ghhdgjjfjjggj
    @ghhdgjjfjjggj 1 year ago

    Thanks for the tutorial. But what if the face is good and the body is messed up? How can we do it the other way around?

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Paint the face and then press the button to use the inverted mask.

  • @elaxel1469
    @elaxel1469 1 year ago

    If we want to fill the face with a LoRA or DreamBooth model, which denoising strength is ideal?

  • @peerhenry
    @peerhenry 1 year ago

    Amazed that the AI nailed the Starbucks logo.

  • @누가스크류바
    @누가스크류바 1 year ago

    You're my new ASMR.

  • @ignat3802
    @ignat3802 9 months ago

    That was insightful, thanks. The pace of the video was good; I never felt like I was lagging behind. Yet I can't help but notice: how come your SD is so fast? Mine generates 10 times slower. Are there any guides on settings for Automatic1111? I have 8 GB of VRAM, Nvidia 1080.

  • @madmickey2957
    @madmickey2957 1 year ago

    How can I change the colors on the brush? Can't seem to find the setting for that.

  • @edouarddubois9402
    @edouarddubois9402 1 year ago

    What does the nfixer negative prompt do, exactly?

  • @S4SA93
    @S4SA93 1 year ago +2

    Does anyone on AMD have the problem of inpainting not doing anything at all to the picture? It renders and processes, but the result just looks identical every single time; there is no change whatsoever. If someone knows a fix, I would be grateful.

  • @JustJoe24
    @JustJoe24 1 year ago

    What do you have installed that lets you inpaint with color and, specifically, pan over the image to select the color of whatever you're hovering over?

  • @johndoe4004
    @johndoe4004 1 year ago

    Informative, but I do wish you had said what mask blur does as you set it, as well as "Only masked padding, pixels". I could see the result but not understand the why. As for the rest, it was very nice.

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Mask blur changes the blur of the mask edge, increasing or decreasing it. Padding just gives a larger area to work with to adapt for the resolution you want in the render.
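
A quick illustration of what the mask blur amounts to, as a sketch rather than A1111's actual implementation: the hard mask edge is feathered (e.g. with a Gaussian blur) so the inpainted pixels fade into the original instead of ending at a seam. Pillow assumed; file names and the blur radius are placeholders, and all images must be the same size.

```python
from PIL import Image, ImageFilter

mask = Image.open("mask.png").convert("L")            # white = area that was repainted
feathered = mask.filter(ImageFilter.GaussianBlur(8))  # roughly "Mask blur: 8"

original  = Image.open("original.png").convert("RGB")
inpainted = Image.open("inpainted.png").convert("RGB")

# The soft mask acts as per-pixel opacity, so the transition is gradual.
result = Image.composite(inpainted, original, feathered)
result.save("blended_edge.png")
```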

  • @williamcase426
    @williamcase426 6 months ago

    How do you get the canvas zoom to work? I've got the extension installed but spinning the mouse wheel while I'm over the image doesn't zoom.

  • @jankvis
    @jankvis 1 year ago

    Color picker: Seb, how did you replace the default color picker in Windows? The default color tool in Windows does not have a picker, just manual color settings. The one you use did. Pls reply.
    Cheers, Jan

    • @jankvis
      @jankvis 1 year ago

      Fixed, just change the web browser to Edge.

  • @marcelocarvalho7049
    @marcelocarvalho7049 1 year ago

    Great! Thank you so much for this =)

  • @tautegu
    @tautegu 1 year ago

    Is it possible to have an image-based mask rather than drawing the mask? I want to be able to render out my product and iterate on different backgrounds. By also rendering an alpha mask, could I use the black-and-white image as a mask so that the product is unaffected?
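
A hedged sketch of how this works outside the A1111 canvas, assuming the Hugging Face diffusers library: an inpainting pipeline accepts a mask image directly, where white areas are repainted and black areas are kept, so a rendered product alpha just needs to be inverted. The model id, prompt, and file paths are placeholders.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image, ImageOps

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

product = Image.open("product_render.png").convert("RGB").resize((512, 512))
alpha   = Image.open("product_alpha.png").convert("L").resize((512, 512))
mask    = ImageOps.invert(alpha)   # product black (kept), background white (repainted)

result = pipe(
    prompt="product photo on a marble countertop, soft studio lighting",
    image=product,
    mask_image=mask,
).images[0]
result.save("new_background.png")
```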

  • @vitoroliveira2203
    @vitoroliveira2203 1 year ago +1

    I would love a more detailed tutorial regarding the cup situation in the video. Many times when I try to use inpainting, I generate unsatisfactory results that do not blend well with the image at all, with some having a clearly different art style, lighting, and size. It just screams 'out of place', and I kind of give up after some time and go back to rolling the dice on img2img, hoping for a more detailed or pleasing image.

    • @ZeroCool22
      @ZeroCool22 1 year ago

      That's because you should use inpainting models; they not only ensure the edges/lines match, they also get the light/colors right.

  • @Avalon1951
    @Avalon1951 1 year ago

    What are the hotkeys to zoom in and out in Canvas Zoom, and how do you set them up?

    • @sebastiankamph
      @sebastiankamph  1 year ago

      Shift + scroll

    • @Avalon1951
      @Avalon1951 1 year ago

      @@sebastiankamph Nope, not happening. I'm inpainting right now and Shift + scroll wheel doesn't work. I'm on a PC and Canvas Zoom is active. Do I need to change a hotkey, and if so, how?

    • @sebastiankamph
      @sebastiankamph  1 year ago

      @@Avalon1951 I didn't change any settings. I just installed the extension and went to town.

    • @Avalon1951
      @Avalon1951 1 year ago

      @@sebastiankamph Then I'm truly lost. Also, some of the things I read said you can right-click on the canvas or tabs to change hotkeys; I'm not getting that either. Any thoughts?

    • @Avalon1951
      @Avalon1951 1 year ago

      @@sebastiankamph All good, I had to fix some errors I was getting with git pull; after updating, Canvas Zoom now works as intended. Another question: does --ar or --s750 work in A1111?

  • @ShortDullStories
    @ShortDullStories 1 year ago

    Wow, thx!
    I used a lot of inpainting for my newest video but still think it looks a little bit off in some places (e.g. the Maggot Robby faces). I just did a quick test with the width/height adjusted to the face proportions, and it yields waaaay better results.
    I think inpaint sketch is tricky; most of the time it looks really off. It takes much effort to repair it after an object is placed.
    Btw, loved your Bob Ross-style video; you really could open a second channel and just do that there.

  • @bradcasper4823
    @bradcasper4823 1 year ago

    Great video, thank you

  • @TheUndiagnosable
    @TheUndiagnosable 1 year ago

    I could use some help. I followed the instructions in the video, but SD keeps redrawing other parts of the image too, not just the area I painted with the mask. What am I doing wrong?

    • @stephenmilazzo2535
      @stephenmilazzo2535 1 year ago

      Make sure you "x" out the first image on the left and then drag the newly inpainted image over. Otherwise, it's like a layer on a layer and starts to change all sorts of stuff.

  • @Sai1523
    @Sai1523 1 year ago

    How come, when using it on illustrated faces, a lot of the time it just keeps generating weird stuff instead of a face (like flowers, vase designs, etc.)?

  • @lyonstyle
    @lyonstyle 1 year ago

    Did you get a new camera? The video looks very crisp. Great video as always.

    • @sebastiankamph
      @sebastiankamph 1 year ago

      Same camera. The video is in 1440p now, though (some scenes are upscaled from 1080p). Good to hear it looks better :D

  • @PkKingSlaya
    @PkKingSlaya 1 year ago

    Do you have a fix for the inpaint canvas disappearing with the image I'm trying to edit? The three little lines in the bottom right corner don't show and I can't drag out the canvas; the window just disappears. Reloading the page brings it back but doesn't fix the issue.

  • @Waffletoasters
    @Waffletoasters 10 months ago

    What kind of extension do you use for your colour picking? Because it is different from what I see onscreen.

    • @Waffletoasters
      @Waffletoasters 10 months ago

      I found out what it was; apparently when you use 1111 in Firefox it will load a different colour picker that is extremely limited (it can't eyedrop-pick from the image, for instance). So you kind of have to open it in Chrome instead for it to work.

  • @ddblue0
    @ddblue0 1 year ago

    Can you change models while doing this? Like use a different model for a sketch or to re-render a face?

  • @Blue-eu5qn
    @Blue-eu5qn 1 year ago

    Cool, but how do you remove the coffee cup so there is nothing there?

  • @LexChan
    @LexChan 9 months ago

    What if I have a specific coffee cup, earring, or object I want to put into the image? How?

  • @jazzlehazzle
    @jazzlehazzle 1 year ago

    Is there any way in Auto1111 to do a "magic erase" with these inpainting tools? As in remove object?

  • @crashdummyglory
    @crashdummyglory 1 year ago

    Thanks! What are the system requirements to run this? Or does this run in the cloud?

  • @Marksplaytime
    @Marksplaytime 1 year ago

    The question I have, after you jumped down the second-cup rabbit hole, is: can you just select or mask the existing coffee cup and make a copy to insert into the image, or would that be something I would export the project to Photoshop for and play with over there?

  • @kartikashri
    @kartikashri 1 year ago

    No matter what I do, I can't make anything new; it just blurs the area I inpaint! Any solution?