Stable Diffusion - Perfect Inpainting and Outpainting!

COMMENTS • 148

  • @dreamzdziner8484 2 years ago +14

    Damn. SD is getting more powerful every single day, I think. I'm so glad I found your channel a few months back. Thank you for sharing all the knowledge 🙏🏽👍🏻❤🔥

  • @woobeforethesun 2 years ago +4

    I have been inspired! I'm an IT professional, but my main desktop OS has always been Microsoft's Gaming OS. I just, finally, made the switch to Linux Mint (Cinnamon). It's mostly due to following your tutorials and taking the time with Grown Ups OS. I am a new convert! Thanks, as always, for sharing your knowledge, experience, workflows, etc., etc. Always appreciated!

    • @NerdyRodent 2 years ago +3

      Glad you’re having fun! Learning new stuff gets the old brain matter ticking over and strange, exciting, new ideas can form 😉

  • @TransformXRED 2 years ago +19

    There is a script for AUTOMATIC1111 that lets us draw more precise masks, in a bigger window too. It's listed in AUTOMATIC1111's repo. Very useful.

    • @Mocorn 2 years ago

      Oo, what is it called?

    • @knoopx 2 years ago +4

      Even better, there's one for automatic prompt-based masking.

    • @ThaimachineFilms 2 years ago

      Where?

    • @TransformXRED 2 years ago +1

      Mask drawing UI
      Listed under "custom scripts" on AUTOMATIC1111's repo page

  • @chyldstudios 2 years ago +8

    Your avatar is becoming sentient.

  • @FolkerHQ 2 years ago +44

    This is one of my favorite animations of a human-non-human face in the corner. Only sometimes, when you look too far down, does it go a bit flat; other than that, I feel like I'm watching a TV show from the 70s or so. Great. And thanks for the level-one noise tip; I had problems with inpainting as well, so that helped a lot.

    • @chacecampbell2697 2 years ago +8

      Give it a few months, it'll get so good that he'll give a secret face reveal and no one will notice 😂

    • @NerdyRodent 2 years ago +3

      😉 Glad it helped! Make amazing things!

    • @lucienmontandon8003 2 years ago +2

      @@NerdyRodent Hey Nerdy Rodent! I managed to animate my images with the thin plate model. It looks incredibly real, but I am wondering how you managed to make it that clean in this video? Do you record the driving video with a camera on a stand? What is your setup while making these videos? Is your source image upscaled?

    • @NerdyRodent 2 years ago +4

      @@lucienmontandon8003 I just have a webcam on my monitor 😀

    • @LegendD112 2 years ago

      @@lucienmontandon8003 Bro, what program do you use to do that?

  • @vectorr6651 1 year ago +1

    This is great, I just started watching your videos. What are you using for your webcam overlay?

    • @NerdyRodent 1 year ago +1

      I use the thin plate spline motion model, video link in the description!

  • @jay4hes 1 year ago

    The corner PiP got me 😂 Bro, you look like an anchorman from the 70s. Nice vid.

  • @skylerdrones777 1 year ago

    Your videos are great! How do you make the AI with your face over there in the corner of the video? Can you tell me which video you teach this in?

  • @secretsunofficial 1 year ago

    How did you create that animated real-time face at the bottom right?

    • @NerdyRodent 1 year ago

      I used the thin plate spline motion model for image animation (more information in the video description)

  • @arothmanmusic 2 years ago +2

    Ah! You've solved the mystery of why none of my inpainting attempts ever seem to make any change whatsoever to the output. Guess I need to change models...

  • @flonixcorn 2 years ago +1

    Always smashing the tutorials

    • @NerdyRodent 2 years ago

      Hope you make some things! 😉

  • @autonomousreviews2521 2 years ago +1

    Fantastic as always.

  • @58gpr 1 year ago +1

    Thank you for all your amazing videos!

  • @officialxeNTed 1 year ago +1

    Really good video! Thanks!

  • @jamiekingham5854 1 year ago +1

    How do you get to that interface? Is that an app or web interface? So slick. Great work.

    • @NerdyRodent 1 year ago

      Thanks! It’s the stable diffusion automatic1111 webui!

    • @jamiekingham5854 1 year ago

      @@NerdyRodent Thanks so much for replying. Your channel is incredibly helpful. Are you on PC? I'm on a Mac, but not M1 or M2. I think I've found out how to install it on an Intel (i9) machine, but it's all really confusing.

    • @NerdyRodent 1 year ago +1

      @@jamiekingham5854 Yup, I use a Linux PC + Nvidia as that’s best for all things AI 😀 Not used a Mac in years, I’m afraid!

  • @midgardian2216 1 year ago

    Did they pull support for A1111? When I try to open it, I just get a blue refresh arrow and nothing else. I have A1111 installed locally.

  • @InnocuousRemark 2 years ago +6

    Could you make a video simply explaining each element of the webui? Some of them are really unclear as to what they do.

    • @NerdyRodent 1 year ago

      Take a look at my earlier video for more of a feature overview vs. this workflow - ua-cam.com/video/XI5kYmfgu14/v-deo.html

  • @mattc3510 2 years ago +1

    Can you please explain, or do a video on, how to download and install the RunwayML inpainting model? I have tried, but even downloading it seems impossible; I can't find any download links.
    Thank you for the amazing tutorials 🙏

    • @NerdyRodent 2 years ago +2

      Done one already, just for you! Links are in the description, just download to your models directory exactly as shown! Stable Diffusion InPainting - RunwayML Model
      ua-cam.com/video/rYCIDGBYYnU/v-deo.html
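
      (A minimal sketch of that download step, assuming the huggingface_hub Python package and a default AUTOMATIC1111 folder layout - the paths are illustrative, so adjust to your install:)

        from huggingface_hub import hf_hub_download

        # Fetch the RunwayML inpainting checkpoint straight into the webui's
        # model folder. "models/Stable-diffusion" is the usual A1111 location.
        hf_hub_download(
            repo_id="runwayml/stable-diffusion-inpainting",
            filename="sd-v1-5-inpainting.ckpt",
            local_dir="models/Stable-diffusion",
        )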

  • @ThePhillShow 2 years ago

    Your stuff has been super helpful. I would really appreciate a video on massaging perspective, as it's something I'm having trouble making work. I mainly use SD for making backgrounds for the green-screen content I make, and getting the perspective right on the background has been difficult. If I wanted a straight-on picture of a "sci-fi factory interior" that would work well in the background of a side-scrolling kind of shot, how would I go about making SD construct the image in that way? I assume img2img would help, and sometimes it does, but sometimes it takes an image where I got the right perspective by luck and turns it into something with a totally different perspective, and adjusting denoising doesn't seem to affect it.
    Would love your input on this!

  • @gokuljs8704 1 year ago

    How do you make that bottom-right corner video?

  • @infographie 2 years ago

    Excellent

  • @pookienumnums 1 year ago

    I know this is an older video, but for people just now catching on: one thing I've noticed in the video that I feel has caused less-than-desirable results is mixing things that are opposites, such as 3D render with painting. You took out 3D and render (negative prompt) but left in Blender, Octane, Unreal... and those are 3D rendering engines. You can use whatever you want, and sometimes get cool stuff with things that don't correlate or make sense together... but if you're having issues getting a look you want, examine your prompts for things that might be contradicting one another.
    For example, you might use something like photorealism, photograph, photography, and modifiers like high quality, high definition.
    Also check for artists that are not suited to the style, subject matter, or medium you're trying to emulate. For example, you wouldn't use Kim Jung Gi for a photorealistic painting, or a 3D render.
    I found that, for me personally, doing things like that in effect cockblocks Stable from producing things down the right pathway I'm trying to go. If anyone wants to see what I've been doing over the last year or so, starting in Disco and moving to Stable, you can find my socials easily. GG, GL and HF~
    Edit: I don't use artist names in my prompts anymore, not since the middle of 2022 or so. I feel I personally get better results for what I'm doing without them.

    • @NerdyRodent 1 year ago

      I found exactly the opposite. Mixing styles gives me great results, and having things like “unreal engine” as a positive with “3d render” also produces awesome outcomes 😉

  • @zvit 2 years ago

    Do you find that you get better results when using the inpainting model for txt2img, as opposed to the regular or pruned SD v1.5?

    • @NerdyRodent 2 years ago

      Roughly similar for me. Have you been getting better results?

    • @zvit 2 years ago

      @@NerdyRodent It really depends on what I'm looking for. I just saw you using the inpainting model for txt2img at the beginning of the video, so I asked. Thanks for your great content!!

  • @sharperguy 1 year ago +4

    I wonder if we'll ever be able to merge the inpainting model with other models, so that we could inpaint in Arcane style, or with the various anime models.

    • @SZ_99 1 year ago +2

      You can now. There's a guide on Reddit, posted a few days ago, that lets you turn any model into an inpainting one.
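
      (For the curious, a minimal sketch of the "add difference" idea such guides describe, assuming plain .ckpt checkpoints; the filenames are placeholders, and A1111's checkpoint merger offers the same "Add difference" mode from the UI:)

        import torch

        # result = inpainting + (custom - base): graft a custom model's learned
        # changes onto the 9-channel inpainting architecture.
        a = torch.load("sd-v1-5-inpainting.ckpt", map_location="cpu")["state_dict"]
        b = torch.load("my-custom-model.ckpt", map_location="cpu")["state_dict"]
        c = torch.load("v1-5-pruned-emaonly.ckpt", map_location="cpu")["state_dict"]

        merged = {}
        for key, tensor in a.items():
            if key in b and key in c and b[key].shape == tensor.shape:
                merged[key] = tensor + (b[key] - c[key])
            else:
                # e.g. the extra mask input channels exist only in the inpainting model
                merged[key] = tensor

        torch.save({"state_dict": merged}, "my-custom-inpainting.ckpt")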

  • @florinflorin249 1 year ago

    Hey, I really like your videos. What software are you using for the webcam to show the animated image? Thanks!

    • @NerdyRodent 1 year ago

      Thanks! I’m using the thin plate spline motion model for image animation, as linked in the description 😀

    • @florinflorin249 1 year ago

      @@NerdyRodent But it is not real-time (i.e., the webcam is not using that motion model to alter your face in real time), correct?

    • @NerdyRodent 1 year ago +1

      @@florinflorin249 The motion is webcam driven, but not in real time. You can use Avatarify for real-time though

    • @houdinimasters 1 year ago

      @@NerdyRodent Is Avatarify the best option for something like this? Like for Twitch streaming, for example?

    • @NerdyRodent 1 year ago

      @@houdinimasters There are loads of live cams, but Avatarify is the only live version of the first order motion model I’m aware of

  • @catrocks 2 years ago +1

    Thanks for your videos ♥

  • @itaicarmeli1145 1 year ago

    Hadn't paid any mind to the scripts section... Outpainting - brilliant rodent!

  • @rokah1391 2 years ago

    Do you think there is a problem with the img2img alternative script? It isn't working for me and others; I get only unusable results, whatever I try in the settings.

  • @vipinsharma8923 2 years ago

    Great tutorial, and thank you. One question: how can we fix a black (basically blank) area that appears instead of the additional outpainting?

    • @NerdyRodent 2 years ago +1

      Either changing the model or using strength 1 usually works 😃 You can also play with the mask size. Depends what you're wanting to achieve!

    • @vipinsharma8923 2 years ago

      @@NerdyRodent Thanks for responding! I tried changing the model and the strength. I must be missing something, but I have the outpainting script installed and the inpainting model downloaded and selected. I'll play with the mask size, but it definitely feels like I'm missing a piece! I'll keep digging, and thanks for the help!

  • @sneedtube 2 years ago

    Is there any reason in particular that you started your first prompt with the inpainting ckpt rather than the full one? You said the latter should be better at generating more diverse outputs, so why not begin with that?

    • @NerdyRodent 2 years ago +2

      Start anywhere you like! It’s totally free form 😀

  • @Mandraw2012 2 years ago

    @Nerdy Rodent, how come you don't inpaint at full resolution? Is there a special reason?

    • @NerdyRodent 2 years ago +1

      Yup - the outputs can vary. Try both for the best result!

  • @therookiesplaybook 1 year ago

    How did you do the face in the corner? Please tell me you have a tutorial on that.

    • @NerdyRodent 1 year ago

      How to Animate faces from Stable Diffusion!
      ua-cam.com/video/Z7TLukqckR0/v-deo.html

  • @MarkRiverbank 2 years ago

    Recent PRs added the ability to make deeper HyperNetworks and, more critically, non-linear activation functions, which should greatly improve results. I think it may be time to revisit your comparisons.

  • @cupe4564 1 year ago

    What are your specs?

  • @beners 2 years ago

    Thanks for your channel, it's an incredible resource. Have you done any tutorials on the workflow for your talking heads in the corner? I always figured it was Snap Camera or something, but this is next level…

    • @NerdyRodent 2 years ago +2

      Yup - that’s in the description 😉

  • @aa-xn5hc 2 years ago

    Very interesting, thank you

  • @JohnVanderbeck 1 year ago

    Where does that inpainting model come from?

    • @NerdyRodent 1 year ago

      You can get it from huggingface.co/runwayml/stable-diffusion-inpainting
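
      (As a rough sketch, that same checkpoint also works outside the webui via the diffusers library - assuming diffusers, a CUDA GPU, and placeholder file names:)

        import torch
        from diffusers import StableDiffusionInpaintPipeline
        from PIL import Image

        # Load the RunwayML inpainting model in half precision on the GPU.
        pipe = StableDiffusionInpaintPipeline.from_pretrained(
            "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
        ).to("cuda")

        init = Image.open("input.png").convert("RGB").resize((512, 512))
        mask = Image.open("mask.png").convert("RGB").resize((512, 512))  # white = repaint

        out = pipe(prompt="a castle tower, matte painting", image=init, mask_image=mask).images[0]
        out.save("inpainted.png")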

  • @littleprincessnene 2 years ago

    OK Nerdy Rodent, I have a really dumb question. I see all these tools on git, but I have ZERO clue how to use them or where to find the web interface. Is there any way I can get it onto my local machine and use it? You mention you are using the web interface, but when I click it, it is just a bunch of code; there is no place to use it like an app. Can you direct me to a 101 tutorial on how to use all these apps to begin with? Thanks.

    • @NerdyRodent 2 years ago +1

      The zero-install, super-easy website info is right here - ua-cam.com/video/wHFDrkvsP5U/v-deo.html :)

  • @LouisGedo 2 years ago

    @Nerdy Rodent
    👋
    In the SD Automatic1111 interface, outpainting left and right works fairly well.
    But outpainting up always looks like it's starting a brand-new image, with a sharp, defined line right where the new pixels are generated, as opposed to a smooth continuation of the original generated image.
    BTW, the images I'm generating and trying to outpaint are scenics with castles. In many of them, the tops of the castle towers are cut off.
    What am I doing wrong??
    P.S. I'm using the 1.5 pruned emaonly model. I don't know how to load multiple models into SD.

    • @NerdyRodent 2 years ago +3

      Use the inpainting model for outpainting :)

    • @LouisGedo 2 years ago

      @@NerdyRodent
      Thank you for your response.
      That's the 7.17 GB model?
      To get it to open in SD, do I just drag the .ckpt model into the same models folder the current model is in? Do I need to do anything else to get it to function in the SD local window?

    • @NerdyRodent 2 years ago +2

      Video on the new inpainting model - Stable Diffusion InPainting - RunwayML Model
      ua-cam.com/video/rYCIDGBYYnU/v-deo.html 😀

    • @LouisGedo 2 years ago

      @@NerdyRodent 👋 Thank you so much for responding. I really like your content, but as I'm nearly PC-programming illiterate, some of the most important parts of your videos (dealing with installation and programming stuff) are virtually impossible for me to fully grasp. That's my problem, I know.
      I watched the video you linked several days ago, but I got stuck at the 1:09 timestamp, where you're adding various text into what looks like a CMD prompt window. That's where I get lost, because whatever you're doing is second nature to you but not very familiar to me. Your cursor disappears from the screen, and it's not clear how you accessed that dialog box.
      Anyway, my problem, not yours. Thank you for responding and looking to help. I probably just have to hope real hard that the programmers at Automatic1111 add it in an update.
      I was able to locally install SD because of an easy-to-follow video put out about a week ago by Scott Detweiler (titled "Stable Diffusion 1.5 Windows Installation Guide").
      My poor level of knowledge about computer languages and programming leaves me few options except for watching easy-to-follow demonstrations like that.

    • @NerdyRodent 2 years ago +1

      Got a video right here for new computer users! Install Anaconda and Nvidia GPU drivers with CUDA on Microsoft Windows - Beginner Mode ON!
      ua-cam.com/video/OjOn0Q_U8cY/v-deo.html

  • @AntoniuNicolae 2 years ago

    Does anybody know a way to output the exact same image you put in, but in a different style?

    • @NerdyRodent 2 years ago

      Lower the denoising strength, and for the inpainting model you can also alter the mask power (in settings, because that’s a good place for a slider)
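
      (A minimal diffusers sketch of that low-denoising style change, assuming the diffusers library; strength is the same knob as the webui's denoising strength, and file names are placeholders:)

        import torch
        from diffusers import StableDiffusionImg2ImgPipeline
        from PIL import Image

        pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")

        init = Image.open("photo.png").convert("RGB").resize((512, 512))

        # Low strength keeps the original composition; the prompt pushes the new style.
        out = pipe(prompt="watercolor painting", image=init, strength=0.3).images[0]
        out.save("styled.png")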

  • @simonindelicate8133 2 years ago

    Have I gone mad, or did you never turn the denoising down from one after changing it?

  • @BlueCollarDev 2 years ago

    I have noticed that there is a lack of rundowns on how to use Krita in place of the poor-quality little drawing panel in the web UI.
    Krita is packed with an enormous selection of features, which makes it the no-brainer choice for maximum control of the inpainting process, IMO.

    • @NerdyRodent 2 years ago +1

      Got an example of that at ua-cam.com/video/M2R-tsZglaY/v-deo.html - fairly early on though now! Time really flies in the AI world, it seems :)

    • @BlueCollarDev 2 years ago

      @@NerdyRodent That's a different plugin than I'm running, from what I can see. Specifically, I'm referring to a breakdown of how to accomplish different goals, like inpainting to create a very specific multi-step composition, or outpainting things into the canvas smoothly - particularly focusing on what settings to use for things like denoising, and why. Honestly, the more I think about this, the more inclined I'm becoming to make a video myself.

    • @NerdyRodent 2 years ago

      @@BlueCollarDev Go for it! 😉

  • @hyungmokang9856 2 years ago

    This is a channel I always enjoy; thank you for the very useful information.
    Can you give me a hint on how you speak with an actor's face at the bottom of the video? (Deepfakes?)
    I'm very curious, so please do tell. Please continue to provide good content in the future.
    You are the best!

    • @NerdyRodent 2 years ago +3

      Personally, I use the thin plate spline motion model for image animation

    • @hyungmokang9856 2 years ago

      @@NerdyRodent Wow!
      What a quick answer!~~
      Thank you so much

  • @kaypetri3862 10 months ago

    Can you make a workflow for changing the background of a real car? Say, from a shot of a racecar on track to the same racecar with a different landscape behind it? I have big trouble with that and can't get good results. Which model is best for this? The thing is, the car wrap from the photo must never, ever be changed - only the environment around the car.

    • @NerdyRodent 10 months ago +1

      Yes, inpainting is perfect for changing just a small part of an image such as the landscape behind a car.

    • @kaypetri3862 10 months ago

      The question is which model is best for that case. I only get horrible results when I do it.

    • @NerdyRodent 10 months ago

      @@kaypetri3862 An inpainting model is best for inpainting, such as huggingface.co/runwayml/stable-diffusion-inpainting/blob/main/sd-v1-5-inpainting.ckpt
      There are now inpainting controlnets too which are awesome because then you can use any model 😀
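
      (A rough sketch of the controlnet route in diffusers, assuming the diffusers library and the lllyasviel/control_v11p_sd15_inpaint weights; the base model line is where any custom v1.5 model could be swapped in, and file names are placeholders:)

        import numpy as np
        import torch
        from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
        from PIL import Image

        def make_inpaint_condition(image, mask):
            # Flag masked pixels with -1 so the controlnet knows what to fill in.
            img = np.array(image.convert("RGB")).astype(np.float32) / 255.0
            m = np.array(mask.convert("L")).astype(np.float32) / 255.0
            img[m > 0.5] = -1.0
            return torch.from_numpy(img[None].transpose(0, 3, 1, 2))

        controlnet = ControlNetModel.from_pretrained(
            "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16
        )
        pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5",  # swap in any v1.5-based model here
            controlnet=controlnet,
            torch_dtype=torch.float16,
        ).to("cuda")

        car = Image.open("car.png").convert("RGB").resize((512, 512))
        mask = Image.open("background_mask.png").convert("L").resize((512, 512))  # white = background

        out = pipe(
            prompt="the same racecar on a mountain road at sunset",
            image=car,
            mask_image=mask,
            control_image=make_inpaint_condition(car, mask),
        ).images[0]
        out.save("new_background.png")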

    • @kaypetri3862 10 months ago

      @@NerdyRodent Hmm, I tested it out, but I cannot get a good result. The car loses details. Is there a way to contact you directly to share some of my designs and the problem?

  • @tiagotiagot 2 years ago

    From what I've noticed, the "Blender" keyword tends to make it focus more on amateur or older works; it doesn't give much weight to the more realistic stuff that isn't obviously CGI at first glance, or to the more professional artwork that is less likely to list which tools were used.

    • @NerdyRodent 2 years ago

      Don’t forget to try other rendering engines 😉

  • @busterroughneck 2 years ago +1

    A quick and dirty way to zoom in on the image you're masking for inpainting is Ctrl and + (pressed multiple times) to scale up the browser zoom, then Ctrl and - (multiple times) to zoom back out.
    PS: Thank you for sharing your wisdom, Mr. Rodent!

  • @icepickgma 1 year ago

    Do you have a discord?

  • @ryanlillie8469 2 years ago

    How did you do that with your face? Is that a deepfake, or some sort of AI animation?

    • @NerdyRodent 2 years ago

      It's animating a still image. Link is in the description!

  • @Elcodigodebarras 2 years ago

    Could you show us how to install SD on AMD machines?

    • @NerdyRodent 2 years ago

      Unfortunately I don't have an AMD GPU, but if you're on Linux, just select the "ROCm" install for PyTorch instead of the CUDA (Nvidia) one

    • @Elcodigodebarras 2 years ago

      @@NerdyRodent Thanks a lot...

  • @Tcgtrainer 1 year ago

    Is this still the best way to do it?

    • @NerdyRodent 1 year ago +3

      For now, yup!

    • @Tcgtrainer 1 year ago

      @@NerdyRodent Ty. Is the process slow for you? Generating an image on my PC at 512 takes seconds; doing it with outpainting takes longer.

    • @NerdyRodent 1 year ago +1

      @@Tcgtrainer 512x512 is 1s for me, so it’s not too bad

  • @TheAlgomist 2 years ago +1

    “Who needs to learn Blender? You can just type rendered in Blender and then you get a Blender render” 👏😂👍

  • @papersplease 2 years ago

    No "Denoising" slider on my Webui. 🤷‍♂

    • @NerdyRodent 2 years ago

      I’m using the Automatic1111 one

  • @DJVARAO 2 years ago

  • @CultofThings 2 years ago

    I was thinking that maybe outpainting is a metaphor for faith and that's why no one can explain it.

  • @ZeroCool22 2 years ago

    lol, you merged the 1.5 with the inpaint version... :D

  • @simpernchong 1 year ago

    The guy in the corner is not you, right? He looks like a James Bond kind of character.

    • @NerdyRodent 1 year ago

      Unfortunately, due to national security, I am unable to confirm or deny whether or not I am a James Bond character 😉

  • @MrLaura34 2 years ago

    How much RAM?

    • @NerdyRodent 2 years ago

      I’ve got just 32GB RAM

    • @MrLaura34 2 years ago

      @@NerdyRodent OKAY! Thank you

  • @osuf3581 2 years ago

    For inpainting, you will get the best results by using the inpainting model but changing 'masked content' to 'latent' and increasing the denoising strength to 1.0.

  • @JonathonBarton 2 years ago +1

    2:01 "Unreal 5, Octane, Blender" - I've not seen any evidence that adding those as terms in your prompt actually gets you anything like what it says on the tin. YES, it _changes_ the output, but that's expected, since the input tokens changed. It did get the 'harsh light' of Blender, and also the plasticky look... but...
    Evidence: if you take your output images (or ANY images, really...) and send them to IMG2IMG, then run CLIP and DeepBooru interrogation, you'll find that _NONE_ of those sorts of terms are _ever_ returned by the interrogator, and that using them _in the absence of anything else_ renders random junk - compared to putting in a single 'known' term like Alfred Hitchcock or David Hasselhoff, which hands you back that specific thing as output, confirming that they're known terms in the model. Putting unknown terms in your prompt is, effectively, fluff that gives you worse results (even if you LIKE the results you get, if you generate 100 or 1000 more, they'll be less 'focused' than if you hadn't included those terms).
    When I send a 'blender-looking' photo through CLIP and DeepBooru, I get things like 'harsh lighting' back as terms.
    It's the same thing with using "HD, 8K" and all that other rot people put in their prompts because they're practicing Cargo Cult Programming and saw it in some video or on Reddit.
    The output _can't_ be an "HD" or "8k" image, because it's literally _512x512_! 512x512 is 0.262 MP; the default output of Stable Diffusion is _not even_ "SD" resolution (640x480, or 0.307 MP).
    Working with other AI/ML solutions (Large Language Models, mostly) has taught me that every token in your prompt should have meaning for the model you're working with.
    Fortunately, Automatic1111 has given us the tools _right in the web interface_ to learn to speak the same language as the model...

    • @TransformXRED 2 years ago +4

      8k and HD (like many others) are not about the resolution of the generated images; they target more specifically the things that were tagged as HD or 8k in the training dataset. Good, sharp images are the most likely to be associated with 8k or HD tags.
      A viking helmet at 8k, or one at 1000x1000 resized to 512x512 (and used for training), will look pretty much the same... but the 8k ones are mostly those high-res, good-quality images that were tagged properly.
      At the end of the day, if a sequence of words works well for someone, that's what matters. It's better to stick with it for a while, and maybe add to it sometimes.

  • @TheGalacticIndian 2 years ago

    I won't be surprised if one fine day a video here gets commented on by Trump or Biden. Seriously 😉

  • @elmyohipohia936 1 year ago

    I tested everything; inpainting is not working... Only outpainting is.

    • @NerdyRodent 1 year ago

      For inpainting, make sure to use a mask as shown, or it will indeed not work!

  • @itsa-itsagames 1 year ago

    I'm just going to wait a year or two until they streamline this process, because this shit is unbelievably frustrating to set up and follow.
    Then in tutorials there is always something missing, or someone will say "OK, now we do this" and then on my screen it's a different version that's no longer available, or the buttons have moved around.

  • @drawmaster77 2 years ago

    The problem is that it's very good at drawing attractive women and cats, because that's what everyone else is doing and what it seems like 90% of the model is trained on, and not much good at anything else. I recently tried multiple variations of futuristic/alien spaceships, space stations, and mechanical objects as concept art for my game, and the results were absolutely horrible. Even plain renderings of space/planets are very subpar.

    • @audiogus2651 2 years ago

      You can definitely get there with the right prompts; keep trying. Using init images helps too. Also, training your own checkpoint via DreamBooth opens up massive new potential, more than anything else.

    • @NerdyRodent 2 years ago

      Very true! It’s certainly harder to get decent space ships, but it can be done! Even just using img2img

    • @drawmaster77 2 years ago

      @@audiogus2651 Thanks, I'm actually considering training on a few of my 3D models to see if it might be possible to create variations.

    • @NerdyRodent 2 years ago

      I just tested The Superb Workflow and it does indeed work for spaceships too. If you have a “pose” in mind, definitely better to start with your own doodle. Made some very nice space stations and stuff! Tbh, haven’t found anything it doesn’t work on 😉

  • @FolkerHQ 2 years ago

    I still have problems with characters that have no feet and/or no head, and with trying to outpaint the head and feet. So is there a trick to getting a complete character in view? My best solution was to make a silhouette or something like that first and use img2img, but I am also limited to small resolutions with my RTX 2070 Super 😞

    • @NerdyRodent 2 years ago +2

      Assuming you start with a head, just keep outpainting down! It’s bad at hands though…

    • @drawmaster77 2 years ago +1

      I've NEVER seen good hands/feet, even in the best generations of others on the SD subreddit. They are always deformed. At this point I wouldn't waste my time trying to get SD to fix them; I'd just learn to fix defects manually in Photoshop.

  • @asylumlabs6075 1 year ago

    Anyone got a link for that inpainting model?

    • @NerdyRodent 1 year ago +1

      Links are in the video description 😉