SDXL 1.0 in A1111 - Everything you NEED to know + Common Errors!

  • Published 11 Jan 2025

COMMENTS • 509

  • @eded4104
    @eded4104 1 year ago +224

    Thank you for being one OF THE FEW that don't sensationalize your videos to get views. Just honest and straightforward. Don't change....you will go much farther than all the others :)

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +15

      thank you :)

    • @AndyHTu
      @AndyHTu 1 year ago +1

      @@OlivioSarikas But I wanted you to clickbait me with uncensored stuff and tell me it's better than anything you've ever seen. ;)

    • @sukhpreetlotey6304
      @sukhpreetlotey6304 1 year ago

      Totally agree... really starting to cringe at those videos

    • @tlilmiztli
      @tlilmiztli 1 year ago +2

      Ironic. This used to be an Affinity Photo channel, but when AI showed up, almost overnight he saw where the views are. So... "don't change AGAIN" maybe :D

  • @OlivioSarikas
    @OlivioSarikas  1 year ago +1

    #### Links from the Video ####
    SDXL 1.0 Announcement: stability.ai/blog/stable-diffusion-sdxl-1-announcement
    SDXL 1.0 Base and Lora: huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/tree/main
    SDXL 1.0 Refiner: huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/tree/main
    Stability Image: twitter.com/StableDiffusion/status/1684254689250902025
    Nerdy Rodent Image: twitter.com/NerdyRodent/status/1684506233334538246
    OrctonAI Images: twitter.com/OrctonAI/status/1684344552654610434

  • @3oxisprimus848
    @3oxisprimus848 1 year ago +26

    Olivio, I love your energy

  • @HikingWithCooper
    @HikingWithCooper 1 year ago +11

    Thank you for always making this easy for those of us who are not programmers. I breathed a sigh of relief when I found this video amongst the lesser install vids where they always assume everybody has a professional-level knowledge of doing pulls and tweaking Python settings. So thank you!

  • @Frankenburger
    @Frankenburger 1 year ago +18

    11:15 you don't HAVE to use 1024x1024. In my testing, SDXL can generate images as low as 768x768 without suffering severe quality loss. This is useful for lower-VRAM systems, like my 8GB laptop, since it allows you to generate 768x1024 (portrait) or 1024x768 (landscape) images while saving a little bit of VRAM. I have even done 1280x720 images featuring a kitten wearing knight armor with very little quality loss compared to 1280x1024.

    • @Daddy.please97
      @Daddy.please97 1 year ago +4

      How are you doing it? It's apparently not working on my 8GB RTX 3070. Is there any other way to make it work? For me it only works if I write two-word prompts, and it just makes awful results

    • @CoffeeAddictGuy
      @CoffeeAddictGuy 1 year ago

      Is the speed of 768x768 image generation the same with SDXL 1.0 vs SD 1.5, or is it noticeably longer?
      Will you be able to upscale that 768x768 to 1024x1024 in img2img using the refiner with similar results?

    • @MrMsschwing
      @MrMsschwing 1 year ago +3

      @@Daddy.please97 Write --medvram in the start .bat file; that worked for me
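The tip above can be sketched as an edit to the A1111 launcher script (a sketch assuming a standard Windows install, where the launcher is webui-user.bat; the surrounding lines vary by install):

```bat
rem webui-user.bat -- reduce VRAM pressure so SDXL fits on ~8GB cards
@echo off
set COMMANDLINE_ARGS=--medvram
call webui.bat
```

If --medvram is still not enough, --lowvram trades even more speed for memory.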

    • @Frankenburger
      @Frankenburger 1 year ago

      @@CoffeeAddictGuy SDXL in my testing is about 25-30% slower than SD 1.5, but it's hard for me to get exact numbers since I have to use --medvram, which changes performance a bit.
      Also, yes, you can render 768x768 with the base model and upscale it with the refiner in img2img to get better details. If you go to Civitai and search @frankenburger you can find my test images from this method, along with their metadata

    • @Frankenburger
      @Frankenburger 1 year ago +1

      @@Daddy.please97 I'm not sure what you mean by awful results, but I'm not using A1111 any differently with SDXL than I was with SD 1.5. Without being able to see your settings, I'd suggest going over to Civitai and searching @frankenburger. I posted sample images rendered at 768x768 with the base model (then upscaled with the refiner) and included their metadata for your reference.

  • @jibcot8541
    @jibcot8541 1 year ago +5

    I loved the Hacker Olivio, lol. Doing great work as usual.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      Thank you. Hacker BOI might come around more often :)

  • @courtneyb6154
    @courtneyb6154 1 year ago +8

    Always love your videos. Easy to follow and I like how you explain what you are doing while you're doing it....makes things much more understandable. Thanks for all your hard work!!! Love your accent too! Haha!

  • @None_ya_B
    @None_ya_B 1 year ago +3

    Thanks! I was looking for someone who is actually talking about it and not just trying to hype it up.

  • @unclehoxAi
    @unclehoxAi 1 year ago +3

    That refiner trick is awesome. Very cool discovery. Always appreciate you, Olivio

  • @pavel_mora
    @pavel_mora 10 months ago

    I was getting really poor results, and I had a tough time trying to find out why. Thanks to you, I realized I was generating images in 512x512, as I did on SD1.5. I appreciate it! 🙌

  • @sprinteroptions9490
    @sprinteroptions9490 1 year ago +2

    On a lark I tried the ComfyUI install and I'm very glad I did. Beautiful.

    • @justinwhite2725
      @justinwhite2725 1 year ago

      I installed Comfy two days ago and I'm pretty addicted with some of the custom stuff I'm doing. It's great.
      SDXL doesn't have controlnet yet.

  • @chrisrobinson7728
    @chrisrobinson7728 1 year ago +1

    Thank you so much for mentioning that A1111 needed updating to work with the new SDXL; that fixed my problem with it not working. Nerdy Rodent did a great video, but made no mention of this!

  • @BlueScorpioZA
    @BlueScorpioZA 1 year ago +2

    For me, the best way of using SDXL so far is to set up a second copy of Auto1111 alongside my normal version - run it clean with no extensions installed, and the --no-half-vae --opt-sdp-attention --medvram command line options. Works like a charm, and it's pretty fast as long as you have the right drivers installed.

    • @okachobe1
      @okachobe1 1 year ago

      --xformers might help with the speed as well with that setup

    • @gulfblue
      @gulfblue 1 year ago

      How do you go about making a second copy of A1111? Did you just reinstall A1111 in a new directory?

    • @okachobe1
      @okachobe1 1 year ago

      @@gulfblue zip the original folder as a backup, then do a fresh install

    • @gulfblue
      @gulfblue 1 year ago +1

      @@okachobe1 I have no clue what that means...
      Do you mean zip the current SD folder I have so that it can't be affected, install SD again, and then what? Can I unzip my folder containing my original SD and use one for SDXL and one for the previous?
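The second-copy setup discussed in this thread can be sketched as a fresh clone into a separate directory (a sketch assuming a git-based install; the folder name sdxl-webui is arbitrary):

```shell
# clone a fresh, extension-free copy next to the existing install;
# each clone keeps its own venv, extensions and settings
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git sdxl-webui
cd sdxl-webui
# launch with the flags mentioned above (Linux/macOS launcher shown;
# on Windows, put the flags in webui-user.bat instead)
./webui.sh --no-half-vae --opt-sdp-attention --medvram
```

Large checkpoints can be shared between copies via the --ckpt-dir argument instead of duplicating them.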

  • @MrGrantGregory
    @MrGrantGregory 1 year ago

    Thanks man, been waiting to get into A1111 for a while; this got me there

  • @tom.shanghai
    @tom.shanghai 1 year ago +5

    I've created a simple extension that uses the refiner for the hires fix pass, but it requires a minimum of 32GB RAM (not VRAM). Let's hope we'll get a native way to use the refiner in A1111

    • @tom.shanghai
      @tom.shanghai 1 year ago +1

      @@user-jm4cd5sd1x A1111 needs much more RAM for SDXL than ComfyUI on my system..

  • @Bericbone
    @Bericbone 1 year ago +13

    The refiner model should not be used for img2img. It's made to work with LEFTOVER NOISE from the base model. The refiner does not work very well on Gaussian noise added to a fully completed image. You need to wait for auto1111 to support the refiner model to use it correctly, or switch to a ComfyUI workflow that uses it correctly. You can see it working in this video, adding some detail, but it has little understanding of the image, so it also completely morphs the skin texture.

    • @johnnyc.31
      @johnnyc.31 1 year ago +2

      I'm not understanding your description of the proper use of the refiner model. I also don't see what you say it's doing to the skin texture. Honestly, both images at 15:02 have a very airbrushed / unrealistic painted style with little to no texture in the skin. Not the most impressive example image.

    • @digidope
      @digidope 1 year ago +1

      @@johnnyc.31 SD.Next has an option for using the refiner or not. It's (probably) coming to A1111 soon. img2img is not the way it's meant to be used.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +5

      Yes, but this is the only way to use it in A1111 for now. So it's better than nothing for people who don't want to use a different UI.

    • @openroomxyz
      @openroomxyz 1 year ago

      Yeah, it doesn't feel like it does much. In which UI is it supported to work correctly?

    • @Bericbone
      @Bericbone 1 year ago

      @@openroomxyz StableSwarmUI or ComfyUI

  • @erickromano5030
    @erickromano5030 1 year ago +7

    This model checkpoint doesn't load for me; it always goes back to the last one I used... do you know why?
    Failed to load checkpoint, restoring previous
    size mismatch for model.diffusion_model.output_blocks.8.0.skip_connection.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([640]).

  • @CHAZZdarby.
    @CHAZZdarby. 1 year ago

    First videos of yours I've seen. Instant sub and like. GREAT WORK!

  • @gingercholo
    @gingercholo 1 year ago +1

    8 minutes in and you finally get to what I came here for. I can look at websites on my own 😂

  • @timothymonaghan433
    @timothymonaghan433 1 year ago +8

    @OlivioSarikas The charts were generated from blind image comparisons. People could vote on thousands of images. The choices were shown side by side, and nobody knew which models had generated the images. That's how the results in the graph were produced.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +5

      I know, but that still doesn't tell us much about how good the model really is. Of course it is better than SD 1.5, but we still have to see whether models trained by the community are far better or just a little better. Over time the improvements will also get smaller, but more precise.

  • @CheapBunny
    @CheapBunny 1 year ago +7

    Pro Tip: Use Happy Diffusion if you don't have a powerful GPU.

  • @ronskiuk
    @ronskiuk 1 year ago

    Great tip about that 1111 update in the .bat file; it wasn't working for me, with loads of errors, but it does now! Thanks!

  • @micbab-vg2mu
    @micbab-vg2mu 1 year ago +4

    Olivio, based on your instructions I installed InvokeAI 3.0. Stable Diffusion XL 1.0 works quite well even though my GPU is not great, only 12GB of VRAM; image generation is fast. I use Midjourney prompts and in some cases achieve even better results. Thank you for the tips.

  • @ArnoSelhorst
    @ArnoSelhorst 1 year ago +4

    Great seeing you improve with each episode. I've been following you for quite some time now, and it's obvious you put a lot of effort into your presentation skills. That said, this episode was one of the most entertaining ones I've ever seen on your channel. Keep it up! You rock! 👍🏻🚀

  • @OrctonAI
    @OrctonAI 1 year ago

    Thanks for the mention, always great informative videos 🙏

  • @danielfaraday8197
    @danielfaraday8197 1 year ago +1

    I'm lucky this year: another superb channel discovery 🙂 Olivio, I have one question: can I use my own source image to generate a graphic where I'm on a different planet drinking some fine gin? 😅

  • @boloblack
    @boloblack 1 year ago

    Thank you! Needed some easy to understand update info for Automatic1111

  • @intangur
    @intangur 1 year ago +4

    Looks interesting so far. It will be fun to see how it eventually works out with Deforum and similar extensions once they start to get updated.

  • @RollieHudson1
    @RollieHudson1 1 year ago

    Love the sunglasses for ‘hacker mode.’ 😂

  • @Searge-DP
    @Searge-DP 1 year ago +5

    There were discussions about the SDXL 1.0 VAE and how it created some strange artifacts; a lot of people seem to recommend using the SDXL 0.9 VAE with SDXL 1.0 to avoid those issues. Maybe worth a try to see if you still get those problems with the eyes when not using face restore.

    • @HaohmaruHL
      @HaohmaruHL 1 year ago

      So far I haven't noticed any difference between using the SDXL 1.0 VAE and None.
      I have Restore Faces checked. If I don't, faces suddenly all turn blue and distorted in the last step when using the 1.0 VAE.
      Gotta try the 0.9 VAE then.

    • @HaohmaruHL
      @HaohmaruHL 1 year ago

      @Cutieplus is that when editing the .bat file with Notepad?

    • @marhensa
      @marhensa 1 year ago

      They re-uploaded the SDXL VAE several hours after publishing the original one; I thought that already fixed the strange artifacts.

  • @jonsantos6056
    @jonsantos6056 1 year ago

    8:45 start; 10:10 update A1111 with just a git pull; 10:20 after installation
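The 10:10 "update with just a git pull" step is commonly done by adding a git pull line to the launcher script (a sketch assuming a git-based Windows install; the exact file contents vary):

```bat
rem webui-user.bat -- pull the latest A1111 code before each launch
@echo off
git pull
set COMMANDLINE_ARGS=
call webui.bat
```

Running git pull once manually inside the stable-diffusion-webui folder works just as well.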

  • @Macieks300
    @Macieks300 1 year ago +2

    Finally! I was waiting for this video. I tried updating A1111 to v1.5 on my own today, but it runs way slower for me for some reason, even for normal SD 1.5 checkpoints. I was hoping to see you talk about the command line arguments and tips for them, but I guess you skipped that part.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +3

      You might try deleting your venv folder and restarting A1111. It takes a while to set everything up again. Also try using --xformers in the command args

    • @Avalon1951
      @Avalon1951 1 year ago +2

      @@OlivioSarikas Same here, the speed for me is terrible: it takes between 4 and 5 minutes for one image, even with xformers activated

    • @Steamrick
      @Steamrick 1 year ago

      You're probably running out of memory... it slows way down if the total video memory used is bigger than your GPU's VRAM.
      SDXL might *run* on 8GB of VRAM, but it's not happy unless you have 16GB, at least right now. I think I've read that ComfyUI does better, so you could try that.

    • @Macieks300
      @Macieks300 1 year ago +1

      @@Steamrick I have 12GB of VRAM, but I wasn't even talking about SDXL. The regular SD 1.5 checkpoints all take 5 times longer after updating to A1111 v1.5 than on A1111 v1.3, which I was using before.

    • @philm325
      @philm325 1 year ago

      @@Macieks300 I had this happen before, and auto1111 wasn't using the GPU. I had to update the Python torch packages to the latest version, and it was OK after that. Search for a tutorial on it.

  • @martinperez1683
    @martinperez1683 1 year ago +1

    Thanks for the update, love this channel. Question: I'm on Hugging Face and I see a file uploaded two days ago. Should I install "1.0_0.9vae.safetensors" or "1.0.safetensors"?

  • @jsk333
    @jsk333 1 year ago +3

    Thank you for your consistently informative videos, so easy to understand and full of great stuff!

  • @JohnSmith-vk4vq
    @JohnSmith-vk4vq 11 months ago

    Your advice has been very tasty 😊, Thanks!

  • @widowmaker7831
    @widowmaker7831 1 year ago

    I'm wondering if it's just my screen, but the images you are showing appear to have vertical lines around the edges of the characters and also through them. Is this a shortcoming of the AI, or perhaps the way it's transferred to this medium? Also, in the image at 2:50, an improvement would be for the AI to render footprints from the dog running and jumping in the sand; at the speed it would have to be running, a real picture would show prints left in the sand and, more than likely, sand flung out from the dog's back paws. Just some observations.

  • @yoad734
    @yoad734 1 year ago +3

    Thanks! Awesome video! Can you please share your Automatic1111 layout (e.g. extensions, models, favorite settings, etc.)?

  • @openroomxyz
    @openroomxyz 1 year ago

    It was a joy to watch, very informative and clear. Well-done video; go create something cool

  • @AaronALAI
    @AaronALAI 1 year ago

    Been waiting for this video, thank you!

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      Thank you. Sorry, I was busy today with another project

  • @JohnMcG
    @JohnMcG 1 year ago +2

    Very informative video, thanks. Unfortunately, as a Linux user, I have tested this on 2 separate machines with automatic1111 and had consistent errors loading the refiner model. Hope it gets sorted soon, as it looks great :)

    • @00042
      @00042 1 year ago +1

      There is nothing unfortunate about being a Linux user ;) Which distro?

  • @Xinkgs
    @Xinkgs 1 year ago

    Fantastic enjoyable content. Had to subscribe ❤ Keep em coming. 👍

  • @TeeEss
    @TeeEss 1 year ago

    As always I am amazed at your content and explanations - when do you ever sleep?

  • @spinouze5858
    @spinouze5858 1 year ago

    Ngl the end was amazing

  • @kenpa0048
    @kenpa0048 1 year ago

    A great video as always, with the plus of hacker Olivio 😂

  • @frankhall401
    @frankhall401 1 year ago

    I tried the prompt, "a perfectly normal man looking at his perfectly normal hands"... SDXL produced a handsome man with seven fingers and three thumbs!

  • @akhilwarrior
    @akhilwarrior 1 year ago

    I'm absolutely loving it!

  • @e86-2pk-9
    @e86-2pk-9 1 year ago

    At 0:35 he says that there is a commercial license. I couldn't find any confirmation of this. Any links?

  • @RoyD2
    @RoyD2 1 year ago

    Great video. Explains it very well.

  • @linusgustafsson2629
    @linusgustafsson2629 1 year ago

    I noticed my WebUI was just an unpacked zip file, so I'm starting over by cloning the repo this time, to make it easier to keep updated. It's taking a while, but hopefully it will be worth it. The safetensors also take a long time to download, so I imagine a lot of people are downloading them to enjoy themselves with AI art.

  • @mattfx
    @mattfx 1 year ago

    Great video, and amazing look with the glasses!! 😆

  • @KolTregaskes
    @KolTregaskes 1 year ago +3

    You can see why Midjourney wants v6 out, as it will interpret text better than v5, which seems to be one of the big features of SDXL. MJ is still generally better, even at v5, but SDXL is very, very close now.

  • @felipenl6753
    @felipenl6753 1 year ago +1

    Thank you, Olivio, for your great support and detailed explanation. One question: when downloading the SDXL base model and the SDXL refiner, I can see that there are now versions including the VAE for both models. Any suggestion from your side on whether to download the VAE or non-VAE version? Thank you.

  • @AndyHTu
    @AndyHTu 1 year ago +5

    You and Sebastian are my favorites. The real AI OGs in the industry. How can I trust you not to be an AI, Olivio?
    Real question: where do you see which version of SD you're running? I keep trying to update, but it says I'm already up to date. Fine, but then why isn't my SDXL working? It just loads :(

    • @Steamrick
      @Steamrick 1 year ago +1

      Amongst other places, it'll spit out the version number in the cmd window right beneath the Python version, and all of the version numbers are listed at the bottom of the A1111 UI. I really don't see how you could possibly have difficulty finding them?!

    • @Elwaves2925
      @Elwaves2925 1 year ago

      I had that problem, and the solution I found was to do a fresh install with the latest A1111 webUI. Even if it says you're up to date, you aren't. You'll know for definite, as it will tell you it's 1.5.0 or 1.5.1 where Steamrick mentions. You don't need to get Python again, but remember to add its path to webui-user.bat.
      @Steamrick The A1111 webui doesn't show its version number for all installs. For older installs, those UI numbers you mention didn't show the UI version; they started with Python. 🙂

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      Thank you. At the very bottom of the browser page, OR at the start in the CMD window when you load

  • @leslieware_photography_imagery

    So what is the resolution of the renders?

  • @cyberhard
    @cyberhard 1 year ago +1

    I'll start using SDXL when ControlNet is available for it.

  • @sownheard
    @sownheard 1 year ago

    Yeah 🎉 official release 🙌

  • @juanserranos2006
    @juanserranos2006 1 year ago +1

    Excellent tutorial and information, @OlivioSarikas, but I was wondering, if it's not too much to ask... which Automatic1111 build are you running? I don't have some of the slider controls (like Clip Skip) and option dropdowns (SD VAE) that you showed in this video. Are these part of a default installation of Automatic1111, or are there extensions you have to install?
    One more time, thank you for your excellent videos. Take care.

    • @Dmitrii-q6p
      @Dmitrii-q6p 1 year ago

      check the settings

    • @DiffusionHub
      @DiffusionHub 1 year ago

      no extensions; you need to enable them in your settings

  • @autonomousreviews2521
    @autonomousreviews2521 1 year ago

    Nicely done as always!

  • @victorwijayakusuma
    @victorwijayakusuma 1 year ago

    thank you for this video
    I get an error when using SDXL
    size mismatch for model.diffusion_model.output_blocks.8.0.skip_connection.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([640])
    please help

  • @giovani_aesthetic
    @giovani_aesthetic 1 year ago +1

    hey, I have a problem: when I want to select this model, A1111 doesn't let me do it. Does someone have this problem and know how to fix it?

  • @TomiTom1234
    @TomiTom1234 1 year ago +3

    Thank you for the video; you saved me a minute of rendering with this "HACKER" method. I was struggling with this issue since I have an 8GB VRAM card, and rendering takes about 1.5 minutes on the base model, which led me to use ComfyUI, which is much faster and more stable. But it takes some learning to add nodes for the refiner, LoRAs, etc.
    But you saved me, man, thank you!
    I am waiting for your videos on how to train our own LoRAs and models for this specific model. Is it the same as before? Is there any change? All this will keep me busy watching all the upcoming videos about this new version.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      You are welcome :) It's not supposed to be used that way, but I think the results can be pretty nice too :)

  • @coloryvr
    @coloryvr 1 year ago

    WOW! Very cool! BIG FANX & Colored Greetinx!

  • @rishabhnag
    @rishabhnag 1 year ago

    Looks amazing. Can you please help me out? I have one question regarding the GPU: I have an NVIDIA GeForce RTX with 4GB. Can I use SDXL with a 4GB GPU and 16GB RAM?

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      as far as I know you need 8GB of VRAM

    • @rishabhnag
      @rishabhnag 1 year ago

      @@OlivioSarikas thanks for your quick response. 😀

  • @swenfoxington9721
    @swenfoxington9721 1 year ago +1

    Is ControlNet already available for SDXL 1.0 with Automatic1111? I couldn't find anything on the web so far

  • @ominoxe
    @ominoxe 1 year ago +1

    I experimented with SDXL a bit and found that the refiner model can also be used for upscaling (I used Ultimate SD Upscale); I set the tile size to 1024x1024.
    It can not only upscale an image, but also add a lot of detail to it.

    • @user-vs3qg4zs8s
      @user-vs3qg4zs8s 1 year ago

      Hi, I am working on a face-regenerating model; can we connect to discuss it together?

    • @ominoxe
      @ominoxe 1 year ago

      @@user-vs3qg4zs8s Hi.
      Sure.
      Although I'm pretty bad at speaking English 😁

  • @SianaGearz
    @SianaGearz 1 year ago

    Well, one thing not mentioned is how much GPU you need. I have an RTX 2060 Super with 8GB VRAM. Will it do? Should I use something else? I don't actually care that much about anatomical correctness, photorealism and the number of fingers, or about resolutions above 768x768, but I want interesting, varied results.

  • @user-vs3qg4zs8s
    @user-vs3qg4zs8s 1 year ago

    Thanks for the info, Olivio. I am trying to understand whether there is a way to use one of the models to batch-process multiple images with the same treatment, to get similar results on all the images of the same batch; any guidance would be highly appreciated.

  • @SkizzieSpeedruns
    @SkizzieSpeedruns 1 year ago +1

    Hello, is it possible to get it running with Vlad's Automatic? So far I haven't seen any tutorials for it.

  • @scottgust9709
    @scottgust9709 1 year ago

    Great vid BTW, subbed :)

  • @cobrac4t34
    @cobrac4t34 9 months ago

    Updated to A1111 1.8.0; when loading the SDXL checkpoint I get an error: size mismatch for model.diffusion_model.output_blocks.8.0.skip_connection.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([640]). I'm assuming something related to SDXL broke in this latest A1111 update. Any suggestions?

  • @charetjc
    @charetjc 1 year ago +1

    1:26 Please note that the percentages in these graphs do not add up to 100% because the math was done by a...

  • @andrewmccarty
    @andrewmccarty 6 months ago

    Do I need to install the SDXL base model in order to use SDXL checkpoints from CivitAI?

  • @agatitytube
    @agatitytube 1 year ago

    Thank you, it is great. I saved your video.
    Is there any video from you about the step-by-step installation of A1111?

  • @CaptainKokomoGaming
    @CaptainKokomoGaming 1 year ago

    How will this one fare for lower-end users? 2060 Ti Super and lower? I have to use --lowvram in the arguments. Will I need xformers, etc.?

  • @jayceetravel7601
    @jayceetravel7601 1 year ago

    I have an RTX 3090; aside from xformers, do you have any other command line arguments you recommend I add? Thanks

  • @ForsgardPeter
    @ForsgardPeter 1 year ago +1

    Thanks for your content; it has been so helpful. I have been a pro photographer for more than 35 years, and this AI image creation is one of the most interesting things that has happened during my career. It will be one new thing that I will be using in my professional work. I am so happy that I got to see this development in image making.
    I tested the SDXL 1.0 model and I am having a hard time getting natural-looking skin. The images look nice, and in most cases the anatomy is next to perfect, but the skin looks too plastic. I personally like old-fashioned natural-looking skin. Any ideas?

  • @lex_darlog_fun
    @lex_darlog_fun 1 year ago

    @OlivioSarikas
    According to their blog post, you're supposed to switch the model from base to refiner mid-generation. Shouldn't you do it by:
    - generating an initial half-baked image with 10 steps or so
    - switching to the refiner, removing the LoRA from the prompt, and continuing with clip skip 10 or something like that?
    The 2nd paragraph in the "The largest open image model" section of their announcement makes me believe that the approach you showed isn't actually correct:
    > ... the base model generates *(noisy)* latents

  • @AlexJota11
    @AlexJota11 1 year ago

    Hey Olivio, thanks for making the tutorial. How do you get the Clip Skip and VAE selectors up there?

    • @Elias-nj6gi
      @Elias-nj6gi 1 year ago +3

      Settings -> User Interface -> Quick Settings List
      Add "CLIP_stop_at_last_layers" and "sd_vae"
      Apply settings -> restart automatic1111

  • @timeTegus
    @timeTegus 1 year ago +3

    hey, for me it uses way too much system RAM, like above 32GB, and then it crashes

    • @fr0zen1isshadowbanned99
      @fr0zen1isshadowbanned99 1 year ago +1

      For me too.
      Well, the refiner model, that is.
      60GB+, then a crash or error message.

    • @timeTegus
      @timeTegus 1 year ago +1

      @@fr0zen1isshadowbanned99 with ComfyUI I don't get the problem

  • @ninuwander
    @ninuwander 1 year ago +1

    When I select stable-diffusion-xl-refiner-1.0, my RAM usage goes to 100% and it fails to load the model

    • @ZayxSt
      @ZayxSt 1 year ago

      same here, did you fix it? Mine takes forever to load, and it takes like 3 minutes to generate one image at 1 it/s

    • @ninuwander
      @ninuwander 1 year ago

      @@ZayxSt no, I deleted the SDXL models

  • @ManuelJimenez1
    @ManuelJimenez1 1 year ago

    Amazing! Thank you, Olivio

  • @thanksfernuthin
    @thanksfernuthin 1 year ago +4

    It refuses to let me select SDXL 1.0 or the refiner. It takes a long time and then kicks back to the last model I had loaded. Anyone else having this problem? Tons of size mismatch and torch errors in the cmd window.

    • @erickromano5030
      @erickromano5030 1 year ago +2

      same here, but with both files

    • @thanksfernuthin
      @thanksfernuthin 1 year ago +1

      @@erickromano5030 Cool. I hope someone has an answer.

    • @YoIomaster
      @YoIomaster 1 year ago +3

      I have the same error, a long list of mismatch messages; can't find a solution :(

    • @erickromano5030
      @erickromano5030 1 year ago

      @Cutieplus yes, I am

    • @claynelson5035
      @claynelson5035 1 year ago

      Same errors; hoping for a solution

  • @hoaluong679
    @hoaluong679 1 year ago +1

    Hello, how can I get the box to choose the VAE?

  • @LudgerBrechmann
    @LudgerBrechmann 1 year ago

    you are an absolute legend

  • @amirierfan
    @amirierfan 1 year ago

    Thank you for this. I don't have the SD VAE bar, though. Why is that?

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      have a look here: ua-cam.com/video/BKHWJ_b3h-s/v-deo.html&lc=Ugy9j83wfHxgdDVac_x4AaABAg

  • @johnnyzero6811
    @johnnyzero6811 1 year ago +1

    15:45 I'm also getting lots of over-saturated red images like this. Any fixes?

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      maybe a lower CFG scale? I haven't had much time to test it in depth yet

    • @johnnyzero6811
      @johnnyzero6811 1 year ago

      @@OlivioSarikas Yeah, I'm not sure of the solution yet, but my images are over-cooked just as yours were, deeply saturated with reds. Maybe some fine-tuned models will fix this, or different settings...?

  • @johnleighdesigns
    @johnleighdesigns 1 year ago

    Great stuff, really helpful. As a photographer and designer getting into AI generation, it's amazing to get these supporting tutorials. For me, my desktop AMD PC with an NVIDIA 1650 Super is very slow using Automatic1111; it takes around 20 minutes to generate one image, so this is not practical for me. I'm looking at options: spending £300 on a new GPU, or other alternatives

    • @Resmarax
      @Resmarax 1 year ago

      That sounds a bit excessive. Was this with SDXL at 1024x1024 resolution?

    • @johnleighdesigns
      @johnleighdesigns 1 year ago

      @@Resmarax hi, yes it was, super slow. It's faster to use something like Clipdrop, but that's rather restrictive. I will look into spending on a new GPU and read up on whether my AMD processor has any influence

  • @calumyuill
    @calumyuill 1 year ago +1

    When I select the base model, I see the following error and it auto-reverts to the previously loaded model: Loading weights [31e35c80fc] from F:\stable-diffusion-webui\models\Stable-diffusion\sd_xl_base_1.0.safetensors
    Failed to load checkpoint, restoring previous + size mismatch for model.diffusion_model.output_blocks.8.0.skip_connection.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([640]).

    • @stormjack
      @stormjack 1 year ago

      I saw this behavior when selecting the base VAE model named sd_xl_base_1.0_0.9vae.safetensors in Automatic 1111, but using sd_xl_base_1.0.safetensors doesn't cause that on my PC.

    • @calumyuill
      @calumyuill 1 year ago

      @@stormjack thanks for your comment; I am getting this when selecting sd_xl_base_1.0.safetensors

  • @guns1inger
    @guns1inger 1 year ago

    I have tried the SDXL 1.0 models in A1111, ComfyUI and InvokeAI. I have a 3060 Ti with 8GB VRAM and 32GB of system RAM. ComfyUI won't load the base model without erroring out, saying I don't have enough memory. A1111 will load and render using the base model, but then crashes when loading the refiner, again saying I don't have enough memory. I have tried both of these with medvram and even lowvram, and it seems to make no difference. InvokeAI, on the other hand, can load and use both the base and refiner models and still render relatively fast.

  • @chidori0117
    @chidori0117 1 year ago +2

    The SDXL models run very slowly for me in A1111, even after a reinstall. It's basically not usable at the moment, since generation is measured in s/it instead of it/s. It runs fine in ComfyUI, so I don't really know what the deal with A1111 is... it's weird, since it fills the VRAM completely but the GPU still doesn't seem to be working much, as the fans never ramp up. In ComfyUI the GPU load is high and you clearly hear the fans ramp up. Also, during generation it slows my entire system down, which never happens with 1.5 models or with SDXL in ComfyUI, so I guess A1111 still needs some serious work in that regard.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      give ComfyUI a shot

    • @dhwz
      @dhwz 1 year ago

      It's mandatory to use --medvram (or even --lowvram) if you have 8GB of VRAM or less; otherwise it will be extremely slow because you run out of memory.

    • @chidori0117
      @chidori0117 1 year ago

      @@dhwz I have an 8GB card, so yes, not that much. The medvram flag actually helped, thanks for that. Still, it seems ComfyUI either does that kind of load splitting automatically or addresses the hardware differently, since it doesn't run into the same issues. I'm no programmer though, so that's as far as I can understand why the two UIs differ so much in behavior/performance.

  • @fearsomezq
    @fearsomezq 1 year ago

    am I missing something? When I try to select the SDXL model, it never lets me use it. It always just goes back to whatever model I had selected before.

  • @Squeezitgirdle
    @Squeezitgirdle 1 year ago +2

    In my experience, SDXL had MUCH, MUCH worse hands than some of the custom models we've been using until now.
    Has anyone tried ControlNet with it yet?

    • @JochenSutter
      @JochenSutter 1 year ago +1

      I tried ControlNet v1.1.233, but it does not work with SDXL: "TypeError: unhashable type: 'slice'"

    • @Squeezitgirdle
      @Squeezitgirdle 1 year ago

      @@JochenSutter ahh, I won't be able to use it then. For now.

  • @peterhosfield
    @peterhosfield 1 year ago

    Has there been any new insight into the minimum specs for this?

  • @udonpraguypanya2992
    @udonpraguypanya2992 1 year ago

    What about the SDXL 0.9 that was already installed in A1111 before... do I delete its files from the directory, or what?

  • @rolarocka
    @rolarocka 1 year ago

    SDXL is indeed amazing 🎉❤

  • @Dmitrii-q6p
    @Dmitrii-q6p 1 year ago

    will Automatic add a correct way to use the model + refiner?

  • @rendez2k
    @rendez2k 1 year ago

    "Well call me Bob and butter me sideways" is the best line in any YouTube video this century, and this is FACT 😀

  • @ArtificialHorizons
    @ArtificialHorizons 1 year ago

    I was getting poor-quality results and errors most of the time. Now I know why: I had my VAE set to 84k. Let's try again with this new knowledge. Thank you!

  • @mirek190
    @mirek190 1 year ago

    Why does A1111 need so many steps to generate a full SDXL picture?
    Why is it not as simple as under ComfyUI?

  • @OliNorwell
    @OliNorwell 1 year ago +2

    Great video. Right now I'm enjoying SDXL a lot and getting some very decent stuff generated, though things will 'go nuclear' once we have a series of custom models out and ControlNet works with it.
    My only fear is VRAM. I can't wait to do Dreambooth with this, but I assume that if we're using 1024x1024 source images, then VRAM requirements are going to skyrocket?