Keyboard Alchemist
United States
Joined Jun 20, 2023
On this channel, we will cover new and interesting AI topics, focusing on AI art and image generation, Stable Diffusion, Automatic1111, tutorials, and workflows.
The AI community is a great and welcoming community built on open source and the sharing of knowledge. Through this channel, I want to share my knowledge and learn from the community, so that we can bring AI tools to the masses and advance this work together.
How to Install and Use Stable Diffusion in 2024 FREE & LOCAL. Ultimate Quick Start Guide.
#aiart, #stablediffusiontutorial, #generativeart
In this tutorial, I will show you how to quickly install and run the latest Stable Diffusion models with a FREE application called Stability Matrix, available on Windows, macOS, and Linux. Stability Matrix is an open-source app that puts all the best and most popular webUI packages under one roof. You can run A1111, Forge, Fooocus, and ComfyUI, just to name a few. All packages are available for 1-click installation; no more manually installing Git and Python. Stability Matrix is free, local, and private; it is loaded with quality-of-life features; and it is simple and intuitive to use. Whether you are brand new to Stable Diffusion or a seasoned pro, there is something for you in Stability Matrix.
For me, some of the really nice quality-of-life features of Stability Matrix are: (1) it automatically checks for UI updates, and you can do a 1-click install of any UI package update; (2) it automatically checks the Python dependencies for each UI package and updates them as needed; (3) it embeds Git and Python so they do not need to be globally installed for your UI packages to work, eliminating potential conflicts with other apps on your computer that might need a different version of Git or Python; (4) the Model / Checkpoint Manager lets you use your checkpoints, LoRAs, Textual Inversions, etc. across all UI packages, saving you disk space and the effort of managing these files; and (5) the Model Browser lets you import files directly from CivitAI and HuggingFace and files them into the corresponding model folder based on the model type.
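Feature (4) works by storing each model file once in a shared folder and exposing it to every UI package through a symbolic link, rather than copying it. A minimal sketch of that idea in Python (the folder names here are hypothetical, not Stability Matrix's actual layout; on Windows, creating symlinks may require developer mode or admin rights):

```python
import os
import tempfile

# Hypothetical layout: one shared model folder, linked into each UI package.
root = tempfile.mkdtemp()
shared = os.path.join(root, "Models", "StableDiffusion")
os.makedirs(shared)

# A checkpoint stored once in the shared folder...
open(os.path.join(shared, "example-checkpoint.safetensors"), "w").close()

# ...is exposed to each UI package through a symlink instead of a copy.
for package in ["a1111", "forge", "fooocus"]:
    pkg_models = os.path.join(root, "Packages", package, "models")
    os.makedirs(pkg_models)
    os.symlink(shared, os.path.join(pkg_models, "Stable-diffusion"))

# Every package now sees the same file, with zero duplication on disk.
for package in ["a1111", "forge", "fooocus"]:
    link = os.path.join(root, "Packages", package, "models", "Stable-diffusion")
    assert "example-checkpoint.safetensors" in os.listdir(link)
```

Deleting the symlink leaves the shared checkpoint untouched, which is why uninstalling one UI package does not cost you your model collection.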
Chapters:
00:00 - Intro
00:56 - Stability Matrix Overview
01:27 - Installing Stability Matrix
03:35 - Choose an Initial WebUI Package
05:56 - Choose an Initial Model or Checkpoint
08:51 - Launch Forge WebUI
10:52 - Menu option: Checkpoints
11:18 - How to import your existing Checkpoints and LoRAs
12:10 - How to import your existing Textual Inversions
12:50 - How to import your existing VAEs
14:00 - Installing A1111 and Enabling Model Sharing (‘Symlink’)
15:25 - Comparing A1111 and Forge image generation speed
16:55 - Installing Fooocus UI
18:37 - Menu option: CivitAI Model Browser
19:49 - Importing a new Checkpoint from CivitAI Model Browser
21:56 - Menu option: Output Browser
23:14 - Launch Options
25:35 - Menu option: Inference
26:45 - Summary
Useful links:
Stability Matrix GitHub:
github.com/LykosAI/StabilityMatrix
A1111 command line arguments:
github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-and-Settings
***If you enjoy my videos, consider supporting me on Ko-fi***
ko-fi.com/keyboardalchemist
Views: 5,290
Videos
Regional Prompt Control with Tiled Diffusion/MultiDiffusion, Compose your Image Like A Boss! [A1111]
5K views · 9 months ago
#aiart, #stablediffusiontutorial, #generativeart This tutorial will show you how to use Regional Prompt Control within the Tiled Diffusion / MultiDiffusion extension to compose your images the way you want! No, this isn't quite the same as Regional Prompter; I think it's actually easier to use. Plus, you can use LoRAs, ADetailer, and ControlNet with this extension to give you even more con...
Tiled Diffusion with Tiled VAE / Multidiffusion Upscaler, the Ultimate Image Upscaling Guide [A1111]
17K views · 10 months ago
#aiart, #stablediffusiontutorial, #generativeart This tutorial covers how to upscale your low-resolution images to 4K resolution and above with the Tiled Diffusion with Tiled VAE / MultiDiffusion extension in A1111. We will walk through the workflow in the first part of the video, and then go into detail on how each setting affects your resulting image. As always, feel free to ...
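The core trick behind tiled upscaling is processing the image as overlapping tiles so each piece fits in VRAM, then blending the overlaps back together. The tiling step can be sketched as follows (the tile and overlap sizes are illustrative, not the extension's defaults):

```python
def tile_coords(size, tile, overlap):
    """Yield start offsets covering `size` with tiles of width `tile`,
    where neighboring tiles overlap by `overlap` pixels."""
    stride = tile - overlap
    starts = list(range(0, max(size - tile, 0) + 1, stride))
    # Make sure the final tile reaches the edge of the image.
    if starts[-1] + tile < size:
        starts.append(size - tile)
    return starts

# A 96x96 latent processed as 64x64 tiles with an 8-pixel overlap:
xs = tile_coords(96, 64, 8)
ys = tile_coords(96, 64, 8)
tiles = [(x, y, x + 64, y + 64) for y in ys for x in xs]
```

The overlap region is where neighboring tiles are averaged, which is what hides the seams; too small an overlap is one common cause of visible tile borders in the final upscale.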
How to Inpaint in Stable Diffusion A1111, A Detailed Guide with Inpainting Techniques to level up!
21K views · 1 year ago
#aiart, #stablediffusiontutorial, #generativeart This tutorial will cover how to inpaint in Stable Diffusion A1111 and some inpainting techniques, using tools like Photopea, img2img inpaint, inpaint sketch, and LoRAs. Along the way, I will give you some tips and tricks for how to quickly and consistently obtain great inpainting results. As always, feel free to leave a comment down below and who...
STOP wasting time with Style LORAs! Use THIS instead! How to copy ANY style with IP Adapter [A1111]
42K views · 1 year ago
#aiart, #stablediffusiontutorial, #generativeart This tutorial will show you how to use IP Adapter to copy the Style of ANY image you want and how to apply that style to your own creation. It does the job of a LORA with just one image. We will also compare Control Net Reference Only versus IP Adapter and see how these two are different from each other. Chapters: 00:00 - Intro 00:26 - Topics ove...
ADetailer in A1111: How to auto inpaint and fix multiple faces, hands, and eyes with After Detailer.
20K views · 1 year ago
#aiart, #stablediffusiontutorial, #automatic1111 This tutorial walks through how to install and use the powerful After Detailer (ADetailer) extension in A1111 to automatically inpaint and fix faces, hands, eyes, and entire bodies without the manual work of drawing a mask and inpainting, so your images will always come out looking great! We will also go through an advanced use case where you ca...
How to do Outpainting without size limits in A1111 Img2Img with ControlNet [Generative Fill w SD]!
21K views · 1 year ago
#aiart, #stablediffusiontutorial, #automatic1111 This tutorial walks you through how to outpaint any image by expanding its borders and filling in details in the extra space outside of your original image, similar to the generative fill functionality of Photoshop. We will also walk through how to unlock the 2048 x 2048 image size limits of Automatic 1111 by using the super-secret-ultimate "Limi...
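Conceptually, outpainting starts by placing the original image on a larger canvas and masking only the new border region, so the model paints just that area. A minimal numpy sketch of that preparation step (the actual fill is done by the Stable Diffusion model; the sizes below are arbitrary examples):

```python
import numpy as np

def prepare_outpaint(image, pad):
    """Center `image` (H, W, C) on a canvas `pad` pixels larger on every side.
    Returns the canvas and a mask that is 1 only where the model should paint."""
    h, w, c = image.shape
    canvas = np.zeros((h + 2 * pad, w + 2 * pad, c), dtype=image.dtype)
    canvas[pad:pad + h, pad:pad + w] = image
    mask = np.ones((h + 2 * pad, w + 2 * pad), dtype=np.uint8)
    mask[pad:pad + h, pad:pad + w] = 0  # keep the original pixels untouched
    return canvas, mask

img = np.full((64, 64, 3), 255, dtype=np.uint8)   # a plain white "image"
canvas, mask = prepare_outpaint(img, 32)          # expand by 32 px per side
```

ControlNet's inpaint model then conditions on the unmasked center so the new border stays consistent with the original content.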
How to change ANYTHING you want in an image with INPAINT ANYTHING+ControlNet A1111 [Tutorial Part2]
40K views · 1 year ago
#aiart, #stablediffusiontutorial, #automatic1111 This is Part 2 of the Inpaint Anything tutorial. Previously, we went through how to change anything you want in an image with the powerful Inpaint Anything extension. In this tutorial, we will take a look at how to use the Control Net Inpaint model, and the Control Net and Cleaner features within the Inpaint Anything extension! Installation instr...
How to change ANYTHING you want in an image with INPAINT ANYTHING A1111 Extension [Tutorial Part1]
114K views · 1 year ago
#aiart, #stablediffusiontutorial, #automatic1111 This tutorial walks you through how to change anything you want in an image with the powerful Inpaint Anything extension. We will install the extension, then show you a few methods to inpaint and change anything in your image. The results are AMAZING! Chapters: 00:00 Intro 01:12 Overview of Inpaint Anything Extension 01:43 Install Inpaint Anythin...
Do THIS to speed up SDXL image generation by 10x+ in A1111! Must see trick for smaller VRAM GPUs!
19K views · 1 year ago
#SDXL, #automatic1111, #stablediffusiontutorial Is your SDXL 1.0 crawling at a snail's pace? Make this one change to speed up your SDXL image generation by 10x or more in Automatic 1111! I have an 8GB 3060 Ti GPU and it was unbearably slow, until I put in this command line argument, and it sped up my image generation with the base model plus refiner by 10x to 14x for a single image generation. It...
How to Install and Use SDXL 1.0 Base and Refiner Models with Troubleshoot Tips!
13K views · 1 year ago
SDXL 1.0 is finally released! This video will show you how to download, install, and use the SDXL 1.0 Base and Refiner models in Automatic 1111 Web UI. We will also compare images generated with SDXL 1.0 to images generated with fine-tuned version 1.5 models. Is this the beginning of the end for version 1.5 models? Let's find out! NOTE: check out my latest video on how to install Stable Diffusi...
How to use XYZ plots Script to Optimize Parameters and Get the Most Out of your Model!
15K views · 1 year ago
This video tutorial walks you through how to use the XYZ plot script in Automatic 1111 and provides a simple workflow that can help you find the optimal values of Sampling Steps, CFG scale, and Sampling Method for the model you are using. I recommend doing this when you first start working with a new model or checkpoint. Chapters: 00:00 Intro 00:10 Ep2 - How to get the most out of your m...
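Under the hood, the XYZ plot script simply generates one image per combination of the chosen axis values, so the size of a sweep is a plain Cartesian product. A sketch of that bookkeeping (the value ranges below are examples, not recommendations):

```python
from itertools import product

# Example axes: sampling steps (X), CFG scale (Y), sampler (Z).
steps = [20, 30, 40]
cfg_scales = [5.0, 7.0, 9.0]
samplers = ["Euler a", "DPM++ 2M Karras"]

# One generation per combination, just like the XYZ plot script does.
grid = list(product(steps, cfg_scales, samplers))
print(len(grid))  # 3 * 3 * 2 = 18 images in the plot
```

This is why it pays to keep each axis short: adding one more value to every axis grows the number of generations multiplicatively, not additively.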
4 Easy Steps to Install Automatic1111 on your PC for FREE! Start creating fast!
5K views · 1 year ago
This video tutorial walks you through in detail how to install and run Stable Diffusion Automatic1111 on your Windows PC with an Nvidia graphics card, and how to quickly start generating AI images. NOTE: This method is a bit outdated. Take a look at my other video for a better way to install Stable Diffusion: ua-cam.com/video/85KR3GdS4wE/v-deo.html Chapters: 00:00 Intro 01:14 Ep1 - Install Stabl...
Use the S key to zoom.
Did what you said, minus the xformers, since I'm running on CPU, and have gotten a bit of an improvement.
One thing that sped my SDXL up was switching from my high-res external monitor to outputting only to my laptop screen, which is lower resolution. Or I guess if I lowered the resolution on my external monitor that would help too. Laptop RTX 4070.
awful technique
I can't draw on the image (upper right).
Thanks a lot, make more videos :D
Doesn't work on ForgeUI+Flux.
What's the difference between inpaint and inpaint sketch?
Wow! I only have 2GB VRAM and this tip really made a difference on my Stable Diffusion WebUI Forge performance! Thanks a lot!
Cool, finally I don't need to wait 20+ minutes
Do you have a link to the anime model used at 10:28? I searched citusMIX but can't find anything.
I use this utility ALL the time. Here are a few tips for the highest quality you can achieve in the end results, if you want to be a try-hard like me (depending on your GPU strength you'll need to adjust; I have a 4090). TL;DR: I know this isn't practical for most people, but if you're reading this and, like me, you want the BEST possible quality in the end result and you have the gear, this is how.
1. Use 128x128 latent tile width/height settings with a 64 latent tile overlap (the suggestion of 8 in the video is probably more performant) and use the other recommended settings.
2. Set an XYZ script with denoising and in the parameters put 0.1-0.5 (+0.05). This will generate 9 different images and takes a while; even on my 4090 it's around 8-10 minutes. You can probably just do 0.1-0.5 (+0.1) and be fine with 5 images.
3. The reason you do the previous step is that each level of denoise produces a slightly different picture: some with better eyes, some with better clothes, etc.
4. Take the lowest-denoise photo and put it into a photo editor of your choice as a base (I use Photopea).
5. Import the other photos, set them as raster mask layers, and hide all.
6. Use a brush with black and white colors and the lowest hardness setting to start blending in different layers. This will allow you to take the best from every single gen and combine them into a completely flawless end product.
7. Afterwards you can adjust color balance, hue, contrast, and brightness settings to clean up the picture. Many pieces tend to gen with a slight red or green hue that needs to be fixed.
8. If you do this and it needs a lot of editing, it can introduce what's called color banding. To fix this, export your final result and import it into img2img. Turn off Tiled Diffusion/VAE, set denoise to 0, turn off the XYZ script, set scale to 1, and re-gen. This will smooth out any artifacts introduced by editing but maintain the original picture at its original size.
Also, if you DO have high-end gear, turn off Fast Encoder/Decoder. If you zoom in on a final product, it appears as though the entire gen was made of tiny little dots. If you need the performance advantage of the fast encoder/decoder, you can fix this issue afterwards with step number 8 and it will remove them.
When I enable region prompt control, I always get this error: "TypeError: expected Tensor as element 0 in argument 0, but got DictWithShape". It doesn't generate an image, but when I uncheck it, it works, of course without the region control... Any idea how to fix that? Thank you
I was racking my brain over it too, and saw one comment here saying this extension doesn't support SDXL models. Yeah, tried it with SD 1.5 and it works. Very disappointing :(
@@ratnajyotishakya1096 thank you for the info
I wish I had watched your tutorial before going down the rabbit hole 😅 Thank you so much for the great tutorial. Perhaps it would make your videos a bit longer, but quick explanations of terms like VAE, textual inversions, etc. could make this seem more approachable, imho.
Great video. What's your opinion on ComfyUI? I am curious why you never make any videos on it.
Oh no, I don't know why my computer can't download the inpaint model. I followed your video properly 😢 I need help 😢
My internet is OK, but the program doesn't begin to download the inpaint model 😢
What if I’m getting a “package modification failed”? Please help.
if you have multiple areas that need to be fixed, can you use latent upscaling and then downscale the image so it's more workable, and then use latent upscaling again?
is this channel abandoned?
Thanks for the video! Do you know if there is a way to train your own models and use them with this program? I mean like uploading 100 pics of a particular person's face to get a model. Can you do that with this?
Thank you very much sir
Such an amazing video! I just started with SD 2 weeks ago, and this video is what made my images reach a much higher quality level. Thanks!
I'm glad it was helpful! Thank you for your kind words!
Excellent guide, you made it so easy to understand! Thanks!
You're welcome! I'm glad you liked the video!
Sir, I have an important request. How can we re-design an image from low quality to high, without upscaling? I want the AI to recognize the objects and re-design them in high quality. Big thanks!
Man, I searched everywhere to find out how to get the brush color toggle for inpainting, and only this video explained how to manually install the extension by going into the Available tab. It says that Canvas Zoom is "built-in" in my A1111, but it doesn't update and it didn't come with the brush color tool. This video really helped, thanks so much bro
I'm glad it helped! Cheers!
Thank you ❤, it works perfectly
Welcome 😊
If I want to add a LoRA like "add more details" or "Sharpness tweaker", will it work with this Tiled Diffusion method?
Thank you for this tutorial; it was very useful because you showed things step by step and didn't skip anything. The only problem I have is that I can't find anything other than the inpainting model I put in ControlNet in the models folder. Is that normal?
Tried going to the hugging face page but the file is gone. Or at least if it's there it's very different than the one you showed.
Trying to do method 2, but I get botched results. Think this needs an updated guide, that or my setup requires different parameters.
Did things change, or do I need to do something else? I have downloaded everything and put it into the folder, but the preprocessor doesn't have the one I downloaded. It only has auto, clip_h, pulid, face_id_plus, face_id, clip_sdxl_plus_vith, and clip_g.
13:20 this advanced part was very helpful. I always struggled with preventing artifacts.
When doing this, Inpaint just stretches the image. Any idea what I'm doing wrong?
This is great. I tried it myself and of course screwed something up. When I find a background generation I like, I then try to add characters to the background generation while keeping the same seed number, but the background changes even when I keep the same seed number. Not sure how to proceed without relying on RNG. Any thoughts?
is there a video on installing the segment-anything model ?
There is no "SEARCH" option for Extensions.
It doesn't work with pony or XL modelssssssss, can you use your alchemy to make it work?
Awesome! I ran into this problem yesterday, the infinitely stretching arm. Thank you for your video!
Glad I can be of help! Thanks for watching.
Really ty again !
Does anyone know why I'm getting little bumpy circle artifacts on upscaled images when using method 1? EDIT: It's something with FAST DECODER; with it off I get slower gen but I must
You say "I like to keep my extensions up to date all the time", so just add `for /D %%I in (extensions\*) do git -C "%%I" pull` at the start of webui-user.bat. That will update them all at startup, no need to restart :)
HI how do you generate the heatmap?
This video is a Gem, well done Sir 🔥
Thank you, I'm glad you liked the video!
FYI, After Detailer now is called ADetailer if you cannot find it in the extension list
So nowadays, is 25-30 sec generation normal for an SDXL image on a 3080 10GB? I was generating at 720x1280. Cheers.
thanks teach
This video is the gold standard for anyone starting with Automatic1111.
It worked at first then all of a sudden it just stopped doing anything for the eyes. I can detect the eyes but it's like it doesn't even change them after running the adetailer. The eyes are the same afterwards, anyone run into this issue?
This is insanely helpful
I'm glad it was helpful. Thank you for watching!