Install Stable Diffusion | Linux Mint CPU Only
- Published 27 Oct 2024
- In this video, we continue our AI journey by installing and configuring Stable Diffusion Automatic 1111, all running on CPU only. Follow our detailed step-by-step guide to set up this powerful image generation tool on your local machine. We also showcase a few image generation examples to demonstrate the capabilities of Stable Diffusion, even without a GPU. Perfect for those looking to explore local, open-source AI solutions!
I think Image Generation on CPU only is amazing! What do you think?
Besides making it more accessible for people who don't have powerful computers, why would you choose to run CPU only?
That's the main reason.
Amazing tutorial!!! Very concise, no fuss, straight to the point. Got it running in a matter of minutes on my new Linux Mint install! I appreciate this very much. Thanks.
Hey, thank you sooo much! I recently moved from Windows to Linux Mint, so I'm still a noob. Without your guide I couldn't have made it. I tried a few times but failed. So glad it finally works!!
Instantly subscribed, awesome series!
Please go more in depth, because something is still not optimal with my VRAM usage (I changed the launch flags and it works on my RTX 3070, but VRAM usage seems lower than on Windows...)
Glad I found your channel, pls don't stop.
I love this series of localized AI tools.
@@spyghetti thank you 😊 🙏
I've spent two days fighting to get Stable Diffusion running, and you swept in with an 11-minute video, managing to have it running by minute 5!!!
Thank you so much, now I just need to figure out how to make my AMD card help in the process, but at least I can start making some pics (slowly) in the meantime!
Glad I found you. Really cool idea for a baseline on a potentially much larger series. Hundreds of questions/ideas flowing through my head.
@@zeloguy Thanks! Stay tuned
Having these tools for free is amazing. I'm waiting to see if we can create videos with digital avatars for free and locally; that would be great. Thanks for sharing the knowledge, and keep it simple for everyone.
I think that might be possible. I'll be looking into that as well soon. Thanks for checking out the video!
First of all, thanks for this series. I really appreciate the "no-frills/straight to the point" approach. I only have one question: as I have an RTX 4090, can the AI apps you are presenting also use the GPU, or do I need to make changes to use the GPU? Thanks.
Some will automatically use the GPU if it's detected. For others, we pass a parameter in the command that launches them, or sometimes even edit the Python file(s), setting things like the torch device to "cpu".
Thank you for checking out this series, I'm glad you like it! 🙂
I might do another one with a GPU, but that would likely be on Windows as my host PC with my GPU is running Windows 11.
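For context, the CPU-only launch discussed in the video typically relies on a few flags in the webui launch config. This is a sketch of the commonly used Automatic1111 CPU flags, not necessarily the exact command from the video:

```shell
# webui-user.sh -- typical CPU-only flags for Automatic1111 (assumed; may differ from the video)
# --skip-torch-cuda-test : don't abort when no CUDA device is found
# --use-cpu all          : run all model components on the CPU
# --no-half / --precision full : avoid half-precision, which CPUs don't accelerate
export COMMANDLINE_ARGS="--skip-torch-cuda-test --use-cpu all --no-half --precision full"
```

To switch to a GPU later, removing these CPU-specific flags and restarting the webui is usually enough.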
@tonyquasar1274: It worked pretty well for me on my RTX 3070 with just these flags: --no-half-vae --medvram --enable-insecure-extension-access
...you wouldn't need the --medvram flag, though.
Also check that your CUDA version is up to date: open a terminal and type "nvidia-smi".
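Putting those flags together, the GPU launch config would look roughly like this (assuming the standard webui-user.sh mechanism; file names and paths are the usual defaults, not confirmed from the video):

```shell
# webui-user.sh -- GPU launch with the flags mentioned above
# --medvram can be dropped on cards with plenty of VRAM (e.g. a 4090)
export COMMANDLINE_ARGS="--no-half-vae --medvram --enable-insecure-extension-access"

# verify the driver and CUDA version are current before launching
nvidia-smi
```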
I've been using these tools on Windows for a while, and every time something goes wrong: a path isn't right, a module got updated. It is sooooo annoying! It cost me weeks to get things running.
Two days ago I started with Linux and your channel! It is so much fun, thanks!!!
So awesome to hear! Thank you!
I downloaded an extension, but it's not showing up. Sort of stumped on what to do. It downloaded into the correct folder. Mint 22.
To also use the GPU, is it just a matter of removing the CPU-only argument?
Loved your video by the way
@@moriaanmichiels7215 I think you'll want to get rid of the --skip-torch-cuda-test as well. I haven't run it on my main rig in a while, but I can validate soon. Thanks for the support! 🙂
@@theit-unicorn1873 that worked for me
What specs does your PC have?
I'm running on a VM. The VM is set up with 16 GB of memory and 8 virtual CPU cores.
Can you show how to install ComfyUI as well? It's a lot better than this.
I'm getting an error: "Stable Diffusion model failed to load"... any suggestions??
Maybe the model is too big? You could try adding "--lowvram" to the command line.
I only get a bunch of errors
And fixed them. Now it works. Thanks!
Next video text to speech? :)
Are you psychic??? Lol working on it now 😅
@@theit-unicorn1873 I have my secrets ;)
Getting an error:
File "/usr/local/lib/python3.10/lzma.py", line 27, in
from _lzma import *
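An import error at that line in lzma.py usually means the Python interpreter was compiled without liblzma support (a common cause on custom-built Pythons; your setup may differ). A quick way to check, with the usual fix noted in the comments:

```python
import importlib.util

def has_lzma() -> bool:
    """True if this Python build includes the _lzma C extension."""
    return importlib.util.find_spec("_lzma") is not None

# If this prints False, install the development headers, e.g. on Mint/Ubuntu:
#   sudo apt install liblzma-dev
# and then rebuild/reinstall the Python interpreter (e.g. via pyenv).
print(has_lzma())
```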