FREE Local Image Gen on Apple Silicon | FAST!
- Published 20 Sep 2024
- Step-by-step setup guide for totally local image generation with a ChatGPT-like UI, using Stable Diffusion. A continuation of my step-by-step local LLM guide here: • FREE Local LLMs on App...
Run Windows on a Mac: prf.hn/click/c... (affiliate)
Use COUPON: ZISKIND10
🛒 Gear Links 🛒
* 🍏💥 New MacBook Air M1 Deal: amzn.to/3S59ID8
* 💻🔄 Renewed MacBook Air M1 Deal: amzn.to/45K1Gmk
* 🎧⚡ Great 40Gbps T4 enclosure: amzn.to/3JNwBGW
* 🛠️🚀 My NVMe SSD: amzn.to/3YLEySo
* 📦🎮 My gear: www.amazon.com...
🎥 Related Videos 🎥
* 🌗 RAM torture test on Mac - • TRUTH about RAM vs SSD...
* 🛠️ Host the PERFECT Prompt - • Hosting the PERFECT Pr...
* 🛠️ Set up Conda on Mac - • python environment set...
* 🛠️ Set up Node on Mac - • Install Node and NVM o...
* 🤖 INSANE Machine Learning on Neural Engine - • INSANE Machine Learnin...
* 💰 This is what spending more on a MacBook Pro gets you - • Spend MORE on a MacBoo...
* 🛠️ Developer productivity Playlist - • Developer Productivity
🔗 AI for Coding Playlist: 📚 - • AI
Repo
github.com/AUT...
- - - - - - - - -
❤️ SUBSCRIBE TO MY YOUTUBE CHANNEL 📺
Click here to subscribe: / @azisk
- - - - - - - - -
Join this channel to get access to perks:
/ @azisk
- - - - - - - - -
📱 ALEX ON X: / digitalix
#machinelearning #llm #softwaredevelopment
The pace of this video is brilliant. Quick but with all the relevant information.
I tried
Like that Alex reacts to comment quite fast. Thanks, this video is useful
You're welcome!
Awesome guide, thanks a lot Alex! I tried A1111 a month or so ago, but today I learned ComfyUI is also supported on Apple Silicon. Turns out it's more optimised and much faster! It doesn't use as much RAM as A1111, and ComfyUI can even run models that were crashing in A1111 (on a GPU-poor 8GB base Mac). The setup and usage are slightly more advanced, but not by much, and a guide from you would be appreciated!
Once again, you are taking the perfect approach to achieving success! Thank you.
I'm a former data scientist (doing the whole medical school thing now), and this channel gives me the data-science developer fix I need sometimes. Thank you for your content. We tech nerds love you more than we can say in a comment. Now back to studying lol
I really enjoy watching your videos. They're informative with a fun vibe. Thanks for your effort!
Glad you enjoyed it!
Insane how underrated your channel is. Kinda like it that way tbh... But seriously, I love you Alex.
I love to watch your videos! It's so fun and at the same time informative as well.
Oh thank you!
This is awesome!! Thank you for sharing this 🤯
Glad you enjoyed it!
My M1 MBA 8/256 will definitely die trying to generate any of these (
You probably need a machine with at least 16GB of RAM
RIP 8gb
idk. try and let us know.
Any luck? Let us know the result
The M1 only works great with the phi3 model for text generation. You need 16GB of RAM for image generation and bigger models
Thank you so much for these videos. They’re perfect for me as a new Mac user with little knowledge of the terminal commands.
can’t wait for custom MLX models to show up 😉
"You shouldn't be doing this at work anyway!"
LOL.
This IS my work.
🙂
This is really useful. Thanks a bunch for the gotchas and tutorial
Amazing content. Loving these tutorials!
May 2024 has been a watershed month. For the first time in my life, I really felt I've fallen behind. M4, Gemini 1.5, ChatGPT4o, Copilot+PC with Snapdragon X Elite. So much tech that my M1 MBA 8GB RAM will fail to take advantage of and fail to compete with. All this including local LLMs that are too resource intensive to even try out. My next laptop could well be a Windows PC if Apple doesn't address the RAM situation in their base models.
Just use diffusionbee
This is some cool stuff. Keep it coming!
great tutorial, thanks
BTW, A1111 already creates a conda environment when it runs anyway
An overview of Pinokio would make for a good video
Do a video on TTS please
thanks for the videos! can you do one about music/sample generation?
amazing stuff
Thanks Alex.
What are your thoughts on M4 and how it will speed up inference when released to Macs?
Hi Alex, thank you, you teach us in such a fun way. I like it. But it would be better if you made it a Docker image, or taught us how to make one, because I don't want to create chaos with different Python environments.
I'm running Open WebUI in a Docker container and it can't access localhost. How can I get around this?
Same here :(
I used the “host.docker.internal” for connection.
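To expand on that reply: inside a container, "localhost" refers to the container itself, not the Mac it runs on, so the base URL has to point at the host instead. A minimal sketch of the fix (port 11434 is Ollama's default; the `docker run` line in the comments is illustrative, assuming Open WebUI's `OLLAMA_BASE_URL` environment variable and its official image):

```python
# Inside a Docker container, "localhost" is the container itself, not the
# host machine. Docker Desktop on macOS exposes the host under the special
# name host.docker.internal, so rewrite the base URL accordingly.
host_url = "http://localhost:11434"  # Ollama's default address on the host
container_url = host_url.replace("localhost", "host.docker.internal")
print(container_url)  # http://host.docker.internal:11434

# The rewritten URL is then handed to the container at launch, e.g.:
# docker run -d -p 3000:8080 \
#   -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
#   ghcr.io/open-webui/open-webui:main
```

On Linux (plain Docker Engine), `host.docker.internal` is not defined by default; adding `--add-host=host.docker.internal:host-gateway` to the `docker run` command is the usual workaround.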
Nice one. Is it possible to use Ollama directly to generate images?
Or you could just install something like Mochi Diffusion or Guernika?
Great video!! How about local text-to-speech for the local WebUI too? Combine that with image recognition and we'll have a local version of ChatGPT-4o :) Thanks!
The last boom! was not enough
boom!
💥
Is there a way to use the image AIs without all the frontend overhead? :)
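It is possible to skip the web UI entirely. One headless route is Hugging Face's diffusers library; here is a rough sketch (the model id, prompt helper, and output filename are my own assumptions, not from the video, and `"mps"` targets Apple Silicon's GPU via Metal):

```python
# Headless Stable Diffusion sketch using the diffusers library; no web
# frontend involved. generate() would download several GB of model weights
# on first run, so it is defined here but not invoked.
def build_prompt(subject: str, style: str = "photorealistic") -> str:
    # Compose a simple positive prompt (purely illustrative helper).
    return f"{subject}, {style}, highly detailed"

def generate(prompt: str, out_path: str = "out.png") -> None:
    import torch
    from diffusers import StableDiffusionPipeline

    # "mps" runs on Apple Silicon's GPU; on other machines, "cuda" or
    # "cpu" would be the device string instead.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("mps")
    image = pipe(prompt).images[0]
    image.save(out_path)

print(build_prompt("a llama in a field"))
# a llama in a field, photorealistic, highly detailed
```

Apps like Mochi Diffusion mentioned elsewhere in the thread take a similar no-browser approach, just with a native UI on top.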
Thank you!!! The free models look like DALL-E from last year, so maybe next year the free models will look like this year's DALL-E. I've spent way too much free time messing with Ollama and Open WebUI because of your last video. Could you look into the RAG and web search features? I've never gotten the web stuff to work; RAG documents do seem to work, but I haven't been too successful with them. There's not a lot of content on it, but it seems like a perfect way to feed in my existing repo or repos so that the model can pick up on conventions and context.
There are way better models available. Just that he's not using the right ones. He's using random models from SD1.5 which was forever ago.
I'd recommend something like RealVisXL based on SDXL and there are super fast lightning models too.
Automatic1111 is pretty bad on my 16" MacBook Pro; it performs better on Nvidia. I think the code isn't optimized for Apple Silicon. DiffusionBee is a lot faster, but most models are pretty bad and some models from Civitai don't work. Is there any way to optimize these and make them faster on Apple Silicon?
Why not NPU ?
Hi Alex. What do you think is good for an IT newbie: a MacBook Air M3 24GB 1TB or a MacBook Pro M3 18GB 1TB (15" and 14")? (Coding, Parallels, all the Adobe programs, maybe machine learning.) Thanks, spasibo.
Now I'm curious. I'll try it on my M1 Air; hopefully it won't toast my machine 😂
It can also run on Linux :) and Windows 10/11. Let's see how it runs on Win11 ARM :)
Really nice
Can I run this from external storage?
Brilliant. Can you do a Windows one?
I don't see Images under Settings in Open WebUI. Was it disabled?
Use Forge instead, it's much faster than A1111
Forge is not updating
Thank you for the videos. Would this work on an Intel-based Mac?
I have the source code of a product (bash scripts, Python, C++). Could you use Llama to read the source code and help troubleshoot problems in the log files?
what about LoRA?
Can we run this model on the M3 Pro base variant, Alex????
What about the Fooocus project?
I never told you, but I met an Austrian cousin of Mr. Schwarzenegger's. Really skinny guy, and short. I guess Mr. Schwarzenegger got the good bones.
🤣 did he look like the stable diffused guy?
@AZisk It was Biofach 2004, and we had a booth at the fair. There were organic foods, cosmetics, and various products from all over Europe. One of the companies there manufactured mattresses made from organic seeds. A salesman from that company, who was a really nice person, stood out. However, his Austrian coworkers kept making fun of him because he was a cousin of the famous Arnold Schwarzenegger. Although he was short and skinny, he shared many facial features with Mr. Schwarzenegger, but in a more rugged way. I'm sorry if my English doesn't allow me to describe it better.
Are you from NYC?
nope, although i grew up in buffalo
Your placards are all scratched up. You should make new ones
Thanks Microsoft😊
I don‘t think Arnold Schwarzenegger is an animal.😅😅😅
Lolz @ FAST :P
You meant a photo of a "Lama" not a "llama". 😂
first like first comment
Legend!
Bro shows us SD 1.5 models like it's 2022 💀
Bro didn't have the patience to watch the video. TikTok gen.