Mac: Easy Stable Diffusion WebUI Installation | Full Guide & Tutorial
- Published 30 Jul 2024
- Transform Your Text into Stunning Images, now on a Mac! Learn How to Use AUTOMATIC1111's txt2img, img2img and More! This video will guide you through setting up Stable Diffusion WebUI on a Mac, and creating your first images.
Homebrew: brew.sh
Link to this video in article format (and all the copy/paste commands): hub.tcno.co/ai/stable-diffusi...
Timestamps:
0:00 - Explanation, and details
0:35 - Install Homebrew
1:28 - Downloading AUTOMATIC1111's Stable Diffusion WebUI
1:54 - Downloading Stable Diffusion models
3:28 - Starting A1's SDUI on Mac
5:00 - Using Stable Diffusion on a Mac
5:40 - Drawbacks of SD on Mac
5:50 - Launch arguments & Less VRAM
7:35 - Opening SDUI in the future
8:20 - Is Mac good for SDUI? Not really...
#StableDiffusion #AUTOMATIC1111 #Mac
-----------------------------
💸 Found this useful? Help me make more! Support me by becoming a member: / @troublechute
-----------------------------
💸 Direct donations via Ko-Fi: ko-fi.com/TCNOco
💬 Discuss the video & Suggest (Discord): s.tcno.co/Discord
👉 Game guides & Simple tips: / troublechutebasics
🌐 Website: tcno.co
📧 Need voiceovers done? Business query? Contact my business email: TroubleChute (at) tcno.co
Everything in this video is my personal opinion and experience and should not be considered professional advice. Always do your own research and ensure what you're doing is safe.
After some ups and downs, mostly spent fixing error messages, it took me about 5 hours to make it work. Great vid, easy to follow
Very timely. We're doing an artist residency using AI generated videos. Exactly what I needed. Thank you so much!
nice can i have the info of the residency? curious
Do you have a tutorial on how to install deforum? Thanks so much for this video!
Thanks mate! Such a nice tutorial. I'm gonna check all your videos now, and of course donate!
I have been struggling with installing SD. THANK YOU VERY MUCH, I DID IT!
I hardly ever comment but you are a legend my friend. This saved me hours, thank you!
This was super helpful! Thanks for sharing!
Outstanding tutorial, thank you. Installs and runs on MacBook Pro M1 with stable-diffusion-v1-5.
def what I'm searching for. Tkx so much bro
Thank you so much ! Those are very clear instructions, I was able to do it. Hopefully it will become simpler in the future, but I guess we're still early adopters
Thank you for making this tut 🙌
You made my life much easier!!
Hi! Thank you for your video, it was great! However I am encountering an issue:
RuntimeError: MPS backend out of memory (MPS allocated: 4.14 GB, other allocations: 2.33 GB, max allowed: 6.80 GB). Tried to allocate 1012.50 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
No idea where I have to change the value to 0, do you know?
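The variable named in the error is an environment variable, so one way to change it (a sketch, assuming the default install location `~/stable-diffusion-webui`) is to export it before launching the WebUI:

```shell
# Disable the MPS allocator's upper memory limit (0.0 = no limit).
# As the error itself warns, this may cause system instability.
export PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0

# Then launch as usual from the install folder:
# cd ~/stable-diffusion-webui && ./webui.sh
```

Alternatively, add the `export` line to `webui-user.sh` so it applies on every launch.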
Great Install tutorial
THAAAAANK You!!!
I tried to install for 4 hours until I found your video!!
Hero!
what type of mac do you have?
@@digitalpabs 2020 Intel
I appreciate your video. I had no clue on what I was doing, but your video helped me install everything. My only question is, how do I know I’m running the latest version, which is 1.0. Been looking on how to update this on a Mac, but so far it’s only pc.
After installing Homebrew, there will be an instruction in the terminal output to add brew to your PATH. It's not shown in this video, because he has already installed brew, but you need to do it.
Thank you
I don't understand how to do it. Care to explain? I'm writing in the commands it's telling me to run, but get the error message "-bash: syntax error near unexpected token `)'"
@@mohammedsarmadawy362 There are two commands under "==> Next steps: Run these two commands in your terminal to add Homebrew to your PATH:". Copy the first one and hit enter, then copy the second one and hit enter. After that, you can continue to enter: brew install cmake....
@@TienW626 where do the commands begin and end
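The exact "Next steps" lines come from your own terminal output, but on an Apple Silicon Mac they typically look like the following (Intel Macs use `/usr/local` instead of `/opt/homebrew`). A syntax error mentioning a stray `)` usually means part of a line was missed when copying:

```shell
# Add Homebrew to the PATH for future shells, then for the current one.
# Copy and run each line separately, pressing enter after each.
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"
```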
This video didn't work for me. It says "error: can't generate metadata" at the end. I don't know if I did something wrong, but I followed the video exactly and it didn't work.
This helped a lot, thank you very much~!
How fast is Stable Diffusion on a Mac M2? Which desktop GPU is it equivalent to in terms of speed?
Hi help? not working getting an error: NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
Time taken: 0.49s
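The flags suggested in that error go into `COMMANDLINE_ARGS` in `webui-user.sh` (a sketch, assuming the default install path):

```shell
# ~/stable-diffusion-webui/webui-user.sh
# --no-half keeps the model in float32, avoiding unsupported
# half-precision ops; --disable-nan-check (mentioned in the error)
# would only silence the check, not fix the cause.
export COMMANDLINE_ARGS="--no-half"
```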
Thank you so F**king much for this. You've saved me so much time and headaches.
Thank you, the video helped a lot..
You solved my problem
Thank youuu 🎉
Thanks for your very clear tutorial. It seems adding lowvram or medvram to COMMANDLINE_ARGS doesn't change too much.
super helpful, thanks for it :))
Thanks brother, I did it.
Brilliant, thx! Can Dreambooth be used with this as an extension, or in any other way on Apple Silicon?
Thanks for this. I followed the text doc, as the commands there worked to copy-paste. I only couldn't do the step "To update, run git pull in the ~/stable-diffusion-webui folder.", but I still ended up with a URL and managed to generate an image. I guess that's fine, right?
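The update step mentioned in the article is just two commands run from the install folder (assuming it was cloned to `~/stable-diffusion-webui`); skipping it only means staying on the version originally downloaded:

```shell
cd ~/stable-diffusion-webui
git pull      # fetch and merge the latest WebUI commits
./webui.sh    # relaunch with the updated code
```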
Anyone know how I can improve iterations per second? I’m on a Mac studio and only getting 3s/it. I’ve tried the settings suggested in the video?
Got through, but in the web browser ui I get an error (RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half') if there is a particular problem with that, I have a pro m1...
Hi, I finished all the tutorial, but once I have to copy the URL provided in terminal, it won't open on the browser. Any suggestions?
Thank you for the video
I have installed a depth map extension,but the tab is not showing up in the UI. Any idea why is that?
You're a god. Cheers mate.
Thanks for video. Another question. To install additional models. Can I add them to the models folder whilst the Terminal and/or browser UI is still running, or should I quit out of it, add the models to the folder and then restart
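Models can generally be dropped into the folder while the UI is running; a restart isn't normally required, since the checkpoint dropdown has a refresh button. A sketch (the model filename here is just an example):

```shell
# Copy a downloaded checkpoint into the WebUI's model folder
cp ~/Downloads/example-model.safetensors \
   ~/stable-diffusion-webui/models/Stable-diffusion/
# Then click the refresh icon next to the checkpoint dropdown in the UI
```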
Great guide! Worked for me!
Can I know what version in Mac are you using?
@@bhanuwongnachiangmai4412 M1 Max with 32GB
@@bhanuwongnachiangmai4412 And it's always the latest version of the software; I do a software update every time before I run it.
Hello, thank you for your video. I followed your guide and got into SDUI already. Now I've closed all the tabs; question is, how do I reopen the program?
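Reopening later is covered at 7:35 in the video; the short version, assuming the default install folder:

```shell
cd ~/stable-diffusion-webui
./webui.sh
# Open the URL printed in the terminal, usually http://127.0.0.1:7860
```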
Good morning, I have problems installing after Homebrew. I have a Mac mini M1. Can you help? Thanks.
Thank you so much!
When I try to download the Stable Diffusion 1.5 model from Hugging Face, the download speed does not get above 200 bits and gives an ETA of 6 hours; unfortunately it stops downloading after about 60MB and says there was a timeout. This is crazy as I have super fast broadband. Looks like I won't be using this!! The Homebrew bit all went swimmingly.
Thank you, thank you and thank you.
Can you optimize vram etc. in the middle of a project? I don't want to start over.
can you use safetensors checkpoint files to place into the models folder? or is it only ckpt
Can I use Radeon Vega eGpu with it?
I am stuck at a stage where I had used the browser link, and then it did something that left the terminal stuck at "Model loaded in 5.5s (calculate hash:....."
Hi! Thanks for video! But I have a problem with generation images. TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead. How to fix it? Please help .
Codeformer didn't download and doesn't work for me, however the webGUI works. Any solutions?
Everything worked so far, but when I try to generate "pool" it says
"RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'"
Can somebody tell me what the problem is?
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'. What can I do?
I have trouble with the deforum extension; Especially with FFMPEG ? can u do a video about it ? i think several user on mac will have this issue
thank you!
When I type cd stable-diffusion-webui and press enter, nothing happens... is this normal?
When I try to generate something it pops out a window which says that python closed suddenly , and the programs abort in the terminal
Hi! After finishing the installation I couldn't get the local URL because of the following error: metadata-generation-failed. Does anyone know how to salve it? Thx
I have a lot of libraries to download, can I set up all this folder for my external hard drive?
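One common approach (not from the video; the volume name here is just an example) is to move the large folders to the external drive and symlink them back so the WebUI still finds them at the original path:

```shell
# Move the model folder to an external drive, then symlink it
# back into place so nothing else has to change.
mv ~/stable-diffusion-webui/models /Volumes/External/sd-models
ln -s /Volumes/External/sd-models ~/stable-diffusion-webui/models
```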
Hello, I've tried googling for this problem but didn't find any answers. I did everything as shown in the tutorial but when I click the generate button I get this error message: "Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead."
I am using a .safetensor model and I'm trying it on a 13 inch m2 MacBook Pro.
Has anyone had similar problems or maybe knows how I may try to solve it?
After downloading, I typed "Pool" and it shows "RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'". What happened? QAQ
Thank you for the video.
But, how can I install ControlNet?
I don't quite understand the tab to write automatically, it stops and I can't get past it, I need help. Thank you
thanks for this but i got a python error (it crashed) after I pressed generate and now it won't connect to the url, any ideas?
Hi, Im getting this error RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
yes me too what can done??
me too.
Same
Does it take a long time to install the whole thing? Mine was stuck at "Textual inversions loading".
great clear instructions.
I'm on an M1 Max 64GB and getting around 5.94 it/s while rendering out a 1920x1080 frame with 2x upscaling. I used the two suggested optimization commands, which doubled the it/s.
Is there a command to allocate more RAM usage? During rendering, Activity Monitor shows around 93-94% GPU usage and around 13/64GB RAM usage; I'd like to know if there's a way to leverage a little more RAM for rendering.
The model is only around 7GB, so it will use 7GB of GPU memory; you'd need a bigger model if you want more RAM use.
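The memory-related flags discussed at 5:50 go into `COMMANDLINE_ARGS` in `webui-user.sh`; a sketch, assuming the default install path:

```shell
# ~/stable-diffusion-webui/webui-user.sh
# --medvram trades some speed for lower memory use;
# --lowvram is more aggressive. Use one or the other, not both.
export COMMANDLINE_ARGS="--medvram"
```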
Idk why my SD won't generate :( it just says RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
I have an error, can you advise how to fix it? NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
did u figure this out?
It keeps saying "Stable diffusion model failed to load" at the very last step. I did everything the same as you. What am I doing wrong??
I'm getting this message when I try to generate an image: RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
Does anyone have an idea where the issue is?
I have a Mac and had an issue setting up Stable Diffusion.
Finally, I did it from the terminal and I was there,
but the first time I tried to generate an image I got this error:
RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'
Time taken: 0.74s
I even edited the Stable Diffusion command with the no-half option and updated the Python version, but no luck. Is there any other way?
Got the same problem
Great vid, but when I try to generate an image this happens:
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
What does this mean, and how can I fix it?
Please help
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
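That flag likewise belongs in `COMMANDLINE_ARGS`; Macs have no CUDA GPU, so the startup check has to be skipped (sketch, default install path assumed):

```shell
# ~/stable-diffusion-webui/webui-user.sh
# Skip the CUDA availability check, since Macs use MPS, not CUDA.
export COMMANDLINE_ARGS="--skip-torch-cuda-test"
```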
Thanks so much for this. Was able to follow all the way through. I've downloaded 2 models since and dropped in 2 new models in the Stable-diffusion folder since. What are the steps or is there a video for adding new models?
Nevermind. I figured it out.
@@whiplashtv what did you do lol we're doing the same thing rn
@@bossmachine Dragged the files from the desktop to that folder; took a while but it stuck
Great tutorial; the only thing is it's missing ControlNet.
Came across this message when trying to generate an image: RuntimeError: "log_vml_cpu" not implemented for 'Half'
Can anyone help please?
Can anyone help: how do I uninstall it, along with temp files and any other files downloaded to support Stable Diffusion?
Thanks bro
Thank you very much, brother. For now everything works for me and my M1 Max. One thing tho: do I need to do something specific when I'm closing it, like the Control+C you mentioned, or can I just close the terminal and browser?
Ctrl+C stops a process in the terminal. If you close the terminal it's the same result. You can just close the terminal; nothing will break.
He did Ctrl+C because he wanted to close the process but not the terminal, to continue using it. But for you, at the end of your session just close the terminal; you don't need it anymore.
thank you for the video, I am using Apple M2 Max and Deforum in Stable Diffusion is not working, maybe you can help? - I have such a report "NoneType' object has no attribute 'sd_checkpoint_info''. Before reporting, please check your schedules/ init values. Full error message is in your terminal/ cli."
Got this error:
Formula installation already attempted: python@3.10
Any ideas how to proceed?
how can i run the stable diffusion 2 on my mac? I tried to follow your article but could not find the config file. Help meee
Compared to PlaygroundAI, on the Mac MiniStudio Max the images take around 30 seconds to generate, which means it is just shy of 2 to 3 times slower compared to the web. I don't think this is super slow.
Way faster now with the Mac optimised models
@@someghosts which models do you mean ?
I am facing this problem when entering this command ...
brew install cmake protobuf rust python@3.10.8 git wget
zsh: command not found: brew
Not working for 3D. Is there a certain setup for it?
Why are checkpoints not working after I added medvram?
I can't add models
When prompted to put in brew install cmake protobuf rust python@3.10 git wget, I get "command not found". Please help me, I really need this program.
I finished all the steps, but the final picture is terrible; idk how to fix it.
I installed everything and it works, but I closed the terminal. How can I open Stable Diffusion again?
Mine works, but I have to keep the terminal open. Is this supposed to happen? I'm sure I didn't do something right. It says that closing the terminal will terminate running python.
Thanks for this unique program! Now I can generate my anime characters!
Hi, thanks for the great guide, it all worked for me.
Could you make a video on how to use OptimizedSD by basujindal?
Thank you.❤
thanks bro
Can I do this setup on my MacOS Ventura?
anybody figure this out?
Hi, Im getting this error RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
learning IT is so fun when there is this brilliant voice explaining everything clearly and not a random indian tech support agent with the thickest accent you will ever hear
hello there, do you have any video for auto collect twitch channel points?
gave an error at the stage of launching the web interface:
Installing torch and torchvision
ERROR: Could not find a version that satisfies the requirement torch==2.0.1 (from versions: none)
ERROR: No matching distribution found for torch==2.0.1
WARNING: You are using pip version 20.2.3; however, version 23.2.1 is available.
You should consider upgrading via the '/Users/kupyasha/stable-diffusion-webui/venv/bin/python3 -m pip install --upgrade pip' command.
Dear TroubleChute, the terminal doesn't go ahead when I write "brew install cmake protobuf rust python@3.10 git wget". Can you help me? Thank you
I'm having the same problem.
To create a public link, set `share=True` in `launch()`.
Startup time: 120.1s (import torch: 3.8s, import gradio: 3.8s, import ldm: 0.7s, other imports: 3.4s, setup codeformer: 0.2s, load scripts: 1.5s, load SD checkpoint: 105.1s, create ui: 1.1s, gradio launch: 0.3s).
Error completing request
Arguments: ('task(eprn50rcg0itid4)', 'pool', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
processed = process_images(p)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/processing.py", line 486, in process_images
res = process_images_inner(p)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/processing.py", line 625, in process_images_inner
uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/processing.py", line 570, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/prompt_parser.py", line 140, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 669, in get_learned_conditioning
c = self.cond_stage_model(c)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/sd_hijack_clip.py", line 229, in forward
z = self.process_tokens(tokens, multipliers)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/sd_hijack_clip.py", line 254, in process_tokens
z = self.encode_with_transformers(tokens)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/sd_hijack_clip.py", line 302, in encode_with_transformers
outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 811, in forward
return self.text_model(
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 721, in forward
encoder_outputs = self.encoder(
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 650, in forward
layer_outputs = encoder_layer(
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 378, in forward
hidden_states = self.layer_norm1(hidden_states)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 189, in forward
return F.layer_norm(
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/functional.py", line 2503, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
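This traceback ends in the 'Half' error several commenters hit. It generally means half-precision (float16) ops are being run on a backend that doesn't implement them, and the workaround commonly suggested is the `--no-half` flag, set in `webui-user.sh` (sketch, default install path assumed):

```shell
# ~/stable-diffusion-webui/webui-user.sh
# Run the model (and VAE) in full precision to avoid
# "not implemented for 'Half'" errors on Mac.
export COMMANDLINE_ARGS="--no-half --no-half-vae"
```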
Hi! I have this error when I push generate image: AttributeError: 'NoneType' object has no attribute 'lowvram'. Do you know how to fix it? Thanks!!
Hello, do safetensor model files work for this?
Has there been an update? I keep getting "brew: command not found" within terminal.
Hey hey, I am facing this error:
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
Can anybody help please? :)