
Mac: Easy Stable Diffusion WebUI Installation | Full Guide & Tutorial

  • Published 30 Jul 2024
  • Transform Your Text into Stunning Images, now on a Mac! Learn How to Use AUTOMATIC1111's txt2img, img2img and More! This video will guide you through setting up Stable Diffusion WebUI on a Mac, and creating your first images.
    Homebrew: brew.sh
    Link to this video in article format (and all the copy/paste commands): hub.tcno.co/ai/stable-diffusi...
    Timestamps:
    0:00 - Explanation, and details
    0:35 - Install Homebrew
    1:28 - Downloading AUTOMATIC1111's Stable Diffusion WebUI
    1:54 - Downloading Stable Diffusion models
    3:28 - Starting A1's SDUI on Mac
    5:00 - Using Stable Diffusion on a Mac
    5:40 - Drawbacks of SD on Mac
    5:50 - Launch arguments & Less VRAM
    7:35 - Opening SDUI in the future
    8:20 - Is Mac good for SDUI? Not really...
    #StableDiffusion #AUTOMATIC1111 #Mac
    -----------------------------
    💸 Found this useful? Help me make more! Support me by becoming a member: / @troublechute
    -----------------------------
    💸 Direct donations via Ko-Fi: ko-fi.com/TCNOco
    💬 Discuss the video & Suggest (Discord): s.tcno.co/Discord
    👉 Game guides & Simple tips: / troublechutebasics
    🌐 Website: tcno.co
    📧 Need voiceovers done? Business query? Contact my business email: TroubleChute (at) tcno.co
    Everything in this video is my personal opinion and experience and should not be considered professional advice. Always do your own research and ensure what you're doing is safe.
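
    For reference, the copy/paste versions of every command are at the article link above. As a rough sketch (assuming a default install into your home folder, following the AUTOMATIC1111 wiki's Apple Silicon instructions), the steps covered in the video boil down to:

    # 1. Install Homebrew (from brew.sh), then follow its "Next steps" output to add brew to your PATH
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

    # 2. Install the dependencies mentioned in the video
    brew install cmake protobuf rust python@3.10 git wget

    # 3. Download AUTOMATIC1111's Stable Diffusion WebUI
    git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui ~/stable-diffusion-webui

    # 4. Drop a model file (.ckpt or .safetensors) into ~/stable-diffusion-webui/models/Stable-diffusion/, then launch
    cd ~/stable-diffusion-webui
    ./webui.sh

    The first launch downloads PyTorch and the other Python dependencies, so it takes a while; when it finishes it prints a local URL (by default http://127.0.0.1:7860) to open in a browser.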

COMMENTS • 451

  • @westphalianfresco998 5 months ago +6

    After ups and downs, mostly fixing error messages, I spent about 5h to make it work. Great vid, easy to follow

  • @NickSarafa 1 year ago +12

    Very timely. We're doing an artist residency using AI generated videos. Exactly what I needed. Thank you so much!

    • @puduart2176 1 year ago +1

      nice can i have the info of the residency? curious

  • @eliaszacariasart 1 year ago +1

    Do you have a tutorial on how to install deforum? Thanks so much for this video!

  • @atacamasoundsystem3361 1 year ago

    Thanks mate! Such a nice tutorial. I'm gonna check all your videos now, and of course donate!

  • @agri2986 8 months ago

    i have been struggling with installing SD, THANK YOU VERY MUCH , I DID IT

  • @robinhuizing 1 day ago

    I hardly ever comment but you are a legend my friend. This saved me hours, thank you!

  • @silvie5524 11 months ago

    This was super helpful! Thanks for sharing!

  • @neiljb1975 29 days ago

    Outstanding tutorial, thank you. Installs and runs on MacBook Pro M1 with stable-diffusion-v1-5.

  • @tupham9967 1 year ago

    def what I'm searching for. Tkx so much bro

  • @KingOmarcO 10 months ago

    Thank you so much ! Those are very clear instructions, I was able to do it. Hopefully it will become simpler in the future, but I guess we're still early adopters

  • @playgemji 1 year ago

    Thank you for making this tut 🙌

  • @lukashuettner 1 year ago +1

    You made my life much easier!!

  • @dofroad 1 year ago +3

    Hi! Thank you for your video, it was great! However I am encountering an issue:
    RuntimeError: MPS backend out of memory (MPS allocated: 4.14 GB, other allocations: 2.33 GB, max allowed: 6.80 GB). Tried to allocate 1012.50 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
    No idea where I have to change the value to 0, do you know?
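
    That variable is an environment variable rather than a WebUI setting, so one way to try it (a sketch, assuming the default install folder from the guide) is to export it in the same terminal session before launching:

    # Lift the MPS memory ceiling for this session (the message itself warns this may cause system instability)
    export PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0
    cd ~/stable-diffusion-webui
    ./webui.sh

    Lower-memory launch flags such as --medvram (covered at 5:50 in the video) are the other common workaround.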

  • @davidporter1830 1 year ago

    Great Install tutorial

  • @user-kv8lv3sq9j 1 year ago +1

    THAAAAANK You!!!
    I tried to install for 4 hours until I found your video!!
    Hero!

  • @Drezwest 1 year ago +1

    I appreciate your video. I had no clue what I was doing, but your video helped me install everything. My only question is, how do I know I'm running the latest version, which is 1.0? Been looking at how to update this on a Mac, but so far it's only PC.

  • @lindsay5985 1 year ago +61

    After installing Homebrew, there will be an instruction in the terminal output to add brew to your PATH. It's not shown in this video, because he has already installed brew, but you need to do it.

    • @zepps88 1 year ago +3

      Thank you

    • @mohammedsarmadawy362 1 year ago +2

      I don't understand how to do it. Care to explain? I'm writing in the commands it's telling me to run, but get the error message "-bash: syntax error near unexpected token `)'"

    • @TienW626 1 year ago +8

      @mohammedsarmadawy362 There are two commands under "==> Next steps:
      - Run these two commands in your terminal to add Homebrew to your PATH:". Copy the first one and hit Enter, then copy the second one and hit Enter. After that, you can continue with: brew install cmake....
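
      On Apple Silicon those two "Next steps" commands typically look something like the following (your own terminal shows the exact lines, and on Intel Macs Homebrew lives under /usr/local instead of /opt/homebrew):

      echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
      eval "$(/opt/homebrew/bin/brew shellenv)"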

    • @360screams 1 year ago +12

      @@TienW626 where do the commands begin and end

    • @quackyman796 1 year ago +1

      This video didn’t work. It says error can’t generate metadata at the end. I don’t know if I did something wrong but I copied the video and it didn’t work.

  • @Imnotaccurate2 1 year ago

    This helped a lot, thank you very much~!

  • @MrErick1160 1 year ago +3

    How fast is Stable Diffusion on a Mac M2? Which desktop GPU is it equivalent to in terms of speed?

  • @multicamsmash7438 1 year ago +3

    Hi help? not working getting an error: NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
    Time taken: 0.49s
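
    The flags named in that message are launch arguments. A common way to apply them (a sketch, assuming the default install and that options are kept in webui-user.sh) is either to edit the COMMANDLINE_ARGS line in ~/stable-diffusion-webui/webui-user.sh or to pass the flags directly:

    # In webui-user.sh
    export COMMANDLINE_ARGS="--no-half --upcast-sampling"

    # ...or once, on the command line
    ./webui.sh --no-half --upcast-sampling

    The same --no-half workaround is what is usually suggested for the various "not implemented for 'Half'" RuntimeErrors reported further down this thread.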

  • @DONUTnews 1 year ago

    Thank you so F**king much for this. You've saved me so much time and headaches.

  • @technonatorr 1 year ago

    Thank you, the video helped a lot..

  • @intoyou3573 7 months ago

    You solved my problem
    Thank youuu 🎉

  • @Jeanjean-nq8lk 1 year ago +1

    Thanks for your very clear tutorial; it seems the commandline_args --lowvram or --medvram don't change too much.
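
    For reference, those options are spelled as flags; assuming they go into webui-user.sh as a launch argument, the line would look roughly like:

    # Pick one: --medvram trades some speed for lower memory use, --lowvram is more aggressive
    export COMMANDLINE_ARGS="--medvram"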

  • @AsphaltBoomer 8 months ago +1

    super helpful, thanks for it :))

  • @blackfoxai 1 year ago

    Thanks brother, I did it.

  • @jacquestati2382 1 year ago

    Brilliant, thx! Can Dreambooth be used with this as an extension, or in any other way on Apple Silicon?

  • @gorkemgulan 1 year ago

    Thanks for this. I followed the text doc since the commands there worked to copy/paste; I only couldn't do the step "To update, run git pull in the ~/stable-diffusion-webui folder." but still ended up with a URL and managed to generate an image. I guess that's fine, right?
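
    The update step quoted there is just two terminal commands (assuming the folder name from the guide):

    cd ~/stable-diffusion-webui
    git pull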

  • @richardnatour2789 1 year ago

    Anyone know how I can improve iterations per second? I'm on a Mac Studio and only getting 3s/it. I've tried the settings suggested in the video.

  • @Zap525 1 year ago +1

    Got through, but in the web browser UI I get an error (RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'). Is there a particular problem with that? I have an M1 Pro...

  • @alessandrocomella9981 1 year ago

    Hi, I finished the whole tutorial, but when I copy the URL provided in the terminal, it won't open in the browser. Any suggestions?
    Thank you for the video

  • @kunsagigyula8091 1 year ago

    I have installed a depth map extension, but the tab is not showing up in the UI. Any idea why that is?

  • @canuckcreatures 1 year ago

    You're a god. Cheers mate.

  • @RJMCTV 8 months ago

    Thanks for the video. Another question: to install additional models, can I add them to the models folder while the Terminal and/or browser UI is still running, or should I quit out of it, add the models to the folder, and then restart?
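
    In the default layout, checkpoint files (.ckpt or .safetensors) go into the models/Stable-diffusion subfolder; a sketch, assuming the install path from the guide and a hypothetical file name:

    # "some-model.safetensors" is just an example file name
    cp ~/Downloads/some-model.safetensors ~/stable-diffusion-webui/models/Stable-diffusion/

    If the WebUI is already running, the new file generally appears after clicking the refresh button next to the checkpoint dropdown; restarting the WebUI also works.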

  • @JefHarrisnation 1 year ago

    Great guide! Worked for me!

    • @bhanuwongnachiangmai4412 1 year ago

      Can I ask which Mac version you are using?

    • @JefHarrisnation 1 year ago

      @@bhanuwongnachiangmai4412 M1 Max with 32GB

    • @JefHarrisnation 1 year ago

      @bhanuwongnachiangmai4412 And it's always the latest version of the software; I do a software update every time before I run it.

  • @kavvinn.6228 1 year ago

    Hello, thank you for your video. I followed your guide and got into SDUI already. Now that I've closed all the tabs, the question is: how do I reopen the program?
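
    Reopening it later (covered at 7:35 in the video) is essentially re-running the launch script and opening the printed local URL in a browser; a sketch assuming the default folder:

    cd ~/stable-diffusion-webui
    ./webui.sh
    # then open the http://127.0.0.1:7860 URL it prints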

  • @pieronline25 1 year ago

    Good morning,
    I have problems installing after Homebrew. I have a Mac mini M1.
    Can you help me? Thanks

  • @palex3382 9 months ago

    Thank you so much!

  • @craigbell5885 1 year ago +1

    When I try to download the Stable Diffusion 1.5 model from Hugging Face, the download speed does not get above 200 bits and gives an ETA of 6 hours; unfortunately it stops downloading after about 60 MB and says there was a timeout. This is crazy as I have super fast broadband. Looks like I won't be using this!! The Homebrew bit all went swimmingly.

  • @quinmg782 1 year ago

    Thank you, thank you and thank you.

  • @blaqdesign286 1 year ago

    Can you optimize vram etc. in the middle of a project? I don't want to start over.

  • @UpliftedBrother 1 year ago

    can you use safetensors checkpoint files to place into the models folder? or is it only ckpt

  • @KiwiWButonierce 1 year ago +3

    Can I use Radeon Vega eGpu with it?

  • @djleechcraft 1 year ago +1

    i am stuck at a stage where i had used the browser link and then it did something that led the terminal to get stuck at "Model loaded in 5.5s (calculate hash:..... "

  • @white_smoke4732 9 months ago

    Hi! Thanks for the video! But I have a problem with generating images. TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead. How do I fix it? Please help.

  • @HK-xn3ob 1 year ago

    Codeformer didn't download and doesn't work for me, however the webGUI works. Any solutions?

  • @marcschwinn477 1 year ago +1

    Everything worked so far but when I try to generate the pool it says
    "RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'"
    can somebody tell me what the problem is?

  • @moriwase 1 year ago

    RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'. What can I do?

  • @BlaguesVirtuoses 1 year ago

    I have trouble with the Deforum extension, especially with FFmpeg. Can you do a video about it? I think several users on Mac will have this issue.

  • @Rajmanov 1 year ago

    thank you!

  • @Posturepro 1 year ago

    when i type cd stable-diffusion-webui and enter nothing happens...is this normal?

  • @user-qp8se2pu6w 1 year ago +1

    When I try to generate something, a window pops up which says that Python closed suddenly, and the program aborts in the terminal

  • @pilutv8579 1 year ago

    Hi! After finishing the installation I couldn't get the local URL because of the following error: metadata-generation-failed. Does anyone know how to solve it? Thx

  • @SahabuddinTanrkulu 11 months ago

    I have a lot of libraries to download; can I set all of this up in a folder on my external hard drive?

  • @Kenpachi7amasama 8 months ago

    Hello, I've tried googling for this problem but didn't find any answers. I did everything as shown in the tutorial but when I click the generate button I get this error message: "Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead."
    I am using a .safetensor model and I'm trying it on a 13 inch m2 MacBook Pro.
    Has anyone had similar problems or maybe knows how I may try to solve it?

  • @terrancelow_ 1 year ago

    after downloading, i type "Pool" and it shows "RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'". What happened? QAQ

  • @michelangelobertoncelli595 1 year ago

    Thank you for the video.
    But, how can I install ControlNet?

  • @manuelgonzalezfernandez475 1 year ago

    I don't quite understand the tab to write automatically, it stops and I can't get past it, I need help. Thank you

  • @kjeksklaus7944 1 year ago

    thanks for this but i got a python error (it crashed) after I pressed generate and now it won't connect to the url, any ideas?

  • @Luxeduardo 1 year ago +19

    Hi, I'm getting this error: RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

  • @lokeshmamidala3199 1 year ago

    Does it take a long time to install the whole thing? Mine was stuck at "Textual inversions loading"

  • @GrampaClan 9 months ago

    great clear instructions.
    I'm on an M1 Max 64GB and getting around 5.94 it/s while rendering out a 1920x1080 frame with 2x upscaling. I used the two suggested optimization commands, which doubled the it/s.
    Is there a command to allocate more RAM usage? During rendering, Activity Monitor shows around 93-94% GPU usage and around 13/64GB RAM usage; I'd like to know if there's a way to leverage a little more RAM for rendering.

    • @Johnson4o 9 months ago +1

      The model is only around 7GB, so it will use 7GB of GPU memory; you'll need a bigger model if you want more RAM use.

  • @SmRivers 1 year ago +1

    idk why my SD won't generate :( it just gives RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

  • @edwardn8165 9 months ago +9

    i have an error, can u advise how to fix it? NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

  • @SamanthaSanders 11 months ago +1

    It keeps saying "Stable diffusion model failed to load" at the very last step. I did everything the same as you. What am I doing wrong??

  • @andressafurletti 1 year ago

    I'm getting this message when I try to generate an image: RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
    Does anyone have an idea where the issue is?

  • @Smartearningswaida 1 year ago +5

    I have a Mac and had an issue setting up Stable Diffusion.
    Finally, I did it from the terminal and got in,
    but the first time I wanted to try and see how it generates an image I got this error:
    RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'
    Time taken: 0.74s
    I even edited the Stable Diffusion launch command with the no-half option and updated the Python version, but no luck. Is there any other way?

  • @kw9976 11 months ago +1

    Great vid, but when I want to generate an image this happens:
    RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
    What does this mean and how can I fix it?

  • @BigSamplePacks 10 months ago

    please help
    RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

  • @whiplashtv 1 year ago +2

    Thanks so much for this. Was able to follow all the way through. I've since downloaded 2 new models and dropped them into the Stable-diffusion folder. What are the steps, or is there a video, for adding new models?

    • @whiplashtv 1 year ago

      Nevermind. I figured it out.

    • @bossmachine 1 year ago

      @@whiplashtv what did you do lol we're doing the same thing rn

    • @whiplashtv 1 year ago +2

      @bossmachine Dragged the file from the desktop into that folder; it took a while, but it stuck.

  • @persiagrai767 1 year ago +1

    great tutorial, the only thing is it's missing controlnet

  • @tikicamproductions4504 1 year ago

    Came across this message when trying to generate an image: RuntimeError: "log_vml_cpu" not implemented for 'Half'
    Can anyone help please?

  • @abrarN70 1 year ago

    Can anyone help? How do I uninstall, along with temp files and any other files downloaded to support Stable Diffusion?

  • @ibrahimdusmez2804 7 months ago

    Thanks bro

  • @lonelypluto0 1 year ago

    thank you very much brother. for now everything works for me and my m1 max. one thing tho: do I need to do smth specific when im closing it like u mentioned control + c or can I just close the terminal and browser?

    • @diegomendez5512 1 year ago

      Ctrl+C stops a process in the terminal. If you close the terminal it's the same result. You can just close the terminal; nothing will break.
      He did Ctrl+C because he wanted to close the process but not the terminal, so he could continue using it. But for you, at the end of your session just close the terminal; you don't need it anymore.

  • @essaMOVIE 2 months ago

    thank you for the video, I am using Apple M2 Max and Deforum in Stable Diffusion is not working, maybe you can help? - I have such a report "NoneType' object has no attribute 'sd_checkpoint_info''. Before reporting, please check your schedules/ init values. Full error message is in your terminal/ cli."

  • @tomhardison5998 1 year ago

    Got this error:
    Formula installation already attempted: python@3.10
    Any ideas how to proceed?

  • @singhhoop 11 months ago

    how can i run the stable diffusion 2 on my mac? I tried to follow your article but could not find the config file. Help meee

  • @cekuhnen 1 year ago +4

    compared to playgroundAI on the Mac MiniStudio Max, the images take around 30 seconds to generate, which means it is just shy of 2 to 3 times slower compared to the web. I don't think this is super slow.

    • @someghosts 1 year ago

      Way faster now with the Mac optimised models

    • @cekuhnen 1 year ago

      @@someghosts which models do you mean ?

  • @dotmediahouse2538 1 year ago

    i am facing this problem when entering this command ...
    brew install cmake protobuf rust python@3.10.8 git wget
    zsh: command not found: brew

  • @mazenelamir2796 1 year ago +1

    it's not working for 3D; is there a certain setup for it?

  • @msr3995 1 year ago

    why are checkpoints not working after I added medvram?
    I can't add models

  • @IRNGreatestever 10 months ago

    when prompted to put in brew install cmake protobuf rust python@3.10 git wget, i get "command not found" please help me, i really need this program

  • @tuandanh5149 1 year ago

    i finished all the steps, but the final result of the picture is terrible, idk how to fix it

  • @lexbcnfilm 1 year ago

    I installed everything and it works, but I closed the terminal. How can I get back into Stable Diffusion?

  • @sheadobson9910 1 year ago

    Mine works, but I have to keep the terminal open. Is this supposed to happen? I'm sure I didn't do something right. It says that closing the terminal will terminate running python.

  • @JustAPersonWhoComments 1 year ago

    thanks for this unique program! now i can generate my anime characters!

  • @igorikuba8853 1 year ago +1

    Hi, thanks for the great guide, it all worked for me.
    Could you make a video on how to use OptimizedSD by basujindal?
    Thank you.❤

  • @fotografcini 8 months ago

    thanks bro

  • @sifu2u_now 1 year ago

    Can I do this setup on my MacOS Ventura?

  • @MarceloToledoFilm 1 year ago

    anybody figure this out?
    Hi, I'm getting this error: RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

  • @danielhri100v 1 year ago +4

    learning IT is so fun when there is this brilliant voice explaining everything clearly and not a random indian tech support agent with the thickest accent you will ever hear

  • @iplaygames4fun2 1 year ago

    hello there, do you have any video for auto collect twitch channel points?

  • @kupyasha26 1 year ago +1

    gave an error at the stage of launching the web interface:
    Installing torch and torchvision
    ERROR: Could not find a version that satisfies the requirement torch==2.0.1 (from versions: none)
    ERROR: No matching distribution found for torch==2.0.1
    WARNING: You are using pip version 20.2.3; however, version 23.2.1 is available.
    You should consider upgrading via the '/Users/kupyasha/stable-diffusion-webui/venv/bin/python3 -m pip install --upgrade pip' command.

  • @marcobongini9974 1 year ago +3

    Dear TroubleChute, the terminal doesn't go ahead when I write this: "brew install cmake protobuf rust python@3.10 git wget". Can you help me? Thank you

  • @Bombomxyz123 1 year ago +8

    To create a public link, set `share=True` in `launch()`.
    Startup time: 120.1s (import torch: 3.8s, import gradio: 3.8s, import ldm: 0.7s, other imports: 3.4s, setup codeformer: 0.2s, load scripts: 1.5s, load SD checkpoint: 105.1s, create ui: 1.1s, gradio launch: 0.3s).
    Error completing request
    Arguments: ('task(eprn50rcg0itid4)', 'pool', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}
    Traceback (most recent call last):
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
    processed = process_images(p)
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/processing.py", line 486, in process_images
    res = process_images_inner(p)
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/processing.py", line 625, in process_images_inner
    uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/processing.py", line 570, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/prompt_parser.py", line 140, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/sd_hijack_clip.py", line 229, in forward
    z = self.process_tokens(tokens, multipliers)
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/sd_hijack_clip.py", line 254, in process_tokens
    z = self.encode_with_transformers(tokens)
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/sd_hijack_clip.py", line 302, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 811, in forward
    return self.text_model(
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 721, in forward
    encoder_outputs = self.encoder(
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 650, in forward
    layer_outputs = encoder_layer(
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 378, in forward
    hidden_states = self.layer_norm1(hidden_states)
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 189, in forward
    return F.layer_norm(
    File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/functional.py", line 2503, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
    RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

  • @carloscaro5941 1 month ago +1

    Hi! I have this error when I push generate image: AttributeError: 'NoneType' object has no attribute 'lowvram'. Do you know how to fix it? Thanks!!

  • @emptyheros 1 year ago

    Hello, do safetensor model files work for this?

  • @sezaku85 1 year ago

    Has there been an update? I keep getting "brew: command not found" within terminal.

  • @sam_shure_ 1 year ago

    Hey hey, I am facing this error:
    RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
    Can anybody help please? :)