OLLAMA | Want To Run UNCENSORED AI Models on Mac (M1/M2/M3)

  • Published 28 Nov 2024

COMMENTS • 6

  • @AIDevBytes
    @AIDevBytes  7 months ago +1

    🧑‍💻 My MacBook Pro Specs:
    Apple MacBook Pro M3 Max
    14-Core CPU
    30-Core GPU
    36GB Unified Memory
    1TB SSD Storage
    ℹ Other info you may find helpful 👇
    Check whether your computer can run a given LLM: huggingface.co/spaces/Vokturz/can-it-run-llm
    Remember that you will need a GPU with sufficient memory (VRAM) to run models with Ollama. If you are unsure how much GPU memory you need, check out the "Model Memory Calculator" that Hugging Face created: huggingface.co/docs/accelerate/main/en/usage_guides/model_size_estimator
    Also, here is an article that walks you through the exact mathematical calculation in "Calculating GPU memory for serving LLMs": www.substratus.ai/blog/calculating-gpu-memory-for-llm
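The calculation in that article boils down to parameter count × bytes per parameter, plus an overhead factor for KV cache and activations. A minimal sketch of that rule of thumb (the 1.2 overhead factor and byte sizes are the article's rough assumptions, not measurements):

```python
# Rough VRAM estimate for serving an LLM, following the rule-of-thumb
# formula: params * bytes_per_param * ~1.2 overhead. All numbers are
# approximations; real usage varies with context length and runtime.
def estimate_vram_gb(params_billion: float,
                     bytes_per_param: float = 2.0,
                     overhead: float = 1.2) -> float:
    """Estimate GPU memory in GB needed for inference.

    params_billion: model size, e.g. 7 for a 7B model
    bytes_per_param: 2.0 for fp16/bf16, 1.0 for 8-bit, 0.5 for 4-bit
    overhead: ~20% extra for KV cache and activations (rule of thumb)
    """
    return params_billion * bytes_per_param * overhead

# A 7B model in fp16: 7 * 2.0 * 1.2 = 16.8 GB
print(round(estimate_vram_gb(7), 1))                        # 16.8
# The same model quantized to 4 bits: 7 * 0.5 * 1.2 = 4.2 GB
print(round(estimate_vram_gb(7, bytes_per_param=0.5), 1))   # 4.2
```

This is why a 4-bit quantized 7B model fits comfortably in 36GB of unified memory, while an fp16 70B model does not.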

  • @mancowang6437
    @mancowang6437 4 months ago +1

    The install step doesn't work on my M1 Pro Mac with Sonoma: Uncaught Exception: Error: EPERM: operation not permitted, mkdir '/Users/wim/.ollama/logs' at Object.mkdirSync
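An EPERM on mkdir inside your own home directory usually means `~/.ollama` (or a parent) is owned by another user, often root after an earlier install with sudo. A possible fix sketch, assuming that is the cause here (not confirmed from the comment itself):

```shell
# Sketch: reclaim ownership of ~/.ollama so the app can create its logs dir.
# Assumes the EPERM is caused by root-owned files from a prior sudo install.
OLLAMA_DIR="$HOME/.ollama"
ls -ld "$OLLAMA_DIR"                     # inspect the current owner first
sudo chown -R "$(whoami)" "$OLLAMA_DIR"  # take ownership back
mkdir -p "$OLLAMA_DIR/logs"              # should now succeed without EPERM
```

If the directory is already owned by your user, the problem lies elsewhere (e.g. App Translocation or a quarantined binary) and this won't help.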

  • @JoshFKDigital
    @JoshFKDigital 7 months ago +1

    Should post the commands in the description 😁

    • @AIDevBytes
      @AIDevBytes  7 months ago +1

      👍 Thanks for the feedback! The commands are now in the description.

  • @everry3357
    @everry3357 7 months ago

    How's the response time with your MacBook Pro specs? Does it get anywhere near ChatGPT-4?

    • @AIDevBytes
      @AIDevBytes  7 months ago +1

      Once the model loads into GPU memory for the first time, follow-up responses seem only slightly slower than GPT-4. It's honestly not too noticeable if you are running on similar or better hardware specs than those I listed in the description.