Local LLM Fine-tuning on Mac (M1 16GB)

  • Published Nov 6, 2024

COMMENTS • 64

  • @ShawhinTalebi
    @ShawhinTalebi  3 months ago +3

    Really excited to finally get this working! I know many people had asked for it. What should I cover next?

    • @rdegraci
      @rdegraci 3 months ago

      I was able to test your Jupyter Lab notebook and it generates the `adapters.npz` file, and everything works! But how do I create a new model that has the `adapters.npz` embedded inside of it? I am running an Ollama server; how would we load it with this newly fine-tuned model? We're using proprietary data, so everything has to remain local to my machine and can't be uploaded to the Internet.

    • @AlissonSantos-qw6db
      @AlissonSantos-qw6db 3 months ago

      Please talk about the MLOps lifecycle and how to implement it.

    • @ShawhinTalebi
      @ShawhinTalebi  3 months ago +1

      @@rdegraci Great question! The original mlx-examples repo shows how to do this: github.com/ml-explore/mlx-examples/tree/main/lora#fuse-and-upload
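
      For reference, the fuse step there takes the base model plus the adapters.npz and writes a fused copy to a local folder, so nothing ever leaves your machine. A rough sketch (the <...> parts are placeholders, and flag names may differ between versions of the repo):

      python fuse.py \
          --model <base-model-path-or-hf-repo> \
          --adapter-file adapters.npz \
          --save-path <folder-for-fused-model>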

    • @ShawhinTalebi
      @ShawhinTalebi  3 months ago +1

      @@AlissonSantos-qw6db Great suggestion. While I may not be the best source for MLOps, I can definitely include more details around implementation of specific use cases.

  • @azadehbayani9454
    @azadehbayani9454 25 days ago +1

    Wow, that was incredibly precise and helpful! Thank you, and keep up the fantastic work with your videos!

  • @JunYamog
    @JunYamog 3 months ago +2

    Thanks! I have been using Unsloth remotely for fine-tuning. Once the cloud bills start coming in, I am hoping to convince my boss that a MacBook Pro can be an option. MLX is still just a set of open tabs for me, so I'm glad to see someone doing it as well.

  • @ifycadeau
    @ifycadeau 3 months ago +4

    Didn't know you could do this on Mac! Amazing, thank you!

  • @kaldirYT
    @kaldirYT 3 months ago +2

    An easy-to-watch video with a great explanation 👍🏽

  • @eda-un8zr
    @eda-un8zr 1 month ago

    I binge-watched your videos - high-quality, great content. Thank you so much, please keep it up!

  • @LucaZappa
    @LucaZappa 12 days ago +2

    Great tutorial, thanks. One question: I didn't understand where the fine-tuned model is on my Mac, and is it possible to run the model in Ollama?

    • @ShawhinTalebi
      @ShawhinTalebi  10 days ago +1

      A folder should be created after training with the base model. Additionally, an adapters.npz file should appear which contains the adapters learned from LoRA.
      For running MLX models with Ollama, this video seems helpful: ua-cam.com/video/3UQ7GY9hNwk/v-deo.html
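
      To sanity-check the adapters locally, you can point the same lora.py script back at them for generation. A rough sketch (placeholders in <...>; flag names may vary by version of the mlx-examples repo):

      python lora.py \
          --model <base-model-path-or-hf-repo> \
          --adapter-file adapters.npz \
          --max-tokens 100 \
          --prompt "<your test prompt>"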

  • @pawel30w
    @pawel30w 3 months ago

    Thanks, great content! I really like the calm way you explain it all 👌

  • @acaudio7545
    @acaudio7545 2 months ago +1

    I've been playing around with this, trying to see how you'd respond if I made horrible comments about your content - managed to get one slightly angry response 😁. But on a serious note, I love the work and am a big fan of the channel now!

  • @chetanpun3937
    @chetanpun3937 3 months ago

    I was waiting for this video. Thank you so much.

  • @PhilippeDiollot
    @PhilippeDiollot 7 hours ago

    Nice Telecaster!

  • @LIMLIMLIM111
    @LIMLIMLIM111 13 days ago

    Thank you, you are awesome

  • @ISK_VAGR
    @ISK_VAGR 3 months ago +1

    Really cool and helpful. Thank you very much. Have you performed fine-tuning on Llama 3.1 models successfully with this method?

    • @ShawhinTalebi
      @ShawhinTalebi  3 months ago +2

      I have not, but it should be as easy as replacing "mistralai/Mistral-7B-Instruct-v0.2-4bit" with "mlx-community/Meta-Llama-3.1-8B-Instruct-4bit" in the example code.
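
      For a quick check that the swapped model loads and generates, something along these lines should work with the mlx_lm Python API (a sketch only; the exact generate() signature can differ between mlx_lm versions):

      from mlx_lm import load, generate

      # load the 4-bit Llama 3.1 instruct model instead of the Mistral one
      model, tokenizer = load("mlx-community/Meta-Llama-3.1-8B-Instruct-4bit")

      # generate a short completion to confirm everything runs
      response = generate(model, tokenizer, prompt="What is LoRA?", max_tokens=100)
      print(response)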

  • @AbidSaudagar
    @AbidSaudagar 3 months ago +1

    Amazing video. Thanks for sharing such valuable content.

    • @ShawhinTalebi
      @ShawhinTalebi  3 months ago

      Thanks Abid! I've been waiting 7 months for another video from you 😜

    • @AbidSaudagar
      @AbidSaudagar 3 months ago

      @@ShawhinTalebi coming soon 😄

  • @inishkohli273
    @inishkohli273 3 months ago

    Yes YES YES

  • @lorenzoplaatjies8971
    @lorenzoplaatjies8971 3 months ago

    Love the video, thank you for these concise tutorials!
    On the initial inference, before moving on to fine-tuning, I can't get the generation step to produce any tokens.

    • @ShawhinTalebi
      @ShawhinTalebi  3 months ago +1

      Glad you like them :)
      Not sure what could be going wrong. Were you able to successfully install mlx_lm?

    • @lorenzoplaatjies8971
      @lorenzoplaatjies8971 3 months ago

      ​@@ShawhinTalebi I appreciate you responding, I was able to figure it out! Thank you again for the video.

  • @futurerealmstech
    @futurerealmstech 1 month ago

    There are some rumors going around that 16GB should now be the standard memory configuration offered on the new Mac Mini. Any chance that when the M4 Mac Mini launches you can do a video on that as well?

    • @ShawhinTalebi
      @ShawhinTalebi  1 month ago

      Great suggestion! Haven't heard that rumor, but it makes sense. I might be switching to a single (beefy) MacBook Pro; I could do a breakdown of how I use it for ML projects if there's interest :)

  • @ShekharSuman271991
    @ShekharSuman271991 13 days ago

    Took me a moment to find this:

    parser.add_argument(
        "--data",
        type=str,
        default="data/",
        help="Directory with {train, valid, test}.jsonl files",
    )

    Worth mentioning that the data files are picked up from data/ by default.
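
    So if your .jsonl files live somewhere else, you can point the script at them explicitly when training. A rough sketch (the folder name and model placeholder are just examples):

    python lora.py --train \
        --model <base-model-path-or-hf-repo> \
        --data ./my_finetuning_data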

  • @absar66
    @absar66 3 months ago +1

    Thanks for the great video. Based on your varied experience, can you make a separate video on data-preparation techniques/methods for fine-tuning-related tasks on open-source models? Hoping to get a response from Shaw-human rather than Shaw-GPT... (just kidding) 😅

    • @ShawhinTalebi
      @ShawhinTalebi  3 months ago

      Great suggestion! There can be a lot of art in data prep (especially in this context). Added it to the list.

  • @AGI-Bingo
    @AGI-Bingo 2 months ago +1

    What can I expect to achieve on an M3 Pro with 64GB?

    • @ShawhinTalebi
      @ShawhinTalebi  2 months ago

      You could likely run full fine-tuning on some smaller models (

  • @camperinCod43
    @camperinCod43 1 month ago

    Any advice or guidance on how I could deploy this model so that I can use it as a Telegram bot? I've been able to plug it into Telegram's API and get the bot up and running (locally on my Mac), but I don't want to keep my Mac on just to run the bot! Cheers, thanks for the video!

    • @ShawhinTalebi
      @ShawhinTalebi  1 month ago

      Good question! Two options come to mind: 1) buy a Mac to serve your app, or 2) rent an M-series Mac from a cloud provider, e.g. www.scaleway.com/en/hello-m1/

  • @dharavivek
    @dharavivek 2 months ago +1

    Can I do it on a Mac M1 with 8GB of RAM?

    • @ShawhinTalebi
      @ShawhinTalebi  2 months ago +2

      Might be worth a shot. You can try reducing batch size to 1 or 2 if you run into memory issues.
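
      For reference, batch size is just a flag on the training script. A rough sketch, assuming the mlx-examples lora.py script (placeholders in <...>; flag names may differ by version, and lowering --lora-layers can also save memory):

      python lora.py --train \
          --model <base-model-path-or-hf-repo> \
          --data ./data \
          --batch-size 1 \
          --lora-layers 4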

    • @DuxBarbosa
      @DuxBarbosa 2 months ago

      Did it work?

  • @tuncaydemirtepe7978
    @tuncaydemirtepe7978 3 months ago

    What if you have an Apple M2 Max with 96GB of memory? Does that mean there is technically a 96GB GPU?

    • @ShawhinTalebi
      @ShawhinTalebi  2 months ago

      Good question. With the M-series chips there's no separate CPU vs. GPU memory (it's unified). The important thing here is that using MLX allows you to make full use of your 96GB when training models!

    • @tuncaydemirtepe7978
      @tuncaydemirtepe7978 2 months ago

      I'll give it a try.

  • @livebernd
    @livebernd 3 months ago

    How about fine-tuning with an Intel processor on a Mac?

    • @ShawhinTalebi
      @ShawhinTalebi  3 months ago

      MLX won't help, but if you have a graphics card there may be tools out there that can. I just haven't done that before.

  • @daan3298
    @daan3298 3 months ago

    Can I capture video and audio all day, with a camera on my shoulder, and fine-tune a model with the data every night?

    • @ShawhinTalebi
      @ShawhinTalebi  3 months ago

      Sounds like an interesting use case! This is definitely possible. Potential challenges I see are: 1) handling that much video data and 2) figuring out how to pass that data into the model (e.g. you could use a multi-modal model or find an effective way to translate it into text).

    • @daan3298
      @daan3298 3 months ago

      @@ShawhinTalebi some steps in between to filter the input for usability could be handy. Maybe SAM?

    • @ShawhinTalebi
      @ShawhinTalebi  3 months ago

      @@daan3298 Without knowing any details, I can imagine that being helpful. Segment with SAM, then run object detection with another model.

  • @saanvibehele8185
    @saanvibehele8185 2 months ago

    Will this run on 8GB of memory?

    • @ShawhinTalebi
      @ShawhinTalebi  2 months ago +1

      It might be worth a try. You can also reduce the batch size if you run into memory issues.

    • @saanvibehele8185
      @saanvibehele8185 2 months ago +1

      @@ShawhinTalebi I am running it now. 20 epochs have run successfully so far

    • @ShawhinTalebi
      @ShawhinTalebi  2 months ago

      @@saanvibehele8185 Awesome!

  • @jpcam4781
    @jpcam4781 3 months ago

    Has anyone tried this on a 3.8GHz 8-core Intel Core i7 chip?

    • @ShawhinTalebi
      @ShawhinTalebi  2 months ago

      MLX is specifically made for M-series chips. This example won't work with an i7.

  • @clapclapapp
    @clapclapapp 3 months ago

    8GB RAM RIP :(

    • @ShawhinTalebi
      @ShawhinTalebi  3 months ago

      LOL this still might be worth a try! If you run into memory issues you can reduce the batch size to 1 or 2. Curious to hear how it goes :)

    • @clapclapapp
      @clapclapapp 3 months ago

      @@ShawhinTalebi risking wrecking my only device is totally worth it

    • @ShawhinTalebi
      @ShawhinTalebi  3 months ago

      @@clapclapapp lol