AI on Mac Made Easy: How to run LLMs locally with OLLAMA in Swift/SwiftUI

  • Published 27 Oct 2024

COMMENTS • 20

  • @khermawan
    @khermawan 3 months ago +8

    Ollamac and OllamaKit creator here! 👋🏼 Great video, Karin!! ❤

  • @Algorithmswithsubham
    @Algorithmswithsubham 1 day ago

    More on these, please.

  • @LinuxH2O
    @LinuxH2O 2 months ago

    Really informative, something I was kind of in need of. Thanks for showing things off.

  • @Another0neTime
    @Another0neTime 3 months ago +1

    Thank you for the video, and sharing your knowledge.

  • @KD-SRE
    @KD-SRE 2 months ago +1

    I use '/bye' to exit the Ollama CLI.

  • @andrelabbe5050
    @andrelabbe5050 3 months ago

    I enjoyed the video. Easy to understand and, most importantly, it shows what you can do without too much hassle on a not-too-powerful MacBook. From the video I believe I have the same model as the one you used. I do like the idea of setting presets for the 'engine'. I also use the Copilot apps, so I can check how both perform on the same question. I have just tested deepseek-coder-v2 with the same questions as you... funny thing, it is not exactly the same answer. Also, on my 16 GB Mac, the memory activity gets a nice yellow colour. Sadly, unlike the Mac in the video, I have more stuff running in the background, like Dropbox, etc., which I cannot really kill just for the sake of it.

  • @botgang5092
    @botgang5092 1 month ago

    Nice! 👍

  • @guitaripod
    @guitaripod 3 months ago +1

    Wondering what it'd take to get something running on iOS. Even with a 2B model it might prove useful.

  • @tsalVlog
    @tsalVlog 3 months ago

    Great video!

  • @mindrivers
    @mindrivers 3 months ago

    Dear Karin, could you please advise how to put my entire Xcode project into a context window and ask the model about my entire codebase?

  • @officialcreatisoft
    @officialcreatisoft 3 months ago

    I've tried using LLMs locally, but I only have 8 GB of RAM. Great video!

    • @SwiftyPlace
      @SwiftyPlace  3 months ago +1

      Unfortunately, Apple ships the base models with 8 GB. A lot of people have the same problem as you.

    • @jayadky5983
      @jayadky5983 3 months ago +1

      I feel like you can still run the Phi3 model on your device.
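
    For anyone curious what the reply above looks like in practice, here is a minimal sketch, assuming Ollama is running locally on its default port 11434 and `ollama pull phi3` has already been run. The names `GenerateRequest`, `GenerateResponse`, and `askPhi3` are illustrative, not part of OllamaKit.

    ```swift
    import Foundation

    // Minimal sketch (not OllamaKit): call a locally running Ollama server on
    // its default port 11434 and ask the phi3 model for a completion.
    // Assumes `ollama pull phi3` has already been run on this Mac.
    struct GenerateRequest: Encodable {
        let model: String
        let prompt: String
        let stream: Bool   // false = one JSON object instead of a token stream
    }

    struct GenerateResponse: Decodable {
        let response: String
    }

    func askPhi3(_ prompt: String) async throws -> String {
        var request = URLRequest(url: URL(string: "http://localhost:11434/api/generate")!)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try JSONEncoder().encode(
            GenerateRequest(model: "phi3", prompt: prompt, stream: false)
        )
        let (data, _) = try await URLSession.shared.data(for: request)
        return try JSONDecoder().decode(GenerateResponse.self, from: data).response
    }
    ```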

  • @juliocuesta
    @juliocuesta 3 months ago

    If I understood correctly, the idea could be to create a macOS app that includes some feature that requires an LLM. The app is distributed without the LLM, and the user is notified that this feature will only be available if they download the model. This message could be implemented in a View with a button that downloads the file and configures the macOS app to start using it.
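
    A rough sketch of the download-on-demand flow described in the comment above, assuming a local Ollama install and using its /api/pull endpoint. The view name `ModelDownloadView` and the model "llama3.2" are placeholders, not something shown in the video.

    ```swift
    import SwiftUI

    // Rough sketch of the "download the model on demand" idea, assuming
    // Ollama is installed and running locally. It calls Ollama's /api/pull
    // endpoint; the view name and the "llama3.2" model are placeholders.
    struct PullModelRequest: Encodable {
        let model: String   // older Ollama versions call this field "name"
        let stream: Bool    // false = wait for the pull, then return one status object
    }

    struct ModelDownloadView: View {
        @State private var isDownloading = false
        @State private var status = "This feature needs a local model (a few GB download)."

        var body: some View {
            VStack(spacing: 12) {
                Text(status)
                Button(isDownloading ? "Downloading…" : "Download model") {
                    Task { await pullModel(named: "llama3.2") }
                }
                .disabled(isDownloading)
            }
            .padding()
        }

        @MainActor
        private func pullModel(named model: String) async {
            isDownloading = true
            defer { isDownloading = false }
            do {
                var request = URLRequest(url: URL(string: "http://localhost:11434/api/pull")!)
                request.httpMethod = "POST"
                request.setValue("application/json", forHTTPHeaderField: "Content-Type")
                request.httpBody = try JSONEncoder().encode(
                    PullModelRequest(model: model, stream: false)
                )
                _ = try await URLSession.shared.data(for: request)
                status = "Model downloaded. The feature is ready to use."
            } catch {
                status = "Download failed: \(error.localizedDescription)"
            }
        }
    }
    ```

    With stream set to false, the request only returns once the whole pull has finished; a real app would more likely stream the status objects and show download progress in the view.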

  • @kamertonaudiophileplayer847
    @kamertonaudiophileplayer847 3 months ago

    Awesome video!

  • @ericwilliams4554
    @ericwilliams4554 3 months ago

    Great video. Thank you. I am interested to know if any developers are using this in their iOS apps.

    • @SwiftyPlace
      @SwiftyPlace  3 months ago +2

      This does not work on iOS. If you want to run an LLM on an iPhone, you will need to use a smaller model, and those usually don't perform as well. Most iPhones have less than 8 GB of RAM. That is also why Apple Intelligence processes more advanced, complex tasks in the cloud.

  • @midnightcoder
    @midnightcoder 3 months ago +2

    Any way of running it on iOS?

    • @EsquireR
      @EsquireR 3 months ago

      Only watchOS, sorry.

  • @bobgodwinx
    @bobgodwinx 3 months ago

    LLMs have a long way to go. 4 GB to run a simple question is a no-go. They have to reduce it to 20 MB, and then people will start paying attention.