Enchanted LLM and the bright path for open language models

  • Published Jun 7, 2024
  • A look at the open source Enchanted LLM app and some thoughts about the state of open language models.
    Find Enchanted LLM here: apps.apple.com/us/app/enchant...
    Keyboard: Glove80 - www.moergo.com/collections/gl...
    Camera: Canon EOS R5 amzn.to/3CCrxzl
    Monitor: Dell U4914DW 49in amzn.to/3MJV1jx
    SSD for Video Editing: VectoTech Rapid 8TB amzn.to/3hXz9TM
    Microphone 1: Rode NT1-A amzn.to/3vWM4gL
    Microphone 2: Sennheiser 416 amzn.to/3Fkti60
    Microphone Interface: Focusrite Clarett+ 2Pre amzn.to/3J5dy7S
    Tripod: JOBY GorillaPod 5K amzn.to/3JaPxMA
    Mouse: Razer DeathAdder amzn.to/3J9fYCf
    Computer: 2021 Macbook Pro amzn.to/3J7FXtW
    Lens 1: Canon RF50mm F 1.2L USM amzn.to/3qeJrX6
    Lens 2: Canon RF24mm F1.8 Macro IS STM Lens amzn.to/3UUs1bB
    Caffeine: High Brew Cold Brew Coffee amzn.to/3hXyx0q
    More Caffeine: Monster Energy Juice, Pipeline Punch amzn.to/3Czmfox
    Building A Second Brain book: amzn.to/3cIShWf
  • Science & Technology

COMMENTS • 32

  • @codetothemoon
    @codetothemoon  4 months ago +9

    ERRATA: In the video I mention that setting OLLAMA_HOST is an alternative to using ngrok, but that's only true when you just need access on your local network. ngrok apparently lets you use your Ollama instance from anywhere, which sounds awesome (thanks to @havokgames8297 for pointing this out). A rough sketch of both options is below.
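    Roughly, the two setups look like this (port 11434 is Ollama's default; the host-header flag is only an assumption for cases where Ollama rejects the tunneled requests):

    ```
    # Option 1: local network only - bind Ollama to all interfaces
    OLLAMA_HOST=0.0.0.0 ollama serve
    # then point Enchanted at http://<your-machine's-LAN-IP>:11434

    # Option 2: reachable from anywhere - tunnel the default port through ngrok
    ollama serve
    ngrok http 11434
    # if Ollama rejects the tunneled requests, --host-header="localhost:11434" may help
    ```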

    • @melongrasp
      @melongrasp 4 months ago

      Oh, that's really nice. Thanks for sharing!

    • @newtonchutney
      @newtonchutney 3 months ago

      Yep, ngrok can be used to forward your RPi! 😂
      But I'd suggest people look into Tailscale instead, as it offers a lot more security and privacy.

    • @newtonchutney
      @newtonchutney 3 months ago +1

      BTW, Tailscale is a mesh VPN, not a reverse proxy like ngrok. A rough sketch of pairing it with Ollama is below.
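      Something like this (assuming Tailscale is installed and logged in on both the machine running Ollama and the device running Enchanted; the IP is a placeholder):

      ```
      # on the machine running Ollama
      tailscale up
      OLLAMA_HOST=0.0.0.0 ollama serve

      # find that machine's tailnet address
      tailscale ip -4   # e.g. 100.x.y.z

      # then point Enchanted at http://100.x.y.z:11434
      ```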

  • @undefined24
    @undefined24 4 months ago +5

    Looks promising, thanks for sharing.

  • @Kabodanki
    @Kabodanki 4 months ago +3

    LLMs not biased by the Bay Area mentality are the future. I'm glad Mistral is French; there's some hope of getting away from censorship.

    • @codetothemoon
      @codetothemoon  4 months ago +5

      not sure, it might have a bias towards crepes and baguettes, but I think I'm ok with that!

  • @devopstoolbox
    @devopstoolbox 4 months ago +5

    That is SO COOL!!!

    • @codetothemoon
      @codetothemoon  4 months ago +3

      agree - I know you've been on the Ollama train too 🚂

  • @fooblahblah
    @fooblahblah 4 months ago +4

    You can use ngrok to proxy to your internal machine via an external hostname or IP.

    • @codetothemoon
      @codetothemoon  4 months ago +2

      Thanks - I’ve added this as a pinned errata comment

  • @lenninlc
    @lenninlc 4 months ago +1

    So cool!

  • @dpi3981
    @dpi3981 4 months ago +2

    What GPU do you use for your setup?

    • @codetothemoon
      @codetothemoon  4 months ago +1

      I have an M1 Max, which has an integrated GPU.

  • @rnp0728
    @rnp0728 4 months ago +2

    Great

  • @youpapai
    @youpapai 4 months ago +1

    Why does `ollama run something` pull/download the model every time? Is there a setting to cache it, or to use the already-downloaded model?

    • @codetothemoon
      @codetothemoon  4 months ago

      It doesn't, at least for me. Everything that appears in the list of language models to choose from is already downloaded and ready to go. That said, they might take a few seconds to load into memory, especially if they are on the larger side. Mistral 7B only takes ~10 seconds or so to load into memory for me. Are you seeing an issue where the model is downloaded on every run?
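      In case it helps, this is roughly the flow I'd expect (mistral is just an example model name):

      ```
      ollama pull mistral   # downloads the model once and caches it locally
      ollama list           # the cached model should show up here
      ollama run mistral    # later runs should load from the cache, not re-download
      ```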

    • @youpapai
      @youpapai 3 months ago

      @codetothemoon Yes, it's being downloaded on every run.

  • @vimaximus1360
    @vimaximus1360 4 months ago +1

    I would love to see some hardware comparisons between a Mac with 32+ GB of RAM and some Nvidia GPUs.

    • @codetothemoon
      @codetothemoon  4 months ago

      This video might be what you're looking for! ua-cam.com/video/jaM02mb6JFM/v-deo.html

    • @vimaximus1360
      @vimaximus1360 4 months ago

      Perfect! Thank you @codetothemoon!

  • @TommiNiemi-hu8pb
    @TommiNiemi-hu8pb 4 months ago +1

    ngrok is for NAT traversal.

    • @codetothemoon
      @codetothemoon  4 months ago

      Thanks, yeah, I made a pinned errata comment about this 😎

  • @havokgames8297
    @havokgames8297 4 months ago +1

    ngrok would let you access your Ollama instance without being on the same Wi-Fi as your computer.

    • @codetothemoon
      @codetothemoon  4 months ago

      ahh got it - thanks for clarifying this! I should have looked into it a bit more. I'll post this in an errata comment.

    • @havokgames8297
      @havokgames8297 4 months ago +1

      @codetothemoon No worries, no one would expect you to be an expert at everything. I've used ngrok, for example, when developing a web app locally that has webhooks and I want an external service to be able to reach my local development server; it is perfect for this. The issue is that on the free tier it won't keep the same hostname, so when you configure your Enchanted LLM app, if you restart ngrok then the URL will be different. Either you can pay for the service and get static URLs (I believe), or use another static DNS service with a hostname pointing to your machine.
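      Roughly, it looks like this (the domain is just a placeholder, and whether the --domain flag is available depends on your ngrok version and plan):

      ```
      # free tier: the public URL changes every time ngrok restarts
      ngrok http 11434

      # with a reserved/static domain the URL stays stable across restarts
      ngrok http 11434 --domain=example-name.ngrok-free.app
      ```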

  • @CrazyLuke11
    @CrazyLuke11 4 months ago +2

    First 🎉🎉😂😂