FreeBSD: running a Large Language Model with ollama

  • Published 29 Nov 2024

COMMENTS • 17

  • @RoboNuggie · a month ago +4

    This was very interesting...... thank you!

    • @BSDJedi · a month ago +1

      My only issue is the lack of time for making the videos; if it were me, I would be doing them all the time 🙂
      BTW, did you catch the LLM hallucinating? It says that FreeBSD was first released in the year 2000.... yeah... sure... hahahaha

  • @existentialbaby · a month ago +1

    Thank you, good video

    • @BSDJedi · a month ago +1

      Thank you for your comment and positive feedback :-)

  • @K.Solowoniuk · 24 days ago +1

    Very cool.

    • @BSDJedi · 12 days ago

      Thank you :-)

  • @morkhoudia9 · a month ago +1

    What does it cost, in terms of disk space, to use ports on FreeBSD? Great video

    • @BSDJedi · a month ago

      Hi. The ports tree itself is relatively large - for me it takes about 1 or 2 GB, but this can depend on how you pull it (e.g. with all history, or only enough for the last commit, i.e. a shallow clone). I would definitely recommend using pkg instead of compiling everything, because compiling not only takes space but can also take a huge amount of time. As ollama is already available in the ports tree, you can just "pkg install" it :-)
      You might also find some interesting discussion about the size of the ports tree in here: forums.freebsd.org/threads/how-much-space-do-i-need-to-run-portsnap-fetch-extract.64586/
      The FreeBSD forums is an awesome source of information :-)
      Cheers!
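
      A rough sketch of the shallow-clone and pkg route described above (the repository URL and package name are as found in the FreeBSD docs and ports tree; paths assume the defaults):

      ```shell
      # Shallow clone of the ports tree: only the latest commit is fetched,
      # which keeps the checkout around 1-2 GB instead of pulling all history.
      git clone --depth 1 https://git.FreeBSD.org/ports.git /usr/ports

      # Prefer the prebuilt package over compiling from ports:
      pkg install -y ollama
      ```

      Run both as root; `pkg install` saves the hours it would take to compile ollama and its dependencies from source.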

  • @GokhanKaya · a month ago +3

    Thank you for the great video.
    Does this mean we can run our own AI on a home server?

    • @BSDJedi · a month ago +3

      Hi. Yes, you can run your own AI model on your home server with ollama. I heard it can even run on an RPi, but I have not tried this. The problem, though, is that it might require a beefy video card if you want the model to run at any usable speed...
      Cheers :-)
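
      As a sketch of what running a model at home looks like (the model name here is just an example; pick one from the ollama library that fits your hardware):

      ```shell
      # Start the ollama server; by default it listens on localhost:11434.
      ollama serve &

      # Pull a model and chat with it; smaller models are friendlier
      # to machines without a beefy GPU.
      ollama pull llama3.2
      ollama run llama3.2 "When was FreeBSD first released?"
      ```

      With the server running, other machines on your LAN can also talk to it over the HTTP API if you bind it to a reachable address.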

  • @jondycz · a month ago +4

    Ollama isn't by Meta. Llama is.

    • @BSDJedi · a month ago

      Thanks for the clarification. I will update the cover ASAP… Cheers :-)

  • @daGraveR · a month ago +3

    13:30 Yeah, I'm sure FreeBSD was first released in 2000 and not 1993... 😬

    • @BSDJedi · a month ago +1

      AI has hallucinations FTW... 🤣

    • @davidireland1766 · 8 days ago +2

      From Wiki: 1 November 1993; 31 years ago

  • @vertigoz · a month ago +1

    Is this script available for use on Windows? I cannot seem to find it anywhere

    • @BSDJedi · a month ago

      Hi. Thank you for the comment.
      No, I do not think you will find it on Windows... This script was developed by Yuri Victorovich for the FreeBSD project; however, you can get it from here: github.com/freebsd/freebsd-ports/blob/main/misc/ollama/files/ollama-limit-gpu-layers and adapt it to your own use case...
      Basically, you can see that the script runs the ollama command to get the Modelfile, adds the num_gpu parameter to it, and then recreates the model with ollama... So you could also reproduce the steps in Windows CMD.
      Cheers!
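
      A minimal sketch of those steps by hand (the model names and the num_gpu value are placeholders; the real script in the ports tree handles this more carefully):

      ```shell
      # 1. Dump the model's Modelfile:
      ollama show --modelfile mymodel > Modelfile

      # 2. Append a num_gpu parameter, limiting how many layers go to
      #    the GPU (0 = CPU only; pick a value that fits your VRAM):
      echo "PARAMETER num_gpu 0" >> Modelfile

      # 3. Recreate the model from the edited Modelfile:
      ollama create mymodel-cpu -f Modelfile
      ```

      The same three commands work anywhere the ollama CLI runs, which is why the steps translate to Windows CMD as well.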