2. Installing Ollama

  • Published 27 Dec 2024

COMMENTS • 26

  • @fabriai · 5 months ago +1

    Matt, thanks for the course. So far so good.

  • @zwelimdlalose1059 · 4 months ago

    Thank you, Matt. I'd been struggling to run Ollama for a minute now; you just showed me what I was doing wrong 😅

  • @ReidKimball · 4 months ago

    Excited for the next videos! I'd like to start developing my own tools and small apps using LLMs. I'm technical and have written small Python scripts, but I'm not an experienced software engineer.

  • @usiala · 3 months ago

    Great Series. Thanks.

  • @solyarisoftware · 4 months ago +1

    Thanks, Matt. I'm upvoting all your videos. Is there a way to know if Ollama is using a GPU in the background? Perhaps a command-line command? More generally, a session that dives deeper into GPU usage would be great.

    • @solyarisoftware · 4 months ago

      Oh sorry, you already answered a similar question in the first comment.

  • @incogveto · 5 months ago +1

    I appreciate your videos 🦾. I'm running Ollama in Docker Desktop on Windows 10 using the WSL 2 Ubuntu integration. I have a single 4090 and it smokes with 8b fp16 models. Running Ollama directly on Windows was terribly slow, even with 8b q4 models; it took forever to load models into VRAM 🫵🤓👍

    • @technovangelist · 5 months ago +1

      You have multiple levels of abstraction there. It's always going to be best to run Ollama directly on Windows rather than in Docker in an Ubuntu container on the WSL VM. If it was slower, there must have been an issue with the drivers; I would solve that first.

    • @incogveto · 5 months ago

      @technovangelist I reinstalled Ollama on Windows and it's just as fast, if not faster. Must have had an old driver, idk... Thanks for the info, Matt!

    • @technovangelist · 5 months ago

      Nice. Thanks for letting me know it's all good.

  • @ambarrlite · 5 months ago

    lol, I was thinking of moving the .ollama folder to another drive and using a symbolic link, but now I will wait for those environment instructions. :)
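
    For reference in the meantime: the relevant setting is the OLLAMA_MODELS environment variable, which tells the server where to keep model files. A minimal TypeScript sketch of launching the server with a relocated model directory (the D: path is only an illustrative example):

    ```ts
    // Sketch: start `ollama serve` with a relocated model directory by
    // setting the documented OLLAMA_MODELS environment variable.
    // The path below is an example, not a recommendation.
    import { spawn } from "node:child_process";

    const server = spawn("ollama", ["serve"], {
      env: { ...process.env, OLLAMA_MODELS: "D:\\models\\ollama" },
      stdio: "inherit", // show the server's logs in this terminal
    });

    server.on("exit", (code) => {
      console.log(`ollama serve exited with code ${code}`);
    });
    ```

    Setting the variable system-wide (or in the systemd service on Linux) achieves the same thing without a wrapper script.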

  • @thibdub2752 · 5 months ago +2

    Is it possible to have a video on Paperspace and Brev?

    • @technovangelist · 5 months ago +1

      That’s a great idea. I hadn't considered doing it before, but I probably should.

  • @kellysnodgrass2236 · 2 months ago +1

    I am building an Ollama environment for my small business. I am way more familiar with Windows than Linux. Should I install Ollama on Windows or Linux? If Linux, I plan to use WSL. I come from a development background, so I'm sure I can get around in Linux. Any recommendations? Thanks for sharing your knowledge with us!

    • @technovangelist · 2 months ago +1

      If you're more familiar with Windows, use Windows.

  • @Leon-AlexisSauer · 3 months ago

    Is it possible to get a graphical UI?

    • @technovangelist · 3 months ago

      Sure, there are a lot of choices listed on the Ollama repo.

  • @claudioguendelman · 5 months ago

    Thanks so much! I want to add the AI to a PHP system.

  • @alanrussell6678 · 5 months ago +1

    How do I know whether Ollama is seeing/using my GPU?

    • @znzbest2004 · 5 months ago +1

      Run 'ollama ps' in the terminal. There is a PROCESSOR column which shows CPU or GPU.

    • @technovangelist · 5 months ago

      After asking a question, you can run 'ollama ps' to see whether Ollama used the GPU and how much of the model could be offloaded to it.
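
      The same check is available programmatically: Ollama's local HTTP API exposes a /api/ps endpoint that reports each loaded model's total size and the portion held in VRAM. A minimal TypeScript sketch, assuming the default localhost:11434 address and Node 18+ for the built-in fetch:

      ```ts
      // Sketch: ask the local Ollama server which models are loaded and
      // how much of each one is resident in GPU memory.
      interface RunningModel {
        name: string;
        size: number;      // total size of the loaded model, in bytes
        size_vram: number; // bytes of the model held in GPU memory
      }

      const res = await fetch("http://localhost:11434/api/ps");
      const { models } = (await res.json()) as { models: RunningModel[] };

      if (models.length === 0) {
        console.log("No model is loaded; ask a question first, then re-check.");
      }
      for (const m of models) {
        const pct = m.size > 0 ? Math.round((100 * m.size_vram) / m.size) : 0;
        console.log(`${m.name}: ${pct}% of the model is offloaded to the GPU`);
      }
      ```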

  • @Francotujk · 5 months ago

    Hi!
    Do you know how to “package” Ollama and an LLM inside one app (for example an Electron + React app)?
    So the end user doesn't need to install Ollama or the LLM and doesn't need to use the terminal; they just download the Electron app and start using the LLM.

    • @technovangelist · 5 months ago

      I don't, but Bruce on the Ollama team did it with chatd. Well, he added Ollama, not a model. You can find the source on GitHub. (A rough sketch of the idea follows at the end of this thread.)

    • @technovangelist · 5 months ago

      Here is a link to a message about this in the Ollama Discord: discord.com/channels/1128867683291627614/1261421971615252480

    • @Francotujk · 5 months ago

      @technovangelist Thank you so much!! I will take a look at it!
      I really appreciate it! 👍🏻