Ollama on Linux: Easily Install Any LLM on Your Server

  • Published 14 Nov 2024

COMMENTS • 52

  • @crazytom • 4 months ago +5

    Thanks for leaving all the errors in and correcting them. Excellent.

  • @datpspguy • 10 months ago +4

    I was using Ubuntu Desktop to run Mixtral on Ollama so I could make API calls from my FastAPI app in VS Code, but realized I should separate them out and go headless for Ollama. I didn't realize that CORS was blocking calls from my dev machine, and this video helped once I found the GitHub page as well. Thanks for sharing.

    • @IanWootten • 10 months ago +1

      Glad to hear you sorted it!

    • @datpspguy • 10 months ago

      @IanWootten Thank you. I ended up storing the environment variable in the .conf file to bind the IP address, so it handles this automatically.
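
      A rough sketch of the kind of override described above, assuming the standard systemd install and the documented OLLAMA_HOST / OLLAMA_ORIGINS variables (the values shown are illustrative, not prescriptive):

      # Open an override file for the unit the installer created
      sudo systemctl edit ollama.service
      # ...and add something like the following to it:
      #   [Service]
      #   Environment="OLLAMA_HOST=0.0.0.0:11434"   # listen on all interfaces
      #   Environment="OLLAMA_ORIGINS=*"            # allow cross-origin callers
      sudo systemctl restart ollama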

  • @sto3359 • 1 year ago +7

    This is amazing news! I'm limited to 16GB RAM on my Macs, but not so on my Linux machines!

  • @DataDrivenDailies • 1 year ago +5

    Just what I was looking for, thanks Ian!

  • @timjx3675 • 1 year ago +6

    Mistral 7B is running really sweet on my old Asus (16GB RAM) laptop.

    • @IanWootten • 1 year ago +3

      Runs really fast on my MBP too, just started playing with it yesterday.

    • @timjx3675 • 1 year ago

      @IanWootten sweet

    • @bigsmoke4568 • 2 months ago

      Are you running it without a GPU? I have an old laptop with 16GB as well with a beefy CPU, but I'm not sure if it'll be able to run somewhat smoothly with just the specs I have.

  • @receps.8396 • 1 month ago

    Thank you, indeed. It worked.

  • @amjadiqbal5353 • 2 months ago

    You are a real hero.

  • @74Gee • 6 months ago

    RunPod is very affordable too. From 17c per hour for an Nvidia 3080.

    • @IanWootten • 6 months ago +1

      Yeah, I wanted to do a comparison of all the new services appearing.

  • @perschinski • 5 months ago

    Great stuff, thanks a lot!

  • @trapez_yt • 4 months ago +1

    I can't run it with service ollama start, it says the following:
    $ sudo service ollama start
    ollama: unrecognized service
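
    If the official install script was used on a systemd distro, Ollama is registered as a systemd unit, so systemctl is the more direct way to drive it; failing that, the server can be started in the foreground. A sketch:

    sudo systemctl start ollama    # start the unit created by the install script
    systemctl status ollama        # confirm it is running
    # If no unit exists (e.g. a manual install), run the server directly:
    ollama serve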

  • @PengfeiXue • 7 months ago +1

    Can we use Ollama to serve in production? If not, what is your suggestion?

  • @atrocitus777 • 9 months ago +2

    How does this scale for multiple users sending multiple requests at a time? Do you need to use a load balancer / reverse proxy? I don't think Ollama supports batch inference yet.

    • @jakestevens3694 • 8 months ago

      You would have to launch and run the application multiple times; the easiest way is to just use something like Docker. Otherwise, I believe there's the "screen" command. If I remember correctly, on Linux this will allow you to run applications in the CLI with multiple virtual "screens", or rather more like sessions; you would then want to make sure each instance uses a different port from the others. Also take note that the RAM each one uses is dedicated to it, while the CPU can be shared. Sharing RAM might be possible (with some tricks), however it's unlikely.
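
      A rough sketch of the multiple-instances idea above, assuming the OLLAMA_HOST variable is used to give each server its own port (the ports are arbitrary; a reverse proxy or load balancer would then sit in front of them):

      # Illustrative: two independent Ollama servers on different ports
      OLLAMA_HOST=127.0.0.1:11434 ollama serve &
      OLLAMA_HOST=127.0.0.1:11435 ollama serve &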

    • @atrocitus777 • 8 months ago

      @jakestevens3694 What about pulling from a custom endpoint where I have my own hosted models? I want to run this on an air-gapped network that doesn't have any access to the internet, so if I could point it to an on-prem server I have, that would be awesome.
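
      One possible approach on an air-gapped box, assuming the model weights can be copied over as a GGUF file: register the file locally with a Modelfile instead of pulling from the public registry (the filename and model name below are hypothetical):

      # Illustrative only: build a local model from a copied GGUF file
      printf 'FROM ./mistral-7b-instruct.Q4_K_M.gguf\n' > Modelfile
      ollama create mistral-local -f Modelfile
      ollama run mistral-local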

  • @wcgbr • 8 months ago

    Hello. I'm developing an on-premises application that consumes Ollama via its API. However, after a few minutes, the Ollama server stops automatically. I would like to know if there is any way to keep it running until I stop it.
    Thank you very much.
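
    If what actually stops is the loaded model rather than the server process (Ollama unloads idle models after a few minutes by default), the keep_alive setting may be what is needed. A sketch with illustrative values:

    # Keep the model resident indefinitely for requests like this one
    curl http://localhost:11434/api/generate -d '{
      "model": "llama2",
      "prompt": "hello",
      "keep_alive": -1
    }'
    # ...or set it for the whole service via an environment variable:
    #   Environment="OLLAMA_KEEP_ALIVE=-1"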

  • @VulcanOnWheels • 7 months ago

    0:08 How did you get to your pronunciation of Linux?
    10:53 How could one correct the error occurring here?

  • @rishavbharti5225 • 8 months ago

    This was a really helpful video Ian!
    But I am facing one issue: after running ollama serve, the server shuts down when I close the terminal. Please tell me if there is a way to prevent this.
    Thanks!
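
    Two common ways to keep it alive after the terminal closes, assuming the standard Linux install (which already ships a systemd unit):

    # Option 1: let systemd manage the server instead of a foreground `ollama serve`
    sudo systemctl enable --now ollama
    # Option 2: detach a manually started server from the terminal session
    nohup ollama serve > ollama.log 2>&1 &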

  • @SuperRia33 • 4 months ago

    How do you connect to the server via a Python client or FastAPI for integration with projects/notebooks?

    • @IanWootten • 4 months ago

      If you simply want to make a request to an API from Python, there are plenty of options. You can use a module from the standard library like urllib, or a popular library like requests.
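
      Whichever Python client is used, the underlying call is just an HTTP POST to the Ollama API. A rough sketch of that request, shown here with curl (the model name is illustrative):

      curl http://localhost:11434/api/chat -d '{
        "model": "llama2",
        "messages": [{"role": "user", "content": "Why is the sky blue?"}]
      }'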

  • @ITworld-gw9iy • 7 months ago

    For a 70B model, what server would I need to rent? The docs say at least 64GB of RAM... but there are no minimum specs in the docs for the Nvidia card. Who has experience with this?

  • @jamiecropley • 11 months ago +1

    Anyone got this running on anything lower than 8GB of RAM on DigitalOcean? I tried locally on my own computer with a huge prompt and a 3B model, and it only used around 1GB of RAM maximum.

    • @IanWootten • 10 months ago

      Yeah, it depends on the model itself. Ollama often lists the memory requirements on the model page, e.g. ollama.ai/library/llama2

  • @JordanCassady • 7 months ago

    Which version of Ubuntu did you choose? It seems to be missing from the video.

  • @AdarshSingh-rm6er • 5 months ago

    Hello Ian, it's a great video. I have a query and would be very thankful if you could help me; I have been stuck for 3 days. I am trying to host Ollama on my server. I am very new to Linux and don't understand what I am doing wrong. I am using nginx to proxy Ollama and have configured the nginx file, yet I am getting an access denied error. I can show you the config if you want. Please respond.

  • @peteprive1361 • 1 year ago +1

    I got an error while executing the curl command: Failure writing output to destination

    • @IanWootten • 1 year ago +2

      Weird. Perhaps try running it from a directory you are certain you have write access to.

  • @blasandresayalagarcia3472 • 10 months ago

    What is the cost of web hosting Ollama or these types of LLM models?

    • @IanWootten • 10 months ago

      In this case, it'll be the price of the virtual machine you choose to install it on, so it depends on the provider.

  • @sugihwarascom • 1 year ago

    How come the model runs in 8GB of RAM? The docs themselves say it needs at least 16GB for Llama 2.

    • @IanWootten • 1 year ago +2

      No idea - I was going on experience using ollama rather than the model itself.

  • @GenerativeAI-Guru • 1 year ago

    How do I change the IP and port for Ollama?

  • @petermarin • 1 year ago

    Benefits of running it like this vs Docker?

    • @IanWootten • 1 year ago

      Running anything within a container will always mean the app runs slower.

  • @user-wr4yl7tx3w • 1 year ago

    Do you think it is safe to install it on your own laptop instead of a cloud server?

    • @IanWootten • 1 year ago

      Yes. Ollama has desktop versions too, and it doesn't send anything externally when you query it if you go that route. I have another video where I do this on my Mac.

  • @nickholden585 • 1 year ago

    Right now there is an issue with Ollama where, if you create a model, it spams you with "do not have permission to open Modelfile".
    It's super odd, because even if you give full read and execute rights to every user, or run the command with sudo, it still fails.
    The only viable workaround is to run it in /tmp.

    • @IanWootten • 1 year ago

      This is an issue with the current user not having access to the ollama group. There's a recommended solution posted here (though it sounds like it might not be completely resolved): github.com/jmorganca/ollama/issues/613#issuecomment-1756293841

    • @nickholden585 • 1 year ago

      @IanWootten Saw that.
      Even after running sudo usermod -a -G ollama $(whoami)
      it still won't work.
      The idea to run it in /tmp came from that thread haha.
      Outside of this issue, the rest of the project is pretty cool imo.
      Local LLMs with reinforcement learning, wifi and direct brain integration will be the future.

  • @davidbl1981 • 9 months ago

    Even if the killer is dead on the floor, the killer is still there and would still be a killer 😅 so the correct answer would be 3.

  • @ankitvaghasiya3789 • 5 months ago

    Thank you 🦙