Ollama on Google Colab: A Game-Changer!

  • Published 24 Nov 2024

COMMENTS • 37

  • @UnknownDigitalCreator 5 days ago

    Very nice... a good explanation deserves respect...❤❤❤❤

  • @archilecteur 17 days ago

    Instead of the terminal, can the call to the model running in Google Colab be made from an app?

  • @fabriciocincunegui5332 3 months ago

    Thanks for your patience

  • @marcelocruzeta8822 1 month ago

    Great, thank you for the class. I installed Open WebUI from the GitHub repo, no Docker. Can I configure it to run with the remote Ollama? I found it: you have to change it in the settings. Never mind.

  • @bnermine9780 2 months ago

    Thank you for the great video! Could the model then be used inside local Python code? I am writing a classification script using an LLM, but running it on my CPU takes ages. Can I edit my local Python code so that the classification is done with the model running on Google Colab but the results are stored locally? This would also help me apply the same model to different use cases. Thank you!!

    • @TechXplainator 2 months ago +1

      Thank you so much for your kind words! And yes, you can definitely do that. Here is how that could work:
      1. Keep the Colab notebook running with Ollama and Ngrok set up as shown in the tutorial.
      2. In your local Python script, use the 'requests' library to send classification requests to the Ollama model via the Ngrok URL.
      3. Process the responses and store the results locally.
      I hope that helps. Happy coding ☺️
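      The three steps above could look roughly like the sketch below (the Ngrok URL, model name, and prompt are placeholders to replace with your own; the `/api/generate` endpoint and `stream` flag are part of Ollama's HTTP API):

```python
import json

# Hypothetical values: replace with your own Ngrok URL and a model you pulled
OLLAMA_URL = "https://example-1234.ngrok-free.app"
MODEL = "llama3"

def build_payload(text: str) -> dict:
    """JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": MODEL,
        "prompt": f"Classify the sentiment as positive or negative: {text}",
        "stream": False,  # return one JSON object instead of a token stream
    }

def classify(text: str) -> str:
    """Send one classification request to the remote Ollama instance."""
    import requests  # pip install requests
    resp = requests.post(f"{OLLAMA_URL}/api/generate",
                         json=build_payload(text), timeout=120)
    resp.raise_for_status()
    return resp.json()["response"]

def save_results(results: dict, path: str = "classifications.json") -> None:
    """Store the classification results locally as JSON."""
    with open(path, "w") as f:
        json.dump(results, f, indent=2)

# Usage (requires the Colab notebook with Ollama + Ngrok to be running):
# save_results({t: classify(t) for t in ["I love this!", "Awful service."]})
```

      Keeping the prompt inside `build_payload` makes it easy to swap in a different classification task later without touching the networking code.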

  • @wowfielder101 1 month ago

    HELP PLS: the "export OLLAMA_HOST=" command is not working in cmd, pls help

    • @TechXplainator 1 month ago

      You mean on Windows? This is how it should work (I have not verified it; I'm using a Mac):
      1. Open Command Prompt as Administrator.
      2. Run the command below, replacing `` with your Ngrok URL:
      setx OLLAMA_HOST ""
      3. Close and reopen Command Prompt to apply the changes.

  • @chillscripter 3 months ago

    I did exactly what you said in the video, but I got this error from Ollama: "the parameter is incorrect".
    How can I solve that?

    • @TechXplainator 3 months ago

      Hey there! To help me figure out what's going wrong, could you please tell me:
      1. Are you using a fixed Ngrok link or letting Colab create a new one each time?
      2. Did you open the link from the notebook in a browser? Does it say "Ollama is running"?
      3. Have you correctly linked your local Ollama to Colab-Ollama by setting the OLLAMA_HOST environment variable to your Ngrok URL? (You can usually do this in your terminal with a command like export OLLAMA_HOST=)
      4. When you run a model locally (like typing ollama run llama3.1), does the model download to your computer or to Colab? Can you see the download happening in your Colab notebook?

  • @Salionca 4 months ago

    Jupyter Notebook links in the video description don't work.

    • @TechXplainator 4 months ago +1

      Oooh you're right! I messed up some of my links there. Thank you so much for pointing that out! The links are fixed now 😊

  • @ardasemsettinoglu 4 months ago

    When I write the export OLLAMA_HOST command, it says "export : The term 'export' is not recognized as the name of a cmdlet". Is it because I am using Docker?

    • @TechXplainator 4 months ago

      The error message you're encountering is not related to Docker, but rather to the command shell you're using. The "export" command is specific to Unix-like systems (such as Linux and macOS) and is not recognized in Windows PowerShell or Command Prompt.
      To set an environment variable in Windows, you should use the "set" command instead of "export". Here's how you can set the OLLAMA_HOST variable in PowerShell:
      $env:OLLAMA_HOST = "your_value_here"
      Or in Command Prompt:
      set OLLAMA_HOST=your_value_here
      Hope this helps ☺️
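      If the shell-specific syntax keeps causing trouble, a shell-agnostic alternative (a sketch; the URL is a placeholder) is to set the variable from a small Python launcher, since child processes inherit it the same way on every OS:

```python
import os
import subprocess

# Hypothetical Ngrok URL; replace with the one from your Colab notebook
os.environ["OLLAMA_HOST"] = "https://example-1234.ngrok-free.app"

# Any ollama CLI call started from this process now inherits the variable,
# identically on Windows, macOS, and Linux. Uncomment to try:
# subprocess.run(["ollama", "run", "llama3"], check=True)
```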

  • @jameschan6277 4 months ago

    Please help: if I use a Windows PC desktop, how can I open a terminal like on a Mac?

    • @TechXplainator 4 months ago

      To open terminals on a Windows PC desktop similar to how you would on a Mac, you can use the following methods:
      Option 1: PowerShell:
      1. Press `Windows+X` and select "Windows PowerShell" or "Windows PowerShell (Admin)" from the menu.
      2. Alternatively, press `Windows+R`, type `powershell`, and press Enter to open a PowerShell window.
      Option 2: Command Prompt:
      1. Press `Windows+R`, type `cmd`, and press Enter to open a Command Prompt window.
      2. You can also search for "Command Prompt" in the Start menu, right-click the result, and select "Run as Administrator" if you need elevated privileges.
      Hope this helps ☺️

  • @andreabaffascirocco2934 4 months ago

    I have tried, but it seems that the command ollama run llama3.1 downloads the model to my laptop instead of Colab.

    • @TechXplainator 4 months ago +1

      Try running the command export OLLAMA_HOST= (check that the URL says "Ollama is running" first). Then, in the same terminal window, run "ollama run llama3" again. Hope this helps ☺️

    • @andreabaffascirocco2934 4 months ago

      @TechXplainator Thanks, I'll try.

    • @andreabaffascirocco2934 4 months ago

      @TechXplainator Now it all works fine. The problem was that I had installed Ollama on my Ubuntu machine using Snap. With that installation, the PC tried to download Llama 3.1 locally and not on Colab.

    • @TechXplainator 3 months ago +1

      I'm glad it works now ☺️

  • @fabriciocincunegui5332 3 months ago

    How do I export OLLAMA_HOST in cmd? I'm on Windows 11.

    • @TechXplainator 3 months ago

      I can't verify this on a Windows PC since I don't have one, but based on my research, here's how to export the `OLLAMA_HOST` variable on Windows 11 using Command Prompt:
      1. Open Command Prompt as Administrator.
      2. Run the command below, replacing `` with your Ngrok URL:
      setx OLLAMA_HOST ""
      3. Close and reopen Command Prompt to apply the changes.

  • @HunterJuniorX 4 months ago

    Is there a way to use models from Hugging Face?

    • @TechXplainator 4 months ago +1

      Yes, there is, if they are available as quantized models (GGUF files). I made a video on how to import GGUF files from Hugging Face and use them in Ollama; feel free to check it out: ua-cam.com/video/vs1u9z2U4ZA/v-deo.html

  • @Salionca 4 months ago

    The video is great, but I'm not going to spend money on that. I'd rather wait and buy a new laptop.

    • @TechXplainator 4 months ago +1

      Thanks! And that's completely understandable :-)

  • @MarkSmith-ho5ij 2 months ago

    Coders using Apple, lol. Please use Linux and stop this...

  • @rajarshisen5905 3 months ago

    Please help. I can run Ollama in Colab, but when running it from Docker as Open WebUI, I get the following error while trying to chat with llama3 in the web browser: Ollama: 404, message='Not Found', url=URL('/api/chat')

    • @TechXplainator 3 months ago

      Does Ollama work from the terminal? I mean, when running export OLLAMA_HOST= and ollama run llama3, do you get to interact with llama3 in your terminal? And do you see any action in your Colab (you should see the notebook downloading a model and responding to chat)?

    • @merocky5 3 months ago

      Yes, I do. Ollama executes on Colab when I call it from my local computer's terminal. Only when using Open WebUI, following the last command of your Python notebook, do I get the error described above. The front-end web app starts, but when trying to chat with the Ollama installed in Colab, I get the error mentioned in the message above. I did some internet searching, and it appears that the "api" path segment may or may not be included in the latest version of Ollama? Please help me resolve this. Thanks a lot

    • @TechXplainator 3 months ago

      To make sure we're on the same page, I just want to summarize your setup:
      1. You're using a static Ngrok URL.
      2. You've successfully connected your local Ollama instance with the one hosted on Colab by running an export command.
      3. You've installed OpenWebUI using Docker and replaced the example Ngrok URL with your own static Ngrok URL, as indicated by this command:
      `docker run -d -p 4000:8080 -e OLLAMA_BASE_URL=example.com -v open-webui:/app/backend/data --name test --restart always ghcr.io/open-webui/open-webui:main`
      4. The Docker container was created, but trying to access the Ollama WebUI at `localhost:4000/` results in an error.
      Please confirm that this summary is accurate so I can help you troubleshoot the issue ☺️

    • @rajarshisen5905 3 months ago

      @TechXplainator Yes, the summary is spot on. I followed all of the above bullet points and got the error on the last one, while trying to post a chat to Ollama using the web UI.

    • @TechXplainator 3 months ago

      I was not able to replicate the error, but based on my research, here are a few things you could try:
      1. Verify OpenWebUI settings:
      Access the OpenWebUI settings page (click on your avatar at the bottom left) and verify that the Ollama Server URL is correctly set to your Ngrok URL: go to "Connections"; under "Ollama Base URL" you should see your static Ngrok URL.
      2. Network configuration:
      Ensure that the Docker container can communicate with the Ollama server. Use the --network=host flag to allow the Docker container to use the host network:
      docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL= --name open-webui --restart always ghcr.io/open-webui/open-webui:main
      I hope this helps. If not, please check out the troubleshooting page from Open WebUI: docs.openwebui.com/troubleshooting/
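      As a quick sanity check before digging into OpenWebUI itself, you can ask the remote Ollama directly which models it has pulled via its model-listing endpoint, GET /api/tags (a sketch; the Ngrok URL below is a placeholder):

```python
import json
import urllib.request

# Hypothetical Ngrok URL; replace with your static Ngrok URL
OLLAMA_URL = "https://example-1234.ngrok-free.app"

def parse_models(tags_response: dict) -> list:
    """Extract model names from the JSON returned by GET /api/tags."""
    return [m["name"] for m in tags_response.get("models", [])]

def list_remote_models(base_url: str) -> list:
    """Fetch the list of pulled models from the remote Ollama instance."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=30) as resp:
        return parse_models(json.load(resp))

# If this prints your model (e.g. "llama3:latest"), Ollama itself is fine and
# the 404 is on the OpenWebUI side; if it fails, the URL is the problem.
# print(list_remote_models(OLLAMA_URL))
```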