Ollama on Windows! Now everyone can use this

  • Published Sep 9, 2024

COMMENTS • 36

  • @testales
    @testales 6 months ago +2

    This could probably have saved me two days, not on the Linux part, which only took a few minutes, but with this sh*tty Windows. I had to update it to 22H2 first, because otherwise my GPU wasn't available under WSL2. But it couldn't update because of some generic error with some number that can have like 100 causes, and after wasting a lot of time with fix-your-Windows guides, I found an assistant tool that finally updated my Windows. Then I had another day of fun with Ollama WebUI in terms of port mapping from its Docker container to the Windows host and then to the LAN. I got it working at first, but another Windows update broke it by installing some new IP Helper service on port 8080, which is the exact port WebUI is exposed on. With ChatGPT's guidance I finally disabled that and installed a reverse proxy instead. What a pain! (A port check like the one sketched after this thread would have caught the conflict early.) I'll almost certainly build a dedicated Linux AI machine this year. Btw, you could have mentioned WebUI for Ollama perhaps, unless I just missed it when you did. ;-)

    • @PromptEngineer48
      @PromptEngineer48  6 months ago

      Thanks for going through all the pain and sharing your story. :) We all will benefit from your experience.

    • @snuwan
      @snuwan 6 months ago +1

      I also plan to build a dedicated Linux AI machine with a good GPU. I think most models will run faster on Linux.

    • @PromptEngineer48
      @PromptEngineer48  6 months ago +1

      Right. For me, I host Ollama on a cloud GPU to keep my projects running.

    • @testales
      @testales 6 months ago +2

      @@snuwan Yeah, I'm just in the process of deciding whether I bite the bullet and go all-in on a car-priced workstation GPU for like 4x its actual worth, or try old cheap server GPUs. It'd be 2x 24GB P40s or M40s, or even 4x. I must be sure, though, that this can run a 70b model in a decent manner.

    • @PromptEngineer48
      @PromptEngineer48  6 months ago

      I like the cloud GPU option, as things evolve and any system I buy gets old very quickly
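
  A minimal sketch of the port check mentioned in the thread above (plain Python standard library; 8080 is Open WebUI's usual container port, and the fallback ports here are arbitrary choices):

      import socket

      def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
          """Return True if nothing is listening on host:port."""
          with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
              return s.connect_ex((host, port)) != 0

      # Probe the usual WebUI port first, then fall back, so a Windows
      # service squatting on 8080 doesn't silently shadow the container.
      for candidate in (8080, 8081, 3000):
          if port_is_free(candidate):
              print(f"Port {candidate} is free; map the container with -p {candidate}:8080")
              break
      else:
          print("All candidate ports are taken; check listeners with 'netstat -ano'.")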

  • @trapez_yt
    @trapez_yt 2 months ago +1

    Please make a tutorial on Web UI on Windows

  • @GlosbeNatural
    @GlosbeNatural 6 months ago +2

    Why am I not able to use Ollama in my VS Code custom environment?

    • @PromptEngineer48
      @PromptEngineer48  6 months ago

      !pip install ollama. I think you should not have missed this.

    • @GlosbeNatural
      @GlosbeNatural 6 months ago +1

      @@PromptEngineer48 I did pip install ollama on my system, but I'm trying to automate Ollama to take voice input and pass that input to the terminal using os.system, and after starting Ollama it's not taking the os.system input. (One way around this is sketched after this thread.)

    • @PromptEngineer48
      @PromptEngineer48  6 months ago

      @@GlosbeNatural are you working on your local system or in a Colab notebook, etc.?

    • @GlosbeNatural
      @GlosbeNatural 6 months ago

      @@PromptEngineer48 local system
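
  One way to feed input to a local model without shelling out through os.system is the ollama Python package's chat call. A sketch, assuming pip install ollama, a running Ollama server, and an already-pulled model; the model name and the input() stand-in for speech-to-text are assumptions, not from the video:

      import ollama  # pip install ollama; talks to the local server on port 11434

      def ask(prompt: str) -> str:
          """Send one prompt to a local model and return the reply text."""
          response = ollama.chat(
              model="llama2",  # example model; use whichever you have pulled
              messages=[{"role": "user", "content": prompt}],
          )
          return response["message"]["content"]

      if __name__ == "__main__":
          # Stand-in for a speech-to-text step; replace with your transcriber.
          transcribed = input("You said: ")
          print(ask(transcribed))

  Calling the API directly avoids the quoting and focus problems of driving an interactive terminal session through os.system.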

  • @markuskoarmani1364
    @markuskoarmani1364 6 months ago +3

    Can I chat with my docs with the Windows version?

  • @glitch_city_gamer2846
    @glitch_city_gamer2846 6 months ago +1

    Thank you for the video! This is so exciting!

  • @jonathanfranklin461
    @jonathanfranklin461 6 months ago +1

    Thank you so much for all your videos. What is your thinking on using Ollama vs. LocalAI?

    • @PromptEngineer48
      @PromptEngineer48  6 months ago

      I love Ollama. Its integration is easier, especially now with embedding models, integration with Colab, Windows, Linux, Mac (local and on the cloud), doing multiple projects, etc. I love Ollama again

  • @snuwan
    @snuwan 6 months ago +1

    It is very nice that they finally added support for Windows. But this preview they have is really slow; it is slower than when we run it using the WSL subsystem in Windows. I have used it successfully within the WSL subsystem, and a lot of users have been seeing the same thing. Hopefully they will address these performance issues. But I am glad it is finally supported on Windows.

    • @PromptEngineer48
      @PromptEngineer48  6 months ago +2

      True. WSL is faster as of now. It's just a preview; let's wait a week and see the development! The wait won't be that long.

  • @darthcryod1562
    @darthcryod1562 6 months ago +1

    Anybody having Ollama run painfully slow on Windows 11? If run in Linux/Mac it's so fast. Any suggestions on how to fix the slowness?

    • @PromptEngineer48
      @PromptEngineer48  6 months ago

      I agree with that. There is nothing we can do apart from ripping into the Ollama code itself, or waiting for the Ollama developers to make the changes.

  • @liammcmullan1133
    @liammcmullan1133 3 months ago +1

    Can you make a tutorial for Maestro?

    • @PromptEngineer48
      @PromptEngineer48  3 months ago

      By Maestro, do you mean the mobile UI testing framework?

  • @rgm4646
    @rgm4646 6 months ago +1

    Very cool!

  • @spirit_wolf123
    @spirit_wolf123 6 months ago +1

    So what's new about this? We've been able to run Llama in LM Studio and run Open Interpreter and AutoGen.

    • @PromptEngineer48
      @PromptEngineer48  6 months ago

      LM Studio is nothing compared to Ollama

    • @spirit_wolf123
      @spirit_wolf123 6 months ago +1

      @@PromptEngineer48 Thanks, your comment told me loads of information... What makes it better? LM Studio was just one example out of many pieces of software, including ones released in a premade conda environment. Personally I don't have a problem running any LLM, as I have my own home-built GPU server rack that I mainly use for cracking hashes... I'm curious because it would be nice to carry one around in a laptop and not worry about it compromising the CPU, GPU, and RAM, which it does do to the average consumer laptop! If you play with AutoGen or Open Interpreter, I suggest you do it on a device you don't use on a regular basis, as they cause broken links and compromise the OS security.

  • @JohbB
    @JohbB 5 months ago +1

    Andy Kaufman?

  • @HistoryIsAbsurd
    @HistoryIsAbsurd 6 months ago +1

    Hey, thanks for the vid!
    Hey, maybe I'm missing something (and maybe it's due to it being just the preview too), but do you know how I can watch the real-time logs? When using this via Docker I was able to see the backend of it, and it really helped with troubleshooting. I'm sure there's a way, I'm just blind haha

    • @PromptEngineer48
      @PromptEngineer48  6 months ago +1

      Not directly, but third-party services like Splunk could be used. (A sketch for tailing the local log file follows this thread.)

    • @HistoryIsAbsurd
      @HistoryIsAbsurd 6 months ago

      Oh good idea, thanks a lot for that! @@PromptEngineer48
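
  For watching logs without third-party tooling: Ollama's Windows build writes a server.log under %LOCALAPPDATA%\Ollama (per its troubleshooting docs). A minimal tail-follow sketch in Python, assuming the default install location; the poll interval is an arbitrary choice:

      import os
      import time

      # Default log location for Ollama on Windows (assumes a standard install).
      log_path = os.path.join(os.environ["LOCALAPPDATA"], "Ollama", "server.log")

      with open(log_path, "r", encoding="utf-8", errors="replace") as f:
          f.seek(0, os.SEEK_END)  # start at the end of the file, like tail -f
          while True:
              line = f.readline()
              if line:
                  print(line, end="")
              else:
                  time.sleep(0.5)  # nothing new yet; poll again shortly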