Matt, thanks for the course. So far so good.
Excited for the next videos! I'd like to start developing my own tools and small apps using LLMs. I'm technical and have made small Python scripts, but I'm not an experienced software engineer.
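For anyone in the same position, here is a minimal sketch of what a first small tool can look like: a few lines of Python that ask a locally running Ollama server a question over its HTTP API. It assumes the default endpoint (localhost:11434) and that a model named llama3 has already been pulled; swap in whatever model you actually have.

```python
# Minimal sketch: ask a local Ollama server a question from Python.
# Assumes Ollama is running on its default port (11434) and that the
# "llama3" model has already been pulled; use any model you have locally.
import json
import urllib.request

def generate(prompt, model="llama3"):
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,   # return one complete response instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Explain what Ollama does in one sentence."))
```

From there, a small tool is mostly a matter of wrapping that one function in whatever script or app you want to build.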
Great Series. Thanks.
Thank you Matt, I've been struggling to run Ollama for a while now and you just showed me what I was doing wrong 😅
Thanks, Matt. I'm upvoting all your videos. Is there a way to know if Ollama is using a GPU in the background? Perhaps a command-line command? More generally, a session that dives deeper into GPU usage would be great.
oh sorry, you already answered a similar question in the first comment
lol, I was thinking of moving the .ollama folder to another drive and using a symbolic link, but now I will wait for those environment instructions. :)
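In the meantime, I believe the usual approach is the OLLAMA_MODELS environment variable, which tells Ollama where to store models. A minimal sketch, assuming you launch the server yourself and that D:\ollama-models is the (hypothetical) target folder; on Windows you would more commonly set the variable once in the system environment settings instead.

```python
# Minimal sketch: start "ollama serve" with the model directory moved to
# another drive via the OLLAMA_MODELS environment variable.
# Assumes the ollama binary is on PATH and D:\ollama-models already exists.
import os
import subprocess

env = os.environ.copy()
env["OLLAMA_MODELS"] = r"D:\ollama-models"   # hypothetical target path

# The server keeps running until you stop it (Ctrl+C).
subprocess.run(["ollama", "serve"], env=env)
```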
I appreciate your videos 🦾. I'm running Ollama in Docker Desktop on Windows 10 using WSL 2 Ubuntu integration. I have a single 4090 and it smokes with 8b fp16 models. Running Ollama directly on Windows was terribly slow even with 8b q4 models; it took forever to load models into VRAM 🫵🤓👍
You have multiple levels of abstraction there. It's always going to be best running Ollama directly on Windows rather than in Docker in an Ubuntu container on the WSL VM. If it was slower, there must have been an issue with the drivers. I would solve that first.
@technovangelist I reinstalled Ollama on Windows and it's just as fast, if not faster. Must have had an old driver, idk... Thanks for the info, Matt!
Nice. Thanks for letting me know it's all good.
Is it possible to have a video on Paperspace and Brev?
That’s a great idea. I hadn't considered doing it before, but I probably should.
I am building an Ollama environment for my small business. I am way more familiar with Windows than Linux. Should I install Ollama on Windows or Linux? If Linux, I plan to use WSL. I come from a development background, so I'm sure I can get around in Linux. Any recommendations? Thanks for sharing your knowledge with us!
If you're more familiar with Windows, use Windows.
Thanks so much! I want to add the AI to a PHP system.
I'm sorry, I don't understand.
How do I know whether Ollama is seeing/using my GPU?
Run 'ollama ps' in the terminal. There is a "PROCESSOR" entry that shows CPU or GPU.
After asking a question, you can run 'ollama ps' to see if Ollama used the GPU and how much of the model could be offloaded to the GPU.
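If you would rather check programmatically, the same information should be available over the HTTP API. A small sketch, assuming the default endpoint and that the /api/ps response reports a size and size_vram for each loaded model (size_vram close to size means the model is fully offloaded to the GPU):

```python
# Minimal sketch: ask a local Ollama server which loaded models sit in VRAM.
# Assumes the default endpoint (localhost:11434) and that /api/ps returns
# "size" and "size_vram" for each loaded model.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/ps") as resp:
    data = json.loads(resp.read())

for m in data.get("models", []):
    size = m.get("size", 0)
    vram = m.get("size_vram", 0)
    pct = 100 * vram / size if size else 0
    print(f"{m['name']}: {pct:.0f}% of the model is offloaded to the GPU")
```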
Hi!
Do you know how to “package” Ollama and an LLM inside one app (for example an Electron/React Native app)?
So the end user doesn't need to install Ollama or the LLM and doesn't need to use the terminal. They just download the Electron app and start using the LLM.
I don't, but Bruce on the Ollama team did it with chatd. Well, he added Ollama, not a model. You can find the source on GitHub.
Here is a link to a message on this in the ollama discord. discord.com/channels/1128867683291627614/1261421971615252480
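As I understand it, the general pattern chatd follows is to ship the ollama binary inside the app, spawn it at startup, and then talk to it over the local HTTP API; an Electron app would do the same with child_process. A rough Python sketch of that idea, where the binary path and timings are purely illustrative:

```python
# Rough sketch of the "bundle and spawn" idea: ship the ollama binary with
# your app, start it as a child process, wait for the API to come up, then
# talk to it over HTTP. Paths and timeouts here are illustrative only.
import subprocess
import time
import urllib.request

OLLAMA_BIN = "./bundled/ollama"        # hypothetical path inside your app bundle
API = "http://localhost:11434"

server = subprocess.Popen([OLLAMA_BIN, "serve"])

# Poll until the server answers (or give up after roughly 15 seconds).
for _ in range(30):
    try:
        urllib.request.urlopen(API, timeout=1)
        break
    except OSError:
        time.sleep(0.5)

# ... from here the app can call /api/generate as usual ...
print("Ollama is up:", server.poll() is None)
```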
@technovangelist Thank you so much!! I will take a look at it!
I really appreciate it! 👍🏻
Is it possible to get a graphical UI?
Sure, there are a lot of choices on the Ollama repo.