I am accessing the Ollama server from a Mac, and my guess is that the netsh command applies to the Windows workstation you're using to access the server. Is there a similar command that would need to be run, or if I do this on my Linux server via Firefox, will I still have the same issue? I cannot seem to get llama3:latest installed for Open WebUI. Any insight would be greatly appreciated, as this was the most concise video I have seen on the topic.
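For what it's worth, the netsh portproxy step is Windows/WSL-specific; on a Mac or Linux server there is no netsh equivalent, you generally just need Ollama to listen on all interfaces instead of only 127.0.0.1. A sketch, assuming a default Ollama install (the systemd override applies to the Linux installer's service; the env var name is from Ollama's docs):

```shell
# One-off (current shell only): bind Ollama to all interfaces so
# other machines, including Open WebUI in Docker, can reach it.
OLLAMA_HOST=0.0.0.0 ollama serve

# Persistent on a systemd-based Linux server:
sudo systemctl edit ollama      # opens an override file; add:
# [Service]
# Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama

# Verify it responds (from another machine, use the server's IP):
curl http://localhost:11434/api/tags
```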
Nobody has explained how to install Ollama and run it properly; it should be laid out in steps. Is Docker required before installing Ollama? I tried to install Ollama on its own and it didn't install completely!! I don't know why.
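To answer the Docker question: Docker is not required for Ollama itself; it installs natively, and Docker is typically used only for the Open WebUI front end. A sketch of the usual two-step setup on Linux (the docker run flags follow Open WebUI's documented quickstart; adjust ports to taste):

```shell
# 1. Install Ollama natively -- no Docker involved:
curl -fsSL https://ollama.com/install.sh | sh
ollama run llama3            # downloads the model on first run

# 2. (Optional) run Open WebUI in Docker, pointed at the host's Ollama:
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```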
1. You should mention what your OS is.
2. Read the official documentation.
3. If you run on Windows, just download the exe/msi file and install with one click (and click yes...).
How many CUDA cores do you need to run this without getting frustrated? I know, the more the better. But what really makes sense? Just trying to figure out what graphics card I should get without busting my bank account…
so this "guide" is missing some key things like how to change the ip for wsl then how to run ollama like a service. even in his written guide is not telling on how to do this.
Thanks Brandon, nice review of what’s out there for local LLMs
It's a great idea, thanks Brandon. I will test it on my homelab.
Thank you for the helpful tutorial.
How do I run llama3 on macOS? I want to expose localhost publicly to use nextchatgpt. Can you share the command for this solution?
How many concurrent users can this support for business cases?
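Concurrency is mostly bounded by VRAM, but recent Ollama versions expose parallelism knobs. A sketch, assuming the environment-variable names documented in Ollama's FAQ (verify against your installed version):

```shell
# Requests each loaded model serves in parallel:
export OLLAMA_NUM_PARALLEL=4
# How many distinct models may stay loaded at once:
export OLLAMA_MAX_LOADED_MODELS=2
ollama serve
```

Rough rule of thumb: each parallel slot multiplies the context-window memory, so raise OLLAMA_NUM_PARALLEL only as far as your GPU memory allows.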
this is awesome and can’t wait to try it. is there a mobile app for open webui?
I suspect technically it can easily run via any browser
so this "guide" is missing some key things like how to change the ip for wsl then how to run ollama like a service. even in his written guide is not telling on how to do this.
GPT4All is another good locally running chat interface... it can run on both the CPU and the GPU using Vulkan...
Is LMStudio better than Ollama?
No, but it's a good option.
@camsand6109 Then is Ollama better?
😮I'm gonna try this now
This would be better content if done by an AI
At least we got the usual extremely boring stock videos as B-roll... *sigh*...