Obsidian with Ollama
- Published Jun 11, 2024
- Instead of using ChatGPT for running tasks, we can protect our precious notes and ideas with Ollama, an open-source project that lets you run powerful language models locally on your machine for free.
I cover how to install Ollama, set it up with Obsidian's Copilot plugin, and use it for AI-powered tasks like summarization, explanation, translation, and template generation - all while keeping your data private and avoiding subscription fees.
P.S.: When running Ollama as a local server, make sure to use:
"OLLAMA_ORIGINS=app://obsidian.md* ollama serve"
Timestamps:
00:00 Intro
0:36 What is local LLM?
1:32 What is Ollama?
2:04 Install Ollama
2:26 Ollama commands!
3:09 Open up the command palette
4:30 Obsidian setup for using Ollama
5:06 Note about using the right models based on resource
5:34 Use case!
6:04 Outro
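For reference, the basic CLI workflow covered in the video can be sketched roughly as:

```shell
ollama pull llama3      # download a model to your machine
ollama list             # show locally available models
ollama run llama3       # chat with a model interactively
ollama serve            # run the HTTP API server (default port 11434)
```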
- - - - - - - - - - - - - - - - - - -
Connect with me
❤️ Newsletter: joonhyeokahn.substack.com/
❤️ LinkedIn: / joonhyeok-ahn
❤️ Instagram: / writer_dev123
❤️ Threads: www.threads.net/@writer_dev123-
- - - - - - - - - - - - - - - - - - -
Excellent video!
Going to try this out.
Thanks and let me know your experience!
What would be awesome is for this to be able to use an Ollama instance running in a container on another machine - that way I can use my container host for Ollama with all its grunt, and keep the load off my smaller laptop.
That is an interesting idea..! Thanks for the feedback, I will look into whether it's possible
If you get an error about the port already being in use when you try to run the serve command, then run `osascript -e 'tell app "Ollama" to quit'`
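A minimal sequence for that fix: on macOS, "address already in use" usually means the Ollama menu-bar app is already holding the port, so quit it before starting the server manually.

```shell
# See which process is bound to Ollama's default port (optional)
lsof -i :11434

# Quit the macOS menu-bar app, then start the server by hand
osascript -e 'tell app "Ollama" to quit'
OLLAMA_ORIGINS="app://obsidian.md*" ollama serve
```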
What are the hardware requirements to run models locally?
It can run CPU-only; it can even run on a Raspberry Pi. It's just going to be slow if you don't have a beefy GPU. Pick a smaller model and it should be alright. But unless you care about being able to customize the model or having extra privacy with your chats, it'd probably just be easier to use an external LLM provider
I'm running the Llama3 8B model with Ollama on a basic M1 Mac with 16 GB of RAM - it's snappy. There is no strict hardware cutoff: if you want to run larger models with less RAM, Ollama can download quantized models which enable this (with a quality tradeoff). If you're on a PC with a GPU, you need 16 GB of VRAM to run Llama3 8B natively. Otherwise you'll need to use a quantized model.
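As a sketch of what that looks like (the exact quantization tag is an assumption; check the model's tag list on the Ollama library page):

```shell
# Pull a 4-bit quantized build of Llama 3 8B; quantized weights fit in
# far less RAM/VRAM than the native model, at some quality cost.
ollama pull llama3:8b-instruct-q4_0

# Run it like any other model
ollama run llama3:8b-instruct-q4_0
```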
Can this work with LM Studio?
Good question! I haven't played with LM Studio. I will and let you know!
Copilot needs integration with Groq AI, and text-to-speech integration inside the chat room.
That sounds like an interesting idea!
@@the-writer-dev I will cover the costs, allowing us to remove WebUI and solely utilize Ollama or LM Studio for the backend. With LM Studio now featuring CLI command capabilities, it's even more beneficial as it reduces the layers above Copilot. I conducted a test with LM Studio's new feature today, and the Copilot responses were noticeably faster on my low-end laptop. Additionally, we can incorporate Groq's fast responses and Edge neural voices, which are complimentary.
Awesome video man! Just sent you connect on LinkedIn
Thanks for the support and I will check it out!
Good video. When setting up the server in step 4, I get this message in the terminal:

>>> OLLAMA_ORIGINS=app://obsidian.md* ollama serve
The "OLLAMA_ORIGINS" variable in the context provided seems to be a custom configuration, and serving files with `ollama` would again follow standard Node.js practices:
1. To set an environment variable similar to "OLLAMA_ORIGINS", you could do so within your project's JavaScript file or use shell commands (again this is for conceptual purposes):
I can connect with you through other channels to work on this step.
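For anyone hitting the issue quoted above: that output is a model answering a prompt rather than the server starting, which suggests the command was typed into an `ollama run` chat session instead of a plain shell prompt. OLLAMA_ORIGINS is just an environment variable the Ollama server reads, not Node.js configuration. On macOS it can also be set persistently so the menu-bar app picks it up:

```shell
# Run this in a regular terminal (not inside an ollama chat session):
OLLAMA_ORIGINS="app://obsidian.md*" ollama serve

# Or set it persistently on macOS, then restart the Ollama app:
launchctl setenv OLLAMA_ORIGINS "app://obsidian.md*"
```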