Obsidian with Ollama

  • Published Jun 11, 2024
  • Instead of using ChatGPT for running tasks, we can protect our precious notes and ideas with Ollama, an open-source project that lets you run powerful language models locally on your machine for free.
    I cover how to install Ollama, set it up with Obsidian's Copilot plugin, and use it for AI-powered tasks like summarization, explanation, translation, and template generation - all while keeping your data private and avoiding subscription fees.
    P.S.:
    When running Ollama as a local server for Obsidian, make sure to start it with
    "OLLAMA_ORIGINS=app://obsidian.md* ollama serve"
    (a fuller command sketch follows after the links below)
    Timestamps:
    00:00 Intro
    0:36 What is a local LLM?
    1:32 What is Ollama?
    2:04 Install Ollama
    2:26 Ollama commands!
    3:09 Open up the command palette
    4:30 Obsidian setup for using Ollama
    5:06 Note about using the right model for your resources
    5:34 Use case!
    6:04 Outro
    - - - - - - - - - - - - - - - - - - -
    Connect with me
    ❤️ Newsletter: joonhyeokahn.substack.com/
    ❤️ LinkedIn: / joonhyeok-ahn
    ❤️ Instagram: / writer_dev123
    ❤️ Threads: www.threads.net/@writer_dev123-
    - - - - - - - - - - - - - - - - -
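    For reference, a minimal command sketch of the setup the video walks through (the model name and the plugin menu labels here are illustrative, not exact steps from the video):
        # 1. Install Ollama from ollama.com, then pull a model to run locally
        ollama pull llama3
        # 2. Start the local server with Obsidian (app://obsidian.md) as an allowed origin
        OLLAMA_ORIGINS="app://obsidian.md*" ollama serve
        # 3. In Obsidian, install the Copilot community plugin and pick Ollama as the
        #    model provider in its settings (exact menu names may differ)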
  • Science & Technology

COMMENTS • 16

  • @radonryder • 23 days ago

    Excellent video!
    Going to try this out.

  • @HiltonT69 • 1 month ago +1

    What would be awesome is if this could use an Ollama instance running in a container on another machine - that way I can use my container host, with all its grunt, for Ollama and keep the load off my smaller laptop. (See the sketch below this thread.)

    • @the-writer-dev • 1 month ago

      That is an interesting idea! Thanks for the feedback - I will look into whether it's possible.
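      A rough sketch of the remote setup suggested in this thread, assuming the Copilot plugin lets you point at a custom Ollama URL (not verified here). Ollama's OLLAMA_HOST variable controls the interface the server binds to, and 11434 is its default port:
          # on the container host (the machine with all the grunt):
          OLLAMA_HOST=0.0.0.0 OLLAMA_ORIGINS="app://obsidian.md*" ollama serve
          # on the laptop: point the plugin's Ollama URL at http://<container-host-ip>:11434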

  • @nevilleattkins586 • 3 days ago

    If you get an error about the port already being in use when you try to run the serve command, run: osascript -e 'tell app "Ollama" to quit'
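    To see what is already holding Ollama's default port (11434) before quitting it, a quick check on macOS/Linux:
        lsof -i :11434    # shows the process currently bound to the Ollama port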

  • @siliconhawk9293 • 1 month ago

    What are the hardware requirements to run models locally?

    • @TheGoodMorty • 1 month ago

      It can run CPU-only (it can even run on a Raspberry Pi); it's just going to be slow if you don't have a beefy GPU. Pick a smaller model and it should be alright. But unless you care about being able to customize the model in a few ways or having extra privacy with your chats, it'd probably just be easier to use an external LLM provider.

    • @coconut_bliss5539 • 1 month ago +1

      I'm running the Llama 3 8B model with Ollama on a basic M1 Mac with 16 GB of RAM - it's snappy. There is no strict hardware cutoff: if you want to run larger models with less RAM, Ollama can download quantized models that enable this (for a performance tradeoff). If you're on a PC with a GPU, you need 16 GB of VRAM to run Llama 3 8B natively; otherwise you'll need to use a quantized model.
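      To make the quantization tradeoff above concrete: Ollama's model library exposes different quantizations of the same model as tags. The exact tag names below are illustrative, so check the library listing before pulling:
          ollama pull llama3                    # default tag is a 4-bit quantized build (a few GB)
          ollama pull llama3:8b-instruct-q8_0   # higher-precision variant, needs roughly twice the memory
          ollama run llama3 "Summarize my meeting notes in three bullet points"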

  • @elgodric • 1 month ago

    Can this work with LM Studio?

    • @the-writer-dev • 1 month ago +2

      Good question - I haven't played with LM Studio. I will and let you know!

  • @Alex29196 • 1 month ago

    Copilot needs integration with Groq AI, and text-to-speech integration inside the chat room.

    • @the-writer-dev • 1 month ago +1

      That sounds like an interesting idea!

    • @Alex29196 • 1 month ago

      @the-writer-dev I will cover the costs, allowing us to remove WebUI and solely utilize Ollama or LM Studio for the backend. With LM Studio now featuring CLI command capabilities, it's even more beneficial, as it reduces the layers above Copilot. I ran a test with LM Studio's new feature today, and the Copilot responses were noticeably faster on my low-end laptop. Additionally, we can incorporate Groq's fast responses and Edge neural voices, which are complimentary.

  • @IFTHENGEO • 1 month ago +1

    Awesome video, man! Just sent you a connection request on LinkedIn.

    • @the-writer-dev • 1 month ago

      Thanks for the support and I will check it out!

  • @VasanthKumar-rh5xr • 26 days ago +1

    Good video. I get this message in the terminal while setting up the server in step 4:
    >>> OLLAMA_ORIGINS=app://obsidian.md* ollama serve
    The "OLLAMA_ORIGINS" variable in the context provided seems to be a custom
    configuration, and serving files with `ollama` would again follow standard
    Node.js practices:
    1. To set an environment variable similar to "OLLAMA_ORIGINS", you could
    do so within your project's JavaScript file or use shell commands (again
    this is for conceptual purposes):
    I can connect with you through other channels to work on this step.
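    A likely cause, judging only from the ">>>" prompt in the paste above (an assumption, not confirmed in the thread): the command was typed inside an interactive "ollama run" session, so the model answered it as a question instead of the shell executing it. The variable has to be set in a regular terminal shell:
        # leave any open "ollama run" prompt first (type /bye or press Ctrl+D), then:
        OLLAMA_ORIGINS="app://obsidian.md*" ollama serve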