The pride of AI and Obsidian nerds 🚀🚀
Great job man
Thank you!
Wow, you did it! I had a feeling you might be working on this. I've been waiting for something like this since I first saw ChatGPT last year. But I don't want to send my data to OpenAI, so I'll be using the Ollama embeddings with as large a model as I can fit into my RAM (I might even buy more RAM so I can use a larger model). This is going to be a game changer because I have Obsidian notes from every day for the past four years; can't wait to see what it comes up with! Thanks for the hard and very useful work.
Thanks for making this awesome plugin 👍 It totally changes the rules of how you can interact with your vault.
Your work is truly transformative. This plugin elevates the entire app experience and second-brain capabilities to new heights, and I can't express enough how much I appreciate your effort and creativity. Fantastic job!
This is tremendous. I’m looking forward to trying this out.
You already help me a lot. I'm dyslexic, so reading new material is often a hassle, and so is rereading forgotten notes. Please support embedded PDFs as context; when this is implemented, you have no idea how much it will accelerate my learning curve as a dyslexic person.
Embedded PDF support will come soon! I've been putting it off for too long lol. Really glad I can help; I have a hard time reading long texts myself. My next app will be super relevant to people like us, please stay tuned!
Have you tried using the OpenDyslexic font in Obsidian?
@JasonJohnWells nice info, I'll try that
@loganhallucinates I'll happily watch your next videos too.
This is a really cool project. I personally use text-generation-webui and EXL2 models. Support may be as simple as exposing the OpenAI URI, since textgen has an OpenAI-compatible API. I'll be installing this to check it out soon. Considering what you mentioned near the end about the advanced configuration, maybe you already have the URI exposed for me to point at my ooba. Great project!
Interesting! Plugin idea: an Obsidian firewall that monitors network traffic per plugin. I have big concerns about plugging my second brain into the bots.
Well done, thank you for this! It's awesome!
Great work! I'm excited! What are the main differences between responses from Copilot over the whole vault and from training a custom GPT in the ChatGPT subscription app? Thanks!!
I don't know, friend, but I was testing your plugin and it doesn't integrate very well with the vault. It does have a nice design, but in general terms the quality of Smart Connections is much higher than Copilot's.
Can you make your plugin integrate with Canvas? That would be phenomenal for the Obsidian workflow. If you can, that would be awesome.
Multiple people have requested this now; it's indeed a promising direction! Will look into it!
Please add the "ollama pull nomic-embed-text" step to the docs! It took me 2 hours to figure out what was going on.
Wow, this is really amazing work man, great job!
I've got everything set up and working for Ollama, LM Studio, and Claude 3. I can chat and use Vault QA, and my results are mostly good/great (I still need to add more notes from my old system and clean things up). However, when I watch the console output, I'm seeing a LOT of 404s trying to hit '//api/embeddings'. The odd thing is that some requests go through to '/api/embeddings', so I'm not sure why only some use the wrong base URL. I double-checked my setup and I don't have a trailing '/' on my base URL, so I'm not sure what the issue is. Any suggestions?
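For what it's worth, a stray '//' in the path usually comes from naively concatenating a base URL ending in '/' with a path starting in '/'. A minimal sketch of a defensive join (a hypothetical helper, not the plugin's actual code):

```python
def join_url(base: str, path: str) -> str:
    """Join a base URL and an endpoint path without producing
    double slashes, regardless of how the user typed the base URL.
    Hypothetical illustration; the plugin's real request code may differ."""
    return base.rstrip("/") + "/" + path.lstrip("/")

# Both user configurations resolve to the same endpoint:
print(join_url("http://localhost:11434", "/api/embeddings"))
# → http://localhost:11434/api/embeddings
print(join_url("http://localhost:11434/", "api/embeddings"))
# → http://localhost:11434/api/embeddings
```

If only some requests misbehave, it may be that one code path normalizes the URL and another does not.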
So much potential in what I can do with this plugin now. Thanks a lot, you have made this plugin a lifesaver.
This is awesome!!! Please do consider making this plugin for SiYuan.
Glad I stumbled on this! Btw what’s your theme?
I think it’s called Nord
Where can I get help with the local Ollama setup? I'm struggling, as the Copilot options interface doesn't show me any of the options used here. For example, the embedding model ollama-nomic-embed-text isn't there, and after I added it manually, the model behaves strangely. I'm just a beginner. Thanks
Documentation is available at obsidiancopilot.com!
Could you support KoboldHoard API? It allows decentralized LLMs
I'm a noob with all of this AI-related stuff and I would like to ask a question: if I have a vault with 50k notes (900k tokens), how much will one question cost using the OpenAI API (GPT-3.5 16k)?
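A rough back-of-envelope sketch of the math: the vault is embedded once, and each question is billed only for the retrieved context plus the answer, not all 900k tokens. Every per-token price and token count below is an example value I'm assuming for illustration, not a current OpenAI rate; check the pricing page for real numbers.

```python
# Back-of-envelope cost estimate. All rates are ASSUMED example values.
EMBED_PRICE = 0.10 / 1_000_000      # $/token, embedding-class model
CHAT_IN_PRICE = 3.00 / 1_000_000    # $/input token, GPT-3.5-16k-class model
CHAT_OUT_PRICE = 4.00 / 1_000_000   # $/output token

vault_tokens = 900_000              # one-time embedding of the whole vault
one_time_embedding = vault_tokens * EMBED_PRICE

# Per question: only the retrieved chunks + question + answer are billed.
context_tokens = 4_000              # retrieved chunks placed in the prompt
answer_tokens = 500
per_question = (context_tokens * CHAT_IN_PRICE
                + answer_tokens * CHAT_OUT_PRICE
                + 50 * EMBED_PRICE)  # embedding the question itself

print(f"one-time indexing: ${one_time_embedding:.2f}")   # → $0.09
print(f"per question:      ${per_question:.4f}")         # → $0.0140
```

The takeaway is that indexing is a one-time cost and each question costs cents at most, because retrieval keeps the prompt far below the full vault size.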
Is there a way to save a massive embedding index locally on your machine using something like Pinecone, and then have your plugin reference that separately?
AFAIK Pinecone doesn't have a local vector store that works in-browser. But sure, you could host something like that yourself and let the plugin reference it, if you know how to code.
@loganhallucinates I most definitely do not know how to code, lol, but maybe "Devin" from Cognition AI can take care of that for me 🤭
From what I understand from this video, you can index with the Ollama, Azure OpenAI, OpenAI, and Claude/Cohere models? I can't really use any of these, since Ollama takes a lot of system resources to run locally, and the rest are paid services, which I can't do either. Will there be things like OpenRouter or Gemini that can also index? I know Hugging Face could index before, but I don't think that's available anymore. So am I missing something, or am I unable to chat with my notes for now? Thank you! I think the work you are doing is great, and now I am able to use Gemini and Mistral through OpenRouter and talk with AI in Obsidian!
You can still use any provider with the advanced settings override!
@loganhallucinates Wait... so would I put my OpenRouter API URL in the Proxy Base URL and the model name I copied in the OpenAI proxy model name?
Add your payment details to OpenAI (by depositing like $5). That's it.
@@cultsulth unfortunately I can't pay for anything, but thanks
The direction is right, but using the OpenAI API may come at a considerable cost.
Can I import my Google Keep notes to Obsidian, including hashtags?
Hello Logan, I tried indexing the vault, but it seems that by doing so I hit the rate limit of the OpenAI API. Could that be the case? I am not able to use my account anymore. Is this something you came across? Thanks for the great work with the plugin!!
Hey! You can confirm that by opening the console; a 429 status means rate limiting. If you can't use OpenAI for some reason, you can use other models too. You could also use the advanced settings to override the endpoint with any OpenAI-compatible API.
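For anyone hitting 429s while indexing, the standard mitigation is exponential backoff between retries. A minimal sketch; the flaky endpoint here is simulated with a plain exception, and the plugin's real request code may differ:

```python
import random
import time

def with_backoff(send_request, max_retries=5, base_delay=1.0):
    """Call send_request, retrying on a rate-limit error (simulated
    here as RuntimeError) with an exponentially growing, jittered wait."""
    for attempt in range(max_retries):
        try:
            return send_request()
        except RuntimeError:                    # stand-in for HTTP 429
            if attempt == max_retries - 1:
                raise                           # give up after max_retries
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Example: a fake endpoint that fails twice with 429, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # → ok
```

Indexing a large vault in smaller batches, with waits like this between batches, usually stays under the per-minute limits of a fresh OpenAI account.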
LM Studio now has a setting for embeddings. Will this plugin also support that in the future?
Yes, it's coming next!
the copilot plugin is really good
Brilliant 🎉... new subscriber earned. Can embeddings handle CSV files and conduct data analysis? Looking forward to testing it out... ❤
Thanks! Right now only markdown, but other files will be added!
Awesome work. Does it work with PDFs embedded in Obsidian notes?
That’s coming in the next few weeks, stay tuned!
I would like to be able to call up individual folders and notes in context, like in Smart Connections... Is it possible in Copilot?
Yes, you have different options: the advanced custom prompt, the command “set note context for chat mode”, and the upload button. An even easier way is also coming in the next couple of releases. Check out my last video for the advanced custom prompt.
Thanks! Is any planning underway to include shortcuts like [[]] to select notes and/or directories?
Great job, thanks...
Is this better than Smart Second Brain?
Hello, I don't know why, but I haven't been able to index the vault. I tried everything: when I select the model, enter the key, and click either forced or normal indexing, I get an "error occurred while indexing vault to vector store". The truth is that it takes a while, and when indexing starts it doesn't complete; it stays at around 11 files and doesn't go any further. I've tried everything.
Could you check the Troubleshooting section of the video? If nothing works, please follow those instructions to open an issue on GitHub with a detailed description and screenshots.
Does this require any specific note structure? A lot of my notes are bullet points under daily notes.
No, my notes are like that too
This is terrific! Thank you so much. Any way to tip you?
It's in the video description, github sponsor or buymeacoffee both work. Thanks!
ty logan
Great job.....
Logan, it's me again. I've been thinking: what if your plugin could work with not just PDF, but also HTML, DOCX, etc.? I think your plugin might change the tide of Obsidian!
Use case: there's an article about the carnivore diet that makes me want to learn more about it, so I do a Google search and find a whole bunch of files like PDFs and HTML pages.
I just dump all those files into a vault and then ask the AI, via your plugin, about the carnivore diet, and while chatting I can put the important things from the chat into notes. And maybe an AI model could be trained to give random important tips about the subject, so that the trained one could say: "Important tip: if you are willing to try this diet, or you know someone who will, you must know about electrolyte consumption; it is important, bla bla bla."
This use case will change a lot of things in learning and note-taking.
More types of embedded files will be supported, and the AI will gradually become smarter at finding the relevant pieces. Stay tuned!
@loganhallucinates That is really great news; wow, you are really amazing. Kudos to your work.
Awesome work! Curious: what is the best way to organize your vault so that the LLM has the best chance of grabbing relevant documents in the embedding vector-store search?
Good question! There are too many moving parts in the current Vault QA mode, so there's no simple answer. You have to know how to pick a good combo of LLM + embedding model, and the best ways to phrase questions. This is the downside of customizability. I will introduce a more opinionated mode that works out of the box next. Stay tuned!
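Under the hood, Vault QA-style retrieval embeds the question and ranks note chunks by cosine similarity, which is why self-contained chunks with descriptive wording tend to surface well. A toy sketch with made-up 3-dimensional embeddings (real models use hundreds of dimensions, and the note titles here are invented examples):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy vault: chunk text -> made-up embedding vector.
chunks = {
    "2021-03-04 daily note: ran 5k":     [0.9, 0.1, 0.0],
    "Recipe: lentil soup":               [0.1, 0.9, 0.1],
    "Training plan for half marathon":   [0.8, 0.2, 0.1],
}
# Pretend embedding of the question "what are my running habits?"
question_vec = [0.85, 0.15, 0.05]

# Rank chunks by similarity to the question; top hits become context.
ranked = sorted(chunks, key=lambda c: cosine(chunks[c], question_vec),
                reverse=True)
print(ranked[0])
```

Both running-related chunks outrank the soup recipe, so writing notes as focused, self-describing chunks gives the retriever cleaner vectors to match against.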
Does it work with notes written in languages other than English?
The short answer is yes. It depends on which chat model and embedding model you choose; it's highly customizable. If you don't want to dig into all that, I'm announcing a new mode that works out of the box for nontechnical users.
@loganhallucinates You're amazing, thank you for your efforts!
Where are embeddings stored?
They are in a local PouchDB.
Within the Obsidian folder? I mean for OpenAI embeddings only. Thank you @loganhallucinates
WOW ty!
❤
Don't understand.
Great!