This is mind-blowing! How did I not find this earlier? Sponsored it right away!
May I ask what files will be indexed locally? Say I have md, txt and org files. Does it mean only md files are indexed? If not, how can I index other types of files? Thx
It was a really pleasant surprise to receive the API key and learn about this new tool. It's like we have RAG in the context of our vault without having to make any effort for that. Great work, Logan!
Glad you’re enjoying the tool!
@@loganhallucinates hey how can I get access to the alpha?
@@ThinhSan You can become a sponsor of the project on GitHub or donate via Buy Me a Coffee. I'll then send you the test license key.
I love your project. I’ve been using it for a year. It’s one of the best AI apps out right now, IMO.
Thank you so much ❤️ You guys are the best!
This is amazing Logan! Thank you for building this PKM enhancer!
Amazing work, you unlocked a new era by yourself
This is awesome! Good job on this, I've joined the waiting list!
This is amazing! Keep up the great work. The amount of time this is going to save me is incalculable.
Wow, this is huge. I can use it to replace my AI search with Perplexity. The YT transcription was awesome as well, although I much prefer the Text Generator Plugin for transcription since it not only transcribes but also lets a saved prompt trigger and create a summary.
Anyways, great work Logan!
I just signed up for the waitlist! Would love to be a paying user right now!! Great work with this!
Thanks! The official launch is coming soon!
Totally awesome demo, will be playing with it and sharing feedback!
Awesome plugin! ❤ I'm pretty sure this will make a big improvement to our efficiency, thanks for your work ❤! And waiting on the waitlist. 😂
Thank you! It's been a lot of work! Stay tuned for the official launch.
I still use Copilot with Obsidian for certain queries and tasks, but found the Cline integration with VS Code more flexible. For example, quick parsing and editing of any doc or folder on the whole PC, not just in Obsidian. I think access to the latest models is also necessary, like the latest iteration of Sonnet.
You can add any model in this plugin as a custom model. As for actions like editing, I’m considering it. I don’t like AI generating directly in my notes because writing notes is my way of thinking. It’s different from writing code; delegating thinking to AI is a no-no for me.
@@loganhallucinates I get that. I just think it's a necessary feature for many tasks, such as coding or editing documents. Which is why I say Obsidian Copilot is useful for certain things, but not others.
@@loganhallucinates You are close to a solution for both. The canvas add-in will be used for decision trees / business logic trees embedded with RAG and AI. Super powerful when it arrives; guaranteed top feature.
Sponsored the github project! w00t!
Thanks for the support! Please check your email for the test license key!
Waiting for it!!! ❤❤❤❤ On the waitlist.
It is taking better and better shape. Waiting for Spanish support. Regards.
Does the AI do the YouTube video transcriptions itself, or does it just copy YouTube's transcription? Hope this question makes sense to you.
Great question. At the moment it’s so fast because it copies YouTube’s transcript. It will support transcription on its own later.
@@loganhallucinates Understood that the ability to search all notes either by path or by tag will be available in the new version when it's released, right? For example, if I ask Copilot to summarize all my daily diary notes for 2024, it would be able to scan the contents of all daily notes for 2024 and give me, say, a 10-point bullet list with the main themes, right?
@@GytisStankevičius-y8o It can already do that in the Copilot Plus alpha. You just have to make sure you have a model that can take in all your notes from the year (it can be costly, too).
@@loganhallucinates Love it, thanks for answering! Actually, I had only 30 daily notes for 2024 and the free Gemini 1.5 API handled it quite well. ☺ But I guess it could become costly with thousands of notes and billions of tokens, haha. The plugin is great, love it. You've done an amazing job!
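As an aside, collecting daily notes by path like this is easy to reason about outside the plugin too. A minimal sketch; the `YYYY-MM-DD.md` naming is an assumption (a common Obsidian convention), not something the plugin requires:

```python
from pathlib import Path

def collect_daily_notes(vault: Path, year: str) -> str:
    """Gather all daily notes whose filename starts with the given year.

    Assumes daily notes are named like '2024-01-15.md'; adjust the
    glob pattern for your own naming scheme.
    """
    notes = sorted(vault.rglob(f"{year}-*.md"))
    # Prefix each note with its name so the LLM can cite which day a theme came from
    return "\n\n".join(
        f"## {p.stem}\n{p.read_text(encoding='utf-8')}" for p in notes
    )
```

The resulting string is what would then be handed to a long-context model for the "10-point summary" request, which is also why cost grows linearly with the number of notes.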
looking forward!
How much of my Vault gets sent to OpenAI? Can I control the content sent? Can I control the content once it is sent to OpenAI? What is the max. limit of Vault size / content uploading?
Your search is already local; online LLMs are only used for the final generation step. You can also use local models; in that case nothing gets sent to any cloud provider.
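The split described here, local retrieval followed by cloud generation, can be sketched as follows. The keyword-overlap scoring and prompt format are illustrative assumptions, not the plugin's actual code; the point is that only the top-k snippets ever leave the machine:

```python
def retrieve(query: str, notes: dict[str, str], k: int = 3) -> list[str]:
    """Score each note locally by naive keyword overlap and keep the top k.
    Everything in this step stays on your machine."""
    terms = set(query.lower().split())
    scored = sorted(
        notes.items(),
        key=lambda kv: len(terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(query: str, notes: dict[str, str], k: int = 3) -> str:
    """Only the k retrieved snippets plus the question are sent to the
    online LLM; the rest of the vault never leaves the machine."""
    context = "\n---\n".join(retrieve(query, notes, k))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

So the answer to "how much of my vault gets sent" is: the retrieved context for each query, not the whole vault.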
This is great stuff. I'll comment on this: in minute 7, you needed to verify that the quotes were accurate by copying and searching the PDF. This is very time-consuming when writing extensively. Copilot used to create links to the specific section of the PDF that was opened; it no longer does. I switched to Acrobat AI because it's the only one that does it. Basically, every sentence or paragraph has a link that maps to the location in the PDF it was taken from, and it highlights it side by side. NotebookLM does something similar but poorly; it rarely works well. I'm curious why it is so hard for LLMs to include this feature. What are the limitations? Or is it just too niche?
This requires some engineering on top of the LLM response, doable but may introduce many corner cases. We are thinking about this and it’s on our roadmap.
@@loganhallucinates Thank you. I'll be following the development.
Also, any idea on when this will be rolling out? I could use it today! (Ha.)
The official version should be available in Dec.
@ Great to hear! Thank you.
This looks great 👍
Can it support recognizing images embedded in notes, instead of requiring users to manually input the images?
It’s in development. My next demo will be on multimodal capabilities for both explicitly passed notes and local vault search.
If you get a tester license key, will it be lifetime access to Copilot Plus?
and copilot plus looks amazing
Lifetime access will require a one-time Believer plan purchase. The test key is only valid before the official launch.
Is it ready to purchase? I saw the pricing page on the website.
How did you see the pricing page? Is there a button that leads to it?
I'm just curious about what theme you're using.
This is the Border theme
How does the vault option justify itself if it doesn't scan the entire vault? Those sources are a fraction of what the vault represents. My vault has been growing for years now and has over 3 million words stretched across over 2.5k files (notes). I was only looking forward to this for the vault feature, but it is not as advertised.
It does index the entire vault. Thousands of people are using it as expected. Please check the documentation.
This is very nice, but I want to know about the cost of all of it. What would the cost be for an entire-vault search, since it will be taking a lot of text into context on every query? Same question for a big PDF or YouTube transcript. Please do a complete cost breakdown.
Also, have you heard about vector embeddings? Our text (all notes or documents) is converted into a vector database so that the AI can search for the answer most efficiently (in cost and processing). Don't you think it would be useful in this scenario? We could convert our entire vault into a vector database and then give the AI access to that database for better results. This is also how AI agents are built.
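The vector-embedding setup described here can be sketched in a few lines. This toy version uses hashed bag-of-words vectors purely so it runs offline; real systems (including the local search the reply below mentions) use learned embedding models instead:

```python
import math
from collections import Counter

DIM = 64  # tiny embedding size for illustration

def embed(text: str) -> list[float]:
    """Hash each token into a fixed-size vector (a stand-in for a
    learned embedding model)."""
    vec = [0.0] * DIM
    for token, count in Counter(text.lower().split()).items():
        vec[hash(token) % DIM] += count
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search(query: str, index: dict[str, list[float]], k: int = 3) -> list[str]:
    """Return the k note names whose embeddings are closest to the query."""
    q = embed(query)
    return sorted(index, key=lambda name: cosine(q, index[name]), reverse=True)[:k]
```

The index is built once over the vault and queried cheaply per question, which is exactly why this is more cost-efficient than stuffing everything into context.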
Copilot for Obsidian is a top open-source project in RAG and AI agent tool use. You can get familiar with it by going over the documentation or the source code directly. Basics like RAG with local vector search were done a year ago.
To answer your question about cost, you can monitor it with your provider. Prompt caching works for OpenAI models out of the box. Anthropic and Gemini’s prompt caching will be added in the next release.
@@loganhallucinates or you can use ollama and it’s free. Correct?
The only thing that interested me in your 13 min is this:
Just to make sure: I already use search plugins... but I believe augmenting those with AI would be more beneficial.
Local vault search:
It is mainly why I'm considering using AI with references to sources.
Requirements:
My notes are personal, so I don't like the idea of using APIs that are not on my PC. I also hate the heavy resources for running big models. I prefer something as small as 3B or 1B.
You can already do all of this in the free version, completely without internet.
@@loganhallucinates I couldn't do anything without the need for an API key. I can generate an API key, but privacy is my concern.
@@Dex_1M Just use local models with Ollama or LM Studio. Please check the documentation.
Please stay an Obsidian plugin; it's too much friction to move.
I would prefer learning a new CrewAI... to load all my notes, chat with them, and use a notebook with them, rather than this.