Thank you so much!! I was about to start a brand new vault with two years of research because I was concerned that I had too many notes. This has saved my life! Thank you so much. I really appreciate this.
Ollama is pronounced "Oh Lama".
Thank you Prakash for your videos. You have been providing immense value to the community.
Your channel is a goldmine.
Great video. Thank you, brother!
I'd love to know when they're going to add the Llama3 model. Also, could you make a video explaining the differences between Nomic and MXBai? How do they work, and what is best for each case? Is there a way to use LMStudio instead of Ollama? Liked and subscribed. Amazing video
It's already possible.
I'm using llama3 for both embedding and generation. And it works much better that way than the default embedding models.
Pax, your video quality is getting better and better. What are you using for your video-Editing?
Thanks ❤️. Right now, I'm transitioning to Final Cut Pro because of how fast and reliable it is.
Thank you for this awesome and insightful video, man!
By far the best ai use I have seen so far in obsidian
I was thinking about creating some API to read all my notes, but this is already made haha
Thanks for the great video and explanation! this is GOLD!
Brother, you are great. I was struggling to install Fabric. After your video it didn't take much time.
Thank you so much for creating this video. Every aspect has been helpful.
Is there a plugin that can use A.I. to create and edit notes as well? This would be a big leap forward as far as efficiency of note taking goes!
My idea was to take an idea explored with an LLM, then have the LLM summarize the conversation and conclusions (or you could write it in your own words), and then the LLM could automatically generate the notes. The original conversation could be saved in a separate text file that doesn't bloat your vault, which can be linked to from the note. It shouldn't be too hard to automate all this, but I'm not a programmer, or else I would love to make it a reality.
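For what it's worth, the workflow described above could probably be scripted against Ollama's CLI. A minimal sketch, assuming Ollama and a local model are installed; the file names and vault path here are placeholders, not anything from the video:

```shell
# Sketch: summarize a saved LLM conversation into a vault note,
# keeping the raw transcript outside the vault.
CONVO="conversation.txt"                              # raw transcript (placeholder path)
NOTE="$HOME/Vault/Summaries/$(date +%Y-%m-%d).md"     # target note (placeholder path)

# Pipe the transcript into a local model with a summarization prompt.
ollama run llama3 "Summarize this conversation and its conclusions as Markdown bullet points:" \
  < "$CONVO" > "$NOTE"

# Append a link back to the original transcript instead of importing it.
printf '\n[Original conversation](file://%s/%s)\n' "$PWD" "$CONVO" >> "$NOTE"
```

This keeps the bulky transcript out of the indexed vault while the note stays searchable and linked.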
Cool and easy to use! Thanks for the tips.
Thank you for making the video on AI... I strongly believe that AI will dominate the PKM area.
This is wonderful!
❤ Great video
Does it update as you add new notes?
Thank you for the comprehensive plugin presentation!
I'm curious about the parsing notes into vectors process: does it take into account any linking or tagging, or does it parse only text information?
What do you think?
Hi! Yours is the only good video I've found on this.
for some reason the chat only writes 1-2 sentences, and then the chat ends abruptly, but with no error message.
Any idea why?
So can I feed it a folder with PDFs as well as my own notes?
Why does it show the error "Ollama call failed with status code 500"?
I love that it is free and private. What do you think about other AI plugins like Smart Connections?
The main differentiator is this: it supports open-source models, while Smart Connections uses OpenAI APIs, which you have to pay for. Smart Connections also has a license model where you have to pay to access some features.
@@beingpax thanks for explaining👍 Just one more question: if I add a note after indexing, do I have to build a new index? Does it recognize just the new note, or do I have to index the whole vault again? And where will the index be saved?
Not 100% sure, but I think it will only index the new or updated notes.
Smart Connections is free too, but the answers aren't reliable right now. I'd love to see it improve.
Also from me: many thanks for the really helpful videos! I have another question: What kind of download manager can we see at 2:15? I've been looking for something like that for ages! Greetings from Germany!
Wow, that looks useful. Did you try that with languages other than English?
next level...
Love the tutorial. When I have my new PC I will surely get into it.
Somehow, when I attempt to obtain the sources from which the bot gathers its information, it only provides me with numbers, e.g., (Source: 2). Then, when I click on 2, a new note with 2 as the headline appears. I have the feeling it does not take information out of my vault. Where am I going wrong?
Thank you in advance!
thanks for showing - nice video 👏
How could I connect it if my Ollama is inside WSL on my Windows machine?
Hello sir, I have encountered this error: "Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted." I have killed the PID with taskkill, but it didn't work; another PID appears. Could you help me? Do I have to install Python or anything else?
Have you found a way past this problem?
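A note on the error above: "only one usage of each socket address" means something is already listening on port 11434, i.e., an Ollama server is already running, so the plugin may be able to connect without starting another one. If you do need to restart it, the respawning PID is usually the Windows tray app relaunching the server after you kill it. A hedged sketch for a Windows command prompt (process names may differ by Ollama version; no Python is required):

```shell
:: Kill the tray app first, otherwise it restarts the server.
taskkill /F /IM "ollama app.exe"
taskkill /F /IM ollama.exe

:: Confirm nothing is still listening on Ollama's default port 11434.
netstat -ano | findstr 11434

:: Only now start a fresh server if you actually need one.
ollama serve
```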
Just English and German. Since Llama 3 can talk in Spanish, is there a way to change the assistant language?
Sadly, it does not show up after installing. It must clash with some of the other plugins.
I am getting the error message "Failed to initialize Smart Second Brain (Error: TypeError: Failed to fetch)". Any ideas?
Same problem here.
Thanks for the great video!
How is this model with languages other than English?
Hello Prakash,
Firstly, thank you for your videos, which are always very professional and comprehensive. I need your help. Following your guide on Linux, I installed the Obsidian plugin, configured it, and downloaded Ollama and Gemma. However, in the Smart Second Brain section of Obsidian, I am unable to configure the AI to function correctly. The setup pane I see is much less complete and detailed than the one you show on macOS. In particular, I am missing the "start Ollama" command.
When you open the Smart Second Brain pane, you have the option to set the Ollama origins to enable streaming responses. Follow those instructions and it will probably solve the issue.
What's the exact problem you are having?
Can I run this in Polish by any chance? Currently only English is available. I wonder if there is any LLM that works with Polish.
Thank you for the useful information. Thank you for your assistance.
My vault is over 3 million words across 2k+ files, all geared towards one project. I was excited to use the Copilot plugin because it advertises "vault mode", but was quickly disappointed when its default reference amount was only 3 notes/files at a time (lol), and it can only stretch to 10, with a warning that doing so will probably degrade the responses.
I want to communicate with my vault as a whole for perfect macro context, but it seems my use case is still not possible with current AI? Smart Connections doesn't seem to be any better, or am I mistaken?
This local solution might work for you. Try an embedding model like bge + llama3 for generation. It'll take a LONG while the first time though
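If you want to try that combination, the models can be pulled with Ollama's CLI. The tags below are examples; check the current Ollama model library for the exact names available:

```shell
# Pull an embedding model and a generation model (tags may vary).
ollama pull bge-m3     # embedding
ollama pull llama3     # generation

# Verify both are available locally before pointing the plugin at them.
ollama list
```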
If you have .pdf files in your vault, does this plugin index the PDF files? If so, does that significantly increase the indexing time? Also, does it use the information in the PDFs to answer questions?
It doesn't support pdfs, yet.
Do you know where on the computer these models are stored? I have trouble finding the files... and they are huge, so after trying some models you might want to delete them.
Did you happen to learn how to delete the models?
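In case it helps: Ollama keeps model data in a hidden `.ollama` directory, and its CLI can list and delete models, which is safer than removing blob files by hand. A sketch of the usual commands; storage locations may vary by platform and version:

```shell
# List installed models and their sizes.
ollama list

# Delete a model you no longer need (can free several GB).
ollama rm llama3

# Typical default storage locations:
#   macOS/Linux: ~/.ollama/models
#   Windows:     C:\Users\<you>\.ollama\models
ls ~/.ollama/models
```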
thanks
What's your hardware spec? It seems to be taking forever to index as well as to generate answers on my Ryzen 5 laptop.
I'm on a MacBook Pro M2. Some local models take a very long time. I find the Mixtral model with mxbai works best.
Why is my AI so slow to respond?
What specs (RAM/processor) do you need on your computer for running a local LLM like the new Llama 3 8B? I have a Mac mini with 8 GB.
8 GB will HAMMER your swap (the internal SSD, which I gather is only 256 GB) and end up destroying the SSD.
@@HiltonT69 an interesting detail to this whole story :)
What LLM should I use for a different language (German)?
Difficult.
All these tests make me think that Smart Connections does a much better job of this...
❤❤❤❤❤
Yt sucks hard and removed my comment about using Llama 3, which is possible to do!
How is it, and what are the system requirements to run this stuff?
From my point of view, the Copilot plugin is somewhat better than this one.
Meta and privacy is an oxymoron. 😂
The LLM is pronounced "OH-LA-MA". C'mon, man!!!!!