I just want to say, regarding the first half with the audio recording / transcript: I've been doing that for years and years now, using Lilly Speech to transcribe my voice while I simultaneously click the recording microphone button on my laptop, and then I just save the audio file in the same note. Voila, same output: I can read everything I said verbatim and, if need be, also listen to my tonality. All the API-key talk and sending things back and forth seems like a lot of runaround. I believe the key to unlocking the future of technology is found in its analog application.
Is it possible to use LM Studio or Ollama running models locally (e.g. llama3) and avoid anything going to an external service (privacy/security concerns)? I'm doing something similar with the Smart Connections plugin > Ollama > llama3. Your plugin's functionality looks great.
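For anyone wondering what the fully local setup this comment describes looks like in practice, here is a minimal sketch of talking to an Ollama server on its default port (11434) from Python. It assumes `ollama serve` is running and `llama3` has already been pulled; the prompt text and helper names are illustrative, not part of any plugin's API, and nothing leaves the machine.

```python
# Sketch only: query a locally running Ollama server so note content
# never goes to an external service. Assumes `ollama serve` is running
# and `ollama pull llama3` has been done beforehand.
import json
import urllib.request

def build_payload(prompt: str, model: str = "llama3") -> dict:
    """Request body for Ollama's /api/generate endpoint."""
    # stream=False returns a single JSON object instead of a stream
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local Ollama server and return its reply."""
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

A plugin offering a "custom endpoint" setting could reuse the same call shape, since LM Studio and Ollama both expose local HTTP APIs.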
Ultimately I hope a local LLM could act like a PA using the Obsidian brain. For this to be achievable, some use of OpenAI (or similar) is surely needed, both to bring in outside knowledge (via browser AI use plus cut-and-paste, with Obsidian automatically adding frontmatter or tags) and to ensure the tags, frontmatter, and links let a local LLM find related information for routine use later on. There seem to be over a dozen existing plugins that do quite clever things with tags, some of them using AI; while I await this new plugin I'm going to explore them. I find local LLMs slow on my day-to-day laptop but much faster on my desktop with its GPU. I dislike the idea of ending up needing paid AI access routinely, but I don't mind using it occasionally to get the Second Brain trained, in a similar way to how the AI companies have to train their LLMs.
Looks great. It would be a nice option to be able to send audio to a local Whisper model for transcription rather than to OpenAI online.
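As a rough illustration of what this request would mean, the open-source `openai-whisper` package can transcribe a recording entirely offline. This is a hypothetical sketch, not the plugin's implementation: the note-formatting helper and model size are my own assumptions.

```python
# Sketch only: local Whisper transcription instead of the online API.
# Requires `pip install openai-whisper` plus ffmpeg; the model file is
# downloaded once and cached, after which everything runs offline.

def format_transcript_note(audio_path: str, text: str) -> str:
    """Wrap a transcript in a small Markdown note that embeds the audio.

    Hypothetical helper: mirrors the commenter's habit of keeping the
    audio file and its verbatim transcript in the same note.
    """
    return f"![[{audio_path}]]\n\n## Transcript\n\n{text.strip()}\n"

def transcribe_locally(audio_path: str, model_size: str = "base") -> str:
    """Transcribe an audio file with a locally cached Whisper model."""
    import whisper  # imported lazily; heavy dependency
    model = whisper.load_model(model_size)
    result = model.transcribe(audio_path)
    return format_transcript_note(audio_path, result["text"])
```

Smaller model sizes ("tiny", "base") run acceptably on CPU-only laptops; larger ones ("medium", "large") really want a GPU.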
Looks very nice.
Do you plan to add local LLMs or Claude?
This looks fantastic. I like that it has an OpenRouter option, which opens up all the non-OpenAI models.
Will be waiting for the release!
Can you also add a custom connection, like textgenerator has, to use local models with LM Studio or Ollama?
Looks really great!
This looks good. Takes the place of Whisper?
This plugin looks cool. Could you please create this in the SiYuan note-taking app?
When will it be released?
Just released... well, pre-released a version! Check out the latest video. Cheers!
👀👀
Looks really good!