Dude! You guys are spoiling us with these latest tools. :) I really like the modular approach you can take with these graphs, and the timing is perfect because I just did up to module 5 of the academy. One thing that cracked me up, though, is that the voice clone you used is the same one used in so many of the YouTube Shorts videos my kids watch; it's become the de facto AI voice on YouTube, LOL!
You're building the whole core functionality of an app in a 15-minute video. Goddamn, I'm so thankful these resources are available!
This is awesome! So good to know that we can do the subgraph thing. On another note, I'm almost done with the LangGraph course from LangChain Academy. So extremely helpful.
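For anyone wondering what "the subgraph thing" looks like in code, here is a minimal sketch, assuming a recent langgraph release and a parent and child graph that share the same state schema; all names are illustrative, not the video's actual code:

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    text: str


# Child graph: a single node that transforms the shared state.
def shout(state: State) -> State:
    return {"text": state["text"].upper()}


child = StateGraph(State)
child.add_node("shout", shout)
child.add_edge(START, "shout")
child.add_edge("shout", END)
child_graph = child.compile()

# Parent graph: the compiled child graph is added directly as a node (the subgraph pattern).
parent = StateGraph(State)
parent.add_node("child", child_graph)
parent.add_edge(START, "child")
parent.add_edge("child", END)
graph = parent.compile()

print(graph.invoke({"text": "hello"}))  # {'text': 'HELLO'}
```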
just finishing module 1......
That's impressive! I've modified it, and now I can use it with the local Whisper model instead of OpenAI's.
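A rough sketch of that kind of swap, assuming the open-source openai-whisper package (pip install openai-whisper) and an audio file path produced by an earlier recording step; the function and variable names are illustrative only:

```python
import whisper  # open-source local Whisper package

# Load a small local model once at startup; larger models trade latency for accuracy.
model = whisper.load_model("base")


def transcribe_locally(audio_path: str) -> str:
    """Return the transcript of a recorded clip using the local Whisper model."""
    result = model.transcribe(audio_path)
    return result["text"]
```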
Really impressive use case for subgraphs; it really drives the point home. RemoteGraph is super interesting. Now I just need more time to keep up with the pace you guys are churning stuff out at.
You're the man, Lance! This is awesome. 🙇‍♂️
this is really awesome
This is really cool, but can you guys spoil us some more with a similar approach for the OpenAI Realtime API?
I built something like this using LangGraph; it's not as elegant, but it's functional and works well. My audio out is ElevenLabs Turbo, which I'm happy with, but for my STT input node I've been testing different models to find the most responsive and effective one for always-on communication. That is to say, my use case required no activation phrase and no UI event (e.g., key press). Again, it functions well, but as you already know, responsiveness is king here, so any way I can reduce lag, the better for me. I started with the Whisper API, then went to a local install of Distil-Whisper, but finally landed on a local install of Vosk, which seems to be the most responsive and plenty accurate. The question is: have you tried this, and can you tweak Whisper via the OpenAI API (or any other flavor) to perform better than Vosk? Also, with local implementations of STT (at least the open-source ones) there is no cost, so that's another bonus.
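For reference, a rough sketch of the kind of always-on Vosk loop described above, assuming the vosk and sounddevice packages and a downloaded Vosk model directory; the model path and the hand-off point to the graph are illustrative only:

```python
import json
import queue

import sounddevice as sd
from vosk import Model, KaldiRecognizer

SAMPLE_RATE = 16000
audio_q: "queue.Queue[bytes]" = queue.Queue()


def _on_audio(indata, frames, time, status):
    # Push raw PCM chunks from the microphone into a queue as they arrive.
    audio_q.put(bytes(indata))


model = Model("model")  # path to an unpacked Vosk model directory (assumed)
recognizer = KaldiRecognizer(model, SAMPLE_RATE)

# Always-on: no wake word and no key press, just a continuous microphone stream.
with sd.RawInputStream(samplerate=SAMPLE_RATE, blocksize=8000, dtype="int16",
                       channels=1, callback=_on_audio):
    while True:
        chunk = audio_q.get()
        if recognizer.AcceptWaveform(chunk):
            text = json.loads(recognizer.Result()).get("text", "")
            if text:
                print(text)  # hand the finished utterance to the graph here
```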
Legend Lance himself
He lives, he dies, he hates the Jackson 5, Lance from Langchain!
Why do we need Salesforce now?
LangGraph Studio for Windows?
This is awesome. Can we easily get interruptibility like LiveKit?