Very cool. I have been thinking about how to code an "interactive" YouTube viewer which does some of the things you have here. My idea is, while people are watching a particular video (which would have to be post-processed, obviously), they can "pause" it and "ask" the person(s) in the video what they meant by a particular statement or idea. The (Agent) in the video would have the full context of the video and respond accordingly. Then the viewer could unpause the video and resume watching.
Wow, so impressive! I love MFM. So cool to see you mentioned on their channel and to see you on OpenAI's 12 days of Christmas!
Heck ya! Thanks for the shout out. I gotta make a video about that OpenAI experience later.
@DataIndependent Please do! I bet a lot of your followers were proud to see you there. I know I was!
I watch just about every AI video I can find or save them in a playlist for later. This is one of the most important ones I've listened to. It gave me so many ideas about how to keep myself up-to-date and filter out the noise. I can't wait to try them. This is amazing! Thank you sooooo much!!!
Nice!!
Love it! Amazing!
Amazing video, Greg! Thank you so much for sharing! You legend!
I have a question: did you do some chunking before passing the transcripts to the LLM? Those JSON transcripts seemed very bulky.
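(The video doesn't confirm how the transcripts were prepared, but a minimal sketch of the kind of chunking the question is asking about, assuming simple character-based splitting with overlap, might look like this:)

```python
# Hypothetical sketch: split a long transcript into overlapping chunks so each
# piece fits comfortably in an LLM's context window. Sizes are illustrative.
def chunk_transcript(text: str, chunk_size: int = 4000, overlap: int = 200) -> list[str]:
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # overlap keeps some context across chunk boundaries
    return chunks
```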
Hi Greg, brilliant as always!
Couldn't find links to Dwarkesh's prompts. Would you mind sharing those when you get a chance?
Thanks again for being kind enough to share it with your crew!!
We deeply appreciate it!
Great work, man, this is awesome. How long did it take you, and what database are you using?
I had the DB on the slide.
Just Supabase with pgvector.
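(A minimal sketch, not Greg's actual code, of the kind of similarity search Supabase with pgvector enables; the table name video_chunks, the column names, and the 1536-dimension embedding are assumptions for illustration:)

```python
import psycopg2  # plain Postgres driver; Supabase exposes a standard Postgres connection

conn = psycopg2.connect("postgresql://user:password@your-project.supabase.co:5432/postgres")

query_embedding = [0.0] * 1536  # placeholder; real code would embed the search query first
vector_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"

with conn.cursor() as cur:
    # <=> is pgvector's cosine-distance operator: smaller distance = more similar
    cur.execute(
        """
        SELECT content, embedding <=> %s::vector AS distance
        FROM video_chunks
        ORDER BY distance
        LIMIT 5;
        """,
        (vector_literal,),
    )
    for content, distance in cur.fetchall():
        print(round(distance, 3), content[:80])
```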
Thanks, great video :)
Nice!! Thank you for all the nice comments across the videos.
Pretty amazing work, Greg, super well put together! I did something similar for my own use to capture insights, etc., from various YouTube videos. Your work is inspiring! BTW, how long did it take you to build it? I am assuming it was just a single person. Thank you!
Nice!
Yep, just a single person as a side project, and it took a while: maybe 2 months of a few hours a week?
This is amazing!!
Thanks, man.
Any plan to open the vault to the public?
As we enter the age of AI Slop, this vid is incredibly useful. Thanks for sharing!
Glad to hear it
Great video, but I don't know how long it will stay up because of the first part; you might want to have a backup vid ready.
What do you mean? The clip of Sam/Shaan? Hope that doesn't stop it.
@DataIndependent No, the library you are using in your first step.
I watched this podcast a while ago. Didn't know that the Greg they were talking about is you.
Nice!! Ya
Instead of downloading and transcribing, you could have used the YouTube API to get the transcription. Most of the videos have it. It's less compute-intensive, and probably more reliable too if the YouTube author provided the transcriptions. (Edit: I see what you did there; Deepgram gives diarization and you can get segments.)
Those transcriptions aren't great, tbh, and the speakers aren't split out.
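(For reference, the caption route the comment suggests is typically done with the third-party youtube-transcript-api package rather than the official Data API; the sketch below, assuming its older get_transcript interface, shows the limitation the reply points out: segments carry text and timestamps but no speaker labels.)

```python
from youtube_transcript_api import YouTubeTranscriptApi  # assumes the older get_transcript interface

segments = YouTubeTranscriptApi.get_transcript("VIDEO_ID")  # replace with a real video ID
for seg in segments[:5]:
    # Each segment looks like {"text": ..., "start": ..., "duration": ...} -- no "speaker" field,
    # which is why a diarizing transcriber (e.g. Deepgram) is needed to split out speakers.
    print(f'{seg["start"]:7.1f}s  {seg["text"]}')
```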
Will this channel become the 3Blue1Brown of AI building? Let's see, but it's headed in a good direction for sure! Awesome content!
Nice! That’s a high bar
@DataIndependent We believe in you, you can do it! 🌟😂
Insane
Ya!!
Open source?
Not at the moment. The code is too messy.
@DataIndependent Awesome, brother. I like it. Can you open-source it? I will contribute to it.