Perplexity CEO explains RAG: Retrieval-Augmented Generation | Aravind Srinivas and Lex Fridman
- Published 16 Oct 2024
- Lex Fridman Podcast full episode: • Aravind Srinivas: Perp...
Please support this podcast by checking out our sponsors:
Cloaked: cloaked.com/lex and use code LexPod to get 25% off
ShipStation: shipstation.co... and use code LEX to get 60-day free trial
NetSuite: netsuite.com/lex to get free product tour
LMNT: drinkLMNT.com/lex to get free sample pack
Shopify: shopify.com/lex to get $1 per month trial
BetterHelp: betterhelp.com... to get 10% off
GUEST BIO:
Aravind Srinivas is CEO of Perplexity, a company that aims to revolutionize how we humans find answers to questions on the Internet.
PODCAST INFO:
Podcast website: lexfridman.com...
Apple Podcasts: apple.co/2lwqZIr
Spotify: spoti.fi/2nEwCF8
RSS: lexfridman.com...
Full episodes playlist: • Lex Fridman Podcast
Clips playlist: • Lex Fridman Podcast Clips
SOCIAL:
Twitter: / lexfridman
LinkedIn: / lexfridman
Facebook: / lexfridman
Instagram: / lexfridman
Medium: / lexfridman
Reddit: / lexfridman
Support on Patreon: / lexfridman
Full podcast episode: ua-cam.com/video/e-gwvmhyU7A/v-deo.html
Lex Fridman podcast channel: ua-cam.com/users/lexfridman
After watching the whole talk yesterday, I asked Perplexity if Lex is smart & handsome, to which it responded that he's widely known to be intelligent, but that it had no info about his physical appearance. Guessing the question would buzz on social media and boost indexing/ranking, I asked Perplexity the same thing today, and now it says that Lex is respected, smart, and a hottie.
From existential crisis channel to LLM teaching channel :)😅
Great video.
I wonder why there is no BM25Vectorizer in scikit-learn? There's a TfidfVectorizer.
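Good question — scikit-learn ships a TfidfVectorizer but no BM25 equivalent, so people usually reach for third-party packages like rank_bm25 or write it themselves. A minimal sketch of Okapi BM25 scoring in plain Python, with k1 and b at their common defaults (tokenization and parameter choices here are illustrative, not anyone's official implementation):

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each document against query terms with Okapi BM25.

    docs: list of token lists. Returns one score per document.
    """
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # Document frequency: in how many docs does each term appear?
    df = Counter(t for d in docs for t in set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            # Term frequency saturates via k1; b controls length normalization.
            score += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            )
        scores.append(score)
    return scores

docs = [["rag", "uses", "retrieval"],
        ["pure", "llm", "answers"],
        ["retrieval", "plus", "generation"]]
print(bm25_scores(["retrieval"], docs))
```

Unlike raw TF-IDF, BM25 saturates term frequency and normalizes by document length, which is why it remains the standard lexical baseline in retrieval pipelines.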
haha I hope Lex video edited out what Aravind said about what not to say. That would be so metal. Meta-metal!
when LLMs begin having 1,000,000,000 token windows, the RAG pattern might be obsolete
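For context, the RAG pattern the title refers to is retrieve-then-generate: fetch relevant documents first, then put them in the prompt. A minimal sketch, where the toy word-overlap ranking and `call_llm` are hypothetical placeholders (not Perplexity's actual pipeline):

```python
def retrieve(query, corpus, k=2):
    # Toy relevance: count words shared between query and each doc.
    # Real systems use BM25 and/or embedding similarity here.
    q = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def answer_with_rag(query, corpus, call_llm):
    # Stuff the retrieved passages into the prompt, then generate.
    context = "\n".join(retrieve(query, corpus))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return call_llm(prompt)
```

The billion-token-window argument is that the retrieve step disappears because the whole corpus fits in the prompt; the counterargument (next comment) is the cost of attending over all of it.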
Indeed! I believe Perplexity is already doing this in normal searches.
The difficulty with very large context windows is that the processing power required grows with the square of the context window size. My impression was that this is fundamental to how attention works in the transformer architecture, but it seems to me there might be ways to optimize it with further research.
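The quadratic cost the comment describes comes from the n×n attention score matrix: every token attends to every other token. A minimal NumPy sketch of scaled dot-product attention makes the n×n intermediate visible (shapes are illustrative):

```python
import numpy as np

def attention(Q, K, V):
    # Q, K, V: (n, d) arrays for a sequence of n tokens.
    d = Q.shape[-1]
    # This (n, n) score matrix is the quadratic bottleneck:
    # compute and memory scale with n**2 in sequence length.
    scores = Q @ K.T / np.sqrt(d)
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # back to (n, d)

rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # (8, 4)
```

Doubling the context length quadruples the size of `scores`, which is why sub-quadratic approximations (sparse, linear, and chunked attention variants) are an active research area.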
This is basically what AI is and will be for a while.
Obviously this was recorded before they were caught scraping Wired and Forbes articles despite robots.txt files, using unlisted IP addresses...lol
Perplexity is complete shit