Gemini Multimodal RAG Applications with LangChain
- Published Mar 7, 2024
- In this session, we'll present a novel approach for building visual question-answering assistants using LangChain and the Gemini multimodal language model.
Watch how easy it is to build applications and bring your questions for live Q&A!
Join, learn, and get your questions answered with the Google Cloud Community: goo.gle/ai-community
Share your feedback and suggestions for future sessions: goo.gle/gcc-event-feedback - Science & Technology
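The pattern typically demoed for multimodal RAG pairs each image with a text summary: the summaries are embedded for retrieval, while the matching original image is what gets handed to the multimodal model for generation. A minimal, library-free sketch of that multi-vector idea (all names here are hypothetical; a real build would use LangChain's `MultiVectorRetriever` with a text embedding model and Gemini, as shown in the session):

```python
from dataclasses import dataclass, field

@dataclass
class MultiVectorStore:
    """Toy multi-vector store: search over text summaries, return raw docs.

    Mirrors the multimodal RAG pattern: images are summarized, summaries
    are indexed for text search, and the original image is returned for
    the model to reason over. Scoring is bag-of-words overlap, a crude
    stand-in for a real text embedding model."""
    summaries: dict = field(default_factory=dict)  # doc_id -> summary text
    docstore: dict = field(default_factory=dict)   # doc_id -> raw content (e.g. image path)

    def add(self, doc_id, summary, raw):
        self.summaries[doc_id] = summary
        self.docstore[doc_id] = raw

    def retrieve(self, query, k=1):
        q = set(query.lower().split())
        ranked = sorted(
            self.summaries,
            key=lambda d: len(q & set(self.summaries[d].lower().split())),
            reverse=True,
        )
        # Return the raw documents, not the summaries they were found by.
        return [self.docstore[d] for d in ranked[:k]]

store = MultiVectorStore()
store.add("img1", "bar chart of quarterly cloud revenue by region", "revenue_chart.png")
store.add("img2", "architecture diagram of a RAG pipeline", "rag_diagram.png")

print(store.retrieve("show me the revenue chart"))  # ['revenue_chart.png']
```

This separation is also why a text embedding model appears in an image pipeline: retrieval runs over the textual summaries, not over the pixels themselves.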
Can you put up the link to your notebook for multimodal RAG?
Fantastic session. Also, can we get Aditya Rane to do more sessions?
And can you please share the notebook shown in the presentation? Thanks.
Excellent Demo. Loved this session.
Can you share the notebook? thanks
Can you share the notebook? Please and thanks
Nice tutorial! From Argentina
Great video! Can you post the notebook?
cheers from Brasil!!
Can you please provide the notebook link for Multi Modal RAG?
Are you running that notebook on Colab Enterprise?
Can I use Cloud Storage in place of the docstore?
Revolutionary
I want to pass an image as the question and ask for details about the matching image. Which embedding will it use? -- They tricked us by using a text embedding model... Not impressed
How can we access the notebook?
@GoogleCloudEvents Collab notebook link please