Google Colab Notebook: colab.research.google.com/drive/1JOzbVzrm8_GJAmuh2Qcjsxf5Rg0yK3AG?usp=sharing
Event Slides: www.canva.com/design/DAGCrgbYdc0/Q56HqhQAp_-163pJNm_fmA/view?DAGCrgbYdc0&
I like the way you simplify and explain — starting with the big picture and then breaking it down into the details. ❤
Really good session! Always looking forward to your slide decks and easy explanations!
Awesome as usual! I’m doing all of my development using Flowise and all of this information is useful and mostly transferable to Flowise. Thanks.
Can you guys showcase the function-calling feature with smaller models? Some of us are trying to build locally. Anyway, great work 👍🏻
We'll see when we can slide this in!
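In the meantime, a minimal sketch of how function calling with a smaller local model usually works: you put a tool schema in the prompt and parse the JSON the model emits into a real function call. The tool name (`get_stock_price`), its arguments, and the model output below are all illustrative stand-ins, not a specific model's API.

```python
import json

# Hypothetical tool schema you would include in the prompt for a small
# local model (e.g. one served via Ollama or llama.cpp).
TOOL_SPEC = {
    "name": "get_stock_price",  # illustrative tool name
    "description": "Look up the latest price for a ticker symbol.",
    "parameters": {
        "type": "object",
        "properties": {"ticker": {"type": "string"}},
        "required": ["ticker"],
    },
}

def parse_tool_call(model_output: str, registry: dict):
    """Parse a JSON tool call emitted by the model and dispatch it."""
    call = json.loads(model_output)
    fn = registry[call["name"]]
    return fn(**call["arguments"])

# Fake model output plus a stub implementation, so the flow runs offline.
registry = {"get_stock_price": lambda ticker: {"ticker": ticker, "price": 123.45}}
model_output = '{"name": "get_stock_price", "arguments": {"ticker": "NVDA"}}'
result = parse_tool_call(model_output, registry)
print(result)  # {'ticker': 'NVDA', 'price': 123.45}
```

The heavy lifting for small models is getting them to emit valid JSON reliably; constrained decoding or grammar-based sampling helps a lot there.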
Great video, thanks! Is it possible to include an image in an LLM response? Text from RAG plus an image as base64 sent to the chat app?
It is possible - yes! You just need to handle the image pipeline - but LlamaIndex has pipelines for this built in!
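The base64 part of that handoff is straightforward — here's a minimal sketch of packing the RAG text answer and an inline image into one JSON payload for the chat UI. The payload keys (`text`, `image`) are illustrative, not a fixed LlamaIndex schema.

```python
import base64
import json

def build_response(answer_text: str, image_bytes: bytes, mime: str = "image/png") -> str:
    """Bundle a RAG answer and an image (as a base64 data URI) into one payload."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    payload = {
        "text": answer_text,                   # the RAG text answer
        "image": f"data:{mime};base64,{b64}",  # inline data URI the UI can render
    }
    return json.dumps(payload)

# Tiny fake "image" bytes so this runs without any files on disk.
resp = build_response("Revenue grew 12% YoY.", b"\x89PNG...")
print(json.loads(resp)["text"])  # Revenue grew 12% YoY.
```

On the frontend, a data URI like this can go straight into an `<img src=...>` tag, so no separate image-hosting endpoint is needed for small images.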
Hello Sir, can we add query transformations to the whole pipeline above? My goal is to build a RAG pipeline that chats with multiple 10-K reports, using a domain-specific LLM for embeddings and query transformations; of course I also want to leverage the auto-retrieval tool. Can you suggest an approach?
Yes, you can!
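A minimal sketch of the idea — a HyDE-style query transformation in front of the retriever. In LlamaIndex this is roughly what `HyDEQueryTransform` plus `TransformQueryEngine` do; here the LLM and retriever are stubbed so the flow runs offline, and the function names are illustrative.

```python
def hyde_transform(query: str, llm) -> str:
    """Ask the LLM to write a hypothetical answer passage, then retrieve
    against that passage instead of the raw query (the HyDE idea)."""
    prompt = f"Write a short passage that answers: {query}"
    return llm(prompt)

def answer(query: str, llm, retriever) -> list:
    transformed = hyde_transform(query, llm)
    return retriever(transformed)  # retrieval uses the hypothetical passage

# Stubs standing in for a domain-specific LLM and a vector index built
# over the 10-K filings.
stub_llm = lambda prompt: "Hypothetical passage about 10-K risk factors."
stub_retriever = lambda text: [f"chunk matched against: {text}"]

chunks = answer("What are the main risk factors?", stub_llm, stub_retriever)
print(chunks[0])
```

In the real pipeline you would swap `stub_retriever` for the auto-retriever over your 10-K index, so the transformed query also drives metadata filtering (e.g. by company or fiscal year).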