It seems like RAG (a knowledge base) has not been used in the architecture; LlamaIndex is used instead. So the LLM (foundation model) is building the query with the help of the user's NLP input + few-shot examples + table metadata, right?
Correct. There is no knowledge base, but the approach for pulling table metadata and for identifying the most relevant example queries is exactly the same as in RAG: identifying the similarity between the user input and the various elements.
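To make the exchange above concrete, here is a toy sketch of that RAG-style selection step. In the real pipeline an embedding model (e.g. via LlamaIndex) would replace the bag-of-words vectors, and the field names (`question`, `sql`, `name`, `description`) are illustrative assumptions, not the actual schema used in the video:

```python
# Toy sketch: pick the few-shot examples and table metadata most similar
# to the user's question, then assemble the prompt for the foundation model.
# Bag-of-words cosine similarity stands in for real embeddings.
from collections import Counter
from math import sqrt

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts -- a stand-in for embedding similarity."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def build_prompt(question, few_shots, table_metadata, k=2):
    """Select the k closest few-shot examples and tables, then build the prompt."""
    shots = sorted(few_shots,
                   key=lambda s: similarity(question, s["question"]),
                   reverse=True)[:k]
    tables = sorted(table_metadata,
                    key=lambda t: similarity(question, t["description"]),
                    reverse=True)[:k]
    parts = ["Relevant tables:"]
    parts += [f"- {t['name']}: {t['description']}" for t in tables]
    parts.append("Examples:")
    parts += [f"Q: {s['question']}\nSQL: {s['sql']}" for s in shots]
    parts.append(f"Q: {question}\nSQL:")
    return "\n".join(parts)
```

For example, asking "show sales per region" against examples for sales-by-region and customer-listing queries would pull in only the sales example and the orders table, keeping the prompt small and on-topic.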
Many thanks for your prompt answers. Can't wait to see the next video
Just uploaded the video. Curious to learn what you think
Great video. Could you please explain, here or in a separate video, the Glue and metadata extraction part?
Thank you. Sure. Will do so during the coming weekend :)
I would also be grateful if you could share a walkthrough of how to create few-shot examples.
Yep. Will make sure to include it.