Get the notebook for free:
upaspro.com/llama-stack-building-rag-agents-from-meta/
So perfect
Is there an actual difference between this and others like LangChain? I wonder if there is a benchmark for comparing the performance of RAG systems.
As I mentioned, it is bespoke for Llama models. Also, they handle function-calling compatibility in a systematic way. There are different RAG benchmarks: I saw CRAG (arxiv.org/pdf/2406.04744) at NeurIPS, and there are others like RAGBench (arxiv.org/pdf/2407.11005).
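To make the function-calling point concrete, here is a rough sketch of the general pattern: you describe a tool with a JSON schema, the model returns a structured tool call, and your code dispatches it. The `get_weather` function, the schema layout, and the hard-coded model response below are all illustrative assumptions, not Llama Stack's actual client API, so treat it as the shape of the workflow rather than working client code.

```python
import json

# Illustrative tool: any plain Python function you want the model to call.
def get_weather(city: str) -> str:
    # Hypothetical stand-in; a real tool would hit an API or a database.
    return f"Sunny and 22C in {city}"

# JSON schema describing the tool, in the style most function-calling APIs expect.
weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Fake model output standing in for a real chat-completion call; the actual
# Llama Stack or LangChain response objects will look different.
model_response = {"tool": "get_weather", "arguments": json.dumps({"city": "Paris"})}

# Dispatch the tool call the model asked for and use the result.
tools = {"get_weather": get_weather}
result = tools[model_response["tool"]](**json.loads(model_response["arguments"]))
print(result)  # -> Sunny and 22C in Paris
```

The framework differences mostly come down to how much of this loop (schema handling, retries, passing results back to the model) is managed for you.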
They are advocating for function calling, but on their website they haven't fully covered lots of applicable functions …