RAG Evaluation (Document Relevance) | LangSmith Evaluations - Part 14

  • Published 29 Apr 2024
  • Evaluations can accelerate LLM app development, but getting started can be challenging. With the rapid pace of AI, developers face a paradox of choice: how do you pick the right prompt, or trade off LLM quality against cost? Evaluations provide a structured process for making these decisions, so we are launching a series of short videos explaining how to perform evaluations using LangSmith.
    This video focuses on RAG (Retrieval Augmented Generation). We show you how to check that your retrieved documents are relevant to the input question: use LangSmith to create a set of test cases, run an evaluation against the retrieved documents, and dive into output traces to confirm relevance (see the sketch after the documentation link below).
    Documentation:
    docs.smith.langchain.com/cook...
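
    Below is a minimal sketch of what such a document-relevance evaluation can look like with the LangSmith Python SDK. The retriever (retrieve_docs), the relevance grader (grade_relevance), and the dataset/experiment names are illustrative assumptions, not code from the video:

    from langsmith import Client
    from langsmith.evaluation import evaluate

    client = Client()

    # 1. Create a small dataset of test questions (names are illustrative).
    dataset = client.create_dataset("rag-doc-relevance-demo")
    client.create_examples(
        inputs=[{"question": "What is LangSmith?"}],
        outputs=[{"answer": "A platform for tracing and evaluating LLM apps."}],
        dataset_id=dataset.id,
    )

    # 2. Target function: return the retrieved documents so the
    #    evaluator can grade them directly.
    def rag_pipeline(inputs: dict) -> dict:
        docs = retrieve_docs(inputs["question"])  # hypothetical retriever
        return {"documents": [d.page_content for d in docs]}

    # 3. Custom evaluator: score whether the retrieved documents are
    #    relevant to the question (grade_relevance is a hypothetical
    #    LLM grader returning 0 or 1).
    def document_relevance(run, example) -> dict:
        score = grade_relevance(example.inputs["question"], run.outputs["documents"])
        return {"key": "document_relevance", "score": score}

    # 4. Run the evaluation; scores and traces appear in the LangSmith UI.
    evaluate(
        rag_pipeline,
        data="rag-doc-relevance-demo",
        evaluators=[document_relevance],
        experiment_prefix="rag-doc-relevance",
    )

    Because the target function emits the retrieved documents rather than only the final answer, the evaluator can grade retrieval quality in isolation, and each experiment row in LangSmith links back to the full trace for inspection.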
