RAG Evaluation (Answer Hallucinations) | LangSmith Evaluations - Part 13

  • Published 23 Apr 2024
  • With the rapid pace of AI, developers often face a paradox of choice: how do you pick the right prompt, or trade off LLM quality against cost? Evaluations can accelerate development by giving you a structured process for making these decisions. But we've heard that getting started is challenging, so we are launching a series of short videos focused on explaining how to perform evaluations using LangSmith.
    This video focuses on RAG (Retrieval-Augmented Generation). We show you how to check that your outputs are grounded in the documents retrieved by your RAG pipeline. You can use LangSmith to create a set of test cases, run an evaluation against the retrieved documents, and dive into output traces, helping you ensure your responses are free of hallucinations. A minimal code sketch of this workflow follows the documentation link below.
    Documentation:
    docs.smith.langchain.com/cook...
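
    The video itself doesn't show code, but the workflow it describes roughly maps to the sketch below. Everything here is illustrative rather than taken from the video: the dataset name "rag-test-cases", the placeholder my_rag_chain, the predict_rag target, and the grading prompt are all assumptions. Only the general pattern (a target function plus a custom evaluator passed to langsmith.evaluation.evaluate) reflects LangSmith's documented usage.

    ```python
    from langchain_openai import ChatOpenAI
    from langsmith import Client
    from langsmith.evaluation import evaluate

    # Requires LANGCHAIN_API_KEY and OPENAI_API_KEY in the environment.
    client = Client()

    # A tiny test dataset of questions (skip if one already exists).
    dataset = client.create_dataset("rag-test-cases")  # hypothetical name
    client.create_examples(
        inputs=[{"question": "What does LangSmith help you evaluate?"}],
        dataset_id=dataset.id,
    )

    def my_rag_chain(question: str) -> tuple[str, list[str]]:
        """Placeholder: swap in your real retriever + generator pipeline.
        It must return the answer AND the retrieved documents so the
        evaluator can check grounding."""
        docs = ["LangSmith lets you run evaluations over datasets of test cases."]
        answer = "LangSmith helps you evaluate LLM apps against test datasets."
        return answer, docs

    def predict_rag(inputs: dict) -> dict:
        """Target function: called once per dataset example."""
        answer, docs = my_rag_chain(inputs["question"])
        return {"answer": answer, "contexts": docs}

    def groundedness(run, example) -> dict:
        """LLM-as-judge evaluator: score 1 if every claim in the answer
        is supported by the retrieved documents, 0 otherwise."""
        answer = run.outputs["answer"]
        facts = "\n\n".join(run.outputs["contexts"])
        judge = ChatOpenAI(model="gpt-4o", temperature=0)
        verdict = judge.invoke(
            "You are grading a RAG answer for hallucinations.\n"
            f"FACTS:\n{facts}\n\nANSWER:\n{answer}\n\n"
            "Reply GROUNDED if every claim in the answer is supported by "
            "the facts, otherwise reply HALLUCINATED."
        )
        return {"key": "groundedness", "score": int("GROUNDED" in verdict.content)}

    # Run the experiment; scores and traces appear in the LangSmith UI.
    results = evaluate(
        predict_rag,
        data="rag-test-cases",
        evaluators=[groundedness],
        experiment_prefix="rag-hallucination-check",
    )
    ```

    After the run, each example's trace in LangSmith shows the retrieved documents alongside the graded answer, which is where you would drill in on any example scored 0.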

COMMENTS • 3

  • @mohsenghafari7652 • a month ago

    Hi dear friend.
    Thank you for your efforts.
    How can I use this tutorial with PDFs in other languages (for example Persian)?
    What would the approach be?
    I have made many attempts and tested different models, but the results when asking questions about PDFs are not good or accurate!

  • @palashjyotiborah9888 • a month ago • +1

    Mic quality could be improved.

    • @SciFiFactory • a month ago • +2

      ... with a $30 mic ...
      Preparing and recording these presentations is so much more work.