Summary Evaluators | LangSmith Evaluations - Part 11

  • Published 17 Apr 2024
  • With the rapid pace of AI, developers often face a paradox of choice: how do you pick the right prompt, and how do you trade off LLM quality against cost? Evaluations can accelerate development by providing a structured process for making these decisions. But we've heard that it can be challenging to get started, so we are launching a series of short videos on how to perform evaluations using LangSmith.
    This video explains how to define evaluators that compute customized summary metrics over an entire dataset, rather than per example (a minimal code sketch follows the documentation link below).
    Documentation:
    docs.smith.langchain.com/eval...
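    As a rough illustration only (the video's own code is not included on this page), the snippet below sketches what a summary evaluator can look like with the LangSmith Python SDK: a function that receives all runs and examples from an experiment and returns one aggregate metric. The dataset name, the placeholder target function, and the "output contains the expected answer" criterion are assumptions made for this sketch, not taken from the video.

        # Minimal sketch of a summary evaluator, assuming the LangSmith Python SDK.
        # Dataset name, target function, and pass criterion are hypothetical.
        from langsmith.evaluation import evaluate
        from langsmith.schemas import Example, Run

        def my_app(question: str) -> str:
            # Placeholder for the application under test (e.g. an LLM chain).
            return "42"

        def pass_rate(runs: list[Run], examples: list[Example]) -> dict:
            # Summary evaluators run once over the whole experiment, so they can
            # compute aggregate metrics such as a pass rate across all examples.
            passed = sum(
                1
                for run, example in zip(runs, examples)
                if example.outputs["answer"].lower() in str(run.outputs).lower()
            )
            return {"key": "pass_rate", "score": passed / len(runs)}

        results = evaluate(
            lambda inputs: my_app(inputs["question"]),  # hypothetical target
            data="my-dataset",                          # hypothetical dataset name
            summary_evaluators=[pass_rate],             # applied to the full result set
            experiment_prefix="summary-eval-demo",
        )

    The resulting "pass_rate" score is attached to the experiment as a whole in LangSmith, alongside any per-example evaluator scores.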

COMMENTS • 2

  • @RADKIT • 1 month ago • +1

    Thanks Lance!

  • @yudomaryanto1643 • 1 month ago

    How do we use the results of the evaluation in our LangChain app?