Evaluation Primitives | LangSmith Evaluations - Part 2

  • Published Jun 9, 2024
  • With the rapid pace of AI, developers often face a paradox of choice: how to choose the right prompt, and how to trade off LLM quality against cost. Evaluations can accelerate development by providing a structured process for making these decisions. But we've heard that it can be challenging to get started, so we are launching a series of short videos explaining how to perform evaluations using LangSmith.
    This video introduces the primary components of LangSmith evaluation, including tracing (along with metadata, feedback, and tags), datasets, and evaluators; a minimal sketch of how these pieces fit together follows the documentation link below.
    Documentation:
    docs.smith.langchain.com/trac...
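
To make the components concrete, here is a minimal sketch of how tracing, datasets, and evaluators fit together using the LangSmith Python SDK (`langsmith`). The dataset name, the example question/answer pair, and the `answer_question` target function are illustrative placeholders rather than code from the video, and the exact SDK surface may differ slightly by version; treat this as an orientation aid, not a definitive implementation.

```python
import os
from langsmith import Client, traceable
from langsmith.evaluation import evaluate

# Tracing: the SDK reads these environment variables to send traces to LangSmith.
# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = "<your-api-key>"

client = Client()

# Dataset: a named collection of input/output examples to evaluate against.
# "qa-demo" is a hypothetical dataset name used only for this sketch.
dataset = client.create_dataset("qa-demo", description="Toy QA dataset")
client.create_examples(
    inputs=[{"question": "What is LangSmith?"}],
    outputs=[{"answer": "An LLM observability and evaluation platform."}],
    dataset_id=dataset.id,
)

# Tracing with metadata and tags: the decorator records each call as a run.
@traceable(tags=["demo"], metadata={"model": "placeholder"})
def answer_question(inputs: dict) -> dict:
    # Placeholder app logic; in practice this would call your LLM or chain.
    return {"answer": "An LLM observability and evaluation platform."}

# Evaluator: scores each run against its reference example and returns feedback.
def exact_match(run, example):
    score = int(run.outputs["answer"] == example.outputs["answer"])
    return {"key": "exact_match", "score": score}

# Run the evaluation over the dataset and log the results as an experiment.
results = evaluate(
    answer_question,
    data="qa-demo",
    evaluators=[exact_match],
    experiment_prefix="primitives-demo",
)
```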

COMMENTS • 1

  • @ScottzPlaylists, 2 months ago, +2

    Thanks for this useful content. 👍 Looking ahead, which will be the best LLM Ops platforms? I suggest:
    LangChain,
    AutoGen,
    DSPy.
    Any more suggestions ❓