Manually Curated Datasets | LangSmith Evaluations - Part 3

  • Published Apr 7, 2024
  • With the rapid pace of AI, developers often face a paradox of choice: which prompt to use, and how to trade off LLM quality against cost. Evaluations can accelerate development by providing a structured process for making these decisions. But we've heard that it is challenging to get started, so we are launching a series of short videos explaining how to perform evaluations using LangSmith.
    This video introduces how to create, edit, and version your own evaluation dataset using the LangSmith SDK (a minimal sketch follows the links below).
    Documentation:
    docs.smith.langchain.com/eval...
    Notebook:
    github.com/langchain-ai/langs...
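
    The sketch below shows the dataset workflow the video covers, using the LangSmith Python SDK (`pip install langsmith`, with a LANGCHAIN_API_KEY set). The dataset name and example contents are illustrative placeholders, not taken from the video.

    ```python
    # Minimal sketch: create, edit, and version a dataset with the LangSmith SDK.
    from langsmith import Client

    client = Client()

    # Create a dataset to hold question/answer examples.
    dataset = client.create_dataset(
        dataset_name="qa-eval-demo",  # illustrative name
        description="Manually curated Q&A pairs for evaluation.",
    )

    # Add a few examples; each pairs inputs with reference outputs.
    client.create_examples(
        inputs=[
            {"question": "What is LangSmith?"},
            {"question": "What does an evaluator do?"},
        ],
        outputs=[
            {"answer": "A platform for tracing and evaluating LLM apps."},
            {"answer": "It scores an app's output against a reference."},
        ],
        dataset_id=dataset.id,
    )

    # Edit an existing example in place; LangSmith records a new
    # dataset version when examples change.
    example = next(client.list_examples(dataset_id=dataset.id))
    client.update_example(
        example_id=example.id,
        outputs={"answer": "A platform for tracing and evaluating LLM applications."},
    )
    ```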
