Fine-tuning or RAG?

  • Published 14 Aug 2024
  • arxiv.org/abs/...
    0:00 Comparing Fine-tuning and Retrieval Augmented Generation
    0:34 Using LLMs for Specialized Domains
    1:13 Fine-tuning vs In-context Learning Techniques
    2:23 Causes of LLM Factual Errors and Hallucinations
    3:50 Constructing the Experiment Dataset
    4:45 Models Tested and Accuracy Comparison
    5:51 RAG Outperforms Fine-tuning Across Models
    6:20 Why RAG Performs Better Than Fine-tuning
    7:01 Caveats and Open Questions
    7:39 Conclusion and Wrap-up
    Video explaining MMLU and other benchmarks: • LLM benchmarks

COMMENTS • 6

  • @karinlv890 • 1 month ago

    Thank you for saving my group meeting! Your video helps a lot!

  • @vida91963 • 2 months ago

    Nice presentation thank you!

  • @HampusAhlgren • 2 months ago

    Just wanted to say I really appreciate your videos. Everything is short and concise and I love that you’re always using papers as the foundation for the conclusions. Keep it up!

    • @VivekHaldar • 2 months ago

      Thanks for the kind words. That's the idea!

  • @sasha297603ha • 2 months ago

    Very interesting paper, thanks for covering!

  • @RoulDukeGonzo • 1 month ago

    Seems clear that for 'current events' rag is going to win, but for broader, domain specific themes or logic, how does fine tuning stack up? E.g. create code using our internal suite of APIs... If context is big enough, icl should be fine, but rag may miss some key docs based on semantic similarity alone... I guess... I should write a paper 😂