RAG vs Fine-tuning

  • Published 25 Oct 2024
  • This week, we’re discussing RAG vs Fine-tuning, a paper that explores a pipeline for fine-tuning and RAG and presents the tradeoffs of both across multiple popular LLMs, including Llama2-13B, GPT-3.5, and GPT-4. The authors propose a pipeline with multiple stages: extracting information from PDFs, generating questions and answers, using them for fine-tuning, and leveraging GPT-4 to evaluate the results. Overall, the results show how systems built on LLMs can be adapted to incorporate and respond with knowledge along a dimension critical to a specific industry, paving the way for further applications of LLMs in other industrial domains.
    Read the transcript & listen to the podcast: arize.com/blog...
    Link to paper: arxiv.org/abs/...
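
The four pipeline stages described above can be sketched as a minimal orchestration skeleton. This is an illustrative outline only, not the paper's implementation: all function names are hypothetical, documents stand in for parsed PDFs as plain strings, and the QA-generation and evaluation steps, which the paper delegates to LLMs (including GPT-4 as a judge), are stubbed with trivial logic.

```python
# Hypothetical sketch of the paper's four-stage pipeline.
# Real systems would call a PDF parser in stage 1 and LLMs in stages 2-4.

def extract_chunks(documents):
    """Stage 1: split source documents (plain strings here, PDFs in the paper)
    into passages."""
    return [p for doc in documents for p in doc.split("\n\n") if p.strip()]

def generate_qa(passages):
    """Stage 2: derive question/answer pairs; the paper prompts an LLM for this."""
    return [{"question": f"What does the source say about: {p[:40]}?", "answer": p}
            for p in passages]

def format_for_fine_tuning(qa_pairs):
    """Stage 3: convert QA pairs into supervised fine-tuning records."""
    return [{"prompt": qa["question"], "completion": qa["answer"]} for qa in qa_pairs]

def evaluate(records):
    """Stage 4: score the results; the paper uses GPT-4 as the judge.
    Here we just report the fraction of records with non-empty completions."""
    return sum(1 for r in records if r["completion"].strip()) / len(records)

docs = ["Crop yields depend on rainfall.\n\nSoil quality also matters."]
records = format_for_fine_tuning(generate_qa(extract_chunks(docs)))
print(evaluate(records))
```

The value of the staged structure is that each step produces an inspectable artifact (passages, QA pairs, training records, scores), so RAG and fine-tuning variants can share the same extraction and evaluation stages and be compared fairly.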
