RAG from scratch: Part 5 (Query Translation -- Multi Query)

  • Published May 19, 2024
  • Query rewriting is a popular strategy for improving retrieval. Multi-query is an approach that rewrites a question from multiple perspectives, performs retrieval for each rewritten question, and takes the unique union of all retrieved docs.
    Slides:
    docs.google.com/presentation/...
    Code:
    github.com/langchain-ai/rag-f...
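The multi-query flow in the description can be sketched without any framework. Here `rewrite_question` and `retrieve` are hypothetical stand-ins for the LLM call and the vector-store search (the names and fake index are assumptions, not the video's code); only the unique-union step is the real logic:

```python
# Minimal sketch of multi-query retrieval: rewrite the question from
# several perspectives, retrieve for each rewrite, take the unique union.

def rewrite_question(question: str, n: int = 3) -> list[str]:
    # Stand-in: in practice an LLM generates n paraphrases of the question.
    return [f"{question} (perspective {i})" for i in range(1, n + 1)]

def retrieve(query: str) -> list[str]:
    # Stand-in: in practice this is a similarity search against a vector store.
    fake_index = {
        "perspective 1": ["doc_a", "doc_b"],
        "perspective 2": ["doc_b", "doc_c"],
        "perspective 3": ["doc_a", "doc_d"],
    }
    return next(docs for key, docs in fake_index.items() if key in query)

def multi_query(question: str) -> list[str]:
    # Retrieve for each rewrite, then take the unique union of all docs,
    # preserving first-seen order.
    seen, union = set(), []
    for q in rewrite_question(question):
        for doc in retrieve(q):
            if doc not in seen:
                seen.add(doc)
                union.append(doc)
    return union

print(multi_query("What is task decomposition?"))
# -> ['doc_a', 'doc_b', 'doc_c', 'doc_d']
```

The union step is why overlapping rewrites don't inflate the context: each doc is passed to the LLM at most once.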

COMMENTS • 15

  • @paraconscious790 · 1 month ago

    God, this is an amazing series from LangChain and Lance!!! Lance is an angel!!! 🙌🙏

  • @anonymous6666 · 3 months ago · +3

    i love lance's hand motions, so freakin' entertaining

  • @Wiktor-rf3tu · 10 days ago

    Great piece of knowledge! I am not a professional Python developer (yet), and the syntax for building a chain with "|" broke my brain. Could you either explain it a little or use more explicit syntax in the future?
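The "|" that confused the commenter is LangChain's LCEL composition operator: conceptually it pipes each step's output into the next step, like function composition. A rough illustration of the idea only, not LangChain's actual implementation (`Step` is a hypothetical class written for this sketch):

```python
# What `a | b` expresses in LCEL, reduced to its core: compose two steps
# so that invoking the chain runs a, then feeds a's output into b.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Called for `self | other`: build a new Step that runs both in order.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Step(lambda q: f"Answer concisely: {q}")
llm = Step(lambda p: p.upper())  # stand-in for a model call

chain = prompt | llm             # same shape as an LCEL chain
print(chain.invoke("what is RAG?"))
# -> ANSWER CONCISELY: WHAT IS RAG?
```

So `prompt | llm | parser` is just "run these left to right, passing each result along".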

  • @mrchongnoi · 3 months ago · +2

    Thank you for the video; I am enjoying the series. I do have a question regarding this method.
    If I think about conversations I have had with others, there are times when either I or the person I am speaking with may say, "I do not understand your question. What are you trying to say? What are you asking me?" Would it be better for the LLM to engage the user to see if there is deeper meaning to the question or questions? Once the LLM gains an understanding, retrieval can take place using Multi Query.
    A person who is an investment expert and asks "What was Tesla's performance in FY2022 compared to FY2023?" will have a different expectation for the answer than a layman who asks the same question.
    Just thinking out loud.

  • @hasszhao · 3 months ago

    Correct me if I'm wrong: is the topic about what the class
    langchain.retrievers.multi_query.MultiQueryRetriever
    does?
    It is very similar to Llama-Index's SubQuestionQueryEngine; the only difference is that Llama-Index breaks the original question down into sub-pieces instead of generating similar questions with an LLM.

  • @jay-dj4ui · 1 month ago

    So the charging profit is tracing? Like a log system? What about LlamaIndex?

  • @markenki · 5 days ago

    Odd that the original question isn't used. Why not use the original question and generate just four additional questions?

  • @user-xv2mx8rx7y · 3 months ago

    How did you do the graphics?

  • @girijeshthodupunuri1300 · 1 month ago

    Can you share the notebook?

  • @B0tch0 · 3 months ago

    How do you use embeddings WITHOUT OpenAI ?????

    • @theartofwar1750 · 2 months ago · +1

      You don't need OpenAI for embeddings. The only benefit of using it is that it is fast, plug-and-play, and potentially more accurate.
      You can use any embedding model you want, e.g. any embedding model from Hugging Face. For example, you can swap out the OpenAI embeddings for a CLIP model from Hugging Face.
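As the reply says, the retriever only needs *some* function mapping text to vectors; nothing ties it to OpenAI. A minimal sketch, with a toy hash-based embedding standing in where a local model (e.g. a Hugging Face sentence-transformers model — an assumption, not the video's code) would be plugged in:

```python
# Toy embedding to show that retrieval only needs text -> vector.
# Replace `embed` with a real local model for actual quality, e.g.
# SentenceTransformer("all-MiniLM-L6-v2").encode (assumption, not shown here).
import hashlib
import math

def embed(text: str, dim: int = 8) -> list[float]:
    # Hash each token into a fixed-size vector, then L2-normalize.
    vec = [0.0] * dim
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

query = embed("query retrieval")
for doc in ["multi query retrieval", "cooking pasta recipes"]:
    print(doc, round(cosine(query, embed(doc)), 3))
```

Everything downstream (indexing, similarity search, multi-query union) is unchanged; only the `embed` function swaps.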

    • @B0tch0 · 2 months ago

      @theartofwar1750 It's as if I tell you to build those shelves from scratch, and then you realize you need a subscription to IKEA. How does that make you feel?
      Using other methods would certainly be more "building RAG from scratch", don't you think?

  • @mitast · 3 months ago

    LCEL is the most confusing thing you guys have ever invented... No need for that sh*t