RAPTOR: New Retrieval Method for RAG | Chat with your Data using GPT

  • Published 14 Oct 2024
  • RAPTOR introduces a novel approach to retrieval-augmented language models (i.e., connecting your data to a language model) by constructing a recursive tree structure from documents. This allows more efficient and context-aware information retrieval across large texts, addressing a common limitation of traditional RAG pipelines, namely answering holistic questions that span an entire document (a rough sketch of the idea follows the description links below).
    ☎️ Do you need any career or technical help? Book a call with me: calendly.com/m...
    *******************
    LET'S CONNECT!
    *******************
    Join Discord Channel: / discord
    ✅ You can contact me at:
    LinkedIn: / mohammad-ghodratigohar
    Email: mo.ghodrati95@gmail.com
    Twitter: / mg_cafe01
    🔔 Subscribe for more cloud computing, data, and AI analytics videos
    by clicking on the subscribe button so you don't miss anything.
    #RAG #chatgpt #Chat_with_data #Raptor
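
    To make the description above more concrete, here is a minimal, hypothetical sketch of a RAPTOR-style tree build: leaf chunks are embedded, clustered, and each cluster is summarized into a parent node, repeating until a single root remains. This is not the video's or the paper's code; embed() and summarize() are placeholders for a real embedding model and an LLM summarizer, and scikit-learn KMeans stands in for the clustering step.

    ```python
    # Minimal, hypothetical RAPTOR-style tree build (placeholders, not the actual implementation).
    from typing import List
    import numpy as np
    from sklearn.cluster import KMeans

    def embed(texts: List[str]) -> np.ndarray:
        # Placeholder embeddings: swap in OpenAI / sentence-transformers vectors.
        rng = np.random.default_rng(0)
        return rng.random((len(texts), 8))

    def summarize(texts: List[str]) -> str:
        # Placeholder summary: swap in an LLM call that condenses the cluster.
        return " ".join(texts)[:500]

    def build_tree(chunks: List[str], branching: int = 3) -> List[List[str]]:
        """Cluster chunks, summarize each cluster, and repeat until one root node remains."""
        levels = [chunks]
        current = chunks
        while len(current) > 1:
            k = max(1, len(current) // branching)
            labels = KMeans(n_clusters=k, n_init=10).fit_predict(embed(current))
            # Each cluster becomes a parent node summarizing its children.
            current = [summarize([c for c, lbl in zip(current, labels) if lbl == cl])
                       for cl in sorted(set(labels))]
            levels.append(current)
        return levels

    levels = build_tree([f"chunk {i}" for i in range(9)])
    print([len(level) for level in levels])  # e.g. [9, 3, 1] -- leaves, summaries, root
    ```

    At query time, both the summary nodes and the leaf chunks can be embedded into the same vector store, so the retriever can match a high-level summary for holistic questions or a specific chunk for detail questions.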

COMMENTS • 5

  • @enriquecazap3780 7 months ago +1

    You are incredible, I can learn and at the same time enjoy and be happy with your excellent explanation, many many thanks

  • @schance1666 7 months ago +1

    Hey MG - your vids are the best (and the most fun!) on this topic. I have tried as best I can, but no one (including on Stack Exchange) has been able to help with my problem. I'm trying to chat with my db (which works fine) but when I need it to follow 'rules' it just ignores them. Is there a way to force the system to follow your 'rules'? (Like if you are matching people according to a long list of criteria). Do you have any vids on this?

  • @trashchenkov 6 months ago

    Thanks for the video! What should we do if we want to add new texts to our existing vector DB? Should we reprocess all the steps from the very beginning?

  • @miladnasiri9920 7 months ago +1

    💪perfect

  • @IdPreferNot1 7 months ago

    Any way to estimate token counts for larger embeddings?