RAPTOR - Advanced RAG with LangChain

  • Published 26 Nov 2024

COMMENTS • 44

  • @andreypetrunin5702 8 months ago +1

    Thank you! Our knowledge keeps getting broader and broader! )))

  • @8eck 7 months ago +1

    Very interesting approach to finding the optimal cluster size. I did something similar in the past via genetic algorithms.

  • @kai1234763 8 months ago +1

    Very impressive! Thanks for sharing!
    It would be great to see a comparison of the results: once with RAPTOR, and once simply vectorizing the documents in the classic way.

    • @codingcrashcourses8533 8 months ago

      Yes, but so far I have only seen this as a proof of concept. I think it's not easy to make it work fully dynamically with a large set of raw data. Where do you make the cutoff?

  • @henkhbit5748 8 months ago +1

    Great video, better than the LangChain video. This concept has a lot of overhead, and thus too much of a performance impact to be practical. It's a similar approach to master-detail summarization, but with extra steps, i.e. clustering etc.

  • @reticentrex8446 8 months ago

    Always love your vids mate, cheers!

  • @MEvansMusic 7 months ago

    What is the purpose of the dimensionality reduction step prior to clustering? Is it because clustering is computationally expensive and reducing beforehand helps, or is there a different reason?
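    A hedged sketch of what this question is getting at: the RAPTOR paper reduces dimensionality (with UMAP) before fitting a Gaussian Mixture Model, partly for speed, but mainly because a GMM estimates a covariance matrix per component, which is statistically hopeless in embedding-sized dimensions. The sketch below uses scikit-learn's PCA as a stand-in for UMAP; all sizes and names are illustrative, not the video's code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
embeddings = rng.normal(size=(200, 1536))  # e.g. OpenAI-sized embedding vectors

# Reduce 1536-d vectors to a low-dimensional space before clustering.
# A GMM with full covariance needs ~1536^2 parameters per component in
# the original space, far more than 200 documents can support; in 10-d
# the fit is cheap and well-conditioned.
reduced = PCA(n_components=10, random_state=42).fit_transform(embeddings)

gmm = GaussianMixture(n_components=3, random_state=42).fit(reduced)
labels = gmm.predict(reduced)
print(reduced.shape, labels.shape)  # (200, 10) (200,)
```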

  • @trashchenkov 7 months ago +1

    If we have completed all the steps to create a vector database using the RAPTOR method, and then the task of adding new documents appears, do we need to do everything all over again? Regularly updating the database can then become very expensive.

    • @codingcrashcourses8533 7 months ago +2

      Yes, RAPTOR is not well suited for data that has to be updated regularly.

  • @8eck 7 months ago

    So you are filtering out any clustered points whose probability of belonging to a specific cluster is lower than a specified threshold?
    I hadn't seen the GMM algorithm before; it looks very interesting.
    Basically, you are filtering outliers by providing that threshold. Super cool.
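    A minimal sketch of the thresholding this comment describes: a Gaussian Mixture Model gives each point a membership probability per cluster, and a point is kept in a cluster only if that probability clears a threshold. This both drops outliers and allows one document to belong to several clusters. The data and names below are illustrative assumptions, not the video's code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated blobs of 50 points each.
points = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(50, 2)),
    rng.normal(loc=5.0, scale=0.3, size=(50, 2)),
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(points)
probs = gmm.predict_proba(points)  # shape (100, 2); each row sums to 1

threshold = 0.5
# For each cluster, keep the indices whose membership probability clears
# the threshold; a point below it in every cluster is treated as an outlier.
clusters = [np.where(probs[:, k] > threshold)[0] for k in range(2)]
print([len(c) for c in clusters])
```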

  • @georgefraiha6597 8 months ago

    Great video, love it. But does it work for a use case with thousands of periodically changing files, or will it be very expensive?

    • @codingcrashcourses8533 8 months ago

      Good point! Probably not the best idea, since you have to recalculate the clusters.

  • @corpsed5201 5 months ago

    Hello, I am trying to build a RAG chatbot for academic books. What RAG techniques do you suggest I adopt? I have watched a lot of videos lately but cannot decide on one method. Is RAPTOR any good for my use case? I need minimum latency; I have also watched videos about fusion and CRAG, but I think they are too slow for a chatbot response.

  • @rowdyjuneja 7 months ago

    Great video! Question - How does this process evolve in a production setting? What happens if I want to add new documents? It seems like you would need to rerun the entire process.

    • @codingcrashcourses8533 7 months ago

      Yes, that's a big downside of this. You only want to perform RAPTOR on data that does not change.

  • @YueNan79 7 months ago

    Hey, I have a question: what if the combined documents of a cluster exceed the maximum token count of the summary chain?
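    One common way to handle this (a hedged sketch, not the video's implementation): split an oversized cluster into token-budgeted batches, summarize each batch, then summarize the partial summaries. The 4-characters-per-token estimate and all function names below are illustrative assumptions.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # A real pipeline would use the model's tokenizer instead.
    return max(1, len(text) // 4)

def batch_by_token_budget(docs: list[str], budget: int) -> list[list[str]]:
    """Greedily pack documents into batches that each fit the budget."""
    batches, current, used = [], [], 0
    for doc in docs:
        cost = estimate_tokens(doc)
        if current and used + cost > budget:
            batches.append(current)      # flush the full batch
            current, used = [], 0
        current.append(doc)
        used += cost
    if current:
        batches.append(current)
    return batches

docs = ["lorem ipsum " * 50] * 6          # six ~150-token documents
batches = batch_by_token_budget(docs, budget=400)
print(len(batches))  # 3 batches of 2 documents each
```

    Each batch would then go through the summary chain separately, and the resulting summaries would be concatenated (or summarized once more) to form the cluster's node.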

  • @JeisonJimenez-tb3nc 7 months ago

    Excellent video. I have a question: I want to ask questions of my documents (there are more than 3,500). Currently I am splitting them into small chunks, vectorizing them, and storing them locally with ChromaDB, and I am using LangChain's RetrievalQA class, but I am not getting accurate results, only ambiguous answers. I am using the LLM Mistral-7B. How can I adapt this RAPTOR approach to my use case? Is it possible to save this cluster classification with my vectors in ChromaDB?

    • @codingcrashcourses8533 7 months ago +1

      You can use RAPTOR, but the first and best way to improve RAG performance is better data. 3,500 documents is quite a lot for RAG.

  • @Girijeshcse 7 months ago

    I have a similar use case with a long context where the output should also be large, e.g. from documents about all the restaurants in a town, asking it to list all the restaurants that serve Italian food. I tried RAPTOR, but it's not working in this case. Can you suggest what can be done here?

    • @codingcrashcourses8533 7 months ago

      I will release a video next week on routing and having an LLM perform SQL queries. Your data sounds as if it were tabular, so maybe that might be worth a look.

    • @Girijeshcse 7 months ago

      @codingcrashcourses8533 It's not tabular data; it is data from an internal website where all the projects and related information are shared. I would say it's more hierarchical, so I tried to use a knowledge graph, but we have to use an OSS LLM, which is not able to do proper NL-to-Cypher generation.
      I am wondering whether some vector DB approach with an indexing strategy and some chunking tweaks could help :(
      Also, in the original question I was trying to describe a similar problem :D

  • @loicbaconnier9150 8 months ago +1

    Thanks, very good video.
    Why don't you try running clustering on each of the 3 main clusters? Worth a test, no?

    • @codingcrashcourses8533 8 months ago +1

      What exactly do you mean by clustering on the 3 main clusters? I did so and ended up with a single "cluster". :-)

    • @loicbaconnier9150 8 months ago

      I mean running clustering within each cluster to get subclusters (so there is less text to summarize).

    • @codingcrashcourses8533 8 months ago +1

      @loicbaconnier9150 Clustering does not work that way; it works bottom-up. You start with many documents and end up with a few clusters, but you cannot take a large cluster and identify subclusters. You are probably thinking of Principal Component Analysis.

    • @loicbaconnier9150 8 months ago

      No, I mean you take the whole dataset;
      the initial clustering gives you 3 clusters;
      what I would try is to run clustering on each of the 3.

    • @codingcrashcourses8533 8 months ago +1

      @loicbaconnier9150 Sorry, maybe I am stupid, but I think this is exactly what I did: I created clusters, summarized the clusters, embedded the summaries, and clustered those again. You do that bottom-up. If that is not what you mean, could you rephrase your question with an example?
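    The bottom-up loop described in this exchange can be sketched as follows. This is a hedged illustration of the recursion, not the video's code: the `embed` and `summarize` helpers are placeholders (in practice an embedding model and an LLM summarization chain), and KMeans stands in for the GMM-based clustering.

```python
import numpy as np
from sklearn.cluster import KMeans

def embed(texts: list[str]) -> np.ndarray:
    # Placeholder embedding: deterministic-per-text random vectors.
    return np.array([
        np.random.default_rng(abs(hash(t)) % (2**32)).normal(size=8)
        for t in texts
    ])

def summarize(texts: list[str]) -> str:
    # Placeholder summary: in practice an LLM summarization chain.
    return " | ".join(t[:20] for t in texts)

def raptor_tree(docs: list[str]) -> list[list[str]]:
    """Cluster, summarize, embed the summaries, repeat until one node remains."""
    levels = [docs]
    current = docs
    while len(current) > 1:
        k = max(1, len(current) // 3)   # shrink by roughly 3x per level
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embed(current))
        current = [
            summarize([t for t, lab in zip(current, labels) if lab == c])
            for c in range(k)
        ]
        levels.append(current)
    return levels

levels = raptor_tree([f"document {i}" for i in range(9)])
print([len(level) for level in levels])  # [9, 3, 1]
```

    All levels (leaves, intermediate summaries, and the root summary) would then be embedded into one vector store, which is what lets retrieval match both detail-level and overview-level questions.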

  • @meisherenow 2 months ago

    2-d is nice for visualization, but I'd have gone with something like 20-d for clustering.

  • @mrchongnoi 8 months ago

    How would this work on a 259-page document like the Tesla 2020 Financial Report?

    • @codingcrashcourses8533 8 months ago

      Depends on the model. With something like Gemini, you could probably start with the complete document; with other models you have to start with independent chapters. RAPTOR works best with long-context-window models.

  • @IljaUchiah1997 6 months ago

    In my street there is a restaurant called Bella Vista, and the owner is Giovanni. Do you live in Osnabrück?

  • @timtensor6994 8 months ago

    I was just wondering what your setup specs are. For me, a simple LangChain query takes around 4-5 minutes.

    • @codingcrashcourses8533 8 months ago

      Are you behind a VPN?

    • @timtensor6994 8 months ago

      @codingcrashcourses8533 No, not at all. I am also using local models and embeddings.

  • @saravanannatarajan6515 8 months ago

    Such a great topic, and you covered it very nicely, thanks!
    Can we use the clustering method mentioned in the video for a summarization task (I find it interesting)? Please share if you have any ideas.

    • @codingcrashcourses8533 8 months ago

      Probably! But I would use models with a large context window and try to provide the text chapter by chapter. This approach is more suited for RAG.