Workaround OpenAI's Token Limit With Chain Types

  • Published May 6, 2024
  • Twitter: twitter.com/gregkamradt
    Newsletter: mail.gregkamradt.com/signup
    Longer Prompts w/ LangChain - Get past your model's token limit using alternative chain types
    LangChain documentation
    langchain.readthedocs.io/en/l...
    Code: github.com/gkamradt/langchain...
    0:00 - Intro To Problem
    1:07 - Diagram
    1:54 - Diagram - Stuffing
    3:02 - Diagram - Map Reduce
    4:37 - Diagram - Refine
    5:45 - Diagram - Map Re-Rank
    7:21 - Code - LangChain
    8:39 - Code - Stuff
    9:40 - Code - Map Reduce
    12:02 - Code - Refine
    13:22 - Code - Map Rerank

COMMENTS • 125

  • @jefferychen8330 · 1 month ago

    This tutorial is really well-structured. I really like how you connect the current video with previous ones. Thanks so much!

  • @leromerom · 1 year ago +25

    I appreciate that your videos always have two phases: the explanation part, and then you go the extra mile to explain the details. Great work!

    • @DataIndependent · 1 year ago +1

      Nice! Question for you, do you prefer if I do:
      * Explanation #1, Code #1
      * Explanation #2, Code #2
      or
      * Code #1, Code #2
      * Explanation #1, Explanation #2
      Unsure which method is better for everyone.

    • @leromerom · 1 year ago +4

      @DataIndependent Explanations first, then code. Option #1 I think is best.

    • @kunalchandra9869 · 1 month ago

      @DataIndependent The former method would be better; it's easier to connect with the theoretical explanation if the practical part is done alongside it.

  • @vk2875 · 1 day ago

    Amazing tutorial on this subject. Really appreciate your passion for covering it in such depth. Thank you!!!

  • @horseheadhunchback1990 · 1 year ago +1

    This is a great series. Thank you for your work!

  • @nattapongthanngam7216 · 20 days ago

    Appreciate the clear explanation of the token limit.

  • @sifisomalinga9342 · 1 year ago +2

    This is genius content. Thanks for your amazing work.

  • @xorlop · 1 year ago +1

    Wow, I am just stunned. This video is so helpful and informative. Thank you so so much!

    • @DataIndependent · 1 year ago

      Nice! That's great. Soon it won't be as big of a deal with gpt4-32k

  • @retardedpenguin1 · 1 year ago +5

    I very rarely click like or dislike on videos... but this one is by far, one of the most helpful videos I've found for what we're working on. You explained everything extremely clearly (unlike the langchain docs, which do not explain things well), and provided a good low-level understanding of how each chain works. Thanks so much!

  • @joejoetheawesome · 11 months ago

    Brilliant explanation! thank you :)

  • @user-vu9fp9le9n · 1 year ago

    One word: simply great! Thank you for this.

  • @realbutters · 6 months ago

    You just saved me hours of trial and error on a task I was about to start working on this week.
    Subbed immediately, thank you!

  • @feffy380 · 1 year ago +6

    Correction for the refine method: the calls are *dependent*, not independent. Each call depends on the results of the previous call.
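
    A minimal sketch of that sequential dependence, using LangChain's refine summarize chain from the video (the llm setup and docs are illustrative):
    from langchain.llms import OpenAI
    from langchain.chains.summarize import load_summarize_chain
    # Each refine step feeds the running summary plus the next chunk
    # back into the LLM, so step N depends on the output of step N-1.
    llm = OpenAI(temperature=0)
    chain = load_summarize_chain(llm, chain_type="refine")
    summary = chain.run(docs)  # docs: your list of Document chunks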

  • @StephenPasco · 10 months ago

    Great videos Greg!

  • @bnmy6581i · 11 months ago

    This is an awesome lesson. Thx

  • @wangking7384 · 1 year ago

    Thanks ❤you have helped me a lot🎉

  • @ArjanDuijs · 1 year ago +2

    Cheers, always learning new stuff watching your videos! Definitely gonna try the last two methods, although what concerns me is the cost of using OpenAI.
    Sure, it can do the summary of a 300-page document using the refine method... but at what cost?
    Would be interested to see what the cost is for the different solutions, what the differences in cost are, and which way is more cost-effective to run.

  • @Sami-fm3zg · 1 year ago +6

    Top-notch explanations, thanks. Would be helpful to have a TypeScript tutorial as well tho, if you ever have some time :)

    • @DataIndependent · 1 year ago

      Thanks! It'll just be Python for now, but I'll keep this in mind. Check out the LangChain Discord for more TS help.

  • @Ryan-yj4sd · 10 months ago

    great video!

  • @sup5356 · 10 months ago +1

    thank you for going to the effort

    • @DataIndependent · 10 months ago

      Awesome - glad it helped and worked out

  • @Archlense · 1 year ago

    best tutorial for LangChain ever!!!!

  • @ujjwalgupta1318 · 11 months ago

    Super useful. Thanks :)

  • @mikemansour1166 · 1 year ago +2

    Wooow! Thank you so much. I was really thinking about this the other day when I saw your previous video. This is so helpful. I'm not a coder; I used to use Excel to do the refining method (I didn't actually have a name for it) with the GPT-3 API, but your way is more efficient and I can easily implement it in my workflow. I appreciate it so much.

    • @DataIndependent · 1 year ago +1

      Nice, glad to hear it. All the magic is with LangChain and the team putting it together.

    • @mikemansour1166 · 1 year ago

      @DataIndependent I was wondering, do you guys have paid courses?

    • @DataIndependent · 1 year ago

      @mikemansour1166 Nope, but happy to do an intro call if you need anything. If more is needed we can do a consulting arrangement.

  • @Incognitowil · 1 month ago

    I’m glad I found this video!!!

  • @bingolio · 11 months ago

    Great job, Thx! Just subscribed :D

  • @bingo101 · 9 months ago

    It's really helpful, thanks

  • @dharanisugumar8699 · 1 year ago +1

    Your videos are greatly helpful. Much appreciated. I have a layman's question: we could achieve this by reading the doc with a Python script and getting the output, right? I know the AI gives the result without writing much code, but what is the major difference between the two approaches? Thanks in advance.

  • @jakobkristensen2390 · 1 year ago

    This video is great, thanks

  • @Archlense · 1 year ago

    PERFECT

  • @briancleary6751 · 1 year ago +1

    Excellent explanation as always, but your video previews always cover important parts of your slides.

  • @user-px1xq9im4r · 1 year ago +1

    Absolutely amazed. One thing you should have done: explain each chain type and immediately show the demo, rather than doing it all at the end. I forgot what refine and map-reduce do by the time I got to the demo.
    Other than that, hats off dude.

    • @DataIndependent · 1 year ago

      I actually went back and forth on which of these would be better. I chose the method in the video (obvi), but I like the method you're mentioning as well.

  • @chienvu3814 · 1 year ago +1

    Thank you for your work. It's amazing. But may I ask you about the slides? Can you share them with everyone?

  • @sunshadow9704 · 4 months ago

    It is very helpful. Small observation: for the refine approach, I think the steps are dependent on each other, not independent.

  • @creativeuser9086 · 11 months ago +2

    Why would we use the summarization method over the vector embedding and retrieval method?

    • @andytian5446 · 10 months ago

      I think the answer is simple 😂: the vector embedding and retrieval method doesn't solve the summarization problem.

  • @henkhbit5748 · 1 year ago +1

    Excellent explanation of using LangChain methods to split a large document! Like your LangChain videos. 👍
    A small question about your rerank example for Q&A: where are the loaded document(s) stored? It would not be efficient if you need to reload the docs every time you ask a question, or if you create a chatbot where multiple users are asking questions.

    • @DataIndependent · 1 year ago +1

      The documents are stored on your local machine when you run LangChain like that. LangChain will only send up the pieces of information it needs to your LLM.

    • @henkhbit5748 · 1 year ago

      @DataIndependent That is what I thought, but just to be sure.. 😀 A follow-up question: which "InstructGPT" model is used if the question is submitted to OpenAI? Davinci, I assume? Can LangChain also use the new gpt-3.5-turbo ChatGPT API model, which is much cheaper?

  • @user-tk1bn8xc3i · 1 year ago

    thanks, it's very very very helpful

  • @Kevin-sv5to · 8 months ago

    You explained how to fix this issue for text files. How do I handle big CSV files?

  • @edoardodenigris213 · 1 year ago

    I tried it and it works perfectly, thanks! I only have one problem: responses are usually quite short and generic, 5 lines at most. How can I obtain lengthier answers?

  • @mw3protegy1 · 1 year ago

    Where do you stay up to date with the AI advancements, discord etc?

  • @maximchuprynsky7472 · 11 months ago +1

    Hi! Great video! I have a question. Is there any way of putting strings instead of documents into the model?

    • @codewithbrogs3809 · 11 months ago

      No. Use the langchain.schema.Document object. Example Python code for turning a list of strings into Documents:
      from langchain.schema import Document
      list_of_strings = [...]  # your list of strings
      list_of_documents = [Document(page_content=s) for s in list_of_strings]
      # After initializing the chain and llm:
      chain({"input_documents": list_of_documents, "question": YOUR_PROMPT})

  • @sarveswarnaidu717 · 1 year ago

    How do I implement this on CSV data that includes tasks to aggregate?
    For example, I have supply chain data and the task is to retrieve the total amount spent by a customer.

  • @VineetShivhare · 1 year ago

    It would be really, really helpful if you could make a video on classification.
    Say subject classification, topic classification, or a chain of classifications.

    • @DataIndependent · 1 year ago

      Sounds fun. What's a tactical example you'd like to see?

    • @DM-fw5su · 1 year ago

      Taking a large document (100s of pages of technical specification) and developing a classification language for content based on layout, or based on a conjunction of 2+ things in the document. Validating the AI has a clear understanding of this new classification vocabulary. Then using that vocabulary to query, and allowing the AI to use that vocabulary in its response.

  • @oryxchannel · 1 year ago +1

    Like your studio philosophy. More 'workarounds'. ;-)

    • @DataIndependent · 1 year ago

      It's a symbiotic relationship!

    • @oryxchannel · 1 year ago

      @DataIndependent _That's_ for sure. Just wait till someone gets joining up YT comments with AI right... "Hey, wait a minute... you can't have that AI idea... That's *my* intellectual property." 😆

  • @debojitmandal8670 · 3 months ago

    How do you reduce tokens if you're also passing memory in the agent? I'm getting that error because of the conversation buffer memory that is mentioned in my prompt template.

  • @TrashPandamonium · 1 year ago +1

    The question at the end asked who the friend was that he got permission from, but the text you searched and showed stated that both he and his friend got permission. Based on that excerpt, the answer seemed incorrect, though you probably just searched for the wrong snippet, I guess.

  • @charlesleon8961 · 11 months ago

    Another con of re-rank would be the fact that the LLM has to parse the entire document for every question, right? I guess this scales from a parallelization standpoint, but it could also cost a lot.

  • @chetan5581 · 4 months ago

    I have a question: how do we do it for CSV files? Thanks a lot!

  • @caiyu538 · 7 months ago

    Great

  • @rileyclubb · 1 year ago

    Yo man, amazing videos. What do you think about building an LLM based off your YouTube channel so I can get your helpful answers to my questions?

  • @kalyeibakhbyergyen7298 · 1 year ago +1

    I used Japanese text to extract data by chunking, but the problem is that even if I use smaller texts I get a token limit error. For example: you requested 4103 tokens (103 in the messages, 4000 in the completion).

  • @kefalo84 · 9 months ago

    Can you update the link?

  • @diegolondrina7510 · 1 year ago

    What chunk size would you recommend? You say in the video that 400 is just for demonstration. What is overlap for?

    • @DataIndependent · 1 year ago

      Chunk size depends on your use case.
      I've done 400-2000 and have had good success. As for overlap, though I've used it, I haven't tested it enough to have an opinion.
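
      A minimal sketch of where those two knobs live, assuming the recursive character splitter (the values are illustrative):
      from langchain.text_splitter import RecursiveCharacterTextSplitter
      # chunk_overlap repeats the tail of one chunk at the start of the
      # next, so sentences that straddle a boundary keep some context.
      splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
      docs = splitter.create_documents([long_text])  # long_text: your raw string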

  • @alvintohw · 1 year ago +2

    Thanks for the clear explanation. So what would be a good method for questions and answers across multiple docs? It seems map re-rank is most performant but restricted to one doc.

    • @DataIndependent · 1 year ago +1

      Depends how many documents you have. If you have a ton, then you'll likely want to do embeddings and store them in a vectorstore so you can get the similar ones back. Check out my "question a book" video for more on how to do that.
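
      A minimal sketch of that approach, assuming FAISS and OpenAI embeddings (both illustrative choices):
      from langchain.embeddings import OpenAIEmbeddings
      from langchain.vectorstores import FAISS
      # Embed every chunk once and store the vectors locally.
      db = FAISS.from_documents(docs, OpenAIEmbeddings())
      # At question time, retrieve only the most similar chunks.
      relevant_docs = db.similarity_search("your question", k=4)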

  • @sportscardvideos · 1 year ago

    Can this be done with a large CSV or only text?
    Here's my problem: I loaded a large amount of CSV data into Pinecone. Now my prompt is generating a response that is too long. Thanks!

  • @PhilCunliffe · 9 months ago

    What if the summaries from the Map Reduce method were over the max tokens for the final summarization call?

    • @DataIndependent · 9 months ago

      I *think* LangChain will map-reduce it again. If not, then you'll need to do that manually.

  • @newphotographyltd6461 · 1 year ago +1

    Can you please provide a video on how to compare two large financial PDFs using gpt-3.5-turbo?

    • @DataIndependent · 1 year ago

      What type of comparing do you want to do?

    • @newphotographyltd6461 · 1 year ago

      @DataIndependent Let's take an example: finding that page 5 of one PDF is most similar to page 9 of another PDF.

  • @ShaidaMuhammad · 4 months ago

    This is amazing work.
    Has anyone developed a technique that can hold memory with LLMs? I.e., an LLM that can save the context (the complete knowledge in the prompt) in some format to a local disk (memory). The memory is attached to the LLM so it can look things up in the memory if required. The memory would work like a knowledge base.
    Let me know if anyone is working on this or has already worked on it. I need to dig into that.

  • @hrushikeshdas4864 · 11 months ago

    Damn! You are God 🙏

  • @simple-security · 9 months ago

    Question for anyone here:
    What is your approach if you're scanning, say, 100 news websites and you want OpenAI to summarize the news articles and categorize them?
    I can see setting up a loop and getting OpenAI to create a summary for one site at a time.
    I can also see using LangChain with prompts and memory to store all the results in one place and then generating the output.
    Any suggestions on how a 'research script' would scale are appreciated.
    Thank you.

    • @DataIndependent · 9 months ago +1

      If you want to generate summaries, I would keep it at one summary per article per OpenAI call.
      So you'll eat a lot of tokens, but the process will be straightforward.
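
      A minimal sketch of that loop; load_article and article_urls are hypothetical placeholders:
      from langchain.llms import OpenAI
      from langchain.chains.summarize import load_summarize_chain
      llm = OpenAI(temperature=0)
      chain = load_summarize_chain(llm, chain_type="map_reduce")
      # One summarization call per article keeps each call simple,
      # at the cost of more total tokens. load_article(url) would
      # return the list of Document chunks for one article.
      summaries = {url: chain.run(load_article(url)) for url in article_urls}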

    • @simple-security · 9 months ago

      @DataIndependent So are you saying I would use OpenAI to provide a 'category' for each news article (one per call, as you said) and then just use Python to group/summarize those categories?

  • @SangyHanum · 1 year ago +1

    Thanks.
    Nitpicking, but Rich Draves was the friend who was with him, not the one who gave him permission? Probably a poor question more than the chain.

    • @DataIndependent · 1 year ago

      Good call and good nit - agreed. The question could be better :)

  • @biswasshubendu4 · 10 months ago

    Hi, I want to create MOM from documents, which is slightly different from summarization. Will these methods work fine?

  • @grabellasrong6358 · 10 months ago

    Could you rank how much information is lost for each of the methods?

  • @acerishi · 1 year ago

    Is there a chain for translation to which I can apply this?

    • @DataIndependent · 1 year ago

      Not an out-of-the-box chain, but you could do a custom map-reduce chain with custom prompts for your purpose.
      Check out my latest video on AI-generated emails. You'd do the same thing, but with different prompts for your use case.
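
      A minimal sketch of that custom map-reduce chain, repurposed for translation; the prompt text and target language are illustrative:
      from langchain.llms import OpenAI
      from langchain.prompts import PromptTemplate
      from langchain.chains.summarize import load_summarize_chain
      # The map prompt runs on each chunk; the combine prompt merges the results.
      map_prompt = PromptTemplate(
          template="Translate the following text to French:\n\n{text}",
          input_variables=["text"],
      )
      combine_prompt = PromptTemplate(
          template="Stitch these translated passages into one coherent text:\n\n{text}",
          input_variables=["text"],
      )
      chain = load_summarize_chain(
          OpenAI(temperature=0),
          chain_type="map_reduce",
          map_prompt=map_prompt,
          combine_prompt=combine_prompt,
      )
      translation = chain.run(docs)  # docs: chunked Documents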

  • @antdx316 · 26 days ago

    Aren't there programs that automatically cut the files/docs into batches and then handle it by themselves?
    I'm trying to search my entire Twitter history and have to split up the data in order to feed it to an LLM.

  • @rexgloriae316 · 1 year ago +2

    Thanks for the videos, man. One question: how can we increase the length of the final summary? I tried a custom prompt with something like "Write a summary of a minimum of 1000 words", but it seems to cut off the returned summary.

    • @DataIndependent · 1 year ago

      There is a parameter called "max_tokens" you'll want to adjust, which will lengthen the output. You set it when you initialize your LLM.
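
      For example, a minimal sketch (the 1500 value is illustrative):
      from langchain.llms import OpenAI
      # max_tokens caps the completion length; raise it so longer
      # summaries aren't cut off.
      llm = OpenAI(temperature=0, max_tokens=1500)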

  • @LACHIVA1969 · 1 year ago

    Yes, I was curious about these LLMs and quickly realized they are trying to squeeze out a lot of money before free open-source APIs show up. Not paying for tokens on something that may be free in 6 months. These corporations are truly greedy. May try a month's subscription of ChatPDF and spend only $5.

    • @pythonization · 8 months ago

      Apparently we are going to have our own trained LLMs, even on mobile devices. I suppose today's LLMs will become commoditized, but way more sophisticated "supermodel LLMs" will keep everyone glued to their screens.

  • @planetcrypton9666 · 1 year ago

    How can I apply these solutions when using agents?

    • @DataIndependent · 1 year ago

      Check out the agent documentation on LangChain.com for a good start
      langchain.readthedocs.io/en/latest/modules/agents.html

  • @fahrikhalid3632 · 1 year ago

    How do I implement this for SQLDatabaseChain?

    • @imabhisht · 11 months ago

      You got anything?

  • @crazycouplenyc · 1 year ago

    Does pinecone remove the need for chunking? Does it have infinite memory?

    • @zzamme1505 · 1 year ago

      No, the doc is still split into chunks, and then the individual chunks are embedded into vectors which are compared against the prompt.

    • @DataIndependent · 1 year ago

      Yep, exactly what zzamme1 said

    • @alvarjover7081 · 11 months ago

      @DataIndependent Which method is more accurate: the ones in this video, or embedding into vectors? I tried this one for a book with 120K words and it took 10 mins to run. Would embedding into vectors make it faster (hopefully down to 3 mins)? I just started using all this, so I'm just learning from the pros! :D Thanks in advance, but also thanks for your content. Top!

  • @MoonDesignDev · 1 year ago

    What about langchain memory?

  • @kuntalpcelebi2251 · 1 year ago

    Would you please make a video about your environment, or provide your Python environment as well? When loading the documents, I am getting this error: UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 10160: character maps to <undefined>. I had to make them PDFs and use: loader = UnstructuredFileLoader("layout-parser-paper-fast.pdf", strategy="fast", mode="elements")

    • @pythonization · 8 months ago

      There are tutorials on other Python channels that explain how to use "pipenv". I'm still getting started. Different channels use "pipenv" or Docker or Anaconda. I suppose it's good getting comfortable with various environments. I haven't been programming for a while, and I'm also learning pandas.

    • @pythonization · 8 months ago

      Also, this is the only channel that has a playlist of 24 videos breaking down LangChain extensively. A lot of other videos are good introductory videos, but this "cookbook" approach is helping me get going in programming again.

  • @OBGynKenobi · 7 months ago

    This is nice, but I don't think any of these work for code. For example, I have a long stored proc and I want to generate documentation for it; breaking it up will lose context and get all confused. Code can be self-referential, i.e., a variable in the first chunk might get referenced in the last chunk, but by that point the context is gone.

    • @DataIndependent · 7 months ago +1

      Aligned w/ you; you'll need to chunk it up another way or go graph to keep the connections alive. Check out what www.mendable.ai/ is doing; they may have a chunking/retrieval technique that works for you.

  • @defidutch402 · 1 year ago +1

    Cool video!
    Some coding skills are required I guess?

  • @Fluttydev · 7 months ago

    These are not good approaches for practical work. Create embeddings of the large document and then write any prompt.