LangChain - Conversations with Memory (explanation & code walkthrough)

  • Published Dec 3, 2024

COMMENTS • 128

  • @aanchalagarwal6886
    @aanchalagarwal6886 1 year ago +18

    A video on custom memory will be helpful. Thanks for the series.

    • @vq8gef32
      @vq8gef32 9 months ago

      Yes Please build a custom memory ! Thank you.

  • @atylerblack164
    @atylerblack164 1 year ago +2

    Thank you so much! I had an app built without any conversation memory, just using chains, and was struggling to convert it to use memory.
    You made this very easy to follow and understand.

    • @SaifBagmaru
      @SaifBagmaru 1 year ago +1

      Hey, how did you do that? I am trying to implement the same. Do you have a repo?

  • @Jasonknash101
    @Jasonknash101 1 year ago +5

    Another great video. I want to create my own agent with a memory. I'm thinking a vector database is the best way of doing it. It would be great if you could do a similar video outlining some of the different vector databases and the pros and cons of each.

  • @aiamfree
    @aiamfree 1 year ago

    I've been experimenting with entity memory in my own ways and it's pretty wild, and probably the most useful for general use. I imagine word-for-word memory would really only matter in something like a story generator or whatnot.

  • @resistance_tn
    @resistance_tn 1 year ago +15

    Great explanation! Would love to see the custom/combinations one :)

    • @hiranga
      @hiranga 1 year ago

      Yeah - would love to see a custom memory tute!

    • @RandyHawkinsMD
      @RandyHawkinsMD 1 year ago

      Custom memory seems intuitively useful for allowing human experts' input to shape the knowledge graph that might be created to represent the state of users' concerns based on experts' knowledge. I'd be very interested in a video on this subject. :)

  • @ketolm
    @ketolm 4 months ago

    Love the videos! Thank you for making them. Dying at the B-roll footage

    • @samwitteveenai
      @samwitteveenai  4 months ago

      Thanks. Feedback is really appreciated. We have tried to reduce the stock video a lot on the newer vids.

  • @hikariayana
    @hikariayana 1 year ago

    This is exactly what I needed, thanks so much!

  • @joer3650
    @joer3650 1 year ago

    Best explanation I've found, thanks

  • @kenchang3456
    @kenchang3456 1 year ago

    Indeed, this was helpful. Thank you for this video series. The more I work through them, the more my questions are being answered :-)

  • @m_ke
    @m_ke 1 year ago +4

    Oh how much I missed that voice. Keep the videos coming and maybe get some sunglasses and a webcam.

    • @samwitteveenai
      @samwitteveenai  1 year ago +2

      Long time no see. :D Working on getting a cam setup, but traveling a fair bit till April. Will DM you later.

    • @blackpixels9841
      @blackpixels9841 1 year ago

      This was the voice that got me started on my Deep Learning journey! Let us know if you're ever in Singapore again some day

  • @abhirj87
    @abhirj87 1 year ago

    wow!!! super helpful and thanks a ton for making this tutorial!!

  • @MannyBernabe
    @MannyBernabe 1 year ago

    Super helpful overview. Thank you.

  • @krisszostak4849
    @krisszostak4849 1 year ago +2

    This is awesome! I love the way you explain things, Sam! If you ever create an in-depth video course about using LangChain and LLMs, especially about extracting particular knowledge from a personal or business knowledge base - let me know pls, I'll be the first one to buy it 😍

  • @owszystkim5415
    @owszystkim5415 5 months ago +1

    Is it cost-effective to use ConversationSummaryMemory? From my understanding it needs to summarize our conversation every time.
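
    The commenter's understanding is right: summary memory makes one extra LLM call per turn to refresh the summary, while plain buffer memory resends the entire transcript, whose total token cost grows quadratically with conversation length. A toy cost model (all token counts are made up for illustration, not real pricing) shows why summarizing still tends to win on long chats:

    ```python
    # Toy cost model with hypothetical token counts (not real pricing).
    def buffer_tokens(turns, tokens_per_turn=100):
        # Buffer memory: turn n resends all n previous turns plus the new message,
        # so total tokens grow quadratically with conversation length.
        return sum((n + 1) * tokens_per_turn for n in range(turns))

    def summary_tokens(turns, summary_len=150, tokens_per_turn=100):
        # Summary memory: each turn is one chat call (summary + new message)
        # plus one summarize call of similar size - linear growth per turn.
        return turns * 2 * (summary_len + tokens_per_turn)

    # For short chats the extra summarize call costs more; for long chats the
    # buffer's quadratic growth dominates.
    assert buffer_tokens(5) < summary_tokens(5)
    assert buffer_tokens(50) > summary_tokens(50)
    ```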

  • @musabalsaifi8993
    @musabalsaifi8993 1 month ago

    Great work, Thanks a lot

  • @abdoualgerian5396
    @abdoualgerian5396 1 year ago

    I think the best approach is to create a small AI handler that manages all of the memory on your device, then sends a very brief summary to the LLM with the necessary info about what the user means. In this case we avoid sending too much data, with much more effective prompts than all of those mentioned above.

  • @DavidTowers-f1y
    @DavidTowers-f1y 1 year ago

    I love these tutorials. Learning so much. Thanks.

  • @LearningWorldChatGPT
    @LearningWorldChatGPT 1 year ago

    Great class!
    Thank you very much for sharing your knowledge
    Gained a follower !

  • @sahil5124
    @sahil5124 8 months ago

    it's really helpful, thanks man

  • @ghinwamoujaes9059
    @ghinwamoujaes9059 6 months ago

    Very helpful - Thank you!

  • @viktor4207
    @viktor4207 1 year ago

    Can you use both? So you can start working on a user profile by creating a knowledge graph associated with a user and storing it but then pass information to the bot in a summarized way?

  • @binstitus3909
    @binstitus3909 10 months ago +1

    How can I keep the conversation context of multiple users separately?

    • @sysadmin9396
      @sysadmin9396 8 months ago

      I’m looking for this answer as well. Did you ever figure it out?

  • @dogtens1060
    @dogtens1060 1 year ago

    nice overview, thanks!

  • @sysadmin9396
    @sysadmin9396 8 months ago +1

    Hi Sam, how do we keep the Conversation context of multiple users separate ?

    • @hussienhassin7334
      @hussienhassin7334 6 months ago

      Have you resolved it? I am still struggling too

  • @ghghgjkjhggugugbb
    @ghghgjkjhggugugbb 1 year ago

    revolutionary video..

  • @noone-jq1xw
    @noone-jq1xw 1 year ago +1

    Great video! I'm such a big fan of your work now! I'm sure this channel is going places once LLMs become a bit more mainstream in the programming stack. Please keep up the awesome work!
    I have a question about the knowledge graph memory section. The sample code shows that the relevant-information section never gets populated. Furthermore, the prompt structure has two inputs, {history} and {input}, but we only pass in the {input} part, which might explain why the relevant information is empty. In this case, do you know if there is any use for the relevant-information section?
    A second query is about the knowledge graph. Since the prompt seems to be contextually aware even though the buffer doesn't show the chat history, is it safe to say that in addition to the chat log shown (as part of verbose), it also sends the knowledge-graph triplets to the LLM to produce the response?

  • @z-ro
    @z-ro 1 year ago +1

    Amazing explanation! I'm currently trying to use Langchain's javascript library to "persist" memory across multiple "sessions" or reloads. Do you have a video of the types of memory that can do that?

    • @untypicalien
      @untypicalien 1 year ago

      Hey there, I'd love to know if, after a month, you found any useful resources or documentation about this. I'm trying to achieve this as well. 😄

  • @starmorph
    @starmorph 1 year ago

    I like the iced out parrot thumbnails 😎

  • @pec8377
    @pec8377 7 months ago

    How do you use the different conversation memories with LCEL?

  • @sanakmukherjee3929
    @sanakmukherjee3929 1 year ago

    Nice explanation. Can you help me add this to a custom CSV dataset?

  • @JimCh-g6w
    @JimCh-g6w 1 year ago +2

    I've built this with a Streamlit UI as a front-end and deployed it as a Cloud Run service. Now, if multiple users try to chat with the bot, the entire chat_history combined across all user conversations is being referenced. If I want a user_id/session_id-specific chat_history, how can I do it? Could you please help me?

    • @sysadmin9396
      @sysadmin9396 8 months ago

      I have this same exact issue. Did you ever figure it out??

    • @naveennirban
      @naveennirban 3 months ago

      Hey guys, I am working on it too. I am trying to create multiple vector DBs with a runtime knowledge feed for a specific user.
      The name of the vector DB could be a unique id attached to your user model.

  • @WissemBellara
    @WissemBellara 7 months ago

    Nice Video, very well made

  • @hussamsayeed3012
    @hussamsayeed3012 1 year ago +1

    How do we add a custom prompt with some variable data, and use memory in ConversationChain?
    I'm trying this but getting a validation error:

        PROMPT = PromptTemplate(
            input_variables=["chat_history_lines", "input", "tenant_prompt", "context"],
            template=_DEFAULT_TEMPLATE,
        )
        llm = OpenAI(temperature=0)
        conversation = ConversationChain(
            llm=llm,
            verbose=True,
            memory=memory,
            prompt=PROMPT,
        )

    Error: 1 validation error for ConversationChain
    __root__
    Got unexpected prompt input variables. The prompt expects ['chat_history_lines', 'input', 'tenant_prompt', 'context'], but got ['chat_history_lines', 'history'] as inputs from memory, and input as the normal input key. (type=value_error)

    • @samwitteveenai
      @samwitteveenai  1 year ago +1

      You overwrite the prompt.template and make sure it takes in the same inputs as the previous one. Take a look at one of the early vids about LangChain Prompts and Chains.
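
      The advice above works because ConversationChain validates that the prompt's input variables exactly match what the memory supplies plus the single input key. A simplified pure-Python sketch of that check (not LangChain's actual code) shows why the error in the comment appears and why aligning the variables fixes it:

      ```python
      # Simplified sketch (not LangChain's actual code) of the validation that
      # ConversationChain performs: the prompt's input variables must equal the
      # memory's variables plus the single input key.
      def prompt_inputs_ok(prompt_vars, memory_vars, input_key="input"):
          return set(prompt_vars) == set(memory_vars) | {input_key}

      # The failing setup from the comment: 'tenant_prompt' and 'context' are
      # supplied by neither the memory nor the input key, so validation fails.
      assert not prompt_inputs_ok(
          ["chat_history_lines", "input", "tenant_prompt", "context"],
          ["chat_history_lines", "history"],
      )

      # A prompt that asks only for what the memory and input provide passes.
      assert prompt_inputs_ok(
          ["chat_history_lines", "history", "input"],
          ["chat_history_lines", "history"],
      )
      ```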

  • @carlosquiala8698
    @carlosquiala8698 6 months ago

    Can I mix 2 types of memories? For example entity and graph?

  • @mautkajuari
    @mautkajuari 1 year ago

    beautifully explained

  • @wukao1985
    @wukao1985 1 year ago

    Thanks Sam for this great video. I found it really hard to understand how to make these memory functions work with the ChatOpenAI model. Can you help create a video on that? This video was all using davinci models.

    • @samwitteveenai
      @samwitteveenai  1 year ago

      Yes, good point - these were made before that API existed. I might make some updated versions.

  • @Aidev7876
    @Aidev7876 11 months ago

    I'm using an SQL chain. I'd like to add memory to that. Do we have some ideas on that? Thanks

  • @jintao824
    @jintao824 1 year ago +1

    Great content Sam! Subbed. Just wanted to ask - are there technical limitations to why these LLMs have limited context windows? Any pointers to papers will be very helpful should they exist!

    • @samwitteveenai
      @samwitteveenai  1 year ago +5

      Mostly this is about the attention layers: the wider the span gets, the more you run into compounding computation. Take a look at this stackoverflow.com/questions/65703260/computational-complexity-of-self-attention-in-the-transformer-model
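
      The compounding computation mentioned above can be made concrete: standard self-attention compares every token with every other token, so the score matrix has n x n entries, and doubling the context length roughly quadruples that part of the work. A toy illustration:

      ```python
      # Toy illustration of why long contexts are expensive: self-attention
      # builds an n x n score matrix over the tokens, so the number of entries
      # (and roughly the compute) grows quadratically with context length.
      def attention_score_entries(n_tokens: int) -> int:
          return n_tokens * n_tokens

      # Doubling the span quadruples the score-matrix size; 4x the span is 16x.
      assert attention_score_entries(2048) == 4 * attention_score_entries(1024)
      assert attention_score_entries(4096) == 16 * attention_score_entries(1024)
      ```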

    • @jintao824
      @jintao824 1 year ago

      @@samwitteveenai Thanks Sam, I will check this out!

  • @insight-guy
    @insight-guy 1 year ago

    Thank you, Sam.

  • @elyakimlev
    @elyakimlev 1 year ago

    Thanks for this great tutorial series.
    Question: how do you set the k value for the ConversationSummaryBufferMemory option? I didn't see where you set it in your code. Is it always 2?

  • @aibasics7206
    @aibasics7206 1 year ago

    Hi Sam, nice video! Can you please clarify whether we can fine-tune and use the memory here? For fine-tuning with our own data we are using GPT Index, and for the LLM predictor we are using LangChain. Can you suggest a way to use LangChain's memory integrated with GPT Index while loading our own custom chat data?

  • @caiyu538
    @caiyu538 1 year ago

    great tutorial

  • @CookFu
    @CookFu 5 months ago

    Can a retrieval chain work with the memory function? I have been trying that for a couple of days, but it doesn't work.

  • @sooryaprabhu14122
    @sooryaprabhu14122 10 months ago

    bro please include the deployment also

  • @tubingphd
    @tubingphd 1 year ago

    Thank you Sam

  • @embeddedelligence-926
    @embeddedelligence-926 1 year ago

    So how do you make a conversational memory and use it with the CSV agent?

  • @adumont
    @adumont 1 year ago

    Really interesting. The last one about graphs and entities could have a lot of potential. I wonder how one could use retrieval on a knowledge database, for example, to also enrich the context/prompt with information from it. For example, suppose the AI had access to the warranty database and could check the status of the warranty for the TV serial number. It could maybe ask the user for the serial number, automatically check the warranty for that serial number, and answer "your TV is under warranty number xxx". Are there examples of how to do that?

    • @Jasonknash101
      @Jasonknash101 1 year ago

      Totally agree - it would be great to show. How are you integrating this with something like Node.js?

  • @prayagbrahmbhatt6375
    @prayagbrahmbhatt6375 1 year ago

    Great stuff! Thanks for the tutorial! I do have a question regarding open-source models. How can we use any alternative to the OpenAI model, like Vicuna or LLaMA? What if we don't have an OpenAI API key?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      I have some vids using open source LLMs for this kind of thing

  • @stonaraptor8196
    @stonaraptor8196 1 year ago

    There has to be a simpler way to get a personalized AI, stored locally on my PC, that has long-term memory and is able to keep up long conversations. Maybe I am very naive, but for me as a non-programmer, my main interest in AI is more philosophical in nature, I guess. Where/how would I start, or even get an offline version? Reading the OpenAI site is, let's say, slightly challenging...

  • @kenchang3456
    @kenchang3456 1 year ago

    I just enjoy learning from your videos, thank you very much. Do you have any videos, suggestions or advice on how to control when a conversation goes off on a tangent and bring it back to the purpose of the conversation. E.g. A Chatbot for laptop trouble shooting - System: Hi how can I help you?, User: My laptop is broken., System: Can you describe the problem with more detail?, User: What's the weather like in Hawaii?, System: The weather is pleasant in Hawaii. Can you describe the problem with your laptop with more detail?

    • @samwitteveenai
      @samwitteveenai  1 year ago +1

      With big models this is dealt with by good prompts that make it clear what the model can and can't talk about, and then by discontinuing the conversation if people stray too far off the main topics.

    • @kenchang3456
      @kenchang3456 1 year ago

      @@samwitteveenai Ah so it's in the prompts...interesting, thanks!

  • @vq8gef32
    @vq8gef32 9 months ago

    Amazing, appreciated it! But I can't run some of the code! :( Is there any updated version?

    • @samwitteveenai
      @samwitteveenai  9 months ago

      Sorry, I am working on an updated LangChain vid for which I will update the code. Some of these vids are a year old now.

    • @vq8gef32
      @vq8gef32 9 months ago

      Thank you @@samwitteveenai amazing work, I am still watching your channel. Thank you heaps.

  • @Pure_Science_and_Technology

    Not sure of the difference, but I use print(conversation.memory.entity_store); with print(dir(conversation.memory)) I don't have an attribute 'store'.

  • @foysalmamun5106
    @foysalmamun5106 1 year ago

    Thank you a lot

  • @srishtinagu1857
    @srishtinagu1857 9 months ago

    Hi Sam, awesome video. I am trying to add conversation memory to my RAG application, but it is not giving correct responses. Can you make a video or share some references for that? It would be really helpful. Thanks!

    • @samwitteveenai
      @samwitteveenai  9 months ago

      I need to make a full LangChain update - this vid is a year old now. I am working on it, so hopefully soon.

    • @srishtinagu1857
      @srishtinagu1857 9 months ago

      @@samwitteveenai ok thanks! Waiting for it.

  • @ranjithkumarkalal1810
    @ranjithkumarkalal1810 1 year ago

    Great videos

  • @lordsairolaas
    @lordsairolaas 11 months ago

    Hello! I'm making a chatbot using Conversation with KG, but it keeps throwing this error; could you help?
    Got unexpected prompt input variables. The prompt expects [], but got ['history'] as inputs from memory, and input as the normal input key. (type=value_error)

  • @RedCloudServices
    @RedCloudServices 1 year ago

    Sam, can you help clarify? Do we still need to fine-tune a custom LLM with our own corpus if we can use LangChain methods (i.e. webhooks, Python REPL, PDF loaders, etc.)? Or are both still necessary for all custom use cases?

    • @samwitteveenai
      @samwitteveenai  1 year ago +1

      LLMs that you fine-tune for your purpose should always have an advantage in regard to unique data etc. If you can get away with LangChain and an API, though, and you don't mind the costs, then that will be easier.

  • @foysalmamun5106
    @foysalmamun5106 1 year ago

    waiting for video on custom memory 🙂

  • @lorenzoleongutierrez7927
    @lorenzoleongutierrez7927 1 year ago

    Thanks for sharing!

  • @428manish
    @428manish 1 year ago

    It works fine with GPT-3.5 Turbo. How do I make it work with a FAISS DB using local data (PDF)?

  • @harinisri2962
    @harinisri2962 1 year ago

    Hi, I have a doubt. I am implementing ConversationBufferWindowMemory for a document question-answering chatbot:
    conversation = ConversationChain(llm=llm, verbose=True, memory=ConversationBufferWindowMemory(k=2))
    Is it possible to return the source documents for the answer using any parameters?

  • @stanTrX
    @stanTrX 2 months ago

    Thanks. How about agents with memory?

  • @Fluffynix
    @Fluffynix 1 year ago

    How does this compare to Haystack which has been around for years?

    • @samwitteveenai
      @samwitteveenai  1 year ago +1

      It's quite different from Haystack. This is all about prompts and generative LLM manipulation rather than search. LangChain can do search with vector stores. You could probably use Haystack as a tool with LangChain, which could be cool for certain use cases.

  • @pengchengwu447
    @pengchengwu447 1 year ago

    I wonder if it's possible to specify *predefined* entities?

  • @svenandreas5947
    @svenandreas5947 1 year ago

    I'm wondering: this works as long as the human gives the expected information. Is there any chance to ask for information (like a warranty number)?

    • @samwitteveenai
      @samwitteveenai  1 year ago +1

      Yes, you can do this with context and retrieval, e.g. adding a search for data and passing the results into the context of the prompt.

    • @svenandreas5947
      @svenandreas5947 1 year ago

      @@samwitteveenai Will google and search for this :-) Thanks for the hint. I had just figured out the way via prompt engineering, but this wasn't exactly what I was looking for. Thanks again

    • @samwitteveenai
      @samwitteveenai  1 year ago

      What exactly do you want to do?

    • @memesofproduction27
      @memesofproduction27 1 year ago

      LangChain's self-ask search sounds relevant.

  • @souvickdas5564
    @souvickdas5564 1 year ago

    How do I use memory with ChatVectorDBChain, where we can specify vector stores? Could you please give a code snippet for this? Thanks

    • @samwitteveenai
      @samwitteveenai  1 year ago

      I will make a video about vector stores at some point.

  • @souvickdas5564
    @souvickdas5564 1 year ago

    How do we create a conversational bot for non-English languages and languages that are not supported by the OpenAI embeddings? For example, if I want to build a conversational agent for articles written in Indian languages (Bengali or Bangla), how can we do it?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      You would use a multi-lingual embedding model, which you could find on HuggingFace. Check out huggingface.co/sentence-transformers/stsb-xlm-r-multilingual - there are others as well. There are also a number of multi-lingual LLMs, including mT5, which supports Bengali. You would get the best results by fine-tuning some of these models.

    • @souvickdas5564
      @souvickdas5564 1 year ago

      @@samwitteveenai thanks a lot.

  • @emmanuelkolawole6720
    @emmanuelkolawole6720 1 year ago

    Are you saying that Alpaca can only take in 2000 tokens? If that is true, how can we increase it?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      Increasing it requires some substantial retraining.

  • @RahulD600
    @RahulD600 4 months ago +1

    but still, this is not unlimited memory, right?

  • @gmdl007
    @gmdl007 1 year ago

    Hi Sam, is there a way to combine this with QA over your own PDF files?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      Yes, I have a few videos about that if you look for PDF etc.

    • @gmdl007
      @gmdl007 1 year ago

      @@samwitteveenai Fantastic, can you share?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      @@gmdl007 There are a number - take a look in this playlist ua-cam.com/video/J_0qvRt4LNk/v-deo.html

  • @ambrosionguema9200
    @ambrosionguema9200 1 year ago

    Hi Sam, how do I upload my own data file in this code? Please help me.

    • @samwitteveenai
      @samwitteveenai  1 year ago +1

      I have a video coming out this weekend on using your own data from CSV and Excel files. I will make one for larger datasets.

  • @WissemBellara
    @WissemBellara 7 months ago

    Is it possible to add chapters with timestamps, please? It would make it easier.

  • @nilendughosal6084
    @nilendughosal6084 1 year ago

    How to handle memory for multiple users?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      You serialize this out and load in the memory based on who is calling the model etc.
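
      The reply above can be sketched in plain Python: keep one memory object per user/session id and look it up on each call. The `SessionMemoryStore` class and `make_memory` factory below are hypothetical illustrations; in LangChain the factory could be any memory constructor (e.g. ConversationBufferMemory), persisted between requests instead of held in a dict.

      ```python
      # Hypothetical sketch: one memory object per user/session id, created
      # lazily by a make_memory() factory passed in by the caller.
      class SessionMemoryStore:
          def __init__(self, make_memory):
              self._make_memory = make_memory
              self._memories = {}

          def get(self, session_id):
              # First request from a session creates its memory; later requests
              # reuse it, so different users' histories never mix.
              if session_id not in self._memories:
                  self._memories[session_id] = self._make_memory()
              return self._memories[session_id]

      # Plain lists stand in for real memory objects here.
      store = SessionMemoryStore(list)
      store.get("alice").append("Hi, my TV is broken")
      store.get("bob").append("What's the weather like?")
      assert store.get("alice") == ["Hi, my TV is broken"]
      assert store.get("bob") == ["What's the weather like?"]
      ```

      For production you would serialize each memory out to a database keyed by the session id, rather than keeping it in process memory.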

  • @mandarbagul3008
    @mandarbagul3008 1 year ago

    Hello sir, greetings. What is a span? (3:42)

    • @samwitteveenai
      @samwitteveenai  1 year ago

      The span (context size) refers to the number of tokens (subwords) that you can pass into a model in a single shot.

    • @mandarbagul3008
      @mandarbagul3008 1 year ago

      @@samwitteveenai Got it. Thank you very much, sir :)

  • @neerajmahapatra5239
    @neerajmahapatra5239 1 year ago

    How can we add a prompt with these memory chains?

  • @creativeuser9086
    @creativeuser9086 1 year ago

    Btw, it would be nice if you showed yourself on cam when you're not coding. The clips are weirdly distracting 😅

    • @samwitteveenai
      @samwitteveenai  1 year ago +1

      Lol, yeah, I plan to get a camera at some point. I cut back on the B-roll stuff after the early videos, if that helps.

  • @alizhadigerov9599
    @alizhadigerov9599 1 year ago

    Can we use gpt-3.5-turbo instead of davinci-003 here?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      You can, but the code has to be changed to use the new Chat options.

    • @aaroldaaroldson708
      @aaroldaaroldson708 1 year ago

      @@samwitteveenai Thanks. Are you planning to record a video on that? Would be very helpful!

  •  1 year ago

    select * from stock_videos where label like '%typing%' :D