LangChain: Giving Memory to LLMs
- Published 26 Jul 2024
- LangChain offers a significant advantage by enabling the development of chat agents that can manage their own memory. In this video, we explore the different LangChain memory types and show how to integrate them into a LangChain ConversationChain.
▬▬▬▬▬▬▬▬▬▬▬▬▬▬ CONNECT ▬▬▬▬▬▬▬▬▬▬▬
☕ Buy me a Coffee: ko-fi.com/promptengineering
|🔴 Support my work on Patreon: Patreon.com/PromptEngineering
🦾 Discord: / discord
▶️️ Subscribe: www.youtube.com/@engineerprom...
📧 Business Contact: engineerprompt@gmail.com
💼Consulting: calendly.com/engineerprompt/c...
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
LINKS:
LangChain Memory: python.langchain.com/en/lates...
Google Notebook: colab.research.google.com/dri...
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
All Interesting Videos:
Everything LangChain: • LangChain
Everything LLM: • Large Language Models
Everything Midjourney: • MidJourney Tutorials
AI Image Generation: • AI Image Generation Tu...
#langchain #openai #chatgpttutorial - Science & Technology
Wow can't wait to see more!
Great video. You're such a great teacher.
Your videos are super helpful. As a beginner, I find it easy to follow the steps. They provide everything I need to execute the project end to end (most of the time).
Great to hear! Enjoy the learning :)
Dude, you just earned a new subscriber. Thank you so much.
Thanks for the sub!
this was great. clear and understandable, thanks a lot!
Glad you found it useful 👍
Good video.
Nice video 💯. I'm interested in more long term memory and vector storage. Mainly, how to keep track of memories over weeks, months, or years.
Document embeddings might be the solution here.
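The idea behind embedding-based long-term memory is to store each fact as a vector and retrieve the most similar stored facts at question time. This is not a real vector store like the ones LangChain wraps; it's a dependency-free sketch of the retrieval idea, with hand-made 2-D vectors standing in for real embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class VectorMemory:
    """Store (embedding, text) pairs; recall the most similar past memories."""
    def __init__(self):
        self.items = []  # list of (vector, text)

    def save(self, vector, text):
        self.items.append((vector, text))

    def recall(self, query_vector, k=2):
        # Rank stored memories by similarity to the query and return the top k.
        ranked = sorted(self.items, key=lambda it: cosine(it[0], query_vector), reverse=True)
        return [text for _, text in ranked[:k]]

memory = VectorMemory()
memory.save([1.0, 0.0], "User prefers Python examples")
memory.save([0.0, 1.0], "User's birthday is in June")
print(memory.recall([0.9, 0.1], k=1))  # → ["User prefers Python examples"]
```

With a real embedding model and a persistent store, memories saved weeks apart can be recalled the same way, which is how a bot keeps track of facts over months or years.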
You can also use zep memory
Very good video. Please go into further detail on LangChain, e.g. working with an LLM + tabular data could be very interesting (SQL, pandas agents).
Will be making a lot more videos on LLMs. Stay tuned!
Yes, connecting the OpenAI API to tabular data and the internet to create an AI data scientist.
thanks
Hey. I was wondering if there's a way to make my custom ChatGPT display its answer letter by letter as it writes, rather than waiting a few seconds and then showing it all at once. Thanks!
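The OpenAI chat API supports streaming, which returns the reply in small incremental chunks instead of one final block; the display loop then prints each chunk as it arrives. A dependency-free sketch of that display loop, with a generator standing in for the streamed API response:

```python
import time

def stream_tokens(text, delay=0.0):
    """Yield a response word by word, like ChatGPT's typing effect.
    Set delay > 0 for a visible pause between words."""
    for word in text.split():
        time.sleep(delay)
        yield word + " "

chunks = []
for chunk in stream_tokens("Hello from a streaming bot"):
    print(chunk, end="", flush=True)  # print immediately, no line buffering
    chunks.append(chunk)
print()
```

With the real API, the loop body stays the same; only the generator is replaced by the streamed response object.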
Thank you for the wonderful video. How do I implement memory functionality for Vector Index search? I have developed a Q&A chatbot based on my documents and I would like to implement memory functionality to remember past few conversations.
Is it possible to add this memory feature to the multi-PDF chat you also provide, so that I can track all the questions I asked about the PDFs and have it recall all of those questions?
Thank you!
Hello brother! I liked your video and would like to ask one thing: I have a lot of dialogs; how do I pass a specific dialog to the message chain?
How can I keep the conversation context of multiple users separately?
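One common pattern for this is to key a separate conversation buffer on each user's id, so histories never mix. A minimal dependency-free sketch (the class and method names are illustrative, not a LangChain API):

```python
class SessionMemory:
    """Keep a separate conversation buffer per user id."""
    def __init__(self):
        self.sessions = {}  # user_id -> list of (speaker, text)

    def add(self, user_id, speaker, text):
        # Create the user's buffer on first use, then append the turn.
        self.sessions.setdefault(user_id, []).append((speaker, text))

    def history(self, user_id):
        return self.sessions.get(user_id, [])

mem = SessionMemory()
mem.add("alice", "human", "Hi, I'm Alice")
mem.add("bob", "human", "Hi, I'm Bob")
mem.add("alice", "ai", "Hello Alice!")
print(mem.history("alice"))  # only Alice's turns, Bob's are separate
```

In a LangChain app the same idea applies: construct one memory object per user (or per session id) rather than sharing a single one.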
How do I override the default prompt in ConversationChain ("The following is a friendly conversation...")?
Have you figured out how to retain memory when the app is built on streamlit? Just curious cause that'd be super helpful.
I haven't tested this approach with streamlit but these approaches should work, in theory
@@engineerprompt For some reason it doesn't. Memory gets reset every time someone enters a query in chat.
Is it also possible to add author_ids?
Is there a solution for when utilizing the ChatGPT API?
How do I add memory to a load_qa chain or RetrievalQA chain?
Next video :)
@@engineerprompt Thank you very much :) I was working on chat-with-PDF with memory, using load_qa and RetrievalQA, but I couldn't add a memory object directly. Can you suggest any solution? I need it urgently.
Hi @PromptEngineering
If I have a list of products and a list of orders, is it possible to add them to memory? If so, how can I do it?
Thanks!!!
Yes, just save them using the memory.save_context or you can add them as context using the document retrieval approach. Watch my localGPT video.
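LangChain's `ConversationBufferMemory.save_context` takes an inputs dict and an outputs dict and appends them to the running transcript. Here is a dependency-free stand-in that mimics that shape, so you can see what saving arbitrary facts (like product or order lists) into the buffer amounts to:

```python
class BufferMemory:
    """Minimal stand-in for a conversation buffer: save_context appends a turn,
    load_memory_variables returns the transcript as one string."""
    def __init__(self):
        self.turns = []

    def save_context(self, inputs, outputs):
        self.turns.append(("Human", inputs["input"]))
        self.turns.append(("AI", outputs["output"]))

    def load_memory_variables(self):
        # Render the whole buffer the way it would be injected into a prompt.
        return {"history": "\n".join(f"{who}: {text}" for who, text in self.turns)}

memory = BufferMemory()
memory.save_context({"input": "We sell 3 products: A, B, C"}, {"output": "Noted."})
print(memory.load_memory_variables()["history"])
```

The real LangChain class has the same call shape; this sketch just makes visible that the buffer is ultimately a transcript string prepended to future prompts.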
@@engineerprompt Thank you. My data is in a DB now; could you please suggest how to prepare that data as the input format for ingestion?
How can I save this buffer memory in MongoDB?
How big would the memory get after a 1-week conversation where only facts are saved and validated only by me? 1 TB? 5 TB?
That will depend on the amount of conversation, BUT keep in mind that all these LLMs have limited context windows (16k tokens for gpt-3.5), so anything in memory beyond that isn't going to be useful. You probably want to look at embeddings at that point.
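One way to stay inside the context window is to keep only the last k turns, which is the idea behind LangChain's ConversationBufferWindowMemory. A dependency-free sketch of that windowing behavior:

```python
from collections import deque

class WindowMemory:
    """Keep only the last k turns so the prompt stays inside the model's
    context window; older turns are silently dropped."""
    def __init__(self, k=3):
        self.turns = deque(maxlen=k)  # deque drops the oldest turn automatically

    def add_turn(self, human, ai):
        self.turns.append((human, ai))

    def as_prompt(self):
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)

mem = WindowMemory(k=2)
for i in range(5):
    mem.add_turn(f"message {i}", f"reply {i}")
print(mem.as_prompt())  # only the last 2 turns survive
```

For week-long factual histories, windowing loses old facts by design, which is why embedding-based retrieval is the better fit at that scale.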
Can this work with a voice OpenAI chatbot in Python?
Yes
15:28 Hi, I have checked out your Calendly schedule and would like to have a conversation with you, but I need a preliminary conversation before I pay your consulting fee for a 45-minute session. What you have described in this video is very close to the problem I am trying to solve. I would like to discuss that, and if you're able to solve it, I will gladly pay you for your time.
Email me and let's chat.
Thanks for trying but the video quality is very poor.
Do you think you can also train an LLM using the memory module?