MemGPT 🧠 Giving AI Unlimited Prompt Size (Big Step Towards AGI?)

  • Published 15 Jun 2024
  • In this video, we look at MemGPT, a new way to give AI unlimited memory/context windows, breaking the limitation of highly restrictive context sizes. We first review the research paper, then I show you how to install MemGPT, and then we have special guests!
    Enjoy :)
    Join My Newsletter for Regular AI Updates 👇🏼
    www.matthewberman.com
    Need AI Consulting? ✅
    forwardfuture.ai/
    Rent a GPU (MassedCompute) 🚀
    bit.ly/matthew-berman-youtube
    USE CODE "MatthewBerman" for 50% discount
    My Links 🔗
    👉🏻 Subscribe: / @matthew_berman
    👉🏻 Twitter: / matthewberman
    👉🏻 Discord: / discord
    👉🏻 Patreon: / matthewberman
    Media/Sponsorship Inquiries 📈
    bit.ly/44TC45V
    Links:
    MemGPT Website - memgpt.ai/
    MemGPT Discord - / discord
    MemGPT Code - github.com/cpacker/MemGPT
    Install Instructions - gist.github.com/mberman84/6c1...
    Dataset - huggingface.co/MemGPT
    Autonomous Agents - • Fully Autonomous NPCs ...
    Chapters:
    0:00 - MemGPT Research Paper
    25:10 - MemGPT Installation Tutorial
    30:21 - Special Guests!
  • Science & Technology

COMMENTS • 576

  • @matthew_berman · 7 months ago · +170

    So who’s building something with AutoGen + MemGPT?

    • @zappy9880 · 7 months ago · +10

      Please do! AutoGen had blown my mind before, and now, combined with this, it could be unstoppable!

    • @TheRealDOSmile · 7 months ago · +14

      I'm currently working on something very similar to that.

    • @codescholar7345 · 7 months ago · +17

      Ha! I was just going to suggest that. How can we get it working with a local LLM and AutoGen?

    • @randotkatsenko5157 · 7 months ago · +1

      ​@@TheRealDOSmile How to contact you?

    • @mavvemavve3498 · 7 months ago · +1

      I probably am ;)

  • @davidbaity7399 · 7 months ago · +71

    As an older developer: we used 'virtual memory' because in 1989 computers only had 640k, and in DOS there was no OS memory management. We would swap CAD/CAM geometry objects in and out of memory as they were needed.
    Please keep us informed as this project moves forward, especially when it can use open source LLM's.

    • @JorgetePanete · 7 months ago · +2

      LLMs*

    • @robinvegas4367 · 7 months ago · +6

      Hold up a sec, I gotta find disk 2

    • @FamilyManMoving · 5 months ago · +3

      The more things change, the more they stay the same. I've been writing code professionally for 30 years, and every generation of 20-somethings "discovers" something some greybeard taught me when I was 20-something.
      Virtual context management. Imagine that. New since about 1970.

    • @snooks5607 · 4 months ago · +1

      Nitpick: a PC from 1989 likely had more RAM than 640k; DOS by default just couldn't address more than 1MB directly (with 384k reserved for the system, leaving 640k for the user) because of a legacy architectural limitation of the original IBM PC from 1981 and the holy tenets of backwards compatibility.
      Since around DOS 4.0, in the backwards-compatible "real mode", HIMEM.SYS and EMM386 could give access to higher memory areas, but the proper way was to switch to "protected mode", which could address the rest of the system memory directly (16MB for the 24-bit 286, 4GB for the 32-bit 386), usually via an extender library like DOS/4G; those existed in '89 but maybe weren't so widely spread yet.

    • @davidbaity7399 · 4 months ago

      @@snooks5607
      You need to understand that at $1,500 per MB of memory, there were not many computers with more than a MB.

  • @ZeroIQ2 · 7 months ago · +146

    AGI would be impossible without a memory system, so I agree this is another step towards it. It's really cool.

    • @matthew_berman · 7 months ago · +6

      🎉🎉

    • @kloszi · 7 months ago · +1

      I have the same feelings

    • @Bargains20xx · 7 months ago · +1

      AGI doesn't need to be a memory machine. An AGI good enough at comprehension and decision making is enough. Now if you're talking about AGI with consciousness, we're talking about Elon Musk-level extinction

    • @Madman-bi5bf · 7 months ago

      What could be accomplished by combining MemGPT with AI like ChatGPT?

    • @akarna69 · 7 months ago

      @@kloszi no one cares. 😄

  • @middleman-theory · 7 months ago · +29

    Your channel has distinctly carved its niche in the AI YouTube arena. Among the myriad of AI YouTubers I'm subscribed to, your channel, particularly over the last six months, has excelled in quality, presentation, and professionalism. Your videos have become my go-to source, superseding others that now seem laden with filler content.
    Your knack for diving straight into the core topic, elucidating not only the 'what' but the 'why,' is refreshing. The structured walk-throughs, practical guidance, and anticipatory glimpses into the future keep me engaged throughout. Your closing phrase, "And...I'll see you in the next one," has amusingly become a segment I look forward to; it encapsulates the essence of your engaging delivery.
    Being a part of your channel feels like being immersed in a thriving community. The clear, concise factual delivery, balanced with simplicity, makes the content accessible for newcomers while remaining enriching. Despite the crowded space of AI discussions on YouTube, your channel effortlessly ranks within my top 10.
    Thank you for the enriching content and the community you've fostered.

    • @matthew_berman · 7 months ago · +4

      This is such a kind comment, thank you so much!! Glad you’re learning from my videos :)

    • @theChotkiyOne · 7 months ago · +2

      I agree, but this looks like it was written by GPT

    • @karlortenburg · 7 months ago · +1

      Well deserved and well said! Amazing how you explain these matters for everyone. Any exec will be so pleased to have you guide them.
      And btw it doesn't matter whether the words were perfected by AI, it's the thought - the gratitude that counts.

    • @PanamaRed917 · 7 months ago · +1

      @@theChotkiyOne that is exactly what I was just saying. LMAO

  • @ytpah9823 · 7 months ago · +69

    🎯 Key Takeaways for quick navigation:
    00:00 🧠 AI currently lacks memory beyond training data and is limited by its context window.
    00:29 📈 Progress has been made on increasing context window size, but it remains limited (e.g., GPT-4 offers 32,000 tokens).
    00:58 📚 Introducing MemGPT: a solution to expand AI's memory. The video reviews the research and the open-sourced code.
    01:11 ✍️ The paper, titled "MemGPT: Towards LLMs as Operating Systems," has several authors from UC Berkeley.
    01:51 🗣️ Limited context window issues arise especially in long-term chat and large document analysis.
    02:20 💽 MemGPT mimics computer OS memory management, giving the "appearance" of large memory resources.
    03:27 📊 Increasing context window in Transformers is not optimal due to computational and memory costs.
    04:08 🔄 MemGPT autonomously manages its memory through function calls, enhancing its ability.
    04:52 🖥️ Diagram explanation: inputs go through parsers, get processed in virtual contexts (main and external), and are outputted after further processing.
    06:14 🖱️ MemGPT allows AI to self-manage context, treating longer context as virtual memory and its own context as physical memory.
    06:40 📟 Main context (like RAM) has a size limit while external context (similar to a hard drive) is virtually unlimited.
    07:08 📏 Various models have different token limits, impacting how many messages can be processed.
    07:48 ⚠️ Actual usable context is often less than advertised due to system messages and other requirements.
    09:00 🔄 Recursive summarization is another way to manage limited context, previously discussed in another video.
    09:15 🧠 MemGPT stores its "memories" in a vector database, but it eventually compresses them through a process called "reflecting on memories" to manage space.
    09:56 🔄 Recursive summarization can address overflowing context but is lossy, leading to gaps in the system's memory, much like video compression degradation.
    10:38 📝 MemGPT splits context into: system instructions, conversational context (recent events), and working context (agent's working memory).
    12:02 🎂 MemGPT can store key information from conversations in its working context, as shown by a birthday conversation example.
    12:43 💽 External context acts as out-of-context storage (like a hard drive), separate from the main context but can be accessed through function calls.
    13:25 🔍 There are two types of external contexts: recall storage (history of events) and archival storage (general data store for overflow).
    14:09 🧩 MemGPT manages its memory using self-directed memory edits and retrievals, executed via function calls and based on detailed memory hierarchy instructions.
    15:32 🔄 MemGPT can correct its memory when false information is detected, updating its stored context.
    16:14 🤖 The effectiveness of MemGPT as a conversational agent is evaluated based on its consistency (alignment with prior statements) and engagement (personalizing responses).
    17:10 🎵 Through a function call, MemGPT can delve into its past memory to recall previous conversations, like discussing a music artist.
    17:52 🕰️ Deep Memory Retrieval (DMR) enables the agent to answer questions that refer back to very specific details from past conversations.
    18:05 📊 The accuracy of MemGPT's responses is better than GPT-3.5 or GPT-4 alone.
    18:19 🍪 Personalized conversation openers (like referencing a user's cookie preference) increase user engagement.
    19:01 ☕ Examples illustrate how MemGPT uses context and recall differently to engage with users.
    20:12 📜 Many documents exceed the token limits of current models, creating challenges in document analysis.
    21:06 🧠 Large language models exhibit a bias toward recalling information at the beginning or end of their context, mirroring human memory patterns.
    22:44 📈 Charts indicate that MemGPT maintains consistent accuracy regardless of the number of documents or nested information, unlike GPT-3.5 and GPT-4.
    23:12 ⚖️ A trade-off with MemGPT is that some token budget is used for system instructions.
    23:41 🤖 Discussion of LLMs as agents and their emergent behaviors in multi-agent environments.
    24:21 💻 Tutorial on how to activate and use MemGPT, starting with code setup.
    27:35 📁 MemGPT's document retrieval feature allows users to chat with their documents; using wildcards can fetch multiple text files.
    28:15 💵 Embedding files comes with a computational cost; the example given shows 3 documents for 12 cents.
    28:44 🔄 MemGPT's persona is customizable, allowing users to tailor how the model interacts with information, like referencing archival memory.
    29:38 🔍 MemGPT can retrieve specific data from documents, such as annual revenues of companies.
    30:06 🌐 The introduction to MemGPT emphasized its rapid evolution and potential for open-source model support in the future.
    30:33 🎙️ Interview with MemGPT authors Charles and Vivian discussing inspiration and plans for the project.
    30:46 🧠 MemGPT addresses the memory limitations of current language models by actively saving crucial data into a permanent memory store (a rough sketch follows below).
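    As a point of reference, here is a minimal sketch of the main-context / external-context split these takeaways describe. This is an illustration only, not MemGPT's actual API; the method names loosely follow the function calls shown in the video.

    ```python
    # Minimal sketch of a MemGPT-style memory hierarchy: a token-budgeted main
    # context ("RAM") that spills overflow into unbounded archival storage
    # ("hard drive"). Names are illustrative, not the real MemGPT interface.
    from dataclasses import dataclass, field

    @dataclass
    class VirtualContext:
        token_budget: int = 8000                  # main context acts like RAM
        system_instructions: str = ""             # fixed prompt overhead
        working_context: list[str] = field(default_factory=list)  # agent's scratchpad
        fifo_queue: list[str] = field(default_factory=list)       # recent messages
        archival: list[str] = field(default_factory=list)         # unbounded store

        def tokens_used(self) -> int:
            # crude estimate: roughly 4 characters per token
            text = self.system_instructions + "".join(self.working_context + self.fifo_queue)
            return len(text) // 4

        def working_context_append(self, fact: str) -> None:
            # pin a key fact (e.g. a user's birthday) into working memory
            self.working_context.append(fact)
            self._evict_if_needed()

        def archival_memory_insert(self, text: str) -> None:
            self.archival.append(text)

        def archival_memory_search(self, query: str) -> list[str]:
            # stand-in for the vector-database similarity search described above
            return [t for t in self.archival if query.lower() in t.lower()]

        def add_message(self, msg: str) -> None:
            self.fifo_queue.append(msg)
            self._evict_if_needed()

        def _evict_if_needed(self) -> None:
            # under memory pressure, spill the oldest messages out of main context
            while self.tokens_used() > self.token_budget and self.fifo_queue:
                self.archival_memory_insert(self.fifo_queue.pop(0))
    ```

    Here `working_context_append` plays the role of the birthday example at 12:02, and the eviction loop is the "overflow into archival storage" behavior at 13:25.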

    • @tmhchacham · 7 months ago · +4

      Wow, nice. Thank you!

    • @eraldcala9125 · 7 months ago · +6

      What did you use for this?

    • @captanblue · 7 months ago · +1

      What was used for this?

    • @Madman-bi5bf · 7 months ago · +1

      Sounds pretty complicated; regardless, things like ChatGPT could use this to improve the performance of the AI they use, right?

    • @RandomButBeautiful · 7 months ago · +6

      @@eraldcala9125 I think it's HARPA AI. I'm seeing tons of videos spammed with this... already over it lol

  • @bertilhatt · 7 months ago · +5

    Separating the conversation from an internal dialogue the system can have will prove very helpful: you can ask where the system has learned something to prevent hallucinations, have a space to run logical reasoning until confirmation, and now spout, “The ball has to be 10c and the bat $1.10… Wait, no.”
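    A hedged sketch of that separation, assuming a MemGPT-style agent that emits an inner monologue plus an explicit send_message call; the JSON shape is illustrative, and only the send_message payload ever reaches the user.

    ```python
    # Sketch: keep the model's inner reasoning out of the visible conversation.
    # The output format here is an assumption, not MemGPT's exact schema.
    import json

    raw_model_output = json.dumps({
        "inner_thoughts": "10 cents can't be right: the two items would differ "
                          "by $1.00 only if the ball costs 5 cents. Recheck first.",
        "function": {"name": "send_message",
                     "arguments": {"message": "The ball costs 5 cents."}},
    })

    def log_private(thought: str) -> None:
        # store reasoning for later audit ("where did you learn that?")
        with open("inner_monologue.log", "a") as f:
            f.write(thought + "\n")

    def handle(output: str) -> None:
        step = json.loads(output)
        log_private(step["inner_thoughts"])          # never shown in chat
        call = step.get("function")
        if call and call["name"] == "send_message":  # only this reaches the user
            print(call["arguments"]["message"])

    handle(raw_model_output)
    ```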

    • @Shinkaze33 · 7 months ago · +2

      Yes, self-awareness would greatly improve LLMs... some humans need to learn that skill too!

  • @chrismadison8946 · 7 months ago · +1

    Love this video and thanks so much for the in-depth post! Accurately explains the theoretical science along with the practical implementation 🙏🏾

  • @RonnyMW · 7 months ago · +4

    I think the information is valuable and is explained up to the point where you can't understand more without a deep dive into AI. Good job!

  • @redbaron3555 · 7 months ago · +28

    Yes please do another tutorial with MemGPT! This is huge!

    • @matthew_berman · 7 months ago · +4

      Ok!

    • @redbaron3555 · 7 months ago

      @@matthew_berman Thank you!!!👏🏻👍🏻

    • @toddai2721 · 1 month ago

      Please also do a tutorial on Salesforce ai.

  • @djzuela · 7 months ago

    Matthew thank you so much for keeping us up to date. Your rock. Can't wait to play with this.

  • @micklavin · 7 months ago

    Thanks a million Matthew! Your videos are so clear and easy to follow 🙂Looking forward to your follow-up videos on MemGPT and open source models.

  • @basoele7795 · 7 months ago · +11

    🎯 Key Takeaways for quick navigation:
    00:00 🧠 The limitation of AI regarding memory and context window sizes, with previous models having token limitations that hinder its performance in long-term interactions or extensive document analysis.
    02:33 🖥️ Introduction of MemGPT as a solution, mimicking traditional computer memory management systems with fast (RAM-like) and slow (Hard Drive-like) memory for handling larger contexts.
    04:08 💾 Explanation on how MemGPT autonomously manages memory through function calls, creating a virtual memory system for AI to access and manage information beyond fixed context limits.
    06:40 📊 Comparison between the context handling of different models and the real-world limitation of token count even in higher-end models like Claude 2.
    09:56 🔄 Mention of recursive summarization as a method to handle overflowing context windows, but its lossy nature leads to eventual large holes in memory.
    13:25 🗂️ The distinction between two types of external context, Recall Storage and Archival Storage, to store and manage different types of data.
    14:09 📝 Description of how memory edits and retrieval are self-directed and executed via function calls, with a detailed structure to guide the system on how to interact with its memory systems.
    17:10 🔄 Example of Deep Memory Retrieval (DMR) where the system references past conversations to answer current queries.
    18:19 👋 Evaluation of MemGPT on crafting engaging conversation openers by referencing past interactions to enhance user engagement.
    20:12 📜 Addressing the challenge of document analysis with large documents and the limitations of current models' context windows, introducing the potential of MemGPT in handling such tasks.
    21:21 🧠 The comparison of large language models' memory behavior to human memory, where both tend to remember the beginning and end of a list better than the middle.
    22:16 📉 The performance of GPT-3.5 and GPT-4 drops significantly after reaching their context window limits, while MemGPT maintains performance regardless of the number of documents retrieved.
    23:12 🔄 MemGPT requires system instructions for memory management which consumes a portion of the token budget, a trade-off for its enhanced document retrieval capacity.
    23:54 🤖 Reference to Park et al. paper on enabling memory in LLMs (Large Language Models) and observing emergent social behaviors in a multi-agent environment.
    Made with HARPA AI

  • @thegooddoctor6719 · 7 months ago

    D@MN you're good. You are on the forefront. Thanks for finding the material, breaking it down, and explaining how to implement it..... It is much appreciated....

  • @tdb2012 · 7 months ago

    I recently found this channel and really enjoy the videos. Great job Matt.

  • @UnicoAerta · 7 months ago

    That video was awesome, very informative. I love how you ACTUALLY read the paper during the video

  • @MarkusEicher70 · 7 months ago · +1

    Thanks a ton, Matthew! That's such great news. One step closer to a real LLM-OS. Can't wait till they implement open-source model support. I also would like to see how things like LangChain, HuggingFace and others can get integrated into solutions. Would highly appreciate another video about these topics from you. Thanks for your great work! 💪

  • @danberm1755 · 7 months ago

    Well done! That was brilliant and the synergy between NN OSs and AutoGen seems like the way forward for sure.

  • @stickmanland · 8 months ago · +52

    Man! I for one, am fully ready to welcome our AGI overlords!

    • @Seriph001 · 7 months ago · +3

      I'm right there next to you my friend

    • @DodoJo · 7 months ago · +2

      @@Seriph001 I'm right behind you bro.

    • @randotkatsenko5157 · 7 months ago · +2

      Bow to the chosen One.

    • @Romulusmap · 7 months ago · +2

      Same

    • @andrewxzvxcud2 · 7 months ago · +4

      this meme is so overdone i cringe every time i see it

  • @wingflanagan · 7 months ago

    Wow. I just set up my own MemGPT bot on Discord and had a long conversation. Impressive, though still a bit artificially cheerful. Thanks for this!

  • @friendofai · 7 months ago · +2

    This was such a good episode. The fact that the LLMs have memory like humans, remembering the first and last items best... wow. I want this. Great episode!

  • @J2897Tutorials · 7 months ago · +7

    My favourite open source model is currently _Falcon 180B_ with the web search feature. I was impressed by M$'s _Bing Chat_ in Edge, but I mainly use Falcon instead now, since it seems just as good for grabbing information from the web, at least from my perspective. Although I don't fancy paying to run Falcon on a server, just to test it with MemGPT, despite my eagerness to try it out. It could be interesting if there was a _Falcon 180B_ API, similar to OpenAI's API, only much cheaper.

  • @tomt215 · 7 months ago · +9

    Please let us know and do this again when they have open source models!

  • @remsee1608 · 7 months ago · +40

    Some of the new Mistral-based local LLMs have 32k context and hence beat GPT-4 at certain tasks; it's amazing

    • @matthew_berman · 7 months ago · +3

      Good to know!

    • @avi7278 · 7 months ago · +11

      which ones exactly?

    • @remsee1608 · 7 months ago

      @@avi7278 I used TheBloke/MistralLite-7B-GGUF and it was good. TheBloke/Mistral-7B-Phibrarian-32K-GGUF is another option I've tried; it wasn't as good for what I was doing, but it might be better on academic datasets

    • @emmanuelkolawole6720 · 7 months ago · +12

      TheBloke/Mistral-7B-Phibrarian-32K-GGUF

    • @emmanuelkolawole6720 · 7 months ago · +5

      TheBloke/Llama-2-7B-32K-Instruct-GGUF

  • @JonathanPohlner · 7 months ago · +1

    always excited to see what you're posting next, really excited for more on AutoGen series

  • @titusfx · 7 months ago · +1

    🎯 Key Takeaways for quick navigation:
    00:00 🧠 AI's lack of memory is a significant hurdle to improving artificial intelligence.
    00:29 💾 Current AI context windows are highly limited, even in large models like GPT-4.
    01:24 📄 MemGPT (Memory GPT) introduces a solution to expand AI's memory capacity.
    02:06 🖥️ MemGPT aims to mimic the memory management of an operating system, with RAM and hard drive equivalents.
    03:27 📊 Increasing context length in AI models incurs significant computational cost.
    04:23 🤖 MemGPT autonomously manages its memory through function calls, allowing dynamic context adjustments.
    05:32 🔄 MemGPT divides memory into main context (like RAM) and external context (like a hard drive).
    06:14 📊 Large parts of the context in AI models are used for system messages and pre-prompts.
    08:03 🤯 Recursive summarization, a previous approach, leads to significant memory loss over time (see the sketch after this list).
    10:09 🧠 MemGPT can correct false information and update its memory during conversations.
    12:58 💾 MemGPT uses recall and archival storage to manage external context efficiently.
    14:23 📜 MemGPT performs self-directed memory edits and retrieval via function calls.
    16:14 🗣️ MemGPT excels in maintaining conversation consistency and engagement.
    20:12 📄 MemGPT addresses document analysis challenges posed by lengthy documents.
    21:06 🔄 Scaling context alone doesn't solve uneven attention distributions in large AI models.
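    Since the lossiness of recursive summarization comes up at 08:03, here is a minimal sketch of the mechanism, assuming any LLM wrapped as a plain prompt-to-completion callable. Repeated summarize-the-summary passes are exactly where the "holes in memory" come from.

    ```python
    # Sketch of recursive summarization: when history overflows, the oldest
    # half is replaced by a summary; summaries of summaries lose detail.
    def summarize(llm, text: str) -> str:
        # `llm` is assumed to be any prompt -> completion callable
        return llm(f"Summarize the following in under 100 words:\n{text}")

    def compact_history(llm, history: list[str], max_msgs: int = 20) -> list[str]:
        if len(history) <= max_msgs:
            return history
        half = len(history) // 2
        head, tail = history[:half], history[half:]
        # earlier summaries get re-summarized here: this step is lossy
        return [summarize(llm, "\n".join(head))] + tail
    ```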

  • @thenoblerot · 7 months ago · +5

    One of my first function-calling experiments was having GPT-4 manage a couple of its own context windows, and it really does a good job! Told it to use regex. Didn't go to this scale tho... Sounds really expensive!!!
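    For reference, something in the spirit of that experiment using OpenAI's tool-calling interface: the model decides when to invoke a hypothetical working_context_append tool to save a fact. Only the tool name and schema here are invented; the client calls are the standard OpenAI Python API.

    ```python
    # Sketch: let GPT-4 manage its own memory via tool calls.
    # `working_context_append` is a hypothetical tool, not a real API.
    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    tools = [{
        "type": "function",
        "function": {
            "name": "working_context_append",
            "description": "Save a key fact about the user to working memory.",
            "parameters": {
                "type": "object",
                "properties": {"fact": {"type": "string"}},
                "required": ["fact"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "By the way, my birthday is March 14."}],
        tools=tools,
    )

    memory: list[str] = []
    for call in resp.choices[0].message.tool_calls or []:
        if call.function.name == "working_context_append":
            memory.append(json.loads(call.function.arguments)["fact"])
    print(memory)
    ```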

  • @ShaneHolloman · 7 months ago · +1

    Thanks for the great content, I've learned a lot from your AI curation.
    Due to the pervasive sound effects I use subtitles on your channel. Keep up the great work

    • @matthew_berman · 7 months ago

      You don’t like the sound effects you’re saying? I’ll reduce them in future videos if people don’t like them.

  • @robertbyer8189 · 7 months ago · +3

    Love the videos. Definitely want to see more on MemGPT as I believe this is going to be the next huge move in development.

  • @mordokai597 · 7 months ago · +1

    Things like textgen have QLoRA training built in that runs on fairly low-spec hardware... add an option to train a LoRA from the long-term memory on a schedule. Start with a default LoRA trained on synthetic MemGPT input/output text pairs with the full MemGPT system header, then use shorthand system messages during inference to give it 'reminders' of whatever aspect of the complete system protocol is most important for that step.

  • @mlg4035 · 6 months ago

    Very cool and valuable information! Thank you! I am looking forward to them adding open-source LLMs!

  • @JimMendenhall · 7 months ago · +5

    Thanks for digging into this and explaining it so well. I have looked at this project a couple of times and didn't quite "get" it. Keep up the good work!

  • @SamDig · 7 months ago

    I loved your simple explanation of MemGPT; thank you!

  • @user-vz5dv7xb6l · 7 months ago · +3

    This was the first thing I thought of when I learned about token limits. I even asked GPT to create a micro shorthand language to condense info. It didn't work in April, but it seems like we're getting close!

  • @Leonid.Shamis · 7 months ago · +3

    Thank you very much for sharing this information! I'm very interested in using MemGPT with open-source LLM models installed locally. If you come across any new developments in that space, I would highly appreciate hearing about it!

  • @peterwan小P · 5 months ago

    Wow, that's amazing! Thanks for sharing (you and the researchers as well)!! 🙏🙏🙏

  • @keithbrings9053 · 7 months ago

    Glad to see the progress; I've been working on a solution using roughly the same approach for months now.

  • @user-hc5nh8kv7g · 7 months ago

    Gotta add this one to the MemGPT playlist, brotha. Thanks for the great vids, love you long time

  • @davidallred991 · 7 months ago · +3

    Great video, exciting stuff. Memory access is a huge limiting factor, especially within coding projects, so I can see this really moving things forward. It seems like this would give you the benefit of a huge LLM like ChatGPT that can then be "trained" or augmented with your specific use case and data set while still retaining all of its full training data.

  • @kevon217 · 7 months ago

    Great and helpful walkthrough. Love your channel.

  • @Artavazd.kirakosyan · 7 months ago

    I got to watch your video a 2nd time. Your video is a huge boost for my startup idea. Thanks a lot

  • @mshonle · 7 months ago · +3

    About lossy compression: it’s fascinating to me that lossy *text* compression can act as a normalizer, including replacing misspelled words or typos. I wonder if the output of recursive reflection is text or an embedding? As embeddings they can have more nuance than can be expressed in words (eg, “like a unicorn but even more mythical”) but that nuance could accumulate noise as well.

  • @MCNarret · 7 months ago · +2

    They should use both the uncompressed and compressed memories; the compressed memories offer a "preview" to the AI, from which it can then call up more details if it needs to (a sketch of the idea follows below).
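    A minimal sketch of that preview-plus-fetch idea, with illustrative names (store/context_block/expand are not from MemGPT):

    ```python
    # Sketch: keep cheap one-line previews in context, and fetch the full
    # uncompressed memory on demand (e.g. via a function call).
    full_memories: dict[str, str] = {}  # id -> uncompressed text ("hard drive")
    previews: dict[str, str] = {}       # id -> one-line summary kept in context

    def store(mem_id: str, text: str, summary: str) -> None:
        full_memories[mem_id] = text
        previews[mem_id] = summary

    def context_block() -> str:
        # what the model always sees: previews only
        return "\n".join(f"[{mid}] {s}" for mid, s in previews.items())

    def expand(mem_id: str) -> str:
        # called when a preview looks relevant to the current turn
        return full_memories[mem_id]

    store("m1", "Long transcript of the user's 2019 trip to Japan ...",
          "User visited Japan in 2019")
    print(context_block())  # -> [m1] User visited Japan in 2019
    print(expand("m1"))
    ```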

  • @theresalwaysanotherway3996 · 7 months ago

    Wow, very exciting video. It'll be awesome to get an updated video once they release a way of running open-source models with this, even if they're not entirely capable yet. I'd wager that if Mistral can successfully scale their models up to ~34B, they'll probably be able to be fine-tuned into a very competent function-calling model!

  • @goodwill_ken · 7 months ago

    Please do more! Great content, lad! Learning and using so much!

  • @alexjenkins8026 · 7 months ago

    Epic vid thanks for the insight.
    Seems like a much better solve than the attention sink paper.
    Excited to see this in the wild.
    The very basic install instructions seemed out of place.

  • @SassePhoto · 7 months ago · +3

    As always, highest quality content, many kind thanks!

  • @navigatingsideways · 7 months ago

    Thanks 🙏 for all of the highlights. I have trouble focusing on my Sales job because I am trying so hard to learn bot 🤖 skills and reconsidering juggling the Newsletter information

  • @curtkeisler7623 · 7 months ago

    Definitely want a tutorial with open source models and thank you so much for doing all of this I've learned a ton from you

  • @mstew8386 · 7 months ago

    Thanks for doing a video about MemGPT I can't wait to see what can be done with all this.

  • @alx8439 · 7 months ago · +6

    The issue with uneven attention in the context window (the phenomenon where only the beginning and end are memorized well, while everything in the middle is foggy and blurry) was partially solved by Mosaic with their MPT models

  • @nathanbollman · 7 months ago · +7

    It looks like UC Berkeley intends to release their own tuned version of Mistral-7B. Sounds like that project combined with their memory system might have some amazing results for local independent research. Interesting that they are on Mistral 7B and not Llama 2 7B or Llama v3; this is institutional recognition of the value in this new, open, commercially viable model and its plasticity to fine-tuning... I can't wait to see what comes of it. Definitely make a vid when it's working with a local LLM; I suspect that if Berkeley is tuning Mistral for this use case it *could* be all local!

    • @lauridskristensen9800 · 7 months ago

      I've almost exclusively heard of Berkley in relation to jazz music education, so I can't help but wonder if they're *tuning* it to the jazz standards of "The Real Book"?! :D

  • @pavellegkodymov4295 · 7 months ago

    Great, thanks a lot for a valuable update, Matthew!

  • @theaugur1373 · 7 months ago

    I love that this came from young researchers and not from more senior ppl at a big company.

  • @Christopher-today · 7 months ago · +1

    Amazing bit of work by this team.
    A thought... While I'm not going to be silly and say open-source models are currently as good all-around as OpenAI's offerings, they're close in so many regards and are catching up fast in most areas. BUT where OpenAI really has a lead is in things like function calling. I'm really, really hoping we see some innovation in this area in the open-model space soon. Thankfully I do think that innovation is coming, and OpenAI's closed ecosystem is going to be under more and more pressure. IMO open models will eventually win. Thanks for the coverage.

  • @sveindanielsolvenus · 7 months ago · +11

    Once we have a robust way of handling memory, like MemGPT, we can simply fine tune the LLMs to utilize the system. Then we no longer need to use context window space for the system prompt to operate the memory. The LLM will just "naturally" do it.

    • @gidmanone · 7 months ago · +1

      you can simply fine-tune right now for that

    • @sveindanielsolvenus · 7 months ago

      @@gidmanone Yes, once we can fine-tune GPT-4. But it would be better if OpenAI implemented this directly themselves.

  • @PietroSperonidiFenizio · 7 months ago · +2

    Matthew, this is an amazing video. Remember this format, it's really good. Of course there must be a paper which is as good as this, but your way of explaining it is really well done

    • @matthew_berman · 7 months ago · +1

      Much appreciated. I think some people didn't like the glitch transition or the sound effects. But can't please everyone!

    • @PietroSperonidiFenizio · 7 months ago

      @@matthew_berman I have not noticed any glitch transition. But maybe my brain is running at too few hertz to notice it 😉

  • @AaronSherman · 7 months ago

    Definitely would love follow-up on the future open source model usage!

  • @raroca23 · 7 months ago

    Wow, very inspiring video. I'm working on my PhD and this would be a must for it

  • @unc_matteth · 7 months ago

    This looks super neat. I have been having fun, though, with the creativity of LLMs by specifically pushing them past their context and token limits; that's when you seem to get some good creativity. Though that's kinda the opposite of what you are going for here haha. Great video buddy

  • @iamjimgroth · 7 months ago

    I started writing something like this a few days ago. Realized it's a monumental task. So glad someone beat me to it. 😁

  • @rickhoro · 7 months ago

    Super exciting project! I totally agree that document chat is a key app. Please do another video when they support an open source LLM.

  • @phonejail · 7 months ago

    This was such a great breakdown, even I understood it. Wow.

  • @grahamschannel9705 · 4 months ago

    Can't wait for the open-source model. Thanks so much for presenting this information.

  • @JimLove1 · 7 months ago · +1

    I like all your stuff but this video blew me away. Even though you include a transcript, I had to keep stopping it to make notes. Well done. The only place I stumbled was the many different but slightly similar constructs. I'm still working to wrap my mind around that. For instance, you had a reference to system instruction, conversational context and working context. Later you refer to recall storage and archival storage which I assume are the same as main context and external context. Later you have working context and recall. I'm sure it's just me, but I'm trying to sort that out in my own mental model. But again, well done!

  • @mvasa2582 · 7 months ago

    Matt, MemGPT is a further abstraction of the context window from the application level (chat) to the OS level. An OS-level context window could be in-memory (similar to a cache) and on-disk. Cache size can be controlled: anything that needs to be in context is cached, and the rest is flushed to disk. This process is operated via function calls. A long context window is almost essential to maintain a holistic context.
    The context is saved on your personal or work environment/device according to your usage. Context can be leveraged for any required automation or for building work efficiencies.
    🙂 How different is this from a traditional Windows Registry with name-value pairs!! 🙂

  • @dominiccogan945 · 8 months ago · +5

    I literally was just about to ask about a memGPT your a freak…. You earned that sub

    • @93cutty · 7 months ago · +2

      I joined the discord the other day, it's pretty awesome in there too

    • @adelinrapcore · 7 months ago

      you're*

    • @dominiccogan945 · 7 months ago

      @@adelinrapcore why does that always happen. Not lying I always mess it up and someone corrects me.

    • @matthew_berman · 7 months ago

      Haha thank you. I’m reading your mind :)

    • @matthew_berman · 7 months ago · +1

      @@93cuttywelcome!

  • @ReanCombrinck · 7 months ago · +10

    Please keep following this with opensource! Great for analysis

  • @HisWorkman · 7 months ago

    Thank you, for this video it was awesome. Yes, I would love to see you implement this with open source models.

  • @ryzikx · 7 months ago · +2

    9:59 As an amateur author I use recursive summarization to communicate my ideas to LLMs all the time, so I can't wait to see if this will be better

  • @robertheinrich2994 · 7 months ago

    I wonder if that could be used, for example, for applications to companies (for work): essentially, create a CV based on the profile of the company. For this, it would need to know a lot about the user and know which information is relevant for a job application and which is not. Maybe the system also needs something like memory files? Essentially, store important facts about a person together so they can be queried together.
    I see massive potential in these context-based use cases.

  • @isitanos · 7 months ago · +1

    A lot of things discussed here are very similar to how human memory works. We can hold a limited amount of data in our short attention window. Our brain can store a lot of long-term info but buries it deeper and out-of-reach if it thinks it's not currently relevant. It also seems to compress memories by letting us remember easily the most important details of an event but burying the rest deeper. And we have all kinds of techniques or "functions" to jog our memory to bring back old data we know we have somewhere, store more short-term stuff efficiently when cramming for an exam, and so forth.

    • @dekumutant · 7 months ago

      The more I think about multi-model systems, the more I see similarities with how our brains divvy up task priorities. It's both freaking me out and exciting me, to be honest

  • @ElleDyson · 7 months ago · +2

    While I acknowledge there are other similar concepts floating around, I think MemGPT's ease of use, documentation and open sourcing make it a great resource. Maybe I need to read the entire paper, but I am curious whether the "working_context_append" feature is self-guided or a schema specified by the programmers, e.g. "Key Personality Trait": did the LLM decide this was something to remember, or was that pre-defined?

  • @Sean.Vosler · 7 months ago

    Thinking about what you're thinking... subconscious analysis of thoughts based on beliefs... Seems like the CPU/RAM/HD analogy could be better replaced by how our minds actually process information. Love this stuff! Thanks for breaking it down

  • @fuba44 · 7 months ago · +1

    HUGE yes from me, please cover it again when it can use Llama or the webUI API :-) suuuper cool project!

  • @leegregory5617 · 7 months ago · +4

    Yes, please do another video if they incorporate open source. This looks awesome, but I don't want to use an OpenAI key. Another great video BTW :) You are my go-to AI YouTuber.

  • @500hitcombo · 7 months ago

    You help me so much my dude. Thank you 🙏

  • @nufh · 7 months ago · +7

    I came across your channel and AI-related topics on YouTube by accident. Now I'm hooked; even though what I know is very limited, this thing is really interesting. I started learning Python last week, and I just found out what Docker is today. Do you have any suggestions/references for newcomers like me? I really like the idea of having an AI friend/buddy that we can chat with while it helps us with work.

    • @sashetasev505 · 7 months ago

      He’s a YT/media personality and knows little beyond what he reads in the news, press releases and GPT4 summaries. Certainly not a bad thing-we need dedicated news aggregators since legacy media and trad. sources are inadequate in this sense-but to expect anything more than bulletins, general zeitgeisty commentary (and superficial read-throughs like this) would be misguided. Knowledgeable or even merely competent engineers have bigger fish to fry rn or they are Indian/Asian and have a less polished AV style than this 🤷🏻‍♂️
      Good advice is boring: Use text to learn and YT news to keep up to date. No shortcuts to mastery.

    • @matthew_berman · 7 months ago · +1

      Thanks for joining! Just go through my videos and work with an AI to learn Python :)

    • @matthew_berman · 7 months ago · +2

      @@sashetasev505 Ouch. I guess my 20+ years of development, multiple tech businesses, and production-level AI implementations don't count for much. 🤷‍♂️

    • @sashetasev505 · 7 months ago

      @@matthew_berman No insult intended, just no hints of any of that apparent. 🤷🏻‍♂️ Your current line of work is as a medium. Do regale us with (evidence of) your dev lore and business acumen.

    • @ludoviclebleu · 7 months ago

      @sashetasev505 With respect, I have to disagree; I think that's uncalled for and inaccurate. Matthew is doing much more than reading the news: he's curating the tech and showing how to install and use it. The specific applications we build with the tech are our gig. There's no way he could also cover use cases and scenarios at this pace of tech releases, and he would leave cases out anyway.
      He releases great and USEFUL content several times per week that, at least for me, would take hours I don't have, cos I'm using this info to actually build my cases/scenarios.
      I'm so thankful for his work. I'd actually love to see a tutorial to "wrap it all up so far", encompassing most of the tech he's curated, reviewed and used over the months into a macro system to build my applications on: MemGPT + AutoGen, with the open-source LLMs he has tested and shown would make the best agents (Llama, LLaVA, Mistral, Falcon...), plus GPT-4 and Claude 2, and DALL-E 3 and SDXL on top. And on RunPod on demand (pay as you go).
      Out of all the creators I follow on AI, Matthew would be the best (only) one who could show how to build such a comprehensive system; I say this based on all his great history here. He has the knowledge, the smarts and the pedagogy to do this. I almost think it's a responsibility by now ;)
      Cheers, @matthew_berman.

  • @cemtural8556 · 7 months ago

    Very promising stuff. Liked, subscribed, following. Keep it coming :-)

  • @luizbueno5661 · 7 months ago

    Yes, please!
    Thank you for this video.
    And please do, as soon as they release it with open-source LLMs. Love your videos.

  • @Monotoba · 7 months ago

    Would love to see more on this technique and on new models for MemGPT.

  • @productjoe4069 · 7 months ago

    This is an exciting research direction. I wish they were using standard terminology from cognitive science though. What they call ‘recall’ storage is properly called episodic memory. What they call ‘archival’ storage is semantic memory. Using established terminology helps researchers find papers, and also can suggest ideas (for example, what’s the equivalent of procedural memory? Is that a useful thing to add?)

  • @gregorykarsten7350 · 7 months ago

    Very ingenious workaround. Although I thought vector stores were the answer? Would definitely like to see a video on open-source vector stores

  • @orotoi1 · 7 months ago

    Amazing news! And yes, of course we want to see it working with open-source models.

  • @jidun9478 · 7 months ago

    Wow, what a brilliant concept!

  • @Artorias920 · 7 months ago · +6

    Brilliant research & Brilliant video! Firm handshakes to you and the MemGPT team 🤝

  • @user-wt7pq5qc2q · 7 months ago

    Awesome information. Keep it up. Cheers Terence

  • @Squallpka1 · 7 months ago

    This one is the AI development I am most excited about. Can't wait for local LLM integration.

  • @skud9999 · 7 months ago

    Gotta point out, that's pretty much an analog of how humans process memory as well. Also, when it says "working_context_append: Key Personality Trait (high-speed, adrenaline-rush activities and intense gaming sessions in CSGO)", a slightly more charitable reading would take the CSGO part as just a descriptor of things that are fast-paced, adrenaline-pumping activities, like Formula One racing.

  • @li-pingho1441 · 7 months ago

    Thank you so much, what a perfect video.
    BTW, we need open-source model support in MemGPT!

  • @crawkn · 7 months ago

    Very comprehensive analysis, thanks. This is encouraging, but I wonder if limitations on memory aren't at least in part a safety feature, i.e. could much larger memories already be in use experimentally, but considered too risky for public use?

  • @ewasteredux · 7 months ago

    Hi Matthew! I have watched many of your recent videos and find the content fascinating. I have a very off the wall question. Considering the current state of the world with all the pervasive political conflict, I thought it would be a good time to reflect on this AI technology and think of a unique use case. If something analogous to a total breakdown of government or even an 'apocalypse' happened, what local AI tool(s) would you want access to in order to survive and or rebuild society?

  • @jp00738 · 7 months ago · +3

    Hey Matthew, great tutorial. Wondering if it's already possible to use local LLMs with it by using OpenAI-format APIs on services like textgen webui?

    • @matthew_berman · 7 months ago

      Possible, yes. Not out of the box though. Also, they are working on making it native.
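      For anyone who wants to try the not-out-of-the-box route in the meantime: many local servers (e.g. text-generation-webui's OpenAI-compatible extension) expose an OpenAI-format endpoint, so the standard OpenAI client can simply be pointed at it. A hedged sketch; the port and model name are assumptions, and whether MemGPT itself accepts a custom base URL depends on the version.

      ```python
      # Sketch: point the standard OpenAI Python client at a local
      # OpenAI-compatible server instead of api.openai.com.
      from openai import OpenAI

      client = OpenAI(
          base_url="http://localhost:5000/v1",  # assumed local endpoint
          api_key="not-needed-locally",         # placeholder; local servers usually ignore it
      )

      reply = client.chat.completions.create(
          model="local-model",  # placeholder; many local servers ignore the name
          messages=[{"role": "user", "content": "Hello from a local LLM"}],
      )
      print(reply.choices[0].message.content)
      ```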

  • @73Ferret · 5 months ago

    An awesome piece. Thank you.

  • @whoareyouqqq · 7 months ago

    Great news, great project! Thank you

  • @daveinpublic · 7 months ago

    At 14:00, "archival" actually means that GPT is formatting the data in a more summarized way, for storage in a simpler form

  • @Martin-kr5nx · 7 months ago

    Defo cover open source models! Great work!

  • @pconyc · 7 months ago

    Definitely interested when this goes open source. Thx for this!

  • @abagatelle · 7 months ago

    Amazing. Thanks very much Matt

  • @davidlavin4774 · 7 months ago · +1

    As this continues to evolve, where does the line fall between fine-tuning the model with additional data and having extended memory for this long-term context? I realize the memory is only for an instance of a model, but do they perform similar functions in some regards? For example, do you upload documents into the model or into the extended context?

  • @TomTrval · 7 months ago

    Hey, that is what I was working on for my Dungeons & Dragons DM AI :D