Download the full project files for this project at my Patreon, along with 250+ other projects: www.patreon.com/posts/chat-with-whole-108904494
Talk with me this Sunday the 28th, AMA for Architect+ level Patrons: www.patreon.com/posts/ama-meetings-for-108628691
Learn to code fast with AI assistance with my 1000x MasterClass: www.patreon.com/posts/1000x-dev-103326330
Search 200+ echohive videos and code download links: www.echohive.live/
Auto Streamer: www.autostreamer.live/
FastAPI course: www.patreon.com/posts/learn-fastapi-26-95041684
Chat with us on Discord: discord.gg/PPxTP3Cs3G
Follow on Twitter (X): twitter.com/hive_echo
This feels like sitting next to a friend with shared interests, leading me through the cool stuff he’s been up to. This is deeply appreciated.
This made my day! Thank you very much. I am happy to hear that my daily endeavors of learning and coding are being appreciated ❤️
Agreed! What a treat to have found this channel.
@dittoXtime thank you very much as well 🙏
Hey man, just wanted to let you know that I've been following you for close to a year now and really appreciate your content, especially your laid-back, down-to-earth style and the fact that you always make your code accessible. Thinking about joining your Patreon once I can make more time to actually engage with the content. Keep up the great work 🙂✌️
Thank you very much for the kind words and the feedback. I am happy to hear you find the projects useful. Appreciate it 🙏
I'm definitely going to be looking through these code files to see how I can adapt them for a large memory system, and see how well it works at least. My assistant needs context awareness: it should have months of context that it can search through to comment in a more relevant way, rather than just a single day's context window available as memory. I can't wait until we have seemingly unlimited context windows, but it might come through tricks like this implemented in the back end, or other approaches combined together. Having a parallel agent that's constantly looking for memories on every submission, especially when there are many, could allow injecting past data into the main agent's context window to make responses more relevant and feel like it has an actual memory system, so that I don't have to constantly re-explain things I've already talked about.
Yeah, that is a great idea! I want to work on a personal memory project as well, similar to what you mentioned. Check out this GitHub repository for memory, too: github.com/mem0ai/mem0
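The memory idea described in the exchange above could look roughly like this. This is a minimal sketch, not taken from the project files or from mem0: it assumes the openai Python client, a small in-memory list of past notes, and illustrative model names ("text-embedding-3-small", "gpt-4o-mini"). The most similar past notes are retrieved for each new message and injected into the main model's context.

```python
# Minimal sketch of the "parallel memory" idea from the comment above.
# Assumes the openai Python client; model names and helpers are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()  # uses OPENAI_API_KEY from the environment

# Past notes the assistant should be able to "remember"
memories = [
    "User prefers short answers with code examples.",
    "User is building a FastAPI backend for their assistant.",
]

def embed(text: str) -> np.ndarray:
    """Return an embedding vector for a piece of text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

memory_vectors = [embed(m) for m in memories]

def recall(query: str, top_k: int = 2) -> list[str]:
    """Find the past notes most similar to the new message (cosine similarity)."""
    q = embed(query)
    scores = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
              for v in memory_vectors]
    best = sorted(range(len(memories)), key=lambda i: scores[i], reverse=True)[:top_k]
    return [memories[i] for i in best]

def chat_with_memory(user_message: str) -> str:
    """Inject recalled notes into the system prompt before answering."""
    recalled = "\n".join(recall(user_message))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice, not from the video
        messages=[
            {"role": "system", "content": f"Relevant past context:\n{recalled}"},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(chat_with_memory("Can you help me add an endpoint to my project?"))
```

In a real system the notes and their embeddings would live in a persistent store (or a library like mem0) instead of an in-memory list, and the retrieval could run in parallel with the main request.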
Thanks for this video. Can you show how to set this up using an open-source model like Llama 3.1?
Thank you as well 🙏 You can easily do that with OpenRouter (which I made videos about before): just change the base URL of the OpenAI library to OpenRouter when initializing the client, then use "meta-llama/llama-3.1-405b-instruct" as the model. Here is a link to OpenRouter: openrouter.ai/models
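For reference, a minimal sketch of that OpenRouter setup; the base URL is OpenRouter's OpenAI-compatible endpoint and the model ID is as listed on openrouter.ai/models, while the environment variable name is just an assumption:

```python
# Minimal sketch: point the openai client at OpenRouter and use a Llama 3.1 model.
# OPENROUTER_API_KEY is an assumed environment variable name.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.1-405b-instruct",  # model ID from openrouter.ai/models
    messages=[{"role": "user", "content": "Summarize this codebase in one sentence."}],
)
print(response.choices[0].message.content)
```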
@echohive Thanks! I've had OpenRouter open in another tab for a few days. I guess it's time to take a look at it. :)
@john_blues yeah it is super easy. You will love it.