DeepSeek R1 AI Girlfriend - AI Girlfriend with Local DeepSeek R1 & D-ID API
- Published 7 Feb 2025
- #deepseek #ai #openai
Check out my Udemy course to create more advanced AI apps: www.udemy.com/...
✅ Locally installed DeepSeek-R1 (1.5B) via Ollama
✅ Chat history for personalized interactions
✅ D-ID API generates a video avatar for each response
✅ Real-time AI-driven conversation
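The chat side of the pipeline above can be sketched roughly like this. Ollama's local `/api/chat` endpoint and the default port are real; the model tag follows the 1.5B DeepSeek-R1 mentioned in the description, but the helper names are my own and the actual repo may be structured differently (the D-ID video step is omitted here).

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint
MODEL = "deepseek-r1:1.5b"  # the 1.5B model mentioned in the description

def build_chat_payload(history, user_message):
    """Assemble an Ollama /api/chat request carrying the running chat history,
    which is how the bot gets 'personalized interactions' within one session."""
    messages = history + [{"role": "user", "content": user_message}]
    return {"model": MODEL, "messages": messages, "stream": False}

def ask_local_deepseek(history, user_message):
    """POST the payload to the local Ollama server and return the reply text.
    Requires `ollama serve` running with the model already pulled."""
    payload = build_chat_payload(history, user_message)
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["message"]["content"]
    # Keep the exchange in history so the next turn sees it
    history.append({"role": "user", "content": user_message})
    history.append({"role": "assistant", "content": reply})
    return reply
```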
This is just a fun experiment, but the potential applications are vast, from virtual companions to AI-driven customer support.
Code: github.com/mur...
#deepseek #ai #deepseekr1 #openai #chatgpt #artificialintelligence
Local AI is cool, but Glambase still gives the smoothest and most lifelike AI girlfriend experience.
The models/avatars look so tailored,
looks nice
that was so daaaamn amazing
her tone creeped me out a bit tho lol
@@shin3312 glad you liked it man, I may upgrade it with more advanced AI voice models
Your Python script doesn't even save the chat history to a file so it can be loaded back on relaunch. And even if you fixed that, it is still limited by the LLM's context length, like any other LLM chat. It will not "remember something from one year ago" because that will not fit within the maximum tokens of the context. So basically it only has short-term, amnesiac memory. The problem is made worse for anyone without a 32GB+ GPU, who can only use a much-reduced context size due to VRAM limitations. So if it can only remember 10,000 words, it will forget something from one or two days ago (let alone a whole year).
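The persistence gap this comment points out is a small fix on its own; a minimal sketch, where the file name, format, and the crude character-budget trim are my own choices, not from the repo:

```python
import json
from pathlib import Path

HISTORY_FILE = Path("chat_history.json")  # hypothetical file name

def load_history():
    """Reload past messages on relaunch; start empty if no file yet."""
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text(encoding="utf-8"))
    return []

def save_history(history):
    """Write the full message list back after every turn."""
    HISTORY_FILE.write_text(json.dumps(history, indent=2), encoding="utf-8")

def trim_to_context(history, max_chars=8000):
    """Crude guard for the context-length limit the comment describes:
    keep only the most recent messages that fit a character budget.
    (A real version would count tokens, not characters.)"""
    kept, total = [], 0
    for msg in reversed(history):
        total += len(msg["content"])
        if total > max_chars:
            break
        kept.append(msg)
    return list(reversed(kept))
```

Note that this only fixes relaunch amnesia; the trimmed-out messages are still invisible to the model, which is exactly the context-length problem the rest of the thread discusses.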
@@HitokageWorshipper yes, you are right. Maybe in the next video I'll integrate vector databases into the chatbot. You can find similar projects with vector databases on my GitHub, btw.
@@murataavcu Vector databases, as you mentioned, seem to be the only viable solution to the amnesia (context-length) problem, since there is no other way to guarantee it will be able to recall earlier messages later on. However, I have seen very scarce information on how to implement such a thing, and no one who has actually made it real and working. Which makes me think people have tried (because really, I refuse to believe no one in the world has tried) but never achieved it in a minimally usable condition, so that's why no one has published it yet. No one succeeded.
@@murataavcu I wonder: if we have instruct models made specifically for function calling, and we have embeddings and vector databases, why has no one made it yet? Why does it not work? What is failing? Maybe most people just use ChatGPT and don't even have a decent GPU? I don't understand what the problem could be. Does no one want to run AI locally (privately) and with potentially unlimited memory? I think people don't want to be restricted by the context length, and I think people care about their private information. But probably I'm just wrong and people just spam ChatGPT with short prompts all day.
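The idea this thread is circling does exist; it is usually called retrieval-augmented generation: embed each past message, store the vectors, and at query time retrieve the most similar ones and prepend them to the prompt, so "memory" is no longer bounded by the context window. A toy sketch of the retrieval core, using a bag-of-words counter as a stand-in for a real embedding model (in practice one would call an actual embedding model and a vector store):

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a word-count vector. A real system would use an
    embedding model; this just makes cosine similarity demonstrable."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    """Stores (embedding, text) pairs and retrieves the k messages most
    similar to a new query -- the 'long-term memory' the thread wants."""
    def __init__(self):
        self.items = []

    def add(self, text):
        self.items.append((embed(text), text))

    def search(self, query, k=2):
        qv = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(qv, it[0]), reverse=True)
        return [text for _, text in ranked[:k]]
```

Usage: store every chat turn with `add`, and on each new user message prepend `search(user_message)` to the prompt. The retrieved snippets cost only a few hundred tokens, so even a small-context local model can recall a message from a year ago, as long as the query resembles it.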