A few notes for clarity:
We are running the webserver for the model locally and then using the local webserver API. No internet required.
Both Ollama and DeepSeek R1 are under the MIT license. This means you can modify, distribute, and sell a product with them in it. None of the data ever gets back to DeepSeek, so they don't own it!
I hope this makes things clearer! I think this has a lot of potential game uses, so I am super excited to see what devs can make with it! (There's a rough sketch of what the local API request looks like just below.)
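For anyone wondering what that local API call looks like from GDScript, here is a minimal sketch. It assumes Ollama is serving on its default port 11434 and that the deepseek-r1:1.5b tag has already been pulled; the node setup and prompt are illustrative, not the video's exact code.

extends Node

# Minimal sketch: ask a locally running Ollama server for a completion.
# Assumes the default endpoint http://127.0.0.1:11434/api/generate and a
# pulled deepseek-r1:1.5b model; nothing ever leaves the local machine.

var http: HTTPRequest = HTTPRequest.new()

func _ready() -> void:
	add_child(http)
	http.request_completed.connect(_on_request_completed)
	var body := JSON.stringify({
		"model": "deepseek-r1:1.5b",  # swap in any model tag you have pulled
		"prompt": "Greet the player in one short sentence.",
		"stream": false  # wait for the whole reply in a single response
	})
	http.request(
		"http://127.0.0.1:11434/api/generate",
		["Content-Type: application/json"],
		HTTPClient.METHOD_POST,
		body
	)

func _on_request_completed(_result: int, _code: int, _headers: PackedStringArray, body: PackedByteArray) -> void:
	var data = JSON.parse_string(body.get_string_from_utf8())
	if data and data.has("response"):
		print(data["response"])  # the model's full reply, ready for a dialog box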
IDK if it's a bug, but I responded three times to the video "Should You Switch to Godot?" and the responses don't show up, as if I hadn't responded. Anyway, I want a top-down gun tutorial. Thank you!
@Mr.Flutsch Ah, thank you! No, I didn't see the responses, so I appreciate your persistence!
I thought you were about to make DeepSeek code the game for you. Damn, I wish we would get to that point in the future.
We're so close to being able to fully integrate LLMs into NPC AI, I'm gonna lose my mind! This goes so far beyond just simple chat interaction; building out function calls to trigger behavior like the Minecraft Voyager paper is coming too. You seem super knowledgeable from checking out your other videos; have you considered tying your tutorials together into a larger, modular project? Something akin to the Mix & Jam series that one guy does for Unity, recreating unique game mechanics. If you're feeling bottlenecked by art assets, that's my wheelhouse, and I'd be down to talk more in-depth with ya if that piques your interest!
Dude, that's awesome! I would absolutely be down to talk! You can message us on Bluesky if you have that! I am out of town until Sunday evening, but I should be free after that!
@@RoyasGodot I'll take this as the opportunity to finally migrate! I'll see if the gachabrain handle is still available and shoot you a DM.
Nah, you will lose your mind if there's ever AI for text-to-UE5-Blueprint, text-to-retopology, and/or text-to-UV-mapping. Nvidia's AI (Meshtron and ACE) is still a billion miles away from that reality.
Don’t ask it for the square root of 4 or to ask viewers to like and subscribe though 😅
Something I have wanted to do since LLMs became a thing is to have something like the letters in Animal Crossing be read by the AI, so it can pick what sort of response the NPCs should have based on the player's letter (full AI NPCs are still a bit too resource-intensive, I feel). Hopefully something like this would let me make something where you can spread lies about the other villagers.
I LOVE THIS IDEA! It would be so much fun to do!
Hello! Awesome video!
For some reason, the request is taking a really long time (around 15-20 seconds) even when forcing Godot to use IPv4. I am certain it is not a problem with my PC, because if I use the command line to enter a prompt, I get the generation instantly.
We have been running into some issues with the HTTP node in Godot tbh. We may do a video on how to do it in C# at some point to bypass the node altogether
@@RoyasGodot Sounds great! Thanks a bunch for the reply :D
@RoyasGodot I did a bit of testing and I have realized that the delay isn't actually a problem with Godot. Since stream is not being used, the whole message has to be completed before showing up on the screen. 😅
It would be really cool if there was a way to use stream.
If you use C# to make the HTTP request, that should make it possible. I don't know if there's a way using the HTTP node, unfortunately. I need to do some more tests with it.
This is amazing and quite easy to follow! Subbed 😄
Thank you so much! Glad you enjoyed it!
Wow good content sir
Thank you very much!
Very informative video. I just wanna ask: is there a way to automate the installation of Ollama and DeepSeek? For example, if I export the project as an exe, when run on a different machine it could automatically install Ollama as well as DeepSeek. Though I doubt it'd be possible; it'd probably need an external server with the environment already set up and the game simply making calls to it.
We are currently doing the research to do another video on that! There does seem to be a way to make that work!
@@RoyasGodot Nice, looking forward to it. Currently my way of doing it is just isolating it into an external Python application and running that in the background while the game is running; this also allows me to control more stuff and have access to more models. Though Windows would probably flag this as a virus lmao.
That sounds like a really good method tbh. We will probably try out a few methods and then make a video on our favorite.
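Purely as a sketch of the "run it in the background while the game is running" idea: Godot can launch an external process at startup with OS.create_process and stop it again on quit. The binary name and folder layout below are hypothetical assumptions, not the channel's actual packaging setup.

extends Node

# Rough sketch: start a bundled Ollama server (or any helper app) when the
# game launches, and shut it down when the player closes the window.
# The ollama/ollama.exe path is a made-up example of where a bundled copy might live.

var server_pid: int = -1

func _ready() -> void:
	var exe_dir := OS.get_executable_path().get_base_dir()
	var ollama_path := exe_dir.path_join("ollama/ollama.exe")
	server_pid = OS.create_process(ollama_path, ["serve"])  # returns the PID, or -1 on failure
	if server_pid == -1:
		push_warning("Could not start the bundled model server.")

func _notification(what: int) -> void:
	if what == NOTIFICATION_WM_CLOSE_REQUEST and server_pid != -1:
		OS.kill(server_pid)  # don't leave the server running after the game exits

The model weights would still need to be downloaded once on the player's machine, which is presumably where an installer step like the one discussed above would come in.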
I'll make sure to check it out when it's released. If you're down, we can chat more through Bluesky.
I am down!
Awesome vid! Keep it up guys!
Thanks so much for the support!
New Sub, Great work!
Thank you!
Bro, I can't understand a thing of what you are doing. How do you make it look so straightforward?
Sorry, where are you getting hung up?
Oh apologies I may have misunderstood, it's honestly just a lot of practice haha.
13:10 I think DeepSeek is too intelligent for its own good...
Hahaha, it definitely overthinks things. Truly an AI of the people.
This is one of the most advanced things I could be doing on a home PC. Coupled with that experience of reading code and typing it in like it's 1980 again and I'm typing in some program from David Ahl's Basic Computer Games book. Wild juxtaposition.
That's amazing! The more things change the more they stay the same. Thank you for your comment, I really appreciated it!
Very interesting, do we need to train the model?
Nope! It's trained and everything!! All you need to do is run it.
But you need it to fit in your VRAM, RAM, and storage 🫠
It's honestly not that bad! This model can run on phones without too much effort. Also I think in the video the model was running on the CPU. I would have to double check
The title is a bit misleading; as long as you have Ollama, any LLM you have will work.
We are specifically teaching DeepSeek, so I don't think it's misleading at all! We are doing what we say we are doing, but yes, you could use other LLMs too. We talk about this in our latest video.
What does stream do? I am looking at your tutorial to use in a different codebase and I am just curious what that Boolean is setting.
If you set stream to true, it sends one word at a time rather than the whole response. The problem with Godot's HTTP node is that it closes the connection after one response, so you only get one word. If you want to use the stream setting, you would have to use C#.
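For anyone who wants to experiment anyway: Godot's lower-level HTTPClient class (not the HTTPRequest node the video uses) can poll and read the response body in chunks, so streaming may be doable without leaving GDScript. This is an untested sketch, assuming Ollama streams one JSON object per line with a "response" field per chunk.

extends Node

# Untested sketch: stream tokens from a local Ollama server with HTTPClient,
# reading the body chunk by chunk instead of waiting for one finished response.

var client := HTTPClient.new()
var pending := ""  # partial line carried over between chunks

func _ready() -> void:
	client.connect_to_host("127.0.0.1", 11434)
	# Wait until the connection is established before sending the request.
	while client.get_status() in [HTTPClient.STATUS_RESOLVING, HTTPClient.STATUS_CONNECTING]:
		client.poll()
		await get_tree().process_frame
	var body := JSON.stringify({
		"model": "deepseek-r1:1.5b",  # assumed model tag
		"prompt": "Introduce yourself to the player.",
		"stream": true
	})
	client.request(HTTPClient.METHOD_POST, "/api/generate",
		["Content-Type: application/json"], body)

func _process(_delta: float) -> void:
	client.poll()
	if client.get_status() != HTTPClient.STATUS_BODY:
		return
	var chunk := client.read_response_body_chunk()
	if chunk.is_empty():
		return
	pending += chunk.get_string_from_utf8()
	# Assumption: each complete line is JSON like {"response": "...", "done": false}.
	while pending.find("\n") != -1:
		var newline := pending.find("\n")
		var line := pending.substr(0, newline)
		pending = pending.substr(newline + 1)
		var data = JSON.parse_string(line)
		if data and data.has("response"):
			print(data["response"])  # append to a RichTextLabel in a real game

No idea how well this behaves with Ollama's connection handling, so treat it as a starting point rather than a drop-in solution.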
Sadly not really packageable for full release games
You can include Ollama as a dependency for your games and then package them with the installer! We might have a video about that coming out soon!
I did spot some Git libraries using llama.cpp as a way to handle the dependency.
System requirements for launching R1?
Varies based on model size
A phone can even run the 1.5B model.
Yeah, I'd be more interested in seeing actual use cases.
We are planning on doing some videos in the near future showing some use cases. We also discussed some ideas for games that could use this in our latest video if you would like to check that out!
@@RoyasGodot I'll check it out!
Not quite what I was expecting with that title but that's alright I guess.
Sorry about that! What about the title was misleading?
@@RoyasGodot I was sorta hoping to have DeepSeek as a coding assistant inside Godot with that title haha.
Oooh apologies, I do believe there are some vscode extensions that can do that!
@@RoyasGodot Yeah, I know, like RooCode, but I want something like an addon for Godot with AI integration, you know?
Especially since I don't use VSCode while making games.
No need to apologize btw, the title makes sense when you watch the video just not what I expected is all.
????????
Hey! Is something unclear? Let me know, and I’d be happy to explain!
Lol, it's a 1.5B...
Useless model
Good enough for an NPC
There are links in the description for bigger models; this is model agnostic. But the bigger the model you go with, the fewer people can play your game.
Cool