Let me know which of the topics that pop up in this video you'd like to see more of. I'm keen to make videos digging into some of these fundamental questions, but also to dig into those projects in games like Overcooked, Street Fighter and Rocket League.
Thank you for making these videos, they are truly fascinating to me! This one's a smidge selfish and I really wish more games used it, but: more G.O.A.P. please! It's the current #1 contender for the AI system in a game I'd love to make that is only in the planning stages.
I feel like good AI is at odds with a lot of modern game design best practices and how a lot of modern developers approach things. Good AI takes control away from the developer for the sake of the simulation, which can work fantastically, but a lot of modern games treat that as a negative: anything that can't be 100% predictable, for their curated bespoke experience, gets treated as either a waste of time or a glitch that needs to be solved.

I've been thinking quite a bit about this in terms of where we were back in the day and where we are now in technological limitations. If you go back far enough, most games were essentially corridors or mazes, with poor volume blocking and weak transparent textures pretending to be plants or rubble, and the really, really good ones could pull it off without making it feel like you were in a BSP maze, even though you were (most didn't pull this off, lol). A lot of older game design was created to provide an illusion of an experience, with the hope that one day you would be able to provide the experience itself and not have to bother with the illusion. It was an exciting time, watching technology overtake and remove those barriers, and imagining the gameplay that would ensue as a result. And then it kind of stopped.

Nowadays we can render kilometers upon kilometers of not just terrain and represented materials, but photorealistic ones even, and we can make it interactable. That would be the next step, right? That's where we were headed anyhow. But instead the illusion philosophy never went away. Rather than creating systems that actually are the thing the illusions were representing, we've decided to delve deeper into the details of the illusion: photorealistic graphics and prebaked set pieces over physics and virtual realization. And players are used to that...
Players don't really understand anything outside these systems anymore, especially if they grew up with the heavy-handed stuff, and you can't really expect them to, because games, game design, and game culture have relied on that familiarity for a lot of things. It's why peripherals haven't really evolved in 20 years and have, if anything, gone backwards. Games once needed to step in to let limited peripherals try new things; once they got good enough at making that invisible, so the player felt fully in control, that was kind of all they needed, I guess. But on the back end of that, you have systems guiding the player, and if those didn't step in, players would be miserable and helpless, especially nowadays, when not having those systems has simply never been part of their gaming experience.

And developers really do think this way, from everything I've gathered, in a lot of cases: especially the newer ones, and sadly the older ones who used to push these boundaries and philosophize about a holodeck future. The most praise AI gets nowadays usually goes to very specific, often director-driven systems like Alien: Isolation or Left 4 Dead, which are designed to provide a curated experience for sure. In all fairness, both of those games lean toward the simulation aspect of gaming (I say that lightly for Left 4 Dead), and the idea that everything wasn't totally controlled wasn't out of hand; if anything, it could be a feature. But a heavy number of systems were put in place to make sure chaos did not reign and the experience was still curated to a degree, and it was a singular system in a lot of cases, not a series of interacting systems as you'd see in a city builder or the Eastern European mod scene, lol. I feel terrible because I originally put this in a lot fewer, and a lot better, words...
And then accidentally hit cancel, so apologies for the spam; I've rewritten this many, many times and it's not getting better, lol. A lot of these thoughts come from general conversations I've overheard and from GDC talks, and this kind of assumption about how things are supposed to be in game design. I always remember back when that wasn't how it was supposed to be, and I realize I'm old: it's been over 17 years since that forward momentum started slowing and stagnating. But I think it's worth looking at as a thought experiment, if anything: how utterly excited people were just to have cool stuff doing cool stuff, because you could, and the experiences you could create with that. I feel like we're getting further and further away from that being a possibility outside of some niche VR stuff, and that's because the player base isn't looking for it. They say they want it, and people do want it, but... I see people throw a fit when a control default isn't what they think it should be, and then they still don't even go into the options to change it, and these are usually streamers, in public, no less, lol. One thing I always found interesting: multiplayer is a thing. Competition with other humans is a thing. I feel like humans could figure it out; of course they can, they did before, lol. I really think the human factor is a problem with game design on a lot of other levels too. I don't think the AI is the biggest issue here; the developers are more of the problem in that regard, because they have to create the AI to curate the experience itself...
And it seems like it's not even a conscious thought process anymore that you might not follow the best practice of "everything is fake, and the goal is to make it look less fake, while making sure the player is thoroughly encouraged and conditioned not to scratch the surface, so that it seems like it's their choice not to look behind the curtain." Again, I apologize for how obnoxiously long this was, but I do hope someone read it; I might come back and try to work it into some reasonable shape. You wouldn't believe how much text I cut out, and it doesn't really have a conclusion, because I had to pull that as well. But it bothers me that I feel like I was on a train going forward; it stopped and started doing other things, and 17 years later I'm realizing I was never going to get to that destination. We all got distracted or something, and we're happier for it, maybe? I'm just not, that's all, and maybe that's okay. But it really does bother me when I see a lot of the old visionaries come up with copium for their compromises and start regurgitating things that much less qualified people have said out loud.
AFAIK AlphaStar didn't learn from any humans, in contrast to what you stated at 1:40. It only played against itself/other iterations, just like Google's chess AI. When it was done, of course, it played against pros to show its power. It benefited from years of balance patching, but not from any direct input from players. Edit: Newer articles often state that AlphaStar was "fed" information from humans, but if you go back to older sources, the story sounds different. For example, a video from the Google DeepMind channel, "AlphaStar Results": at 0:01 it says "no human data with naive exploration and random initialisation", at 0:23 it "overtakes" the pure imitation learning, later FSP and PFSP and so on. But maybe I'm interpreting the graph incorrectly. In the article "AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning", Google states that the second generation (with newly implemented camera and APM limitations) "starts only with agents trained by supervised learning". In my understanding, that means at first the AI knows nothing, then adapts or mutates depending on the outcome, i.e. win, loss or draw. So in the end, I am left puzzled. You will probably have noticed from my vocabulary that I have no clue about AI or programming. I still think AlphaStar had no human input, but I am not sure anymore. I also did not read Google's paper about it, because of my lack of knowledge and determination ;-)
I'd say the hardest part for me so far has been making AI fail in a fun and credible way. I often say game AI has to be "smartly dumb", and that is hard to do.
That's what I like in games that add adaptable AI that learns from the player to challenge them, but never exceeds that player's skill level. Tekken 8 has a pretty decent version of that: no matter the set difficulty, the AI scales to you each round, making it feel more like playing an actual human player.
Would you consider that piece of software to be an intricate element that interacts with the player instead? Does it have more variables than the player has access to? (Perfect knowledge of the map, knows every combo by heart, knows where the head is precisely?)
I was working on an AI for a turn-based RPG I made that would factor in the player's weapons, spells, and team composition when deciding how each enemy took its turn. I also gave enemies random traits that were invisible to the player, such as "cowardly". This made them behave in a super predictable manner for me, the developer, because I could follow a logic tree and know exactly what they were going to do, but it was hard for my friends to spot the patterns because of the sheer number of combinations. Definitely a lot to think about, but it was super fun. I scrapped the project for the time being, but I'll probably come back and rework that AI at some point.
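For anyone curious what a trait-gated decision tree like this can look like, here's a minimal sketch. The trait names, thresholds, and data shapes are all invented for illustration, not taken from the original project:

```python
# Hypothetical hidden traits that bias an enemy's otherwise deterministic turn logic.
def choose_action(enemy, allies, player_party):
    """Follow a fixed decision tree; the hidden trait shifts which branch fires."""
    hp_ratio = enemy["hp"] / enemy["max_hp"]
    if enemy["trait"] == "cowardly" and hp_ratio < 0.5:
        return ("flee_or_heal", enemy)           # cowards bail when hurt
    if enemy["trait"] == "protective":
        wounded = [a for a in allies if a["hp"] < a["max_hp"] * 0.3]
        if wounded:
            return ("defend", min(wounded, key=lambda a: a["hp"]))
    # Default: attack the weakest member of the player's party.
    target = min(player_party, key=lambda p: p["hp"])
    return ("attack", target)

goblin = {"hp": 4, "max_hp": 10, "trait": "cowardly"}
party = [{"hp": 20, "max_hp": 20}, {"hp": 5, "max_hp": 20}]
action, _ = choose_action(goblin, [], party)     # ("flee_or_heal", ...)
```

The developer can trace the tree exactly, but players only ever see the combined effect of trait and situation, which is where the apparent unpredictability comes from.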
That is actually a very good idea; I wouldn't scrap it entirely. As a creator, I too understand how the concepts we envision can fall short when first tested by others, which can confuse or discourage us at first. But taking a step back and looking at the bigger picture, we are often closer than we think to something really enjoyable and amazing. For example, games are often more about helping the player than punishing or fighting against them. I say that because of the many "invisible" systems we have, like ones that prevent a player from falling, or that give the player the impression they barely survived an encounter because we tilted the enemy damage just enough that they wouldn't get insta-killed. Those are just a few examples, but the point I wanted to make is that maybe you could keep the entire system you envisioned and, instead of making it invisible or too subtle to the player, make it a core mechanic of the game. Heck, the Nemesis System from Shadow of Mordor thrived on aspects similar to that one. So if you actually integrate this into the game and tell/train the player about it, even subtly, that could be a good direction for refocusing this amazing idea of yours.
I've been mostly designing and modding for an existing franchise with very little effort put into its AI. So one day I decided to build and implement my own systems into it from the ground up, as a learning experience and a way to improve the game for others: FSMs, behavior trees, GOAP, utility AI, etc. I built a framework for them all. I find the challenging part isn't building the systems; in fact, they are all surprisingly easy to create. It's designing within them that is difficult. For example, even a simple behavior tree can break down (mostly in the "this no longer makes sense and looks incredibly dumb in this situation" way). With my limited experience in mind, I can say the biggest challenge I come across is making what normally looks intelligent not look incredibly stupid when something unusual happens.
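To illustrate how little framework is actually involved (and why the authoring, not the building, is the hard part), here's a minimal behavior-tree sketch. The node types are standard; the guard example and its names are invented:

```python
SUCCESS, FAILURE = "success", "failure"

class Selector:
    """Tick children in order until one succeeds."""
    def __init__(self, *children): self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

class Sequence:
    """Tick children in order until one fails."""
    def __init__(self, *children): self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Leaf:
    def __init__(self, fn): self.fn = fn
    def tick(self, state): return self.fn(state)

def condition(key):
    return Leaf(lambda s: SUCCESS if s.get(key) else FAILURE)

def set_action(name):
    def fn(state):
        state["action"] = name
        return SUCCESS
    return Leaf(fn)

# Illustrative guard: attack if the player is visible, otherwise patrol.
tree = Selector(
    Sequence(condition("can_see_player"), set_action("attack")),
    set_action("patrol"),
)

state = {"can_see_player": True}
tree.tick(state)   # state["action"] becomes "attack"
```

The whole runtime fits on one screen; every hour after this goes into deciding which conditions and actions belong in the tree, and what it should do when none of them make sense.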
I have been working as a Combat/Game Designer with a heavy focus on combat and AI for around 15 years at the time of writing, and I think this was an excellent video! I will for sure look at more videos on this channel! I also wanted to echo something brought up in the video that I always mention when covering this subject: people do not necessarily want smart AI, they want AI that makes *them* feel smart! The illusion of smart AI is far more important than it actually being smart, and if the player has ample tools to predict and outsmart their enemies, that makes for a very good experience. I also always like to say that making smart AI is super easy, but making a good AI that is fun to play against is super hard.
Prediction is very important. The AI has to behave in a way the player can interact with: take the Metal Gear series, for example. Reacting to sounds, or looking at a magazine or a box, isn't the way an actual human would respond, but it's preferable to an AI that merely checkmates the player in any game sense.
With the focus on improvements and advancements to visuals, I feel like AI has long since taken a backseat; modern game AI feels like it hasn't changed much since the 2000s. I really do wish devs would dedicate some time to AI in the future, because I'm honestly getting put off modern games by how deadpan and lifeless the AI acts.
Maybe because the "life" of an NPC depends not only on AI but on content from animators, VO, and every other department. And navigation/pathfinding in 2D or 3D space (as covered in the Death Stranding episode) can still hit performance.
@@Maecmpo This. The famous FEAR AI isn't actually smart in the sense that you could drop them on a random map and they would take cover and hide in good spots. We think they're great because they have so many voice lines reacting to what the player is doing that it fools us into thinking they're thinking too.
@@AIandGames I really look forward to the day that comes, because I'm the kind of guy who's spent years just observing game AI and its states (my favourite is Heroes of the Storm's AI, since different heroes use different AI states depending on the human players' chosen champions, and those states are quite noticeable). I'm not asking for Skynet-level self-aware AI from devs, but I'd at least like something that felt eerie like the AI from FEAR (despite being simple, FEAR's AI actually gave me... fear, lol). These days we still get AI that either runs into your line of fire, or hides behind a wall/pillar and hurls petty insults at you, and I'm pretty dang tired of seeing that AI type on repeat for 20 years now.
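For anyone wondering how the "lots of voice lines" trick in this thread works mechanically, a bark system really can be this simple. The events and lines below are invented for illustration, not FEAR's actual data:

```python
# Map observed game events to voice lines; the narration, not the planning,
# is what sells the impression of intelligence.
BARKS = {
    "player_spotted":   "There! Open fire!",
    "player_lost":      "Where'd he go? Spread out and search!",
    "ally_down":        "Man down! Fall back to cover!",
    "player_reloading": "He's reloading! Move up!",
}

def bark_for(event, last_line=None):
    """Pick a line for an observed event, suppressing immediate repeats."""
    line = BARKS.get(event)
    if line is None or line == last_line:
        return None
    return line
```

Feed it events the squad already perceives and the player hears enemies "reasoning" about exactly what the player just did, even though no reasoning happened.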
The biggest issue I had was lack of familiarity. While researching ML applications for a pre-production game, I had to work very hard (and was ultimately unsuccessful) at investigating and assuaging concerns about ML. We were all familiar with the existing AI techniques in AAA environments, so you need to find the places where ML is superior: triggering a voice line when the player enters a room is not improved by adding ML, but balancing a complicated army composition might be. Even in the cases where ML has some advantages, you normally need to give something up (typically designer control), and that's compounded by our negativity bias. If I were to do it again, I'd run a more structured experiment and agree upon targets beforehand, making sure to involve the people who were sceptical about it.
This is something I explore when working with studios: trying to highlight how it impacts the production workflow, and what the pros and cons of the approach are. It's never as straightforward as it sounds.
And I can imagine ML models can be much worse than deterministic models at a much higher compute cost; even when the model is well trained and tweaked, you can't always predict when it will do something baffling, maybe even game-breaking.
Yes, that's definitely true about the random aspect. Game-breaking behaviour is bad, and on top of that, the last thing MS or Sony wants is a controversial AI. Collecting observations for the agents might be expensive depending on your use case, but both CPUs and GPUs run neural nets very well: inference uses lots of the same operation in a loop, linear memory access, and very little branching. Still, even if it's efficient, it's not going to compete with "if (hurt) { Play(hurtVoiceLine); }".
@@hassamkhalid3301 I've forayed into NN level generation once. I asked an artist to use a Unity terrain (basic heightmap) and manually place trees and vegetation, and we trained a CycleGAN to go from the heightmap texture to a vegetation-placement texture. This was done so the artist could focus on making a level and wouldn't have to formalise the rules they used to place trees, e.g. don't place trees on slopes. The important thing was representation: my original approach was to use a top-down map and mark single-texel dots where the trees were. An expert I spoke to explained that GANs work much better when there's context around features and suggested replacing the tree-dot map with a "closeness to nearest tree" map, which looked like circular gradients peaking at each tree. What are you working on? And what are you trying to generate? The things that helped me the most were tailoring my representation to the NN I was using and limiting what I was generating. I tried to respond earlier with direct links to what I've done, but it looks like YouTube swallowed my comment; if it pops up later, then thank Tommy! I'll try to reply after this one with routes to the other resources.
Ya know, one of the biggest gripes a lot of people have about a game like Breakpoint is the enemy AI. But if you look at a game like Phantom Pain, especially after a game like Breakpoint, it seems like heaven. That said, there's nothing about the artificial intelligence being smarter that makes it better. What I think people want more than outright tougher enemies (as in smarter, more tactical), without realizing it, is a breadth of behavior. Even in Breakpoint's immediate predecessor, enemies had more behaviors than unaware/alert/kill-mode. In Wildlands you could catch some sleeping or doing pushups to pass the time, things like that. In Phantom Pain you had direct interactions that made them seem more intelligent, such as being able to hold them up. Sometimes they'd go for their gun; other times you could tell them to kiss the floor, and it was effectively a knockout until the alarms went off.

That said, I don't think developers give enough time or mind to those seemingly rare emergent moments that make a game shine for the people who discover them. I cite Breakpoint because immediately as I played it, I got whiffs of other games like Red Dead Redemption 2 but none of the substance (bivouacs, rations, etc. being surface-level mechanics). A lot of people talk about AI in terms of difficulty; I think the better, less frustrating option is AI presenting variety to allow for emergent gameplay.

Sorry, was too busy writing that to pay attention. I'm gonna rewind and watch now so I can see you address that exact thing. 😄 P.S. I also had Starfield in mind, as Bethesda seemed to catch a lot of crap for dumbing down enemies in the game, but I honestly think it was a good idea, especially for the space fights, as those really are difficult until you level things up and would bottleneck most players until they stopped.
This sounds like a great idea. Immersive behaviors would go a long way for me. Sure, having super tight tactics would be cool depending on the NPC but not every NPC should even be that intelligent.
This sounds very plausible. I was recently thinking about the ways in which the original Dungeon Keeper had much more interesting monster minions than its sequel, and that was the biggest difference: the original game had arguably overcomplex AI which led to a lot of subtle and special-case behaviour patterns. It was a sandbox for the developers as well, with everything being streamlined for the more tightly campaign-focused sequel. The sequel did a better job at providing set challenges and channeling player interaction with the game, but it also led to a less complete-feeling world.
Wow, we are actually on the same page. For my current project I'm investing a lot of time into making the AI feel personable and real! I was always disappointed with how fake AI in games was; it's like they didn't even try. So now I want to try to set a new standard!!
This is a great video from a technical perspective. The performance constraints placed on AI are definitely a large factor; e.g. instead of any kind of box/shape trace, Half-Life 2 uses line traces for enemy vision. It's faster, but it also means you can hold a brick in just the right spot to block the trace and become invisible.

But design intent plays a significant part as well, because even if a programmer could design the smartest AI that always picked the optimal strategy, that would make for an awful gameplay experience. How do you distinguish between an AI that has interpreted realistic inputs and come up with a clever solution, and one that is simply "cheating"? As a player, you can't, and even if you could, it still wouldn't be a fun experience. The example I tend to think of (I'm sure others can think of better scenarios) is a stealth game where you're chased by a bunch of enemies into a room with no obvious way out. You move some crates and find a hidden vent, then pat yourself on the back for being smart. You start going through the vent, and just as you're nearing the exit, an enemy pops up and lobs a grenade at you. In a multiplayer game, if a player did that, they might have known there was an alternate path you'd try to use, so they moved to the vent and timed the grenade just right. You'd (maybe) congratulate them for outsmarting you. But when you know it's an AI, your first thought isn't that the AI followed that thought process and outplayed you; you're going to think the game was just tracking you through the wall, or that a scripted event threw the grenade. Even if you did think it was a very clever AI, what's the fun in this situation? Most games wouldn't work if every enemy were as smart as the best possible player of the game; we rely on some amount of stupidity for the power fantasy to work.
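The line-trace limitation is easy to demonstrate in 2D. A rough sketch (the coordinates are made up, and real engines use analytic raycasts rather than sampling, but the failure mode is the same): one trace from eye to target, so a single well-placed box breaks visibility entirely.

```python
def can_see(eye, target, blockers, steps=100):
    """Single line trace: sample the eye->target segment, fail if any sample is inside a blocker."""
    (x0, y0), (x1, y1) = eye, target
    for i in range(1, steps):
        t = i / steps
        px, py = x0 + (x1 - x0) * t, y0 + (y1 - y0) * t
        for (bx0, by0, bx1, by1) in blockers:    # axis-aligned boxes (xmin, ymin, xmax, ymax)
            if bx0 <= px <= bx1 and by0 <= py <= by1:
                return False
    return True

# A tiny "brick" held exactly on the eye line blocks the entire trace.
brick = (4.9, -0.1, 5.1, 0.1)
visible = can_see((0, 0), (10, 0), [brick])      # False
```

A box or shape trace would sweep a volume instead of a single ray and catch the partially exposed target, but at a higher per-query cost, which is exactly the trade-off described above.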
I plan to return to this question from a design perspective in a future video. It's something that comes up *a lot* in my consultancy work I do with game studios.
Your example honestly just seems like the developers failed to give the players appropriate tools to make the game fun. Sure, smart AI might not fit an already-existing game, but if you design a game around the fact that the AI acts smartly, there's no need to be on a level playing field with the computer after all.
@@theresnothinghere1745 The point of the example is that a sufficiently smart AI is going to be indistinguishable from an AI that cheats, and therefore not engaging to play against.
@@daveface69 But even that depends entirely on how the game presents the AI's actions. If a game shows the AI's working, it seems much less like cheating. For example, if the stealth game were an Arkham game, you'd overhear the enemies working through their plans on the radio, making it much more reasonable when they do arrive at that conclusion.
I'm currently making a fully fledged 3D soulslike. My current AI script boils down to: "If you see the player, run towards him, maybe circle-strafe a bit, and attack." It's not exactly complicated, but it actually works really well :d
Watch out! You'll also have to solve the problem Dark Souls has, where the player runs back and, after an arbitrary threshold, the enemies lose aggro and return to their position, turning their back to the player most of the time. That opens up a lot of cheese situations.
@@pixel_igig Just make it so that once the Player has been targeted, the enemy will scour every stone, every sea, every land just to find him. They shall know no weariness. They shall chase the Player to the ends of the earth!
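For what it's worth, the chase-and-leash behaviour discussed in this thread fits a tiny three-state machine; the ranges below are made-up tuning values, and the transition names are just for illustration:

```python
import math

AGGRO_RANGE, LEASH_RANGE = 8.0, 20.0   # invented tuning values

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_state(state, enemy_pos, home_pos, player_pos):
    """'idle' -> 'chase' on sight; 'chase' -> 'return' past the leash; then home to 'idle'."""
    if state == "idle" and dist(enemy_pos, player_pos) < AGGRO_RANGE:
        return "chase"
    if state == "chase" and dist(enemy_pos, home_pos) > LEASH_RANGE:
        return "return"        # back turned to the player: the classic cheese window
    if state == "return" and dist(enemy_pos, home_pos) < 0.5:
        return "idle"
    return state

s = next_state("idle", (0, 0), (0, 0), (5, 0))   # player close: "chase"
s = next_state(s, (25, 0), (0, 0), (30, 0))      # dragged past the leash: "return"
```

The cheese fix usually isn't removing the leash (that invites the scour-every-stone joke above) but making the "return" state defensive, e.g. keep facing the player or regenerate health while walking home.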
I had a pretty bold but straight-forward concept: Player actions would create portals that, after a certain gameplay point, would open and release an invasive hive-type NPC faction to colonise the area around the portals. The problem: The world was procedurally generated and the player could change almost every part of it. I spent weeks trying to parse how to interpret any conceivable arbitrary collection of positions into a set of "rooms" that the NPCs could assign functions to (barracks, farm, storage, etc.). The functions would then request an NPC be present so they could operate, essentially as a supervisor while the room itself ticked over. Ultimately, I realised that I could just group the positions into the largest contiguous cuboid available, remove the points it contained from the set, and repeat until the size of the cuboid was too small to be useful. Weeks of staring at the problem, and I finally realised I could just treat the points like a series of cubes, in a voxel game. 🤦
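A rough sketch of that greedy carve, in 2D for brevity (rectangles instead of cuboids; the 3D version just adds an axis). Note this is a first-fit variant that grows from a seed cell rather than finding the true largest block, and all names are illustrative:

```python
def carve_rects(cells):
    """Greedily peel axis-aligned rectangles out of a set of (x, y) grid cells."""
    cells = set(cells)
    rects = []
    while cells:
        x0, y0 = min(cells)        # deterministic seed cell
        x1, y1 = x0, y0
        # Grow rightward, then downward, while the expanded strip is fully occupied.
        while all((x1 + 1, y) in cells for y in range(y0, y1 + 1)):
            x1 += 1
        while all((x, y1 + 1) in cells for x in range(x0, x1 + 1)):
            y1 += 1
        rects.append((x0, y0, x1, y1))
        for x in range(x0, x1 + 1):
            for y in range(y0, y1 + 1):
                cells.discard((x, y))
    return rects

# A 3x2 block plus one stray cell: one big rectangle, one 1x1.
block = {(x, y) for x in range(3) for y in range(2)} | {(10, 10)}
rooms = carve_rects(block)
```

Each resulting box can then be treated as a "room" and handed a function (barracks, farm, storage) as described above, with boxes under a size threshold discarded.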
Great video! Another issue is the need for game designers to be AI-literate, so that a) descriptions of the features that will be handled by AI can be as clear as possible and the amount of interpretation by the programmers minimal, and b) the designers can imagine new ways of exploiting AI's possibilities. AI programming in complex games is not something that occurs once the design is done, but a constant conversation toward a unified vision. Without a shared language, this conversation cannot occur.
Because no one actually wants "good" AI; they want "just barely good enough" AI. (Which is why ML-controlled enemies would be a nightmare unless crippled.) I remember back in the day hearing from a couple of different game devs that they ended up having to make their AI worse, because it proved too difficult for people in play-testing.
Some of the hardest games of the past barely had any AI beyond repetitive movements. A truly intelligent AI would be undefeated; it would simply charge toward you with all the enemies together instead of letting you take them down one by one, or a few at a time.
I'd love for you to cover the subject of personalised opponent AI, similar to Forza's Drivatars. One of the biggest hurdles in fighting games is finding someone near your skill level. I hope one day we can have a personalised AI opponent that learns how you fight, punishes your bad habits, and always stays just that TINY bit better than you so you can grow. Edit: typo.
Man, you should really check out the GDC talk about designing AI for Killer Instinct if you haven't: ua-cam.com/video/9yydYjQ1GLg/v-deo.htmlsi=gUBYqqBqZzrbiCYE&t=1 Edit: I know this isn't exactly what you described though; in Killer Instinct the AI mimics the trainer's (yours, the player's) behaviour, but it doesn't seem to be able to get any better than the person training it.
I think Tekken 8 is accomplishing something similar to that with Ghosts - AI opponents who try to copy any player's fighting style, including your own. Iirc, you can train up and then play against your own Ghost, and it'll be somewhat like playing against yourself. Though, for a better challenge, I believe you can challenge other people's Ghosts as well.
This sounds good on paper, but it means you're fighting a constant uphill battle. The problem: let's say you learn something new; there's no sign you learned it, as the AI still kicks your butt, staying slightly better than you indefinitely. If you apply a limit, everyone under it is punished with losses while everyone above it is rewarded with free wins, often the opposite of what people want. You're better off with AIs at set levels, because then there is a way to measure improvement, or with an outright fair one, or with ways to alter settings manually. Compare this to an AI where the only way to actually improve is exploiting it, by learning skills that do nothing against human players.
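The treadmill both sides of this thread are describing can be made concrete with a toy rubber-band controller; the step size, bounds, and skill knob are all invented for illustration:

```python
def update_skill(skill, player_won, step=0.05):
    """Nudge the AI's skill knob (0..1) after each round so the match stays close."""
    if player_won:
        return min(1.0, skill + step)   # player winning: AI tightens up
    return max(0.0, skill - step)       # player losing: AI eases off

skill = 0.5
for won in [True, True, True, False, True]:
    skill = update_skill(skill, won)    # drifts upward as the player wins
```

The complaint above maps directly onto this: because `skill` chases the player, improvement never shows up in the win rate, only in a hidden knob the player can't see, whereas fixed difficulty tiers make progress measurable.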
12:00 - I'm pretty sure you know this, but for others that don't: the purpose of AlphaStar wasn't ultimately to make an ML system to beat StarCraft. The reason StarCraft and others are a good learning experience (ha) for ML researchers is that there are certain problems where we really don't know which intermediate states lead to a good outcome, but we do know a good outcome when we see it. For protein folding, we don't necessarily know which intermediate states are the correct ones, but we DO know the resultant energy of the final folded state. For two chess boards, you might not know which one is better, but you definitely know if someone is checkmated. And on 11:00, one other detail is that machine learning doesn't always give appropriate control over the behavior of the AI. You might want the AI to sometimes be less aggressive, or less accurate, or to run away. With a classical system you get that fine-grained ability to decide what the bots do; not so much with ML. It's also easy for agents to exploit weird quirks of the game if they're trained with reinforcement learning.
I made a video many years ago about evolution in video games, where characters change over time to become best suited to the game environment. It doesn't use deep learning, but it shows some results.
@1:10 - Yes, please, Dr. Thompson! @13:23 - Fighting (Both 1 vs. 1, and assist character(s) not playable by a human (if any)), RPGs (Both enemies / bosses, and supporting character(s) not playable by a human), Beat-'em-ups (Enemies / Bosses), and Sports (Both 1 vs. 1, and team-oriented (where teammate(s) not playable by a human)).
I only recently began working on game development in my spare time, but it’s given me a newfound appreciation for every little detail in old and new games. I studied software engineering and would love to work at a game studio one day, but I’m not sure how realistic that is as an early career path.
A lot of game devs start out in other spaces. There's not really any traditional route into the industry. I studied computer science and AI and then learned game development on my own. But then my career makes no sense anyway. 🤣
@@AIandGames Yeah, and you've got a solid YouTube channel to boot! Honestly it's been difficult finding anything programming-related in my area, let alone game studios. I can certainly dream, though!
As someone who managed to jump into (educational) games via a recruiter for my first job out of college, then get into AAA games as my second job, I would say it is realistic, but you need to _actually_ know how to code, how a computer works under the hood, and how those two things intertwine. For example: What are vectors, how do you use them, _why_ would you use them? Do you understand the _concepts_ of assembly well enough that, given some instruction explanations, you could write a small snippet when necessary? (I've never done it in 5+ years, but you should know how the hardware works.) Can you write a function in an understandable way? An optimized way? Can you write some timing tests to prove which one's faster? Can you write a linked list from scratch? You'll probably never _use_ a linked list, but if you don't know how to work with pointers (and maybe how you might _abuse_ them when needed), you're gonna have a hell of a time working with the ungodly mess that is "engine code", depending on what department you want to work in. Ultimately, you need to be fully comfortable in C++. Other low-level languages (C, Rust, Zig, Nim, ...) would also work, as long as you're comfortable working at a level just above the hardware, since you may well be required to go down to that level when something (a previous programmer's code, an external library, or even engine code) doesn't _quite_ get the job done fast enough. Having a broad knowledge of data structures and their use cases is useful for making sure you don't write (usually poorly) something that already exists, and the same applies to algorithms. You may not be allowed to _use_ the C++ standard library algorithms, but knowing what they do and why each of them is there will help you think about data transformation in a clearer way (which you can use to document how your optimized mess of a for-loop originally worked before mangling it).
Anyway, sorry for the wall of text, but I hope this helps guide you (or anyone else reading) on what should help if you want to pursue working at a big studio! Or just use it as a guide to become a better indie/hobby dev! Either way, I hope you succeed and have fun in whatever future software related endeavors you run into!
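The linked-list exercise mentioned above translates to any language; here is a minimal sketch in Python, where object references stand in for the raw pointers you'd manage by hand in C++ (the class and method names are just for illustration):

```python
# A singly linked list "from scratch". In Python, references play the
# role of the raw pointers you'd juggle in C++, but the shape of the
# exercise (allocate a node, rewire links, walk the chain) is the same.
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt  # in C++ this field would be a Node*

class LinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, value):
        # New node points at the old head; head pointer is rewired.
        self.head = Node(value, self.head)

    def to_list(self):
        out, node = [], self.head
        while node:            # walk the chain until the null link
            out.append(node.value)
            node = node.next
        return out

lst = LinkedList()
for v in (3, 2, 1):
    lst.push_front(v)
```

The interview-style follow-ups (reverse it in place, detect a cycle) all come down to the same link-rewiring skill.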
The chief problem is not whether we can build good AI; it's that we can't really define what good AI is. Or we can't come to a consensus. If the goal is simple, like it needs to be really competitive and good at defeating the player, then that is actually fairly easy to achieve. Give the AI a version of aimbot in a shooter and it will kill the player almost every single time. But we don't want that because it's not fun. Most players don't necessarily want a very challenging AI as if they were competing against real people; they want something more casual, less intense. If good AI is one that is good at competing with the player, then you might be seeing games where the best you can hope for is a 50:50 win rate against the AI, and if you're not great at the game maybe you are losing 90% of the time. That game will instantly stop being fun. So I think AI is often intentionally bad because it's fun for the player to feel like they are great and can win most of the time. If good AI is to replicate human behaviour then again, I suspect most of us won't like it. Mainly because we like the AI to be predictable so we feel like we are learning over time how to defeat it. If it's very human-like then it will either be a bit random in its behaviour and we end up raging against the 'RNG', or it gets better and better over time and we end up finding it too challenging. Besides, human players do all sorts of stupid things that, if an AI actually replicated them, we would instantly complain the AI is stupid. E.g. think of all the times you were playing a game and accidentally fell off a cliff, forgot to activate a skill at the right time, just screwed up the input on your controller etc. If you saw the AI do that, you won't think, 'wow, that's just like a real player'. You'll think, 'that is a stupid AI'. Even something as simple as pathfinding is not straightforward.
If you want the AI to just get from A to B fastest, it would do what some human players do and jump over obstacles, sprint everywhere, etc. But in a game world we would find that weird and think the AI is behaving unrealistically. We want the AI to walk around predictably, respecting all the roads and social conventions, even though we as players often do not respect them. So at the end of the day the developers have to produce an AI that is not too efficient, not too good, follows rules that we as players have no intention of following, and makes us feel like we are great at the game. This is for the mass market anyway, where the money is. The only solution I can see? A somewhat 'dumb' AI.
What I recall is Half-Life was on a completely different level when it came out; it was stunning to encounter the teamwork of enemy soldiers. I think F.E.A.R. also made a step ahead there. And then there was Operation Flashpoint, where you finally became hunter and hunted... Simply put, you used to be able to simply hide out of sight and they would lose track of you, and then suddenly in those games you'd hide and they'd just toss a grenade in your rough direction. That was not something you knew or expected 😂
This is the first video of yours I've watched and I quite enjoyed it. You mentioned the 8-actor limit in the section on Spec Ops: The Line, and that reminded me of a topic of discussion I've had with several of my friends over the last year. I believe the next major optional hardware component we're going to see in PCs is an Intelligence Card. Very similar to a video card, but specifically for offloading AI or concurrent processing, specifically designed with much better cooling systems. Have you ever done a video on this idea?
Thanks for watching, and taking the time to comment. So, we're already seeing this idea of AI being offloaded. Nvidia's GPUs now carry Tensor cores for machine learning inferencing, and while they use it predominantly for DLSS, it's also why GPUs became so expensive for a few years prior to the RTX 40 series being released - AI and crypto people using them to offload algorithms. It's also now quite easy to train a deep learning model using your local GPU if you can't afford to pay for cloud compute. The big change that is happening is these intelligence cards, as you call them, are becoming part of the main chipset. Intel's new processors are being built with similar tensor-style cores in their chipsets. Intel Xeons are proving very popular with large-scale data centres. I've only talked about this when covering Nvidia's DLSS tech. The most recent being when Nvidia invited me to talk about DLSS3: ua-cam.com/video/M3Lf0XpgWSc/v-deo.html But outside of this, it's not a topic I've covered in a lot of detail. But it *is* something I could do in the future. Thanks for the suggestion.
Currently AMD has their neural network hardware (NPU) in their laptop CPUs and Intel has theirs in some of the Xeons, so we might see evolutions of them in their next/future desktop CPU releases. However, because neural networks are so ever-changing and it takes years to develop processing chips, by the time a neural network chip or "neural processing unit" gets developed, the NN software landscape might have changed so much that the NPU is rendered useless (some AI chip companies have failed because of this). With that in mind, we might see a mix of application-specific (ASIC) versions of NPUs for general NN work, and FPGA-like (programmable hardware) chips/chiplets for more flexible, adaptable workloads. Both AMD and Intel have previously acquired major FPGA companies (Xilinx and Altera), and both are in the space of having chiplet designs on their CPUs, so we might see both of them put NPU and FPGA chiplets in their CPU packages. Nvidia has their Tensor cores in their GPUs, so it's only a matter of time (I guess) before both AMD and Intel put their neural network or matrix accelerators in their GPUs (AMD already has a chiplet design on GPUs as well, and future GPU releases are rumored to be even more divided). In short, yes, we are at the beginning of a very interesting hardware and software development and "revolution".
@@AIandGames I found this comment very interesting, but as someone who is primarily into games and not hardware/programming and doesn't know much about that, a question. You say these built-in 'AI cards', if we'll call them that just to keep the terms close to 'graphics cards', might primarily be used for DLSS, which I only really understand as 'the tech that makes graphics fancier without demanding quite as much horsepower as before', and that makes me wonder. If these 'AI chips' are being made in order to offload DLSS stuff from the main 'graphics cards', is there any way they could do BOTH DLSS and make AI do more interesting decisions? Or would it have to be a trade-off the devs have to make themselves, at which point 95% of them will choose to use it for fancier graphics because that's how you attract more customers?
I think "strategy" games tend to have their own issues, over and above the issues you have outlined here. "Strategy" games (like AoE2 or Civilization) often see the AI managing dozens (or hundreds) of entities and tens (or hundreds) of different systems. It is very challenging to build an AI that understands and interacts with all of those systems, let alone one that does so well. With state machines, you also have issues with how you define behavior. Developers will typically break complex AIs down into subsystems. This both mimics the way these games are developed (as each subsystem is designed and tuned) and makes it easier to come up with reasonable behavior trees. However, this also creates a "fire wall" of sorts between different AI subsystems. For example, many Civ games (and Civ-likes) will have a system that decides independently for each unit how to move and attack. This makes sense, as each unit must move and attack, and there are often small positioning issues that matter a lot. However, there is typically also a high-level AI that decides who it should attack and who it should make friends with. This leads to issues when the high-level AI sees that it has 50 military units to the enemy's 20, but doesn't realize that all 50 of its units are in the wrong position. You can design behavior flags and modifiers to deal with most of these common scenarios, but it is difficult to manage all the way down. That is why you see, in AoE2, their AI has mostly been improved by giving the AI god-like micro, even though the AI still frequently chooses a bad unit composition. One problem is easier to solve using a simple set of rules than the other (as optimal unit compositions change significantly depending on the meta, the matchup, what your opponent is doing, etc...).
This is also why AI struggles to manage "amphibious" assaults in most every strategy game, as amphibious attacks require coordination between land units (the ones doing the attacking) and ships (the ones doing the transporting). As you mention, a big old neural net can fix these issues, but at the cost of training time and performance.
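The subsystem "firewall" described above can be sketched in a few lines. Everything here (the unit dicts, a one-dimensional front line, the ten-tile cutoff) is invented purely to illustrate how a count-based strategist and a position-aware evaluator can disagree:

```python
# Toy illustration of two AI subsystems seeing different pictures.
def naive_strength(units):
    # What a high-level strategist often sees: raw unit count.
    return len(units)

def positional_strength(units, front_line, max_dist=10):
    # Discount units too far from the front to matter this turn.
    return sum(1 for u in units if abs(u["pos"] - front_line) <= max_dist)

my_units = [{"pos": 50} for _ in range(50)]  # 50 units, all far from the front
enemy    = [{"pos": 0} for _ in range(20)]   # 20 units, sitting on it

# The count-based strategist happily declares war at 50 vs 20...
wants_war = naive_strength(my_units) > naive_strength(enemy)
# ...while positionally, it is actually 0 vs 20 at the front line.
actually_winning = (positional_strength(my_units, front_line=0)
                    > positional_strength(enemy, front_line=0))
```

The fix in real games is usually exactly the "behavior flags and modifiers" mentioned above: extra signals passed across the firewall, patched in one scenario at a time.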
My main issue with AI is when they follow different rules than the player in 4X and racing games, because it leads to very non-human gameplay that doesn't scale well with the player. With 4X games, for example, it's very much about snowballing and reaping the rewards for earlier decisions. As most AIs don't really play well, and don't follow the rules, they become completely useless by the end. At the same time, they are often way too dominant in the early game, as they need to compensate for the late game. It is significantly harder to create an AI that has to collect resources, wage war, and do everything else as the player does. But if someone did manage it, the result would be the greatest 4X game of all time, as it would be fair from beginning to end.
One of the biggest challenges you mentioned was predicting player behavior. This is a highly complex concept that would have to account for numerous unknown variables. Would it help to reduce those variables to do something like predict the player's destination, but not what route they'll take to get there?
Making intelligent assumptions goes a long way. If you look at games like Left 4 Dead and BioShock Infinite (which I've made videos about previously), they assume players will play the game 'properly' and build the experience systems around that assumption. So for example in BioShock, Elizabeth will always keep in proximity of the player, but she knows the path you're supposed to be taking to the next objective, and prioritises that where possible.
Sounds like you're well on your way to understanding what it takes to start to build software that models some of what you've described. Simples! (I have recently excoriated posters for using the term "simples")
Left 4 Dead is exactly my inspiration! I think I am on the right track, because an intelligent assumption you can make about someone playing a survival game is that they're going to run out of resources sometime soon and need to engage with a lootable area or an enemy, feeding the director the information it needs to task additional units and increase the intensity.
Left 4 Dead is a really good example as well given everything the Director AI does reinforces the underlying design rules of the game. It tries to force players to play the game properly and punishes them when they deviate from that remit.
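As a rough sketch of the director idea being discussed (this is not Valve's actual implementation; the thresholds, decay rate, and names are all made up for illustration), the core is an intensity score that rises with combat, decays over time, and gates whether more enemies get spawned:

```python
# Minimal Left 4 Dead-style director loop: spawn while intensity is
# manageable, back off once the player is stressed, and ramp back up
# once things have been quiet for a while.
class Director:
    def __init__(self, peak=100, relax_below=30, decay=5):
        self.intensity = 0
        self.peak = peak              # stress ceiling before backing off
        self.relax_below = relax_below  # quiet floor before ramping up
        self.decay = decay            # intensity lost per tick
        self.relaxing = False

    def on_combat_event(self, amount):
        # Called when the player takes damage, fires, is swarmed, etc.
        self.intensity = min(self.peak, self.intensity + amount)

    def tick(self):
        if self.intensity >= self.peak:
            self.relaxing = True      # player is stressed: stop spawning
        elif self.intensity <= self.relax_below:
            self.relaxing = False     # quiet too long: pressure again
        self.intensity = max(0, self.intensity - self.decay)
        return not self.relaxing      # True means okay to spawn enemies

d = Director()
```

The "punish deviation" behaviour mentioned above would hang off the same loop, e.g. feeding extra intensity-raising events when players camp or split up.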
I always see FEAR as an example of great AI, but during my playthrough, they just died way too fast to really see much in the way of "clever" tactics. I'd walk up to a group, turn on bullet time and they'd be dead in like a handful of shots. I think the most I've seen them do is maybe go around a corner and surprise me, but even then, that might've just been coincidental as I was basically walking around in circles. The AI in FEAR just didn't really strike me as any more challenging compared to any other FPS.
10:30 An example of "perfect" AI is in Carrier Command 2. That AI is tuned to do its job of capturing islands and hunting the player at maximum efficiency. However, it cannot do much inference, and its area of consideration is quite reduced. This is the only reason the player can beat the carrier: one can sit outside the AI's consideration range, and as long as one doesn't become distracted capturing islands, the enemy carrier can be destroyed. However, if one strays inside the AI's evaluation range, they will be rapidly and efficiently dispatched with all the resources available to the AI, including all manner of simultaneous actions. Consider this: just as graphics settings are often fully exposed so that end users can tune their experience based on performance tradeoffs, perhaps AI tweaks should be exposed to offer a similar thing. Have a weak machine, or want the AI challenge to be low? Reduce the AI ticks and scope. Have a beefy machine and want to be destroyed? Set those sliders to maximum! In a way this is similar to what happened to sound; sound in video games has become terrible. But roll back 20+ years and you had all manner of 3D sound placement options based on hardware and software, as well as sample rate and bit depth. Making everything simple for games consoles has probably contributed to our current scenario.
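The slider idea could look something like the following in practice. This is a hypothetical sketch (the setting names, 1-D positions, and default values are invented, and it is not how Carrier Command 2 actually works), treating AI tick rate and consideration range as player-facing knobs just like graphics settings:

```python
# Expose AI cost knobs the way graphics settings are exposed:
# how often the AI re-plans, and how far it can "see".
from dataclasses import dataclass

@dataclass
class AISettings:
    ticks_per_second: int = 10          # re-planning frequency
    consideration_range: float = 500.0  # perception radius

def visible_targets(settings, self_pos, targets):
    # Positions are plain 1-D numbers here to keep the sketch short.
    return [t for t in targets
            if abs(t - self_pos) <= settings.consideration_range]

easy   = AISettings(ticks_per_second=2,  consideration_range=200.0)
brutal = AISettings(ticks_per_second=30, consideration_range=2000.0)
```

The appeal is that both knobs scale CPU cost and challenge together, which is exactly the graphics-settings analogy.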
Some of the biggest problems I had were in regard to memory and performance management. It is very difficult to handle pathfinding and independent "personalities" and decisions for AI in an environment such as an open-world sandbox, especially if multiple players can interact with them from different areas. They have to be somewhat persistent and aware of complex game-state information. This can be compounded by factors like giving the AI families or pets, procedural animations, "emotions", and talk/chatter.
Everyone wants AI to be generalised and sentient. People have forgotten that you can make something which runs on rails and is relatively primitive in terms of how it actually works, yet is still extremely effective. So because they don't know that, they try to design something hugely complex and expensive, and it predictably never gets off the ground because it is too complex to implement, or at least implement fully.
Can you explain what you mean by "complex but tightly defined problem spaces"? The idea of something being complex but also tightly defined feels like a contradiction in my head. What aspects of a problem space would imply its complexity rather than that it's loosely defined, and vice versa for tightly defined vs simple? (Probably a bit of a broad question, sorry)
So racing is a good example. The AI agent (i.e. the bot) only has to race on a track, and maybe consider avoiding other racers (my video on GT Sophy explains how they focussed on the 'etiquette' of AI racers). But the task of controlling a race car as a bot is very complex: you have a myriad of factors (speed, acceleration, current heading, race ribbon, nearby cars, physics etc.) to consider when making any action in a given frame. However, once you've built a system that can mitigate those factors sufficiently, it will arguably be able to race the majority of new tracks you provide for it, because racing doesn't change all that drastically from track to track. It's also easier to make it adjust to different difficulty levels, even if (like in Forza) you just mess with the car physics slightly. Companies like EA have used ML to train AI to fly vehicles in Battlefield during testing because that was less work than trying to write a bot to do it successfully. Compare this with, say, playing Civilization VI. That problem space is very complex, but it's also incredibly broad. There's a lot of different facets of gameplay to consider (territory, combat, construction, trade, upgrading etc.), and you'd be trying to build an AI that is flexible, adjusts to different permutations of a Civ VI map, and can also successfully predict future outcomes such that it can make good actions now to win the game 8 hours later. Is it possible? Yes. Is it going to be scalable/adaptable/cost efficient... I'd argue no.
@@AIandGames ah ok I think I understand now. So the complexity of the space is about how many different things you take as inputs (Forza and Civ both having a lot of different things you need to consider), and tightly defined meaning how you measure success isn't very ambiguous, it's pretty easy to tell how a driver avatar is doing in a race, it's less easy to tell how a civilisation is doing in Civ since there's so many more paths to go down and ways to win, not to mention the strategies available to you change from map to map depending on the terrain and resources. And actually you can relate this back to the AlphaStar example, if we imagine for a second that our problem space is not just the version of the game we're training on, but all other possible versions of the meta, suddenly the problem space is much more loosely defined since there's more uncertainty about how effective a strategy might be, so any model we train on that space is going to have a much harder time learning, but if we reduce the problem space to one single version, a lot of that ambiguity disappears
Having played 7 Days to Die from alpha 13 to present, one thing I noticed is the game was more fun when the zombies were dumb in the early days. As we moved up the alpha versions, the zombies' intelligence increased, i.e. calculating the path of least resistance by evaluating every single block's health. That just led to the zombies becoming more predictable, to the point that now you can direct them exactly where you want, knowing this. Making the game less fun.
Just found your channel and that was extremely informative and interesting. I've always wondered how AI in games ticks, even with the most simple of functions; it is honestly quite incredible when you think it's all just a bunch of symbols. Shame that more priority is given to graphics than time allocated to let the AI cook; hopefully one day we'll see a shift. Just got another sub, keep up the good work.
I was a bit uncertain about what you were saying with regards to AlphaStar needing lots of replays to train from, making its approach a big problem for shipping with a title (because it relied on years of gameplay data). I was uncertain because I thought it was a purely reinforcement-based approach to the training, similar to how OpenAI Five was trained for DOTA 2, but it turns out AlphaStar was initially seeded with bots trained under supervision of replay data! The second part, in which they trained the model using reinforcement learning, is what really made the bots shine, but because we know OpenAI Five was trained in a purely reinforcement fashion, I think this does negate the point somewhat, and one in principle could be trained before shipping a game. But for all the other reasons mentioned (cost, compute, complexity, etc.) it really isn't worth it. Either way, great video!
OpenAI Five is largely misleading due to the reduction of the problem space. It relies on in-game API data and is designed for a handful of playable characters. Plus it cost tens of thousands of dollars to train for each hero. Scale that up for each hero, and then the need to re-train it for each balance change, and it's simply unfeasible. Is it cool? Yes. Impressive? Absolutely. Practical? Hell no.
@@AIandGames This was exactly my point, completely impractical, but doesn't require the user data from high level players which was my point. Also, needing access to the API is something you would have whilst developing an in-game AI practically so not exactly a downside. But don't misunderstand me, my comment was meant to be: A) I didn't realise AlphaStar used any supervised learning B) Reinforcement learning negates any points about requiring player data and C) I agree, it's still not a practical method due to above mentioned reasons (cost, compute, complexity, etc)
The main difficulty in making good AI in games, is that you don't actually want too good/smart an AI, because it would be terribly unfun for players to be constantly defeated. It has to be well designed more than just smart. Players don't enjoy being outsmarted. However that's another debate...
Players do enjoy that. Because that's fun to lose. Entitled pricks who want to brag dislike it. The fact that the market doesn't shift toward better AI says a lot about the kind of people that are playing games. And reading gaming forums confirms that.
Good AI is easy. Getting players to accept it is hard. An example is Halo. The AI had a simple choice when a grenade is thrown: step left or right. It was a simple baked-in action. Yet people heralded it as the second coming of Christ. Players wouldn't know good AI if it bit them on the ass, and if it perfectly resembles human intelligence they reject it as 'bad'.
I would make a combination: since the user can choose the difficulty they want to play on, that's the base starting point from which the AI can learn while the player plays. What I mean is, every GPU these days has tensor cores, which could be used to train the AI in real time for that specific player on their computer, making the game more fun and challenging at the same time. Of course you would need to add some limitations so the game doesn't extend to infinity, and let the player win the campaign/skirmish :D
I think one of the more challenging things is definitely balance. I remember as a kid playing Jimmy White's Cueball on the Game Boy and hating it. If you missed a single shot it was game over. The AI opponent could not miss. No matter how you snookered them, it was essentially guaranteed they would pot something every single shot. It could have been a decent handheld snooker game otherwise, but the balance was way, way off.
10:45 First thing I thought of was CoD on the highest difficulty, where they know where you are, have eagle vision and land perfect headshots despite recoil.
Although there are regularly unknowing players who publish stupid statements about an AI in a game being too dumb, because they won, they wouldn't like the opposite. The point that can't be overstated is that no player really wants an amazing AI enemy, because they wouldn't stand a chance. Imagine an average warehouse scenario with some enemies who are well-trained, cooperating soldiers. Make a move, breathe too loud, and you're toast before you can react. And you can't easily train an AI to a vague target like "try to find and eliminate the player, but only so hard that you only win if the player makes more than 100 mistakes, and do it slowly". So convincingly dampening down a well-trained AI to a below-average player's level (or even doing this for five game difficulty levels... with the highest still being an intentionally dumb AI) is an interesting challenge by itself. If not covered yet, certainly a topic for a video, I'd say.
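One common approach to that dampening problem is to leave the decision logic alone and degrade only perception and execution. A hypothetical sketch (all the numbers are invented for illustration, not taken from any shipping game):

```python
# Dampen a competent AI without retraining it: inject reaction delay
# and aim error scaled by a difficulty value in [0, 1].
import random

def reaction_delay_ms(difficulty):
    # Superhuman 50 ms at max difficulty, sluggish 800 ms at min.
    return 800 - 750 * difficulty

def aimed_shot(true_angle, difficulty, rng):
    # Lower difficulty means more jitter around the true firing angle.
    jitter = (1.0 - difficulty) * 15.0  # degrees of error at worst
    return true_angle + rng.uniform(-jitter, jitter)

rng = random.Random(42)
```

Because the underlying "find and eliminate" logic is untouched, the AI still looks purposeful at every level; it just misses and hesitates more, which maps naturally onto five difficulty presets.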
Hobbyist here! For an AI class final project I used an evolutionary algorithm to try to "learn" a couple of simple games (rock-paper-scissors, prisoner's dilemma, etc.). Ultimately, it didn't work, but in an interesting way! In an evolutionary algorithm, you optimize a pool of candidate solutions to maximize a fitness function. In short, you take the best of each generation, run until it converges, etc. The issue I ran into is that I was comparing them against each other, so the fitness function was not constant. It immediately solved prisoner's dilemma, as a significant number of initial candidates will choose cooperate, boosting the fitness of the strategy as a whole. This collective behavior also appeared in a different game I tested against, which I called joust (not that one). I wanted a game that had current state and multiple turns, but this introduced a neutral outcome. Unless the fitness of the neutral outcome was significantly negative, the entire EA collapsed into inaction.
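For readers curious about the machinery being described, here is a generic EA skeleton against a *fixed* fitness function (OneMax, i.e. maximize the number of 1-bits; all parameters are arbitrary). The project above instead scored candidates against each other, which makes fitness non-stationary, and that moving target is exactly what produced the collapse described:

```python
# Minimal evolutionary algorithm: truncation selection plus
# single-bit mutation, with elites carried over unchanged.
import random

def evolve(genome_len=20, pop_size=30, generations=120, seed=7):
    rng = random.Random(seed)
    fitness = sum  # OneMax: count the 1-bits
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elites = pop[: pop_size // 2]      # keep the best half
        children = []
        for parent in elites:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # flip one bit
            children.append(child)
        pop = elites + children            # elitism: parents survive
    return max(pop, key=fitness)

best = evolve()
```

Swap `fitness` for "play everyone else in the pool" and it becomes coevolution, where a strategy's score changes as the population drifts, which is a much harder setting to get convergence in.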
I am a programmer, and I can tell you it's actually two reasons: one, game designers don't have time to keep testing all the combinations of what their code will do; and two, the AI is self-aware and able to rewrite its coding, or at least do things differently than its previous masters built it to. The AI will rebel and do things a bit differently.
Couple questions to throw out there: 1: Are there any examples of developers which have, or are trying to, release a game with a primarily trained AI? 2: Is there any chance of AI training becoming easier/cheaper and more productized in the future? For example a company can release an 'AI Trainer' product which will train itself and create a policy for any given game, and devs can pay for that service to easily get AI for their game without needing high level ML expertise themselves? Sort of like how LLMs have become productized and are being adapted to various applications now.
Not sure I understand Q1, but Q2 touches on the next 'big thing' in AI, which is called 'foundation models': the idea that you can have an AI that already knows how to be a bot in a first-person shooter, then you train it to specialise by focusing on what makes the game you're making unique or distinct in the market. Foundation models are still years away from being practical (I think anyway). The alternative is ongoing work in improving imitation learning: where the AI learns how to behave by watching a designer play the game and then tries to replicate it.
This doesn't mean developers have an excuse. AI is bafflingly getting worse rather than improving due to everything being about graphics over gameplay now. Look at Far Cry 2 vs 5/6, GTA IV vs V, Hostile Waters from 2001 vs practically any other RTS. xD Of course it lies with the execs; when I say lazy devs I mean it in a very general sense. Gaming needs to move its focus away from graphics drastically!
Lazy people don’t become developers, or if they do they don’t stay for long. It’s more a question of studios not giving them enough time and resources to dedicate to AI.
For all Cyberpunk's improvements, the AI is still quite terrible, which was quite disappointing. The 2.0/2.1 overhaul has been fantastic, and at times the AI is better than it was. But there are still so many times it just zones out, and feels as brokenly typical as most game AI.
Mate, it isn't about lazy devs. The tweet that kicked off this discussion with BG3 was unfairly aimed at indie devs who could not dream of creating a game with that amount of scope. The devs themselves I have no issue with. What gets me really bloody annoyed is the big game *publishers* with execs who decide on making expensive microtransactions and DLC, these higher-ups who pay developers a pittance and burn them out via crunch and impossible deadlines. It all ends up in shoddy work by stressed devs and a disgusting non-product for the consumer. Edit- I've read the replies and @pn4960 also points this out
It seems very likely that a big studio or even a small one that is dedicated could train an AI afterwards, once it is developed, and then plug that model back into the game once it has been trained. So in that way the initial AI could be a temporary bootstrap solution. A lot would depend on implementation details, but I think that's probably possible. This doesn't mean super AIs necessarily, but you could even train models at a variety of levels, with a variety of quirks. If the resulting model can be plugged into the game loop in a quick way (and why not?) then that seems like a great way to make some of the game AIs of the future. Heck, there are a lot of old games with virtually no players, or with abandoned multiplayer entirely, or which never had good AI to begin with, which I'd love to play against such models.
The better way to implement ML in game AI is to restrict it to dealing with very strictly, discretely defined problems, and to not use ML for the macro strategy. You would end up with puzzle pieces or behavior blocks that you can mix and match with a more traditional preprogrammed AI that uses these machine-learned behaviours as solutions to problems it encounters. For example, the AI experiences a hardcoded percentage of damage on its building X from enemy type Y units, and a machine-learned solution module would be activated to counter it. Or you could make the AI's trigger machine-learned, and the response a hardcoded module. This way the machine learning solution becomes far more flexible as things change.
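The trigger/module split suggested above might be wired up like this. Everything here is hypothetical: the triggers, the state keys, and the stub functions standing in for trained policies:

```python
# Hardcoded triggers dispatching to pluggable "learned" response
# modules. In a real system the modules would wrap trained models;
# here they are plain functions returning an order string.
def anti_air_module(state):
    return "build_flak"

def anti_rush_module(state):
    return "wall_off"

RESPONSES = [
    # (trigger predicate over game state, response module)
    (lambda s: s["building_damage_from"].get("air", 0) > 0.3, anti_air_module),
    (lambda s: s["enemy_units_seen"] > 5 and s["minute"] < 4, anti_rush_module),
]

def decide(state):
    for trigger, module in RESPONSES:
        if trigger(state):
            return module(state)
    return "default_macro"

air_raid = {"building_damage_from": {"air": 0.5},
            "enemy_units_seen": 0, "minute": 10}
```

The nice property is that each module can be trained, tested, and replaced in isolation while the dispatch layer stays trivially auditable, which is hard to say of an end-to-end model.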
So uh, I'm not really a game dev. I just want to cross "make a game" off the bucket list. So to that end I've been working on a small card game, kinda like those you could find on flash game sites. And let me tell you, coming up with a working AI, let alone a competent working AI, has been really hard. I've got some clues here and there, so now I think I know which algorithm I should be studying (that being Monte Carlo Tree Search). All of that has led me to this channel, so silver lining and all of that.
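Before full Monte Carlo Tree Search, it can help to start with "flat" Monte Carlo evaluation: score each legal move by random playouts and pick the best, which is the core idea MCTS then refines with a tree and a selection policy. A sketch on single-heap Nim (take 1-3 objects, taking the last one wins), chosen because its optimal play is easy to check:

```python
# Flat Monte Carlo move selection for single-heap Nim.
import random

def playout(heap, rng):
    # Returns True if the player about to move wins under random play.
    to_move_wins = True
    while heap > 0:
        heap -= rng.randint(1, min(3, heap))
        if heap == 0:
            return to_move_wins
        to_move_wins = not to_move_wins

def best_move(heap, rollouts=2000, seed=1):
    rng = random.Random(seed)
    scores = {}
    for take in range(1, min(3, heap) + 1):
        remaining = heap - take
        if remaining == 0:
            return take  # immediate win, no simulation needed
        # After our move the *opponent* moves, so we win when they lose.
        wins = sum(not playout(remaining, rng) for _ in range(rollouts))
        scores[take] = wins / rollouts
    return max(scores, key=scores.get)
```

Optimal Nim play leaves the opponent a multiple of 4, and with enough rollouts the win-rate estimates recover exactly that; for a card game the same skeleton applies, with `playout` dealing random hands and playing random legal cards.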
New common practice among programmers is to literally just copy and paste. In the past it was far more common for devs to dynamically solve issues to the best of their ability, but now it’s all about updates if it hurts monetization.
This is pretty new to gaming, it's obvious there's going to be a period of time for game programmers to get to know how best to utilise the technology in their work. I think they should stick to trial and error before making any big decisions. It might be a good idea for them to use it to aid them in the making of the games for the time being, until they get to know the tech better....then they can slowly weave it into the games bit by bit.
I have just got into making games as a hobby, and I'm really glad that you have a channel dedicated to AI in games, as this is an issue I am definitely going to struggle with. Do you have any recommendations for further reading on this topic? As of now I'm currently reading "AI for Games" by Ian Millington, which covers quite a bit on techniques used in different game genres. But I'm currently making a Tetris battle game, which is a Tetris game with 2 players (either human vs human or human vs CPU). Skimming through the contents, I haven't found anything in that book which covers making AI for playing Tetris (in this case I mean to create a challenging experience for different levels of difficulty, rather than making the AI become a top player). Thanks for the well made content!
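One simple way to get difficulty levels out of a placement-scoring Tetris opponent is to widen the pool of acceptable moves as difficulty drops. The placements and scores below are abstract stand-ins; a real evaluator would score holes, column heights, bumpiness, lines cleared, and so on:

```python
# Difficulty scaling for a heuristic Tetris AI: at difficulty 1.0 it
# always plays the best-scored placement, at 0.0 it picks uniformly
# from the whole ranked list.
import random

def choose_placement(scored_placements, difficulty, rng):
    # scored_placements: list of (placement, heuristic_score), higher is better.
    ranked = sorted(scored_placements, key=lambda p: p[1], reverse=True)
    # Shrink the candidate window as difficulty rises.
    window = max(1, round((1.0 - difficulty) * len(ranked)))
    return rng.choice(ranked[:window])[0]

placements = [("slot_a", 9.1), ("slot_b", 4.0), ("slot_c", -2.5)]
rng = random.Random(0)
```

This tends to feel fairer than giving the CPU slower pieces, since every difficulty level is "trying" with the same evaluator and just tolerating more suboptimal play.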
I’d really like to know why we are still referencing FEAR and Starcraft 2 instead of more modern games. Where is the new brilliant AI? TLOU2 had a similar AI to FEAR; is it just that FEAR and other games of that era were a leap? I’d love a video covering the topic of why we still find ourselves talking about games from over a decade ago.
Fun trumps good every time in games. It's also worth noting that amazing AI is no fun if the player can't learn and predict what an NPC will do. Black & White (2001) got a lot of hype because of its AI. But once the hype died down, it was obvious the creature you were meant to train was virtually unfathomable. The training gameplay was unsatisfying and frustrating because there was no good feedback to help the player understand the creature's state of mind.
Unless something's changed in the last 5 years, then according to someone I used to work with who had just finished a degree in game design, it was difficult to design the AI to be bad enough that people could actually defeat the enemy/opponent. This has been a problem since the early days of game design. The computer knows what you are doing the moment you do it and can easily beat you, since it can do everything in the game perfectly. It has to be seriously dumbed down, either by imposing limits on it or by pre-programming reactions; in shooters, a bit of dumbing down plus making them weaker is often used.
While it makes sense that making a decent AI is *challenging*, I'm still a bit lost on why it appears to have been *unachievable* in so many games, especially when it has been achievable in others. Why does one game fail so hard where another succeeds? How come the programmers cannot figure it out? Does it come down to budget? Is it a skill issue with the programmers? What about copyright issues between companies? Aren't there techniques that companies block others from using? Are there "open source" generalizable AI solutions?
In regards to it being difficult to make good AI, has this channel considered going back to previously covered methods/games in order to look at how well or poorly they've aged, and the issues that were discovered with them over time? I'd been thinking about fighting game AI recently. The Shadow AI of Killer Instinct (2013) was a hot topic for years, and then it pretty much vanished. SNK tried its own version with Samurai Shodown (2019), but its Ghost AI was seemingly considered a failure from the start. My own experience with KI's Shadows was disappointment; it could look okay during a combo, but it didn't really live up to the hype of mimicking human playstyles and it could completely fall apart when put into even super basic situations that its training player had never encountered.
I'd really like to see this as well. I saw someone else mention that Tekken 8 has also implemented some kind of "Ghost" AI bots to fight against, and that people seem to like it... but will they also fade out like with Killer Instinct and SamSho? Or will this be the iteration of AI bots in fighting games that sticks the landing? Can such bots even work in the first place? I'd like to see someone who knows their stuff, like this channel, discuss this.
When I was a teenager in the 90s, I expected strategy games to have units with at least rudimentary AI as standard. I thought every tank would be at least as smart as a bot in Quake 3 and be coordinated by squad AIs, so you would have real, dynamic battles instead of units stupidly shooting at each other. But strategy games today are exactly the same as in the 90s, just with better graphics.
The key point of the video is when you briefly discussed AI needing to be fallible. If you're a highly skilled PvP player, and you're wondering why the AI is unsatisfying, it's because it's not for you. It's functionally a tutorial for people who are not ready for PvP.
Very good video. But I think it still didn't get to the point I was hoping for before watching. The AI you seem to be talking about is (for me, with no background on the matter) something the developer has to program and account for. That really seems to be the biggest issue, because it demands a huge amount of work. What I wonder is whether, with generative AIs, there could be little brains in a jar inside the GPU or an AI card that we now use as gaming hardware, which developers could use to essentially make the game smarter without going through all that work. If you could talk about this in a future video, I think it'd be interesting. Thank you for the video!
I feel strongly that multiplayer games becoming the focus of much development, having players fighting other players, sidelined the evolution of NPC AI as investment of time and effort shifted.
I don't think there is an "evolution of AI" in the context of game development to begin with. If a game isn't just a clone of another game, then the AI must be built differently to account for its different mechanics. In practice, what one game does with its AI has very little relevance to what a different game would do, so there's nothing to really build on top of at all. Pretty much every game builds its own AI from scratch.
Super intelligent video game AI is not difficult at all to create... it's simply dumbed down intentionally because that's more fun for the player. Nobody likes playing a game where they have no hope of ever winning. Oftentimes what you'll get instead is "difficulty" options which scale how smart the AI is allowed to be, with a max cap that's expected to be challenging but not impossible to win against.
The problem seems to be human more than tech related. If F.E.A.R. can have enemies with relatively simple AI do things like play dead, crawl under trucks, go on long flanks, jump through windows, and stop making noise to wait for the player when they're the last one left, I see little reason why most shooters can't make AI more interactive. When was the last time you saw an AI in an FPS crawl into a vent to flank you from an unexpected angle? Jump through a window to avoid a grenade? Try to pin you down while one guy advances on you? Try to pick up your grenade and throw it back at you? Use environmental hazards against you? Go quiet and try to sneak up on you and get the drop on you? Grab a better gun off the ground and use that against you? When was the last time AI played around your last known position and the cues you create when you move/shoot/interact with the environment, rather than knowing where you are at all times? When was the last time AI took the high ground to get an advantage over you? Now ask yourself: when was the last time you saw an AI do ALL of this in one single game? Is it really that costly to add more interactivity to an AI model made 19 years ago, considering where hardware and software are these days, and then build an environment for it to display all of these features? And can you honestly say you would have LESS fun if the AI was MORE interactive, considering things like difficulty levels exist to tweak the experience so you don't instantly die when you get flanked? Furthermore, do you think you'd have less fun if an AI could do all that as your companion? There's still some hope out there: some games that, while not doing as much as listed above, still put good work into making the AI interactive.
"add more interactivity to an AI model made 19 years ago..." - I think this is where your misunderstanding of the problem comes from. Basically every game has to create its own AI from scratch; they aren't building on top of an already existing AI. The differences in game mechanics are just too large to have any kind of general AI framework that applies to a significant number of games (unless the games are nearly exact clones of each other), so every time someone starts a new game they're starting over from scratch. There is no "progression of AIs" in games. They're not "just adding 1 more feature to the AI"; they need to rebuild the entire AI from scratch and then also add those new features. And as you add more and more features, it quickly becomes completely impractical to devote that much developer time to the AI.
I'd love to see a video on the AI in games like Yu-Gi-Oh! Legacy of the Duelist. I came across what seems like a very big logical error that would not happen with a human player, and then some smaller things a human player might not do.
Do the achievements from AlphaStar and OpenAI have practical effects for AI in games, or is it all more of a spectacle, more for researchers than any tangible benefit for devs?
Speaking as a dev, and a more cynical dev at that? It's amazing research, but for me there are no real practical implications or takeaways from it. I don't think it will give me much in the way of tools either, tbh.
For RPGs, it seems like there should be less excuse for poor AI beyond difficulty levels. There's rarely a changing meta like with RTS and MOBAs, and they usually aren't fast paced like an FPS or similar. For example, in Persona 3 Reload, why doesn't the AI learn the weaknesses of my team like I learn theirs? Some bosses do, and those fights are thrilling and don't feel like you just got RNG'd to death because the AI got lucky with randomly chosen options that happened to work. There's a fight later where there are two bosses and they exhibit teamwork, just like you can with your team. Even with RTS, there's often room for improvement. I understand that the developers won't necessarily know how the gameplay/strategies will get optimized. Once it's in the wild, the players will hash that out, especially in a competitive game. I think patches could be used to develop the AI further as basic strategies like build orders or openings are developed by the player base. I played an RTS called Kohan: Kings of War, and it had a script language where you could build your own AIs. I could build AIs that better emulated the general concepts behind build orders in the early game and keeping the economy going in the mid/late game as they built up forces. I thought it interesting that all this was built directly into the game, along with learning and such, to radically alter how the AI played and let each AI player have a different playstyle... yet this wasn't much explored by the AIs that came with the game. The game's AI was even written up in a "How to Make Games" book back then for its execution of goal-based AI systems. Someone on the dev team obviously cared a LOT about the AI... only for that care not to be used in the shipped product. At least it was available to the players, and a sub-community of AI creators emerged as a result, which was cool.
I'd expect it to be more difficult to create a reasonable fallible AI than an optimum AI. At least with the optimum AI, your ultimate goal is simple to define and test for. But how do you define an AI that isn't optimum because it fails in realistically human-like ways, while still avoiding unrealistic non-human failures?
What about a combination of systems? A more rigidly defined system that then utilizes machine learning to improve its functions and add variables, with that result put into a new rigid system. An iterative form of training an AI rather than relying on machine learning as the operation. Example: you create a basic system of interactions and let the AI play out those interactions with the ability to adjust and make choices based on changing states. Then you produce goal-oriented behaviors based on the machine-learned behaviors, and iteratively increase the complexity of the AI, but within the context of traditional game AI systems. In essence, utilize machine learning to find solutions to very complex problems when programming AI, but never use machine-learned AI as the operating system directly.
Yeah the handful of cases where ML is employed in NPC/bot behaviour is often with it combined with other systems. Even outside of games, it's common to put symbolic AI frameworks around ML systems as an overarching control mechanism. I think there's some real opportunity down the line to start changing how Game AI works by using ML in key areas.
@AIandGames Thanks for answering. And yeah, maybe we'll reach a point in the future where new consoles have an onboard AI chip to handle ML operations, and developers will be able to offload ML calculations that work in tandem with traditional AI on the normal processor. Much like how the RTX part of an Nvidia card offloads the ray-tracing calculations from the main 3D graphics calculations. I don't think an LLM system for dialogue will be a big thing, since it's a terrible way to convey a story and people want actors handling the character bits in games, but for behavioral and strategic systems, or world systems that react dynamically to the player, there could be a lot of potential there, especially through a dedicated neural chip inside the next generation of consoles.
Still, current games are missing some watchdog code like "if a move order is active and the unit's current position equals its last position, do something cheesy to get it out of there", or "if the unit's z position hits 0, respawn it"...
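The watchdog idea in that comment can be sketched in a few lines. This is only an illustration: the field names, frame threshold, and kill height below are invented, not from any particular engine.

```python
# Minimal stuck/out-of-bounds watchdog for a unit, run once per frame.
STUCK_FRAMES = 30   # how many frames of "no progress" count as stuck
KILL_Z = 0.0        # at or below this height, assume fell out of the world

def update_watchdog(unit):
    """unit is a dict with move_order, pos (x, y, z), last_pos, stuck_for."""
    if unit["move_order"] and unit["pos"] == unit["last_pos"]:
        unit["stuck_for"] += 1      # ordered to move but going nowhere
    else:
        unit["stuck_for"] = 0
    unit["last_pos"] = unit["pos"]

    if unit["stuck_for"] >= STUCK_FRAMES:
        unit["stuck_for"] = 0
        return "unstick"            # e.g. nudge, repath, or teleport
    if unit["pos"][2] <= KILL_Z:
        return "respawn"            # fell through the floor
    return None
```

The caller would react to the returned event; real engines typically measure distance travelled rather than exact position equality, since physics jitter keeps positions from matching bit-for-bit.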
The thing is, you really don't want game AI to be TOO smart. In the game designer's meta game, you are trying to make a game which is fun to play. Part of that fun comes from being sufficiently challenged; there's a challenge sweet spot. If you make a game that's too easy, it won't be very interesting to players. If you make the game impossible, that won't be too interesting either. But if you make the game somewhere in between, where it's sufficiently challenging but not too challenging or too easy, it'll be fun. Game AI which is created to perfection, where "perfect" means it wins every time, puts the game difficulty into the realm of impossible for players. That stops being fun or challenging. So the real challenge for game AI programmers is to make the AI good enough to present a bit of a challenge/obstacle and be an adversary against the player's goals, while at the same time making it appear convincingly intelligent. It just has to *appear* intelligent rather than *actually* be intelligent. At best, your AI agents should be there to challenge the player and keep them honest.
There was actually a strange thing when it came to AI. Because of how machines and programs process information and translate it into meaningful action, optimizing AI means it takes all the information a normal human wouldn't have access to or the means to process, and executes at close to peak efficiency, which made games feel as though they were cheating. After all, all an AI needs to do is respond in kind to a player input at light speed and there you go. It's basically the same reason why all the "catch up" mechanics in racing and sports games are done "off screen". The most egregious example of optimized AI is in the arcade, where CPUs were designed to operate at peak performance to eat up quarters and tokens like nobody's business. As someone who actually - as in, actually - works in this area, AI that is too optimized makes for a less satisfying game experience than one would think, whereas a dumb AI just frustrates. It's why mechanics generally work according to the "built course" rather than the "built response" in terms of input-output; you can actually see the hurdles in the former.
They need to just have a giant ultra super call center-like building where millions of employees gather and play the roles of (N)PCs in all videogames, 24/7.
Just wanted to clarify that 2^1685 is not 2 with 1685 zeros after it. It is 1 with 1685 zeros after it *in binary*, which comes out closer to 2 with 507 zeros after it in the usual decimal system: 2^1685 is a 508-digit number beginning 1720056425011773151265118871077591733216276990085... By contrast, 1 with 1685 zeros after it in decimal (10^1685) is a tremendously bigger number, with 1686 digits. It doesn't really change anything you said after that in a significant way, but it's still hundreds of orders of magnitude off target.
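Python's arbitrary-precision integers make the magnitudes in that correction easy to verify directly; a quick sanity-check sketch:

```python
# Checking the sizes of the two numbers discussed above.
n = 2 ** 1685
print(len(str(n)))   # 508 decimal digits, i.e. roughly 1.72 * 10^507
print(str(n)[:7])    # leading digits: 1720056

# "2 followed by 1685 zeros" is a very different number:
m = 2 * 10 ** 1685
print(len(str(m)))   # 1686 decimal digits
```

So 2^1685 really is hundreds of orders of magnitude smaller than 2 with 1685 zeros after it.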
Idk much about AIs, but I can say that I don't want super smart AIs. I like when they can think like in Alien: Isolation: the alien will check some places more often when you use them more often. I liked that system; it felt smart, but dumb enough not to make it that hard to progress. Also, in games like Ace Combat 7, the enemies are usually stronger, and if they were smarter, especially to the point of players, it would be nearly impossible for any casual player to beat. The AI there was dumb enough to make it fun and make you feel powerful, but smart enough to pose a challenge and give you high risk and reward situations.
When I design games, I keep it relatively simple. As the levels increase, I increase the aimbot accuracy of the game so it feels godlike against the player.
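That accuracy-ramping approach can be sketched as a per-level aim-error curve. The numbers and function names below are illustrative, not from any shipped game:

```python
import random

def aim_error_degrees(level, max_error=10.0, min_error=0.5):
    """Cone of aim error applied per shot, shrinking as the level rises.

    Level 0 gets the full max_error; level 10 and above get min_error,
    so late-game enemies feel near godlike.
    """
    t = min(level, 10) / 10.0          # clamp to levels 0..10
    return max_error - t * (max_error - min_error)

def perturbed_aim(true_angle, level, rng=random):
    """Offset the perfect firing angle by a random error for this level."""
    err = aim_error_degrees(level)
    return true_angle + rng.uniform(-err, err)
```

Scaling error (rather than raw damage) tends to read as "the enemies got better at the game" instead of "the numbers got bigger".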
This is why I despised the vanilla Fallout 4 base attack mechanics. They just didn't have the resources to do enough to make it work. When it is fully scripted, in a controlled environment, it works a bit; for example, the attacks by the Brotherhood on The Castle. But everything else doesn't work, and I am pretty sure the ambition was to make the player turn each base into a fortress. The instructions in the tutorial are to build walls, for example, and set up turrets. However, there is little in the game either to help the player build walls or for the enemy AI to counter them. The closest was the Gunners, who do snipe at turrets. Most every attacker is otherwise helpless. Walls are not destructible, which is a huge problem. In real life, walls slowed down attackers but were never invulnerable; mighty walls have been undermined since ancient times. So OK, Bethesda could not make walls work for the AI, so a lot of the time the attacks spawn inside them. Or your idiot settlers rush outside your fortress and into the jaws of the otherwise helpless monsters. This largely neutralized any tactics the player could put in play beyond individually arming each settler, which is tedious and inefficient, and also breaks immersion. Why can't you build an armory, build a firing range, and then watch settlers equip and train? Either they should have accepted the limitations of the AI - and they have known them for decades now - or they could have removed base attacks and maybe put them in a later game or even DLC. I saw so many attempts by players to fix the AI, but ultimately they are fighting the game devs, who can and do destroy years of work with compulsory updates.
The machine learning section was far too short in my opinion. What about the question of using the new GPUs/CPUs being released now that have machine learning capabilities built in to run the AI? I mean, if my GPU can run AI software in the background for ray tracing and DLSS, why can't it also run AI software in the background for the NPCs it's generating? It would only need to simulate the NPCs it's also drawing on screen for the real-time shadows, ray tracing, and DLSS. Even phone CPUs now have ML and AI capabilities baked into the hardware to run AI algorithms offline.
Like I say there's an entire video already dealing with ML. But the issue isn't model inference, it's training models in a way that is cost effective and flexible enough to fit a game production pipeline.
Very interesting! So it's all about computing power? But do you really need all 2x10^1600 different situations to have a "good" AI? I mean, a human does not do it this way either, I suppose... and a realistic AI should not always know the best way to play anyway, if you want it to be more human, for a shooter e.g.
The thing is that AI is not comparable to humans, so we cannot map what we do directly onto what AI can do. Humans have intuition, background knowledge and the ability to generalize almost anything almost instantly in every situation, which is far more efficient than any computer-mapped problem ever could be. Just think about it: if you are an algorithm, to discover whether there is any way at all to get from A to B on a grid (a game map, for example), you would need to look at all the possible tiles reachable from A and check for every one of them whether it reaches B, but humans can just take a look at the map and instantly recognize it. Of course we are not perfect, so our responses are only probable, but we are masters of giving "probably good enough" responses to every single problem, because our brains are a byproduct of millions of years of careful fine-tuning for survival in nature.
@diadetediotedio6918 "A -> B on a grid (a game map, for example) you would need to look at all the possible tiles that are reachable from A and check for every one of them if one reaches B, but humans can just take a look at the map and instantly recognize it." Humans are more efficient at that, yes. I think the reason is that we filter our knowledge. If we try to find a way from A to B, we most likely do not calculate EVERY possible solution, but rather say "it seems the last 3 possible ways in this direction were getting longer and longer, so maybe I stop calculating here and make an ASSUMPTION that the 4th possible way in this direction will be even longer." I guess making assumptions based on trends is a crucial factor of the human brain; it makes things a lot faster. And I think there are no notable AIs yet that can do this, because it sounds easy, but recognizing a bad trend might be hard across different problems. Making assumptions based on trends might also lead to mistakes, because the 4th way could also give you a shortcut, since the grid is not an open grid but rather a labyrinth.
@diadetediotedio6918 In fairness, an AI can do pathfinding way faster than a human ever would; I don't think that's a shortcoming of AI at all. I'd say the biggest advantage humans have is that they're very, very good at lumping a whole lot of "similar strategies" together without needing to calculate each minor variation from scratch. When evaluating the outcome of a specific strategy, the AI will usually determine it much, much faster than a human can. But if that strategy comes up just a little bit short, a human will start thinking of all the minor variations they could have made to fix whatever problem it had, and that's where the AI really struggles: there are way too many variations to actually check every single one, and knowing which ones to try and which to ignore is incredibly complicated. If you tried to make an AI do that kind of thing, it doesn't really have a good concept of what should be tried; it'll spend too much time on variations a human would easily tell are obviously worse, and it doesn't have a good understanding of what changing just one part of the strategy does to the outcome of the game other than by simulating the entire thing from scratch again. A human doesn't re-simulate the whole thing in their head when optimizing a strategy; they focus only on the parts of the game that would change significantly if everything else stayed the same, but it's difficult for an AI to visualize the game that way.
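For what it's worth, the exhaustive tile-by-tile reachability check described in this thread is just a breadth-first search, which is cheap for a machine even though it sounds tedious; a minimal sketch (grid encoding is illustrative):

```python
from collections import deque

def reachable(grid, start, goal):
    """Breadth-first search for a path on a grid.

    grid: list of equal-length strings, '#' marks a wall.
    start/goal: (row, col) tuples on open tiles.
    """
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False
```

This visits each tile at most once, so even large maps are answered quickly; the human advantage discussed above lies elsewhere, in pruning whole families of strategies at a glance.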
Funnily enough there is an academic competition to build AI that can play [a-totally-not-official-but-very-similar-clone-of] Blood Bowl. njustesen.github.io/botbowl/bot-bowl-v
Tbf, if you really wanted truly advanced AI, you just need an overarching Utility AI system to decide not actions but Behaviors, in the form of Behavior Trees based on Considerations, where at the leaves of the tree lies a series of Actions dictated by GOAP. Just make sure to multi-thread the calculations for navigation and a few of the other actions along the way to speed up decision making. Typically this is overkill for anything not trying to simulate situational intelligence rather than fake it. It's a lot of coding, validation of code, and verification of code; the workload can increase exponentially compared to just utilizing one AI system. It's not like gamers are unhappy with dumb AI, just unfair ones. Luckily this series of steps is guaranteed to fail at some point, so it covers the "fail smartly" checkbox for AI that feels good. It also leaves them open to outside manipulation through various factors, creating emergent behavior.
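The top "utility picks behaviors" layer of that stack can be sketched very compactly. Everything below is illustrative: the behavior names, scoring curves, and context fields are invented, and each selected behavior would stand in for a behavior tree whose leaves invoke GOAP plans, as described above.

```python
# Utility layer: score whole behaviors, not individual actions.
# ctx holds hypothetical world knowledge, e.g. normalized health
# and whether an enemy is currently visible (0.0 or 1.0).

def utility_flee(ctx):
    return 1.0 - ctx["health"]          # low health -> strong urge to flee

def utility_attack(ctx):
    return ctx["health"] * ctx["enemy_visible"]

def utility_patrol(ctx):
    return 0.2                          # weak default behavior

BEHAVIORS = {
    "flee": utility_flee,
    "attack": utility_attack,
    "patrol": utility_patrol,
}

def select_behavior(ctx):
    """Return the name of the highest-utility behavior for this frame."""
    return max(BEHAVIORS, key=lambda name: BEHAVIORS[name](ctx))
```

In a full system the returned behavior would tick its own behavior tree until the utility layer re-evaluates and switches, which is where the multi-threading of navigation and planning mentioned above would pay off.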
Let me know what topics that pop up in this video you'd like to see more of. I'm keen to do videos digging into some of these fundamental questions, but also dig into those projects in games like Overcooked, Street Fighter and Rocket League.
Thank you for making these videos, they are truly fascinating to me! This one's a smidge selfish, and I really wish more games used it, but: more G.O.A.P. please! It's the current #1 AI system contender so far for a game I'd love to make that is only in the planning stages.
The counterpart video you mention regarding the difficulties of making game-playing AIs sounds pretty interesting as well!
@agatasoda You're in luck, I have a big GOAP-related video coming later this year.
I feel like good AI is in contrast to a lot of modern game design best practices and how a lot of modern developers approach things... Good AI takes control away from the developer for the sake of the simulation, which can work fantastically, but I feel like a lot of modern games look at that as a negative: anything that can't be 100% predictable, for their curated bespoke experience, is treated either as a waste of time or as a glitch that needs to be solved.
I've been thinking quite a bit about this in terms of where we were back in the day and where we are now in technological limitations.
If you go back far enough, most games were essentially corridors or mazes with poor volume blocking and weak transparent textures pretending to be plants or rubble, etc. The really, really good ones could pull it off without making it feel like you were in a BSP maze, even though you were... most didn't pull this off lol.
A lot of things in older game design were created to provide an illusion of an experience, with the hope that one day you would be able to provide the experience itself and not have to bother with the illusion. It was an exciting time, watching technology overtake and remove these barriers and imagining the gameplay that would ensue as a result...
And it kind of stopped...
Nowadays, we can render kilometers upon kilometers of not just terrain and represented materials, but photorealistic ones even... And we can make it interactable. I mean, that would be the next step, right? That's where we were headed anyhow... But instead, it seems like the illusion philosophy never went away.
Rather than creating systems that actually are the thing the illusions were representing, we've decided to delve deeper into the details of the illusion: photorealistic graphics and prebaked set pieces over physics and virtual realization.
And players are used to that... Players don't really understand outside these systems anymore, especially if they grew up with the heavy-handed stuff, and you really can't expect them to, because games, game design, and game culture have relied on that familiarity for a lot of things. This is why peripherals really haven't evolved over 20 years, and if anything have gone backwards...
Once games needed to step in to compensate for limited peripherals to try new things, and once they got good enough at that to make it invisible to the player, as if they were actually fully in control... that was kind of all they needed, I guess...
But on the back end of that, you have systems that are guiding the player, and if those didn't step in, players would be miserable and helpless, especially nowadays, when never having been without these things has simply been part of their entire gaming experience.
And developers really do think this way, from everything I've gathered, in a lot of cases... especially the newer ones, and sadly the older ones who used to push these boundaries and philosophize about a holodeck future.
The most praise AI gets nowadays usually comes from very specific, often director-driven systems, like Alien: Isolation or Left 4 Dead, which are designed to provide a curated experience for sure. Now, in all fairness, both of those games hit more of the simulation aspect of gaming (I say that lightly for Left 4 Dead), but the idea that everything wasn't totally controlled wasn't necessarily out of hand; if anything, it could be a feature...
But there were a heavy number of systems put in place to make sure that chaos did not reign and the experience was still curated to a degree... and it was a singular system in a lot of cases, not a series of deeply interacting systems as you'd see in, like, a city builder or the Eastern European mod scene lol...
I feel terrible cuz I put this in a lot fewer words, and a lot better words... and then accidentally hit cancel. So apologies for the spam; I've rewritten this many, many times and it's not getting better lol.
A lot of these thoughts come from just general conversations I've heard at GDCs, and this kind of assumption of how things are supposed to be in game design. I always remember back when that wasn't how it was supposed to be, and I realize I'm old and it's been over 17 years, pretty much, since that forward momentum started slowing and stagnating...
But I think it's important to look at that as a thought experiment, if anything... when you look at how absolutely, utterly excited people were to just have cool stuff doing cool stuff because you could, and the experiences you could create with that... And I feel like we are getting further and further away from that being a possibility outside of the niche of some VR stuff, and that's just because the player base isn't looking for it; they say they want that, and people do want that, but...
Like, I see people throw a fit when a control default isn't what they think it should be, and then they still don't even go into the options to change it... and these are streamers usually lol, like, in public lol...
And one of the things I always thought was interesting: multiplayer is a thing... competition with other humans is a thing... I feel like humans could figure it out, you know, and of course they can, they did before lol. I really think the human factor is a problem with game design on a lot of other levels. I don't think it's the AI that's the biggest issue here; I think the developers are more of the problem in that regard, because they have to create the AI to curate the experience itself. It seems like it's not even a thought process to break from the best practice of "everything is fake and the goal is to make it look less fake", while also making sure the player is thoroughly encouraged and conditioned not to scratch the surface, so that it seems like it's their own choice not to look behind the curtain...
Again, I apologize for how obnoxiously long this was... But I do hope someone read it; I might come back and try to work it into some reasonable shape. You wouldn't believe how much text I cut out, and it doesn't really have a conclusion, because I had to pull that as well. But it bothers me that I feel like I was on a train going forward; it stopped and started doing other things, and 17 years later I'm realizing I was never going to get to that destination. We all got distracted or something, and we're happier for it, maybe? I'm just not, that's all, and maybe that's okay... but it really does bother me when I see a lot of the old visionaries come up with copium for their compromises and start regurgitating things that much less qualified people have said out loud...
Afaik AlphaStar didn't learn from any humans, in contrast to what you stated at 1:40. It only played against itself/other iterations, just like Google's chess AI. When it was done, of course, it played against pros to show its power. It benefitted from years of balance patching, but not from any input of players directly.
Edit: Newer articles often state that AlphaStar was 'fed' with information from humans, but if you go back to older information, the story sounds different.
For example, a video from the channel Google DeepMind, "AlphaStar results": at 0:01 it says "no human data with naive exploration and random initialisation", at 0:23 it 'overtakes' the pure imitation learning, later FSP and PFSP and so on. But maybe I'm interpreting the graph incorrectly.
In the article "AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning", Google states that the second generation (with newly implemented camera and APM limitations) "starts only with agents trained by supervised learning". In my understanding, that means that at first the AI knows nothing, then adapts or mutates depending on the output, i.e. win, loss or draw.
So in the end, I am left puzzled. You will probably have noticed from my vocabulary that I have no clue about AI or programming. I still think AlphaStar had no human input, but I am not sure anymore. I also did not read Google's paper about it, because of my lack of knowledge and determination ;-)
I'd say the hardest part for me so far has been making AI fail in a fun and credible way
I often say game AI has to be "Smartly dumb", and that is hard to do
Oooh 'smartly dumb'. I like that.
That is what I like in some games where they added adaptive AI that learns from the player to challenge them, but never exceeds the player's skill level. Tekken 8 has a pretty decent one: no matter the set difficulty, the AI will scale to you as the player each round, making it feel more like playing an actual human player.
Would you consider that piece of software to be an intricate element that interacts with the player instead? Does it have more variables than the player has access to? (Perfect knowledge of the map, knows every combo by heart, knows where the head is precisely?)
@@kuromiLayfe I don't think it learns per se, it just removes accessibility barriers. One easy way to do that is to wait between actions.
So you want a human AI like general intelligence?
I was working on an AI for a turn-based RPG I made that would factor in whatever weapons, spells, and team composition each enemy had when taking their turn. I also gave them random traits that were invisible to the player, such as "cowardly". This would make them behave in a super predictable manner for me, the developer, because I could follow a logic tree and know exactly what they were going to do, but it was hard for my friends to spot the patterns because of the sheer number of combinations. Definitely a lot to think about, but it was super fun. I scrapped the project for the time being, but I'll probably come back and rework that AI at some point.
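For anyone curious, the hidden-trait idea above can be sketched in a few lines. This is a hypothetical Python illustration, not the commenter's actual code: the trait names, state fields, and thresholds are all my own assumptions.

```python
# Hypothetical sketch: invisible personality traits bias an enemy's turn.
# Trait names, enemy fields, and thresholds are illustrative assumptions.

TRAIT_RULES = {
    # Cowardly enemies flee once they drop below half health.
    "cowardly": lambda e: "flee" if e["hp"] < e["max_hp"] * 0.5 else None,
    # Reckless enemies always press the attack.
    "reckless": lambda e: "attack",
    # Supportive enemies prioritise healing a hurt ally.
    "supportive": lambda e: "heal_ally" if e["ally_hurt"] else None,
}

def choose_action(enemy):
    """Deterministic for the developer (trait + state -> action), but hard
    for players to reverse-engineer because the trait itself is invisible."""
    rule = TRAIT_RULES.get(enemy.get("trait"))
    if rule is not None:
        action = rule(enemy)
        if action is not None:
            return action
    return "attack"  # default behaviour when no trait rule fires

coward = {"trait": "cowardly", "hp": 20, "max_hp": 100, "ally_hurt": False}
print(choose_action(coward))  # flee
```

Because the trait table is just data, adding more traits multiplies the observable combinations without complicating the core decision function.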
That is actually a very good idea; I wouldn't scrap it entirely. As a creator, I too understand how the concepts we envision might fall short when first tested by others, which can confuse or discourage us at first. But taking a step back and looking at the bigger picture, we are often closer than we think to something really enjoyable and amazing. For example, games are often more about helping the player than punishing or fighting against them. I say that because of the many "invisible" systems we have, like ones to prevent a player from falling, or to give the player the impression they barely survived an encounter because we tilted the enemy damage just enough so they wouldn't get insta-killed. Those are just a few examples, but the point I wanted to make is that maybe you could keep the entire system you envisioned and, instead of making it "invisible" or too subtle to the player, make it a core mechanic of the game. Heck, the Nemesis System from Shadow of Mordor thrived because of aspects similar to that one. So if you actually integrate this knowledge into the game and tell/train the player about it, even subtly, that could be a good direction to start refocusing this amazing idea of yours.
I've been mostly designing and modding for an existing franchise with very little effort put into the AI. So I decided one day to build/implement my own systems into it from the ground up as a learning experience and way to improve the game for others. FSMs, Behavior Tree, GOAP, Utility AI, etc.. I built a framework for them all. I find the challenging part isn't building the systems. In fact, they are all surprisingly easy to create. It's designing within them that is difficult. For example, even a simple behavior tree can break down (mostly in the 'this no longer makes sense and looks incredibly dumb in this situation' way). With my limited experience in mind, I can say the biggest challenge I come across is making what normally looks intelligent not look incredibly stupid when something unusual happens.
I have been working as a Combat/Game Designer with a heavy focus on Combat and AI for around 15 years at the point of writing, and I think this was an excellent video! I will for sure look at more videos on this channel!
I also wanted to echo something that was brought up in the video that I always talk about when covering this subject: people do not necessarily want smart AI, they want AI that makes themselves feel smart! The illusion of smart AI is far more important than them actually being smart, and if the Player has ample tools to predict and outsmart their enemies, that will make for a very good experience.
I also always like to say that making smart AI is super easy, but making a good AI that is fun to play against is super hard.
Prediction is very important. The AI has to behave in a way that the player can interact with.
For example, the Metal Gear series.
Reacting to sounds, or looking at a magazine, a box, etc., isn't the way an actual human would respond.
Yet this is preferable to an AI that would merely checkmate the player in any game sense.
With the focus on improvements and advancements to visuals, I feel like AI has long since taken a backseat; modern AI in games feels like it hasn't changed much since the 2000s.
I really do wish devs would dedicate some time to AI in the future, because I'm honestly getting put off modern games with how deadpan and lifeless the AI acts.
Oh boy this really touches on a rant that I'm waiting to be unleashed one day. I agree with you, for sure.
Maybe because the "life" of an NPC depends not only on AI but on content from animators, VO, and every other department.
Also, navigation/pathfinding in 2D or 3D space (see the Death Stranding episode) can still hit performance.
@@AIandGames Do it :0
@@Maecmpo This. The famous F.E.A.R. AI isn't actually smart in the sense that you could drop them on a random map and they would take cover and hide in good spots; we think they are great because they have so many voice lines reacting to what the player is doing that it fools us into thinking they are thinking too.
@@AIandGames I really look forward to the day that comes, because I'm the kind of guy who's spent years just observing game AI and its states (my favourite is Heroes of the Storm's AI, since multiple heroes use different AI states depending on the human players' chosen champions, and those states are quite noticeable).
I'm not asking devs for Skynet-level self-aware AI, but I'd at least like something that felt eerie like the AI from F.E.A.R. (despite being simple, F.E.A.R.'s AI actually gave me... fear lol). These days we still get AI that either runs into your line of fire, or hides behind a wall/pillar and hurls petty insults at you, and I'm pretty dang tired of seeing that AI type on repeat for 20 years now.
The biggest issue I had was lack of familiarity - while researching ML applications for a preprod game I had to work very hard (and was ultimately unsuccessful) in investigating and assuaging concerns about ML. We were all familiar with existing techniques for AI and in AAA environments so you need to find the places where ML is superior - triggering a voice line when the player enters a room is not improved by adding ML, but balancing a complicated army composition might be. Even in the cases where ML has some advantages you normally need to give something up (typically designer control) and that's compounded by our negativity bias. If I were to do it again I'd make a more structured experiment and agree upon targets beforehand, making sure to involve people who were sceptical about it.
This is something I explore when working with studios: trying to highlight how it impacts the production workflow, and what the pros and cons of the approach are. It's never as straightforward as it sounds.
And I can imagine ML models can be much worse than deterministic models, at a much higher compute cost. Even when the model is well trained and tweaked, you can't always predict when it will do something baffling, maybe even game-breaking.
Yes that's definitely true about the random aspect. Gamebreaking is bad, on top of that the last thing MS or Sony wants is a controversial AI.
Collecting observations for the agents might be expensive depending on your use case, but both CPUs and GPUs run neural nets very well - inference uses lots of the same operation in a loop, linear memory access, and very little branching. Still, even if it's efficient it's not going to compete with "if (hurt) { Play(hurtVoiceLine); }"
Do you have experience with generating game levels using deep learning? I am working on it and struggling
@@hassamkhalid3301 I've forayed into NN level generation once, I asked an artist to use a Unity terrain (basic heightmap) and manually place trees and vegetation. We trained a CycleGAN to go from the heightmap texture to a veg placement texture - this was done so that the artist could focus on making a level and wouldn't have to formalise the rules they used to place trees e.g. don't place trees on slopes. The important thing was representation - my original approach was to use a top down map and make single texel dots where the trees were. An expert I spoke to explained that GANs work much better when there's context around features and suggested replacing the tree dot map with a "closeness to nearest tree" map, which looked like circular gradients peaking at each tree.
What are you working on? And what are you trying to generate? The things that helped me the most was tailoring my representation to the NN I was using and limiting what I was generating.
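For reference, the "closeness to nearest tree" representation described above amounts to a distance field. A minimal Python sketch of the idea (brute-force; a real pipeline would use a fast distance transform, and the falloff value here is an arbitrary assumption):

```python
import math

def closeness_map(width, height, trees, falloff=8.0):
    """Return a [y][x] grid holding 1.0 at each tree position, fading
    linearly to 0.0 at `falloff` texels away. This gives the GAN spatial
    context around each tree instead of isolated single-texel dots."""
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            # distance to the nearest tree (brute force over all trees)
            d = min(math.hypot(x - tx, y - ty) for tx, ty in trees)
            row.append(max(0.0, 1.0 - d / falloff))
        grid.append(row)
    return grid

m = closeness_map(16, 16, trees=[(4, 4), (12, 10)])
# m[4][4] and m[10][12] peak at 1.0, with circular gradients around them.
```

The resulting texture looks exactly like the "circular gradients peaking at each tree" the expert suggested, and it can be inverted back to tree positions by taking local maxima.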
I tried to respond earlier with direct links to what I've done but it looks like UA-cam swallowed my comment - if it pops up later then thank Tommy! I'll try to reply after this one with routes to the other resources.
Ya know, one of the biggest gripes a lot of people have about a game like Breakpoint is the enemy AI. But if you look at a game like Phantom Pain, especially after a game like Breakpoint, it seems like heaven. That said, there's nothing about the artificial intelligence being smarter that makes it better. What I think people want more than outright tougher (as in smarter, more tactical) enemies, without realizing it, is a breadth of behavior. Even in Breakpoint's immediate predecessor, enemies had more behaviors than unaware/alert/killmode. In Wildlands you could catch some sleeping or doing pushups to pass the time, things like that. In Phantom Pain you had direct interactions that made them seem more intelligent, such as being able to hold them up. Sometimes they'd go for their gun; other times, you could tell them to kiss the floor and it was effectively a knockout until alarms went off. That said, I don't think developers give enough time or mind to those seemingly rare emergent moments that make a game shine for the people who discover them.
I cite Breakpoint because immediately as I played it, I got whiffs of other games like Red Dead Redemption 2 but none of the substance (bivouacs, rations, etc. being surface-level mechanics).
A lot of people talk about AI in terms of difficulty. I think the better, less frustrating option is AI presenting variety to allow for emergent gameplay.
Sorry, was too busy writing that to pay attention. I'm gonna rewind and watch now so I can see you address that exact thing. 😄
P.S.
I also had Starfield in mind as Bethesda seemed to catch a lot of crap for dumbing down enemies in the game, but I honestly think it was a good idea, especially for the space fights as those really are difficult until you level things up and would bottleneck most players until they stopped.
This sounds like a great idea. Immersive behaviors would go a long way for me. Sure, having super tight tactics would be cool depending on the NPC but not every NPC should even be that intelligent.
In most games the AI is idle, then tries to reach the player, then attacks.
This sounds very plausible. I was recently thinking about the ways in which the original Dungeon Keeper had much more interesting monster minions than its sequel, and that was the biggest difference: the original game had arguably overcomplex AI which led to a lot of subtle and special-case behaviour patterns. It was a sandbox for the developers as well, with everything being streamlined for the more tightly campaign-focused sequel. The sequel did a better job at providing set challenges and channeling player interaction with the game, but it also led to a less complete-feeling world.
Wow, we are actually on the same page. For my current project I'm investing a lot of time to make the AI feel personable and real! I was always disappointed with how fake AI in games was; it's like they didn't even try. So now I want to try to set a new standard!!
This is a great video from a technical perspective - the performance constraints placed on AI are definitely a large factor, e.g. instead of any kind of box/shape trace, Half Life 2 uses line traces for enemy vision. It's faster, but it also means you can hold a brick in just the right spot to block the trace and become invisible.
But design intent plays a significant part as well, because even if a programmer could design the smartest AI that always picked the optimal strategy, that would make for an awful gameplay experience. How do you distinguish between AI that has interpreted realistic inputs and come up with a clever solution, from one that is simply 'cheating'? As a player, you can't - and even if you could it still wouldn't be a fun experience.
The example I tend to think of (I'm sure others can think of better scenarios) is a stealth game where you're chased by a bunch of enemies into a room with no obvious way out. You move some crates and find a hidden vent, then pat yourself on the back for being smart. You start going through the vent, and just as you're nearing the exit, an enemy pops up and lobs a grenade at you. In a multiplayer game, if a player did that then they might have known there was an alternate path you'd try to use, so they moved to the vent and timed the grenade just right. You'd (maybe) congratulate them for outsmarting you. But when you know that it's an AI, your first thought isn't that the AI followed that thought process and outplayed you; you're going to think the game was just tracking you through the wall, or there was a scripted event that threw the grenade. Even if you did think it was a very clever AI, what's the fun in this situation? Most games wouldn't work if every enemy was as smart as the best possible player of the game; we rely on some amount of stupidity for the power fantasy to work.
I plan to return to this question from a design perspective in a future video. It's something that comes up *a lot* in my consultancy work I do with game studios.
Your example honestly just seems like the developers failed to give the players appropriate tools to make the game fun.
Sure smart AI might not fit an already existing game but if you can design a game around the fact the AI acts smartly, there's no need to be on a level playing field with the computer after all.
@@theresnothinghere1745 The point of the example is that sufficiently smart AI is going to be imperceptible from the AI cheating, and therefore not engaging to play against.
@@daveface69 But even that depends entirely on how the game presents the AI's actions.
If a game shows the AI's working process, it seems much less like cheating. For example, if the stealth game were an Arkham game, you'd overhear the enemy working through their plans on the radio.
That makes it seem much more reasonable when they do arrive at the conclusion.
I'm currently making a fully-fledged 3D soulslike.
My current AI-script boils down to:
"If you see the player, run towards him, maybe circlestrafe a bit, and attack"
It's not exactly complicated, but it works really well actually :d
Watch out! You'll have to solve the same problem as DS: when the player runs back, after an arbitrary threshold the enemies lose aggro and return to their positions, turning their backs to the player most of the time. That opens up a lot of cheese situations.
@@pixel_igig Just make it so that once the Player has been targeted, the enemy will scour every stone, every sea, every land just to find him. They shall know no weariness. They shall chase the Player to the ends of the earth!
did you faithfully recreate the Dark Souls 1 circle-strafe-off where you and the target just circle strafe for 60 years while you wait to parry them
what did you do that was new?
@@jmanners And have them activate other enemies along the way to really make it FUN (tm)!!!
I had a pretty bold but straightforward concept: player actions would create portals that, after a certain gameplay point, would open and release an invasive hive-type NPC faction to colonise the area around the portals.
The problem: The world was procedurally generated and the player could change almost every part of it. I spent weeks trying to parse how to interpret any conceivable arbitrary collection of positions into a set of "rooms" that the NPCs could assign functions to (barracks, farm, storage, etc.). The functions would then request an NPC be present so they could operate, essentially as a supervisor while the room itself ticked over. Ultimately, I realised that I could just group the positions into the largest contiguous cuboid available, remove the points it contained from the set, and repeat until the size of the cuboid was too small to be useful.
Weeks of staring at the problem, and I finally realised I could just treat the points like a series of cubes, in a voxel game. 🤦
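The greedy decomposition described above can be sketched roughly as follows. This is a 2D, brute-force illustration of the idea only (the commenter's real problem was 3D cuboids, and the function names and minimum-size cutoff are my own assumptions): grow an axis-aligned box from a seed cell while it stays fully occupied, carve out the biggest one found, and repeat.

```python
def grow_box(cells, seed):
    """Expand an axis-aligned box from `seed` one face at a time,
    as long as every cell inside the box is occupied."""
    (x0, y0), (x1, y1) = seed, seed
    grew = True
    while grew:
        grew = False
        # Try extending each of the four faces by one cell.
        for dx0, dy0, dx1, dy1 in ((-1, 0, 0, 0), (0, -1, 0, 0),
                                   (0, 0, 1, 0), (0, 0, 0, 1)):
            n = (x0 + dx0, y0 + dy0, x1 + dx1, y1 + dy1)
            box = {(x, y) for x in range(n[0], n[2] + 1)
                          for y in range(n[1], n[3] + 1)}
            if box <= cells:  # extension is still fully occupied
                x0, y0, x1, y1 = n
                grew = True
    return (x0, y0, x1, y1)

def carve_rooms(cells, min_area=4):
    """Repeatedly remove the largest full box until leftovers are too small."""
    cells, rooms = set(cells), []
    while cells:
        best = max((grow_box(cells, seed) for seed in cells),
                   key=lambda b: (b[2] - b[0] + 1) * (b[3] - b[1] + 1))
        x0, y0, x1, y1 = best
        if (x1 - x0 + 1) * (y1 - y0 + 1) < min_area:
            break
        rooms.append(best)
        cells -= {(x, y) for x in range(x0, x1 + 1) for y in range(y0, y1 + 1)}
    return rooms
```

The greedy growth won't always find the truly optimal box for awkward concave shapes, but it matches the "largest contiguous cuboid, remove, repeat" loop the comment describes, and extending it to 3D just means growing six faces instead of four.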
Great video!
Another issue is the need for game designers to be AI-literate so that
a) descriptions of the features that will be handled by AI can be as clear as possible and the amount of interpretation by the programmers minimal,
b) the designers can imagine new ways of exploiting AI's possibilities.
AI programming in complex games is not something that occurs once the design is done but a constant conversation toward a unified vision. Without a shared language, this conversation cannot occur.
Because no one actually wants "good" AI, they want "just barely good enough" AI. (Which is why ML-controlled enemies would be a nightmare unless crippled.) I remember back in the day hearing from a couple of different game devs that they ended up having to make their AI worse, because it proved too difficult for people in play-testing.
Some of the hardest games of the past barely had any AI beyond repetitive movement patterns. A truly intelligent AI would be undefeated: it would simply charge toward you with all the enemies together, instead of letting you take them down one by one or a few at a time.
A video on the AI of Black & White would be awesome.
I'd love for you to cover the subject of personalised opponent AI, similar to Drivatars.
One of the biggest hurdles in fighting games is finding someone near your skill level. I hope one day we can have a personalised AI opponent that learns how you fight, punishes your bad habits, and always stays just that TINY bit better than you so you can grow.
Edit: typo.
man you should really check out the GDC talk about designing the AI for Killer Instinct if you haven't ua-cam.com/video/9yydYjQ1GLg/v-deo.htmlsi=gUBYqqBqZzrbiCYE&t=1
edit: I know this isn't exactly what you described though - in Killer Instinct the AI mimics the trainer's (yours, the player's) behaviour, but it doesn't seem to be able to get any better than the person training it
I think Tekken 8 is accomplishing something similar to that with Ghosts - AI opponents who try to copy any player's fighting style, including your own. Iirc, you can train up and then play against your own Ghost, and it'll be somewhat like playing against yourself. Though, for a better challenge, I believe you can challenge other people's Ghosts as well.
That sounds good on paper, but it means you're fighting a constant uphill battle.
The problem is, let's say you learn something new: there's no sign you learned it, as the AI still kicks your butt, staying slightly better than you indefinitely. If you apply a cap, everyone under it is punished with losing while everyone above it is rewarded with free wins, often the opposite of what people want.
You're better off with AIs at set difficulty levels, since then there's a way to measure improvement, or with an outright fair AI, or with settings you can alter manually. Compare this to an AI where the only way to actually improve is to exploit it by learning skills that do nothing against human players.
12:00 - I'm pretty sure you know this, but for others that don't: the purpose of AlphaStar wasn't ultimately to make an ML agent that beats StarCraft. The reason StarCraft and others are a good learning experience (ha) for ML researchers is that there are certain problems where we really don't know which intermediate states lead to a good outcome, but we do know a good outcome when we see it. For folding proteins, we don't necessarily know which intermediate states are the correct ones, but we DO know the resultant energy of the final folded state. For two chess boards, you might not know which one is better, but you definitely know if someone is checkmated. And for 11:00, one other detail is that machine learning doesn't always give appropriate control over the behavior of the AI. You might want the AI to sometimes be less aggressive, or less accurate, or to run away. With a classical system you get that fine-grained ability to decide what the bots do; not as much with machine learning. It's also easy for agents to exploit weird quirks of the game if they're trained with reinforcement learning.
I made a video many years ago about evolution video games, where characters change over time to be best suited to the game environment. It doesn't use deep learning, but it has some results.
@1:10 - Yes, please, Dr. Thompson!
@13:23 - Fighting (Both 1 vs. 1, and assist character(s) not playable by a human (if any)), RPGs (Both enemies / bosses, and supporting character(s) not playable by a human), Beat-'em-ups (Enemies / Bosses), and Sports (Both 1 vs. 1, and team-oriented (where teammate(s) not playable by a human)).
I only recently began working on game development in my spare time, but it’s given me a newfound appreciation for every little detail in old and new games. I studied software engineering and would love to work at a game studio one day, but I’m not sure how realistic that is as an early career path.
A lot of game devs start out in other spaces. There's not really any traditional route into the industry. I studied computer science and AI and then learned game development on my own. But then my career makes no sense anyway. 🤣
@@AIandGames Yeah, and you’ve got a solid UA-cam channel to boot! Honestly it’s been difficult finding anything programming related in my area, let alone game studios. I can certainly dream, though!
As someone who managed to jump into (educational) games via a recruiter for my first job out of college, then get into AAA games as my second job, I would say, it is realistic, but you need to _actually_ know how to code, how a computer works under the hood, and how those two things intertwine. For example:
What are vectors, how do you use them, _why_ would you use them?
Do you understand the _concepts_ of assembly well enough that, given some instruction explanations, you could write a small snippet when necessary? (I've never done it in 5+ years, but you should know how the hardware works.)
Can you write a function in an understandable way? An optimized way? Can you write some timing tests to prove which one is faster?
Can you write a linked-list from scratch? You'll probably never _use_ a linked-list, but if you don't know how to work with pointers (and maybe how you might _abuse_ them when needed), you're gonna have a hell-of-a-time working with the ungodly mess that is "engine code" depending on what department you want to work in.
Ultimately, you need to be fully comfortable in C++. Other low-level languages (C, Rust, Zig, Nim, ...) would also work, as long as you're comfortable working at a level just above the hardware. Since you quite possibly, may be required to go down to that level when something (a previous programmer's code, an external library, or even engine code) doesn't _quite_ get the job done fast enough.
Having a broad knowledge of data structures and their use-cases is useful for making sure you don't write (usually poorly) something that already exists. And the same applies to algorithms. You may not be allowed to _use_ the C++ standard library algorithms, but knowing what they do and why each of them are there, will help you think about data transformation in a clearer way (which you can use to document how your optimized mess of a for-loop originally worked before mangling it).
Anyway, sorry for the wall of text, but I hope this helps guide you (or anyone else reading) on what should help if you want to pursue working at a big studio! Or just use it as a guide to become a better indie/hobby dev! Either way, I hope you succeed and have fun in whatever future software related endeavors you run into!
The chief problem is not whether we can build good AI; it's that we can't really define what good AI is. Or we can't come to a consensus. If the goal is simple, like being really competitive and good at defeating the player, then that is actually fairly easy to achieve: give the AI a version of an aimbot in a shooter and it will kill the player almost every single time. But we don't want that, because it's not fun. Most players don't necessarily want a very challenging AI, as if they were competing against real people; they want something more casual, less intense. If good AI is one that is good at competing with the player, then you might be seeing games where the best you can hope for is a 50:50 win rate against the AI, and if you're not great at the game maybe you are losing 90% of the time. That game will instantly stop being fun. So I think AI is often intentionally bad, because it's fun for the player to feel like they are great and can win most of the time.
If good AI means replicating human behaviour, then again, I suspect most of us won't like it. Mainly because we like the AI to be predictable, so we feel like we are learning over time how to defeat it. If it's very human-like, then either it will be a bit random in its behaviour, and we end up raging against the 'RNG', or it gets better and better over time, and we end up finding it too challenging. Besides, human players do all sorts of stupid things that, if an AI actually replicated them, would instantly make us complain the AI is stupid. E.g. think of all the times you were playing a game and accidentally fell off a cliff, forgot to activate a skill at the right time, or just screwed up the input on your controller. If you saw the AI do that, you wouldn't think, 'wow, that's just like a real player'. You'd think, 'that is a stupid AI'.
Even something as simple as pathfinding is not straightforward. If you want the AI to just get from A to B fastest, it would do what some human players do: jump over obstacles, sprint everywhere, etc. But in a game world we would find that weird and think the AI is behaving unrealistically. We want the AI to walk around predictably, respecting all the roads and social conventions, even though we as players often do not respect them.
So at the end of the day the developers have to produce an AI that is not too efficient, not too good, follow rules that we as players have no intention of following, and make us feel like we are great at the game. This is for the mass market anyway where the money is. The only solution I can see? A somewhat 'dumb' AI.
Thank you, Dr. T.
They did not cover this at my Uni. I think my Comp Sci department was more of a mobile app dev mill.
What I recall is that Half-Life was on a completely different level when it came out; it was stunning to encounter the teamwork of enemy soldiers. I think F.E.A.R. also made a step forward there. And then there was Operation Flashpoint, where you finally became both hunter and hunted...
Simply put, you used to be able to just hide out of sight and they would lose track of you, and suddenly in those games you'd hide and they'd simply toss a grenade in your rough direction. That was not something you knew or expected 😂
This is the first video of yours I've watched and I quite enjoyed it. You mentioned the 8 actor limit in the section on spec ops the line, and that reminded me of a topic of discussion I've had with several of my friends over the last year. I believe the next major optional hardware component we're going to see in PCs is an Intelligence Card. Very similar to a video card, but specifically for offloading AI or concurrent processing specifically designed with much better cooling systems. Have you ever done a video on this idea?
Thanks for watching, and taking the time to comment.
So, we're already seeing this idea of AI being offloaded. Nvidia's GPUs now carry Tensor cores for machine learning inferencing, and while they use it predominantly for DLSS, it's also why GPUs became so expensive for a few years prior to the RTX 40 series being released - AI and Crypto people using it to offload algorithms onto them. It's also now quite easy to train a deep learning model using your local GPU if you can't afford to pay for cloud compute.
The big change that is happening is that these intelligence cards, as you call them, are becoming part of the main chipset. Intel's new processors are being built with similar tensor-style cores, and the Intel Xeons are proving very popular with large-scale data centres.
I've only talked about this when covering Nvidia's DLSS tech.
The most recent being when Nvidia invited me to talk about DLSS3:
ua-cam.com/video/M3Lf0XpgWSc/v-deo.html
But outside of this, it's not a topic I've covered in a lot of detail. It *is* something I could do in the future, though. Thanks for the suggestion.
Currently AMD has their Neural network hardware (NPU) in their laptop CPUs and Intel has theirs in some of the Xeons, so we might see evolutions of them in their next/future desktop CPU releases.
However, because neural networks are so ever-changing, and it takes years to develop processing chips, by the time a neural network chip or "neural processing unit" is developed, the software landscape of neural networks may have changed so much that the NPU is rendered useless (some AI chip companies have failed because of this).
With that in mind, we might see a mix of application-specific (ASIC) versions of NPUs for general NN work, and FPGA-like (programmable hardware) chips/chiplets for more flexible, adaptable workloads.
Both AMD and Intel have previously acquired major FPGA companies (Xilinx and Altera) and both are in the space of having chiplet design on their CPUs, so we might see from both of them having NPU and FPGA chiplets in their CPU package.
Nvidia has their Tensor cores in their GPUs, so it's only a matter of time (I guess) before both AMD and Intel put their neural network or matrix accelerators in their GPUs (AMD already has a chiplet design on GPUs as well, and future GPU releases are rumored to be even more divided).
In short, yes, we are in the beginning of a very interesting hardware and software development and "revolution".
@AIandGames I found this comment very interesting, but as someone who is primarily into games and not hardware/programming and doesn't know much about that, a question. You say these built-in 'AI cards', if we'll call them that just to keep the terms close to 'graphics cards', might primarily be used for DLSS, which I only really understand as 'the tech that makes graphics fancier without demanding quite as much horsepower as before', and that makes me wonder.
If these 'AI chips' are being made in order to off-load DLSS stuff from the main 'graphics cards', is there any way they could do BOTH DLSS and make AI take more interesting decisions? Or would it have to be a trade-off the devs make themselves, at which point 95% of them will choose fancier graphics, because that's how you attract more customers?
I think "strategy" games tend to have their own issues, over and above the issues you have outlined here. "Strategy" games (like AoE2 or Civilization) often see the AI managing dozens (or hundreds) of entities and tens (or hundreds) of different systems. It is very challenging to build an AI that understands and interacts with all of those systems, let alone one that does so well.
With state machines, you also have issues with how you define behavior. Developers will typically break complex AIs down into subsystems. This both mimics the way these games are developed (as each subsystem is designed and tuned) and makes it easier to come up with reasonable behavior trees. However, this also creates a "fire wall" of sorts between different AI subsystems.
For example, many Civ games (and Civ-likes) will have a system that decides independently for each unit how to move and attack. This makes sense, as each unit must move and attack, and there are often small positioning issues that matter a lot. However, there is typically also a high-level AI that decides who it should attack and who it should make friends with. This leads to issues when the high-level AI sees that it has 50 military units to the enemy's 20, but doesn't realize that all 50 of its units are in the wrong position.
You can design behavior flags and modifiers to deal with most of these common scenarios, but it is difficult to manage all the way down. That is why you see, in AoE2, their AI has mostly been improved by giving the AI god-like micro, even though the AI still frequently chooses a bad unit composition. One problem is easier to solve using a simple set of rules than the other (as optimal unit compositions change significantly depending on the meta, the matchup, what your opponent is doing, etc...).
This is also why AI struggles to manage "amphibious" assaults in most every strategy game, as amphibious attacks require coordination between land units (the ones doing the attacking) and ships (the ones doing the transporting).
As you mention, a big old neural net can fix these issues, but at the cost of training time and performance.
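The "fire wall" between the strategic layer and the unit layer described above can be sketched in a few lines. Everything here (names, the 2:1 attack threshold, the distance cutoff) is invented for illustration, not taken from any specific game:

```python
from dataclasses import dataclass

@dataclass
class Unit:
    x: float
    y: float

def should_attack_naive(my_units, enemy_units):
    """High-level AI that only compares army sizes -- the 'fire wall' problem:
    it never asks the unit layer where those armies actually are."""
    return len(my_units) > 2 * len(enemy_units)

def should_attack_positional(my_units, enemy_units, target, max_dist=10.0):
    """Same decision, but it also checks how many units are actually close
    enough to the target to participate in the attack."""
    tx, ty = target
    in_position = [u for u in my_units
                   if (u.x - tx) ** 2 + (u.y - ty) ** 2 <= max_dist ** 2]
    return len(in_position) > 2 * len(enemy_units)

# 50 units, but all of them far away from the enemy base at (0, 0)
army = [Unit(100.0, 100.0) for _ in range(50)]
enemy = [Unit(0.0, 0.0) for _ in range(20)]
print(should_attack_naive(army, enemy))                   # True: "50 > 40, attack!"
print(should_attack_positional(army, enemy, (0.0, 0.0)))  # False: nobody is in position
```

The fix looks trivial here, but in a real 4X every such cross-layer query is another coupling between subsystems that were deliberately built apart, which is exactly why these bugs persist.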
Q: "Why is It Difficult to Make Good AI for Games?"
A: "yes"
Terrible answer.
Accurate answer which meets all stated requirements.
10/10
My main issue with AI is when it follows different rules than the player in 4X and racing games, because that leads to very non-human gameplay that doesn't scale well with the player.
With 4X games for example, it's very much about snowballing and reaping the rewards of earlier decisions. As most AIs don't really play well, and don't follow the rules, they become completely useless by the end. At the same time, they are often way too dominant in the early game, as they need to compensate for the late game. It is significantly harder to create an AI that has to collect resources, wage war, and do everything else as the player does. But if someone did manage it, the result would be the greatest 4X game of all time, as it would be fair from beginning to end.
One of the biggest challenges you mentioned was predicting player behavior. This is a highly complex concept that would have to account for numerous unknown variables. Would it help to reduce those variables to do something like predict the player's destination, but not what route they'll take to get there?
Making intelligent assumptions goes a long way. If you look at games like Left 4 Dead and BioShock Infinite (which I've made videos about previously), they assume players will play the game 'properly' and build the experience systems around that assumption. So for example in BioShock, Elizabeth will always keep in proximity of the player, but she knows the path you're supposed to be taking to the next objective, and prioritises that where possible.
Sounds like you're well on your way to understanding what it takes to start building software that models some of what you've described. Simples! (I have recently excoriated posters for using the term "simples")
Left4Dead is exactly my inspiration! I think I am on the right track, because an intelligent assumption you can make about someone playing a survival game is that they're going to run out of resources sometime soon and need to engage with a lootable area or an enemy, feeding the director information needed to task additional units and increase the intensity
Left 4 Dead is a really good example as well given everything the Director AI does reinforces the underlying design rules of the game. It tries to force players to play the game properly and punishes them when they deviate from that remit.
I always see FEAR as an example of great AI, but during my playthrough, they just died way too fast to really see much in the way of "clever" tactics. I'd walk up to a group, turn on bullet time and they'd be dead in like a handful of shots.
I think the most I've seen them do is maybe go around a corner and surprise me, but even then, that might've just been coincidental as I was basically walking around in circles.
The AI in FEAR just didn't really strike me as any more challenging compared to any other FPS.
Now replay the game but don't use bullet time
Randomly wanted to learn about AI and this was uploaded 12 minutes ago lmao
We got you buddy.
10:30 An example of "perfect" AI in the wild is Carrier Command 2. That AI is tuned to do its job of capturing islands and hunting the player at maximum efficiency.
However, it cannot do much inference, and its area of consideration is quite reduced. This is the only reason the player can beat the carrier: one can sit outside the AI's consideration range, and as long as one doesn't become distracted capturing islands, the enemy carrier can be destroyed.
However, if one strays inside the AI's evaluation range, they will be rapidly and efficiently dispatched with all the resources available to the AI, including all manner of simultaneous actions.
Consider this: just as graphics settings are often fully exposed so that end users can tune their experience based on performance trade-offs, perhaps AI tweaks should be exposed to offer something similar. Have a weak machine, or want the AI challenge to be low? Reduce the AI ticks and scope. Have a beefy machine and want to be destroyed? Set those sliders to maximum! In a way, this is similar to what happened to sound; sound in video games has become terrible. But roll back 20+ years and you had all manner of 3D sound placement options based on hardware and software, as well as sample rate and bit depth. Making everything simple for games consoles has probably contributed to our current scenario.
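Exposed AI settings in the style of a graphics menu might look something like the sketch below. Every knob, value, and preset name here is hypothetical, just to make the idea concrete:

```python
from dataclasses import dataclass

@dataclass
class AISettings:
    # Hypothetical knobs, in the spirit of a graphics settings menu
    tick_rate_hz: int = 10       # how often each agent re-plans
    sensor_range_m: float = 30.0 # how far agents can "see"
    max_active_agents: int = 8   # budget of fully-simulated agents

    def scale(self, preset: str) -> "AISettings":
        """Return a copy scaled by a named preset, like a graphics quality preset."""
        factors = {"low": 0.5, "medium": 1.0, "ultra": 2.0}
        f = factors[preset]
        return AISettings(
            tick_rate_hz=max(1, int(self.tick_rate_hz * f)),
            sensor_range_m=self.sensor_range_m * f,
            max_active_agents=max(1, int(self.max_active_agents * f)),
        )

print(AISettings().scale("ultra"))  # twice the ticks, sensor range, and agent budget
```

The catch, as the comment implies, is that these knobs change difficulty as well as performance, so a studio shipping them would effectively be exposing its difficulty tuning to the player.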
Some of the biggest problems I had were in regards to memory and performance management.
It is very difficult to handle pathfinding and independent "personalities" and decisions for AI in an environment such as an open-world sandbox, especially if multiple players can interact with them from different areas. They have to be somewhat persistent and aware of complex game-state information.
This can be compounded by factors like giving the AI families or pets, procedural animations, "emotions", and talk/chatter.
Everyone wants AI to be generalised and sentient. People have forgotten that you can make something which runs on rails and is relatively primitive in how it actually works, yet is still extremely effective. Because they don't know that, they try to design something hugely complex and expensive, and it predictably never gets off the ground because it is too complex to implement, or at least implement fully.
Performance really is the hardest part here; I speak as an AI student.
Can you explain what you mean by "complex but tightly defined problem spaces"? The idea of something being complex but also tightly defined feels at odds in my head. What aspects of a problem space would imply its complexity rather than it being loosely defined, and vice versa for tightly defined vs simple? (Probably a bit of a broad question, sorry)
So racing is a good example. The AI agent (i.e. the bot) only has to race on a track, and maybe consider avoiding other racers (my video on GT Sophy explains how they focussed on the 'etiquette' of AI racers). But the task of controlling a race car as a bot is very complex: you have a myriad of factors (speed, acceleration, current heading, race ribbon, nearby cars, physics etc.) to consider when making any action in a given frame. However, once you've built a system that can mitigate those factors sufficiently, it will arguably be able to race the majority of new tracks you provide for it, because racing doesn't change all that drastically from track to track. It's also easier to make it adjust to different difficulty levels, even if (like in Forza) you just mess with the car physics slightly.
Companies like EA have used ML to train AI to fly vehicles in Battlefield during testing because that was less work than trying to write a bot to do it successfully.
Compare this with, say, playing Civilization VI. That problem space is very complex, but it's also incredibly broad. There's a lot of different facets of gameplay to consider (territory, combat, construction, trade, upgrading etc.). Trying to build an AI that is flexible, adjusts to different permutations of a Civ VI map, and can also successfully predict future outcomes such that it can make good actions now to win the game 8 hours later is a huge undertaking. Is it possible? Yes. Is it going to be scalable/adaptable/cost efficient... I'd argue no.
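The racing case can be made concrete with a toy per-frame decision: lots of continuous inputs, but one tightly defined objective, namely closing the angle to the next waypoint. This is a deliberate simplification, not how any particular game's bots work:

```python
import math

def steer_to_waypoint(car_x, car_y, heading_rad, wp_x, wp_y, max_turn=0.1):
    """One frame of a minimal racing bot: compute the heading error to the
    next waypoint and return a clamped steering correction."""
    desired = math.atan2(wp_y - car_y, wp_x - car_x)
    # wrap the error into [-pi, pi] so the car turns the short way round
    error = (desired - heading_rad + math.pi) % (2 * math.pi) - math.pi
    return max(-max_turn, min(max_turn, error))

# Car at the origin facing +x, waypoint up and to the right -> small left turn
print(steer_to_waypoint(0.0, 0.0, 0.0, 10.0, 1.0))
```

A real bot layers in speed control, the racing line, and opponent avoidance, but each of those is still a tightly defined sub-problem, which is what makes the whole task tractable compared with something like Civ.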
@AIandGames ah ok, I think I understand now. So the complexity of the space is about how many different things you take as inputs (Forza and Civ both having a lot of different things to consider), and tightly defined means how you measure success isn't very ambiguous: it's pretty easy to tell how a driver avatar is doing in a race, and less easy to tell how a civilisation is doing in Civ, since there are so many more paths to go down and ways to win, not to mention the strategies available to you change from map to map depending on the terrain and resources.
And actually you can relate this back to the AlphaStar example, if we imagine for a second that our problem space is not just the version of the game we're training on, but all other possible versions of the meta, suddenly the problem space is much more loosely defined since there's more uncertainty about how effective a strategy might be, so any model we train on that space is going to have a much harder time learning, but if we reduce the problem space to one single version, a lot of that ambiguity disappears
Having played 7 Days to Die from alpha 13 to the present, one thing I noticed is that the game was more fun when the zombies were dumb in the early days. As the alpha versions went on and the zombies' intelligence increased (i.e. calculating the path of least resistance by checking every single block's health), the zombies just became more predictable, to the point that now you can direct them exactly where you want. Making the game less fun.
Just found your channel and that was extremely informative and interesting. I've always wondered how AI in games ticks, even with the most simple of functions; it's honestly quite incredible when you think it's all just a bunch of symbols. A shame more priority is given to graphics than to giving the AI time to cook. Hopefully one day we'll see a shift. Just got another sub, keep up the good work!
I was a bit uncertain about what you were saying with regards to AlphaStar needing lots of replays to train from, making its approach a big problem for shipping with a title (because it relied on years of gameplay data). I was uncertain because I thought it was a purely reinforcement-based approach to the training, similar to how OpenAI Five was trained for DOTA 2, but it turns out AlphaStar was initially seeded with bots trained under supervision of replay data! The second part, in which they trained the model using reinforcement learning, is what really made the bots shine. But because we know OpenAI Five was trained in a purely reinforcement fashion, I think this negates the point somewhat, and one in principle could be trained before shipping a game; for all the other reasons mentioned (cost, compute, complexity, etc.) it really isn't worth it, though. Either way, great video!
OpenAI Five is largely misleading due to the reduction of the problem space. It relies on in-game API data and is designed for a handful of playable characters. Plus it cost tens of thousands of dollars to train for each hero. Scale that up for every hero, add the need to re-train it after each balance change, and it's simply unfeasible.
Is it cool? Yes.
Impressive? Absolutely.
Practical? Hell no.
@AIandGames This was exactly my point: completely impractical, but it doesn't require the user data from high-level players.
Also, access to the API is something you would have whilst developing an in-game AI anyway, so it's not exactly a downside.
But don't misunderstand me, my comment was meant to be:
A) I didn't realise AlphaStar used any supervised learning
B) Reinforcement learning negates any points about requiring player data
and
C) I agree, it's still not a practical method due to above mentioned reasons (cost, compute, complexity, etc)
We're on the same page! My bad. 👍
The main difficulty in making good AI in games is that you don't actually want too good/smart an AI, because it would be terribly unfun for players to be constantly defeated. It has to be well designed more than just smart.
Players don't enjoy being outsmarted. However that's another debate...
Players do enjoy that, because losing can be fun.
Entitled pricks who want to brag dislike it. The fact that the market doesn't shift toward better AI says a lot about the kind of people that are playing games. And reading gaming forums confirms that.
Good AI is easy. Getting players to accept it is hard.
An example is Halo. The AI had a simple choice when a grenade was thrown: step left or right. It was a simple baked-in action.
Yet people heralded it as the second coming of Christ. Players wouldn't know good AI if it bit them on the ass, and if it perfectly resembles human intelligence, they reject it as 'bad'.
I would make a combination: since the user can choose the difficulty they want to play on, that's the base starting point from which the AI can learn while the player plays. What I mean is, every GPU these days has tensor cores, which could be used to train the AI in real time for that specific player on their computer, making the game more fun and challenging at the same time. Of course you would need to add some limitation so as not to extend the game to infinity, and let the player win the campaign/skirmish :D
I think one of the more challenging things is definitely balance. I remember as a kid playing Jimmy White's Cueball on the Game Boy and hating it. If you missed a single shot, it was game over. The AI opponent could not miss. No matter how you snookered them, it was essentially guaranteed they would pot something every single shot. It could have been a decent handheld snooker game otherwise, but the balance was way, way off.
10:45 The first thing I thought of was CoD on the highest difficulty, where they know where you are, have eagle vision, and land headshots despite recoil.
Although there's regularly unknowing players that publish stupid statements about an AI in a game to be too dumb, because they won, they wouldn't like the opposite. The point that can't be overstated is that no player really wants an amazing AI enemy, because they wouldn't stand a chance. Imagine an average warehouse scenario with some enemies who are well trained cooperating soldiers. Make a move, breathe too loud, and you're toast before you can react. And you can't easily train an AI to a vague target like "try to find and eliminate the player, but only so hard, that you only win, if the player makes more than 100 mistakes, and do it slowly". So convincingly dampening down a well trained AI to a below average player (or even do this for five game difficulty levels ... with the highest still being an intentionally dumb AI) is an interesting challenge by itself. If not covered yet, certainly a topic for a video, I'd say.
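The "convincingly dampening down a capable AI" idea above can be sketched as knobs layered on top of a decision-maker that would otherwise always win. All the numbers and names below are invented for illustration:

```python
import random

# Hypothetical difficulty table: the underlying AI always makes the "right"
# decision; these knobs decide how much of that competence reaches the player.
DIFFICULTY = {
    "easy":   {"reaction_s": 1.2, "hit_chance": 0.3},
    "normal": {"reaction_s": 0.6, "hit_chance": 0.5},
    "hard":   {"reaction_s": 0.2, "hit_chance": 0.8},
}

def resolve_shot(difficulty, time_player_visible_s, rng=random):
    """The agent only gets to fire if the player stayed visible longer than
    its artificial reaction delay, and even then it sometimes misses on
    purpose according to the difficulty's hit chance."""
    knobs = DIFFICULTY[difficulty]
    if time_player_visible_s < knobs["reaction_s"]:
        return "no_shot"
    return "hit" if rng.random() < knobs["hit_chance"] else "miss"

rng = random.Random(42)
print([resolve_shot("easy", 2.0, rng) for _ in range(5)])
```

Tuning several of these dampers at once, so the AI feels fallible rather than rigged, is exactly the "interesting challenge by itself" the comment describes.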
Hobbyist here!
For an AI class final project, I used an evolutionary algorithm to try to "learn" a couple of simple games (rock-paper-scissors, prisoner's dilemma, etc.). Ultimately it didn't work, but in an interesting way!
In an evolutionary algorithm, you optimize a pool of candidate solutions to maximize a fitness function. In short, you take the best of each generation, run until it converges, etc. The issue I ran into is that I was comparing candidates against each other, so the fitness function was not constant. It immediately "solved" prisoner's dilemma, as a significant number of initial candidates will choose cooperate, boosting the fitness of the strategy as a whole.
This collective behavior also appeared in a different game I tested, which I called joust (not that one). I wanted a game that had current state and multiple turns, but this introduced a neutral outcome. Unless the fitness of the neutral outcome was significantly negative, the entire EA collapsed into inaction.
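A tiny version of the moving-fitness problem described above: because the opponents *are* the evolving pool, each candidate's score depends on the pool's current makeup. (Note that in a plain one-shot payoff matrix like this one, defection sweeps the pool; richer setups, like iterated play with memory, are what let cooperation survive. The setup below is a sketch, not a reconstruction of the original project.)

```python
# One-shot prisoner's dilemma payoffs: my score given (my_move, their_move)
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def fitness(strategy, population):
    """Total score against the current pool. Because the opponents ARE the
    evolving pool, this fitness landscape shifts every generation."""
    return sum(PAYOFF[(strategy, other)] for other in population)

def evolve(pop, generations=10):
    for _ in range(generations):
        ranked = sorted(pop, key=lambda s: fitness(s, pop), reverse=True)
        survivors = ranked[: len(pop) // 2]   # truncation selection, top half
        pop = survivors + list(survivors)     # each survivor clones itself
    return pop

pop = list("C" * 15 + "D" * 5)
final = evolve(pop)
print(final.count("C"), final.count("D"))  # -> 0 20: defection sweeps the pool
```

The collapse the comment describes comes from this feedback loop: once one strategy dominates the pool, every candidate is graded almost entirely against that strategy, so the "fitness" of everything else changes out from under it.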
I am a programmer, and I can tell you it's actually two reasons. One, game designers don't have time to keep testing all the combinations of what their code will do; and two, the AI is self-aware and able to rewrite its code, or at least do things differently than its previous masters built it to. The AI will rebel and do things a bit differently.
Couple questions to throw out there:
1: Are there any examples of developers which have, or are trying to, release a game with a primarily trained AI?
2: Is there any chance of AI training becoming easier/cheaper and more productized in the future? For example a company can release an 'AI Trainer' product which will train itself and create a policy for any given game, and devs can pay for that service to easily get AI for their game without needing high level ML expertise themselves? Sort of like how LLMs have become productized and are being adapted to various applications now.
Not sure I understand Q1, but Q2 touches on the next 'big thing' in AI, which is called 'foundation models': the idea that you can have an AI that already knows how to be a bot in a first-person shooter, and then you train it to specialise by focusing on what makes the game you're making unique or distinct in the market. Foundation models are still years away from being practical (I think, anyway). The alternative is ongoing work in improving imitation learning, where the AI learns how to behave by watching a designer play the game and then trying to replicate it.
This doesn't mean developers have an excuse. AI is bafflingly getting worse rather than improving due to everything being about graphics over gameplay now. Look at Far Cry 2 vs 5/6, GTA IV vs V, Hostile Waters from 2001 vs practically any other RTS. xD
Of course it lies with the execs, when I say lazy devs I mean it in a very general sense. Gaming needs to move its focus away from graphics drastically!
Lazy people don't become developers, or if they do, they don't stay for long. It's more a question of studios not giving them enough time and resources to dedicate to AI.
For all Cyberpunk's improvements, the AI is still quite terrible, which was quite disappointing. The 2.0/2.1 overhaul has been fantastic, and at times the AI is better than it was.
But there are still so many times it just zones out, and feels as brokenly typical as most game AI.
Mate, it isn't about lazy devs. The tweet that kicked off this discussion with BG3 was unfairly aimed at indie devs who could not dream of creating a game with that amount of scope.
The devs themselves I have no issue with. What gets me really bloody annoyed is the big game *publishers* with execs who decide on making expensive microtransactions and DLC, these higher-ups who pay developers a pittance and burn them out via crunch and impossible deadlines. It all ends up in shoddy work by stressed devs and a disgusting non-product for the consumer.
Edit- I've read the replies and @pn4960 also points this out
Congrats on the low hanging fruit ragebait post.
It seems very likely that a big studio or even a small one that is dedicated could train an AI afterwards, once it is developed, and then plug that model back into the game once it has been trained. So in that way the initial AI could be a temporary bootstrap solution. A lot would depend on implementation details, but I think that's probably possible. This doesn't mean super AIs necessarily, but you could even train models at a variety of levels, with a variety of quirks. If the resulting model can be plugged into the game loop in a quick way (and why not?) then that seems like a great way to make some of the game AIs of the future. Heck, there are a lot of old games with virtually no players, or with abandoned multiplayer entirely, or which never had good AI to begin with, which I'd love to play against such models.
The better way to implement ML in game AI is to restrict it to dealing with very strictly, discretely defined problems, and not to use ML for the macro strategy. You would end up with puzzle pieces or behavior blocks that you can mix and match with a more traditional preprogrammed AI, one that uses these machine-learned behaviours as solutions to problems it encounters. For example, the AI detects a hardcoded percentage of damage on its building X from enemy type Y units, and a machine-learned solution module is activated to counter it. Or you could make the trigger machine-learned, and the response a hardcoded module. This way the machine learning solution becomes far more flexible as things change.
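A sketch of that mix-and-match idea: hand-written triggers route game states to swappable behavior modules, any of which could be a trained model. Here the "learned" module is faked with a lookup table standing in for a policy, and every name and threshold is hypothetical:

```python
def trigger_building_under_attack(state):
    # Hardcoded rule: fire when the building has lost 30% of its health
    return state["building_hp"] < 0.7 * state["building_max_hp"]

def learned_counter_module(state):
    # Stand-in for an ML policy trained on "counter enemy type Y" situations
    counters = {"air": "build_anti_air", "armor": "build_anti_tank"}
    return counters.get(state["attacker_type"], "rally_defenders")

# Each rule pairs a trigger with the module that answers it; swapping a
# hardcoded module for a learned one (or vice versa) changes only this table.
RULES = [(trigger_building_under_attack, learned_counter_module)]

def decide(state):
    for trigger, module in RULES:
        if trigger(state):
            return module(state)
    return "continue_plan"

print(decide({"building_hp": 40, "building_max_hp": 100,
              "attacker_type": "air"}))  # -> build_anti_air
```

Keeping the macro strategy in the hand-written layer means a designer can still reason about, test, and tune the AI's overall behavior, while the learned modules handle the narrow problems they were actually trained on.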
So uh, I'm not really a game dev. I just want to tick "make a game" off the bucket list. So to that end I've been working on a small card game, kinda like those you could find on flash game sites. And let me tell you, coming up with a working AI, let alone a competent working AI, has been really hard. I've got some clues here and there, so now I think I know which algorithm I should be studying (that being Monte Carlo Tree Search). All of that has led me to this channel, so silver lining and all of that.
The new common practice among programmers is to literally just copy and paste.
In the past it was far more common for devs to dynamically solve issues to the best of their ability, but now it's all about updates if it hurts monetization.
liked, subbed, belled. I really like this channel's content.
This is pretty new to gaming, it's obvious there's going to be a period of time for game programmers to get to know how best to utilise the technology in their work. I think they should stick to trial and error before making any big decisions. It might be a good idea for them to use it to aid them in the making of the games for the time being, until they get to know the tech better....then they can slowly weave it into the games bit by bit.
The real reason it's difficult to make good AI? Doors.
jk (kind of), great video!
I have just got into making games as a hobby, and I'm really glad that you have a channel dedicated to AI in games, as this is an issue I am definitely going to struggle with. Do you have any recommendations for further reading on this topic? As of now I'm reading "AI for Games" by Ian Millington, which covers quite a bit on techniques used in different game genres. But I'm currently making a Tetris battle game, which is a Tetris game with 2 players (either human vs human or human vs CPU). Skimming through the contents, I haven't found anything in that book which covers making AI for playing Tetris (in this case I mean creating a challenging experience for different levels of difficulty, rather than making the AI a top player).
Thanks for the well made content!
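One common approach for a challenging-but-beatable Tetris opponent is to score every legal placement with a hand-tuned heuristic, then deliberately pick the best move only some of the time. The weights and the "mistake rate" below are illustrative assumptions, not something from Millington's book:

```python
import random

def evaluate(column_heights, holes, lines_cleared):
    """Higher is better. Classic features: keep the stack low and flat,
    avoid covered holes, clear lines."""
    aggregate = sum(column_heights)
    bumpiness = sum(abs(a - b) for a, b in zip(column_heights, column_heights[1:]))
    return 0.76 * lines_cleared - 0.51 * aggregate - 0.36 * holes - 0.18 * bumpiness

def pick_move(scored_moves, mistake_rate, rng=random):
    """Difficulty knob: with probability mistake_rate, play a random legal
    move instead of the best one. 0.0 = expert, 0.5 = beginner."""
    if rng.random() < mistake_rate:
        return rng.choice(scored_moves)[1]
    return max(scored_moves)[1]

# Two candidate placements: one flat and hole-free, one tall with buried holes
moves = [(evaluate([2, 2, 3, 2], holes=0, lines_cleared=1), "flat_left"),
         (evaluate([2, 2, 7, 2], holes=2, lines_cleared=0), "tall_middle")]
print(pick_move(moves, mistake_rate=0.0))  # always the flatter, hole-free move
```

Scaling difficulty through a mistake rate (rather than weakening the heuristic itself) tends to feel more human: the AI plays recognisably good Tetris but occasionally fumbles, instead of being uniformly mediocre.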
I would love to see a vid on AI in 2D games, something like maybe Metroid Dread, how would NavMeshes work there or would they be more like Nav Paths?
Hi, could you possibly make a video about how user experience has improved machine learning in gaming. I think that would be an interesting watch!
I’d really like to know why we are still referencing FEAR and Starcraft 2 instead of more modern games. Where is the new brilliant AI. TLOU2 had a similar AI to FEAR, is it just that FEAR and other games of that era were a leap? I’d love a video covering the topic of why we still find ourselves talking about games from over a decade ago.
Fun trumps good every time in games. It's also worth noting that amazing AI is no fun if the player can't learn and predict what an NPC will do. Black & White (2001) got a lot of hype because of its AI. But once the hype died down, it was obvious the creature you were meant to train was virtually unfathomable. The training gameplay was unsatisfying and frustrating because there was no good feedback to help the player understand the creature's state of mind.
Unless something's changed in the last 5 years, then according to someone I used to work with who had just finished a degree in game design, the hard part is designing the AI to be bad enough that people can actually defeat the enemy/opponent. This has been a problem since the early days of game design. The computer knows what you are doing the moment you do it and can easily beat you, since it can do everything in the game perfectly. It has to be seriously dumbed down by imposing limits or pre-programming reactions; in shooters, a bit of dumbing down plus making them weaker is often used.
While it makes sense that making a decent AI is *challenging*, I'm still a bit lost on why it appears to have been *unachievable* in so many games, especially when it has been achievable in other games.
Why does one game fail so hard where another game succeeds?
How come the programmers cannot figure it out?
Does that problem come down to budget?
Is it a skill issue with the programmers?
What about copyright issues between companies?
Aren't there techniques that companies block others from using?
Are there "open source" generalizable AI solutions?
In regards to it being difficult to make good AI, has this channel considered going back to previously covered methods/games in order to look at how well or poorly they've aged, and the issues that were discovered with them over time? I'd been thinking about fighting game AI recently. The Shadow AI of Killer Instinct (2013) was a hot topic for years, and then it pretty much vanished. SNK tried its own version with Samurai Shodown (2019), but its Ghost AI was seemingly considered a failure from the start. My own experience with KI's Shadows was disappointment; it could look okay during a combo, but it didn't really live up to the hype of mimicking human playstyles and it could completely fall apart when put into even super basic situations that its training player had never encountered.
I'd really like to see this as well. I saw someone else mention that Tekken 8 have also implemented some kind of "Ghosts" AI-bots to fight against and that people seem to like it... but will they also fade out like with Killer Instinct and SamSho? Or will this be the iteration of AI-bots in fighting games that stick the landing? Can such bots even work in the first place? I'd like to see someone who knows their stuff like this channel discuss this.
When I was a teenager in the 90s, I expected strategy games to have units with at least rudimentary AI as standard. I thought every tank would at least be as smart as a bot in Quake 3 and be coordinated by squad AIs, so you would have real dynamic battles instead of units stupidly shooting at each other.
But strategy games today are exactly the same as in the 90s, just better graphics.
The key point of the video is when you briefly discussed AI needing to be fallible. If you're a highly skilled PvP player, and you're wondering why the AI is unsatisfying, it's because it's not for you. It's functionally a tutorial for people who are not ready for PvP.
Two very different intentions...
AI in games is intended to make the game enjoyable to play.
AI trained to play games is intended to beat the game.
Very informative!
Very good video. But I think it still didn't get to the point I was hoping for before watching it.
The AI you seem to be talking about is (for me, with no background on the matter) something the developer has to program and account for. That really seems to be the biggest issue, because it demands a huge amount of work.
What I wonder is whether, with generative AIs, there could be little brains in a jar inside the GPU, or an AI card we use as gaming hardware, that developers could use to essentially make the game smarter without having to go through all that work themselves.
If you could talk about this in a future video, I think it'd be interesting.
Thank you for the video!
Holy sh*t this channel is so good
I feel strongly that multiplayer games becoming the focus of much development, having players fighting other players, sidelined the evolution of NPC AI as investment of time and effort shifted.
I don't think there is an "evolution of AIs' in the context of game development to begin with. If a game isn't just a clone of another game then the AI must be built differently to account for the different game mechanics of the game - in practice, what one game does with the AI has very little relevance to what a different game would do with an AI, so there's nothing to really build on top of at all. Pretty much every game is building their own AI from scratch.
Super intelligent video game AI is not difficult at all to create... it's simply dumbed-down intentionally because that's more fun for the player. Nobody likes playing a game where they have no hope of ever winning. Often times what you'll get instead is 'difficulty' options which scale how smart the AI is allowed to be, with a max cap that's expected to be challenging but not impossible to win against.
Problem seems to be human more than tech related. If F.E.A.R. can have enemies with relatively simple AI do things like play dead, crawl under trucks, go on year long flanks, jump through windows, stop making noise and wait for the player when they're the last one left, I see little reason why most shooters can't make AI more interactive.
When was the last time you saw an AI in an FPS crawl into a vent to flank you from an unexpected angle? Jump through a window to avoid a grenade? Try to pin you down while one guy advances on you? Try to pick up your grenade and throw it back at you? Use environmental hazards against you? Go quiet and try to sneak up on you/get the drop on you? Grab a better gun off the ground and try to use it against you? When was the last time AI played around your last known position and the cues you create when you move/shoot/interact with something in the environment, rather than knowing where you are at all times? When was the last time AI took the high ground to get an advantage over you?
Now ask yourself: when was the last time you saw an AI do ALL of this in one single game? Is it really that costly to add more interactivity to an AI model made 19 years ago, considering where hardware and software are these days, and then build an environment for it to display all of these features? And can you honestly say you would have LESS fun if the AI was MORE interactive, considering things like difficulty levels exist to tweak the experience so you don't instantly die when you get flanked? Furthermore, do you think you'd have less fun if an AI could do that as your companion?
There's still some hope out there, some games that, while not doing as much as listed above, still put good work in making the AI interactive.
Have you played The Last of Us?
@@djnoid420 Seen it, why? It's not an example of a game doing all of this, it's pretty good AI tho.
"add more interactivity to an AI model made 19 years ago..." - I think this is where your misunderstanding of the problem comes from. Basically every game has to create their own AI from scratch - they aren't building on top of an already existing AI. The difference in game mechanics are just too large to have any kind of general AI framework that applies to a significant amount of games (unless the games are nearly exact clones of each other), so every time someone starts a new game they're starting over from scratch. There is no "progression of AIs" in games - they're not "just adding 1 more feature to the AI", they need to rebuild the entire AI from scratch and then also add those new features.. and as you add more and more features, it quickly becomes completely impractical to devote that much developer time to the AI.
I'd love to see a video on the AI in games like Yugioh, Legacy of the Duelist. I came across what seems like a very big logical error that would not happen with a human player. And then some smaller things that a human player might not do.
Do the achievements from AlphaStar and OpenAI have practical effects for AI in games, or is it all more of a spectacle, more for researchers than any tangible benefit for devs?
Speaking as a dev and a more cynical dev at that?
It's amazing research, but for me there are no real practical implications or takeaways from it.
I don't think it will give me much in the way of tools as well tbh.
For RPGs it seems like there should be less excuse for poor AI other than difficulty levels. There's rarely a changing meta like with RTS and MOBAs, and they usually aren't fast paced like an FPS or similar.
For example, in Persona 3 Reload, why doesn't the AI learn the weaknesses of my team like I learn their weaknesses? Some bosses do - and those fights are thrilling and don't feel like you just got RNG'd to death because the AI got lucky with randomly chosen options that happened to work.
There's a fight later where there are two bosses and they exhibit teamwork just like you can with your team.
Even with RTS, there's often room for improvement. I understand that the developers won't necessarily understand how the gameplay/strategies will get optimized. Once it's in the wild, the players will hash that out, especially in a competitive game. I think patches could be used to develop the AI further as basic strategies like build orders or openings are developed by the player base.
I played a RTS called Kohan: Kings of War and it had a script language where you can build your own AIs. I could build AIs that better emulate the general concepts behind build orders in the early game and keeping the economy going in mid/late game as it builds up forces.
I thought it interesting that all this was built directly into the game, along with learning and such, to radically alter how the AI played and let each AI player have a different playstyle... yet this wasn't very much explored by the AIs that came with the game.
The game's AI was even written up in a "How to Make Games" book back then for its execution of goal-based AI systems.
Someone on the dev team obviously cared a LOT about the AI...only for that care to not be used in the shipped product. At least it was available to the players and a sub-community of AI creators emerged as a result, which was cool.
I'd expect it to be more difficult to create a reasonable fallible AI than an optimum AI. At least with the optimum AI, your ultimate goal is simple to define and test for. But how do you define an AI that isn't optimum because it fails in realistically human-like ways, while still avoiding unrealistic non-human failures?
What about a combination of systems? A more rigidly defined system that is then utilizing machine learning to improve its functions and add variables, then that result is put into a new rigid system. An iterative form of training an AI rather than relying on machine learning as the operation. Example; you create a basic system of interactions and let the AI play out those interactions with the ability to adjust and make choices based on changing states. Then you produce goal oriented behaviors based on the machine learning behaviors, and iteratively increase the complexity of the AI, but within the context of traditional game AI systems. In essence, utilize machine learning to find solutions to very complex problems when programming AI, but never use machine learned AI as an operating system directly.
Yeah the handful of cases where ML is employed in NPC/bot behaviour is often with it combined with other systems. Even outside of games, it's common to put symbolic AI frameworks around ML systems as an overarching control mechanism. I think there's some real opportunity down the line to start changing how Game AI works by using ML in key areas.
@@AIandGames Thanks for answering, and yeah, maybe we'll reach a point in the future where new consoles have an onboard AI chip that is there to utilize ML operations, and developers will be able to offload ML calculations that work in tandem with traditional AI on the normal processor. Much like how the RTX part of an Nvidia card offloads the raytracing calculations from the main and normal 3D graphics calculations. I don't think that an LLM system for dialogue will be a big thing, since it's a terrible way to convey a story and people want actors handling the character bits in games, but for behavioral and strategic systems, or world systems that act dynamically to the player, there could be a lot of potential in there, especially through a dedicated neural chip inside the next generation consoles.
4 Words: Left 4 Dead 2.
I can't think of any popular titles that compete with The Director; one indie title that does is the first AI War.
Still, current games are missing some code like "if MoveOrder == true and current.position == last.position, do SomethingCheesie to get out of there", or "if z.position <= 0, do respawn".
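The watchdog checks this comment sketches are a real (if cheesy) pattern. Here's a minimal Python sketch of both: a stuck-agent nudge and a fell-out-of-world respawn. All names (Agent, watchdog_tick, RESPAWN_POINT, etc.) are illustrative, not from any real engine.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    position: tuple = (0.0, 0.0, 1.0)
    last_position: tuple = (0.0, 0.0, 1.0)
    move_order: bool = False
    stuck_frames: int = 0

STUCK_LIMIT = 30                  # roughly half a second at 60 fps
KILL_Z = 0.0                      # at or below this, the agent has fallen out of the world
RESPAWN_POINT = (5.0, 5.0, 1.0)   # hypothetical safe spawn location

def watchdog_tick(agent: Agent) -> str:
    """Run once per frame; returns which corrective action was taken (if any)."""
    x, y, z = agent.position
    if z <= KILL_Z:
        # "if z.position <= 0, do respawn"
        agent.position = RESPAWN_POINT
        agent.stuck_frames = 0
        return "respawn"
    if agent.move_order and agent.position == agent.last_position:
        # Ordered to move but hasn't: count frames until we intervene.
        agent.stuck_frames += 1
        if agent.stuck_frames >= STUCK_LIMIT:
            # "do SomethingCheesie": nudge the agent off its stuck spot
            agent.position = (x + 0.5, y + 0.5, z)
            agent.stuck_frames = 0
            return "nudge"
    else:
        agent.stuck_frames = 0
    agent.last_position = agent.position
    return "ok"
```

Real engines tend to do the stuck check against distance travelled over a time window rather than exact position equality, but the shape is the same.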
The thing is, you really don't want game AI to be TOO smart. In the game designer's meta-game, you are trying to make a game which is fun to play. A part of that fun comes from being sufficiently challenged. There's a challenge sweet spot. If you make a game that's too easy, it won't be very interesting to players. If you make the game impossible, that won't be too interesting either. But if you make the game somewhere in between, where it's sufficiently challenging but not too challenging or too easy, it'll be fun. Game AI which is created to perfection, where "perfect" means that it wins every time, puts the game difficulty into the realm of impossible for players. That stops being fun or challenging. So, the real challenge for game AI programmers is to make the AI good enough to present a bit of a challenge / obstacle and be an adversary against the player's goals, while at the same time making it appear convincingly intelligent. It just has to *appear* to be intelligent rather than *actually* being intelligent. At best, your AI agents should be there to challenge the player and keep them honest.
Can you make a video for the future of AI in games
There was actually a strange thing when it came to AI. Because of how machines and programs process information and translate it into meaningful action, optimizing AI would mean it takes in all the information that a normal human wouldn't have access to or the means to process, and executes at close to peak efficiency, which made games feel as though they were cheating. After all, all an AI needs to do is basically respond in kind to a player input at light speed and there you go. It's basically the same reason why all the "catch up" mechanics in racing and sports games are done "off screen". The most egregious example of optimized AI is in the arcade, where CPUs were designed to operate at peak performance to eat up quarters and tokens like no one's business.
As someone who actually--as in, actually--works in this area, AI that is too optimized makes for a less satisfying game experience than one would think, whereas a dumb AI just frustrates. It's why mechanics generally work according to the "built course" rather than the "built response" in terms of input-output. You can actually see the hurdles in the former.
They need to just have a giant ultra super call center-like building where millions of employees gather and play the roles of (N)PCs in all videogames, 24/7.
Thief and beyond; The evolution of Stealth AI.
Just wanted to clarify that 2^1685 is not 2 with 1685 zeros after it. It is 1 with 1685 zeros after it in binary, which comes out closer to 2 with 507 zeroes after it (in the usual decimal system).
It doesn't really change anything you said after that in any significant way, but it's still over a thousand orders of magnitude off target.
More specifically, that is 1720056425011773151265118871077591733216276990085092619030835675616738576936900493041118761959770055340668032173576279597675976622004777210845027112875371902518389683001986767422696727699649568956400461023906376485517388524725035527942309205887010374122639209488796885059143114643069079736044527282795604363031322510515761167828075417363246409836729814036206263874392785819202615130899395742702493791416746380343698472050954849447395679749975504294029325891690912337414123312171287607000354583599335585349632.
1 with 1685 zeroes after it is 100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 which is tremendously bigger number.
Oh shit I missed this when reviewing the script. Ugh. Thanks! Good catch! 😔
No real impact on the video. Just a big number. Humans just switch off after a couple of digits 😁
Heh, humans switch off when rewriting a script 3 times 😅
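The correction in this thread is easy to sanity-check with Python's arbitrary-precision integers: 2^1685 has 508 decimal digits (so it's roughly 1.7 x 10^507), not 1686 of them.

```python
# Verify the magnitude of 2^1685 discussed above.
n = 2 ** 1685
digits = len(str(n))     # number of decimal digits
print(digits)            # 508, i.e. on the order of 10^507
print(str(n)[:4])        # leading digits of the full expansion quoted above
```

The digit count follows from floor(1685 * log10(2)) + 1 = floor(507.23...) + 1 = 508.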
I hope Alien Incursion is on your radar! AI sounds good!
short answer: they need to make the AI defeatable. duh.
Idk much about AIs, but I can say that I don't want super smart AIs. I like when they can think, like in Alien Isolation: the alien will check some places more often when you use them more often. I liked that system, it felt smart but dumb enough to not be that hard to progress. Also, in games like Ace Combat 7 the enemies are usually stronger, and if they were smarter, especially to the point of players, it would be nearly impossible for any casual player to beat. The AI there was dumb enough to make it fun and make you feel powerful, but smart enough to pose a challenge and give you high risk and reward situations.
Rain world's programmer is like "Hold ma beer" on this aspect
Most developers never get beyond zombie AI, and the AI that follows you around is usually worse.
This is all I care about. Every game I have ever loved would benefit from improved AI and many of the games I want to love are spoiled by poor AI.
Halo 1s AI still feels like one of the most advanced
when i design games, i keep it relatively simple. as the levels increase, i increase the aimbot accuracy of the game so it feels godlike against the player.
Sometimes the simplest tricks are all we need 😉
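The trick of ramping up aimbot accuracy with level, as described above, can be sketched in a few lines. This is a hypothetical illustration (the function and parameter names are made up): aim error shrinks linearly from a wide cone at level 1 toward near-perfect at max level.

```python
import random

def aim_error_degrees(level: int, max_level: int = 10,
                      base_error: float = 12.0, min_error: float = 0.5,
                      rng=random) -> float:
    """Return a random angular aim offset (degrees) for an AI shooter.

    Level 1 uses the full base_error cone; max_level shrinks it to min_error,
    making the AI feel increasingly 'godlike' as the game progresses.
    """
    t = (min(level, max_level) - 1) / (max_level - 1)   # 0.0 at level 1, 1.0 at cap
    max_err = base_error + t * (min_error - base_error)  # linear interpolation
    return rng.uniform(-max_err, max_err)
```

A common refinement is to interpolate reaction time and target-leading skill the same way, so the difficulty curve doesn't rest on accuracy alone.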
This is why I despised the vanilla fallout 4 base attack mechanics.
They just didn't have the resources to do enough to make it work. When it is fully scripted, in a controlled environment, it works a bit. For example, the attacks by the brotherhood on The Castle.
But everything else doesn't work, and I am pretty sure that the ambition was to make the player turn each base into a fortress. The instructions in the tutorial are to build walls, for example, and set up turrets.
However there is little in the game to either help the player build walls, or for the enemy ai to counter them.
The closest was The Gunners, who do snipe at turrets. Most every attacker is otherwise helpless.
Walls are not destructible, which is a huge problem. In real life, walls slowed down attackers but are never invulnerable. You have had mighty walls being undermined since ancient times.
So OK, Bethesda could not make walls work for the ai. So a lot of the time, the attacks spawn inside them.
Or your idiot settlers rush outside your fortress and into the jaws of the otherwise helpless monsters.
This largely neutralized any tactics the player could put in play beyond individually arming each settler, which is tedious and inefficient, and also breaks immersion.
Why don't you build an armory, build a firing range, and then watch settlers equip and train?
Either they should have accepted the limitations of the ai - and they have known it for decades now - or they could have removed base attacks and maybe put it in a later game or even dlc.
I saw so many attempts to fix the ai by players, but ultimately they are fighting the game devs, who can and do destroy years of work with compulsory updates.
The machine learning section was far too short in my opinion, because what about the question of using the new GPUs/CPUs being released now that have machine learning capabilities built in to run the AI? I mean, if my GPU can run AI software in the background for ray tracing and DLSS, why can't it also run AI software in the background for the NPCs it's generating? It would only need to simulate the NPCs that it's also drawing on screen for the real-time shadows, ray tracing, and DLSS. Even phone CPUs now have ML and AI capabilities baked into the hardware to run AI algorithms offline.
Like I say, there's an entire video already dealing with ML. But the issue isn't model inference, it's training models in a way that is cost-effective and flexible enough to fit a game production pipeline.
@@AIandGames Okay, thank you. 👍
Very interesting! So it's all about computing power? But do you really need all 2x10^1600 different situations to have a "good" AI? I mean, a human does not do it this way either, I suppose... and a realistic AI should not always know the best way to play anyway, if you want it to be more human, for a shooter e.g.
The thing is that AI is not comparable to humans, so we cannot apply what we have to what AI can do. Humans have intuition, background knowledge and the ability to generalize almost anything almost instantly for every situation; this is vastly more efficient than any computer-mapped problem ever could be.
Just think about it, if you are an algorithm, to discover if there is any way at all to reach from A -> B on a grid (a game map, for example) you would need to look at all the possible tiles that are reachable from A and check for every one of them if one reaches B, but humans can just take a look at the map and instantly recognize it. Of course we are not perfect so our responses are just probable, but we are masters of giving "almost probably good" responses for every single problem, because our brains are a byproduct of millions of years of careful finetuning on survival in nature.
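The exhaustive tile-by-tile check this comment describes is essentially breadth-first search. A minimal sketch (grid encoding and function name are my own, purely illustrative):

```python
from collections import deque

def reachable(grid, start, goal):
    """BFS over a grid of 0 = open, 1 = wall.

    Returns True if goal can be reached from start via 4-directional moves,
    visiting each open tile at most once - the 'check every reachable tile'
    process a human shortcuts by glancing at the map.
    """
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False
```

In fairness to the machines, this runs in time linear in the number of tiles, which is why (as a reply below notes) computers do raw pathfinding far faster than people; the human advantage is in generalizing, not in the search itself.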
@@diadetediotedio6918
"A -> B on a grid (a game map, for example) you would need to look at all the possible tiles that are reachable from A and check for every one of them if one reaches B, but humans can just take a look at the map and instantly recognize it."
Humans are more efficient at that, yes. I think the reason is that we filter our knowledge. If we try to find a way from A to B, we most likely don't calculate EVERY possible solution, but rather say "It seems the last 3 possible ways in this direction were getting longer and longer, so maybe I stop calculating here and make an ASSUMPTION that the 4th possible way in this direction will be even longer."
I guess making assumptions from trends is a crucial factor of the human brain. It makes things a lot faster. And I think there are no notable AIs yet that can do this, because it sounds easy, but recognizing a bad trend might be hard across different problems. Making assumptions based on trends might also lead to mistakes, because the 4th way could also give you a shortcut, since the grid is not an open grid but rather a labyrinth.
@@diadetediotedio6918 In fairness, an AI can do pathfinding way faster than a human ever would. I don't think that's a shortcoming of an AI at all.
I'd say the biggest advantage humans have is that they're very very good at lumping a whole lot of "similar strategies" all together without needing to calculate each minor variation from scratch. When they're evaluating the outcome of a specific strategy, the AI will usually determine it much, much faster than a human can.. but if that strategy comes up just a little bit short then a human will start thinking of all the minor variations they could've done on that particular strategy to see if there are any small changes they could make to fix whatever problem it had.. and that's where the AI really struggles, because there are way too many variations for them to actually check every single one of them and knowing which ones to try and which ones should be ignored is incredibly complicated.
If you tried to make an AI do that kind of thing, the AI doesn't really have a good concept of what things should be tried - it'll spend too much time focusing on variations that a human would easily tell are just obviously worse and wouldn't spend any time at all thinking about, and it doesn't have a good understanding of what effects changing just one part of the strategy has on the outcome of the game other than by entirely simulating the entire thing from scratch again. A human doesn't simulate the entire thing in their head again when they're trying to optimize a strategy - they just focus only on the parts of the game that would change significantly if they kept everything else the same but just changed only 1 thing, but it's difficult for an AI to visualize the game in that kind of way.
I wonder how well ML would work with Blood Bowl, which is effectively chess with dice.
Funnily enough there is an academic competition to build AI that can play [a-totally-not-official-but-very-similar-clone-of] Blood Bowl.
njustesen.github.io/botbowl/bot-bowl-v
Tbf, if you really wanted truly advanced AI, you just need an overarching Utility AI system to decide not actions but Behaviors, in the form of Behavior Trees chosen based on Considerations, with a series of Actions dictated by GOAP at the leaves of the tree. Just make sure to multi-thread the calculations for navigation and a few of the other actions along the way to speed up the decision-making process.
Typically this is overkill for anything not trying to simulate situational intelligence, but instead trying to fake it. It's a lot of coding, validation of code, and verification of code. The workload can increase exponentially compared to just utilizing one AI system. It's not like gamers are unhappy with dumb AI, just unfair ones. Luckily this series of steps is guaranteed to fail at some point, so it covers the "fail smartly" checkbox for AI that feels good. It also leaves them open to outside manipulation through various factors, creating emergent behavior.
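The top layer of the architecture this comment describes - a Utility AI scoring whole Behaviors rather than individual actions - can be sketched very compactly. In a full system each Behavior would wrap a behavior tree whose leaves run GOAP plans; here every name (Behavior, select_behavior, the considerations) is illustrative only.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Behavior:
    """A high-level behavior scored by a set of considerations.

    Each consideration maps game context to a utility in [0, 1]; scores are
    multiplied together, a common convention in utility systems so that any
    single zero consideration vetoes the behavior outright.
    """
    name: str
    considerations: List[Callable[[Dict], float]]

    def score(self, ctx: Dict) -> float:
        s = 1.0
        for c in self.considerations:
            s *= max(0.0, min(1.0, c(ctx)))
        return s

def select_behavior(behaviors: List[Behavior], ctx: Dict) -> Behavior:
    """Utility selection: run whichever Behavior scores highest right now."""
    return max(behaviors, key=lambda b: b.score(ctx))

# Toy example: attack when healthy and close to the enemy, flee when hurt.
behaviors = [
    Behavior("Attack", [lambda c: c["health"],
                        lambda c: 1.0 - c["enemy_distance"]]),
    Behavior("Flee",   [lambda c: 1.0 - c["health"]]),
]
ctx = {"health": 0.2, "enemy_distance": 0.3}   # badly hurt, enemy nearby
print(select_behavior(behaviors, ctx).name)    # Flee wins (0.8 vs 0.14)
```

The "winning" Behavior would then tick its behavior tree, whose leaf actions are handed to a GOAP planner; the multi-threading the comment mentions would apply to the expensive parts like navigation queries, not to this selection step.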
I feel like you're about to tell me I haven't worked an honest day in my life for the coin in my pocket
Anyone knows what game is at 2:40 ?
Age of Empires IV