This just popped up in my suggested videos. Very nice explanation. I haven’t gotten to my enemy AI, but I will keep this approach in mind. Thanks!
This is exactly what I needed. I was struggling with how to solve obstacle avoidance without implementing grid-based pathfinding, and this seems like the perfect solution.
Thank you! And please feel free to reach out if you need some help!
I learned stuff, I laughed a lot and picked up new dance moves. Excellent video.
Jidion introduced me to the best YouTuber. You should also invest in a high-quality mic
I don't think she's got the funds for that
@@Miesko1 What makes you think that?
Which video did Jidion mention her in?!
I found the video, Twitch Con, but what is the timestamp?
@@Jriniscool Not a popular YouTuber, innit
Thanks! Cheers!
I don't know who you are. I don't know where you came from. But I received a great explanation of something I was too lazy to research myself. It's scary how much we don't know in these sorts of development-oriented fields (edit: we don't know what we don't know). I hadn't seen this type of AI system before. Thanks for the video
I don't know who you are. I don't know where you came from xD But your comment really made me smile! I appreciate it so much. Took me a long time to wrap my head around this stuff, I'm glad you got something out of this.
A fun little addition to this, which I used to use to create super simple but effective AIs, is great for creating 'group predators'. The goal is for each predator to try and keep away from each other, while chasing the player. The result of this is that they will naturally surround the player, circling around them.
I got the idea from a study I saw on how wolves algorithmically surround their prey - 1. go towards prey, 2. stay away from other wolves, 3. repeat
Adding something like this to your 'interest' algorithm would make for fun group dynamics!
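For anyone curious what that could look like on top of the video's interest step, here's a rough GDScript sketch; directions, interest, prey, pack_mates and the two weights are just names I'm assuming, not anything from the video:

# Interest toward the prey, minus interest in directions that point at nearby pack mates.
# Each predator keeps closing in while spreading out, so the group naturally encircles.
for i in range(directions.size()):
    var to_prey := (prey.global_position - global_position).normalized()
    interest[i] = directions[i].dot(to_prey)
    for mate in pack_mates:
        var to_mate: Vector2 = mate.global_position - global_position
        if to_mate.length() < separation_radius:
            interest[i] -= separation_weight * directions[i].dot(to_mate.normalized())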
I saw that video you referenced and thought it was cool, but I couldn't think of how to implement it. This gives me what I need, thank you!
This is exactly what I needed. Having been inspired by that same video and tried to implement it one way, I was looking for a less messy way to do it, and you just made my week (a year+ later)!
This is very useful. After implementing this I have realized that this method has one issue: when the player stands directly in line with the enemy, with an obstacle in between, the enemy will not try to circle around the obstacle to reach the target.
Are you sure you implemented the Dangers correctly?
@theseangle I think so. Context steering cannot completely replace pathfinding. It's quite easy to get the enemy stuck in many different positions, such as between two obstacles, or, as I previously mentioned, when the player stands directly in line with the enemy with an obstacle in between.
These sorts of scenarios require some additional logic to complement the danger values that are assigned by hitting obstacle colliders, for example always favoring the right side if there are obstacles on both sides. There are other ways as well, but simply setting the danger values of a vector and its two neighbouring vectors (as explained in this video) will not ensure the enemy can find a way towards the player if you have many obstacles.
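Just to illustrate the kind of extra rule I mean (my own sketch, not anything from the video; context is the 8-slot array of interest minus danger):

# When two slots score the same, commit to the clockwise one instead of oscillating.
func pick_direction(context: Array) -> int:
    var best := 0
    for i in range(1, context.size()):
        if context[i] > context[best]:
            best = i
        elif is_equal_approx(context[i], context[best]) and i == (best + 1) % context.size():
            best = i  # tie: prefer the clockwise neighbour
    return best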
@@lukaspetrikas6320 Hmm, maybe you can create an algorithm which detects when something like this happens and, depending on the cause, takes some action, e.g. enlarging the raycasts for x seconds. Maybe it'll help.
@@lukaspetrikas6320 Also, the steering is pretty important. Without it the AI gets stuck on corners.
I didn't have any problems with two obstacles, but the player standing directly in line with the enemy behind an obstacle is giving me a headache. Did you ever find a reliable way to fix this?
This was great. I watched that Game Endeavor AI video you mentioned like last week, but your video went into detail on how to actually do this stuff! Can't wait to try to build this sort of thing myself!
That looks super cool, I'm gonna add it to the growing list of very complicated things I'll implement later.
Very good video! I understood it completely and was able to implement it into my game, thank you!
This was so cool! I've only been getting into game dev for the last few months in Godot, making a top-down maze crawler game, and I'm 100% going to try to apply these concepts to the enemies in it.
Thank you for this amazing all-encompassing video. You break down the process of building a rather complex AI into digestible bits and you give a proper view of the whole thing with just the right amount of details. Now I can begin to implement these concepts without the fear that I'll make a mess of them. In short, this is exactly what I was looking for. Truly a life saver
Thanks for the video! You just helped me figure out an issue I've been having with connecting Utility AI and State Machines.
Hahahaha that herd joke caught me so off guard
Very well done, Jackie! Fun but very educational too. Gave tech details that made sense with good graphic examples.
This was a great video, really concise explanations and the editing was top tier. great job!
This is awesome! I've seen the video from Game Endeavor and have been looking for a script that does that since, but this video made it easier to understand the process. I've actually added onto this by making the enemy able to detect all obstacles in a radius and have that affect them similar to how the enemy tracks down the target. So thanks!
What a great video, thanks for the upload
Your explanation of advanced concepts such as this is incredible! You seem to have a talent for teaching
tyvm, this was so enjoyable. extremely talented, earned my sub!
I appreciate it, was a blast to make!
Glad that Jidion showed love to this channel
Neatly explained and accurate. As an aspiring game designer and a fairly seasoned programmer, I can tell you know your stuff and put a lot of effort into this. Keep up the good work!
@godcraftsden thank you so much for the kind comment, good luck with the implementation!
I just saw the Game Endeavor video a few days ago and put it on my to-do list to look into how he implemented it. I can't thank you enough for making this video! I am still a little confused as to when to add together interests. Do you add all the interest arrays together and then subtract the danger, or do you get the context maps and add them all together?
Like this?
1. Normalize a vector to what the AI desires.
2. Make an array that is the dot product of that vector with all the 8 directions.
3. Repeat steps 1-2 for all points of interest.
4. Make the danger array.
5. Make a new array where you subtract element 1 in danger from element 1 in interest, and so on for elements 2-8 (AKA interest minus danger). This gives you the context map.
6. The highest value in the context map is the most desired direction.
7. Apply steering to get a more natural most desired direction.
8. Tell AI to go in the direction from step 7.
You add all the interest arrays together first; you should only have one context map array. And when you add all the interest arrays together, if you end up with a number that's greater than the number you put for the danger, then that means you need to raise the danger value. I said 5 was good for me, but if you have like 6 interest arrays, you might want to raise it. Hope this helped, sorry for the late reply, I'm a mess :) Feel free to ask more questions and let me know how your implementation went!
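In case the order of operations helps anyone, a tiny GDScript sketch of what I mean; interest_arrays and danger are assumed to be built elsewhere, and the names are mine, not the video's:

var context: Array[float] = []
for i in range(8):
    var total_interest := 0.0
    for interest_array in interest_arrays:  # one interest array per thing the AI cares about
        total_interest += interest_array[i]
    context.append(total_interest - danger[i])  # single danger array, weighted high enough to win
# The index with the highest context value is the chosen direction.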
This is the video that finally made the interest array stuff click for me!! Thank you!
What Endeavor did in his video was, when the AI gets close to the player, change the shape of the dot product, which changes the desirability:
float changedDot = 1 - Mathf.Abs(dot - 0.65f);
You can also make it run a little to the side with:
float changedDot = 1 - Mathf.Abs(-dot);
It's the shaping part of his video, which you didn't mention.
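For anyone doing this in GDScript rather than C#, the same shaping could look roughly like this; directions, interest, player and strafe_distance are assumed names, and 0.65 is just the example offset from Endeavor's video:

var to_player := (player.global_position - global_position).normalized()
var close := global_position.distance_to(player.global_position) < strafe_distance
for i in range(directions.size()):
    var dot := directions[i].dot(to_player)
    if close:
        # Prefer directions about 49 degrees off the player, so the enemy curves around
        # instead of bee-lining straight into melee range.
        interest[i] = 1.0 - abs(dot - 0.65)
    else:
        interest[i] = dot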
Jidion really boosted her subs, good stuff Jackie, keep it up
Incredibly helpful video. Can't wait to use context steering in my own AI!
I've been looking for a good Godot tutorial on context-based steering for a while, and I have no idea why this video only got recommended to me now. But I'm glad it did eventually, because it's the best!!!!
Wow, very nice explanation and implementation. Good job!
This video just single-handedly reignited my passion for coding, which I lost after getting a C+ in my programming college course. Thank you Jackie!
Hey, one more plus and you're golden
I read that as C++ programming college course
This is so useful, thank you very much. I always wondered how to make an interest-based steering system.
8:16 How do we get the current velocity?))) Sorry)))
great video Jackie
Great video! It's cool to hear about the state machines you used.
OH WOW, I didn't realise the Ancient Greeks had this advanced AI, you will learn something new with Jackie every day!!
Nice video!
Be careful that state machines can become really complex quite quickly.
One approach is to have a state machine for each state, if you plan on having complex AI.
If, for example, an enemy wants to have an injury state that differs from when it is in combat or idle, you might need a substate machine, isolated from the main one.
As someone else suggested, Behaviour Trees are more manageable than state machines and let you keep even really complicated AI under control.
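As a rough illustration of that idea (the class, node and method names below are completely made up, not the video's setup), a complex state can own its own little machine so its sub-states stay isolated from the top-level one:

# Top-level "Combat" state that internally switches between approach / strafe / attack.
extends Node
class_name CombatState

@onready var sub_states := {
    "approach": $Approach,
    "strafe": $Strafe,
    "attack": $Attack,
}
var current := "approach"

func physics_update(delta: float) -> String:
    var next: String = sub_states[current].physics_update(delta)
    if next in sub_states:
        current = next
    return "combat"  # the outer machine only ever sees "combat" (or "idle", "injured", ...)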
I'm glad Jidion put me on 🙏
The thumbnail is the only reason I clicked on this vid... glad I stayed lol
You should add actual military formations to the AI
@@sonicrocks2007 cool idea!!
@JackieCodes The AI could circle around you, or go into a line formation and then put archers in the back so you have to fight 5 melee guys before getting to the archer, etc., making it way more difficult. Lol, if you do, you should make a tutorial.
That's really cool! I'm gonna try to do that
You really lost me at the steering behavior bit. The array and raycast stuff was great and fairly easy to follow though
JIDION GANG LESS GOOOOOOO
Loved the video! Thanks, Jackie!
Great tutorial, although there's a major flaw that I encountered. The enemies can get stuck between two directions: when you lure them behind a wall and stand directly on the other side of it, they can't prioritize which way to go and they alternate between two vectors indefinitely. This isn't much of a problem if the game in question doesn't have many obstacles (such as your game), but if your game has many sharp corners and tight spaces (which was the case in mine) it can be a serious headache for you.
Though it's not without its solutions. One way I found to combat this problem was to use 2D pathfinding, in a way. I connected a navigation agent to the enemy and got the vector to its next path position, then took its dot product with the raycast vectors to figure out which direction was most similar to the next path location. I then took the index of the closest one and added a number to the corresponding index of the interest vector. It may look jaggy at first, but by tuning the number you add to the interest vector you can smooth it out.
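Roughly, the bias I mean looks like this (Godot 4 names; directions, interest and nav_bias are my own placeholders):

# Ask the NavigationAgent2D where the path goes next, find the context slot that points
# most closely that way, and give it a little extra interest.
$NavigationAgent2D.target_position = player.global_position
var to_next := ($NavigationAgent2D.get_next_path_position() - global_position).normalized()
var best := 0
for i in range(directions.size()):
    if directions[i].dot(to_next) > directions[best].dot(to_next):
        best = i
interest[best] += nav_bias  # tune this to trade smoothness against path-following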
Is there a code example of this anywhere?
I mean, the explained concepts are pretty simple to implement. Just try implementing them on your own, and then use Stack Overflow and Reddit if you need help.
Didn't know you are such a talented dancer, Jackie! :D Good job, the video is wonderful :)
I implemented it and ran into a problem with getting stuck between two objects. My tip is to choose the neighbour danger values so they don't add up enough to cancel the interest completely: even though a path may be dangerous, if the target is on that side you still need to go reach him, and if the enemy avoids it completely it will end up stuck, going back and forth again and again.
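In other words, something like this, where the danger only scales the interest down instead of wiping it out (danger_weight, directions and the arrays are just placeholder names):

# A risky direction keeps a reduced but possibly positive score, so an enemy that has to
# pass close to an obstacle to reach you can still commit instead of bouncing back and forth.
for i in range(8):
    context[i] = interest[i] - danger_weight * danger[i]
var chosen := directions[context.find(context.max())]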
I really wish there was a link to find out more about the AI part. I already know how to do the state machine part, but I'm struggling to implement the AI pathing. I get that there's a set of 8-directional raycasts and an 8-directional vector array that detects objects within the path. However, translating that in Step 1 to an actual AI path is just a bunch of ???? to me. On top of that, the "Navigation2D" node has been deprecated, so I guess NavigationAgent2D is the closest match..
that steering force function saved me hours of googling thank youuu
How does the steering function work?
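Not the exact code from the video, but the usual idea behind a steering function is "desired velocity minus current velocity", applied gradually. A minimal Godot 4 GDScript sketch, where max_speed and steering_factor are assumed tuning values:

# Accelerate toward the chosen direction instead of snapping to it; this is what
# produces the smooth curved motion instead of instant 8-directional turns.
func steer(velocity: Vector2, chosen_direction: Vector2, delta: float) -> Vector2:
    var desired_velocity := chosen_direction.normalized() * max_speed
    var steering_force := (desired_velocity - velocity) * steering_factor
    return (velocity + steering_force * delta).limit_length(max_speed)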
this was fantastic, ty
I remember when I was figuring it out, about a few months ago. I used those exact techniques. I recommend checking out behaviour trees; they're a modification of state machines that fits game AI needs very well. An addon implementing behaviour trees was recently created for Godot, named BeeHave. Imo it simplifies working with states.
Oo sounds cool, gotta check it out, thanks!
Thank you for the amazing video. It works fantastically. One question I had while implementing the FSM with steering: I wasn't sure how to combine the two while trying to decouple them. Would the states be in charge of handling the velocity of the CharacterBody2D, or the steering script?
How well does this translate to 3D enemies?
8:06 W RIZZ
At 4:01 it's confusing because you say "subtract the interest from the danger" but you're subtracting the danger from the interest.
Noice! I found out we have that Game AI Pro book series in the work library, and I'm going through the first one :)
Omgggg!! So lucky! Lmk how it is!
Also FYI Godot 3.5 has a new navigation server with obstacle avoidance 😄 It’s nice, but this implementation is really interesting! I like how flexible it is. And that state machine setup is really nice!
I already used it and it wasn't working, and one of the devs told me not to use nav avoidance on static colliders...
@@JackieCodes My bad, it's probably just for basic collision. I really like your solution! Those weighted values for different obstacles blew my mind when you made the edges of the arena “dangerous”
Hi! Great video, but do you think you can go more in-depth into the explanation of the steering force? That would be awesome! Thanks a bunch!
Really clever work on the steering behaviours and state machine.
Congratulations on the Jidion video
Jidion crew here boys
This is going to be the next Terminator, I swear.
As an aspiring game developer, I found this really awesome and helpful
Hello, I'm still a pretty novice gamedev using Godot and can follow along with this guide until I get to the "Dangers" section. For the life of me, I can't figure out how to get the raycasts to change the values within the danger vector array. I've been at this for about two days now and have been spending most of my free time scouring the internet trying to find answers.
I've tried:
- Creating the raycasts outside of the code like in the video, putting them into their own array and checking is_colliding() on each of them in a for loop, but that just returns an error saying is_colliding isn't a function in the base.
- Generating them through code in a few different ways, appending them into a raycasts array and then checking if each is colliding, but that doesn't work either.
- I got a bit of progress by referencing each raycast in its own onready var and then checking if it's colliding and changing the values manually like below, but then I run into a problem where they're constantly overriding the values other collisions set.
    if _0.is_colliding():
        danger_vector[0] = 5
        danger_vector[1] = 2
        danger_vector[7] = 2
- And on top of that, after getting it working somewhat, when I try to get the context vector by subtracting the values in the danger vector from those in the interest vector, I keep running into errors like this: "Invalid set index '0' (on base: 'Array[float]') with value of type 'float'"
I don't know if I'm overthinking or overworking it but I just can't seem to get it to work properly. Is there any way you can share your process for how you got this done with some example code? Anything would be much appreciated.
I was getting the same invalid set index 0 errors as well, and fixed it, but don't fully understand what's going on. I read that if you declare an empty array, there will be no index 0. So, I thought, okay, declare the arrays with a bunch of zero placeholders (i.e. var danger_array = [0,0,0,0,0,0,0,0]). That worked for the danger array and the context array, but not for the interest array. Maybe I had a typo somewhere, dunno. Regardless, I switched to .append for the interest array (i.e. interest_array.append(var)) and made sure to clear the array after every cycle (i.e. interest_array.clear()). The following is my code for the danger array.
for i in range(8):
    if raycast_array[i].is_colliding():
        if i == 7:
            danger_array[6] += 2
            danger_array[7] += 5
            danger_array[0] += 2
        else:
            danger_array[i-1] += 2
            danger_array[i] += 5
            danger_array[i+1] += 2
You need to fix the is_colliding() issue because that's probably the best way to do it. I did it similarly to what you mentioned: defined an array with the raycasts and an array with the danger values, all initially set to 0; then, for each raycast, I checked whether it was colliding, and if it was, I set a variable containing the index of said raycast, so I could change the values in the danger array at the corresponding indices (for example, if it was the 5th raycast, I would update the 4th, 5th and 6th values to 2, 5, 2), with an if statement checking whether the values have already been set (for example, if the player entered 3 raycasts, it would only update the values assigned to 2 and 0, and leave the 5's so it doesn't get all messed up).
If you haven't already fixed it and my explanation makes zero sense (very possible), I could send you the GDScript code so you can see what I actually did. Hope this helps.
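For anyone stuck on the same two errors, here's a rough sketch of one setup that avoids both; the node layout and numbers are just an example, not the video's exact code. The RayCast2D children get collected into an array once, and the typed arrays get resized before they're indexed, which is what the "Invalid set index 0" error is complaining about.

@onready var raycasts: Array = $Raycasts.get_children()  # 8 RayCast2D children

var danger: Array[float] = []
var interest: Array[float] = []
var context: Array[float] = []

func _ready() -> void:
    # A freshly declared typed array is empty, so danger[0] doesn't exist yet.
    danger.resize(8)
    interest.resize(8)
    context.resize(8)

func update_danger() -> void:
    danger.fill(0.0)
    for i in range(raycasts.size()):
        if raycasts[i].is_colliding():  # note the parentheses: it's a method call
            danger[i] += 5.0
            danger[(i + 1) % 8] += 2.0
            danger[(i - 1 + 8) % 8] += 2.0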
I’m here because of Jidion (aka chad Gibbs) keep it up Jackie 😎👍🏻
I implemented this; the caveat is that it doesn't work that great for enclosed areas. The AI will get scared of doors, since the diagonal raycasts will hit a wall but the straight path through won't.
Sometimes it needs adjustments, like messing around with the length of the raycasts?
@@JackieCodes I am not sure, I only played with it for like 1-2 hours. The bane of it is a single door with multiple enemies. You can get them to not be scared of doors, but then they will get stuck at an angle.
Another issue with many of them is that they ping-pong off each other. Making the incentive lopsided will make them rotate, but again there will be cases where they rotate and sync with each other, blocking themselves from the doors.
I think I'll hard-code it: put a detection zone around doors and queue them in a deterministic fashion.
Hello, I wanted to ask for your advice. How do you program an NPC to understand that its path is blocked? For example, if the player stands in the doorframe.
Have the Raycasts also detect the player.
Love this
Jidion said hi
Before I go back and rewrite my enemy AI/PathfindingObstacleAvoidanceSpaghetti I wanted to drop a comment and thank you. Thank you for showing me this brilliant idea.
Signed, fullstack developer for 20y+ trying gamedev as a hobby 👍❤
Great explanation thank you :)
Man, you gained 25k subs, all hail Jidion
I am a Discord bot and web developer and I am trying to get into game development. I am going to watch your videos once I get done with my bot, keep it up!!
Hey, I came from Jidion. I don't know much about coding, but nonetheless this is really interesting, keep up the good work!
Great job Jackie, I don't know what any of this means, but in the future I do plan on learning, since my childhood dream is to make a game, and I will make sure to revisit your channel for any help I may need along the way! ☺️
top tier content
Jidion gang
Topic: Cool
Information: Super useful
Embarrassment while green screening: Amazing
Haha appreciate it!!
nice vid!
Came from Jidion, W YouTuber ‼️‼️
Hey Jackie! Loved the video. I'm curious though, based on your game coding experience, how long would a game like Terraria take to build? Years' worth of work and effort? Huge fan of the game and always wondered how long a 2D game like that would take to design. Not counting updates haha! 😇💙
I feel really guilty saying this, but I've yet to play Terraria! 🙈 It's in my Steam library, so I will come back to you on that.
From Jidion with love
Hey, I'm here from Jidion. You're a W YouTuber
🔥🔥🔥
My enemies seem to incrementally speed up because of velocity = velocity + steering force
Limiting the max velocity seems to help. By the way, what would be a good value for the steering value?
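For reference, the fix I mean is just clamping after the force is applied, something like this in a Godot 4 CharacterBody2D (max_speed is an assumed export; a "good" steering value really depends on your speeds, so tune it by eye):

# Cap the speed every frame so repeated steering forces can't grow the velocity without bound.
velocity = (velocity + steering_force * delta).limit_length(max_speed)
move_and_slide()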
@@mijkolsmith Hey, can you help me with local vectors?
@@awesomewow668 go ahead and ask the question you have instead of asking if you can ask one
@@mijkolsmith Around 3:03, how can I get the local vector? With the local_to function?
@@awesomewow668 It's also called the direction vector. To get it, subtract your position from the goal position. In GDScript: (player.get_global_position() - enemy.get_global_position()).normalized()
Lets gooooooo
Came from Jidion
That's so cool! But can you do it on a MacBook? + I really like your videos, they remind me of Michael Reeves videos but much more relaxed lol :)
Of course you can! I initially developed this game using a Mac mini and then I switched to Windows :)
great vid
W Jackie
Here from Jidion, you're the loveliest lady! Xxx
From Jidion 💕
W creator
nice video
From the Jidion video
Jidion W
Pog
Cool video.
please, more code, more dances, more numbers, less auto-hate
'Tis wonderful, but I still cannot figure it out in my code)))))