Sorry it's late guys, but here is the code if you're interested github.com/TylerMommsen/fruit-ninja-bot
Image Recognition?
Dog. Pig. Dog. Pig. Dog. Pig.
Loaf of Bread. SYSTEM ERROR!
@@justjuniorjaww Mitchells vs. the Machines reference
do you think it would be possible to make this for mobile?
@@rainbowman4723 nah, I think mobile software needs different code (I think)
@@rainbowman4723yes
Just spent 10 minutes convincing myself that watching an AI play Fruit Ninja is the productive break I needed. Can confirm: still procrastinating.
Psst. Stop procrastinating. Go do it.
Not procrastinating, just doing side-quests
i am doing the same thing
Doing the same
Same
I’m kinda disappointed that the AI didn’t absolutely obliterate the pomegranates
yeah seeing it get like 100+ slices would have been cooler
Same here. Might just not be good enough unfortunately.
Sword draw, first form: death by a thousand cuts
It would've been even cooler if it also went for combos as well, but it's just an algorithm that looks for given images and doesn't really know the game strategies, so maybe I'm expecting too much from it.
get a better cpu and you'll get better pomegranates
I think allowing it to recognize how to combo fruits would make this even better, Ike have it wait before slicing and if multiple fruits are on the screen it does one big slice
yeah but that would add like 2 to 5 years to dev time
@@gameplaysuffering1620 99% of that time being procrastination
yeah
i got one even better, make a simple bot slice where there is motion with the bombs being a no-go zone determined by image recognition. this means it is only looking for one item, allowing it to be quick and confident in its determination, allows it to pretty easily do combos, and makes it much more responsive when slicing the fruits
@@dillzilla4454 it'll trap itself on the movement of the slice it makes though
I love how the AI was 95% certain that the bomb was a bomb, and as soon as it hit the bomb it plummeted to 80%, like the AI said "nuh-uh, that wasn't a bomb THAT WAS NOT A BOMB!"
I'm guessing that the bomb, as it was exploding, looked less and less like the bombs the AI was trained to recognize
stage 1 of grief: denial
No fruits are being harmed in the making of this video.
more like no bombs
he probably ate one during it
No, fruits were harmed in the making of this video.
Liar liar pants on fire
Except for the ones that were
Big props for making an AI on the original game! Happy to see that rather than a recreation of it
I feel like the recreation is just as cool
it shows off more fundamental ML concepts rather than "I imported an image recognition library :B"
@@elliott6158 agreed. I much prefer ML stuff when the AI has access to the data behind the game rather than just recognizing screenshots
1:43 that’s why captchas have you identify common things found on streets. You are training their self driving car ai.
Then why are the images fuzzy? Wouldn't that result in a shoddy AI?
@ they started with easier images but are slowly getting harder to closer represent the real world environments the ai would actually see. If you can understand the blurry images it teaches ai to understand blurry images.
@@nastykerb34 first of all this isn't true. the captcha images have already been recognized otherwise the captcha wouldn't work. however let's say it were true, training on fuzzy images would likely mean when it has more clarity in real life, it's just that much more accurate. however, items are often far away or in poor lighting conditions, so there's a good chance it could be for the purpose of better training on objects that are just far away.
@@reanimationxp I just saw an interview with one of the devs behind Captcha the other day and the images shown in the Captchas are actually not pre-recognized. They are shown to a bunch of people at the same time and the majority decides if something is a certain thing or not. Otherwise this would require a lot of work and would heavily limit the amount of available pictures, thereby reducing security...
If the AI sliced the menu fruit (🍉 = play again), it could keep training for hours.
Well, it's not training when it's running, so all you're doing is making it play forever, without it being trained on anything.
Maybe a modification to it to make the current AI model train a new one while it plays would be cool, but wouldn't be necessary, because this isn't meant to be a perfect AI
this ai is just the photo recognition if you want it to learn to play the game better youd need a neural network thats fed the image recognition ai data as an input
@@jmvr it takes more pictures while it's running, which can be used for training later on; this is common sense.
@@modzyy key word: _can_ be. It's not currently made to train a new AI. All it does is take a screenshot, analyze it, and do some input. Then the screenshot may as well be discarded, because it does not use the image further.
As well, AIs are trained at a specific point in the process, but aren't trained further. For example, ChatGPT will not get better when people use it, and neither will most AI models. Typically when an AI trains itself, it gets worse. By definition, the day it doesn't is the day of the AI singularity, where it can improve itself into infinity
@@jmvr yap yap yap
0:39 amazing, well said
totaly on your side
😂
Language..
@@tuloxe language..
@@evereq8970 what?
Fym language? @@evereq8970
If you think about it, YOLO (c. end of the world, 2012) and YOLO (object recognition) are really the same thing: You'll find out *very* quickly that You Only Live Once if You Only Look Once while crossing the road!
Unless anime was right and you find out that you live at least twice.
@@DanielLCarrier only works if it's a truck with its headlights on
@@windy5405 Or a tractor going 2 mph.
@locrianphantom3547 no not Kazuma.
Man that 2012 was awkward
Python is simple to create an image recognition AI with, but it is so slow that by the time it has finished processing the image, the results are already outdated. This pretty much sums up the whole situation, where the most performance-dependent tasks are solved with one of the slowest languages out there
Generally, doing image recognition in Python involves making calls to some library that is actually written in C (or a similar, actually fast language). As a result, relatively little time is spent in Python-land which mitigates the slowness of the language.
I don't know what the code used in the video looks like. There could be a bunch of complicated Python code that slows it down. Or maybe it's not Python's fault at all.
@@ahdog8 I know that Python libraries utilize C, however, the overhead that Python introduces is still too much, even with compiled C code underneath
So painful seeing him do all this when he could've just had something search the specific pixel color of the fruit/bombs to slice them instead of training a whole ai to recognize fruit
Exactly
I am guessing that if he did that then the ai would proceed to again cut the fruits which have already been sliced
@@sakshambaranwal132 I can totally imagine the AI doing that
i would love to see an updated version of the ai, doing fruit combos
because atm it slices each fruit individually
it would be so satisfying if it would do the best slicing combos possible
Now teach an AI to play Feed the Deep.
8:00 Mangoes have as much vitamin C as oranges
Trajectory prediction would be wicked to see! Especially in the case of avoiding a slice if a bomb will intercept. And couple that with what I’ve seen others say: getting combos by slicing multiple fruit in one motion. If it can predict the best time to get combos based on the trajectories, I’d love to see how high a score it can really get 😈
3:18 You know bro's been taken by the Terminator when he called the AI "His"
some time ago, they needed pigeons to do this.
I think England still does
They needed pigeons to play fruit ninja for them?
@@ThisIsAHandle-xz5yo I guess
Bro, I agree
It would be interesting to try a roguelike - pixel dungeon/shattered pixel dungeon is FOSS, and I haven't seen anyone do anything like it.
You'd have a variety of skills the AI would have to learn: fight mechanics, resource management, item mechanics and selection, synergies etc. You'd probably choose just one class, and since it's FOSS, if it's too complex you could easily tone it down by reducing items, making it set seed, making item generation deterministic, just doing one floor etc (but it'd be a hell of a video to do the whole game haha but maybe there's a reason no one's done it before).
that sounds actually sick
This channel is the most perfect example I have ever encountered of a Blue Ocean
Besides maybe the Wright Brothers
Amazing video, thanks! Can't believe you have only ~7k subscribers. 🔥
Why aren't you remaking the game?
I’m not sure how feasible it would have been to decompile the game and get the models for the fruit and bombs, but if those are obtainable then would it not have been easier to train the AI using screenshots of those models rotated programmatically?
Then, instead of searching the entire screen space for fruit, you only need to look in areas with a significant amount of pixels changed between consecutive frames, as these locations have either a fruit or bomb in them. However, if it’s a fruit, then you don’t actually care WHICH fruit it is, you cut it regardless. You ONLY need to care about whether or not it’s a bomb, and so when screen pixels change in an area, check the surrounding region for a bomb. If a bomb is not found, cut, otherwise avoid.
It should be noted too that this approach would likely have issues with areas where bombs and fruit overlap, but that can be dealt with. This approach could ALSO be done without the decompiled models at all, instead only providing the model with gameplay screenshots of bombs and fruit overlapping bombs. By providing the AI with these images, it alerts it NOT to cut them, but if it DOESN’T see matches for that data, it DOES cut whenever it sees changed pixels.
The main efficiency of this comes from not needing to run image recognition for each individual fruit, and also not running image recognition checks over the whole screen constantly, but instead only in surrounding regions where changed pixels are when those changes occur.
2:12 You don't actually, you can just use a clustering algorithm that can group all the similar objects together which you can then label
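One way that clustering idea could look: compute a crude feature (here, a two-number color summary per crop — everything below is a toy illustration, not the video's pipeline) and run a tiny k-means, so you label one cluster instead of every image:

```python
import numpy as np

def kmeans(feats, k, iters=20):
    """Tiny k-means over feature vectors (e.g. color histograms of crops)."""
    centers = feats[:k].astype(float).copy()  # deterministic init for the sketch
    labels = np.zeros(len(feats), dtype=int)
    for _ in range(iters):
        # assign each feature to its nearest center
        dists = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its members
        for j in range(k):
            if (labels == j).any():
                centers[j] = feats[labels == j].mean(axis=0)
    return labels

# toy "histograms": two watermelon-ish crops, two bomb-ish crops
feats = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
labels = kmeans(feats, 2)  # the two pairs end up in separate clusters
```

A real setup would cluster richer features (color histograms or embeddings of the YOLO crops), but the labeling shortcut is the same: name each cluster once.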
Hi, I love your channel! I would say it is the best channel for game AIs, I love your videos.
I am wondering what tool you use to train the YOLOS; thanks!
Your channel is gonna blow up dude this is really high quality and entertaining content. Keep it up!
Here's another challenge for you: try hitting combos. Basically you've got to track the motion of all the fruits by taking continuous screenshots. From that you can predict when a bunch of fruits will be closest to one another, and make one hit. Hit individual fruits only when they're at the end of their downward motion
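For the "when are two fruits closest" part, if you estimate a linear velocity for each fruit from consecutive screenshots, the closest-approach time has a closed form (all positions and velocities below are invented):

```python
import numpy as np

# Each fruit's position follows p + v*t, with v estimated from screenshots
p1, v1 = np.array([0.0, 50.0]), np.array([3.0, -1.0])
p2, v2 = np.array([40.0, 50.0]), np.array([-2.0, -1.0])

# Distance |(p1 - p2) + (v1 - v2)*t| is minimized at t* = -dp.dv / dv.dv
dp, dv = p1 - p2, v1 - v2
t_star = -np.dot(dp, dv) / np.dot(dv, dv)  # best moment for a combo slice
```

With more than two fruits you'd evaluate this pairwise (or just sample future frames) and slice at the time when the most fruits fall inside one swipe.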
Dude you are a genius🙏🙏🙏
Love the effort you put in each video ❤❤❤
I would’ve just “cut” the center of the area that popped up that didn’t match the backgrounds
throw the gameplay into After Effects -> use motion tracking -> render an image sequence -> label into separate folders and auto-rename
much easier way to do the labeling task
wake up babe Tyler just uploaded a new video
I love Yolo Ai. Amazing for digital surveillance and AI cheating in games like Counter Strike. What an amazingly versatile piece of software.
Yes, using the predictions from the previous models and re-annotating is the best option for faster fine-tuning of the model.
2:15 respect this job
Cool, it's another YouTuber that is underrated and has good content (:
1:58 lies, i have done this, but did absolutely nothing with it :)
You could've tried the first idea of image recognition but checking only for a range of colors. Each fruit would have its own specific range; if you made it narrow enough per fruit, it could've worked
Maybe another way of approaching this problem is using the AI to detect the colors instead of the fruits themselves?
Because the background color is distinctive.
In the same way you can teach AI to recognize an enemy soldier and an ally and make it shoot in less than a second. AI is becoming scary .
Well yeah, that's how classification algorithms work lol
i think just using color to find everything that isn't a bomb would be faster. the red and black on the bomb look unique.
omgg you’re only 9k subs ?! you deserve way more
You can easily increase your model's speed if you use a lower screenshot resolution (not the game resolution), then turn the image black and white, then use color contrast to make objects look clear. Also, check to click on fruit a little bit away from bombs.
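A rough numpy-only sketch of that preprocessing pipeline (the stride and the input shape are arbitrary choices for illustration; a real version would likely use OpenCV's resize and thresholding):

```python
import numpy as np

def preprocess(frame, scale=4):
    # 1) downscale cheaply by striding, 2) grayscale, 3) stretch contrast to [0, 1]
    small = frame[::scale, ::scale].astype(float)
    gray = small.mean(axis=2)
    lo, hi = gray.min(), gray.max()
    return (gray - lo) / max(hi - lo, 1e-6)

frame = np.random.default_rng(0).integers(0, 256, size=(1080, 1920, 3), dtype=np.uint8)
out = preprocess(frame)  # a quarter-resolution, contrast-stretched grayscale image
```

Fewer pixels means less work per inference, which is usually where most of the frame time goes.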
Loved your hardwork ❤❤😂
You could make a system of hierarchies, where the AI cuts the least important fruits first and then the most important ones, this way you could avoid cases where it cuts the pomegranate fruit, losing the other fruits.
it would have been so much cooler if the ai would have learned to slice more then 1 fruit at a time but still a really cool vid man keep up the good work! : )
It was pitch black in my room and I was watching this an inch from my face; it felt like a real flashbang
3:24 the voices- they won’t stop…
FISH
absolutely amazing video and also very educational, nice man
If you wanted to improve the ai, you only need 3 frames and some calculus to predict the exact trajectory of the fruit meaning you'd be able to slice every fruit on the 4th frame that it's on screen
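That's really just fitting a parabola: three (frame, y) samples pin down the vertical motion exactly under constant gravity, and x is linear. A sketch with made-up pixel positions:

```python
import numpy as np

# Observed vertical positions of one fruit over three frames (toy values)
t = np.array([0.0, 1.0, 2.0])
y = np.array([100.0, 80.0, 70.0])

# Three points determine y(t) = a*t^2 + b*t + c exactly; x(t) would be a line
a, b, c = np.polyfit(t, y, 2)
y4 = a * 3.0**2 + b * 3.0 + c  # predicted y on the 4th frame
```

For these toy numbers the fruit is decelerating toward its apex, so the prediction for frame 4 comes back up to where frame 2 was.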
instead of slicing around with a katana, which is what i imagine normal gameplay has behind the camera, the AI is just a crazy bastard dual wielding 2 european style medieval swords and stabbing them like crazy
A pixel search algorithm would've been faster, via color indexing.
All fruits have one solid color, so having a simple hex variation of lets say (green) for an apple 🍏 would be way faster and probably even better to find all the fruits.
Same goes for the bomb since it is a solid black color, so the AI never would try to attack it within a solid square hit box on screen.
that wouldn't work with bombs, you need to use the red outline for them to get their actual hitbox
the rest is fine as is, though if u wanna go fancy u can go with outline detection, or just as u said a simple HSV range for each fruit
edit: another problem would be the already sliced fruits, since they have the same color
I feel like that might cause issues with the ai going after splatter.
@@lekkobot When the reply has more likes than the original comment:
@@HitSpaceGD that's called a ratio, young one
@@-CENSORED0- thanks for enlightening me 👍
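The per-fruit HSV-range idea from this thread could look roughly like this; every threshold and pixel value below is invented, not measured from the game:

```python
import numpy as np

def color_mask(hsv, lo, hi):
    # True where every HSV channel falls inside [lo, hi]
    return np.all((hsv >= lo) & (hsv <= hi), axis=-1)

# toy 2x2 "image": a green-ish pixel, a dark pixel, a blue-ish pixel, another green
img = np.array([[[60, 200, 200], [0, 0, 30]],
                [[120, 80, 80], [62, 210, 180]]])
green = color_mask(img, np.array([55, 150, 100]), np.array([65, 255, 255]))
```

In practice you'd convert the screenshot to HSV first (e.g. with OpenCV) and keep the saturation floor high, which is also what would filter out the duller splatter left by already-sliced fruit.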
0:55 you could use spiral detection from the center and a dynamic bitmap, or just color tolerances. It'd take several thousand times less processing power and work about the same. Things like Simba have made this very easy for decades
Color ID won't work well since the fruits splatter after being sliced, meaning your AI would continue slicing fruit juices long after the fruits were sliced.
@@peterchristensen8843 that isn't ai
Idk if it's only me, but here: 5:40 you could have shown only the AI vision, or 20 seconds of one and 20 seconds of the other. Both at the same time is weird.
Vision issue
@@SasamuelTheCool maybe brainrot isn't that bad, maybe the attention span of a goldfish isn't that bad ...
MAYBE FAMILY GUY, SUBWAY SURFERS, MINECRAFT PARKOUR AND ROCKET LEAGUE CLIPS ALONG WITH THE ACTUAL VID AT THE SAME TIME ISN'T THAT BA-
Who gonna tell bro 😭🙏
Great video! I would suggest adding combos
0:39 That was a little too personal 💀
should’ve made it insane at the part where it combos
I accidentally scrolled too far 7:50
You're doing a great job in content creation.... 🎉
Bro, you chose the most complex way. All it needed was image recognition just for the bomb and for other moving objects,
and an if statement:
if object != bomb:
    slice
How would the AI be able to tell that fruits are on the screen?
Way too underrated channel ❤
Couldn't you have done some color detection? The fruits are all unique colors; you could've taken a 500x500 image, for example, of where the color is and gotten the fruit labeling data that way? Just a thought, idk.
TOP content bro
Here before this channel BLOWS UP!
I did this a while back but simpler: just looking for specific ranges of colors per pixel, since the lighting is pretty static, then made a modified TSP algo to be able to chain slices.
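A greedy nearest-neighbour tour is a common cheap stand-in for that kind of "modified TSP" chain slice (the fruit coordinates below are fabricated, and a real solver could do better than greedy):

```python
import numpy as np

def slice_order(points):
    """Visit fruits greedily, each time jumping to the nearest unsliced one."""
    remaining = list(range(1, len(points)))
    order = [0]  # start the chain at the first detected fruit
    while remaining:
        last = points[order[-1]]
        nxt = min(remaining, key=lambda i: float(np.hypot(*(points[i] - last))))
        order.append(nxt)
        remaining.remove(nxt)
    return order

fruit = np.array([[0.0, 0.0], [10.0, 0.0], [1.0, 1.0], [11.0, 1.0]])
order = slice_order(fruit)  # chains nearby fruit before jumping across the screen
```

The resulting order is what you'd feed to the mouse-drag routine as one continuous swipe path.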
there is a significantly faster way of doing this: look for clusters of pixels that have changed, and only use the AI to detect the bombs, so those become blocked-off zones for the motion detection
But then what would stop the AI from continuing to try to slice the fruits even after they've already been sliced
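A minimal frame-differencing sketch of that motion-based approach (the change threshold is arbitrary); the bomb detector would then veto any changed region inside a bomb's box, and the splatter problem above would need extra filtering on top:

```python
import numpy as np

def changed_mask(prev, curr, thresh=30):
    # Flag pixels whose largest per-channel change between frames beats the threshold
    diff = np.abs(curr.astype(int) - prev.astype(int)).max(axis=-1)
    return diff > thresh

prev = np.zeros((4, 4, 3), dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = (255, 40, 40)  # a "fruit" pops up at row 1, col 2
ys, xs = np.nonzero(changed_mask(prev, curr))  # candidate slice targets
```

Differencing two frames is far cheaper than running a detector over the whole screen, which is where the responsiveness gain would come from.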
This guy is godly
I like that he doesn't respect the viewers when they leave bad comments, and he doesn't ignore them
Make it beat the world record next.
Great job, but watching the gameplay there was a lack of combos; if you ever want to revisit Fruit Ninja, that's a thing you could look at.
Now this is a real AI. Finally!!
my personal best in that mode is close to 800
that's what happens when you have over 2 years of Fruit Ninja gameplay
I love this guy
You could've scaled down the screenshots to a much smaller size before feeding that to the AI.
You could make the AI not chop the fruits when there are bombs overlapping.
Not sure which version of YOLO you used, but v4 and v5 scaled every image down to a VERY low res, like 300x300-ish, so screen resolution shouldn't matter too much. Unless rescaling the image takes a lot of your machine's resources for some reason. From a quick Google search I see that v8 scales images down to 640x640, but that's just the first result, and I'm too lazy to catch up on YOLO development.
What program did you use for the labelling of the objects at 2:16?
Roboflow
bros ai: if its green or blue or yellow then its a fruit but if its BLACK THEN ITS A BOMB
Me with yellow and green bomb.😈
you can just hit any moving object that isn't a bomb; I think that'd be easier. But I don't know what you'd do for objects that need multiple slices
This would have been a great game to try some reinforcement learning on. Oh well.
This guy deserves more subscribers, guys.
The fruits and bombs are seemingly predetermined and the same every time, so could you just use reinforcement learning until your AI could get to a score of like 100,000?
Also could scanning the screen for non black or non brown pixels work?
3:24, FISH🐟
Could you have used pyautogui's image recognition? Because of the confidence parameter, I feel like you would only have had to take 15 different images of the fruit
Quick question, how did you get both visions side by side? I just wanna know for future use
The dev of fruit ninja made a new game called feed the deep, I suggest watching Aliensrock play it
i like this youtuber
what is the program you use for the labeling?
We got fruit ninja aim bot before GTA VI
now i want you to make an ai that can solve captcha
This video is the exact amount of time it takes me to eat a Totino's pepperoni party pizza.
AI's worst enemy: pomegranates.
Now feed the recognized images (x,y,w,h,type,confidence) to a neural network and see how well it can learn to play. Maybe it would even learn combos!
Now you need a genetic algorithm model
You could've also made it so that if the bomb is overlapping, it waits for it to stop overlapping
3:22 🐟
grandma's gonna be happy with this one
I mean the vid is fine but without any bad/dumb/dark jokes of CodeBullet it just won't cut it
What if you made it really good at detecting bombs and just made it slice everywhere else but the bombs
Please do subway surfers!!