Sorry it's late guys, but here is the code if you're interested github.com/TylerMommsen/fruit-ninja-bot
Image Recognition?
Dog. Pig. Dog. Pig. Dog. Pig.
Loaf of Bread. SYSTEM ERROR!
@@justjuniorjaww Mitchells vs the Machines reference
do you think it would be possible to make this for mobile?
@@rainbowman4723 nah, I think mobile software needs different code (I think)
@@rainbowman4723yes
Just spent 10 minutes convincing myself that watching an AI play Fruit Ninja is the productive break I needed. Can confirm: still procrastinating.
Psst. Stop procrastinating. Go do it.
Not procrastinating, just doing side-quests
i am doing the same thing
Doing the same
Same
I’m kinda disappointed that the AI didn’t absolutely obliterate the pomegranates
yeah seeing it get like 100+ slices would have been cooler
Same here. Might just not be good enough unfortunately.
Sword draw, first form: death by a thousand cuts
It would've been even cooler if it also went for combos as well, but it's just an algorithm that looks for given images and doesn't really know the game strategies, so maybe I'm expecting too much from it.
get a better cpu and you'll get better pomegranates
I think allowing it to recognize how to combo fruits would make this even better, Ike have it wait before slicing and if multiple fruits are on the screen it does one big slice
yeah but that would add like 2 to 5 years to dev time
@@gameplaysuffering1620 99% of that time being procrastination
yeah
i got one even better, make a simple bot slice where there is motion with the bombs being a no-go zone determined by image recognition. this means it is only looking for one item, allowing it to be quick and confident in its determination, allows it to pretty easily do combos, and makes it much more responsive when slicing the fruits
@@dillzilla4454 it'll trap itself on the movement of the slice it makes though
I love how the AI was 95% certain that the bomb was a bomb, and as soon as it hit the bomb it plummeted to 80%, like the AI said "nuh-uh, that wasn't a bomb THAT WAS NOT A BOMB!"
I'm guessing that the bomb, as it was exploding, looked less and less like the bombs the AI was trained to recognize
stage 1 of grief: denial
Big props for making an AI on the original game! Happy to see that rather than a recreation of it
I feel like the recreation is just as cool
it shows off more fundamental ML concepts rather than "I imported an image recognition library :B"
@@elliott6158 agreed. i much prefer ML stuff when the AI has access to the data behind the game rather than just recognizing screenshots
No fruits are being harmed in the making of this video.
more like no bombs
he probably ate one during it
No, fruits were harmed in the making of this video.
Liar liar pants on fire
Except for the ones that were
1:43 that’s why captchas have you identify common things found on streets. You are training their self driving car ai.
Then why are the images fuzzy? Wouldn't that result in a shoddy AI?
@ they started with easier images but they're slowly getting harder, to more closely represent the real-world environments the AI would actually see. If you can understand the blurry images, your answers teach the AI to understand blurry images.
@@nastykerb34 first of all, this isn't true: the captcha images have already been recognized, otherwise the captcha wouldn't work. however, let's say it were true; training on fuzzy images would likely mean that when it has more clarity in real life, it's just that much more accurate. items are often far away or in poor lighting conditions, so there's a good chance it could be for the purpose of better training on objects that are just far away.
@@reanimationxp I just saw an interview with one of the devs behind Captcha the other day and the images shown in the Captchas are actually not pre-recognized. They are shown to a bunch of people at the same time and the majority decides if something is a certain thing or not. Otherwise this would require a lot of work and would heavily limit the amount of available pictures, thereby reducing security...
@@johannesbohm6458 but if the picture is new and there is no consensus on it yet how does the captcha recognise if you are correct?
0:39 amazing, well said
totally on your side
😂
Language..
@@tuloxe language..
@@evereq8970 what?
Fym language? @@evereq8970
If the AI sliced the menu fruit (🍉 = play again), it could keep training for hours.
Well, it's not training when it's running, so all you're doing is making it play forever, without it being trained on anything.
Maybe a modification to it to make the current AI model train a new one while it plays would be cool, but wouldn't be necessary, because this isn't meant to be a perfect AI
this ai is just the photo recognition. if you want it to learn to play the game better you'd need a neural network that's fed the image recognition ai's data as an input
@@jmvr it takes more pictures while it's running, which can be used for training later on, this is common sense.
@@modzyy key word: _can_ be. It's not currently made to train a new AI. All it does is take a screenshot, analyze it, and do some input. Then the screenshot may as well be discarded, because it does not use the image further.
As well, AIs are trained at a specific point in the process, but aren't trained further. For example, ChatGPT will not get better when people use it, and neither will most AI models. Typically when an AI trains itself, it gets worse. By definition, the day it doesn't is the day of the AI singularity, where it can improve itself into infinity
@@jmvr yap yap yap
If you think about it, YOLO (c. end of the world, 2012) and YOLO (object recognition) are really the same thing: You'll find out *very* quickly that You Only Live Once if You Only Look Once while crossing the road!
Unless anime was right and you find out that you live at least twice.
@@DanielLCarrieronly work if it’s a truck with its headlight open
@@windy5405Or a tractor going 2 mph.
@locrianphantom3547 no not Kazuma.
Man that 2012 was awkward
3:18 You know bro's been taken by the Terminator when he called the AI "His"
Now teach an AI to play feed the deep.
i would love to see an updated version of the ai, doing fruit combos
because atm it slices each fruit individually
it would be so satisfying if it would do the best slicing combos possible
some time ago, they needed pigeons to do this.
I think England still does
They needed pigeons to play fruit ninja for them?
@@ThisIsAHandle-xz5yo I guess
Bro, I agree
This channel is the most perfect example I have ever encountered of a Blue Ocean
Besides maybe the Wright Brothers
python is simple to create an image recognition AI with, but it is so slow that by the time it has finished processing the image the results are already outdated. this pretty much sums up this whole situation, where the most performance-dependent tasks are solved with one of the slowest languages out there
Generally, doing image recognition in Python involves making calls to some library that is actually written in C (or a similar, actually fast language). As a result, relatively little time is spent in Python-land which mitigates the slowness of the language.
I don't know what the code used in the video looks like. There could be a bunch of complicated Python code that slows it down. Or maybe it's not Python's fault at all.
@@ahdog8 I know that Python libraries utilize C; however, the overhead that Python introduces is still too much even for compiled C code
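For what it's worth, the "thin Python, fat C" tradeoff being debated here is easy to measure: a quick micro-benchmark comparing a pure-Python pixel loop to a single NumPy call over the same fake screenshot (absolute timings will vary by machine):

```python
import time
import numpy as np

# A fake 480x640 grayscale "screenshot".
img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

# Pure-Python loop: every pixel access crosses the interpreter.
start = time.perf_counter()
total_loop = 0
for row in img:
    for px in row:
        total_loop += int(px)
loop_time = time.perf_counter() - start

# NumPy: one call, the loop itself runs in compiled C.
start = time.perf_counter()
total_np = int(img.sum())
np_time = time.perf_counter() - start

print(total_loop == total_np)   # same result either way
print(np_time < loop_time)      # the C-backed call wins
```

On a typical machine the loop is two to three orders of magnitude slower, which is why well-written Python vision code spends almost no time in Python itself.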
So painful seeing him do all this when he could've just had something search the specific pixel color of the fruit/bombs to slice them instead of training a whole ai to recognize fruit
Exactly
I am guessing that if he did that, then the AI would proceed to cut the already-sliced fruits again
@@sakshambaranwal132I can totally imagine the AI doing that
8:00 Mangoes have as much vitamin C as oranges
Trajectory prediction would be wicked to see! Especially in the case of avoiding a slice if a bomb will intercept. And couple that with what I’ve seen others say: getting combos by slicing multiple fruit in one motion. If it can predict the best time to get combos based on the trajectories, I’d love to see how high a score it can really get 😈
Why arent you remaking the game
💀
What is there to remake its perfect already
In the same way, you can teach an AI to recognize an enemy soldier and an ally and make it shoot in less than a second. AI is becoming scary.
Well yeah, that's how classification algorithms work lol
It’s nice to see someone using yolo correctly, my college final project was trying to use yolo on wav files of audio recordings to find bird calls. It was not a good idea.
It would be interesting to try a roguelike - pixel dungeon/shattered pixel dungeon is FOSS, and I haven't seen anyone do anything like it.
You'd have a variety of skills the AI would have to learn: fight mechanics, resource management, item mechanics and selection, synergies etc. You'd probably choose just one class, and since it's FOSS, if it's too complex you could easily tone it down by reducing items, making it set seed, making item generation deterministic, just doing one floor etc (but it'd be a hell of a video to do the whole game haha but maybe there's a reason no one's done it before).
that sounds actually sick
I’m not sure how feasible it would have been to decompile the game and get the models for the fruit and bombs, but if those are obtainable then would it not have been easier to train the AI using screenshots of those models rotated programmatically?
Then, instead of searching the entire screen space for fruit, you only need to look in areas with a significant amount of pixels changed between consecutive frames, as these locations have either a fruit or bomb in them. However, if it’s a fruit, then you don’t actually care WHICH fruit it is, you cut it regardless. You ONLY need to care about whether or not it’s a bomb, and so when screen pixels change in an area, check the surrounding region for a bomb. If a bomb is not found, cut, otherwise avoid.
It should be noted too that this approach would likely have issues with areas where bombs and fruit overlap, but that can be dealt with. This approach could ALSO be done without the decompiled models at all, instead only providing the model with gameplay screenshots of bombs and fruit overlapping bombs. By providing the AI with these images, it alerts it NOT to cut them, but if it DOESN’T see matches for that data, it DOES cut whenever it sees changed pixels.
The main efficiency of this comes from not needing to run image recognition for each individual fruit, and also not running image recognition checks over the whole screen constantly, but instead only in surrounding regions where changed pixels are when those changes occur.
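The motion-plus-bomb-veto idea sketched above could look like this (a minimal sketch in pure NumPy: a coarse grid stands in for real connected-component analysis, and the bomb check is a placeholder for wherever the detector would run, only on the small changed region):

```python
import numpy as np

def changed_regions(prev_frame, frame, thresh=30, min_pixels=200):
    """Return bounding boxes of regions that changed between two
    grayscale frames (candidate fruit/bomb locations)."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh
    boxes = []
    h, w = diff.shape
    cell = 80  # coarse grid cells instead of true connected components
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            if diff[y:y + cell, x:x + cell].sum() >= min_pixels:
                boxes.append((x, y, cell, cell))
    return boxes

def is_bomb(frame, box):
    # Placeholder: in a real bot this would be the recognition model,
    # run only on this small crop instead of the whole screen.
    return False

def plan_slices(prev_frame, frame):
    """Slice every changed region that the bomb check doesn't veto."""
    return [b for b in changed_regions(prev_frame, frame)
            if not is_bomb(frame, b)]
```

The payoff is exactly what the comment describes: the expensive model only ever sees small crops where motion occurred, instead of scanning the whole frame for every fruit class.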
Your channel is gonna blow up dude this is really high quality and entertaining content. Keep it up!
bro record a 1 hour long video of this and add it to any reddit ai voice video
Amazing video, thanks! Can't believe you have only ~7k subscribers. 🔥
You could make a system of hierarchies, where the AI cuts the least important fruits first and then the most important ones; this way you could avoid cases where it cuts the pomegranate and loses the other fruits.
If fruit():
Slice()
Easy bro
wake up babe Tyler just uploaded a new video
I had to label images before for yolo, except that my images were like 200 cows in a huge picture, times 1k similar pictures, and god, that is the worst thing possible. So I respect your effort in labeling these images by yourself.
instead of slicing around with a katana, which is what i imagine normal gameplay has behind the camera, the AI is just a crazy bastard dual-wielding 2 European-style medieval swords and stabbing them like crazy
3:24 the voices- they won’t stop…
FISH
Here's another challenge for you: try hitting combos. Basically you've got to track the motion of all fruits by taking continuous screenshots. From that you can predict when a bunch of fruits will be closest to one another, and make a hit. And hit individual fruits only when they're at the end of their downward motion
It was pitch black in my room watching this an inch away from my face, it felt like a real flashbang
A pixel search algorithm would've been faster, via color indexing.
All fruits have one solid color, so having a simple hex variation of lets say (green) for an apple 🍏 would be way faster and probably even better to find all the fruits.
Same goes for the bomb since it is a solid black color, so the AI never would try to attack it within a solid square hit box on screen.
that wouldnt work with bombs, you need to use the red outline for them to get their actual hitbox
the rest is fine as is tho, if u wanna go fancy u can go with outline detection or just, as u said, a simple HSV range for each fruit
#edit
another problem would be the already-sliced fruits, since they have the same color
I feel like that might cause issues with the ai going after splatter.
@@lekkobot When the reply has more likes than the original comment:
@@HitSpaceGD that's called a ratio, young one
@@-CENSORED0- thanks for enlightening me 👍
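The solid-color idea from this thread can be sketched without any ML at all; note the reference colors below are made up, not sampled from the game, and as the replies point out, splatter and sliced halves sharing the fruit's color is the real failure mode:

```python
import numpy as np

# Hypothetical reference colors (RGB); real values would have to be
# sampled from actual game frames.
FRUIT_COLORS = {
    "apple":      (80, 200, 60),    # green
    "watermelon": (220, 60, 80),    # red flesh
}
BOMB_COLOR = (20, 20, 20)           # near-black

def find_by_color(frame, tol=60, min_pixels=500):
    """Return {name: (x, y)} centroids of pixel clusters whose summed
    per-channel distance to a reference color is under `tol`."""
    found = {}
    for name, ref in {**FRUIT_COLORS, "bomb": BOMB_COLOR}.items():
        dist = np.abs(frame.astype(int) - np.array(ref)).sum(axis=2)
        ys, xs = np.nonzero(dist < tol)
        if len(xs) >= min_pixels:   # enough pixels to be a real object
            found[name] = (int(xs.mean()), int(ys.mean()))
    return found
```

The bot would then slice every centroid not labeled "bomb". An HSV version of the same thresholding would be more robust to the game's lighting effects than raw RGB distance.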
Bro just created Sukuna AI, well made video.
throw the gameplay into After Effects -> use motion tracking -> render a sequence of images -> label into separate folders and auto-rename
a much easier way to do the labeling task
Dude you are a genius🙏🙏🙏
Love the effort you put in each video ❤❤❤
Patchy the Pirate: "That's it?"
How do you make this? 🤯
Its amazing!!
Do you play Fruit Ninja on an emulator? 2:59
And, what is the app you use to create the labels? 3:41
i think just using color to find everything that isnt a bomb would be faster. the red and black on the bomb look unique.
You could’ve tried first idea of image recognition but with checking only for range of colors. And the fruits would have a specific range of it. When you would make it small for each fruit it could’ve worked
absolutely amazing video and also very educational, nice man
2:15 respect this job
Hi, I love your channel! I would say it is the best channel for game AIs, I love your videos.
I am wondering what tool you use to train the YOLOS; thanks!
Loved your hardwork ❤❤😂
Yes, using the predictions from the previous models and re-annotating is the best option for faster fine-tuning of the model.
If you wanted to improve the AI, you'd only need 3 frames and some calculus to predict the exact trajectory of the fruit, meaning you'd be able to slice every fruit on the 4th frame that it's on screen
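The three-frame idea checks out mathematically: horizontal motion is linear and vertical motion is a fixed parabola under gravity, so three samples determine the arc exactly. A sketch (frame-indexed time, no air resistance assumed):

```python
import numpy as np

def predict(samples, t_future):
    """Given (x, y) positions from 3 consecutive frames (t = 0, 1, 2),
    fit x(t) linearly and y(t) as a parabola (constant gravity),
    then extrapolate to a future frame time."""
    t = np.array([0.0, 1.0, 2.0])
    xs = np.array([p[0] for p in samples])
    ys = np.array([p[1] for p in samples])
    x_line = np.polyfit(t, xs, 1)    # constant horizontal velocity
    y_para = np.polyfit(t, ys, 2)    # gravity makes y quadratic in t
    return (float(np.polyval(x_line, t_future)),
            float(np.polyval(y_para, t_future)))
```

With predicted positions for every fruit, the bot could slice where the fruit will be rather than where it was when the screenshot was taken, which also sidesteps the processing-latency complaint from earlier in the thread.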
it would have been so much cooler if the ai would have learned to slice more than 1 fruit at a time, but still a really cool vid man, keep up the good work! : )
1:58 lies, i have done this, but did absolutely nothing with it :)
Idk if it's only me, but here at 5:40 you could have shown only the AI vision, or 20 seconds of one and 20 seconds of the other. Both at the same time is weird.
Vision issue
@@SasamuelTheCool maybe brainrot isn't that bad, maybe the attention span of a goldfish isnt that bad ...
MAYBE FAMILY GUY, SUBWAY SURFERS, MINECRAFT PARKOUR AND ROCKET LEAGUE CLIPS ALONG WITH THE ACTUAL VID AT THE SAME TIME ISN'T THAT BA-
Who gonna tell bro 😭🙏
I would’ve just “cut” the center of the area that popped up that didn’t match the backgrounds
Maybe another way of approaching this problem is using the AI to detect the colors instead of the fruits themselves?
Because the background color is distinctive.
You can easily increase your model's speed if you use a lower screenshot resolution (not the game resolution), then turn the image black and white, then use color contrast to make objects look clear. Also, check to click on the fruit a little bit away from the bomb.
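A minimal sketch of that preprocessing chain (pure NumPy, stride-based downscale for brevity; a real pipeline would use proper interpolated resizing):

```python
import numpy as np

def preprocess(frame, scale=4):
    """Shrink, grayscale, and contrast-stretch an RGB screenshot so the
    detector has far fewer pixels to chew through per frame."""
    small = frame[::scale, ::scale]          # cheap stride-based downscale
    gray = small.mean(axis=2)                # collapse RGB to grayscale
    lo, hi = gray.min(), gray.max()
    if hi > lo:                              # stretch contrast to 0..255
        gray = (gray - lo) / (hi - lo) * 255
    return gray.astype(np.uint8)
```

At scale=4 this cuts the pixel count by 16x before the model ever sees the frame, though a grayscale input only helps if the model was also trained on grayscale images.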
We got fruit ninja aim bot before GTA VI
2:12 You don't actually, you can just use a clustering algorithm to group all the similar objects together, which you can then label
should’ve made it insane at the part where it combos
0:39 That was a little too personal 💀
This video is the exact amount of time it takes me to eat a Totino's pepperoni party pizza.
Great video! I would suggest adding combos
I love Yolo Ai. Amazing for digital surveillance and AI cheating in games like Counter Strike. What an amazingly versatile piece of software.
Now you need a genetic algorithm model
omgg you’re only 9k subs ?! you deserve way more
now i want you to make an ai that can solve captcha
Cool it’s another UA-camr that is underrated and has good content (:
Making it not hit the bomb should have been easy enough
0:55 you use spiral detection from center and dynamic bitmap or just color tolerances. It'd take several thousand times less processing power and work about the same. Things like simba have made this very easy for decades
Color ID won't work well since the fruits splatter after being sliced, meaning your AI would continue slicing fruit juices long after the fruits were sliced.
@@peterchristensen8843 that isn't ai
AI's worst enemy: pomegranates.
Now feed the recognized images (x,y,w,h,type,confidence) to a neural network and see how well it can learn to play. Maybe it would even learn combos!
bros ai: if its green or blue or yellow then its a fruit but if its BLACK THEN ITS A BOMB
Me with yellow and green bomb.😈
grandma's gonna be happy with this one
Is this what computer science majors be doing?😭 I regret it now
Man I am so dumb I thought the normal version was the run where he reached 488
Make it beat the world record next.
What program did you use for the labelling of the objects at 2:16?
Roboflow
Bro you chose the most complex way, all it needed was image recognition only for the bomb and for other moving objects
And an if statement
If object != bomb:
    Slice()
How would the AI be able to tell that fruits are on the screen?
I think it's gonna be sick if you train the AI to play FNAF; imagine what it could do if you put the AI on custom night at 20/20 😂
bro is dani 2.0
keep up the good work bro
You're doing a great job in content creation.... 🎉
1:16 r35 spotted
?
@@Kavanaconsulting car
@@TsSC_unofficialwhat
@@Kavanaconsulting yes
@@TsSC_unofficial 👍
I accidentally scrolled too far 7:50
Couldn't you have done some color detection? The fruits are all unique colors; you could've taken a 500x500 image, for example, of where the color is, and gotten the fruit labeling data that way. Just a thought idk.
3:24, FISH🐟
I would love to see what happens when you train the ai on as low a resolution as you can go on the golden fruit level. I wanna see it obliterate them, dude.
Way too underrated channel ❤
I feel like fruit ninja is on everything nowadays 💀
You could probably sell this to all of the 8-year-olds around the world
Now this is a real AI. Finally!!
its happening, rise of the machines and fall of the humankind. skynet starts here
These image recognition models have been around for a while lol
there is a significantly faster way of doing this which is looking for clusters of pixels that have changed and only using the AI to detect the bombs to make sure those are in blocked off zones for the motion detection
But then what would stop the AI from continuing to try to slice the fruits even after they've already been sliced
The ai isnt even getting combo slices either
As a kid I actually managed to get over 1000 one time. I think I retired from fruit ninja at that point
The AI has not seen Arcade mode.
Oh boy..
You could have used a second screenshot to predict the movement of the fruits allowing for a single slice per fruit rather than spamming for each one.
Overall that would make the AI slower, because it has to take 1 screenshot, process it, take a second, process it, and then calculate the exact velocity of the fruit, then account for how "gravity" affects the loss of momentum.
@@Yesbutnoimnot so many idiots in the newest replies lol
you are right
@@CoolDude-mq8dh lol the only reason I bothered replying to them was because I code and it's painful watching things like this
What if you labeled only the bombs and programmed the AI to cut everything except the bomb? That would reduce the dataset.
Not sure which version of YOLO you used, but v4 and v5 were scaling every image down to a VERY low res, like 300x300-ish, so screen resolution shouldn't matter too much. Unless rescaling the image takes up a lot of your machine's resources for some reason. From a quick google search I see that v8 scales images down to 640x640, but that's just the first result I saw, and I'm too lazy to catch up on YOLO development.
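The rescaling being described can be approximated like this: a letterbox resize roughly like what YOLO does internally before inference (nearest-neighbour for brevity; the real pipeline uses interpolation, and exact padding rules differ between YOLO versions):

```python
import numpy as np

def letterbox(frame, size=640):
    """Fit a frame into a size x size square with zero padding -- so the
    capture resolution mostly just changes the cost of this one resize,
    not what the model actually sees."""
    h, w = frame.shape[:2]
    r = size / max(h, w)                          # keep aspect ratio
    nh, nw = int(round(h * r)), int(round(w * r))
    ys = (np.arange(nh) / r).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / r).astype(int).clip(0, w - 1)
    small = frame[ys][:, xs]                      # nearest-neighbour resize
    canvas = np.zeros((size, size) + frame.shape[2:], dtype=frame.dtype)
    canvas[:nh, :nw] = small
    return canvas
```

So a 1080p and a 4K capture both end up as the same 640x640 tensor; the only per-frame cost difference is this resize step.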
my personal best in that mode is closely around 800
this is what happens when you have over 2 years of fruit ninja gameplay
You could've scaled down the screenshots to a much smaller size before feeding that to the AI.