Why not convert the image to grayscale first, since you are looking for black pixels? And afterwards use a threshold so everything above zero becomes 1. That turns the problem into a very easy binary problem.
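A minimal sketch of that grayscale-and-threshold idea (the function name and input shape are illustrative, not from the video):

```python
import numpy as np

def binarize(rgb_pixels):
    # Collapse RGB to a single channel; the sum is 0 only for pure black.
    gray = np.asarray(rgb_pixels).sum(axis=-1)
    # Threshold: black stays 0, everything else becomes 1.
    return (gray > 0).astype(np.uint8)
```

With a row of pixels binarized this way, finding the platform edges reduces to locating the 0/1 transitions.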
It may seem a stupid question (I just started programming), but how does he open the terminal with the folder already opened? Every time I want to run a python script, do I have to change the folder in the terminal? What does the dot mean when he writes ./stick_hero.py, and why doesn't he write python3 to run it? PS: Sorry if these are basic questions
Small tip: It's unnecessary to write the image to a file and then open it again to read it. It probably slows the script down by quite a bit, especially with a hard drive (not an SSD). Otherwise really solid video man :)
Why not dynamically fetch the display size?
# just replace shell_command with your own shell method or connection object shell.
def display_size(self):
    size = [int(val.split("=")[1])
            for val in self.shell_command("dumpsys window | grep 'DisplayFrames'").split(" ")
            if "=" in val]
    return {"width": size[0], "height": size[1]}
Engineer man, nicely done. Q: If the start point is always the same and the height does not change, why not go to the row that contains the target piece, check that row for the red target RGB values, and subtract the difference?
Anyone figure out the source of the variance in stick length? I haven't noticed any obvious patterns from target width or distance from the stick figure. It seems to typically overshoot by 20 pixels or so, but undershoots occasionally.
Awesome! Keep it up! I've got an idea for before you begin the next game: show us how to configure your editor (Atom, right?) and the virtual Android environment, so we can sync :)
May I ask, what emulator are you using? I'm currently using BlueStacks 5 and there seem to be a lot of UI elements I don't need. The one in the video looks simple and clean. How do I do that?
Hi, I don't know if anyone is still around to answer this, but when I tried the 1400 index for the image, I got 0 transitions. I tried more values, and at 1700 I get a score of about 35 and then this error: ValueError: not enough values to unpack (expected 3, got 1). I don't know what to do here. Can you please help me out? Thanks for this amazing video.
you could make it a lot faster by waiting until the last screenshot is the same as the current one. example: loop until the screenshot bytes stay the same, then run
This tutorial is very educational in terms of learning to code stuff. And yeah, using this kind of thing for cheating takes away the fun of the game.
Did I win? What game should I do next?
Try the game go
Please do Tower Twist it will be very interesting to see how you approach it...
Dark souls
You did bro! Big time!
What about the old classic vector
EM, the creator of Stick Hero wants to know your location.
Since the guy has to run across the distance of the bridge, if you pass distance as the parameter for sleep you should get a pretty dynamic/accurate result
Someone here is smart
distance is how long it takes for the bridge to fall, but he'll also need to take into account the time it takes for the stick figure to run across.
@@undefinedchannel9916 you’d just have to divide the distance by the constant speed of a player moving per pixel and that’ll give you a near perfect rest time
i thought the same
i don't think that would work perfectly, because the distance is not the same as the time the person has to hold on the screen. the distance can be, let's say, 100 px, but i think holding for only 0.1 sec on the screen wouldn't reach that far. as you can see in the video too, it sometimes drops a little bit far, so it's not perfect, but it works. for perfection, someone would have to actually calculate the exact distance it draws in, say, 1 sec, and then derive the value for 1 millisecond from that.
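A sketch of the sleep-from-distance idea discussed above; the walking speed and base delay are made-up calibration constants you'd have to measure on your own device:

```python
WALK_SPEED_PX_PER_SEC = 500.0  # assumed: pixels the figure walks per second
BASE_DELAY = 0.5               # assumed: fixed overhead for the stick falling

def sleep_for_distance(distance_px):
    # Wait a fixed animation overhead plus time proportional to the gap.
    return BASE_DELAY + distance_px / WALK_SPEED_PX_PER_SEC
```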
Rumor has it... Engineering Man is secretly The Stick Hero
The flow of this video feels like the perfect speed for me.
Yeah, that's exactly how everything should be. Perfect
The stick growth uses an easing algorithm, so at the start it grows slowly and then grows a little faster. That's why you miss on short sticks and very long ones.
I bet you could account for that by making the percentage decrease with the distance by some other coefficient. Like instead of * .98 it's * (.98 - 0.000002*distance)
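That correction could look something like this; the .98 comes from the video, but the 0.000002 slope is a guessed constant that would need tuning:

```python
def corrected_press(distance_px):
    # Shrink the multiplier as the gap grows, compensating for the
    # easing curve in the stick's growth.
    factor = 0.98 - 0.000002 * distance_px
    return distance_px * factor
```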
Actually I tried to recreate the script, and I found it a lot easier to just find the first gap and then the little red box at the center of the second column. It worked fine
Hey can you help me set up the adb
hey can you help me with the same to find the gap between the pillars... I'm really confused!!!
I thought the same when i saw the video
Haha exactly what I was thinking
@@haydencordeiro Why do people ask for help on UA-cam comments? And then like the comment? Figure it out yourself dumbasses.
Easier way: Have the exact height for the 'Red box', and start and stop based on the distance till the Red box, without worrying about any Blacks (or even gaps) because the starting position of the Ninja remains the same on the screen.. It's a matter of minor trial and error to calibrate. Great video E-Man !
Brother Hassan
What do you think, should I start by learning Java or Python?
Someone always comes along and shortens something you thought was destined to be much more
The end of loop sleep time could be a function of the distance.
Smort
plus some time for the character to move between the 2 pillars
This was INCREDIBLY interesting! Hope you do more stuff like this in the future.
This is actually an interesting problem-solving exercise in programming to control Android. I did the same trick for the Piano Tiles game.
I love the logic used to solve problems. Want to see more videos like this.
Engineering Communicated with Unparalleled Clarity 👏🏻
Instead of multiplying by .98, you should've subtracted a constant value. The problem was that the stick doesn't spawn exactly at the start of the gap but a few pixels to the left.
Can you do a tutorial on making a basic Android app? Or go over an introduction to adb and scrcpy?
Good job!
I would have a different approach to finding the distance: find the red square that each column has using pyautogui, which has a function that finds a specific image on the screen and returns its center position, and then do all the math you did.
It would be interesting if the program could find this percentage on its own by measuring how long it takes to grow the stick a certain distance.
Stumbled across this while random video hopping on UA-cam. Fantastic - I work with Python daily, and do automation with it. I never thought about applying that to android games. You have a new subscriber here. looking forward to seeing more :)
Welcome!
@@EngineerMan Many thanks! You inspired me to try out some python adb stuff myself, so I built a Sudoku Solver for the Sudoku.com Android App: ua-cam.com/video/bTKpGMR1km0/v-deo.html Have a look if you have 30 seconds.
Btw, you can see the coordinates (it's actually the delta/change in touches) from the developer options on your android device
This comment saved me! if you want to turn on the dX, dY, Xv, and Yv coordinates, they are in your developer > input section.
Wow, this was so much fun to see, and so interesting! Thanks! Keep doing similar stuff, I think people will love it!
Taking a screenshot makes sense, but I wish there was a more "live" way of reading screen pixels in general. If you could read pixels as they update you could press the screen until a certain pixel above you turns black, always giving you the right distance.
I intend to experiment by using opencv to monitor the screen in real time then apply a solution in that way. Stay tuned.
@@EngineerMan your resolution is the "long pole in the tent" when it comes to using CV. For this simple task it would be suitable, however operating on text/more intricate designs makes it far less trivial.
Also, maybe you can consider saving the screencap to a bytesIO object to ensure you're always operating on the correct image. You can reduce your time.sleep that way as well.
Thanks for the videos! Keep it up!
Sweet and informative, thanks dude! I wanna add to the convo that ISPs will frequently reassign IPs to consumer modems and restarting it (or power events) can result in the WAN IP discussed here changing. This would require updating users and/or dns records.
First Cr1tikal and now you. The two Florida men that make Florida a proud state.
All jokes aside, I've watched your videos for a little while and I'm a huge fan. Thanks for the videos, man!
You're welcome, glad they've been helpful :)
Woah, this rekindled my interest in coding again. Nice work, EM!
I would like to see how you'd do with some bullet hell games like Bullet Hell Monday
Python is just awesome. Great video! Beyond the scope I know but I'd scan from top-left downwards to get the proper starting location of Y without hardcoding. That would suggest breaking out the transition detection into a reusable function.
Furthermore, I feel it's kind of a waste of time to store the screenshot in a file. I haven't tried the libs in question, but if it's possible to screen cap directly into memory, I'd consider it way more neat.
Finally it would be interesting to actually clock the time drawing the stick and dynamically adjust the timing along the way to get better accuracy.
Of course, that's all overkill for a proof of concept as shown here. However, it does take into account good practices and interesting extensions like reusability and device independent code. I believe the code required to do it wouldn't be too complicated.
Again, great video!
This is my favorite video of yours. Thanks! Loving python after picking it up lately.
i know i'm kinda late to the party, but i just found this today. Seems like the sleep command at the end of the loop should depend somehow on the length of the previous transition: because a longer gap takes longer to walk across, a set time interval isn't optimal.
alternatively you could put a condition at the beginning of the loop that checks the delta of your transitions to see when the screen scrolling motion stops indicating the character has arrived at the edge of the next platform and the rest of the loop is ready to go, no waiting.
Good stuff man. can't wait to see what else you've got in store for me on the channel!
This was awesome! Do more stuff like this please! Love your video style too, straight to the point and coding, keep it up :) subscribed!
Great idea! I was thinking about automating a few Android games myself too.
Can you please elaborate more about how to connect the phone and get the adb working? That's the only part this video is missing.
Android is super easy to connect via ADB. All you'll need is the adb drivers and you're good to go in the command line.
There are probably enough videos about it on youtube already :)
you can do it through wifi too, check google for how to
Great and simple approach. It's probably overkill, but instead of 'time.sleep' at the end, we can pull images, and as soon as the previous one differs from the current one, start applying the algorithm.
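A rough sketch of that polling idea, assuming a `grab` callable that returns raw screenshot bytes (e.g. a wrapper around `device.screencap`):

```python
import time

def wait_until_still(grab, interval=0.1, timeout=5.0):
    # Poll until two consecutive screenshots are byte-identical,
    # i.e. the walk/scroll animation has finished.
    prev = grab()
    deadline = time.time() + timeout
    while time.time() < deadline:
        time.sleep(interval)
        cur = grab()
        if cur == prev:
            return cur
        prev = cur
    return prev  # give up after the timeout and use the last frame
```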
if you dont want to write the image to your harddrive all the time you can use
from io import BytesIO
import numpy as np
from PIL import Image
img = np.array(Image.open(BytesIO(image)))
with numpy, PIL and io to convert it for image recognition or whatever
I've been searching for a while for a way to automate some stuff on android, i did know about adb and adb shell since i've been rooting for some years now, but damn to be able to use python as well!! that's mind blowing, is there a documentation on shell commands? Have you thought about social media automation?
you most likely will get banned by instagram, twitter, etc. if you automate follows/likes, as they have a limit
@@befruky5868 For this you're probably better off using selenium (unless it's something like Snapchat where you can't use a web client)
@@Jack-yz6yp Yep, selenium is better, but I don't think it works for Android, only on PC
I’m new to coding and I’m a bit confused at the 7:27 mark. He sets the value ignore to True, and from my very small knowledge of code I thought True has a value > 0. But in his code he checks whether ignore plus the value of the pixels is not equal to zero; if what I said about the value of True is correct, won’t the expression ignore + (r + g + b) always be greater than zero? Please correct me wherever I messed up
he never added ignore + (r + g + b)
Instead he used the "and" Python operator, which means the expression is true only if both terms are true.
so that if statement is true if ignore is true AND (r + g + b) isn't 0, which means the color isn't black
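The condition from the video boils down to something like this (the function wrapper is just for illustration):

```python
def past_start_and_not_black(ignore, r, g, b):
    # "and" short-circuits: the pixel sum is only checked once
    # ignore is True, and the result is a boolean, not a sum.
    return ignore and (r + g + b) != 0
```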
@9:10 shouldn't line 44 technically set black=black to switch back?
Why not check for the first red pixel then compute the distance ? Amazing video as always EM
I think an answer might be that the background (from what we know, it can change multiple times) may contain red pixels before the one you're trying to get the x-value of. So this algorithm doesn't work with any given background and therefore is not "automatic"!
@@spicytelescope5487 Yes I thought about that but it would be unlikely that one pixel of the row would have the exact same RGB value. But I agree that EM's way prevents it
@@spicytelescope5487 Plus we could check for the red pixel only once the first black pixel of the platform has been found, as EM said there is plenty ways to implement this
I am curious: is the app running on his Android phone, or does he have an emulator like Genymotion or something?
I'm using a physical device and scrcpy for the mirroring.
@@EngineerMan Thanks I will have to read up on that. Very neat.
@@EngineerMan would it be easier if you used an Android emulator instead? And one more question: does automating other games follow a similar workflow, i.e. screen capture, detecting a certain color on a certain part, and adding certain input?
I'm working on mobile development, and some bit of python. I think game automation would be an interesting project to work on. Idle games should be a simple project to automate, right?
@@laizerwoolf for most games it should work like this, but for shooting games you might need an ESP and capture that; as for idle games, yeah, it should be this simple
@@Ammarirfanofficial thanks for the reply!
Thank you for that, always looking for Python ideas, you've just gained a sub
This is what I needed for an automation project I have in work, thanks!😊
I think you can calculate the "line draw" rate: take a first screenshot, press for 1000 ms, then screenshot again and measure the distance (even using Photoshop or anything else). That would be a solution. I don't know if there are lags when you take a screenshot, but if not, that would be a more accurate measure.
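The calibration step could be as small as this; the stick lengths would be measured from the two screenshots, as suggested above:

```python
def ms_per_pixel(press_ms, length_before_px, length_after_px):
    # How many milliseconds of press produce one pixel of stick growth.
    return press_ms / (length_after_px - length_before_px)
```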
Got it working so that he also catches the cherries! Great channel keep it up
I think the reason why at the end it wasn't getting shorter distances perfect is because the inaccuracy in the time it touches isn't a percentage but instead a discrete value. Should be an easy fix.
NumPy accepts row, column indexing, so why did you do a list comprehension, copying all the data again? 5:15
The sleep timer could be optimized based on the distance value: the ninja travels that distance, so the animation time should roughly be a multiple of the distance value plus a set value.
At least someone is going after scammers. Not the phone company. Not the government. thank you!
I am currently learning Python and can do some basic stuff, but I enjoy watching what these modules do even though I haven't heard of most of them!!!
You should show a sped-up clip of your automated game at the end, like an hour of it playing the game sped up to a minute for us to watch. If you stick an ad in the first 3 seconds, you can most likely get good ad view time; just make sure you have attention-grabbing remix music for the sped-up clip so it holds their attention before they decide to stick through the ad.
You could use a distance multiplied by a certain coefficient as sleep time. That way, you could minimize the amount of time the bot waits on shorter distances.
doesn't it move faster on longer sticks?
Well, this one was super fun! :D Hope to see more videos of this sort! :)
Wouldn’t it be easier to search for the red box since the pillars seem to stay at the same y level all of the time?
I couldn't unpack any values from the transitions. Turned out my image[value] was too low. Works now ;)
Same here :)
@@mikaelh9584 @Martijn Facee Schaeffer
same here sir....how to solve it? change image[value] to what?
Looking forward to trying this, thanks for the video!
Hey Engineer Man, where did you learn python?
@2:15 what shortcut do you use to create IF statements? :p
Maybe the 2.5 seconds for transition could be changed as percentage of gap. Really awesome work! Congrats!!
Nice 1-1.5 hour coding task, including full environment installation (I only had PyCharm already). I did some coding with OpenCV, as I have experience working with it, and I would assume I'll code more on that side anyway. The most difficult part was detecting where the ledge the ninja is standing on is *in all situations*. In that sense this is a good task too, since your phone may have a different speed with keypresses; I had to adjust the speed by a factor of 0.73. 100 climbs(?) passed already.
I put some conditions on the coefficient: the smaller the distance, the smaller the coefficient. I think it is a great deal to also put some AI to play it!
Using the red spot on the top of the pillar as a reference point instead of the beginning of the pillar might help to make the program shorter
Instead of a coefficient there is a gap of a constant since the stick 'falls' a little ahead. Just a lil thing but hopefully will make it perfect everytime
Sleep time again should depend on distance that figure travels. Longer it travels, longer the move animation.
please help I have an error:
Traceback (most recent call last):
File "stickHero.py", line 50, in
start, target1, target2 = transitions
ValueError: not enough values to unpack (expected 3, got 0)
transitions is not filled with data; you've messed something up when checking the black values. transitions should be a list with 3 elements; instead, you've got an empty list.
try this:
pixels = [list(i[:3]) for i in image[1920]]
i don't know why EM took 1400, since our screen is 1920 pixels; no matter what, we can scan all 1920 and still get black values from the numpy array
or simply, if you want to turn on the dX, dY, Xv, and Yv coordinates, they are in your developer > input section, and you can take the coordinates manually
The fourth number “255” is the max white and “0” is black. If the pixel is solid black then it would be R(0), G(0), B(0) and solid white would be R(255), G(255), B(255). R, G, and B stand for Red, Green, and Blue. I should be right
Use a decompiler to decompile the game files and determine the real time => distance equation.
You can take screenshots every few milliseconds and compare them to know if the animation is over, maybe just the lower part of the image where nothing moves
I would really want to see how you automate a game like "Summoners War", that would be a challenge ^^
I'm using an autoclicker to finish my ToA and ToAH. You can start with that :P
This guy is really awesome 👍👍👍 keep sharing. You motivated me to learn python after watching your videos.
Man that's awesome. I wish i could be as good as you are one day. :) happy coding everybody!
The only other thing you could possibly do with this, instead of reading the pixels, is to use RNG manipulation/prediction to determine how far away the next gap will be.
Is there a way to get the screenshot image directly into the numpy array instead of saving it to disk and then loading it? It's time-consuming, that's why. I'm working on automating Piano Tiles, and saving to disk and loading is very slow. Please help. Thank you.
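One way, assuming the setup from the video where device.screencap() returns PNG bytes: decode the bytes in memory with io.BytesIO, skipping the disk entirely.

```python
import io

import numpy as np
from PIL import Image

def screenshot_array(png_bytes: bytes) -> np.ndarray:
    """Decode PNG bytes into an (H, W, C) numpy array without touching disk."""
    return np.array(Image.open(io.BytesIO(png_bytes)))

# Usage (assumption about the video's ppadb setup):
# frame = screenshot_array(device.screencap())
```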
A better approach would be to just use the row with the red dot in it, and calculate the distance to the red dot.
How did you open the game on your PC?
Can't you find the distance between the guy's red headband and the red dot? The RGB values of both will always be the same, and they're always on the same axis?
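A hedged sketch of that red-marker idea: scan one row for pixels close to a known red, then take the span between the first and last match. TARGET_RED, the tolerance, and the row index are made-up values you'd calibrate against real frames.

```python
import numpy as np

TARGET_RED = np.array([255, 0, 0])  # assumed marker color, calibrate on real frames

def red_span(image: np.ndarray, row: int, tol: int = 30) -> int:
    """Pixel distance between the first and last near-red pixel in a row."""
    diff = np.abs(image[row].astype(int) - TARGET_RED).sum(axis=1)
    hits = np.flatnonzero(diff < tol)
    if hits.size < 2:
        raise ValueError("fewer than two red pixels found in this row")
    return int(hits[-1] - hits[0])
```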
Hi, is it possible to make it take a screenshot of a specific area (X, Y) instead of the full screen, with the code below?
image = device.screencap()
with open('screen.png', 'wb') as f:
    f.write(image)
Thanks for sharing, this is really awesome, I had no idea there was a thing called Android Debug Bridge and that you could control it with Python! I am excited to create some tutorials controlling android from Python! Subscribed.
engineer man is a hero actually.... lets call him e-man
Maybe this is a stupid question, because I don't know much about this, but would it be a viable approach to check for the pixel right above the ninja to turn black, instead of calculating the time?
I can imagine getting screenshots the whole time would be bad for performance, but I wonder if it's possible to just check for one specific pixel. If so, could it be more accurate without losing too much performance?
Hey, is there any way to run code like this on the phone itself? If so, what changes would I need to make? I'm using Pydroid for Android, and its terminal can do these actions, but I have no idea if you can do the "device.shell()" part on the phone itself.
Why not convert the image to grayscale first, since you're looking for black pixels? Afterwards, use a threshold so everything above zero becomes 1. That turns it into a very easy binary problem.
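That grayscale-plus-threshold step, sketched here with plain numpy so it's self-contained (OpenCV offers the same two operations as cv2.cvtColor and cv2.threshold):

```python
import numpy as np

def to_binary(image: np.ndarray) -> np.ndarray:
    """Collapse an RGB frame to 0/1: 0 for pure black, 1 for anything else."""
    gray = image.mean(axis=2)           # naive grayscale: average the channels
    return (gray > 0).astype(np.uint8)  # everything above black becomes 1
```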
It may seem like a stupid question (I just started programming), but how does he open the terminal with the folder already open? Every time I want to run a Python script, do I have to change folders in the terminal? What does the dot mean when he writes ./stick_hero.py, and why doesn't he write python3 to run it?
PS: Sorry if these are basic questions.
I liked the way you did the logic... and Thank you for sharing.
how do you have the coordinates panel in scrcpy? I am on a mac and it only shows my phone screen, but no coordinates.
I don't think you really need to save the screenshot. You could maybe just pass the raw image you got from screencap straight to Pillow 🤔
Or use io.BytesIO
You make it look so easy. Bravo!
Small tip: it's unnecessary to write the image to a file and then open it again to read it. That probably slows the script down quite a bit, especially on a hard drive (not an SSD). Otherwise, really solid video man :)
Why not dynamically fetch the display size?
# just replace shell with your own shell method or connection object shell.
def display_size(self):
    size = [int(val.split("=")[1]) for val in self.shell_command("dumpsys window | grep 'DisplayFrames'").split(" ") if "=" in val]
    return {"width": size[0], "height": size[1]}
Which Linux distro are you using, and which desktop environment or theme? I like its minimal design.
Engineer Man, nicely done. Q: if the start point is always the same and the height doesn't change, why not go to the row that contains the target piece, check that row for the red target's RGB values, and subtract the difference?
Anyone figure out the source of the variance in stick length? I haven't noticed any obvious patterns from target width or distance from the stick figure. It seems to typically overshoot by 20 pixels or so, but undershoots occasionally.
Awesome! Keep it up! An idea for before you begin the next game: show us how you configure your editor (Atom, right?) and the virtual Android environment, so we can sync :)
Hey mate, I was wondering what software you use to get the Android screen on your PC so you can see coordinates?
May I ask what emulator you're using? I'm currently using BlueStacks 5 and there seem to be a lot of UIs I don't need. The one in the video looks simple and clean. How do I get that?
Can you do browser game automation instead of Android, using cv2 and the mouse module?
Great video man! Just one doubt: what are you using to run the game?
Hi, I don't know if anyone's around to answer this, but when I tried the 1400 index for the image, I got 0 values. I tried more values, and at 1700 I get a score of about 35 and then the error: ValueError: not enough values to unpack (expected 3, got 1). I don't know what to do here. Can you please help me out? Thanks for this amazing video.
It's scary seeing you code O.O You make it look super easy.
You could make it a lot faster by waiting until the last screenshot is the same as the current one.
Example:
loop until the screenshot bytes stay the same, then run.
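That polling idea as a sketch; grab is a placeholder for whatever returns the screenshot bytes (e.g. a lambda wrapping device.screencap()), and the poll interval and timeout are made-up defaults:

```python
import time

def wait_until_still(grab, poll_s: float = 0.05, timeout_s: float = 5.0) -> bytes:
    """Poll screenshots until two consecutive grabs are byte-identical."""
    last = grab()
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        time.sleep(poll_s)
        current = grab()
        if current == last:
            return current  # frame stopped changing: animation is over
        last = current
    raise TimeoutError("screen never settled")
```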
I have no idea what's going on, but it's super interesting, I'd like to learn to code and Python seems to be able to automate really well!
You are an inspiration. Thanks for the awesome videos
Hi! Really good video. What are you using to emulate the phone device?
Maybe you could use the distance plus some in the sleep function?
This tutorial is very educational in terms of learning to code stuff. And yeah, using this kind of thing for the purpose of cheating it takes away the fun of the game.
it only ruins the game if you didn't make a program that automates it
The game was never fun to me, until I saw this video.