i made an ULTRAKILL AI
- Published Jul 5, 2024
- I just got sponsored by Boston Dynamics!!! they said that they are developing a secret technology which can transform Hemoglobin (a protein commonly found in red blood cells) into electricity!!!
this has so much potential for robotics😁
-----------------------------------------------------------------------------------------------
github:
github.com/marmust/ULTRANET
-----------------------------------------------------------------------------------------------
music:
1: The Cyber Grind - Meganeko
2: Synth City - Synthwave Nation
3: Ascention - Sub Morphine
4: Supernova Run - Absolute Valentine
5: Heist - El Tigr3
6: some ai generated slop (download: drive.google.com/drive/folder...)
-----------------------------------------------------------------------------------------------
looking for feedback:
1: I added a neon glow to all of the texts, even though yt compression kinda ruined it, does it look good?
2: I tried to write the script in more of a [to solve this problem, we can do...] format instead of the usual [I solved this problem by doing...]. was this a good decision?
-----------------------------------------------------------------------------------------------
timecodes:
0:00 - Intro
1:14 - General Structure Of Code
1:52 - Interface With ULTRAKILL Script
2:53 - Sensor Scripts Explanation
3:30 - Targeting Sensor (object detection)
6:00 - Depth Sensor (depth estimation)
7:44 - Unimportant Filler
8:00 - Main Behaviour Of Agent
10:37 - Last Recap
11:30 - In Game Performance
13:45 - Outro
-----------------------------------------------------------------------------------------------
thanks for watching :)
V1's AI designer explaining how he made the world's deadliest AI, circa 2112:
real 🔥
"in hindsight it was not cool" my ass, this is the coolest shit ever
“DON’T BUILD THE TORMENT NEXUS!!!!” V1 programmer:
A machine, made to play on a machine, as a machine, to fight enemies that are machines
A game without reason
the loop is complete
@@8AAFFF, NO CONCLUDING TESTAMENT, NO POINT, *PERFECT CLOSURE*
*T H I S I S T H E O N L Y W A Y I T S H O U L D H A V E C O N T I N U E D*
I use the stones to destroy the stones
Now make a machine that makes a machine that plays on a machine as a machine to fight enemies that are machines
AI done! Now we need Robot.
How about using Blood as Fuel, and Coins as Most deadly weapon?
"OH MY GOD OK ITS HAPPENING! EVERYBODY STAY CALM EVERYBODY STAY CALM!"
nvm...
this is a certified nevermind moment
this is why drones are like that
this guy coded them
Yep. Drones are meant to stay away from the crosshair and move incredibly fast to make aiming harder.
@@Thy_Punishment_Is_OOF coin
1:01 "Unlike what some other 'coders' would have done" - One minute in and you've got my like of approval 👍
Throwing shade by minute one lmao
CODE BULLET DOESNT DESERVE THIS
Sassy to a fault, but limiting a model to the player's controls is the only valid way
Downloaded and ran this myself, super cool work. Would love to see more stuff like this!
As one of those ADD people .... I really enjoyed the entire video, a really good and in-depth explanation.
me too!
It's because of the flashing lights
This video is incredibly high quality! I cannot imagine how much effort went into it, both in making the actual AI, and the editing of the video itself. The little v1 animations that play throughout are also really cool. This is the kind of quality content that keeps me on youtube. Great work!
A machine created to be the machine to kill the machines. Give it a year and we are doomed.
I initially thought that this was a Uni project but the jokes made it clear it wasn't lmao. Super well scripted and edited, wonderful video. The code was very easy to follow from start to finish in the way you presented it, dynamic all the way through.
For the added mechanics, I think coin tech is very complicated to do considering the sensors you mentioned in the video. But since the coins inherit V1's velocity and crosshair placement, one way to shoot them would be to look up (or down, which would make the coin tosses more precise, but then it has to jump), toss a coin, and shoot on the splitshot window. Though it may be easier for the AI to skip the splitshots and just do plain coinshots.
The sensing for tossing coins could be an image reference of the Marksman's display, where the green discs mean it's ready; once it switches weapons, the AI can read the display and tell if there are coins left to toss. I'm unfortunately not very knowledgeable about coding, so for the coinshots I'd just make it move in a straight line as it tosses the coin in the air, shoot it after a second or two, then return to moving and killing the way it normally does lmao.
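A crude sketch of that display check, assuming an RGB screenshot array; the region coordinates and thresholds are pure placeholders (the real position depends on resolution and HUD scale):

```python
import numpy as np

# Hypothetical screen region where the Marksman's charge discs appear
# (placeholder coordinates -- depends on resolution and HUD scale).
DISC_REGION = (slice(960, 1000), slice(1700, 1760))  # rows, cols

def coin_ready(frame: np.ndarray, green_thresh: float = 0.5) -> bool:
    """Return True if enough pixels in the disc region look green.

    frame: HxWx3 uint8 RGB screenshot.
    A pixel counts as 'green' when G clearly dominates R and B.
    """
    patch = frame[DISC_REGION].astype(int)
    r, g, b = patch[..., 0], patch[..., 1], patch[..., 2]
    green = (g > 120) & (g > r + 40) & (g > b + 40)
    return bool(green.mean() > green_thresh)
```

This avoids any extra model inference: one fixed-region colour test per weapon-swap, which is cheap enough to run every frame.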
Great job! Hope you blow up soon!
thanks :)
i did some testing with the coins and basically as u said the easiest way to shoot them
is to stay still, look fully up, and then toss. this is really useful for shooting enemies that are placed on a high ground bcs the coin actually flies up pretty high
@@8AAFFF Great to hear my input was useful! Would love to see a machine actually use coin tech, it would be pretty fun to see it happen lol.
This is an awesome video. It's always cool to see projects that use computer vision instead of modding or getting actual data directly from the game.
But why stop here, keep going, I wanna see how crazy this eventuality gets
jadokar
Okay, there's already a V1-like robot out there, now we have an AI for V1, now we need to make a blood engine and weapons with nearly infinite ammo
I feel like a sound sensor would be good to give the ai an extra level of security in its assumptions as 3d audio can clue the ai in on enemies and dangers and that it cannot see.
wow if ultrakill also has like a surround sound option then it could be really good
@@8AAFFF it has baked in surround sound. you just have to have physical outputs connected, or a soundcard that can do physical outputs without needing to have any drivers connected. (Sound Blaster Z SE user here, I've used it over 2.1 surround sound.)
we all knew this was gonna happen
this was the only way it should have ended
lore accurate v1
What a gem of a video! Instant subscription!
I'm already knee deep into your repository.
What about training your sensor script to detect red balls, and prioritize them, using their size on screen as a trigger for parry?
doing all of this with just screenshots when you could have extracted all the sensor data from unity is insane, props!
Your channel is so underrated! All your videos are just amazing
Idea on how to improve navigation:
I'm not that great at ideas, but ULTRAKILL has a navmesh (I think), so that could potentially be used.
If not, then maybe a way to generate your own navmesh?
Also, amazing video, top notch editing and explanations.
(also it's about coding, and ULTRAKILL, currently two of my favorite things, so I'm probably biased, but this is absolutely amazing keep it up!)
There are multiple problems with using the navmesh.
a) Based on the rules stated, navmesh isn't usable
b) Navmesh wouldn't be that useful (a little bit subjective)
Navmesh is, simply put, a bunch of invisible nodes/points in the map that enemies use to move around; specifically, enemies move between existing points
a) The AI can't see the navmesh. To get access to it, you would need to hook into the map files of the game, which isn't possible
b) Navmesh isn't set everywhere. You could generate it, but doing that would require some level of connection to the game files (see point a). Besides that, it's only useful for understanding where the ground is. Which is... useful, considering how the AI sometimes falls off the Cybergrind arena
I only know how navmeshes are used in other games, maybe ULTRAKILL has its own way of using it
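For what it's worth, the node-hopping described above is just graph search; a toy sketch with made-up node names, which also shows why it's useless to an external agent that can't read the node list:

```python
from collections import deque

def navmesh_path(edges, start, goal):
    """BFS over a navmesh treated as a plain graph of nodes, which is
    roughly what in-game enemies do: hop between pre-placed points.
    edges: dict mapping node -> list of connected nodes.
    Returns the node path, or None if goal is unreachable."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:  # walk parents back to start
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in edges.get(node, ()):
            if nxt not in came_from:
                came_from[nxt] = node
                frontier.append(nxt)
    return None
```

The whole algorithm depends on `edges` existing up front, i.e. on access to the game's internal data, which the screenshot-only rules forbid.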
I was almost waiting for a "Directed by 8AAFFF" at the end 😎Production quality is off the charts on your videos (as always), man. And wow; what a project, and open-sourced. 😍 Even sponsored by Boston Dynamics, dude, that's so cool, can't wait for your future projects 👏
holy crap this video is well made!
the graphics are cool!
the presentation is great for those who get stuff and those who don't
good music
great concept! love it, subbed! :D
Very good video, the editing is top tier and the project is very impressive, I was also toying around with image recognition and such with python but i would not be able to create a whole ultrakill ai. also didnt expect to hear goddamn el tigre music lol. Nice work
Not only a really cool concept and end product, but also an extremely cool video editing style!
We must put this AI into the new Boston Dynamics Atlas
This is almost just V2. Make it learn how the other weapons work, further develop the phases, and this could probably compare to an actual boss in the game.
why does this channel have so little viewers? You deserve so much more with content of this quality!
i have a feeling you might have been able to tap into the render pipeline to get the depth map
you gotta keep making this better, its an awesome project, and if it can be as good as a pro player? thats gonna be awesome
The next Watson
How would you give this thing object permanence? If it wouldn't run too bad, I'm assuming making it take older scans into account would be a good improvement
Also, you've almost convinced me to learn python solely to make a cracked Ultrakill AI
I think one of the important steps is to make it see more than 1 frame at a time. Most AIs I've seen playing games basically have a lobotomy, forgetting everything 60 times per second (or whatever the framerate is). There is no object permanence, no mental objective list; it's like a new person walking up to the computer, seeing 1 picture, and deciding what to do before the next person walks in, with no communication between them. An easy fix I've seen before is to give it a few outputs that loop back into its inputs; maybe the AI can be smart enough to figure out how to store useful data in them?
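The most minimal version of that idea is plain frame stacking: not learned memory, but it gives the model a short history window. A sketch (not from the repo):

```python
from collections import deque
import numpy as np

class FrameStack:
    """Keep the last n frames so each decision sees short-term history
    instead of a single isolated snapshot."""
    def __init__(self, n: int = 4):
        self.n = n
        self.frames = deque(maxlen=n)

    def push(self, frame: np.ndarray) -> np.ndarray:
        self.frames.append(frame)
        # until the buffer fills, pad with copies of the current frame
        while len(self.frames) < self.n:
            self.frames.appendleft(frame)
        # stack along a new leading axis: (n, H, W[, C]) model input
        return np.stack(self.frames)
```

Feeding the stacked tensor to the policy lets it infer velocities and remember briefly occluded enemies, at the cost of an n-times-larger input.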
Now its real AI machine. not just a player in form of robot
Never played ultrakill but love the editing man! Since you're labelling the youtube chapters, get rid of that top timeline and let me see more of that editing full screen 😝
For better maze navigation you could use the SLAM algorithm (Simultaneous Localization And Mapping). There are Python libraries for that, although I've never used them. SLAM is made to build a map of your environment while positioning yourself in it.
To use SLAM you will probably need a depth field (which you already have) and an estimate of your speed/traveled distance between time steps.
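The mapping half of that idea can be sketched as an occupancy grid fed by a horizontal slice of the depth map plus a dead-reckoned pose; real SLAM also corrects the pose against the map, while this sketch (all parameter values are assumptions) just trusts it:

```python
import math
import numpy as np

def update_grid(grid, pose, depth_scan, fov=math.radians(90), cell=0.5):
    """Mark occupied cells from one horizontal slice of the depth map.

    grid: 2D int array of occupancy hit counts.
    pose: (x, y, heading) estimate from integrating movement commands
          (the 'odometry' part of SLAM -- assumed given here).
    depth_scan: distances along evenly spaced rays across the FOV.
    """
    x, y, heading = pose
    n = len(depth_scan)
    for i, d in enumerate(depth_scan):
        # angle of ray i, spread symmetrically across the field of view
        ang = heading + fov * (i / (n - 1) - 0.5)
        gx = int((x + d * math.cos(ang)) / cell)
        gy = int((y + d * math.sin(ang)) / cell)
        if 0 <= gx < grid.shape[0] and 0 <= gy < grid.shape[1]:
            grid[gx, gy] += 1
    return grid
```

Even without loop closure, a grid like this would let the agent avoid re-exploring dead ends, which is most of what maze navigation needs.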
Now it's literally a machine...
feedback:
1: I added a neon glow to all of the texts, even though yt compression kinda ruined it, does it look good?
I like the sharp contrast between bright text and a black background, but I also really love the subtle bloom. It helps the content stand out without making the text hard to read. I'm also a big fan of the ASCII progress bar lol. Very cool.
2: I tried to write the script in more of a [to solve this problem, we can do...] format instead of the usual [I solved this problem by doing...]. was this a good decision?
Yes, that's it! It's a pretty efficient way of making sure you don't go on for too long on tangents that should become their own video, and helps explain the process of what you're doing to the viewer.
as an amateur programmer and ultrakill fan, this was perhaps the most interestingly amazing video i saw today!
Canon lore accurate V1
my friend made an ai that literally could play any game
its name was bob
bob has been retired and is just gonna be used as a reference to make a new ai sadly
bob had a fucking personality
Lore accurate V1
You did a really great job.
i think something similar to how photogrammetry identifies the same points in space over multiple images could help with navigation
you did great on this video btw. i like the neon glow and how the script is formatted.
oh hell yeah! I wish you well 8AAFFF!!
Random side note: I wonder how this process would work on an isometric view ARPG (like Diablo, etc.). The depth mapping wouldn't be necessary, and calculations can largely remain 2D. I think it'd probably take more object detection learning due to both enemies and obstacles needing to be identified, but once complete the bulk of the operations can live in the weights, reserving more power in the loop for gameplay tweaks and what not...
yeah when i think about it object detection is far better for 2d games
also because usually objects are not obstructed
love the editing
Kinda looks like if V1 was moving like V2
Bro really roasted the bullet. Still subbing
Cool video, just one little thing... please check your audio mixing. I could barely understand you with all the heavy bass music in the background, which made it muffled.
ah thanks ill fix it next time :)
As a TV viewer with a subwoofer this was painful to watch
is there no modding API for ultrakill or steam? I mean, surely there's a way to get input from V1's perspective without being JUUUST screenshots, right?
Bro is too good for this world
have a subscription 😎
Dope! I'm definitely going to check out the repo. Don't know that I'll be able to improve it, but it'd be awesome to try!
Yeah...my antiquated video card choked on it. This project is fun as hell, so I think it's time for an upgrade....
@@jakemeyer8188 I'd suggest at minimum a 20-series NVIDIA card or the equivalent from AMD. this needs **a lot** of processing power.
@@scarecrow5848 Thank you for the great suggestion. My NVIDIA 10 series did not like it at ALL haha
I was wondering if you could pull the depth buffer from ULTRAKILL's shader; it's generated while the game renders and determines what polys to cull from the render queue. It's formatted similarly to the AI's depth estimate. You might even get clean silhouettes of the entities on the map. I'm unsure whether this would be faster than scraping the screen, though.
a few ppl suggested that and when i think about it there is probably both a depth map and some sort of enemy mask on there. maybe its even possible to see enemies thru walls with that idk.
thanks for the suggestion :)
now its actual v1
add a void detector or a pit detector for the cg
a big fan of shotgun swapping
literally lore accurate V1
I'll be impressed when it starts chargebacking/j
WHO HIJACKED MY BODY?
let it parry please!!!!
tfw: programmed an actual war machine
"in hindsight it was not cool"
>huh
Thank you for doing a great job
...and making end of humanity one step closer
Badass
Try extracting the depth buffer instead of using an extra neural network for this. Most likely the game renders through a standard graphics API, so a depth buffer is already calculated for every frame.
good idea this would def be faster
i never dealt with shader caches but there is probably a bunch of helpful stuff from the game
This is a really good idea. I didn't even think about pulling from shader data...
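One caveat if you did pull the raw buffer: the stored values aren't linear distance. Under the classic OpenGL convention (an assumption here; Unity builds often use Direct3D with a reversed-Z range instead) you'd invert the projection like this, with `near`/`far` as guesses at the game camera's clip planes:

```python
def linearize_depth(z: float, near: float = 0.1, far: float = 1000.0) -> float:
    """Convert a raw depth-buffer sample z in [0, 1] into linear
    eye-space distance, assuming the standard (non-reversed) OpenGL
    projection. near/far are placeholder clip-plane values."""
    ndc = 2.0 * z - 1.0  # window depth -> normalized device coords [-1, 1]
    return (2.0 * near * far) / (far + near - ndc * (far - near))
```

Sanity check: z = 0 maps to the near plane and z = 1 to the far plane, and the hyperbolic curve explains why most buffer precision sits close to the camera.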
Hell yeah!
aren't we technically the AI, since we follow the rules given at the beginning but don't question the countless humans we kill on the upper layers?
lmfao rip code bullet (it does confuse me why he tends to prefer completely remaking games himself but I still love his channel and understand such a desire to c r e a t e)
Different fixations. Vivisection/reverse engineering, vs. gameplay optimization
cant you hook into the game's memory and get real game data? then you can train a model based on runtime game data and player inputs. would be interesting to use the scoring system as an actual reward/punishment system, or maybe your own artificial system so it uses weapons in a specific way, to make it not just good at playing but also good at playing cool.
This is the only way it could have ended.
Okay so I made a fork and changed *a couple things*
Lmk what you think, I should be able to finish everything tomorrow.
I do like the way that the modules I made work because it's very easy to add or remove them and stuff :D
i just checked it out and yes really solid
imageGrab is probably the worst part of this, using something else would def improve performance
also i think i just commented it out, but somewhere in the grab screenshot function there is a resize that speeds up the object detection a lot, at the cost of worse performance on small-looking enemies. so if you want to use it, the targeting might go even faster.
thanks for working on it regardless :)
@@8AAFFF I'll be getting on my PC in maybe 10 minutes or so, I'll look into that (and also look into converting the rest of your code into the structure I made)
@@8AAFFF I basically completed the transfer of everything you had to my format, the only thing I really have left to do is implement a PID controller for the rotation and it should perform basically the exact same as yours did, except at around 20fps
If you want me to, I could also start making a sensor to use enemy outlines and make contours to target, as that should be blazing fast compared to yolov5m
Also you should start a disc server or something so I can share progress :p
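On the ImageGrab bottleneck mentioned above: a common swap is the mss library, whose screenshots expose raw BGRA bytes (`ScreenShot.raw`), so you can skip the PIL image object entirely. The conversion step, which is where much of the per-frame overhead usually hides, can be done as one numpy view (a sketch; I haven't profiled it against this repo):

```python
import numpy as np

def bgra_to_rgb(buf: bytes, width: int, height: int) -> np.ndarray:
    """Convert raw BGRA capture bytes into an HxWx3 RGB array without
    an intermediate PIL image: reshape, drop alpha, reverse channels."""
    arr = np.frombuffer(buf, dtype=np.uint8).reshape(height, width, 4)
    return arr[..., 2::-1]  # channels [2, 1, 0] = R, G, B
```

Usage would be roughly `bgra_to_rgb(sct.grab(mon).raw, mon["width"], mon["height"])` with `sct = mss.mss()` and `mon` one of `sct.monitors`.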
3 years later there will be speedrun AI%
There's a minor assist option that causes all enemies to be rendered as a silhouette of a single (customisable) colour. Couldn't this be used to make the enemy detection script much more lightweight? idk if it'd mess with the depth sensor though.
dead bodies dont have silhouettes so itd also solve that issue too
bruuuh where were you when i was making the targeting sensor
thats such a good idea! besides maybe explosions warping the color but its prob fixable
thanks for the suggestion :)
@@8AAFFF do u have a discord or smthn, I was considering contributing (I’m working on another ultrakill-related project rn but once that’s done I’ll see what I could do)
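That silhouette option would reduce targeting to a colour threshold; a minimal sketch (the colour tuple and tolerance are placeholders for whatever the assist setting is set to):

```python
import numpy as np

def silhouette_mask(frame: np.ndarray, color=(255, 0, 0), tol=30) -> np.ndarray:
    """Boolean mask of pixels near the assist-mode silhouette colour.
    One vectorized comparison instead of a yolov5 forward pass; and
    since corpses keep their normal look, dead bodies drop out free."""
    diff = np.abs(frame.astype(int) - np.array(color, dtype=int))
    return diff.max(axis=-1) <= tol

def target_centroid(mask: np.ndarray):
    """Centre of the masked pixels as an (x, y) aim point, or None."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.mean()), int(ys.mean())
```

Per-enemy boxes would need a connected-components pass on top, but even the single centroid is enough to drive the aim correction loop.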
Half motivates me to make an AI to play ultrakill that actually learns how to play the game, but the game mod I'd have to create would be at best mildly annoying. Maybe if I end up finishing my machine learning library :p
In any case the editing in the video looked pretty good, the way you went about the actual ai seems to be pretty good in terms of playing without any internal access. Maybe if I get bored tomorrow I'll add some features
You have your own machine learning library? Thats impressive af
@@8AAFFF It's nothing very impressive as of yet, I'm mainly just throwing together my own version of the gym standard with shared memory buffers to make everything as fast as can be, then I'm gonna code some example algorithms and environments and see how fast it is
@@ctag07 i once tried making sort of a pytorch clone for training small models, but got stuck on backprop and abandoned it XD. Good luck with the library though
@@8AAFFF Backprop is basically where the complications come from, so that's understandable lmao. I did the same thing over a year ago and had to end up stealing backprop code from someone else, and I didn't use numpy so it was as slow as could be.
I'll definitely be taking a look at the code tomorrow and possibly reworking or adding a buncha stuff, depends on how bored I am :D
lol the diss on code bullet D:
Do NOT put this code on the Boston Dynamics Atlas robot
awesome man
I'm not sure if it's feasible, but what about accessing the game's memory during runtime? There might be addresses containing useful information, which could reduce the reliance on a model for screenshot recognition.
This could work, some1 already suggested accessing the shader cache and seeing what information they have
i wonder. can this adapt to updates?
this is what AI is good for!
im glad to know im at least better than an ai
I wanna see one an ultrakill ai using a deep neural network
6:50 your gpu already has a depth map for the game... you can probably get it out of the gpu.
Can YOUR EYES get the depth map out of the GPU? It needs to be entirely external, a glorified external autoclicker
Hey btw, do you have atleast one pair of the stripey programmer socks?
linux ahh socks
i thought it was an actual neural network 😭
but still very very impressive, i've been thinking of training an NN just for ultrakill, i just didnt know how to interface with ultrakill
at the start i did actually want to train a full neural network that would take the past 3 screenshots and turn them into controls, but either because i messed something up or didnt use a big enough network, it only worked on really basic stuff like holding the trigger when i was looking at the sky, and releasing when looking at the ground. nothing impressive, but if you can pull it off i would be really impressed :D
@@8AAFFF lol i wasnt planning on making an entire RNN/CNN for processing image data, that would be insanely hard
not because of skill but because of computing power and the time for inference
i was thinking making a bunch of "sensors" in the ultrakill mod kinda like what you did and feeding the data to the model
ah also good idea
0:01 dude doesn't even know how to use a gun.
now we just need some sort of goberment to make a war machine fueled by blood and install it with this
then we'll finally have Ultrakill on Hyper Realistic Graphics (maybe a little too realistic)
we gonna be the first to go in the machine uprising
canon
"perry" lol
++ video
Is the goal to have it only work off of a screenshot, or would you be okay with directly intercepting frame data on the gpu?
well initially i was trying to make a genetic algorithm to get scored by how much style it got in N cybergrind waves
and that would have worked off of screenshots
but especially with the depth estimation youre right it could be way faster pulling directly from the gpu
Hakita, your thoughts?
cool video
I think the shade thrown at Code Bullet is unnecessary. There are different techniques for different folks.
This reminds me of when Microsoft made a generative AI that plays Minecraft!
I was thinking you could use a photo-scanned replica of the map to search for the camera's position based on screenshots, but that sounds unreasonable in real time. Maybe if you run the game slowly enough?
idk because the game time is just a slider in the accessibility settings that goes down to 0.5
but yes actually mapping out spaces to not visit them twice is a good idea :)
omg
Damn. We gonna dead 💀
ok now program v1 telling me everything is going to be ok
mf keeps saying "As an AI language model, I can't feed you false hopes"💀
@@8AAFFF ;-;
What does TAS stand for?
tool assisted speedrun
its when someone plans out frame perfect inputs for a whole level or whatever
and then code executes them
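In other words, a TAS is a fixed input schedule played back against a deterministic game. A toy sketch, where `sim_step` stands in for whatever advances the game one tick:

```python
def replay(inputs, sim_step, frames):
    """Play back a TAS-style input script.

    inputs: dict mapping frame number -> set of keys held from that
            frame onward (frame-perfect because it's fixed in advance).
    sim_step: callable(frame, keys) that advances the game one tick.
    Returns the list of per-frame results from sim_step."""
    held = set()
    log = []
    for f in range(frames):
        held = inputs.get(f, held)  # change held keys only on listed frames
        log.append(sim_step(f, frozenset(held)))
    return log
```

The determinism requirement is the whole trick: the same schedule must produce the same run every time, which is why TAS tools pin the RNG seed and framerate.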
better than underthemayo lol