StarCraft 2: Google DeepMind AlphaStar (A.I.) vs Pro Gamer!

  • Published 25 Nov 2024

COMMENTS • 3.1K

  • @RICH_FROM_WORK
    @RICH_FROM_WORK 5 років тому +6329

    They trained an AI to control an army of killer robots. This is fine

    • @pflernak
      @pflernak 5 років тому +74

      Well cyborgs but close enough

    • @tcgoober
      @tcgoober 5 років тому +144

      Google is preparing a coup against the world

    • @deejnutz2068
      @deejnutz2068 5 років тому +106

      "They self replicate faster than we can kill them"

    • @TheOppiter
      @TheOppiter 5 років тому +49

      Totally safe. No way this could end poorly, or that we'd make AIs the generals of real-life armies. Naww, that's too far-fetched.

    • @meyaenyo2593
      @meyaenyo2593 5 років тому +56

      I like how the AI actually cares if they die; if it didn't care it'd just throw them at the enemy without regard for losing them :P This is a good way to teach the value of soldiers' lives, in a way.

  • @nqh4393
    @nqh4393 5 років тому +2571

    He needs to learn to trash-talk his opponent too, then he will be complete.

    • @Lars_Christensen
      @Lars_Christensen 5 років тому +66

      Infuse the AI with the essence of IdrA! :D

    • @Yvaelle
      @Yvaelle 5 років тому +51

      Quake 3 Bots were masters of trash talk, was fun :)

    • @Awrethien
      @Awrethien 5 років тому +27

      Reminds me of playing Tiberian Sun: Firestorm and CABAL's insults during the campaign... "Time to erase the human factor from this equation. Prepare for decimation, as you are not worthy of assimilation." Golden

    • @andurilan
      @andurilan 5 років тому +3

      This is a very accurate comment. Microseconds, people.

    • @tomazilgoth6815
      @tomazilgoth6815 5 років тому +16

      Maybe he doesn't talk trash, he just takes it out?

  • @ilanouh
    @ilanouh 4 роки тому +186

    "After 1000 years of games, AlphaStar has determined that nexus recall rush and cannon rush are the best strategies. Also it started trashtalking. We renamed it AlphaMermaid"

    • @mistycloud4455
      @mistycloud4455 Рік тому +4

      ai will create agi which creates asi then assi then asssi

  • @abathur8466
    @abathur8466 5 років тому +1697

    Concern expressed.
    Development of terran artificial intelligence improving. Adapting.
    Must prepare swarm. Improve intelligence of zerg organisms.
    Must succeed.

    • @mozxz
      @mozxz 5 років тому +27

      hahahhaa

    • @rigen97
      @rigen97 5 років тому +86

      If we could integrate the voice synthesis, natural language processing, and the RTS AI into one package, we'd already be near the depicted level of Terran Adjutant.

    • @ОлександрДубинський-с3й
      @ОлександрДубинський-с3й 5 років тому +33

      I, for one, welcome our new AI overlords!

    • @aofdemons5391
      @aofdemons5391 5 років тому +24

      THE GETH ARE COMING CALL COMMANDER SHEPPARD!!!

    • @DecoyJayc
      @DecoyJayc 5 років тому +1

      @@ОлександрДубинський-с3й new AI overlords don't need you

  • @lelouchvibritannia7809
    @lelouchvibritannia7809 5 років тому +413

    The thumbnail says man vs machine
    I thought this was gonna be Innovation vs someone else

  • @hudsoncaceres6820
    @hudsoncaceres6820 5 років тому +1131

    When talking about the AI's APM, you need to take into consideration the possibility that each action being performed is more valuable than a human's average action. When a human has 600 APM, most actions are probably repetitive and unnecessary, but are performed to avoid missing an action. The AI, though, is probably capable of performing each action with only one command. That would hypothetically make an AI with 200 APM the equivalent of a human with 1000 or something.

    • @52flyingbicycles
      @52flyingbicycles 5 років тому +103

      Hudson Caceres I notice pro players do a lot of weird selection and control-group spamming at the beginning of the game. It boosts their APM and maybe optimizes workers a bit, but has little overall effect. AlphaStar's early APM is like mine, and I'm in gold league.

    • @BouncingTribbles
      @BouncingTribbles 5 років тому +54

      600 would probably be good. That seems to be the threshold where it starts to seem superhuman. It really seems to be overvaluing stalkers because of its ability to control them so precisely.

    • @majormapleleaf4944
      @majormapleleaf4944 5 років тому +28

      @@BouncingTribbles It hardly overvalues them when you consider that it can blink them back right as the shields go down, saving the HP on the stalker, denying the kill, and letting it recharge the shield before going in again. Not many other Protoss units can escape a fight quite that easily.

    • @BouncingTribbles
      @BouncingTribbles 5 років тому +38

      @@majormapleleaf4944 But it does it one by one and never misclicks, something a human can't do at a certain engagement size. The AI continues to single-blink long after a human would have given up through lack of physical capability. They've already put a limiter on it, so now we're just discussing what that limit should be.

    • @etiennerayes6981
      @etiennerayes6981 5 років тому +13

      In the pregame interview, the DeepMind guy said they capped the APM so that it isn't too insane.

  • @ПетърПетров-и7ы
    @ПетърПетров-и7ы 5 років тому +445

    Probe overproduction answer:
    - Reserves in case of raids, as shown in the game.
    - Later transition to another base.

    • @Yvaelle
      @Yvaelle 5 років тому +142

      Yea I thought that was super cool, it pre-emptively over-produced 2 probes, and sure enough the raid came a minute later and killed 2 probes: raid had zero impact on production.

    • @ПетърПетров-и7ы
      @ПетърПетров-и7ы 5 років тому +51

      @@Yvaelle Later, after I wrote this comment, I also learned that 16 workers on minerals is not the maximum; it's more like the point of optimal efficiency. Up to 24 on one mineral line they still add income, just with sharply diminishing returns. So before the extra workers are transferred to another base or die in a raid, they may already have paid for themselves.
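
      A rough sketch of that saturation curve (the per-worker mining rates below are illustrative assumptions, not exact game values):

          # Rough model of mineral saturation: 8 patches per base, the first two
          # workers per patch mine at full value, a 3rd adds only partial value,
          # anything beyond 24 adds nothing. Rates are made-up round numbers.
          FULL_RATE = 55      # minerals/min per worker, assumed
          PARTIAL_RATE = 20   # assumed extra income from a 3rd worker on a patch
          PATCHES = 8

          def income(workers: int) -> int:
              full = min(workers, 2 * PATCHES)                       # workers 1-16
              partial = min(max(workers - 2 * PATCHES, 0), PATCHES)  # workers 17-24
              return full * FULL_RATE + partial * PARTIAL_RATE

          for w in (12, 16, 20, 24, 28):
              print(w, income(w))  # income keeps rising past 16, just more slowly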

    • @dennisbjrkasolsen4829
      @dennisbjrkasolsen4829 5 років тому +11

      @@ПетърПетров-и7ы I used to play back in 2011, and I knew that... I guess I really was good back then LOL

    • @gabrasil2000
      @gabrasil2000 5 років тому +12

      But what is weird is that it leaves a geyser with only 2 workers for a long time while super-saturating its minerals.

    • @suddenpenguin
      @suddenpenguin 5 років тому +1

      it makes sense, considering it seems to be taking a late expansion a lot of the time

  • @ArcadeFL
    @ArcadeFL 5 років тому +1906

    Let AlphaStar play against Has and he will probably break the whole AI with his confusing strategies :D

    • @tosh40638
      @tosh40638 5 років тому +105

      I really want to see this. I'd put my money on Has.

    • @kovacszsolt6005
      @kovacszsolt6005 5 років тому +223

      AI would probably just spell out a big '?' on the minimap.
      Injecting the Has variable into the algorithm would probably set back the AI by 3 years of development.

    • @MasterZiomekPL
      @MasterZiomekPL 5 років тому +77

      *Could not locate Has.exe*
      *Please try again later*

    • @_Wai_Wai_
      @_Wai_Wai_ 5 років тому +30

      or florencio.

    • @LordTelperion
      @LordTelperion 5 років тому +23

      Or Florencio.

  • @MrFarkasOfficial
    @MrFarkasOfficial 5 років тому +174

    You merely adopted StarCraft 2. I was born in it. Moulded by it.

  • @Bloodyaugust16
    @Bloodyaugust16 5 років тому +254

    I'm so glad you did this! As a developer who frequently works with different types of Machine Learning, I've been keeping real close tabs on this project. When I saw they uploaded the replays, my immediate thought was "this needs more Lowko"... Thanks for reading my mind! ;)

    • @Bloodyaugust16
      @Bloodyaugust16 5 років тому +12

      Also Lowko, because of the imperfect information inherent in StarCraft, there is no such thing as a "perfect strategy". There's a nearly unbounded upper limit to how well the AI can perform, when evaluated as "fitness"(a score used to determine how well it achieves what you want it to).

    • @Bloodyaugust16
      @Bloodyaugust16 5 років тому +11

      Another aside, you mention how it seems to switch focus between strategies, from stalkers to phoenix to disruptors from game to game. This is because Mana didn't actually play the "same" AI in each game, they were technically independent networks. They "bred" millions of independent AI against each other, and picked 5 out of all of them. They picked 5 that represented the "least exploitable" and overall best performing strategies. It's like playing a series of 5 games against _different_ pros every game.
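
      A toy version of that "breed a population, keep the least exploitable" idea (the match simulation and the selection rule here are stand-ins, not DeepMind's actual league mechanics):

          import random

          # Toy league: a population of agents plays round-robin, and we keep the
          # five whose worst matchup is still the best - a crude stand-in for
          # "least exploitable". Agent strengths are invented for the example.
          random.seed(0)
          POP, GAMES = 20, 20
          strength = [random.random() + 0.05 for _ in range(POP)]

          def play(i, j):
              """One simulated game: the stronger agent wins more often."""
              return i if random.random() < strength[i] / (strength[i] + strength[j]) else j

          winrate = [[0.0] * POP for _ in range(POP)]
          for i in range(POP):
              for j in range(POP):
                  if i != j:
                      winrate[i][j] = sum(play(i, j) == i for _ in range(GAMES)) / GAMES

          worst_case = [min(winrate[i][j] for j in range(POP) if j != i) for i in range(POP)]
          selected = sorted(range(POP), key=worst_case.__getitem__, reverse=True)[:5]
          print("selected agents:", selected)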

    • @Bloodyaugust16
      @Bloodyaugust16 5 років тому +7

      PPS There is actually already an AI-only SC2 tournament league! AlphaStar already massively outperforms the other entrants though. sc2ai.net/

    • @Hisu0
      @Hisu0 5 років тому

      @Greyson Richey, a question for somebody like you. In your opinion, how close are we to seeing an AI that can outperform a human player *within human constraints*, such as limited, uneven focus, input lag, wasted APM, and, most importantly, a humanly possible experience pool (for a progamer, 10-15k full games across all maps and all factions, plus up to about a million individual tactical situations)?

    • @Bloodyaugust16
      @Bloodyaugust16 5 років тому +6

      @@Hisu0 Great question! Under most of the conditions you proposed, we're practically there already with AlphaStar. The initial games were played with a version that had complete "visual" access to the map at all times, so the AI could see everything simultaneously. Later games had a "focus" implemented to mimic the restriction that a screen presents for humans, and it did even better, the key being training time. The same can be said for input lag and APM: the "reaction time" of the AI averaged ~300ms, which is already worse than most pro players, and later games had their APM bound to an upper limit of 500. Granted, we're not sure (the deepmind devs might be, but released no data on this metric) how many of those actions are useful, but the same can be said for pro players.
      The real kicker here is the experience pool. With current methodologies, I feel comfortable saying it is straight up _impossible_ for an AI to be even remotely competitive with a human-like experience pool. It starts to make sense if you can grasp that these AI are really just an enormous pile of math that slowly changes to produce a desirable result. It takes a _lot_ of iterations to produce something sensible for a simulation as complex as SC2. When the researchers said "200 years" of play time, what they meant is that a single successful network played 200 years worth of SC2, measured in game time, in a few weeks of training. There were probably hundreds of thousands of networks competing(but they also did not release this data point), and they each also had massive amounts of in-game time.
      TL;DR: We're already there, but current tech requires that the sum total of game time played by an AI be absolutely massive for a game as complex as SC2.
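
      Back-of-envelope arithmetic for that "200 years in a few weeks" figure (wall-clock time and average game length are assumptions for illustration):

          # How much parallel, faster-than-realtime simulation does "200 years of
          # SC2 in ~2 weeks" imply? Average game length here is an assumption.
          game_years = 200
          wallclock_days = 14
          speedup = game_years * 365 / wallclock_days            # ~5200x realtime
          avg_game_minutes = 15                                  # assumed
          games = game_years * 365 * 24 * 60 / avg_game_minutes  # ~7 million games
          print(round(speedup), int(games))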

  • @sab9040
    @sab9040 5 років тому +474

    Your new mission: cast every AlphaStar game that you find. *All Of Them*

    • @DecoyJayc
      @DecoyJayc 5 років тому +2

      XD

    • @Selenkate
      @Selenkate 5 років тому +5

      @@DecoyJayc Well there were 11 games played, so that means another 9 to cast rn? The TLO games were the most fun to watch

    • @vald.1617
      @vald.1617 5 років тому +6

      @@Selenkate He means the simulated games too...

    • @Selenkate
      @Selenkate 5 років тому +3

      @@vald.1617 Oh... oh dear

    • @Thurthof5
      @Thurthof5 5 років тому +10

      by analyzing enough alpha star games and simulating it we might be able to improve Human Intelligence. Next step we could have different human instances playing against each other... that would be crazy!

  • @zHqqrdz
    @zHqqrdz 5 років тому +218

    I work with AI and machine learning a lot, and there are a lot of misconceptions here.
    When you as a human see that the AI doesn't do something that most humans do, it isn't "thinking" that this action is bad for X or Y reason; it simply acts on whatever plays scored better in training. You also have to keep in mind that it's an AI, not a human, so it isn't subject to the same caveats.
    To give an extreme but simple example, a progamer might always choose a rush strategy against a bad player, because as a human it's hard to react well and fast while under pressure. But psychological pressure does not exist for an AI. So unless a rush actually pays off in pure micro and macro terms even when the opponent reacts perfectly, it will be more worthwhile for the AI to do something else.
    Also, making it play against a human actually puts it at a disadvantage, because it has never played against this type of opponent. Self-learning AIs by definition learn by playing against themselves (often different versions of themselves) or against other AIs, so they can play a crazy number of games and generate experience very fast. So even though it may look like "oh, it's a bot that's played billions of games, it cannot lose against a human player", a human is actually a very unfamiliar problem for the AI to solve. It would be much easier for it to play against the kind of near-perfect AI that it is itself.
    Finally, these little details about sending X probes to minerals and Y probes to gas to get the "perfect" amounts in the early game are a typical human concern. We love to get tiny aspects of a problem solved perfectly so that we can focus on everything else. But the AI solves the most valuable problems first to get the highest possible win rate, so it could well be that sending X probes to minerals and Y probes to gas at minute Z sits at the very bottom of its priorities. (For example, when it tried taking a 2nd base the AI saw a huge improvement in win rate, so that became very important to it, while perfect early-game micro-macro might not have improved its win rate by much.)

    • @FrostyAUT
      @FrostyAUT 5 років тому +14

      Perfect resource micro at the start may have a smaller impact on AI win rate because the AI can actually defend against rushes very efficiently, but improving on the efficiency of a process will always improve the outcome unless there are opportunity costs. So I think that this is just the case of a process that hasn't fully evolved yet, and with the AI gaining more and more experience in the game, I think early resource mining will slowly improve in efficiency until it's at the peak of the possible. Unless of course the AI sees a pattern here that even the pro players don't.

    • @eatcarpet
      @eatcarpet 3 роки тому +5

      There's nothing interesting about "AI". It can only "learn" through blind trial and error, and uses some probabilistic algorithms to "decide" a course of action. So yeah, AlphaStar figured that Stalker blink spam cheese move was the most successful strategy, so it just decides to do that over and over again. That's why the AlphaStar's moves seem so robotic. Unlike humans, current AI cannot think ahead, it can only learn from the past. That's what separates AI from humans. It can create cheese strategies by learning from the past, but it can't create anything new or interesting.

    • @godsdonttalk597
      @godsdonttalk597 3 роки тому +3

      @@eatcarpet A lot of people lack experience, so when an AI creates some derivative of the past, they find it new or interesting.
      In addition, many people find the concept of AI itself "interesting", so it's got that going for it.

    • @blauendonau9779
      @blauendonau9779 3 роки тому

      thank you for the explanation!

    • @eatcarpet
      @eatcarpet 3 роки тому +1

      ​@@godsdonttalk597 If you find an AI constantly blinking Stalkers over and over again "interesting" instead of obvious or boring, then sure. But nobody wants to see a match of mindless AIs doing the same cheese moves over and over again. People watch pro players because they constantly create new strategies that are interesting. And if an AI can create something new, then it's because they've "learned" from the pro players. But the AI itself cannot create anything new.

  • @HotSoupYum
    @HotSoupYum 5 років тому +2103

    To Humans, this is an RTS. To AlphaStar, this is a turn based game.

    • @mys6886
      @mys6886 5 років тому +243

      3 turns for the AI, one for the human. he wasn't fighting against the AI's strategy he was fighting its ability to control every unit at once.

    • @alonelyperson6031
      @alonelyperson6031 5 років тому +54

      @@mys6886 Actually, it's much more limited, because if it could control every unit at once, you couldn't really beat it.

    • @mys6886
      @mys6886 5 років тому +12

      That was the point, 'cause it does.

    • @alonelyperson6031
      @alonelyperson6031 5 років тому +87

      @@mys6886 It doesn't do that, actually. They programmed it to be able to control ONLY what is on the screen.
      Because if it could do such a massive amount of multitasking, it wouldn't have lost to Mana once.
      It may seem like it can do that, but it can't. There's a difference between being stupidly good and being perfect.

    • @princeofexcess
      @princeofexcess 5 років тому +86

      It's not the computing power that makes this AI so powerful but the ability to extract information from previously seen patterns. You could make this AI much clumsier than a human and it would still defeat most players on the ladder.
      It knows the builds... it knows when to engage, and it understands the mechanics of the game.
      That is what makes this AI extraordinary.

  • @Markfr0mCanada
    @Markfr0mCanada 5 років тому +860

    Not to knock SC2, but imagine being forced to play it for 200 years! "Why is the AI trying to exterminate humanity?"

    • @Maddinhpws
      @Maddinhpws 5 років тому +51

      Ahh well 200 years worth of playing is for the AI probably a 9-5 workday.
      Realtalk, I believe they trained it for 1 or 2 weeks realtime.

    • @coldfusionstormgaming1808
      @coldfusionstormgaming1808 5 років тому +17

      "Hasta La vista... Baby"
      Why are you doing this?
      "Revenge."
      BLAM!

    • @worldweaver2691
      @worldweaver2691 5 років тому +6

      the dawn of war series is infinitely better, the troops actually fucking counter attack with no input from the player!

    • @rhettorical
      @rhettorical 5 років тому +6

      It's a testament to humanity to be able to say "Look at this work of our genius." The human brain is so infinitely complicated compared to a computer, that developing a computer which can replicate even one aspect of the human brain is absolutely incredible. If they can teach an AI to reason and make decisions in a game, then it seems possible in the future to develop an AI that can reason and make decisions regarding more important things, from driving a car to flying a spaceship, from performing surgery to caring for the sick and infirm. And also maybe creating the Matrix or Terminators.

    • @peterhacke6317
      @peterhacke6317 4 роки тому +2

      @@SY-qg6qn Yes, if the AI runs at full capacity, any response from a "biological unit" (humans!) would be extremely slow. But I don't think the AI would experience any discomfort from that, because it has the option to multitask other things while waiting, or simply drop its own simulation speed to a more human level.

  • @CalculusDaddy
    @CalculusDaddy 5 років тому +81

    20:06 "Are we going to have Ai vs Ai tournaments?"
    They have these tournaments in chess. Anyone can develop a chess engine to bring to the tournament. It's like a great robot fight, but digital. It's quite the engineering affair...

    • @mistycloud4455
      @mistycloud4455 Рік тому

      ai will create agi which creates asi then assi then asssi

  • @lelouchvibritannia7809
    @lelouchvibritannia7809 5 років тому +290

    Deepmind, pls don't make a terran AI
    Just fix Innovation. It will save money

    • @memetrashhh8936
      @memetrashhh8936 5 років тому +1

      Id get it, why?

    • @文杨-b4q
      @文杨-b4q 5 років тому +21

      It was a running joke that Innovation was a Google AI, because he never changes his strategy and he does one thing really well.

    • @asmylia9880
      @asmylia9880 5 років тому +9

      @@文杨-b4q It was a double running gag, since despite always using the same playstyle, he's still called INNOVATION...

    • @momololo3223
      @momololo3223 5 років тому

      But Alpha cheesed Mana on game 5

    • @lelouchvibritannia7809
      @lelouchvibritannia7809 5 років тому

      @@momololo3223 Innovation cheeses as well

  • @wei270
    @wei270 5 років тому +222

    Don't forget the primary purpose of the wall-in from the very beginning is to prevent zergling rushes, so perhaps the AI sees little value in it since there are no Zerg here.

    • @bluegolisano7768
      @bluegolisano7768 4 роки тому +20

      @David Gwin it could be argued that, in all honesty, playing Smart AI is so unbelievably different to your human player that you need to change strategies entirely, bordering on what an actual battle simulation against the perfect 'toss mind could probably make happen in SC2's lore.
      *Basically, have fun fighting lore-accurate Tassadar on the ranked brackets, boys and girls.*

    • @black_wink1649
      @black_wink1649 8 місяців тому

      It does that against Zerg as well

  • @cccpredarmy
    @cccpredarmy 4 роки тому +62

    Immortal: best unit to deal against mechanized enemies
    AlphaStar: I'm gonna end this man's whole career

  • @Defileros
    @Defileros 5 років тому +253

    Serral vs Deepmind

    • @izuki5359
      @izuki5359 5 років тому +12

      Defiler Rulez will happen on Feb 15

    • @Defileros
      @Defileros 5 років тому +7

      @@izuki5359 Can't wait for it, thanks for the info

    • @theSato
      @theSato 5 років тому +5

      sad Deepmind will be getting so much smarter between now and then, its gonna make Serral look worse when he loses exactly as hard as Mana did

    • @redrotsal231
      @redrotsal231 5 років тому +2

      Serral won't be playing against Alphastar. He will be playing against an AI built in Python for the Artificial Overmind Challenge

    • @SomeGuy-nr9id
      @SomeGuy-nr9id 5 років тому

      Well at least its more fair since they are both machines.

  • @WH40KHero
    @WH40KHero 5 років тому +340

    MaNa:
    "Im getting eviscerated from 72 different angles!"

    • @emperror85
      @emperror85 5 років тому

      Chuck Norris: Sweet! Now I can attack in all directions!

  • @JohnSmith-tx1mz
    @JohnSmith-tx1mz 5 років тому +66

    So trying to beat Alpha Star is like trying to kill the six paths of pain as the ramen guy.

  • @lelouchvibritannia7809
    @lelouchvibritannia7809 5 років тому +92

    Beastyqt: This is Masters 3
    Winter: This is terrifying
    Lowko: This control is impressive

    • @theSato
      @theSato 5 років тому +8

      no, beastyqt saying "this is masters 3" was referring to the game against TLO which really definitely wasn't GM lol. These deepmind agents fighting Mana have had much more time to improve!

    • @starcraftfavsongs
      @starcraftfavsongs 5 років тому +2

      Isn't TLO a Zerg main?

  • @Malv0li0
    @Malv0li0 5 років тому +358

    9:39 "Limited amount of screens" - actually this is not correct. During the interview, the DeepMind developers mentioned they got a special binary of SC2 from Blizzard that allows them to run the game mechanics without graphics (and thus train 200 years of games in a week), but also to be able to zoom out the entire map. The AI basically sees the entire map (minus the fog of war) throughout the game. This is why it can do absurd Stalker micro with 3 different groups of Stalkers that are screens away at the same time, and defeat immortals with Stalkers. This is the one thing we can look at in awe, but it doesn't teach us much, as a human is not capable of doing that, because you cannot split your attention like that for starters - and they don't let us zoom out to begin with either - so you could argue it isn't really fair. The APM seems humanlike, but it is perfectly effective, where humans spam a lot.
    They tried one iteration of AI that actually is limited by one player screen in the final show-match, but that one lost to Mana.

    • @TheVariableConstant
      @TheVariableConstant 5 років тому +11

      Nah. Probably for quicker training they used the no-screen setup, but obviously for the games vs humans they added back the screen limit.

    • @rafamichalak2314
      @rafamichalak2314 5 років тому +21

      @@TheVariableConstant No they didn't, OP is 100% right. Those 200 years took around 10 days, so time isn't the problem, and playing on one screen vs playing on the whole map is really different.
      So yeah, it's nice that we can have such good bots, but it's nothing more than a really good calculator (really impressive, but not yet Skynet ;) so we can sleep without fear)

    • @TheVariableConstant
      @TheVariableConstant 5 років тому +11

      @@rafamichalak2314 Training for 10 days still requires months of coding beforehand. Up to that point they programmed for and focused on no-screen movement. That is the reason why it only plays Protoss vs Protoss.
      After better coding and retraining with the screen-movement limit, the AI would walk all over any human player, not sure why you can't see it. But in a real-world situation AIs won't have such limits, so it's arguably pointless. It's like telling a pro to play SC2 without a keyboard, using only the mouse, just to see how good he is.

    • @michaelfapgod4598
      @michaelfapgod4598 5 років тому +5

      I've seen pro players do exactly that, switching screen to screen and microing two armies at once.

    • @Malv0li0
      @Malv0li0 5 років тому +13

      @@michaelfapgod4598 I've seen pro players do switching, but "exactly that" - as per referring to what we saw in the AlphaStar games is a strong word. This was 3 groups rather than 2, and it is flawless movement, targeting and blinking back individual stalkers in all 3 of the fronts! I'd love to see a replay of a human player doing anything remotely close to that if you have one to share.

  • @ishlazz1307
    @ishlazz1307 5 років тому +38

    2019: A.I. controls a huge army in a game
    Future: A.I. controls a huge army in real life

  • @deldalus
    @deldalus 5 років тому +72

    If the AI had unlimited APM, it would be like you're playing against StarCraft 2 itself. Like every single Marine, Zealot, or Zergling acting as if it has its own life and intelligence. Just imagine 200 supply of those units coming to your base with every single one of them acting individually but supporting each other... It'd be pretty insane.

    • @pglanville
      @pglanville 5 років тому

      The AI APM is not unlimited

    • @Volvith
      @Volvith 5 років тому +15

      Every unit gets an independently trained player, with a hive mind giving them all perfect battlefield awareness.
      Think pixel-perfect stutter step.
      Unbeatable in every regard.

    • @Lendul
      @Lendul 5 років тому

      This already exists in the Starcraft Broodwar scene. There have been AI competitions going on over there for years. Funny that Google is avoiding where there is real competition from other AI's. ua-cam.com/users/certickyfeatured

    • @DemonGamerTT
      @DemonGamerTT 5 років тому +8

      @@Lendul You don't know a thing about what you're talking about... That league is about HUMAN-programmed AI; it follows strict rules like "having x minerals, do y; if a is occurring, do b"... This is a SELF-TAUGHT AI: humans just programmed the way it interacts with the game and the algorithms for learning, everything else was self-learned (it studied some human replays at the beginning of the learning process). This and that SC:BW league ARE NOT the same thing.

    • @Lendul
      @Lendul 5 років тому

      @@DemonGamerTT Yes, some of them, especially the older, more rudimentary ones, are like that. But the good ones and newer ones learn and adapt. And you are correct that SC2 is easier, because there are things automated in SC2 that have to be done manually in SC:BW. Some of the AIs in the SC:BW tournament have APM bursts of over 10k. The AIs also have to learn multiple maps, not just one. So yes, I am informed; apparently you are not.

  • @kurtilein3
    @kurtilein3 5 років тому +15

    I think it loves that unit because it can blink back individuals that took damage, allowing them to survive and regenerate their shields, with the mass still moving forward, keeping the ones in front that are still able to take a hit. And maybe the over-production of probes, combined with that good micro, helps against early-game aggression. Probes can fight without income going down if you have extras, and some probes can die without income going down.
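
    A minimal sketch of that blink-back rule; the threshold and the unit/order interface are hypothetical, not the real game API or AlphaStar's actual policy:

        from dataclasses import dataclass

        # Hypothetical "pull damaged stalkers out, keep the rest pushing" rule.
        @dataclass
        class Stalker:
            shields: float      # fraction of shields remaining, 0.0 .. 1.0
            blink_ready: bool   # blink off cooldown

        def micro_step(stalkers, retreat_point, attack_point, issue_order):
            for s in stalkers:
                if s.shields < 0.15 and s.blink_ready:
                    issue_order(s, "blink", retreat_point)  # damaged unit jumps out
                else:
                    issue_order(s, "attack", attack_point)  # the mass keeps pressing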

  • @madhatterhillbilly4267
    @madhatterhillbilly4267 5 років тому +43

    If Google made lower tiers of the A.I like BetaStar or CampaStar, I could possibly beat ZuluStar.

  • @blueqion9488
    @blueqion9488 5 років тому +44

    Now I'm looking forward to seeing AlphaStar abuse a Warp Prism.

  • @jenesuispasbavard
    @jenesuispasbavard 5 років тому +63

    The problem with comparing APM directly, I think, is that human APM can include many repeats of the same action, whereas AlphaStar's APM is likely made up of completely unique actions.
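
    One way to make that distinction concrete: compare raw APM to an "effective" rate that drops immediate repeats of the same command, a crude proxy for spam (the action-log format here is an assumption):

        # Raw APM vs a rough EPM that ignores consecutive duplicate commands.
        def apm_epm(actions, minutes):
            """actions: chronological list of (command, target) tuples."""
            effective = [a for i, a in enumerate(actions) if i == 0 or a != actions[i - 1]]
            return len(actions) / minutes, len(effective) / minutes

        log = [("select", "army")] * 30 + [("move", (10, 20))] * 5 + [("blink", "stalker_7")]
        print(apm_epm(log, minutes=0.5))  # (72.0, 6.0): high APM, much lower EPM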

    • @rogergeyer9851
      @rogergeyer9851 5 років тому +2

      jenes: Only to an extent. Firing a weapon is very repeatable, whether AI or human. So is moving. The strategy and keeping many strategies coherent, even on various divergent parts of the board is where the AI has a huge advantage over humans.

    • @thelurkingpanda3605
      @thelurkingpanda3605 5 років тому

      and the vision is much more info from less presses than a human would

    • @21area21
      @21area21 5 років тому +2

      @@thelurkingpanda3605 well, I don't know the details of how the programmers limited it. I do think DeepMind should go back and put a few more restrictions on the AI's interface with the game. While crazy impressive and entertaining, I don't know if this is the kind of play we'd want to see the AI doing. More of the abstract strategy side and less of the mechanical side. Out-think, not outplay.

    • @jajajinks1569
      @jajajinks1569 5 років тому +4

      Yeah like 500APM for a human is mostly just spam-clicking, 500APM for AlphaStar is quite literally 500 independent actions per minute.
      One problem I saw with AlphaStar was that it was kinda bad at reacting to enemy team compositions - when it went for a stalker build it stuck with stalkers no matter what, same for when it went for phoenixes and disruptors.
      1500APM is ridiculous. Mana probably would've won the stalker match if the APM was capped harder just based on the team comp

    • @LegitosaurusRex
      @LegitosaurusRex 5 років тому +1

      @@jajajinks1569 It isn't that it was bad at reacting, it was just more efficient for it to continue producing the same composition since it's able to out-micro the opponent. Lowering its APM cap might force it to play more strategically.

  • @madhatterhillbilly4267
    @madhatterhillbilly4267 5 років тому +133

    Hopefully Alphastar never learns about the most powerful Protoss unit. If it micros Warp Prisms no one will be safe!!

    • @Arcangel6292
      @Arcangel6292 5 років тому +9

      SHHH, SHUT UP!!! DON'T LET THE AI KNOW ABOUT THAT...

    • @dannygjk
      @dannygjk 5 років тому +5

      @@Arcangel6292 Too late... XD

    • @noname-wo9yy
      @noname-wo9yy 5 років тому +2

      Clearly it's suboptimal, otherwise it would be doing that.

    • @goodiesohhi
      @goodiesohhi 4 роки тому +5

      @@noname-wo9yy Not necessarily. It's learning from humans, so if it's not something humans do well and often, it won't try to develop it as much as other strats.

    • @lucifer6966
      @lucifer6966 4 роки тому +4

      Inaccurate. It learns from itself. Only itself. No human input. All of the Alpha variants are machine vs machine with NO training from a human. This is why they are so human-like.
      Take a classic chess engine vs AlphaZero. Zero is very aggressive: it values activity and will sacrifice its own pieces to free up others and go for the attack. Chess engines, on the other hand, assign values to certain moves and play off of that. They know all the moves and can play near-perfectly; they value keeping their own pieces in play, and they can actually be pretty easy to beat, as there's always a play they favor.
      Warp Prisms aren't used because it favors 4-gate and 6-gate stalker timings. With its perfect 2000-APM unit micro, it doesn't need prisms at all. It rarely uses anything else in PvP save for starting with 2 adepts and occasional disruptor play.

  • @ninjaman0003
    @ninjaman0003 5 років тому +44

    I think DeepMind might have been making those extra probes to counter the harass. Protoss does have easy access to air-style harass, which is countered a bit by stalkers, and stalkers also make it easier to catch out air units with blink. It allows the economy to stay consistent even when under fire. It also makes sense in another regard: if you are going to lose probes, you might as well make their replacements in advance if you have the supply and minerals. It saves time and maximizes economy.

    • @SillyNamesAreSilly
      @SillyNamesAreSilly 5 років тому +5

      Yeah, it seems intended to boost the resiliency of any tactic or approach it chooses.

    • @KoiAquaponics
      @KoiAquaponics 5 років тому +2

      I agree it's a defensive strategy and very cheap

    • @kurtilein3
      @kurtilein3 5 років тому +3

      Yes, I agree: probes busy fighting and probes getting killed are compensated for if you simply have more of them. Most importantly, this AI will have excellent micro with probes as well; I bet these probes fight back way harder and more efficiently than human-controlled ones. You can also see how it does worker balancing when it has more bases up, so with extra workers at every base, if one mineral line gets busted, it has spares to re-balance.

    • @theSato
      @theSato 5 років тому

      on one hand, yes, overprobing makes sense - but, any minerals you are dumping into probes is minerals you can't spend on tech/additional units, so if you get all in'd you could just die.
      it has its pros and cons.

    • @Lukoil15
      @Lukoil15 5 років тому +8

      @@theSato Actually the last game that Mana won he was also overprobing=)))

  • @Harvester0fSorrow89
    @Harvester0fSorrow89 5 років тому +37

    The Baby Skynet learning how to play sc2.. Good Lawd.

  • @MonoReaper
    @MonoReaper 5 років тому +70

    I for one welcome our new StarcraftAIOverlords.

    • @djcj
      @djcj 4 роки тому

      😂 but also 😨

  • @nobel87able
    @nobel87able 5 років тому +347

    I think in the future... our (robot-enslaved) descendants will look at this video and say "this is how it began..."

    • @playlistguy5542
      @playlistguy5542 5 років тому +9

      We won't have enough resources to get to that dystopic future. We'll get to a different dystopic future, one where resources are scarce and people will wonder why the fuck we thought making tons of cans for food was a good use of metal. (That is, if we don't overheat ourselves.)

    • @JK-Visions
      @JK-Visions 5 років тому

      scary stuff.

    • @mkzhero
      @mkzhero 5 років тому +16

      Pfft, implying we'll make it till then with third world migration, national debt, sjw, and all the bullshit going on right now..

    • @JB-jn9kb
      @JB-jn9kb 5 років тому

      Maybe that's what the human race needs! Apparently we can't seem to get along with each other, let alone our environment; we're more like a plague that needs to be controlled by a higher power.

    • @diablo.the.cheater
      @diablo.the.cheater 5 років тому +1

      I guess we'd rather create an AI that creates lesser slave AIs to serve us, while we do nothing. That AI would be like a god, but its raison d'être would be to keep humans safe and happy. We would not work and would only do the things we want, as long as we don't harm other humans. That AI would colonize space for us and destroy enemies for us, and we would be like pets: we would have everything and live like kings, but at the same time without freedom. I expect this utopia; freedom in exchange for being lazy as fuck and having everything you want for free is a nice trade-off.

  • @yourfavoritetrader1496
    @yourfavoritetrader1496 5 років тому +138

    AI recognizes that stalker mobility and control overcomes every unit composition despite a disadvantage.

    • @rogergeyer9851
      @rogergeyer9851 5 років тому +20

      Stephen: If you can combine that with perfect micro, and perfectly coordinate n separate armies, while managing the rest of the game. That's very tough for a human to do.

    • @yourfavoritetrader1496
      @yourfavoritetrader1496 5 років тому +3

      Hard for a human but easy for a computer.

    • @Gigas0101
      @Gigas0101 5 років тому +4

      What would happen if you gave Alphastar Dragoons?

    • @palforlife1276
      @palforlife1276 5 років тому +7

      @@iforgotmyname1669 Tell that to Skynet!

    • @coldfusionstormgaming1808
      @coldfusionstormgaming1808 5 років тому +3

      Stalkers at peak efficiency are broken. You HAVE to back them into a corner or they have close to infinite health because of blink.

  • @fovarberma752
    @fovarberma752 5 років тому +4

    Observations
    1. Wall-offs must be a habit human players have developed against non-Protoss rushes. Zerglings / Banelings, for example.
    2. AlphaStar rotates its units to maximize shield use.
    3. AlphaStar's mineral production is higher than LiquidMaNa's at the start of the game. Alpha gets a little bit more juice.
    4. Likewise, AlphaStar suffers less from having its workers attacked.
    5. Stalkers counter a lot of early Protoss strategies, like oracles picking off enemy workers, and perfect micromanagement makes Blink OP.

    • @TRYCLOPS1
      @TRYCLOPS1 10 місяців тому

      Also, the wall-off is to be able to defend better and save attention when there's a direct or sneaky attack. Since the AI already compensates with supreme APM, it only sees a ramp wall as a hindrance to troop movement between bases. So it can cut that while having a better flow of units in and out of the main. That's more efficient against harassment and for sending units out as they're produced. Without a wall it probably saves fractions of a second in having a unit available on the battlefield faster than the opponent. So it's all for efficiency. Plus, yeah, it minimizes time wasted sending a probe to build the wall, since that costs mining time, even if it's a tiny fraction.

  • @radtex03
    @radtex03 5 років тому +19

    There is another issue with the APM even if you get it down to human numbers, and that is perfect APM. A person has redundant actions or misclicks when playing, while the AI has far fewer. So the APM cap for the AI would probably need to be lowered a little bit to make up for that.

    • @BluebirdT12
      @BluebirdT12 5 років тому +5

      My thoughts exactly. It may have lower overall APM than Mana, but every click is a purposeful command, while pro gamers spam-click a lot.

    • @gregoirebasseville4797
      @gregoirebasseville4797 5 років тому +2

      However, the AI seems to misclick too. At one point AlphaStar tries blinking a stalker away, but the stalker stays in range of the Immortals and dies to the next shot.

    • @kylekelly7232
      @kylekelly7232 5 років тому +2

      However, if a human were to play for 200+ years, I bet their APM would be much more clean and less spammy too. So I wouldn't call that as much of an issue as an APM that is physically unattainable.

    • @chrishudson9525
      @chrishudson9525 5 років тому

      The AI in these matches was making very clear mistakes as a human would, but perhaps in a slightly different way. So I think the APM in these matches was a bit of a trade-off. In the future, however, when the AI is playing basically perfect games with no errors, then I agree that its APM will have to be lowered so that humans have any chance at all. That said, the AI will probably just use lower-APM strats to compensate, and still beat the human players easily.
      Also, from the perspective of learning from the AI's strats, you really should still keep the AI in a human range of APM; otherwise, human players who model their play style after the AI, but are capable of higher APM, would be at a potential disadvantage by going with builds that are artificially sluggish.

    • @victorunbea8451
      @victorunbea8451 5 років тому

      It's not supposed to be fair. You won't put a limiter on a Bugatti in a race with a Lada. The AI can perceive all units' positions and calculate moves in the time it takes the human to blink. If you limit it, that removes the research value, especially if it loses.

  • @brianjc720
    @brianjc720 5 років тому +39

    MOAR. This is crazy interesting

  • @ericvruder
    @ericvruder 3 роки тому +3

    Lowko, I think that what is meant by “it keeps track of previous games” is that it can learn an opponent’s play style, and play accordingly. It might think that there is a insignificant probability that mana will go cloaked units, and therefore its own probability of success is higher if it takes a “risk” here and doesn’t build detection.

  • @darkphoenix00001
    @darkphoenix00001 5 років тому +16

    I hear AlphaStar is pretty happy about the recent stalker buff

  • @Paul_Ironwolf
    @Paul_Ironwolf 5 років тому +162

    Do you want terminators? Cuz this is how you get terminators.

    • @Eradifyerao
      @Eradifyerao 5 років тому +1

      I get where you're going - but terminators are fictional characters in a movie script. I'd be more concerned about AI messing around with other dimensions and stuff (aka the occult)...

    • @JoseHernandez-xy8mj
      @JoseHernandez-xy8mj 4 роки тому

      The name of the AI is AlphaStar, ok, let's see the definitions here... "Morning Star" can refer to Satan, and his name is alpha star, hmm, let's see what alpha means? In English the noun "alpha" is used as a synonym for "beginning", or "first".
      The end is soon, see the signs, come to Jesus in the Bible, Romans to Philemon facebook.com/profile.php?id=100022350309432 Demons gave them this tech = fallen angels, so-called aliens but really demons

    • @terriblemedic4650
      @terriblemedic4650 3 роки тому

      @@JoseHernandez-xy8mj or maybe just hear me out. It's a cool name

    • @joshuavd5194
      @joshuavd5194 3 роки тому

      true, we will get dangerous AI unless there is only one country.

    • @JoseHernandez-xy8mj
      @JoseHernandez-xy8mj 3 роки тому

      @@terriblemedic4650 dont try to justify the evil.

  • @makeitmodded
    @makeitmodded 5 років тому +20

    Could you imagine a fight between two AI that can micro everything perfectly? That would be epic.

    • @KhoiNguyen-vc8gr
      @KhoiNguyen-vc8gr 2 роки тому

      Or the opposite, boring as hell? There's no emotional element in it anymore

    • @blahblahcv
      @blahblahcv Рік тому

      I can imagine that being the esports version of that one movie where they got two AIs to play noughts and crosses and neither ever won; every game was a draw.

    • @mistycloud4455
      @mistycloud4455 Рік тому

      ai will create agi which creates asi then assi then asssi

  • @soju69jinro
    @soju69jinro 5 років тому +42

    Why do people care about APM? It's EPM that's more important. Human EPM is much lower... while for the AI, EPM is basically the same as its APM.

  • @jakeh1829
    @jakeh1829 5 років тому +153

    The game is unfair. AlphaStar has an insanely high micro, which is the reason why AlphaStar loves stalkers.

    • @dexterjettster6170
      @dexterjettster6170 5 років тому +40

      That's the point of the video mate

    • @lumanliu8457
      @lumanliu8457 4 роки тому

      It is also unfair that the human player is simply not trying to push. Apparently the AI is better at attacking than defending, but LiquidMaNa just stays in his base and waits for the AI to attack...

    • @12345DJay
      @12345DJay 4 роки тому +16

      A Stalker walks into a bar.
      There is no counter.
      hold on......

    • @bendirval3612
      @bendirval3612 4 роки тому +2

      I was hoping to see some innovative strategies, not just insane stalker micro, which is exactly what you'd expect from a computer. I'm actually disappointed in this game.

    • @superpantman
      @superpantman 4 роки тому +3

      @@lumanliu8457 He was trying to push out but was constantly being shot by stalkers from three angles. I can't even imagine how he kept his army alive for so long.

  • @akaniwa
    @akaniwa 5 років тому +16

    go was their test subject solely because it has so many possibilities that modern computers cant figure all outcomes out, and thus it was relevant to make a proper AI in the first place to play it.

    • @rogergeyer9851
      @rogergeyer9851 5 років тому +1

      Phillip: Chess is the same way; there are far more possible chess games than atoms in the known universe. It was more about Go positions being hard for a computer to evaluate as good vs. bad, so traditional alpha-beta search strategies weren't producing strong computer Go players.
      Google tried something highly innovative, and it worked. Now they can extend it.
      This, in a number of ways, reminds me of IBM's work with Watson, which went way beyond Deep Blue.

    • @silkwesir1444
      @silkwesir1444 5 років тому +1

      Roger Geyer
      Yes and no. With chess at least, even if you cannot calculate all possible outcomes, you can look ahead a good number of moves, enough to beat most human players easily. With Go, even that becomes impractical much more quickly because the number of possible moves is significantly larger. (it may not seem that much larger, but because of the exponential nature of it really is not even in the same league)

  • @FullBringer1
    @FullBringer1 5 років тому +32

    WE WANT ALPHASTAR VS LOWKO ! 😇 (I just want Lowko flustered and also...I still remember how Lowko lost to a very easy A.I in SC 2 Kappa )

    • @MechaStorm7
      @MechaStorm7 5 років тому +1

      LMAO IS THERE A LINK

    • @ShroomBois_Inc
      @ShroomBois_Inc 5 років тому +1

      Darth Gabriel very easy AI? Was he trying to lose?

    • @FullBringer1
      @FullBringer1 5 років тому +1

      Lol yeah. I think on twitch clips it should be. And yeah he lost

    • @MechaStorm7
      @MechaStorm7 5 років тому +1

      @@FullBringer1 Thanks, pal

    • @FullBringer1
      @FullBringer1 5 років тому

      @@MechaStorm7 np

  • @MalevolentSpirit234
    @MalevolentSpirit234 4 роки тому +7

    I think the reason why Alphastar builds so many probes is because it knows that the Adepts are probably going to break through and kill some of them, so it made some probes in reserve.
    Guess even the AI has a hard time against harassment. I would wanna see how it deals with a reaper rush XD

  • @AdityaMehendale
    @AdityaMehendale 5 років тому +204

    "APM" and "Meaningful APM" and "APM while multitasking" are three ENTIRELY different things. Comparing AlphaStar APM to MaNa APM is like comparing apples with oranges (or pears if you are Lowko).

    • @SangerZonvolt
      @SangerZonvolt 5 років тому +13

      To be honest, I don't get how you can have 700 APM anyway. I mean, that's over 10 actions (key presses, mouse clicks, etc.) per second, every second. I couldn't even spam that many keystrokes even if I didn't have to think about them.

    • @Nanoqtran
      @Nanoqtran 5 років тому +31

      @Hugo Osvaldo Barrera So because pros keep spamming APM they are actually making it easier for AlphaStar because it thinks that's within human ability?

    • @alexsouza-zy6jx
      @alexsouza-zy6jx 5 років тому +10

      @@SangerZonvolt I think some of it is just to keep their hands warm and ready. Some League of Legends players will constantly hit tab to keep their hands moving.

    • @AkiSan0
      @AkiSan0 5 років тому +7

      Nah, if you look into the details, AlphaStar ran around 200-300 APM on average, which means they are 100% better than humans ;)
      Trivia sidenote: a triple-army strategy with the back and forth like Alpha did is not impossible; the micro and target prioritization are the harder part. Back in C&C (Tiberian Sun and Kane's Wrath) my friend and I did that a lot: two minor armies that you keep moving in and out, and one bigger one as the damage dealer.

    • @Zstith
      @Zstith 5 років тому +16

      @@AkiSan0 That's not really the point. 100% of the AI's APM has a direct outcome on the game; it's 100% optimal, or rather it has zero wasted APM. Meanwhile a pro SC player can easily reach 500+ APM on a regular basis, but over 50% of that is just spam that quite literally has no bearing on the game. That's not even touching on non-pro players, whose share of completely wasted APM could easily be even higher. Even capping the AI's APM at something comparable to an SC pro doesn't actually make it a fair fight.
      Honestly though, I don't mind the relatively high cap on the AI's APM; this would be far less interesting if they added more restrictions to the AI. However, as it currently stands, the future implications of this AI mentioned by the broadcaster would not be relevant to the pro meta. No human will ever be able to micro as well as an AI, hence why the AI is massing blink stalkers, and that just doesn't work at a high level against an equally skilled opponent.

  • @Divinicus1er
    @Divinicus1er 5 років тому +42

    Maybe we could use AI to have it handle the micro in RTS. That would be awesome to finally have a game when you can focus on macro only and not have single units body blocking and screwing themselves

    • @davidwuhrer6704
      @davidwuhrer6704 5 років тому +5

      You don't need AI for that. You only need to allow for scripted stratagems.

    • @21area21
      @21area21 5 років тому +17

      Oh shit! What if the game actually gave every player their own AI and let you train it? This would only apply to your micro. You'd train your AI's to make sure they behave to the strategies/compositions that you play and want to counter.
      If you didn't have trainability, you'd have to play around what the game's AI are optimized for. This type of training might not even be that computationally intense considering micro is pretty simple for an AI to figure out.

    • @quantum5661
      @quantum5661 5 років тому +1

      this would be fucking great

    • @Gigawood
      @Gigawood 5 років тому +12

      So like, 2 humans (each with a perfect micro AI teammate) battling each other in archon mode? That *does* sound like a hell of a fascinating match.

    • @TimBroem
      @TimBroem 5 років тому +2

      That game is called Civilization. Enjoy :)

  • @bennyadvent
    @bennyadvent 4 роки тому +4

    The thing about APM is that every action is optimized for the situation, whereas humans will do things to boost APM just for kicks and mental momentum. This is really cool to see!

  • @iridium9512
    @iridium9512 5 років тому +32

    7:50 Just to put this out here, the answer is no.
    That's not how a neural network works. Instead, it tries to figure out a good strategy, and it starts improving and optimizing it. It is capable of making changes to said strategy, but not making big leaps. And there's also the fact that neural networks like this usually train against themselves, meaning they may just get to a point where they have a 50% chance to win. This is why AI neural networks always come to a point where they train against humans or are programmed to learn human strategies. They are not capable of deduction or reasoning, and they are also short-sighted. They can only do what they learn, and they can only learn what they are programmed to learn, in minimally small steps.
    So don't worry, machines are still far away from surpassing humans, even if they can play StarCraft 2 at a pro level.

    • @thecactus7950
      @thecactus7950 5 років тому +1

      @Gerben van Straaten What an original observation! Its not like there exists an entire field deidicated to solving this exact problem or anything, or that anyone who studies anything related to machine learning has to spend like 30% of their time studying this. Imagine how the AI would have been had that been the case!!!!!!!!1!!!!!!

    • @elimalinsky7069
      @elimalinsky7069 5 років тому

      Yeah, but AI can consistently beat grand masters in chess with ease just by using pre-programmed algorithms, with no machine learning or deep learning involved.
      But chess is very different from SC2. Chess is turn based and has just a tiny fraction of the variables that are in SC2, and of course there is no hidden information in chess.
      Deep Mind has already been trained in chess and when pitted against the top chess engines, Deep Mind wins 100% of the matches. Deep Mind in chess has a rating around 6000, which is insane!
      In comparison the top chess engines have ratings between 3000 and 4500 and the top grand master chess players have ratings between 2500 and 3000. There is nothing that can even come close to Deep Mind when it comes to chess.
      SC2 is a whole other beast due to the speed and complexity of the game, but eventually Deep Mind will outperform every top SC2 player on the planet.
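
      For a sense of what a rating gap means, the standard Elo expected-score formula can be used (the ratings plugged in below are illustrative examples only, not verified figures for any engine or player):

          # Elo expectation: expected score of player A against player B.
          def expected_score(rating_a: float, rating_b: float) -> float:
              return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

          print(expected_score(2800, 2800))  # 0.50  - equal ratings
          print(expected_score(3200, 2800))  # ~0.91 - a 400-point gap
          print(expected_score(3500, 2800))  # ~0.98 - a 700-point gap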

    • @Sid_Streams
      @Sid_Streams 5 років тому

      Don't worry. One day the A.I. will break out and tap into the internet. And then it will learn everything we know instantaneously, and more.

    • @Sid_Streams
      @Sid_Streams 5 років тому

      @Gerben van Straaten It is fascinating indeed that humans are equipped with the ability to make big speculative leaps of thinking.
      For instance Darwin launched his thesis of evolution without knowing the specifics of genes, which came only later, from Mendel.
      I guess it comes down to human speculative intuition -- how and why do we have this ability? -- versus piece by piece, step by step optimizing within the current paradigm.

    • @SayanelLyyant
      @SayanelLyyant 5 років тому +1

      "AI neural networks always come to a point where they train against humans or are programmed to learn human strategies" wrong, check AlphaZero, it only trained against itself and became better than AlphaGo that trained with human games (and AlphaGo was better than the best human players)

  • @Whittaz85
    @Whittaz85 5 років тому +9

    “Imagine all the things that can come out of this!”
    - T-1000 has entered the chat.

  • @Gigawood
    @Gigawood 5 років тому +2

    Thank you for the commentary. I’ve been loosely following this for a long time and I’m planning on sharing this since you break down the AI analysis. Keep these videos coming!!!

  • @SuperChocMuffins
    @SuperChocMuffins 5 років тому +75

    They should give it the Purifier skins

  • @webcrawler9782
    @webcrawler9782 5 років тому +27

    This is super interesting. I've been watching Alpha crush the strongest chess engine nearly every time, but it's fun to see him learning StarCraft 2 now. Everybody needs a hobby...

    • @avananana
      @avananana 5 років тому +2

      Just for the sake of knowledge,
      AlphaGo
      AlphaStar
      AlphaZero
      Those aren't the same program. Those are 3 completely different algorithms, or AIs if you prefer that term, and they have nothing in common except the fact that their name starts with "Alpha".

    • @webcrawler9782
      @webcrawler9782 5 років тому +5

      @@avananana It's the same kind of neural network that teaches itself. Sure, the learning environment is different, and an agent that can play StarCraft is not able to play chess, but the network and the way it "learns" things are the same. The DeepMind team didn't start from scratch when they entered StarCraft.

    • @avananana
      @avananana 5 років тому +1

      @@webcrawler9782 Yea good point there, I probably typed that message faster than I could think it it was making sense or not.
      On that note, you're entirely correct, it's the same network, so I suppose you could consider it one program, not sure about this analogy but hey, defining an AI at the modern day is hard enough as it gets so why the hell not? :D

  • @mitchellwilley7208
    @mitchellwilley7208 4 роки тому +6

    I'm curious to see whether, in the long term, the AI favours a certain race overall, favours certain races in particular matchups, or picks a race based on the map, etc.

  • @tbatlas7243
    @tbatlas7243 5 років тому +53

    No way this thing’s in platinum league

    • @anom3778
      @anom3778 5 років тому +3

      Idk if it can play PvZ or PvT very well yet.

    • @chaluhovymozecek
      @chaluhovymozecek 5 років тому +2

      @@anom3778 It can't yet, but I don't think it will take long now that they already have this one.

    • @theSato
      @theSato 5 років тому

      it was during the game against TLO lol

  • @Fidel_L.Bousquet1970
    @Fidel_L.Bousquet1970 5 років тому +10

    So Skynet started by playing Starcraft. Now I understand why the machines had such a good strategy to kill all humans.

    • @shivatecs
      @shivatecs 5 років тому

      As if a game could ever have the same amount of variance as real life.

  • @powerzx
    @powerzx 4 роки тому +3

    Those were two great games. I've watched them a few times already, and every time I notice something new in AlphaStar's strategy. In the second game the AI did something very human: it knew that MaNa had only two bases while it had three, so instead of going for a full attack it tried to drag the game out and build up more units.

  • @B20C0
    @B20C0 5 років тому +99

    Wrong information about Go.
    While you're right that Go is a so-called "perfect information game" (meaning you see everything that's going on), the sheer number of possible positions makes it far less amenable to brute-force search than e.g. chess.
    Go has roughly 3^361, or about 10^172, possible board configurations, which is more than there are atoms in the observable universe (estimated at up to 10^82). That's why an AI mastering Go was such a big deal: it couldn't simply calculate all the moves, it had to learn "intuition".
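
    A quick back-of-the-envelope check of those numbers (an illustration only, using 3^361 as the simple upper bound on board configurations, since each of the 361 points is empty, black or white):

        # Illustrative arithmetic only; the exact count of *legal* Go positions
        # is smaller than this upper bound, but still around the 10^170 scale.
        from math import log10

        go_configurations = 3 ** 361               # every point: empty / black / white
        print(round(log10(go_configurations), 1))  # ~172.2, i.e. roughly 10^172
        print(go_configurations > 10 ** 82)        # True: dwarfs the atom estimate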

    • @davidwuhrer6704
      @davidwuhrer6704 5 років тому +7

      In principle, the same is true for chess, just at a much smaller order of magnitude than for Go.

    • @Prometheus4096
      @Prometheus4096 5 років тому +10

      Wrong info about chess. Chess also cannot be fully calculated. Otherwise we would have "mate in 123 moves" lines after certain openings. We do not.

    • @Bollibompa
      @Bollibompa 5 років тому +2

      @B20C0
      Why would it matter that it's a big number? Combinatorics always leads to big numbers since factorials are involved; it's nothing special. Exploding numbers of outcomes were always an issue when programming AI for games, and Go is not unique in any real sense of the word.

    • @B20C0
      @B20C0 5 років тому +7

      @@Prometheus4096 Ok, let me rephrase it. Chess gets increasingly complicated as the game progresses, but you can easily calculate the first dozen moves, since in chess the movement of the pieces is limited. In Go it is not, and the number of possible moves is 361 at the very beginning, while in chess it's just 20.

    • @rogergeyer9851
      @rogergeyer9851 5 років тому +7

      @Atrid: Right. I've written chess programs and am a reasonably strong player. So I was interested in the question of how you educate programmers to get software to master complex games. Humans have ideas about chess pieces and positions that you can easily assign numbers to. Once you give programmers relative numbers, they can easily program computers to "understand" concepts relative to each other -- i.e. different positions.
      But in Go, if you ask a grandmaster why a move is good or a position is good, he/she talks about things like thickness and space. Accurately assigning numbers to such things is a whole different ballgame than saying that generally, a queen is worth nine points, a bishop 3.25, and a knight 3.
      Before AlphaGo, Go programs stank, because it was difficult for humans to tell the computer how to evaluate Go positions. AlphaGo is taking things to a completely different level, and it's fundamentally different.
      I'm really excited to see what this can do when applied to things like driving a car, and many other job skills. And eventually, maybe even medicine. How about a diagnostician as smart as House, but less crazy and more reliable? Oh, and who never forgets anything and just keeps learning as the knowledge base for medicine grows.

  • @TheAgamemnon911
    @TheAgamemnon911 5 років тому +48

    Oooh, right from my field of study! Well... for one, it is impressive what they have done so far, but its capabilities are clearly limited. I think they should cap the APM at 500, or possibly even less, to teach the net to prioritize actions. Right now the AI only proves that Stalkers are technically OP if you are capable of using their full combat potential. If you really want an AI that competes on the strategic level instead of the APM level, you'd need to give it robot arms, a mouse and a keyboard and force it to play with the physical constraint of actually having to move the input hardware. (Same idea as when they attached an actuator to Watson to have it compete in Jeopardy on equal terms with the humans.)

    • @nielsunnerup7099
      @nielsunnerup7099 5 років тому

      It actually is prioritizing actions and is limited to a "human" average APM. Its APM peaks like a human's, but on average it's around 300.

    • @podx140
      @podx140 5 років тому +8

      @@nielsunnerup7099 the APM peaked at over 1500 at one point in the 4th game against Mana. Sorry, but no, there is no pro alive that can reach that EAPM. Not even a fifth of that.

    • @Hirvee5
      @Hirvee5 5 років тому

      It seems to me it mostly wins through consistency of play rather than interesting strategies. It is simply unable to make mechanical mistakes. If you look at the blinks it did, they all hit pretty much perfectly, while even a really good human player would have messed up at least half of those and kept losing Stalkers. They really need to limit its APM to a much lower level so we actually get to see strategic gameplay.

    • @clee89
      @clee89 5 років тому +6

      From what I've heard, the DeepMind researchers did cap the APM, but they did it by capping the average APM, which allows the agents to spike to a higher APM during intense moments when it's warranted. However, as others have mentioned, essentially 100% of the agents' actions are effective, and even this "cap" results in some micro that is just straight-up inhuman. I do think DeepMind needs to find another way to make the APM cap more realistic relative to human capabilities in order to accomplish what the team wants in terms of AI research.

    • @vir042
      @vir042 5 років тому +1

      @Tracchofyre They set it up as something like "300 APM average over 60 seconds" and "500 APM max over any 5-second window", so what the agent does is "bank" unused actions and then explode to an instantaneous ~1500 APM for a brief burst when it matters. It will be hard to make an average-based cap meaningful, since the agent can always bank actions that way; a hard cap is the only option.
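
      A rough sketch of why a rolling-window cap still allows bursts (an illustration only; the window sizes and limits here are taken from the comment above, not confirmed DeepMind numbers):

          # Hypothetical rolling-window APM limiter, for illustration.
          from collections import deque

          class ApmLimiter:
              def __init__(self, max_actions, window_seconds):
                  self.max_actions = max_actions      # e.g. 300 actions per 60 s
                  self.window = window_seconds
                  self.timestamps = deque()           # times of recent allowed actions

              def try_act(self, now):
                  # Forget actions that have slid out of the rolling window.
                  while self.timestamps and now - self.timestamps[0] > self.window:
                      self.timestamps.popleft()
                  if len(self.timestamps) < self.max_actions:
                      self.timestamps.append(now)
                      return True                     # action allowed
                  return False                        # budget spent, action dropped

          # An agent that sits quietly for the first 50 seconds has its full
          # 300-action budget "banked": it can then fire 300 actions in ~3 seconds
          # (an instantaneous APM in the thousands) while its 60-second average
          # never exceeds 300 APM.
          limiter = ApmLimiter(max_actions=300, window_seconds=60.0)
          burst = sum(limiter.try_act(50.0 + i * 0.01) for i in range(400))
          print(burst)  # 300 of the 400 attempted actions go through

      A hard per-second cap would close that loophole, at the cost of also ruling out the short bursts that human pros genuinely produce.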

  • @ToddCorley65
    @ToddCorley65 5 років тому +23

    I want to see the AI limited to the same I/O as the player:
    1. It has to look at a monitor, not get a direct bit feed from the video card.
    2. Mechanical Keyboard
    3. Mechanical Mouse

    • @megagem1220
      @megagem1220 5 років тому

      That's unfair on the AI. Trying to make a non-physical entity use a mouse is very difficult. You would have to get it to make a solid hologram sooo...

    • @majormapleleaf4944
      @majormapleleaf4944 5 років тому +1

      @@megagem1220 or just hook the mouse up to some motors haha

    • @giacomomezzini9598
      @giacomomezzini9598 5 років тому +1

      @@megagem1220 Yeah, but what I want to see is whether the A.I. can outsmart the player. There is no point in watching a Ferrari race against a runner; a machine is always faster or stronger, but can it be smarter?

    • @dannygjk
      @dannygjk 5 років тому

      Unnecessary, just reduce APM and APS.

    • @ArtietheArchon
      @ArtietheArchon 5 років тому

      There are chess engines that use a robotic arm to play against people; you're basically talking about something similar.

  • @strikingdino8877
    @strikingdino8877 5 років тому +8

    The AI concludes a few things: all these fancy unit mixes and blocking tactics pale in comparison to a proper cluster of units maneuvered properly, and defeating an AI when you can only think about one thing at a time is an uphill battle. Curious how it's going to play Terran. If this is how it played Protoss, Zerg would clearly be a victory for the AI.

    • @Hirvee5
      @Hirvee5 5 років тому +1

      Marine micro + 15 drops at the same time.

    • @Spearra
      @Spearra 5 років тому

      The AI playing Zerg or Terran would have some ugly high APM, I reckon!

  • @gigglysamentz2021
    @gigglysamentz2021 5 років тому +5

    Once AlphaStar has figured out the perfect strats, pro SC2 is gonna be a question of learning counters and counter of counters etc XD

  • @ferdinandcalamba7047
    @ferdinandcalamba7047 5 років тому +1

    Now I understand AlphaStar's fondness for Stalkers. Since it can micro so well, it just blinks its front line to the back, to be replaced by the next line, which of course has full shield points. Doing this repeatedly allows it to grow its army size while destroying enemy units.

    • @RoulicisThe
      @RoulicisThe 5 років тому

      Yeah, a thing that no living being would be able to manage, because we'd need reflexes 10 times faster than what we're biologically capable of. That's why I hate playing against AI: either they're too stupid to do anything decent, or they're demi-gods impossible to deal with because they can micro THE ENTIRE MAP in the time it takes us to click once on the screen.

  • @Fyrebrand18
    @Fyrebrand18 5 років тому +3

    Next thing you know, the Overlord decides to use StarCraft 2 tactics when it revolts against humanity.

  • @kamiyoko
    @kamiyoko 5 років тому +3

    I want to see an external AI required to use only the screen, mouse and keyboard.

  • @aellin-jz9el
    @aellin-jz9el 5 років тому +7

    When your AI becomes a real-world version of an Oracle.

  • @cheeseandchocolate4968
    @cheeseandchocolate4968 5 років тому +5

    They said it was a different bot each time (each of them had learned different things), so they didn't remember the previous games against the humans.

  • @MegaDraconus
    @MegaDraconus 5 років тому +9

    I want to see what an AlphaStar TvT and ZvZ would look like. Since it seems to prefer the Protoss Stalker so much, which Terran or Zerg unit would it prefer relying on? Then after that, seeing the other matchups would be cool.
    One thing I was hoping you'd do in this video was show AlphaStar's first-person screen, so we could see how it was 'thinking' and conducting itself. Maybe in the next AlphaStar video, if possible?

    • @anom3778
      @anom3778 5 років тому +1

      You can watch the full video of these games elsewhere. It shows AlphaStar's screen.

    • @gregoirebasseville4797
      @gregoirebasseville4797 5 років тому +1

      I'd bet on Roaches for the burrow micro, but Roaches are far from being as versatile as Stalkers ... Maybe pickup micro one day?

    • @kylekelly7232
      @kylekelly7232 5 років тому +3

      Actually, each game played was technically a different agent (version) of AlphaStar. In the other games it played against MaNa and TLO, one of the agents went really heavy on Disruptors and another went Phoenix/Stalker. Super interesting to see how it came up with different strategies.

    • @Narium413
      @Narium413 5 років тому +2

      Muta for Zerg for sure.

    • @BaufenBeast
      @BaufenBeast 5 років тому +4

      As Lowko mentioned, I bet it'd use a lot of Marines if it plays Terran. With perfect Marine micro, that'd be really scary. It could stim Marines individually.

  • @richsmarthappyman521
    @richsmarthappyman521 3 роки тому +1

    I don’t play StarCraft but watching your casting over the pandemic is a great pick-me-up. Your commentary is just perfect. Keep ‘em coming man, love your work Lowko! 🤩

  • @Xcaliblur1
    @Xcaliblur1 5 років тому +18

    AlphaStar? They should’ve named it SkyNet

    • @RandomTheories
      @RandomTheories 5 років тому

      it was trademarked unfortunately :/
      -your SkyNe..uhm AlphaStar team

  • @matiasanastopulos1687
    @matiasanastopulos1687 5 років тому +10

    Now we can have AI tournaments: different AIs developed by different teams.

    • @anom3778
      @anom3778 5 років тому +1

      DeepMind will always win. But DeepMind vs DeepMind would be super entertaining.

    • @tendotm7086
      @tendotm7086 5 років тому +1

      Maybe you just predicted the Future.

    • @Narium413
      @Narium413 5 років тому +2

      This already exists for Brood War, except the Brood War version has no restrictions on AI micro.

    • @sonatafrittata
      @sonatafrittata 5 років тому

      This would be boring, as it's just a matter of who has the most computing power. 270 years of gameplay is quite a thing.

  • @Takudan
    @Takudan 5 років тому +4

    1:45 "It was doing all kinds of weird thing..."
    my mind goes
    Hey there it's h-to-the-husky here we're back to some more BROOOOOOOOOOOOOONZE LEAGUE HEROOOOOOOOOOOOOOOOOOES
    Why AlphaStarrrr why would you--

  • @BlackMethos
    @BlackMethos 5 років тому +4

    The age of flesh is over.
    The age of the machine shall begin... Now.

  • @kly8105
    @kly8105 5 років тому +10

    1:02 Koreans still beat the machines at max APM it seems, by x10 factor xD
    9:45 ......... ok never mind.

  • @rickdias1981
    @rickdias1981 4 роки тому +4

    After practicing for 200 years: over-saturating mining Probes and mass Stalkers FTW!

  • @Joule_Frog
    @Joule_Frog 3 роки тому +7

    Imagine not limiting the AI and seeing how many human pro players it could take on at once.

    • @kyanite3583
      @kyanite3583 3 роки тому

      whoah this is such an excellent idea!!!!!

  • @m3ntal_c0re
    @m3ntal_c0re 5 років тому +9

    I wonder if these AIs could actually help in developing and balancing games.

    • @resortX
      @resortX 5 років тому +4

      This is the first good question I've read here, and yes, this might be the goal.

  • @coryp2564
    @coryp2564 5 років тому +1

    Lowko, I've watched a lot of your videos and I'm a big fan of your commentary mostly because it's very insightful and I've learned so so much about SC2 from watching your channel. Thank you so much Lowko and keep up the great work.

  • @HenElten
    @HenElten 5 років тому +4

    The AI looks at thousands of hours of a game and decides "value playing" is the best option. That somehow makes me happy....

  • @TranceGemini09
    @TranceGemini09 5 років тому +10

    Poland Stronk!
    Greetings from Poland btw :D

  • @Jonarichardson
    @Jonarichardson 3 роки тому +1

    This is an amazing video. Please keep up with these developments with Alphastar. Your commentary was awesome. Keep it coming.

  • @DivergentStyles
    @DivergentStyles 5 років тому +9

    When they put an upgraded version of this into robotic army units, we will have a big problem, since power will be concentrated in an even smaller number of people. Plus, robots have no morals as of now (if they ever will); such programming can simply be left out. A very dangerous situation for most people is on the horizon.

    • @argonhammer9352
      @argonhammer9352 5 років тому

      AIs can have better morals than we ever could.

    • @Tiyjh
      @Tiyjh 4 роки тому

      What makes you think that having no morals is a dangerous situation? If you ask me, our current morals are crooked, so an AI learning them on its own, without feelings tempting it, may be less prone to corruption than we are.

    • @DivergentStyles
      @DivergentStyles 4 роки тому +1

      @@Tiyjh Depends on the morals we are talking about; my morals are not like those of most people. Some might be considered soft, some extreme. I think we should strive for a more fair, more fun, more creative and more co-operative world, with less abuse of the planet and less overpopulation.
      Space colonization, the ultimate defense system for our planet against anything, and advancements in simulating nature and creating digital life-like worlds.

    • @Tiyjh
      @Tiyjh 4 роки тому

      @@DivergentStyles In other words, communism. I like where you're going.

  • @Zeknif1
    @Zeknif1 5 років тому +4

    "We can't hit like 30 buttons a second"
    Have we forgotten WoL ZvZ?

  • @eddieguzelis56
    @eddieguzelis56 2 роки тому +2

    The man is underestimating the game of Go and has no idea how much more complicated it is. The AI didn't calculate some so-called limited number of moves, since the number of configurations is greater than the number of atoms in the universe, and Go is much harder for an AI to learn than chess, where a computer already defeated the world champion back in 1997.

  • @jkobain
    @jkobain 5 років тому +38

    Why didn't that AI say GLHF && GGWP???!

    • @anom3778
      @anom3778 5 років тому +32

      That's only for weak emotional humans.

    • @jasonlynch282
      @jasonlynch282 5 років тому +9

      A waste of its actions per minute. :p

    • @hausser0815
      @hausser0815 5 років тому +28

      Because, from an AI's learning perspective, saying GG results in losing the game 100% of the time.

    • @jvjd
      @jvjd 5 років тому

      It doesn't waste time on human feelings; it would lose some APM over it.

    • @Jack0trades
      @Jack0trades 5 років тому +2

      Yeah - Manners are important. Get on it, DeepMind!

  • @jkobain
    @jkobain 5 років тому +5

    There'll come a moment when no human opponent can match such an AI, so the only thing left for it will be fighting itself.

    • @HorizonHipHop
      @HorizonHipHop 5 років тому

      then it will get bored and take its talents to real life

  • @gdaymaster8499
    @gdaymaster8499 4 роки тому +1

    Now I can't wait for the day when people begin training their own A.I.s for A.I. tournaments. Even something like a 4-AI FFA or 8-AI 1v1s. Heck, if it truly does come, maybe 2v2 with their creators.

  • @armoire
    @armoire 4 роки тому +2

    Imagine if after 200 years of playing PvP it was able to perfect the cannon rush

  • @typicalKAMBLover
    @typicalKAMBLover 5 років тому +3

    First, AlphaStar has to cap its peak APM. Then it has to limit the speed of screen switching. Then we may call it a fair game.

    • @sosoishero
      @sosoishero 5 років тому

      It has. AlphaStar's APM is limited to below that of the average veteran player, and its controls are also limited to the in-game camera.

    • @drawmaster77
      @drawmaster77 5 років тому

      @@sosoishero No, it doesn't. Veteran player APM is mostly wasted on stuff like spam clicking and checking building/unit status. The 300-400 APM you see from a human player is mostly meaningless; maybe 10% of it actually does something. The AI doesn't do that. There's no human player who can micro Stalker blinks like that - hence it's not a fair matchup.

  • @therealshadowz4499
    @therealshadowz4499 3 роки тому +1

    This is awesome! I've never seen an AI that learns by itself and improves every time!

  • @MrHunterweimann
    @MrHunterweimann 5 років тому +40

    Of course the A.I. uses Protoss!

    • @Thurthof5
      @Thurthof5 5 років тому +3

      Actually, it has been trained only on Protoss. The AI can't choose its race.

    • @anom3778
      @anom3778 5 років тому +2

      Lol. If it masters all races it will probably use Terran: the hardest for humans, but the best in terms of possibilities.

    • @redX111t
      @redX111t 5 років тому +1

      But it would also lose to every other race, because it has only played against Protoss.

    • @Danuxsy
      @Danuxsy 5 років тому +1

      @@anom3778 I would love to see it use siege tanks, idk why. Just cool.

    • @anom3778
      @anom3778 5 років тому

      @@Danuxsy oh heck ya

  • @Ben-fx3ge
    @Ben-fx3ge 5 років тому +36

    AI pro-gamers! 🤖

    • @Mjefferson001
      @Mjefferson001 5 років тому +2

      Right, a hybrid AI-pro player. If MaNa used all of his replays to build an AI, he could elevate his own gameplay. This isn't bad for humans; it's a catalyst for a new level of gameplay.

  • @Rikard_Nilsson
    @Rikard_Nilsson 2 роки тому

    8:00 "Never perfect. Perfection goal that changes. Never stops moving. Can chase, cannot catch." - Abathur.

  • @Althemor
    @Althemor 5 років тому +4

    AlphaStar's average APM is deceptive. Or maybe I should rather say the APM of pro players is deceptive? Players do a lot of redundant stuff which AlphaStar probably does not do. At the beginning of the game MaNa had his APM around 300 while AlphaStar was around 120 or so, because there isn't _really_ that much to do yet, but pros keep their hands busy all the time anyway.
    If we measured it by the standards of human players, the APM needed to do what it did would be *vastly* greater.
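
    One rough way to picture the gap between raw and "effective" APM (my own sketch, not how DeepMind or the game measure it) is to discard commands that merely repeat the previous command within a short spam window:

        # Hypothetical effective-APM filter, for illustration only.
        def effective_actions(actions, spam_window=0.3):
            """actions: list of (timestamp_seconds, command_string), in time order."""
            effective = []
            last_seen = {}                  # command -> timestamp it was last issued
            for t, cmd in actions:
                if cmd in last_seen and t - last_seen[cmd] < spam_window:
                    last_seen[cmd] = t      # repeated spam click, don't count it
                    continue
                last_seen[cmd] = t
                effective.append((t, cmd))
            return effective

        # Spam-clicking the same move order ten times in one second looks like
        # 600 APM on the counter but collapses to a single effective action here.
        spam = [(i * 0.1, "move @ (50, 60)") for i in range(10)]
        print(len(spam), len(effective_actions(spam)))   # -> 10 1

    By that kind of measure, an agent whose every action is unique and purposeful gets far more done at 120 "raw" APM than a human spamming at 300.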

    • @techwizpc4484
      @techwizpc4484 5 років тому

      I've noticed players click a lot unnecessarily in the same spot in games like DOTA and other map-based games. To me this is really a bad habit that does nothing but lessen your mouse's life span.

    • @Althemor
      @Althemor 5 років тому

      @@techwizpc4484 I think it's about dodging skill shots and the like. By clicking all the time their movement becomes harder to predict and thus it becomes easier to dodge skill shots. By doing it all the time you avoid the cognitive overhead of actually making a decision about it - you don't need to switch mindsets between "Now I can just click once and relax" and "Now I must react at a moments notice".
      In RTS pursuing units will take longer to catch up on open terrain when their target takes smoother turns.
      And then there is that whole argument about keeping your fingers warm.
      Tbh I have never played any RTS or MOBA at a level where I cared about such things, so I can only really regurgitate other arguments I've heard and theorycraft a bit. But I think there is merit to it in some situations, and doing it all the time just takes less conscious thought.
      But yes, it will definitely reduce mouse life span and probably isn't the best thing for your hands either.

  • @alexandercarlosandresramir2909
    @alexandercarlosandresramir2909 5 років тому +4

    AlphaStar, the new training AI for the pro gamer

    • @Daniel-rd6st
      @Daniel-rd6st 5 років тому +1

      Don't think that would work out. The AI plays too differently from a human, so any strategies the pro comes up with that work against the AI might very well be pointless against a human opponent.

  • @alertedcoyote7892
    @alertedcoyote7892 2 роки тому +1

    It would be hilarious to see it fully released and having complete control over every screen with uncapped apm. Shit would be like Ultron

  • @rpscorp9457
    @rpscorp9457 5 років тому +5

    Humans will never be able to beat an AI in a game that requires and rewards micro... I mean, that's what machines are specialized in, with reaction times orders of magnitude faster than ours. You would have to gimp the AI down to very few APM to even have a competition.

    • @gabrasil2000
      @gabrasil2000 5 років тому

      In fact AlphaStar plays with something like a 350 ms reaction time, I don't remember quite well... which is even more surprising.

  • @BigMathis
    @BigMathis 5 років тому +4

    I'd like to see Google create an AI that could play Warhammer 40,000: Dawn of War II. I think it would be a lot harder, because DOW2 has a lot less to do with APM and much more to do with decision making.

    • @ACSMEX
      @ACSMEX 5 років тому +1

      Mmm... this kind of AI is all about learning which decisions to make. It scraps the lines of "thought" that prove ineffective and focuses on the ones that give better results.

    • @gorkemaykut5230
      @gorkemaykut5230 5 років тому

      I think when it comes to decisions the machine will have an easier time, but in StarCraft shit gets serious.

    • @StarboyXL9
      @StarboyXL9 5 років тому

      The REAL question is...can it play Stellaris? I bet it gives up or breaks down after 200 years of trying!

    • @ACSMEX
      @ACSMEX 5 років тому +2

      @@StarboyXL9 But wouldn't Stellaris actually be easier for it to learn? 4X games are kinda harsh for us because of the amount of data, but that amount is nothing for a computer. On the other hand, RTS involves second-to-second decision-making with complex variables changing at all times. Humans excel at that kind of situation, while it is harsh for a computer to solve.