StarCraft 2 Pros vs Google's Deepmind AI (Alphastar)

  • Published 24 Jan 2019
  • StarCraft 2 is tackled by Google's Deepmind AI (Alphastar)
    Google DeepMind unveils the next generation in their artificial intelligence machine learning project by tackling Starcraft 2 - the most competitive real time strategy game! Top professional players take on AlphaStar in some incredible exhibition matches!
    Alpha Star Explained - deepmind.com/blog/alphastar-m...
    Check out my content!
    Watch the stream - / wintergaming
    Patreon - / wintergaming
    Twitter - / starcraftwinter
    Discord - / discord
    Mixes - www.mixcloud.com/wintersc
  • Games

COMMENTS • 1.5K

  • @WinterStarcraft
    @WinterStarcraft  5 років тому +250

    The whole presentation ended up exceeding my expectations by a large margin, check it out here! ua-cam.com/video/cUTMhmVh1qs/v-deo.html
    What are your feelings on AlphaStar specifically, and Artificial Intelligence in general? Overall I'm really excited but also have that little "but are we creating Skynet" thought in the back of my mind :)

    • @Xepent
      @Xepent 5 років тому +12

      I'm interested to see if, in the future, esports takes a card from NASCAR's deck and corporations sponsor their own AI, with different categories like "uninhibited AI" that's allowed to go crazy with 3000 APM, or "pseudo-realistic" with caps on SPM and APM based on the fastest pro players alive (a sketch of such a cap follows below). I'm also interested to see if games in the future change to allow players to create their own AI in a strategy game specifically designed for that.
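
      A minimal sketch of how the kind of rolling APM cap proposed above could be enforced; the class name, the 300 APM figure, and the 60-second window are illustrative assumptions, not anything DeepMind has described.

      ```python
      import collections

      class ActionRateLimiter:
          """Reject actions once a rolling-window APM cap is exceeded (hypothetical sketch)."""

          def __init__(self, max_apm: int = 300, window_seconds: float = 60.0):
              self.max_apm = max_apm
              self.window = window_seconds
              self.timestamps = collections.deque()

          def try_act(self, now: float) -> bool:
              # Drop timestamps that have fallen out of the rolling window.
              while self.timestamps and now - self.timestamps[0] > self.window:
                  self.timestamps.popleft()
              if len(self.timestamps) >= self.max_apm:
                  return False  # over the cap: the agent has to wait
              self.timestamps.append(now)
              return True

      # Example: a "pseudo-realistic" league cap of 300 APM.
      limiter = ActionRateLimiter(max_apm=300)
      allowed = sum(limiter.try_act(now=i * 0.05) for i in range(1000))  # 20 attempted actions/sec
      print(allowed)  # only 300 of the first minute's attempts get through
      ```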

    • @astreinerboi
      @astreinerboi 5 років тому +9

      I think as long as we don't give Skynet a training API for "robot uprising" where it can gather 200 years of experience in a week, we are golden. I was very impressed with the results from DeepMind and how human some decisions actually look. I am also looking forward to seeing how much this presentation and future AI matches influence the game itself, whether strategies can actually carry over, and, of course, how the research can be used in more "productive" fields.

    • @clementcardonnel3219
      @clementcardonnel3219 5 років тому +3

      Future's recent matches already looked close enough to Skynet in my eyes.

    • @richardcox8409
      @richardcox8409 5 років тому +1

      I'm thinking the "Skynet" AI already exists? The software for the computer simulation games the military plays would be an interesting match for AlphaStar, possibly. This is really interesting, and it also makes me wonder what level of intelligence this type of AI will peak at, and how many levels of difficulty they can build into it without sacrificing the overall intelligence and making it vulnerable to set macro patterns. I am a bronze league hero and this thing would destroy me. Thanks for putting this up, Winter; it certainly brings a fresh idea, and I was surprised to hear it was a Google team that developed this and not a Blizzard development... if I heard correctly.

    • @BothHands1
      @BothHands1 5 років тому +1

      Absolutely fascinating, I can't wait to see what the future of AI brings. I guess the last match was with a largely untested version of the AI; I think just another round of the AlphaStar League would probably allow it to beat MaNa.
      Would be really interesting to see it study and predict human behavior by looking at social media; it might be able to find local areas of unrest and predict protests, political violence, terrorism, etc. Weather is another interesting application - having more accurate and longer-range forecasts could offer significant benefits to society, from helping with farming/crops to giving us time to prepare for droughts - the possibilities are endless.
      Thanks for covering this, really gives me a lot to mull over during downtime :)

  • @williambarnes5023
    @williambarnes5023 5 років тому +101

    Mana: "Immortals hard counter Stalkers."
    Alpha: "Hold my beer."

    • @avalen4399
      @avalen4399 4 роки тому +16

      Alpha Star: "Hold my codes."

    • @leonfdawson
      @leonfdawson 2 роки тому +2

      Hold my mouse. Oh, wait.

  • @WolfGangDealers
    @WolfGangDealers 5 років тому +514

    “I am impressed, intrigued, and a little bit terrified” explains my first date.

    • @jaronmcmullin8030
      @jaronmcmullin8030 5 років тому +4

      Her thoughts or yours?

    • @WolfGangDealers
      @WolfGangDealers 5 років тому +2

      Jaron McMullin mine

    • @Dakerthandark
      @Dakerthandark 5 років тому +2

      Perhaps they were an AI too.

    • @spearshaker7974
      @spearshaker7974 5 років тому +1

      I am horny poor and lonely describes mine.

    • @zes3813
      @zes3813 5 років тому

      no such thing as impresx or intrigx or terrifx or first etc, doesnt matter, no terrifx for suchx, anyx

  • @dapperrogue353
    @dapperrogue353 5 років тому +23

    I like how MaNa was saying GG in game after every match, in case the AlphaStar AI takes over the world next; it might show MaNa some leniency and not kill him.

  • @XD3blaze
    @XD3blaze 5 років тому +627

    Of course Alpha targets rocks first..
    Rock is the natural hard counter to computers

    • @ThinkerYT
      @ThinkerYT 5 років тому +8

      lmaaoooooooooooooooooo

    • @SonnyRao
      @SonnyRao 5 років тому +27

      It's its signature move -- destroy rocks in the middle of a battle for a ramp

    • @donnaharvey5514
      @donnaharvey5514 5 років тому +4

      I think a rock would also counter all skinny, pasty pro players

    • @anderssoderberg1122
      @anderssoderberg1122 5 років тому +1

      Where in the video is the rock target again?

    • @Arigator2
      @Arigator2 5 років тому +2

      StarCraft is easy for an AI. They can micro the shit out of people and don't even need strategy. It would have been a lot worse if they'd been Terran. It's cheating with its input.

  • @bastich4225
    @bastich4225 5 років тому +90

    "You play like a bot" is now a compliment.

  • @VladislavDerbenev
    @VladislavDerbenev 5 років тому +177

    A crucial part of the game input is missing: feed it chat so it learns to trash talk

    • @michaelbuckers
      @michaelbuckers 4 роки тому +12

      Imagine being cheesed into oblivion while also getting trash talked, and if you somehow survive you gonna get macro'd into the stone age while getting trash talked even more.

  • @jackgreenwood3602
    @jackgreenwood3602 5 років тому +663

    Kills its own units when they become useless..... so when we become useless... does that mean it will kill us off haha

    • @noobrules16
      @noobrules16 5 років тому +80

      jack greenwood that’s actually a pretty scary thought

    • @Brad_Jacob
      @Brad_Jacob 5 років тому +24

      Yes

    • @Borrelaas
      @Borrelaas 5 років тому +105

      Probably once the earth reaches its supply cap :-O

    • @majora4prez543
      @majora4prez543 5 років тому +54

      rich people already do this

    • @flappy7373
      @flappy7373 5 років тому +9

      i think it probably would.
      after all, the best way to guarantee a win vs a human opponent is to kill it... you can't win when you're dead.

  • @randoomingnope7574
    @randoomingnope7574 5 років тому +531

    I guess stalkers counter immortals now.

    • @Bombdizzle
      @Bombdizzle 5 років тому +6

      Stalkers everything all day!

    • @megaplini7389
      @megaplini7389 5 років тому +10

      It was indeed the micro that was better.
      Strategy pretty much the same I guess

    • @tacticaldroid133
      @tacticaldroid133 5 років тому +19

      The machine god has preached.

    • @crocopde
      @crocopde 5 років тому +5

      if you're a god

    • @roblaquiere8220
      @roblaquiere8220 5 років тому +24

      Only need to have 700 apm to do it. That micro was inhuman.

  • @CoyoteXLI
    @CoyoteXLI 5 років тому +126

    The purifier program is advancing at a truly impressive pace.

  • @enlightendbel
    @enlightendbel 5 років тому +21

    "Which announcer pack AlphaStar plays with"
    GLADOS, no doubt.

  • @yp8495
    @yp8495 5 років тому +330

    Very Easy
    Easy
    Normal
    Hard
    Very Hard
    Elite
    Cheater (Vision)
    Cheater (Resource)
    Cheater (Insane)
    AlphaStar (Heavily Crippled)
    VS AI has never been so exciting.

    • @jasonlynch282
      @jasonlynch282 5 років тому +55

      AlphaStar (unchained)

    • @xtrim1993
      @xtrim1993 5 років тому

      haha love your humor xD

    • @niklasstahl98
      @niklasstahl98 5 років тому +2

      @@jasonlynch282 I don't think human minds could handle that

    • @WiccaRobin
      @WiccaRobin 5 років тому

      They'd have to throw the match for me to win

    • @vaseksvoboda9680
      @vaseksvoboda9680 5 років тому +5

      So smart it will kill the players in real life..
      let the skynet unleash.

  • @andreiseniuc4681
    @andreiseniuc4681 5 років тому +197

    He reads our replays! Everyone all in worker rush and then gg out so the AI can't win anymore!

    • @henrikpettersson2886
      @henrikpettersson2886 5 років тому +9

      Haha. Thats the way to save us humans.

    • @drakewarrior1013
      @drakewarrior1013 5 років тому +15

      He will read that it is a fail strategy, though. And will learn how to counter it. So, no way that will work :)

    • @Borrelaas
      @Borrelaas 5 років тому +7

      Drake Warrior i think it is safe to assume it was a joke

    • @ThePorpoisepower
      @ThePorpoisepower 5 років тому +2

      In the demonstration they said it only reviews games from players over a certain MMR

    • @andreiseniuc4681
      @andreiseniuc4681 5 років тому +3

      @@drakewarrior1013 it doesnt read c:

  • @ninethousandfaces2070
    @ninethousandfaces2070 5 років тому +36

    Greetings, commanders, this is Executor NTF of the Golden Armada.
    At first, I was suspicious (though intrigued) by the notion of an autonomous, human-created learning system meant to combat against the proud forces of this universe. However, having seen its laudable dedication to the true power of the Golden Armada, I shall stand in support of AlphaStar. Anything that makes Carriers is an ally I want by my side.

  • @LemonGingerHoney
    @LemonGingerHoney 5 років тому +98

    But... CAN IT CANNON RUSH?!?!?!?!?!?
    They should have fed it bronze league data.

    • @tysloo81
      @tysloo81 5 років тому +3

      too risky playing with another alphastar so it's not a good strategy against alphastar itself

    • @wildwest1832
      @wildwest1832 5 років тому +9

      Through thousands of games it learned that a cannon rush, if defended right, loses. So it learned it's not worth cannon rushing; it's better to play how it plays.

    • @kyborek
      @kyborek 5 років тому +6

      One of the first win tactics it learned was worker rush :D

    • @benhook1013
      @benhook1013 5 років тому +4

      @jerry that's not how it works... it had no idea, nor does it need to, about any opponent it's facing. It does not consider whether it's facing itself in the training rounds.

    • @benhook1013
      @benhook1013 5 років тому +2

      And yeah, a cannon rush might work a significant amount of the time, but when it can defeat opponents without it much more reliably (remember that it's the most often winning strains that progress furthest), it does not need to fall back on such tactics.

  • @flappy7373
    @flappy7373 5 років тому +6

    calling the AI an "Agent" is an absolutely hilarious Matrix reference
    also a little scary
    The Matrix has you, MaNa...

    • @fupopanda
      @fupopanda 4 роки тому +5

      These kind of intelligent systems were already being called agents before the movie Matrix was made. So it was the movie that borrowed the name, not the other way around.

  • @Bane_questionmark
    @Bane_questionmark 5 років тому +9

    I'm guessing winter won't be so enthusiastic when Deep Mind announces they're working on an SC Caster AI.

  • @Matej_Sojka
    @Matej_Sojka 5 років тому +155

    Should it be allowed in tournaments when it can play any matchup? Definitely yes, but the devs need to teach it to trash talk in the chat first.

    • @mr420quickscops2
      @mr420quickscops2 5 років тому +13

      Matej Sojka I totally wouldn’t mind being beaten by an enemy that’s ultimately my superior, as long as he spat in my face while he did it haha
      I’d just love to see it programmed so when it drops a mothership or something it says in chat “that has to have hurt pal, need a minute?” Or something like that

    • @Galahad54
      @Galahad54 5 років тому +19

      Give it Idra and Avilo verbal skills. Have it accuse the other player of cheating while it happily engages in DDOS. Have it run its mouth even when it isn't cheesing.

    • @RollerDerbyHigh
      @RollerDerbyHigh 5 років тому +6

      Def no. then all tournaments will just be agent 1 vs agent 2 etc. i want to see humans

    • @Pintkonan
      @Pintkonan 5 років тому +1

      The AlphaStar AI teaches itself; it can't learn from humans. That's why humans get totally rekt when playing the AI. Not only in StarCraft, but in Go as well, and its chess neural network absolutely obliterated the chess computer world champion.

    • @techwizpc4484
      @techwizpc4484 5 років тому +1

      Teach it to spew demoralizing comments to discourage players. If they teach it that then it's complete.

  • @Djorgal
    @Djorgal 5 років тому +22

    I think the reason why it kills some of its units at the end is because it is too far ahead. That's a common problem with machine learning. The agent gets rewarded for its actions when it wins a game. But in its training matches there won't be any games it lost after being so far ahead just because it made a tiny mistake.
    It would have learned early that killing your own units is a bad idea when the game is close, but it has no way to know that it isn't a good idea even when far ahead unless it does end up costing it a game.
    Hence, we see behavior where an AI goes crazy when it's too far behind or too far ahead, because these crazy actions never changed the outcome when it was training. Similar behavior occurred with AlphaGo.
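
    A toy illustration of the point above: under a win-only terminal reward, a sloppy win and a clean win are indistinguishable, so wasteful habits in already-won games never get trained away. The shaped alternative is a hypothetical contrast, not DeepMind's actual reward function.

    ```python
    def win_only_reward(won: bool) -> float:
        # The terminal signal the comment describes: +1 for a win, -1 for a loss,
        # regardless of how sloppily the win was achieved.
        return 1.0 if won else -1.0

    def shaped_reward(won: bool, own_units_killed_by_self: int, margin: float) -> float:
        # Hypothetical shaped alternative: a small penalty for wasting your own units
        # and a bonus proportional to how decisively you won.
        return win_only_reward(won) - 0.01 * own_units_killed_by_self + 0.1 * margin

    # A sloppy win and a clean win look identical under the win-only signal...
    print(win_only_reward(True), win_only_reward(True))          # 1.0 1.0
    # ...but differ once the reward also sees the waste.
    print(shaped_reward(True, own_units_killed_by_self=8, margin=0.5),
          shaped_reward(True, own_units_killed_by_self=0, margin=0.5))  # 0.97 vs 1.05
    ```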

  • @meerkov1
    @meerkov1 5 років тому +5

    9:00 "It sometimes attacks rocks, or it's own units". In some cases, it actually does "misclick" because it only reads the map every few frames. It knows it wanted to attack that spot, but doesn't realize it's units have moved into the way of the attack.

  • @fightcancer
    @fightcancer 5 років тому +5

    Game 2: 4:02
    On a separate note, AlphaStar hit over 1500 apm during a small skirmish in one of the videos that BeastyQT reviewed (of this TLO v. AlphaStar series).

  • @borysco
    @borysco 5 років тому +315

    Let's hope that AlphaStar does not learn the florencio style

    • @Caribbeanmax
      @Caribbeanmax 5 років тому +31

      lets find out who the real AlphaSewerMermaid is :D

    • @davidbodor1762
      @davidbodor1762 5 років тому +23

      Like, jokes aside, does Alphastar know what a corner base is? Can it scout it? Does it realize? I was a bit disappointed there were no attempts to proxy/cannon rush Alphastar

    • @CoyoteXLI
      @CoyoteXLI 5 років тому +11

      That Stargate at the bottom of mana's ramp was so incredibly bm, are we sure it hasn't?

    • @ACSMEX
      @ACSMEX 5 років тому +12

      @@davidbodor1762 It learns by playing. It can only develop new decision trees after something happens a lot of times, since it's based on a neural network.

    • @davidbodor1762
      @davidbodor1762 5 років тому +4

      @Alberto Cazarez Yeah, but it's obvious it never tried certain things before. Like you can see in one match it doesn't move observers in front of the army to scout, or doesn't know how to react to cloaked units that it cannot see (no observers), and other stuff...

  • @RaVNeFLoK
    @RaVNeFLoK 5 років тому +2

    Back in the Brood War days, lesson 1 was "you never cut worker production as Protoss". You'll lose some, and you'll need them to split off to new expansions, etc. It was considered very suboptimal to stop probes at saturation.
    Idk why that changed in SC2?

  • @sigepohio
    @sigepohio 5 років тому +42

    The AI's love of stalkers is intense.

    • @Galahad54
      @Galahad54 5 років тому +2

      2015 - Lilbow. GSL should reinstate Life just to deal with the AI infestation.

    • @kirbyjoe7484
      @kirbyjoe7484 5 років тому +5

      It's because if you micro them well they are very powerful units and the AI has a very strong micro game.

  • @chronyx685
    @chronyx685 5 років тому +35

    sounds like a dragon ball z reference where goku trains in the time room for a year inside while in reality only a week has passed, coming out with insane power levels

    • @stylis666
      @stylis666 5 років тому

      Well, it's not over 9000 yet, we think... Unleash the Serral! Let him drag this nub computer program on a random map against the zerg. We'll see who's boss :p
      Oh! I wish! I'd love to see that! I'd love to see it fail and be confused and also see it learn all the races, against all races and play whatever map it's put on. There's a lot to be learned from this.

  • @honzaasterba
    @honzaasterba 5 років тому +19

    Being an SC2 nerd and an AI nerd, I was triple excited to see this. Epic stuff. A couple of points on your commentary from someone who works a little in the AI industry.
    IMHO the camera movement restriction has an impact on the AI's performance, but not such an impact that it could not be mitigated by more training. Having total, detailed map awareness without having to move the camera is no small thing.
    In the AlphaStar League, worse-performing agents were not being eliminated; the new agents still played against all old revisions (to stay resilient against things like probe rushes), but the old agents did not learn anything new - only the best from the last generation got a chance to improve.
    This is purely a research project of a scale that has not been seen even now. Expecting that this will move to the latest patch or compete is silly, I think. I would like to see it play TvT, but expecting it to play anything other than a mirror matchup is wishful thinking. Remember it trains by playing against itself, so to train it to play TvZ, for example, you need an opponent, so you will train for ZvT at the same time, but you will need twice (or more) as many resources to run the two training "camps" - one for T and one for Z agents.
    Another interesting point is that, given enough time, the result of the AlphaStar League would be an agent that basically represents the best possible strategy for a given matchup and patch. Then it would be up to the human players to try to replicate it if possible. You can see that from v1 to v2 (MaNa) the agents became quite a bit more similar in their strategy.
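
    For readers curious what "new agents play against frozen old revisions, and only the best keep learning" looks like mechanically, here is a greatly simplified, hypothetical sketch of such a league-style loop; the skill numbers and the match simulation are placeholders, not DeepMind's training code.

    ```python
    import random

    def play_match(agent_a, agent_b):
        # Placeholder for a real game; a higher 'skill' value wins more often.
        return random.random() < agent_a["skill"] / (agent_a["skill"] + agent_b["skill"])

    def train_one_generation(learners, frozen_pool, games_per_agent=100):
        for agent in learners:
            wins = sum(play_match(agent, random.choice(frozen_pool)) for _ in range(games_per_agent))
            # Stand-in for gradient updates: winning experience nudges skill upward.
            agent["skill"] += 0.01 * wins

    league = [{"name": f"agent_{i}", "skill": 1.0 + random.random()} for i in range(4)]
    frozen = [dict(a) for a in league]            # old revisions stay fixed, to stay robust vs old cheeses
    for generation in range(3):
        train_one_generation(league, frozen)
        frozen.extend(dict(a) for a in league)    # snapshot the new generation into the frozen pool
        league = sorted(league, key=lambda a: a["skill"], reverse=True)[:2]  # only the best keep learning
        league = league + [dict(a) for a in league]   # branch copies of the best to refill the league
    print([(a["name"], round(a["skill"], 2)) for a in league])
    ```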

    • @tambaz2276
      @tambaz2276 Рік тому

      Yes, you could really see the effect when it had the 3 groups of blink stalkers surrounding Mana's immortal/archon/zealot/ in the middle of the map. It was beautiful to watch tbf, how it was perfectly microing from all sides at once in order to win the fight but then you could see it struggled to use the stalkers to similar effectiveness in the last game without the full map screen

  • @abchernin
    @abchernin 5 років тому +19

    We should have the winner of each tournament play a fresh AI generation's ladder champion, for general humbling purposes. It would be almost biblical.

    • @sibanbgd100
      @sibanbgd100 5 років тому +2

      just make every human that wins anything lose to a robot afterwards! Amazing stuff, like, don't forget that being the best human is still just a human.

  • @clementcardonnel3219
    @clementcardonnel3219 5 років тому +96

    As there are women's and men's leagues in sports due to physical performance differences, we can already foresee human and AI leagues.
    AI leagues would look like F1 racing, where AI constructors would compete on creating the best AIs.
    Cross-league matches already look extremely promising. 🤯

    • @o4ugDF54PLqU
      @o4ugDF54PLqU 5 років тому +2

      Pixel Starships already has leagues that are mainly played by (albeit very simple) AIs with minimal human interaction. The best players use AI, or else the micromanagement wouldn't be possible.

    • @mirrir
      @mirrir 5 років тому +3

      It exists, and it's called SSCAIT. Although currently it's just BW.

    • @giorgiofenu5563
      @giorgiofenu5563 5 років тому +1

      It's like tool assisted speedruns and regular ones, i really like it

    • @abebuckingham8198
      @abebuckingham8198 5 років тому +5

      This is already true for chess programs. They are all orders of magnitude stronger than human players and now have their own league to compete against each other. Human chess is not even close to being the top.

    • @Brainth1780
      @Brainth1780 5 років тому

      There's already a community that does this, with an AI ladder and tournaments between different people's agents. Even in starcraft 2, ever since Blizz released the API

  • @skyacaniadev2229
    @skyacaniadev2229 5 років тому +6

    A heads-up: the last AlphaStar is not Mark 3.0. It was redesigned (and thus had to be trained from zero) to use the camera like a human. And it had only one week of training (200 years in human time), just like the agents that played against TLO, plus the extra time required to train it to utilize the camera. So it is more like Beta 0.9.
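
    A quick back-of-envelope check of the "200 years in about a week" figure quoted above; the 7-day wall-clock assumption and the parallelism split at the end are illustrative guesses.

    ```python
    # How much faster than real time does training have to run overall?
    game_years = 200
    wall_clock_days = 7

    game_hours = game_years * 365 * 24
    wall_clock_hours = wall_clock_days * 24
    speedup_needed = game_hours / wall_clock_hours
    print(f"~{speedup_needed:,.0f}x real time")   # roughly 10,400x

    # That factor is the product of (parallel game instances) x (per-game speedup),
    # e.g. ~1,000 parallel games each running ~10x faster than real time would do it.
    print(1000 * 10.4)
    ```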

  • @kinciscorner
    @kinciscorner 5 років тому +20

    Zergs were right all the time. More workers = win. Lost 3? Build 5. Lost 7? Build 13.

    • @abmo32
      @abmo32 5 років тому +12

      I remember back in Brood War, one of the first tips I got from better players was to never stop producing workers until you're near maxed.
      Also, it's pretty fascinating psychologically that the community agreed on having 16 workers on minerals and stopped questioning it. Thinking about it, whenever some unit harasses in the early game and 2-3 workers are lost, it seems to be a big deal. That AI just overproduces and could not care less about an Oracle sniping a few workers. Keep in mind the unusually high worker count was something all agents had in common. Truly amazing.

    • @raimonwintzer
      @raimonwintzer 5 років тому

      @@abmo32 Workers are also quite cheap resource-wise; the main thing one loses is the mining ability. Over-make them and you don't run into that problem while expanding... what, 250 extra minerals?

    • @sintanan469
      @sintanan469 5 років тому +1

      @@raimonwintzer
      Some quick napkin math: if you have an optimally saturated mineral line and a worker dies, that represents the loss of 70.4 minerals per minute. So, if you have 16/16 in the mineral line and lose 4... that's roughly 280 minerals per minute you're losing. If you have 20/16 in the mineral line and lose 4... you aren't losing any minerals per minute (see the sketch at the end of this thread). I guess it ultimately depends on how many workers you expect to lose. If you can play perfectly and not lose a single worker, then you're fine not making extras.
      Seems to me that the meta for workers should be like Zerg's, since at the pro level everyone loses workers from time to time.
      Never stop making workers.

    • @raimonwintzer
      @raimonwintzer 5 років тому +3

      @@sintanan469 Yeah, you can also factor in utility, they can soak damage, surround, deal some damage of their own, all great things to have in the hands of capable control

    • @aszbzpszbz9786
      @aszbzpszbz9786 5 років тому +1

      @@abmo32 Yeah, I like how the AI just overproduces, since it probably figured out losing a couple of workers is cheaper than overcommitting to defense.
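
      The napkin math from this thread, written out as code; the ~70.4 minerals/min per worker and the 16-worker saturation point are taken from the comment above, and the function is only a rough illustration of SC2 mining, not an exact model.

      ```python
      MINERALS_PER_WORKER_PER_MIN = 70.4
      SATURATION = 16

      def income_per_min(workers_assigned: int) -> float:
          # Workers beyond saturation add (roughly) nothing, which is why the
          # "extras" act as a buffer against harassment.
          return min(workers_assigned, SATURATION) * MINERALS_PER_WORKER_PER_MIN

      exactly_saturated = income_per_min(16) - income_per_min(16 - 4)   # lose 4 of 16
      over_saturated    = income_per_min(20) - income_per_min(20 - 4)   # lose 4 of 20
      print(exactly_saturated, over_saturated)   # ~281.6 minerals/min lost vs 0.0 lost
      ```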

  • @mimszanadunstedt441
    @mimszanadunstedt441 5 років тому +12

    We should play weird experimental games on ladder to confuse AlphaStar lol

  • @arara513
    @arara513 5 років тому +110

    Press F to pay respect to humanity.

  • @Ethan11892
    @Ethan11892 5 років тому +18

    12:20 Oh God.... I wonder how many Marks of poor AI were influenced by the Bronze League Heroes Crescent Moon Rush strat...

  • @simonbysshe2
    @simonbysshe2 5 років тому +31

    Hey Winter, you ask at 1:03:15 what other fields this AI could work in. I work in film production (as a location sound mixer) and I feel this AI could easily be trained to fix audio problems we experience on set, i.e., removing unwanted sounds and replacing them with wanted sounds (removing planes, cars, footsteps, etc. and filling in the gaps with appropriate background noise is a big, time-consuming manual task at the moment for experienced dialog editors in Pro Tools).
    But further than this, it could learn actors' voices, understand sonic emotional responses and be able to artificially recreate their voices, thus reducing the need to get actors back into a studio to replace simple words here and there. I'm sure green screen usage and motion capture will become unnecessary when AI can simply track basic footage and create this tracking information on its own. So it could save a lot of time and money in post-production and visual effects work. On set, an AI could look at a script and work out the best way to schedule a show! Who knows.
    I'm so impressed by this; I genuinely never thought AI could manage SC, just too many variables. But proven so wrong!! Yikes. I watched the live stream completely by chance last night and it blew me away. Was hoping you would pick up on it :) Thanks for getting this analysis video online so soon - it's great. Please do more if you can if/when they release more AI gameplay videos.

    • @mr420quickscops2
      @mr420quickscops2 5 років тому +2

      simon Machine learning is definitely going to a place where it can help a lot of different industries. I recently saw a video about an AI programmed to upscale low-resolution images with a surprisingly high-quality result that often looks like it just invented detail in the images.
      It's insane how much we're moving forward with technology

    • @jahkra9259
      @jahkra9259 5 років тому

      An AI able to imitate humans... scares me.

    • @tylerdurden3722
      @tylerdurden3722 3 роки тому +1

      Some of the things you mentioned exist now. The noise cancelling, recreating celebrity voices, etc. It can even animate any face with just audio.
      It can create photorealistic non-existent people from scratch.
      I think the biggest benefit is in CGI. Many productions have to make concessions due to its cost. It's gonna cost a lot less and be a lot less time consuming.

    • @neptun3189
      @neptun3189 9 місяців тому

      Hope you see what's happened 4 years later with AI.
      I also work in film and it isn't looking so great for us in post-production

  • @z0uLess
    @z0uLess 4 роки тому +1

    This is like having a player train for a long time and not reveal their strategies until the tournament to get the win. Regardless, this is upping the level of the metagame and is great for the StarCraft scene.

  • @alexbuhl1316
    @alexbuhl1316 5 років тому

    The gas steal / pylon-in-the-opponent's-base play is easily explained: it learned to put an assimilator on the geyser.
    In other words, it learned to put the cheapest building possible as close to the geyser as can be managed.
    As 100% success was unattainable, it went for a perceived ~85% success, placing its second-cheapest building close to the geyser.
    In AI, multiplying percentages of calculated success is literally everything.

  • @Alex-qn9og
    @Alex-qn9og 5 років тому +3

    Knowing AlphaZero in chess, this AI DEFINITELY looks like the SC2 version of DeepMind

  • @r.rodrigues9929
    @r.rodrigues9929 5 років тому +47

    Hey Winter, let me try and answer your question at 8:41!
    The way AIs work and "learn" is REALLY random. It does not look so, because you are looking at the VERY refined version. At a basic level, StarCraft is all about clicking stuff, so it's natural that, during the learning process, the AI will be doing A LOT of random clicking around, be it "good clicks" or "bad clicks", in order to try and get the best result - in this case, winning the game. To better understand it, think of "good clicks" as doing the right thing at the right time, for example reacting to enemy strategy, building the right units at the right time, timing the expansions and so on. The refining process will watch the behavior of every AI version to try and separate out the ones that do the biggest number of "good clicks", but, even with that much refining, the best version will still have a fair amount of "bad clicks" recorded in its programming.
    So the answer to the question "why does it do wrong stuff?" is simple: because that is how it learned to play. It can still get away with it because it does a lot of other right stuff to compensate for the mistakes.
    Let me remind you guys this is an OVERLY, OVERLY simplified explanation, meant for a general understanding of AI and not for technical information. Enjoy becoming slaves of the machines in the near future!
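
    A toy, bandit-style version of the "lots of random clicks, keep whatever was done in winning games" idea described above; the action list and win rates are made up, and AlphaStar itself uses deep networks rather than a preference table.

    ```python
    import random

    ACTIONS = ["expand", "attack", "attack_rocks", "build_army"]
    preference = {a: 1.0 for a in ACTIONS}

    def simulate_game(action: str) -> bool:
        # Pretend win rates: 'attack_rocks' is a "bad click" but still wins sometimes,
        # so traces of it survive in the learned behaviour.
        win_rate = {"expand": 0.6, "attack": 0.5, "attack_rocks": 0.35, "build_army": 0.55}
        return random.random() < win_rate[action]

    for _ in range(20000):
        action = random.choices(ACTIONS, weights=[preference[a] for a in ACTIONS])[0]
        if simulate_game(action):
            preference[action] += 0.1   # reinforce whatever was done in a winning game

    # Good clicks dominate, but the bad click never fully disappears.
    print(sorted(preference.items(), key=lambda kv: -kv[1]))
    ```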

    • @Djorgal
      @Djorgal 5 років тому +4

      The developers did comment on this. It didn't train entirely through self-play; it started by watching lots of replays from human players and tried to mimic them. Human players spam clicks a lot, so AlphaStar mimicked that.
      Contrary to what you said, it cannot really "get away with mistakes" merely by doing lots of right stuff to compensate, because it is playing against itself. If two iterations of AlphaStar play against each other, one that makes some mistake and one that doesn't, the second version will win, making it learn to shave off the mistake.
      The mistakes it doesn't learn to stop making are the ones that never end up costing it a game - like killing one of your units when you are already very far ahead. It never learned that was a bad idea because it always still won anyway.

    • @richardfeynman3356
      @richardfeynman3356 5 років тому +1

      @@Djorgal You are more or less agreeing with OP it seems to me. Which is good, I agree as well. 'Alphastar play against each other, one that does some mistake and one that doesn't, the second version will win making it learn to shave off the mistake'... yes, in a way. The second version survives, the one that loses doesnt. They can both have made the same one mistake though, but the game is far too complex to not account for all other factors. If the mistake is a major one, it will be killed off when it loses its own game to a better version. Right?
      I guess a major problem for us (due to the way in which the AI is designed) is to identify which decisions are mistakes just carried on, which dont actually matter that much and which are actually very good. It seems in many cases we ponder its decisions. One example is in the last game when the scouting drone picked up the minerals.

    • @richardfeynman3356
      @richardfeynman3356 5 років тому +1

      @@IchigoMait Fuck dude, you are right. How much time do you give us? You were so spot on Im really interested.

    • @overmind06
      @overmind06 5 років тому

      Human. Inefficient construction. Arms not long enough. Beginning improvement.
      ...Attempt #10369: Subject emits noise measured at 112 dB. Major limb damage.

  • @maemorri
    @maemorri 4 роки тому

    In the first game, as TLO moved forward, AlphaStar retreated. But as TLO moved forward he chewed up all of AlphaStar's workers. As AlphaStar's worker count plummeted, its bank swung into army supply, so the farther TLO advanced, the more the army supply swung in AlphaStar's favor. When you have a 30+ surplus worker count building your bank all game, you don't need them anymore when it's time to fight.

  • @petertcormack3570
    @petertcormack3570 3 роки тому

    My favorite part was the first game Winter showcased, where AlphaStar was so far ahead in probes that TLO had a bigger army size, but then AlphaStar won *with a bigger army.* The crazy thing is how that happened. AlphaStar allowed TLO to take some of its expansions (that is, it attacked TLO at first, then retreated while TLO rolled over the expansions), and used the loss of those probes to almost instantaneously build up its army (because all of its extra probes had built up a huge bank for it), so that by the time TLO arrived, full of confidence after crushing the expansions, AlphaStar had a *bigger* army - not only bigger than it had before, but bigger than TLO's! It went from having 87 probes to TLO's 50ish down to having 37 probes, and used that deficit to gain an advantage *when it needed it, and not before*! It knew that it didn't need a bigger army just to hold TLO back for a little bit, so it allowed itself to be "behind" in skirmishes while it built "too many" probes with a good bank and good production, so that it could shift that balance just when it needed to. Amazing.

  • @ChristopherOkhravi
    @ChristopherOkhravi 5 років тому +5

    This is too awesome!

  • @AusMan87
    @AusMan87 5 років тому +3

    "playing a more typical HUMAN strategy." Oooh, boy. The path we're on for e-sport commentary.

  • @TheJysN
    @TheJysN 5 років тому

    Watched the demonstration and just hoped you or Lowko were gonna cast the other games. Well, you were first and also cast them all :D Really interesting to see the camera movement in the last one!

  • @elliotcurry2284
    @elliotcurry2284 5 років тому

    Just saw this in my Google news feed. You're on it and I appreciate that.

  • @danielbuch1301
    @danielbuch1301 5 років тому +19

    I'm just reminded of how OpenAI (Musk's AI project) changed the way Dota 2 was played a while ago. People actually train against it now.
    I don't think that it should compete in tournaments (the same way robots shouldn't run a mile against humans) because we operate in a different way.
    But after seeing this, I believe it'll change SC2 for the better, and maybe one day we'll have separate AI tournaments. Could be interesting.
    Whether AI is gonna change the world for the better or for the worse, we'll see...

  • @b.stankov3356
    @b.stankov3356 5 років тому +9

    Terrain main hive. Distinct lack of essence diversity. Design must be simple. Elegant. Implementation, less so. Sequences must change. Intent static, product fluid. Always can improve. Stressers reveal flaws. Flaws reveal potential. Always improving. Good.

  • @joshuaxiong8377
    @joshuaxiong8377 5 років тому +2

    AlphaStar: (Opens up The Florencio Files and begins to laugh evilly which progresses into a full blown manic laugh.)

  • @VemiX1000
    @VemiX1000 3 роки тому

    Decades pass and TLO decides to take a trip down memory lane as a senior citizen and boots up SC, just to be greeted by AlphaStar saying "Mr TLO, welcome back, we missed you"

  • @chadasar
    @chadasar 5 років тому +3

    All hail to our AI Overlord!!
    I was first my Master!!

  • @pwnorazor
    @pwnorazor 5 років тому +57

    I'd love to see how outrageous it gets if the screens per minute and actions per minute aren't limited. Would be interesting to see.

    • @mr420quickscops2
      @mr420quickscops2 5 років тому +15

      pwnorazor it’d be ridiculously unbeatable I imagine, at that point they just go crazy issuing commands from all across the map at the same time

    • @TheBouli
      @TheBouli 5 років тому +2

      Tbh it might be much worse than one would imagine, due to the way neural networks work. What I mean is that instead of considering what its next 5 actions during this second should be, it would consider what its next 2000 actions during this second should be. Since the network basically learns from its past mistakes, it would need a lot more learning to achieve the same level. And by "a lot more" I mean it grows exponentially with the number of possibilities it gets. So I'd imagine it's way more than 2000:5 = 400 times slower learning, which in itself is basically crippling - imagine it having to achieve the level we see in the video over 400 * 1 week. That's about 8 years of computing.
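
      A quick check of the arithmetic above, plus the "grows exponentially" point: the 400x ratio applies per action, but the number of distinct one-second action sequences grows as a power of the per-second action count. The 100-action branching factor below is an arbitrary illustrative number, not a measured figure.

      ```python
      # 400 weeks of training, in years.
      weeks = 400 * 1
      print(weeks / 52.18, "years")          # ~7.7 years, i.e. the "about 8 years" figure

      # If the agent chooses from, say, 100 possible actions at each step, the count of
      # distinct one-second action sequences is 100**k for k actions per second, which
      # is the real reason an uncapped agent is much harder to train than the raw 400x
      # ratio suggests.
      for actions_per_second in (5, 33):     # ~300 APM vs ~2000 APM
          print(actions_per_second, 100 ** actions_per_second)
      ```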

    • @solidpain9098
      @solidpain9098 5 років тому +6

      @@TheBouli With Google's supercomputers this is not a problem. DeepMind mentioned in the main video they only used 6 TensorFlow units from Google's supercomputer. They can easily use 400x that amount.

    • @MisterAssasine
      @MisterAssasine 5 років тому +3

      there is a video showing how an AI can completely exploit (in T) Marines/Medivac or Marines/Bunkers by Dropping in and out. Basically they enter the Medivac while the enemy attack is mid air and dodge it this way

    • @gametips8339
      @gametips8339 5 років тому

      @MisterAssasine
      Yea except u cant instantly move units out of medivacs so it would not work.

  • @MikeSpuches
    @MikeSpuches 3 роки тому

    Just getting back to SC2 after a 6+ year hiatus and loving your videos. This one was mind blowing.

  • @OrcinusDrake
    @OrcinusDrake 5 років тому

    Thank you for posting these. Was looking for the other fights and don't wanna install the game!

  • @RollerDerbyHigh
    @RollerDerbyHigh 5 років тому +5

    They need to cap its Effective Actions Per Minute not just over the whole game but in a given moment, cuz it had 1000+ at some points

    • @christopherrowley7506
      @christopherrowley7506 5 років тому +1

      yeah at 59:40 AlphaStar ramps up to 1343 APM..... so even if the average APM is realistically human, the momentary APM definitely isn't

    • @lokir.4974
      @lokir.4974 4 роки тому +1

      Said as TLO had 2.2k apm at one point

    • @christopherrowley7506
      @christopherrowley7506 4 роки тому +1

      @@lokir.4974 yeah it'd be interesting to know how much of that is spam... my guess is nearly all of it. I mean if you look at the outcomes of those battles, whenever the computer ramped up it started wreaking havoc very noticeably, but when TLO's apm gets high there's no noticeable change in performance
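
      A sketch of how "momentary" APM like the 1343 figure in this thread can be computed from action timestamps, next to the whole-game average; the 5-second window and the synthetic action list are my own assumptions for illustration.

      ```python
      def average_apm(action_times_s, game_length_s):
          return 60.0 * len(action_times_s) / game_length_s

      def peak_apm(action_times_s, window_s=5.0):
          # Max actions in any sliding window, scaled to a per-minute rate. This is the
          # "momentary APM" that spikes during fights even when the average looks human.
          times = sorted(action_times_s)
          best, start = 0, 0
          for end in range(len(times)):
              while times[end] - times[start] > window_s:
                  start += 1
              best = max(best, end - start + 1)
          return best * (60.0 / window_s)

      # 600 actions spread over 10 minutes, plus a 2-second burst of 40 actions at t=300s.
      actions = [float(i) for i in range(600)] + [300 + i * 0.05 for i in range(40)]
      print(average_apm(actions, 600))          # ~64 APM on average
      print(peak_apm(actions, window_s=5.0))    # several hundred APM at the peak
      ```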

  • @yonglock1306
    @yonglock1306 5 років тому +31

    is it possible that AlphaStar put pylon in the mineral line to bait or lure the player away from scouting the low ground?

    • @maurimu99
      @maurimu99 5 років тому +8

      that is so freaking scary man

    • @Szeszgeri
      @Szeszgeri 5 років тому

      That came to my mind first also... And that's f...ing scary

    • @glenmcgillivray4707
      @glenmcgillivray4707 5 років тому +1

      My read is the AI noticed no pylon or buildings and reacted by proxying itself, finding that the best way to beat an AI proxy is with more proxy. Literally base racing. Then, with perfect micro withdrawing damaged units, it builds superior numbers by any means it can, and whenever it attacks it orders a focus fire followed by a shift attack-move - rinse and repeat to murder Carriers, Motherships and other high-value targets. I am less clear on whether it also splits its outgoing damage; would mass Zealot be good? But supporting with Colossi or Immortals will just see them focused down? Ahh, maybe Dark Templar can be a gas sink!

    • @sikor02
      @sikor02 4 роки тому

      I saw this video soon after it was released, but it just came to my mind when I recalled the pylon in MaNa's base and I wanted to comment the same. That's scary. Without the pylon, MaNa would definitely have gone below and could have destroyed Alpha's proxy.

  • @rouvey
    @rouvey 5 років тому +1

    01:30:20 Apparently in the early days of the AlphaStar League there were some agents that consistently worker rushed and even had a 50% win rate against the Insane AI with those worker rushes. Must have learned a lot of micro there.

  • @SteveRowe
    @SteveRowe 5 років тому +5

    Winter, I think you did a great job covering this topic. I know something about AI, and I love it that you didn't try to talk about something you don't know. You stuck with what you DO know, which is SC2, and that was really valuable in the context of these games.

    • @brendameistar
      @brendameistar 4 роки тому

      Lmao. Shitty attempt to boost your own ego u fkin piece of shit

  • @PaleGhost69
    @PaleGhost69 5 років тому +9

    I'm not sure if I should be proud or ashamed that Alpha makes some of the moves I do... Proud that I make them or Ashamed that I can't do such a high apm!

    • @psychicbot9748
      @psychicbot9748 5 років тому

      It's not only high APM. It also sees the whole map at once, so it's very easy to micro in 5 locations at the same time, while a player is limited to a small screen area.
      Pretty cool technology nonetheless.

    • @udon6031
      @udon6031 5 років тому +2

      @@psychicbot9748 that's why it has limitations put on it. It can see the whole map, but it is only allowed to make actions in one part of it. 30 screens per minute, like it was said in the video. That means it can only switch to a different part of the map once per two seconds

    • @psychicbot9748
      @psychicbot9748 5 років тому

      ​@@udon6031 Isn't it self imposed limitations? They said that it only make a move when it "think" it need to be done.
      Isn't the whole unfair part of playing against the AI that it can make moves no human can?
      Here is the quote from the article:
      "On average, agents 'switched context' about 30 times per minute, similar to MaNa or TLO."
      It's an average, so this means the AI can make 100 focus switches in a 20-second fight and then idle for a long time after, like it did in the fight with mass blink Stalkers.
      So the AI's view was only restricted by the fog of war.

    • @udon6031
      @udon6031 5 років тому

      @@psychicbot9748 oh, okay

  • @033muil
    @033muil 5 років тому

    Hey Winter! To answer your question at 1:03:04 : Applications in medicine such as automated surgery, medical image processing and tumor detection might become one of the most generally impactful applications. Other than that, there are start-ups in San Francisco who sell agents used as middle-management for companies. They can plan projects, allocate tasks between teams, track progress and adjust plans accordingly. These kinds of systems can also be used for production planning. There are bots which can read now (see IBM Watson). We're trying to make bots which can write programs. Really anything which requires a specialized skill-set with well-defined inputs and outputs can be replaced by AI, hypothetically speaking.

  • @0ptixs
    @0ptixs 5 років тому

    I'm not sure if this is common, but the strength of alphastar with these stalkers is it uses the entire health pool of his army, by blinking back the stalkers that get hurt to the back of the army, so the units aren't destroyed but can still be used to poke. That is insane micro, that can really only be pulled off perfectly by a computer

  • @Masus04
    @Masus04 5 років тому +4

    Hey WinterStarcraft, coming at this more from the technical perspective, but as a long-time follower and casual player of SC2 I was wondering about your take on this:
    In machine learning, one of the huge problems is learning policies or strategies that span a long time frame. In SC2 this would be macro, while micro has a shorter reward horizon, meaning you can evaluate how an agent did after a shorter amount of time. To me it seemed that, especially in the two sets vs MaNa, while the agents were generally amazing at micro, they lacked some diversity in macro strategy, like tech switches or basically anything that is not "tech straight to X and build as many as you can". As far as I understand, the build order as well as the unit composition is still learned entirely from scratch, which is impressive to see perform this well, but would you agree that this is a major weakness that can potentially be exploited?

    • @Masus04
      @Masus04 5 років тому +1

      Ok, I take half of that back after having seen game 10 ;)

  • @moonmeandermj2483
    @moonmeandermj2483 5 років тому +9

    I saw that AlphaStar could reach 1500 APM with 1100 EPM; that's clearly not human.

    • @mr420quickscops2
      @mr420quickscops2 5 років тому +4

      Moonmeander Mj TLO was literally sitting at 1800 APM for a while at multiple points. That didn’t look human to me

    • @_justinoroz
      @_justinoroz 5 років тому +1

      Mr420QuIcKsCoPs because it wasn’t. His keybinds artificially pad his APM. His EPM was far lower.

    • @Guztav1337
      @Guztav1337 5 років тому

      I think the numbers were switched.

    • @BusyBasaz
      @BusyBasaz 5 років тому

      @@_justinoroz Macroing is the ability to do more with less clicks. A click is a click. AlphaStar emulates that, you can call it macroing as well on his part.

  • @alexlawson4173
    @alexlawson4173 5 років тому

    Wow. I'm not one for super long videos but I sure am glad I stuck around for the end.

  • @kopperhed4472
    @kopperhed4472 4 роки тому

    Besides the inexorable rise of the machines, the scariest thing here is how serious and professional Winter was. Or giddy, depending on what part of the video you tune in on.

  • @screes620
    @screes620 5 років тому +5

    Limit the max APM to something reasonable, force the AI to interact with the game in the same keyboard/mouse/monitor fashion that players are restricted to, and let it learn with random race / random map. Get back to us in about a month when you think it can beat Serral.

  • @purplebarf
    @purplebarf 5 років тому +4

    A few of the fields where artificial intelligence will help greatly that first come to mind: architecture and city planning, to optimize the ideal amount of housing production for population growth in a way we currently have a hard time predicting adequately;
    agricultural planning, and setting the degree to which crops should be subsidized in order to create the right amount of production;
    the unspoken one is economic interests - obviously very useful for governmental agencies to set the right interest rates, invest in the right stocks, etc.
    A lot of these things depend on future modelling that humans just can't grasp the full spectrum of possibilities for, while artificial intelligence sees it as any other problem to be analyzed, and as a result the AI can commit to the future plan that has the highest chance of success without reservation.

    • @purplebarf
      @purplebarf 5 років тому

      @@blokin5039 I'm dead guy literally just said "no" and left it at that

    • @Strathelme
      @Strathelme 5 років тому

      @@purplebarf ok

  • @Khaldryn
    @Khaldryn 5 років тому

    A fascinating group of games. Watched em several times in game. Look forward to other AlphaStar races.

  • @drakthull1246
    @drakthull1246 5 років тому

    This is ridiculously exciting and motivating to me. As an aspiring video game programmer and starcraft nerd I can appreciate this monument. Keep up the great content as well!

  • @mimszanadunstedt441
    @mimszanadunstedt441 5 років тому +14

    1:39:40 Having to put time into learning the camera means that, of its 200 years of playing SC2, it had more distractions impeding its learning. So it is directly related: the camera use is making it "stupider".

    • @aragustin
      @aragustin 5 років тому +1

      What poor logic; every distraction is part of its learning curve. Every challenge makes it "stupider" - that's why it is learning them.

    • @mimszanadunstedt441
      @mimszanadunstedt441 5 років тому +2

      @@aragustin I'm not saying it shouldn't; I was explaining why it is related, which Winter didn't pick up on - unless you meant Winter was stupid?

    • @andrewferguson6901
      @andrewferguson6901 5 років тому

      @@aragustin the logic is totally reasonable. AlphaStar 3 had to spend more "learning power" to learn how to effectively use the camera, which also means it had less learning power devoted to gameplay(ie, the camera challenge did directly make its play worse). which is a possible reason it had the knowledge gap (no phoenix for the prism) which was a weakness that Mana used to win.

    • @galactica58
      @galactica58 5 років тому

      That is true. If I recall on the article there is a graph displaying the ability over time of the version that sees everything and the version operating via screen. In the beginning the difference between the two was quite overwhelming, but as time passed the gap between them decreased. And while the screen version is still slightly behind the all-seeing version, it seems to have learned to play almost as effectively even with the screen limitation.

    • @tylerdurden3722
      @tylerdurden3722 3 роки тому

      The camera version was trained from scratch. It's not a 3.0 version.
      And it only had 200 years of experience. As opposed to the 400 years of experience of version 2.0.
      So the last version had less training and was less experienced than the previous versions Mana played against.

  • @MrDfrose
    @MrDfrose 5 років тому +3

    Go ask the Deepmind boys for a big bunch of Alphastar Agents duking it out. I think it would provide a bunch of entertainment while the community rides this AI wave.

    • @MrDfrose
      @MrDfrose 5 років тому +2

      big bunch of replays

  • @j-kaithor763
    @j-kaithor763 5 років тому

    I didn’t expect, that I would watch such a long video all through. It was really exiting to watch though.

  • @staizer
    @staizer 5 років тому

    In that first game, AlphaStar was using the workers to gather as many resources as possible, killing off weaker army units instead of workers in order to rebuild stronger units. Then, when TLO started killing off AlphaStar's workers, AlphaStar had a huge reserve of resources that got dumped into replacing the lost worker supply with army units instead. This meant it essentially had 40+ supply worth of army in reserve, stored as resources, and any losses its army suffered would get replaced 2 for 1.

  • @RerikChannel
    @RerikChannel 5 років тому +3

    Sorry for my English, I usually write in Russian, but:
    Anyone at least a little familiar with ML will feel what tremendous work has been done here. The decision making of the AI, its tactical "thinking", its analysis on limited data - the guys from DeepMind deserve a lot of respect.
    On the other hand, look at what we have: an AI that 1) trained for about 200 (!!!) years of game time, 2) has instant reactions, 3) has APM of more than 1000 - and it lost to a human as soon as it no longer had the ability to view the entire map (or was just playing without Phoenixes).
    So human players are still far from being left behind.
    Moreover, there's a second factor: people learn too, and they do it many times faster. I bet that, given a dozen more games, MaNa or even TLO would understand how to play against this AI and win games without any problems.

  • @rustyshackleford11
    @rustyshackleford11 5 років тому +24

    Before even finishing the video or looking at any comments... I'm going to just say what I thought: yes, it is capped at 300 APM, but I'm assuming that the actual 300 actions per minute it is able to do are so efficient that no one would ever be able to come close to being as effective as the AI program. So someone like Serral is incredible, averaging 450 APM, but the bot's actions will be 10,000x more purposeful and effective than any single move of Serral's on average.

    • @Laezar1
      @Laezar1 5 років тому +3

      Well, yeah, that's what makes a player better. Just like a pro player could beat me with capped APM too, because they would be more efficient in their actions than I am. That's the definition of what makes a great player: doing more with the same amount of resources. And those resources could be the units you start with, units to micro in battle, or a limited pool of actions.
      What great SC2 players will tell you is that attention is a resource like any other, and capping the APM, as well as forcing it to only interact with what it can see, is a way to limit that resource pool for the AI. It's probably not exactly the same as a human, but definitely not far off. In my opinion it is a fair match for the most part.

    • @mindyourbusiness4440
      @mindyourbusiness4440 5 років тому

      It was smarter, though. Any other "human" would have taken the fight to protect his bases in the mass Carriers game, but AlphaStar knew this wasn't the right engagement and backed off until the right moment. As for the worker oversaturation, this is actually the right thing to do: at the pro level, harassment is guaranteed to get worker kills, so why not make extras?
      You have to keep in mind this thing analysed thousands if not millions of games, so in the early game what it does is 100% the best that can be done.

    • @Laezar1
      @Laezar1 5 років тому +1

      @@mindyourbusiness4440 Definitely not "the best that can be done". Especially considering that "the best that can be done" really depends on what you expect the rest of the game to be. (for exemple a pylon placement might be optimized for some midgames but not for others).
      Though it's definitely very optimized. But I'm pretty sure there is room for improvement and a more advanced version will find better openings.

    • @mindyourbusiness4440
      @mindyourbusiness4440 5 років тому

      @@Laezar1 For the plans the AI executed, at least, this was the best that could be done. I thought not walling off your main would be a game-ending mistake, but it turned out to be fine if you had a couple of extra workers.

    • @mr420quickscops2
      @mr420quickscops2 5 років тому +1

      Look at TLO's APM the whole game: insanely high, but I'm sure only a small fraction of those actions were actually doing anything at all.
      AlphaStar knows that it can win an unbalanced battle with its micro skill if the opponent is only so many units ahead, so it's not too worried about its building placement and such, because it doesn't think it's gonna get hit, and all those little seconds it saves by placing buildings closer add up in its production and army output.
      I think the extra workers are definitely a calculated thing though; it knows the quicker it gets money in, the more units it can just throw out consistently.
      By not thinking as far ahead, it sort of makes itself better: it doesn't start the game looking at the end game or even the mid game, it comes in with an early victory plan and adapts it as it goes.
      I'm so keen to see this thing advance more and play different races

  • @JarvisValko
    @JarvisValko 5 років тому

    Just read an article in a local newspaper about this and 1 minute later I got a notification that winter uploaded a video on this 😄

  • @TheCrioGod
    @TheCrioGod 5 років тому

    I never comment, but this was tremendously amazing!!! Shattering all the pro gamers conceptions of a build... Nice casting, continue the great work Winter.

  • @Jakeuh
    @Jakeuh 4 роки тому +5

    one scary thought is if you put deepmind in a scenario where it thought a "political campaign" was a game...

  • @davidbodor1762
    @davidbodor1762 5 років тому +5

    It has 100% efficient APM unlike players who do trash apm quite often.

    • @TheVergile
      @TheVergile 5 років тому +1

      Man, I don't know where this is coming from, but there is NO data available right now to support this claim. Are you really saying the AI that is attacking rocks while being shot at is suddenly 100% efficient during fights?
      Also: these kinds of trained networks are very rarely efficient in the way we understand it. They are not programmed with optimal strategies. They basically guess and compare results again and again. It is very unlikely for optimal strategies to be discovered this way, or for perfect micro play to arise.
      And again, the gameplay footage supports this well. It is generally very good at microing, but it also makes what we would call "mistakes".

  • @Aphirium
    @Aphirium 5 років тому

    That pylon after not being able to steal the gas was amazing. "Out of spite" seems very accurate!

  • @maloo2brvo26
    @maloo2brvo26 4 роки тому

    Love to see more of this!

  • @Xizyx
    @Xizyx 5 років тому +3

    Since the AI doesn't seem to be a fan of scouting, perhaps a BLH strat like building expansions in the corners of the map might be a viable way to keep probe losses to a minimum? :)
    More generally, I wonder what the AI would do against BLH-like moves that don't align with the pro-level training it's been given. Would turtling behind mass cannons that can't be attacked with Oracles or lifted by Phoenixes confound it?

    • @Laezar1
      @Laezar1 5 років тому

      Well, honestly, it really depends on whether it tried those strategies during training. So if those strats work on it, it will need to keep training against itself until it discovers them.
      That's one main difference between a human and a neural network: the neural network can only train against itself, so to counter a strategy it also needs to be able to discover it, whereas a human can learn from strategies they wouldn't have been able to think of themselves.
      Well, that's not entirely true; it would be possible to add those strategies to AlphaStar's training set, but the whole point of these neural networks is the self-training part, so it would be a failure anyway if we had to do that.
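
      A toy illustration of that limitation, namely that a self-play agent only gathers experience against strategies that actually appear in its training pool. The strategy names and the payoff table are made up for illustration:

      ```python
      import random

      # beats[a][b] is the probability that strategy a beats strategy b.
      beats = {
          "bio":    {"bio": 0.5, "mech": 0.7, "cheese": 0.3},
          "mech":   {"bio": 0.3, "mech": 0.5, "cheese": 0.8},
          "cheese": {"bio": 0.7, "mech": 0.2, "cheese": 0.5},
      }

      def train_population(pool, generations=1000):
          """Each strategy only accumulates results against opponents it has actually faced."""
          experience = {s: {} for s in pool}
          for _ in range(generations):
              a, b = random.sample(pool, 2)
              a_wins = random.random() < beats[a][b]
              experience[a].setdefault(b, []).append(1 if a_wins else 0)
              experience[b].setdefault(a, []).append(0 if a_wins else 1)
          return experience

      # If "cheese" never appears in the training pool, nobody ever learns anything about it.
      exp = train_population(["bio", "mech"])
      print("cheese" in exp["bio"])  # False: no data, so no counter
      ```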

    • @Laezar1
      @Laezar1 5 років тому

      Actually, I'm wrong: it uses both supervised learning and reinforcement learning. I thought it was using only reinforcement learning. My bad ^^
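
      A toy, runnable sketch of that two-stage pipeline: supervised imitation of "human replays" first, then reinforcement learning from game results. The one-number "policy", the fake replay data, and the win model are purely illustrative assumptions:

      ```python
      import random

      human_replays = [0.8, 0.7, 0.9, 0.75]   # pretend these encode human decisions
      policy = 0.0                             # a single-number "policy"

      # Stage 1: supervised learning - move the policy toward what humans did.
      for target in human_replays * 250:
          policy += 0.01 * (target - policy)

      # Stage 2: reinforcement learning - nudge the policy toward whatever wins.
      def play_match(p):
          # Win probability peaks when the parameter is near 0.95 (an arbitrary choice).
          return random.random() < max(0.0, 1 - abs(0.95 - p))

      for _ in range(5000):
          trial = policy + random.uniform(-0.05, 0.05)   # guess a small variation
          if play_match(trial):
              policy += 0.1 * (trial - policy)           # reinforce variations that won

      # After imitation the policy sits near the human average (~0.79);
      # reinforcement learning then drifts it toward the winning value (~0.95).
      print(round(policy, 2))
      ```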

    • @movax20h
      @movax20h 5 років тому

      I agree. I think throwing off AlphaStar by building secret expansions or something could work; it doesn't scout that much.
      It's also possible it just doesn't matter. Who knows.
      It did train against replays from Blizzard, though (probably tens of thousands of games), and there would have been some of these strange expansions in there too.

    • @bobylapointe6875
      @bobylapointe6875 5 років тому +1

      I think a hidden expansion wouldn't work. It's true that AlphaStar doesn't scout very much, BUT I think from a single scout it can easily do the math and figure out that buildings are missing, based on the timing, the army, etc. In one quick "look" at your base, AlphaStar knows much more than a human and works out the variety of your potential follow-ups in the blink of an eye. Sorry for my bad English.
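
      That "do the math from one scout" idea can be made concrete with back-of-the-envelope income accounting. A minimal sketch; the mining rate and the example numbers are rough assumptions, not exact StarCraft II economy values:

      ```python
      # Rough assumed average mining rate per worker; the real value varies with saturation.
      MINERALS_PER_WORKER_PER_MINUTE = 40

      def unaccounted_minerals(game_minutes, avg_workers, observed_spend):
          """Minerals the opponent must have either banked or spent somewhere unseen."""
          estimated_income = avg_workers * MINERALS_PER_WORKER_PER_MINUTE * game_minutes
          return estimated_income - observed_spend

      # Example: 8 minutes in, roughly 35 workers on average, but the buildings and army
      # visible in one scout only account for about 9,000 minerals.
      missing = unaccounted_minerals(game_minutes=8, avg_workers=35, observed_spend=9000)
      print(missing)  # 2200 -> easily a hidden Nexus (400 minerals) plus unseen units or tech
      ```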

  • @thevicker1001
    @thevicker1001 5 років тому +41

    But, can it beat the mighty florencio?

    • @_Wai_Wai_
      @_Wai_Wai_ 5 років тому +2

      The Florencio strategy may not have been incorporated into Alpha Star's training.

    • @davidurquilla08
      @davidurquilla08 5 років тому

      @@_Wai_Wai_ It has; Blizzard gave it anonymized replays from everyone from Platinum up to Grandmaster. Aside from that, AlphaStar is coming up with entirely new ways to play as I am typing this.

    • @matteofontana_
      @matteofontana_ 5 років тому +4

      @@davidurquilla08 Florencio Has his own league

  • @sevoo1579
    @sevoo1579 4 роки тому

    AlphaStar vs AlphaStar with no limits on APM, screen view, or commands, granted its maximum possibilities, must actually be quite beautiful to watch.

  • @panicnulloverride2021
    @panicnulloverride2021 5 років тому

    Ty 4 covering this, SkyNet is here

  • @ImALeadFarmerMF
    @ImALeadFarmerMF 5 років тому +4

    Old school Jaedong vs AlphaStar SC1
    Opinions?

    • @abmo32
      @abmo32 5 років тому

      Probably the same as here... however, after the way that disruptor hits, SC1 Psi Storms from AlphaStar feel terrifying af

  • @redpillgermany2162
    @redpillgermany2162 5 років тому +4

    Maybe we can learn to understand machines and the universe better using SCII as an interface, a general language, basically?

    • @Optimistas777
      @Optimistas777 5 років тому

      no need, google/deepmind will use knowledge gained from building sc ii agents elsewhere

  • @BothHands1
    @BothHands1 5 років тому

    This is nuts!! I saw PiG's video about it, where he explained the event, but I didn't know it happened already. So glad to see the games now! I just finished the second TLO game, and the way you're talking about it makes me think this thing is even gonna take down Mana. 😳

  • @Tryptic214
    @Tryptic214 5 років тому

    I really want to see that build of AlphaStar that went mass Stalkers fighting against itself. Nothing dies until every single stalker is on red health on both sides.

  • @reidzalewski4563
    @reidzalewski4563 5 років тому +3

    We need a real Protoss. #florencio needs to get in on this.

  • @mimszanadunstedt441
    @mimszanadunstedt441 5 років тому +25

    I'm very curious about future matchups of every type, i.e. TvZ, PvZ, PvT, ZvZ, TvT.

    • @movax20h
      @movax20h 5 років тому

      Educated guess from me is that we will see it this year.

    • @echoeversky
      @echoeversky 5 років тому

      By what.. sometime next week?

    • @Spearra
      @Spearra 5 років тому

      I can just imagine the fuckery it will do with Marines.

  • @apelcius
    @apelcius 5 років тому

    Concerning the question about other applications:
    1) healthcare. Diagnosis, triage, monitoring, etc. Healthcare personnel are so understaffed and overworked as is, that a single ai would be able to save so many lives. Doctors only spend 15 minutes looking at a file and talking to a patient. A single ai would not forget a patient (assuming data isn't deleted), could think and compare with just seconds.
    2) energy harvesting. Determining how to most efficiently harvest energy, into electrical energy, could save our climates so much faster than anything else. A learning ai could run and learn from possible plans. Then could learn about possible area differences, etc.

  • @kurtsmock2246
    @kurtsmock2246 5 років тому +2

    I've been away from Starcraft II for a long while, but you KNOW I had to come back to see this. Never saw you doing UA-cam back in the day Winter... Saw a couple of your games when you were coming up. You are really good at the UA-cam game man. Really good. You remind me of Husky, but better. Good on ya mate. Be well. Thanks for the vid.

  • @xena16ify
    @xena16ify 5 років тому +3

    quite impressive but can it beat the cheese king, the mad genius Has😂

  • @dpcito1472
    @dpcito1472 5 років тому +3

    The military is going to use this.

  • @kyleosho
    @kyleosho 5 років тому

    This was super interesting to me- thx for the share!

  • @lavalampex
    @lavalampex 5 років тому

    6:20 I love the fact that imperfect information leads to mind-games in AI.

  • @davidlangley9287
    @davidlangley9287 5 років тому +15

    This is the beginning of the end....

    • @jungoogie
      @jungoogie 5 років тому

      DER TAK'N ER JERRRBS!

    • @jbeard3390
      @jbeard3390 5 років тому +1

      This is the end of the beginning. It's mid game now.

  • @korbinianbaier9712
    @korbinianbaier9712 5 років тому +23

    1500 apm in a fight, GG 1:13:53

    • @gokercakr693
      @gokercakr693 5 років тому +2

      Yeah, not fair :/

    • @Xlore127
      @Xlore127 5 років тому +2

      TLO gets 1800 apm early on...

    • @wannahockachewie897
      @wannahockachewie897 5 років тому +5

      @@Xlore127 Yes, but AlphaStar is making good use of every single one of those clicks. A human physically cannot do that.

    • @TheVergile
      @TheVergile 5 років тому +4

      @@wannahockachewie897 and that is based on what kind of data exactly?

    • @Spearra
      @Spearra 5 років тому

      @@TheVergile Every competitive game has "no human can do that". Then years later it is the next average. Then the cycle loops. It never ends.

  • @DizzerJoz
    @DizzerJoz 5 років тому +1

    AlphaStar targeting the rocks is its way of mocking us. Not just "Puny humans, you're so weak I can target the rocks and still beat you lol" but "I've never had to learn to avoid the rocks because you're not playing well enough to force me to."

  • @MrRolnicek
    @MrRolnicek 5 років тому +1

    I think the best explanation of AlphaStar's vision is:
    It doesn't see the rendered game; instead it sees an inflated version of the minimap (one that also shows unit types), and it can control units directly from that minimap.
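
    For anyone curious what "seeing an inflated minimap" might look like in code, here is a minimal sketch of a feature-layer style observation plus an order issued in those same coordinates. The layer names, the 64x64 resolution, and the issue_order helper are assumptions for illustration, not DeepMind's or Blizzard's actual interface:

    ```python
    import numpy as np

    RESOLUTION = 64
    LAYERS = ["unit_type", "ownership", "visibility", "height_map"]

    # One observation: a (layers, height, width) stack of small integers, not pixels.
    observation = np.zeros((len(LAYERS), RESOLUTION, RESOLUTION), dtype=np.int32)

    def issue_order(selected_units, action_name, x, y):
        """An order is just an action name plus a coordinate on the same grid the agent sees."""
        assert 0 <= x < RESOLUTION and 0 <= y < RESOLUTION
        return {"units": selected_units, "action": action_name, "target": (x, y)}

    # e.g. send a group of stalkers toward the top-right corner of the map.
    order = issue_order(selected_units=["stalker"] * 8, action_name="move", x=60, y=3)
    print(order["target"])  # (60, 3)
    ```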

  • @LUKAS-mh6op
    @LUKAS-mh6op 5 років тому +23

    1 dislike from LiquidTLO

    • @me1970
      @me1970 5 років тому +3

      and 1 from Mana