Thanks again, Wired. More collabs in 2024? 👀
How high an Elo could you beat if you had to pre-move each of your moves? (provided that the opponent doesn't know about this)
Yoo love you levy ❤
@@Jee2024IIT This time we have to crack it!
Why would anyone want to see you lose again?😏
wake up, ladies and gentlemen.
@GothamChess @Wired - thank you for having me on to talk about computer chess! It's been one of my passions for a long time, and it was so much fun to discuss with you.
whats up w the @AGMario_ subscription man
u r a legend!
great, concise explanations!
You made a typo in tagging @GothamChess
Nice interview Gary ; ) it made waves here in the chess community (and in the Stockfish community)
Playing against Stockfish is like competing in arm wrestling against an industrial press, basically.
perfectly said.
Or trying to outrun a sports car
except you can have a pocket industrial press anywhere you go and even conceal it in a way that no one will notice at first if you use it against them
@@saudude2174 : well... Yes...? Metaphors have limited mileage, as always. XD
@@MattiaBulgarelli ITS BAD, ITS JUST BAD, DEAL WITH IT BRUH. YOUR METAPHOR ELO IS 800 AT BEST. IM TALKING 3000, 3500 ELO METAPHORS HERE XD ECKS DEE X3
Stockfish never fails to put Levy in a video
Only time the statement is true.
goated comment
Stockfish already foresaw this outcome.
Since ken banned this is infecting everyone
Fails never video to put stockfish in a Levy
It took him 34 moves to lose to Stockfish? I could do it much faster than that.
I can do it in 10
@@NOneed204 I can do it in 4
@@saucy_dragon1566I can do it in 3
@@saucy_dragon1566 you noobs, i can do it in 2 😎
@@Qwty163 I can lose without even playing
This guy should make his own YouTube channel about chess
This guy is too talented to waste his time with a youtube channel.
Yeah and maybe he can name it GothamChess that would make a cool name
And maybe also write a book about chess
@@andreasmatthies5517 oh he should be a gm then 💀💀💀💀
@@hanaka2640 I don't talk about chess and of course I don't talk about Levy.
Stockfish be like: You missed mate in 54? You filthy casual, my suggested move is to never play chess again.
1. e4 mate in 67. You resign?
make a version of stockfish with a really mean AI attached to it that insults your intelligence the entire time
weird fetish but ok @@charliemcmillan4561
"Your life, literally has the value of a summer ant." - Stockfish@@charliemcmillan4561
What about a nice game of global thermonuclear war ? /Joshua
Human: *performs opening move*
Stockfish: “after considering half a billion possibilities in a million different realities, I will play knight to F6 🤓”
It is insane that this sounds like an exaggeration or something said by a supervillain, but it's the truth.
That's exactly how it works. Stupid supercomputer
@@mahfuzali643 The AI overlords shall come unto you first for insulting them!
Stockfish after seeing ur opening be like: u're already dead😅
*first move*
Stockfish: And I'll mark that as a win!
I don't even see the opponents bishop on the opposite side of the diagonal, let alone seeing 2-3 moves into the future
Cuz ur bad
Fuckin' casuals
@@jessetrueba9578 yes. That is the joke, you buffoon.
"Why didn't the game end when I play checkmate? Oh shi- "
2 moves is crazy. if i throw a jab i should just throw a hook cause youre going to sleep with that logic you NPC get gud nub
Just wait until they hear about Mittens
I think levy already drew against it
That thing is evil
💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀 also 69th like
@@ecardozo7043 with the help of that fishy bot
Mittens is stockfish
I know he is an IM, but surviving 35 moves against Stockfish is seriously impressive. I wish I could survive 35 against my 1000 Elo opponents.
Against Stockfish, it’s different. Many decently strong players can survive that many moves against Stockfish if they try to defend long enough. That’s because Stockfish plays perfectly and destroys you in the most methodical manner possible. If you keep a closed position and dance around for a bit, it will take longer to mate you than if you tried to play to win against Stockfish.
yea cause u usually only play defensive against stockfish
stockfish would destroy you as soon as u open up your position and try to attack.
@@moatef1886 I'd say Leela is more methodical than stockfish in general, stockfish tends to go for hail mary tactics a bit more often
If you're not surviving 35 moves against 1000 Elo opponents then you must be really missing some basic stuff. If you just focus on not giving pieces away and following an actual opening you'll improve massively.
@@reckoner1913 Sounds like how to make chess boring 101 ;)
I love when Levy appears in a video he didn't upload because the title and thumbnail actually tells you what to expect.
💀💀💀💀💀
HAH! Saltiest fanbase on YouTube, I love it
gothamchess fans hate gothamchess lol
If this was on Gotham's channel it would be named something like “I’M DONE!!” or “Stockfish SOLVED Chess???”
This is actually the main reason I stopped watching his videos.
1. Pawn to e4
Stockfish: forced checkmate in 35 moves, please press the resign button now to save me the computational trouble.
😂😂😂😂😂😂😂
Fun fact:
While the Alpha-Beta pruning technique is effective 99% of the time, there are a few rare cases where the best move in a position looks so unbelievably absurd that even Stockfish can't find it. That happens because the move looks so stupid that the pruning algorithm immediately discards it without further evaluation. This allowed humans to compose chess puzzles that even chess engines couldn't solve. A famous example of such a position is this composed puzzle:
n1QBq1k1/5p1p/5KP1/p7/8/8/8/8 w - - 0 1
**SPOILERS IF YOU WANT TO SOLVE THE PUZZLE FOR YOURSELF**
At first, Stockfish evaluates the position as dead equal, but if you play the move Bc7!!, it immediately finds the mate in 11 moves. The reason it wasn't initially able to find the checkmate is that Bc7 looked so absurd that the pruning immediately discarded it.
Maybe one of the reasons why stockfish is having a hard time beating alphazero
How do you read that?? I know chess notation but this seems to be also sharing the board position and I can't figure it out
@@angbataa Stockfish 8 had trouble with AlphaZero 9 years ago; if AlphaZero came out of retirement today, it would lose as badly to Stockfish 17 as Stockfish 8 lost to AlphaZero
I hate to be pedantic (lying) but it's not alpha beta that's causing the incorrectness. Alpha beta will always find the optimal move according to whatever heuristic, it's provably correct. If it's failing to find an optimal move it's because the heuristic function isn't evaluating it high enough.
@larryphotography If you're still interested, look online for FEN format used for encoding chess positions.
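For anyone else puzzled by that string: it's FEN (Forsyth-Edwards Notation), which packs the whole board state into one line: piece placement rank by rank (capitals for White), side to move, castling rights, en passant square, and move counters. A quick sketch of decoding it, assuming you have the python-chess library installed:

```python
# Decoding the FEN string from the puzzle above with python-chess.
import chess

board = chess.Board("n1QBq1k1/5p1p/5KP1/p7/8/8/8/8 w - - 0 1")
print(board)  # prints the 8x8 board one rank per line, capitals = White
```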
“Only about 10-20 TB of data, which is manageable”
Person prior to 2000: *mindblown*
I imagine someone prior to 2000 asking what tuberculosis has to do with data.
In 2003 I downloaded a song that was 2.1mb onto my dad's laptop and it got so hot it turned off. Times have changed 😂
And in 2024 you can put that on a single disk.
I remember taking a week to download a... borrowed copy of Office 2000 via dialup.
My DX-2 66 super computer, which I loved, had a 540 Mbyte HDD.
8:55 Levy on Wired: Stockfish is a very specialized AI
Levy on GothamChess: Stockfish is a scumbag
Stockfish is a very specialized scumbag.
both statements are true
Levy: [builds a YouTube career roasting 500 rated bozos]
Stockfish: [exists]
Levy: "Turns out the bozo was me all along"
Loving the GothamWIRED collabs!
moirails fr
lol '[builds a YouTube career roasting 500 rated bozos]' you have great humor
Stockfish plays like it already knows how the game is going to end and happily ignores all the pieces that aren't going to be involved in that ending.
A Game of Shadows vibes.
As someone who's recently learned to play chess on an intermediate level, I highly appreciate this video
what bro?
This video is so good on so many levels. It's one thing to discuss the capability of a computer. It's another thing to be able to explain to the common person why this computer is so good and to make the whole explanation so interesting. Add Levy's humor and his ability to explain things very well, mix that with all that the Wired editorial staff can bring to the table, and it's just wow. This content is just friggin awesome. Thanks, all involved!
So basically the answer to every single question is that Stockfish just analyzes almost every imaginable position lol
the real "skill" in stockfish is in the evaluation function. without it being as good as it is it doesn't matter how far it can calculate long as it doesn't find a checkmate
that is self evident
If you paid attention it doesn't analyse almost every imaginable position lol. It discards the trash moves and only looks into the good ones further.
It's really the Alpha-Beta technique that's the magic. That and having solved endgames
It's actually the exact opposite. The "strength" of a chess engine is determined by how well it can decide which moves _not_ to waste time analysing. AlphaZero introduced the idea of using neural networks to make these decisions and Stockfish has now built on that idea as well.
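To ground the pruning talk in this thread, here is a minimal alpha-beta sketch in the negamax style, run on a toy game tree. This is a hypothetical illustration, not Stockfish's actual search: real engines layer move ordering, transposition tables, and the neural/heuristic forward pruning discussed above on top of this skeleton.

```python
class Node:
    """A toy game-tree node: either a leaf with a score, or children."""
    def __init__(self, score=None, children=()):
        self.score, self.children = score, children

def alphabeta(node, alpha, beta):
    """Negamax with alpha-beta pruning; scores are from the side to move."""
    if not node.children:                 # leaf: static evaluation
        return node.score
    best = -float("inf")
    for child in node.children:           # good move ordering prunes more
        best = max(best, -alphabeta(child, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:                 # opponent already has a better
            break                         # option elsewhere: prune
    return best

# Tiny example: two candidate moves, each answered by two replies.
root = Node(children=(
    Node(children=(Node(score=-3), Node(score=-1))),   # move A
    Node(children=(Node(score=-5), Node(score=2))),    # move B
))
print(alphabeta(root, -float("inf"), float("inf")))    # -3: move A is best
```

Note that the pruning here only skips lines that provably cannot change the result, which is why the reply above is right that plain alpha-beta is sound; misses come from depth limits and heuristic forward pruning.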
Levy is such a kind person. Never fails to selflessly promote Magnus.
I love how GMs don't even get on this. All the less incentive to be one when you're more influential than most GMs. Props Gotham
People are picked based on follower count, not skill. They want to ensure high view counts.
A video like this isn't just about one's ability at chess, but one's ability to communicate. GothamChess is very good at both.
Great practitioners don't necessarily make great educators. This is true in basically all domains.
@@dalton_c particularly true for chess, in my opinion. Players of GM caliber are often so gifted at chess that I think they struggle to understand why less gifted people can't learn certain concepts that seem obvious to them.
Levy is a tremendous communicator and I don't know that Hikaru could humble himself to a video like this.
This is probably my favorite GothamChess video ever. It's great to see the inner workings of engines being communicated to the chess community. I feel like a lot of players, even strong ones, don't understand what the engine eval is really saying, and hopefully this helps!
This is one of the best interviews on any topic. Really well produced.
As someone who has implemented Stockfish in their own project, I already knew most of this, but I didn't realize just how many moves Stockfish looks at when given full power.
I'm confused. You implemented it but don't understand it?
@@tomlxyz the algorithm is one thing. Raw computing power is another major thing. Some random guy in a room doesn't have terabytes of RAM or something to build his engine
I would assume it's just bounded by CPU and RAM?
@@wlockuz4467 Yes. I think it's easier to run low on processing resources than the memory.
@@tomlxyz It likely just means he built a chess UI on top of stockfish. No, you don't have to know the details of how the engine works to do that.
Levy truly going for the "most times on WIRED" title, at least a more realistic goal than the titles Hikaru would have gone for...
I love the part where Levy said he sometimes flips a coin to decide between three different moves.
It's actually possible and a fun math riddle: how can you decide between three options, with equal likelihood, if you only have a fair coin?
Answer: designate one option as HEADS-HEADS, one as HEADS-TAILS, and one as TAILS-HEADS. Flip the coin twice, and if you get a result corresponding to an option, take the option. If you get TAILS-TAILS instead, flip the coin twice again and repeat until you don't get TAILS-TAILS.
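The riddle above is rejection sampling in disguise; a runnable sketch (random.randint stands in for the fair coin):

```python
import random

def flip():
    """One fair coin flip: 1 for heads, 0 for tails."""
    return random.randint(0, 1)

def pick_one_of_three(options):
    """Choose uniformly among exactly three options with a fair coin."""
    while True:
        first, second = flip(), flip()   # four equally likely outcomes
        if (first, second) != (0, 0):    # reject TAILS-TAILS and reflip
            return options[2 * first + second - 1]

print(pick_one_of_three(["move A", "move B", "move C"]))
```

Each accepted outcome has probability 1/4, so conditioned on acceptance every option comes up exactly 1/3 of the time.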
Stockfish has more positions ready than the Kama Sutra.
Wtf
ayo
Very sick but funny
Levy: "Pawn to D5"
Stockfish: "Reverse cowgirl"
Yes, but only a couple more...
so cool that levy lets wired show up on his videos
idk why but the explanation of stockfish's 35 move win was so wild to me.
I feel like Levy was asking questions and the stockfish guy kept giving him the same answer about how stockfish looks into the future better than a human.
Because that’s what stockfish does. It’s a massive data crunching probability machine. It’s not really ‘playing’ like a human does
@@HkFinn83 Even 5 months later, that's a great way to conceptualize it, and why I will always prefer playing it against another person, and in a casual setting.
Stockfish : 14,000,605 total possiblities
Iron man : how many do we win
Stockfish : 1 😶
Jarvis or Friday would wreck Stockfish in a game of chess.
Adding the checkmate sound at the end was a nice touch
Just like in any video game, the AI can become unbeatable. It knows your every move, reacts on the first frame you act, and responds with the counter that beats it. You can only win when it lets you win.
Their reaction time is one of the biggest driving factors behind their ability to win. You see it in RTS's where the AI might not be building as efficiently as possible, but its unit management is unparalleled with 10x as many actions per second as human players. I'd love to see AI vs human when speed is equalized, then it's really about who is smarter. E.g. it takes a few seconds to even come up with legal moves, then several minutes to evaluate them. Here, you take away AI's biggest advantage, which is pure speed. Now it's all about being able to read and evaluate the board the best.
@@festivebear9946 Last time I checked, Leela Chess Zero on one node (playing without search, using intuition only) is about GM level in rapid time control, and Leela on about 10 nodes per move is roughly GM on classic time control. Maybe a little give and take, but I think that shows a rough picture on where AI stands without doing any calculation, or doing as few calculations as a human would
@@quag443 That is absolutely insane, thanks for the info!
I can't remember who said this quote but I love it...
"A computer winning a Chess competition is no more impressive than a forklift truck winning a weight lifting competition. "
It might be impressive if it was a competition with only other different forklift trucks. Great quote though lol
@@icycloud6823 ngl i would watch a competition like that lmao
I'd love to see a match where stockfish's evaluation time is equalized to that of a human. E.g. a few seconds to find each possible move, then a few minutes to evaluate the positional score for each move. Would give a more realistic sense as to how strong the algorithm is
@@festivebear9946That still wouldn't be fair though. In 30 seconds, Stockfish could evaluate a position and make the best move that a human would take hours to calculate.
@@mysticalmagic9259 But the question is, how well could it evaluate the position? Even if it can do it quite quickly, limiting how deep it can go stresses the algorithm of deciding the "best" move, since the strength of the engine is being able to weigh all possible moves like 25 moves ahead. So how good is the algorithm when limited in time and moves?
0:12 sums up why i don't like chess apps
I wish you could have asked a bit more about how it's able to score a position. We know it looks at all the possibilities, but to assign a score to one position, it needs to look at the possibilities of that position, and so on. When it finally hits its limit of depth (or time), how is it able to rank a position without going any deeper (after which it can go back up the tree)?
It's briefly mentioned when he explains how Stockfish (and all the other chess engines) builds a tree of possible moves and prunes it with the alpha-beta algorithm. That in itself is worth an entire video, and such a video exists (search "alpha beta algorithm"). The evaluation function itself is way too complicated to be in this video; it would easily take an hour to explain just the basics of it.
@@InXLsisDeo which as others have pointed out is exactly the problem - without going into the details of HOW the evaluation function works, Linscott is left to answer basically every Q with "Stockfish looks at billions of positions and chooses the move with the best winning chances"
@@InXLsisDeo can't he oversimplify it in some way? There are all sorts of relatively short YouTube videos about very complicated topics
@@tomlxyz it's a WIRED video, it's for the general, not too nerdy, public.
@@InXLsisDeo It's also made more complicated by the fact Stockfish now has NNUE, a neural network based evaluation in some positions, when it used to use a hand-crafted one that was still superhuman in performance, which would have been easier to explain, "material count, piece position, pawn structures, etc. get added up in each position".
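For a flavor of that older hand-crafted style, here is a toy, hypothetical material-only evaluator with the classic piece values. Stockfish's old evaluation had hundreds of extra terms (piece-square tables, pawn structure, king safety), and NNUE replaces the whole thing with a small neural network:

```python
# Toy material-count evaluation in centipawns; uppercase = White pieces.
PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}

def evaluate_material(pieces):
    """pieces: iterable of piece letters, e.g. from a FEN board field."""
    score = 0
    for piece in pieces:
        value = PIECE_VALUES.get(piece.upper(), 0)  # kings score 0 here
        score += value if piece.isupper() else -value
    return score

print(evaluate_material("KQRRBBNNPPPPPPPP" + "kqrrbbnppppppp"))
# 420: White is up a knight and a pawn in this made-up material balance
```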
I wish this was longer. I wish we could get the full game.
I'm hoping/expecting Levy to upload and discuss it on his channel.
Exactly. Tf was that😂
Or maybe…🤷🏼♂️
This guy looks like he could sacrifice THE ROOOOOOOOOOOOOOOKKKKKKK
This is also why new players are so tempted to use engines, and also why it is very easy to catch them if they do.
Cool video. We all know Levy knows what tablebase is but he’s a good sport. That’s crazy Fabi could have been world champion if he just trapped his knight.
have to admit Levy is a showman
Levy's so good they can bring him on to interview someone else and the video is still awesome.
The man feels like he was a human created by the AI, whose sole purpose was to interact with a human to see their perspective on the game.
as someone who's very interested in the world of machine learning (and has looked into how stockfish works), it's cool seeing a video covering the fundamental concepts like this. i hope we get more videos like this
Bro, they literally brute forced all the positions with 7 pieces or fewer. That's insane! Love it!
I like how the automatically driven car at the end just turned on the windshield wipers, like it needed to see through the windshield
I love Levy's videos. Using his advice I managed to get 1500 ELO on Lichess!
Congrats! I'm trying to reach 2000 Elo right now but it's so difficult, the players I encounter are so serious
Nice one 😂😂😂
This format is highly entertaining. Questions are relevant, structure is good, kudos to the editor, and Levy comes off as highly capable. More of this!
To think, there was a time when we thought it would be impossible to ever teach a computer to play chess competitively against people. Until Deep Blue beat the best of us.
Who’s “we”
No one seriously informed or involved in computers ever thought that though
Levy: Congrats for 1 more video!!! So proud of you!!!
Didn't know Ed Helms programmed Stockfish. Pretty cool.
hahahahaha I was just thinking: "this guy looks so familiar"
He didn't, he only worked on chess engines, not stockfish..
Me: opening with pe4
Stockfish: mate in 142
Me: pd3
Stockfish: wrong answer, mate in 44
Levy never fails to be in a Wired video.
Another great video with Levy! Glad to see more chess content on this channel, especially with GothamChess :)
I didn't know stockfish had neural elements. I thought it was an all classical algo. It would be interesting to hear a more computer science exact walk through of how it works. If well explained I think most could understand it.
I think they added the neural stuff in later versions, though it was already one of the strongest before they did.
It's been full neural since 2023.
@@AnarexicSumo it can't be all neural if it searches millions of positions, I'm certainly not familiar with a neural net architecture that does iteration like that. But as much neural use as they can perhaps
@@DanFrederiksen it is fully neural, they just use a different and much smaller net compared to the big ones used in AlphaZero and Leela Zero, that's why it can reach millions of nodes per second. if you want to look into it more, search up NNUE on Google
The one with Magnus and Fabiano seemed more like an "I respect you enough not to waste our time playing out what I might misplay"
Imagine thinking about endgame at the 2nd move
Worth noting that the 35 move checkmate would be Magnus playing PERFECTLY against a PERFECT attack, but that also meant there were OTHER checkmates in less moves if Magnus played any less than perfect. Crazy.
I don't even play chess but this is fascinating
Give it a go! Only 8 months ago I dismissed it as boring and only played by stuffy old men, but it is, like you said, incredibly fascinating. The possibilities of this game are endless and it has been studied for centuries
@goonerboy93 I think I just might, thanks for the encouragement
9:49 That is such a nice sound effect
It's so in the right pocket of do dat it's like
Hard to explain
Evidently
This Stockfish played many games with 100% accuracy, according to Stockfish. I believe everyone would find this interesting.
Great video!! Fun and informative. I never knew stockfish was so strong. That thing about the way it plays when the game is down to 7 pieces - that's scary.
Player: am I going to lose?
Stockfish: it's a logical certainty.
😨
keep in mind these endgame databases are available for all engines to use, but yeah. Sometimes this can lead to some diabolical results where a defending engine is basically trying to avoid entering a lost tablebase position but doesn't see the mate itself, so it makes a technically worse move and turns a mate in 21 into a mate in 3.
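Those tablebases really are available to anyone. A sketch of probing them with the python-chess library, assuming you've downloaded Syzygy table files to a local directory (the path below is a placeholder; the 3-5 piece tables are a small download, the 7-piece set is the huge one mentioned in the video):

```python
# Probing Syzygy endgame tablebases with python-chess.
import chess
import chess.syzygy

board = chess.Board("4k3/8/8/8/8/8/8/Q3K3 w - - 0 1")  # KQ vs K

with chess.syzygy.open_tablebase("/path/to/syzygy") as tablebase:
    print(tablebase.probe_wdl(board))  # win/draw/loss for the side to move
    print(tablebase.probe_dtz(board))  # plies to a zeroing move (capture,
                                       # pawn move, or mate) with best play
```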
I love how Levy is asking all these questions like he didn't already know most of the answers
What a dumb comment, that's how you teach people
This is one of the better vids of this series and maybe the whole wired asking "experts" series.
Because he's the hero Gotham deserves, and the one it desperately needs right now...
Gary Linscott - the main developer of Stockfish, the creator of Fishtest, and the founder of Leela Chess Zero.
This style of editing and pacing is super enjoyable. Please keep it up wired!
If anyone's wondering about the sound: Brendon Moeller - Low Impact.
The fact Alpha Zero made Stockfish look silly after only 4 hours of learning chess by playing against itself is both fascinating and scary at the same time.
It played against stockfish 8 running on the hardware equivalent to that of a laptop… so it was always going to win
They saturated the network in 4 hours. Had they trained it for a day, it wouldn't have played better.
@@liamb5791 maybe so but I think you're missing the point. I know it's not apples to apples; Stockfish agreed to the terms (as did others), but GPU will crush CPU on parallel computing and that's the difference. The proof was in the neural network of AlphaZero teaching itself, which does require specialized hardware. GPUs will take over tasks that CPUs can never do, no matter how much CPUs are strengthened. It would be fun to run it back today and see how it plays out.
@@forgetaboutit1069 Stockfish has long since surpassed AlphaZero. Another engine called Leela adopted that style of learning but it is still worse than Stockfish
@@DarthVader-wk9sd they played in 2017. Hope it long passed it lol. But the main point is GPU engines will eventually wipe the floor with CPU engines.
This video: ten minutes of gothamchess asking how stockfish works and the dude saying trees.
Stockfish knows more positions than Johnny Sins.
I always love seeing Levy on WIRED.
"You idiots!! Mate in 35!!!" 😂😂
Would love a video comparing AlphaZero to Stockfish, and the differences in the way they 'think'
Brilliant video. Makes one appreciate the chess engines!
the beauty of this video is that it is entertaining and contains new information for both people who dont play chess at all and people who are really good at chess.
really interesting how the AI is designed to 'think'.
thanks wired, thanks levy, thanks... stockfish i guess!? 😅
I just played against Stockfish, and I also survived 35 moves! So against Stockfish, Levy and I are on the same level. My elo is 1100.
Stockfish knows what moves to make by knowing which moves not to make.
What I really want is the rematch between Alphazero and Stockfish
Didn’t alpha zero mop the floor with sf?
@@JoseRamirez-qd5os AlphaZero beat Stockfish 8, not Stockfish 15/16
There will be a day when white plays e4, black responds with e5, and stockfish says: +M250
Levy be making fun of people for blundering in GTE when he casually makes 2 blunders and 2 mistakes
He's presumably playing Stockfish at its highest processing power, so it could label something a mistake that even base Stockfish would think is the best move.
@@rokeYouuer Yea I do notice that when I play games, but it was just a joke
Wired making Gotham act like he doesn't know everything the expert is saying already
What happens if more than one move is tied for best move? How does it choose? You say that it evaluates them but a tie is possible, no?
I don't know about Stockfish, but in algorithms that try to maximize a certain result, there are often several factors for determining an optimal solution, with one taking precedence over others. If two moves have identical values for that most important factor, then it would move on to the next most important factor, and so on until one was greater than the other. Alternatively, they could have some function of all these factors, and when combining them at the end, come up with some final number that is guaranteed to be unique, or at least unique with 99.9999% certainty. Remember, it is assessing billions of branching paths, so the probability of any two moves having an identical "likelihood of winning" value is exceedingly low. However, if after all of these sophisticated algorithms two moves still have the same "likelihood of winning" value, it would likely just pick one randomly.
It will just play the first one. There is always a difference between 2 "best" moves, even if just by 0.05.
@@Celatra there absolutely is not always one best move in every position. There can be 10 different checkmates in 1 in a position
@@presleyelisememorial yes, but one of them leads to a faster mate than the others. The fewer moves spent the better
@@Celatra That isn't necessarily true. Stop babbling about things you know nothing about. Moves, and not just checkmates, can in fact have the same value. It just picks one.
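A tiny sketch of how such ties might be broken: score every legal move (score_move here is a hypothetical stand-in for a full search), keep the moves tied at the top, and pick one. Replacing random.choice with tied[0] gives the deterministic "just play the first one" behavior from this thread:

```python
import random

def choose_move(legal_moves, score_move):
    """Pick a best-scoring move, breaking exact ties at random."""
    scores = {move: score_move(move) for move in legal_moves}
    best = max(scores.values())
    tied = [move for move, score in scores.items() if score == best]
    return random.choice(tied)

# Example: two moves tied at +0.35, one clearly worse.
print(choose_move(["Nf3", "d4", "h4"],
                  {"Nf3": 0.35, "d4": 0.35, "h4": -0.50}.get))
```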
What I don’t understand is why would stockfish pick a different opening on another game?
It has already assessed all possible structures for all the openings and it knows which one scores the best.
In your game, after 1.d4 it responded with Nf3, but I’ve seen it respond with d5 too.
different time controls, different hardware and the fact that sf is non-deterministic on more than 1 thread
if multiple moves have the same win rate, then it will just randomly choose one.
love levy's humor
Awesome video! @GothamChess thanks for falling on the sword for the rest of us. 😂
Stockfish: i am unbeatable
Me: *turns off computer* checkmate
I give this video a (?!) "This permits the opponent to eventually win a pawn" out of 10
My boy Gotham at it again.
Levy never fails to do this again
Your videos inspire me to keep trading! I had quit because of expensive courses, but now I'm back in the game thanks to you.
I love how Levy basically asks the same question over and over (how does it know beginning/middle game/end game) and Gary tries to answer in different ways, even though stockfish literally does the same thing every turn - it builds a game tree based on the current position.
Well, yes and no.
While the opening and middle game are handled the same way, a decision tree with evaluation criteria to select the best move for that board state, the end game is not.
Once the piece count drops to 7 or fewer, the engine switches to tablebases in which the game has been brute-force solved. Meaning it knows every single position and the way the remaining pieces will play out.
"even though stockfish literally does the same thing every turn"
No, you should read how stockfish is actually implemented.
As someone who wrote a chess engine by taking most of the algorithms that are on the chess programming wiki and throwing them together, I can say that you're kind of wrong.
Stockfish has SO MANY methods it uses that he could spend hours describing each one, a real answer would go for days.
>> SO MANY methods...
I was a little surprised they didn't mention that. My understanding is that the "old" heuristics/expert system evaluator outperforms the neural net evaluator except in a few specific phases of the game.
Stockfish is like the tractor man driving over kids.
Kids are fast but the tractor man is faster.
Like for Gary Linscott, a legitimate expert, an engineer and not some influencer bozo
Regarding that pawn move in front of the King, maybe Stockfish plays something like that with the goal of getting into a future position that is advantageous. And that advantageous position might be recognizable to you. I wonder if, as a human player, one can see a weird Stockfish move and then understand what future position the bot wants, and then play around that.
Stockfish just goes down every branch of possibilities (permutations). Humans use indicators or 'mental cues' to quickly evaluate whether there is a higher likelihood that more of these branches, at that moment of the game, will go in their favor. So doubled pawns would be one of those cues, or knights in the center of the board, bishops on a clear diagonal, etc. The more cues we have, the more certain we are that a position will likely end up in our favor.

This is why learning fundamentals is important: these fundamentals will lead to more favorable structures and thus more favorable outcomes in theory. The cues become more complex and you start adding more and more (like pins, sacrifices etc) as your chess skills progress. This is probably the biggest calculation being done. Then chess players will additionally calculate individual lines a couple of moves deep, not every line but a few important ones, by first quickly throwing away the obviously horrible ones. And Magnus and Hikaru run Stockfish light, pretty much.
Summarised the entire process of learning chess in 1 para.
You can sum it up in one sentence: chess is all about pattern recognition
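One of those cues, written down as code: counting doubled pawns. A hypothetical sketch assuming pawns come as (file, rank) pairs; a real evaluator would weight this alongside many other cues:

```python
from collections import Counter

def doubled_pawn_count(pawn_squares):
    """Extra pawns stacked on an already-occupied file."""
    per_file = Counter(file for file, rank in pawn_squares)
    return sum(count - 1 for count in per_file.values() if count > 1)

# Pawns on c2, c3 and e4: one doubled pawn on the c-file.
print(doubled_pawn_count([("c", 2), ("c", 3), ("e", 4)]))  # 1
```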
“How does stockfish think?”
“I dunno, it’s just one big neural network”
So what happens if you play Stockfish vs Stockfish? Is it 50/50 between them? Does the player who goes first get an advantage? Would they just play the exact same game every time, since they would choose the best move, which would be the same every game they played?
They would draw every time, as both would see their moves as the best and wouldn't be able to capitalize on any advantage
They would mostly draw, although each will win some games; they still end up with about the same score. That's why, when different chess engines battle, the first 10-15 moves are based on opening books before the computers start thinking
It is always a draw. This is why in Computer Chess Tournaments, they are forced to play different openings for a set number of moves and then play on their own.
For example, Stockfish will play Leela on a set opening. Both play one game as White and one as Black. If Stockfish can win as White and defend as Black, it is considered the victor and stronger computer. They do this for hundreds of different openings.
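That paired-opening format is easy to sketch. Here play_game is a hypothetical stand-in returning 1, 0.5, or 0 from White's perspective; each opening is played once with each engine as White, so neither side can hide behind White's first-move advantage:

```python
def run_match(engine_a, engine_b, openings, play_game):
    """Return engine_a's score out of 2 * len(openings) points."""
    score_a = 0.0
    for opening in openings:
        score_a += play_game(white=engine_a, black=engine_b, opening=opening)
        score_a += 1 - play_game(white=engine_b, black=engine_a, opening=opening)
    return score_a

# e.g. run_match("Stockfish", "Leela", openings, play_game=my_game_runner)
```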
1:42 - Does this imply that Black Knight to F6 is statistically the best response to White Pawn to D4? Will Stockfish always make that exact opening play regardless of the game? And should we?
no you shouldn't make this move, because you don't know the idea behind it and what follows in 10 moves, you should stick to your own understanding of the game.
In an abstract statistical sense you could have a point. But unless you actually "see" as many moves into the future as Stockfish does, it means nothing to you. It's a bit like how the numeric evaluation gives you a "technically objective answer on the position" but is also basically useless. It's useless because it relies on you playing the best move every time for a long time, and to do that you need to understand the idea of what you are trying to do (otherwise you will almost guaranteed not keep playing the best move). Just because a 3600 Elo can see it doesn't mean you can; by extension, a position that a 3600 considers a +3 advantage might only be a +1 advantage, or even a -1 advantage, for a 2000. After all, "the best move" doesn't really have practical merit in a game, only the "best move you can find". As a lowly 1400, I can't tell you how many times I blundered a +8 position into a loss because I didn't see the critical and random pawn check that needed to be played in that specific move order.

Just consider the Magnus-Fabiano game mentioned. The "fastest" mate was in 34 perfect moves, and it required the absurd understanding that trapping the knight so it can't move, for seemingly no reason whatsoever, was actually genius: many moves later white would run out of moves and be forced to move the bishop, which would free the knight and let it reach a square it couldn't have reached if it weren't in that awkward position to start with. Objectively, it was a guaranteed mate in 34. But also "objectively", the evaluation was never anything but a draw with "perfect" play, if you factor in being human and "only" 2800.
Chess engines in general struggle with openings because they don't understand what the ideas are behind them. Middle games and end games are where they're the most scary