I'd love to see a match where Stockfish's evaluation time is equalized to that of a human. E.g. a few seconds to find each possible move, then a few minutes to evaluate the positional score for each move. It would give a more realistic sense of how strong the algorithm is
@@festivebear9946 That still wouldn't be fair though. In 30 seconds, Stockfish could evaluate a position and make the best move that a human would take hours to calculate.
@@mysticalmagic9259 But the question is, how well could it evaluate the position? Even if it can do it quite quickly, limiting how deep it can go stresses the algorithm of deciding the "best" move, since the strength of the engine is being able to weigh all possible moves like 25 moves ahead. So how good is the algorithm when limited in time and moves?
as someone who's very interested in the world of machine learning (and has looked into how stockfish works), its cool seeing a video covering the fundamental concepts like this. i hope we get more videos like this
Give it a go! Only 8 months ago I dismissed it as boring and only played by stuffy old men, but it is, like you said, incredibly fascinating. The possibilities of this game are endless and it has been studied for centuries
I didn't know Stockfish had neural elements. I thought it was an all-classical algo. It would be interesting to hear a more exact, computer-science walkthrough of how it works. If well explained, I think most could understand it.
@@AnarexicSumo it can't be all neural if it searches millions of positions, I'm certainly not familiar with a neural net architecture that does iteration like that. But as much neural use as they can perhaps
@@DanFrederiksen It is fully neural; they just use a different and much smaller net compared to the big ones used in AlphaZero and Leela Zero, which is why it can reach millions of nodes per second. If you want to look into it more, search up NNUE on Google
I love how Levy basically asks the same question over and over (how does it know beginning/middle game/end game) and Gary tries to answer in different ways, even though stockfish literally does the same thing every turn - it builds a game tree based on the current position.
Well, yes and no. While the opening and middle game are handled the same way (a decision tree with an evaluation criterion to select the best move for that board state), the endgame is not. Once the piece count drops to 7 or fewer, the engine switches to tablebases that have brute-force solved the game, meaning it knows the exact outcome of every remaining position.
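The "brute-force solved" idea can be illustrated on a much smaller game. The sketch below is a toy analogy, not chess and not how Syzygy tablebases are actually generated: it completely solves the take-1-2-or-3-sticks game, producing a table that maps every position to its game-theoretic result, which is exactly the kind of lookup a tablebase provides.

```python
from functools import lru_cache

# Toy analogue of an endgame tablebase: completely solve a tiny game
# (take 1, 2, or 3 sticks; whoever takes the last stick wins), so every
# position maps to a known result, just as tablebases map every
# 7-or-fewer-piece chess position to win/draw/loss.

@lru_cache(maxsize=None)
def is_winning(sticks: int) -> bool:
    """True if the player to move wins with perfect play."""
    if sticks == 0:
        return False  # no move available: the previous player took the last stick
    # A position is winning if ANY move leads to a losing position for the opponent.
    return any(not is_winning(sticks - take) for take in (1, 2, 3) if take <= sticks)

# Build the full "tablebase": every position's result, precomputed.
tablebase = {n: is_winning(n) for n in range(101)}

# Multiples of 4 are the only losing positions in this game.
print([n for n in range(13) if not tablebase[n]])  # → [0, 4, 8, 12]
```

Real tablebases are built bottom-up by retrograde analysis rather than top-down recursion, but the end product is the same: position in, exact result out, no search needed.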
As someone who wrote a chess engine by taking most of the algorithms that are on the chess programming wiki and throwing them together, I can say that you're kind of wrong. Stockfish has SO MANY methods it uses that he could spend hours describing each one, a real answer would go for days.
>> SO MANY methods... I was a little surprised they didn't mention that. My understanding is that the "old" heuristics/expert system evaluator outperforms the neural net evaluator except in a few specific phases of the game.
Regarding that pawn move in front of the King, maybe Stockfish plays something like that with the goal of getting into a future position that is advantageous. And that advantageous position might be recognizable to you. I wonder if, as a human player, one can see a weird Stockfish move and then understand what future position the bot wants, and then play around that.
To think, there was a time when we thought it would be impossible to ever teach a computer to play chess competitively against people. Until Deep Blue beat the best of us.
Stockfish just goes down every branch of possibilities (permutations). Humans use indicators or 'mental cues' to quickly judge whether more of those branches are likely to go in their favor at that moment of the game. Doubled pawns would be one of those cues, or knights in the center of the board, or bishops on a clear diagonal, etc. The more cues we have, the more certain we are that a position will likely end up in our favour. This is why learning fundamentals is important: these fundamentals lead to more favourable structures and thus, in theory, more favourable outcomes. The cues become more complex, and you keep adding more and more (like pins, sacrifices, etc.) as your chess skills progress. That is probably the biggest calculation being done. On top of that, chess players will calculate individual lines a couple of moves deep, and not every line but the few important ones, by first quickly throwing away the obviously horrible ones. And Magnus and Hikaru run Stockfish-light, pretty much.
He's presumably playing Stockfish at its highest processing power, so it could label something a mistake that even base Stockfish would think is the best move.
I played a game against Stockfish 8 a few days ago just for fun, and it was all going as normal during the opening; I was attempting to play the London and I was developing my pieces and not doing anything stupid (or so I thought). But then about 7-8 moves in Stockfish just jumps its knight forward into my territory and suddenly I was totally screwed. Not checkmated or anything, but suddenly there were multiple forks and pins everywhere, no squares that I could move to without losing a piece, and any move I made just led to disaster, and I was going to lose multiple pieces no matter how I followed up. I was just flabbergasted, and after watching this video it kind of makes sense. Stockfish at high levels is just merciless.
There is one thing I struggle with about chess engines: pruning bad lines. Let's say there are many bad moves that give up a piece for free. Stockfish prunes those lines because they look bad. But after thinking a bit, it turns out one of those sacrifices actually leads to mate or a huge material/positional gain in 30 moves. How does it decide when to un-prune a pruned line? When does a prune happen: does it check that material is lost, look 10 more moves ahead, and if there still isn't anything, prune? Basically, when and how does it prune lines?
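One hedged way to answer this: sound alpha-beta pruning never discards the best line at the current depth, and engines rerun the whole search from the root at increasing depths (iterative deepening), so a sacrifice that looks bad at a shallow horizon is reconsidered, and eventually found, once the horizon reaches its payoff. A toy demonstration with an entirely invented game tree:

```python
# Minimal sketch of why iterative deepening "un-prunes" lines: a depth-limited
# search is rerun from the root at increasing depths, so a sacrifice that the
# static evaluation hates at shallow depth is re-examined and found once the
# horizon is deep enough. The tree and scores below are made up for illustration.

# Each node: (static_eval_from_White's_view, {move: child}).
TREE = (0, {
    "safe": (1, {}),                    # quiet move, always evaluates to about +1
    "sac":  (-3, {                      # gives up a piece: looks bad at first...
        "forced": (-3, {
            "forced": (-3, {
                "mate": (1000, {}),     # ...but leads to a forced mate
            }),
        }),
    }),
})

def minimax(node, depth, maximizing):
    static, children = node
    if depth == 0 or not children:
        return static
    vals = (minimax(c, depth - 1, not maximizing) for c in children.values())
    return max(vals) if maximizing else min(vals)

def best_move(depth):
    _, children = TREE
    return max(children, key=lambda m: minimax(children[m], depth - 1, False))

for d in (1, 2, 5):
    print(d, best_move(d))
# shallow searches prefer "safe"; once the horizon reaches the mate, "sac" wins
```

What this sketch leaves out is the *heuristic* forward pruning real engines add on top (late move reductions, futility pruning, etc.), which genuinely can skip a move at a given depth; the deeper iterations and re-searches are what give those moves another chance.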
Leetcode 4819 - Medium - Create a chess engine, an all time classic. Jokes aside, as a CS major, it's so fascinating to learn more about how stockfish was built, and all of the algorithms behind it.
Great video!! Fun and informative. I never knew stockfish was so strong. That thing about the way it plays when the game is down to 7 pieces - that's scary. Player: am I going to lose? Stockfish: it's a logical certainty. 😨
Keep in mind these endgame databases are available for all engines to use, but yeah. Sometimes this can lead to diabolical results where an engine tries to avoid entering the tablebase lines but doesn't see the mate itself, so it makes a technically worse move and turns a mate in 21 into a mate in 3.
I have some ideas on how I would write a chess engine, though I've never looked into it or how awful it is to set up. I would, for example, maximize the number of legal moves, or pick a move where the opponent has the fewest positive moves available. Now, this will turn into sacrifices all the time, but you could go a few layers deep: essentially give the opponent as many options as possible when only a few of them are good, so you allow them to make the most mistakes. You could also do something else, like choose a move where your opponent only has equal moves, to then win on time. I wonder if you can fine-tune an engine based on its opponent, since in the computer championships you do have limited time and equal hardware. Another idea I've had is a chess learning game. The beginner level would be finding all legal moves (to understand the game). The actual challenge would be to classify moves into blunders, mistakes, waiting moves, and good moves, and the master level would be to rank them in order. I wonder if such a tool already exists, because forcing the human to think "like an engine" would be an option.
While Stockfish is only good at playing chess, there are some more skilled engines. For example, there's a fork of Stockfish called Fairy-Stockfish that can play most chess-like board games. And it's still better at chess than any human. You can even invent a chess variant (with some limitations), give it to the engine and it will straight up demolish you in it.
The fact Alpha Zero made Stockfish look silly after only 4 hours of learning chess by playing against itself is both fascinating and scary at the same time.
@@liamb5791 Maybe so, but I think you're missing the point. I know it's not apples to apples; Stockfish agreed to the terms (as did others), but GPU will crush CPU on parallel computing and that's the difference. The proof was in the neural network of AlphaZero teaching itself, which does require specialized hardware. The future of GPU will take over tasks that CPU can never do no matter how much CPU is strengthened. It would be fun to run it back today and see how it plays out.
@@forgetaboutit1069 Stockfish has long since surpassed AlphaZero. Another engine, Leela, adopted that style of learning but it is still worse than Stockfish
the beauty of this video is that it is entertaining and contains new information for both people who dont play chess at all and people who are really good at chess. really interesting how the AI is designed to 'think'. thanks wired, thanks levy, thanks... stockfish i guess!? 😅
Worth noting that the 35 move checkmate would be Magnus playing PERFECTLY against a PERFECT attack, but that also meant there were OTHER checkmates in less moves if Magnus played any less than perfect. Crazy.
Really interesting how AI can play! It would also be interesting to see how strongly the AI plays Murkekos Stars. In that game, the number of opening theories is much higher.
I don't know about Stockfish, but in algorithms that try to maximize a certain result, there are often several factors for determining an optimal solution, with one taking precedence over others. If two moves have identical values for the most important factor, it would move on to the next most important factor, and so on until one was greater than the other. Alternatively, they could combine all these factors into one function and come up with a final number that is guaranteed to be unique, or at least unique with 99.9999% certainty. Remember, it is assessing billions of branching paths, so the probability of any two moves having an identical "likelihood of winning" value is exceedingly low. However, if all of these sophisticated algorithms still leave two moves with the same "likelihood of winning" value, it would likely just pick one randomly.
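The random tie-break described above is only a few lines of code. The move names and scores below are invented for illustration; this is a sketch of the idea, not anything Stockfish actually does.

```python
import random

# Rank candidate moves by score and, only if the top scores are exactly
# (or near-exactly) equal, pick among the tied moves at random.

def pick_move(scored_moves, tol=1e-9, rng=random):
    best = max(score for _, score in scored_moves)
    tied = [move for move, score in scored_moves if best - score <= tol]
    return rng.choice(tied)

moves = [("Nf3", 0.31), ("d4", 0.31), ("e4", 0.27)]
print(pick_move(moves))   # either "Nf3" or "d4", never "e4"
```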
@@Celatra That isn't necessarily true. Stop babbling about things you know nothing about. Moves, and not just checkmates, can in fact have the same value. It just picks one.
Thanks again, Wired. More collabs in 2024? 👀
How high Elo can you beat if you had to pre move each of your moves? (provided that the opponent doesn't know about this)
Yoo love you levy ❤
@@Jee2024 IIT, this time we have to crush it
Why would anyone want to see you lose again?😏
wake up, ladies and gentlemen.
@GothamChess @Wired - thank you for having me on to talk about computer chess! It's been one of my passions for a long time, and it was so much fun to discuss with you.
whats up w the @AGMario_ subscription man
u r a legend!
great, concise explanations!
You made a typo tagging @GothamChess
Nice interview, Gary ;) It made waves here in the chess community (and in the Stockfish community)
Playing against Stockfish is like competing in arm wrestling against an industrial press, basically.
perfectly said.
Or trying to outrun a sports car
except you can have a pocket industrial press anywhere you go, and even conceal it so that no one will notice at first if you use it against them
@@saudude2174 : well... Yes...? Metaphors have limited mileage, as always. XD
@@MattiaBulgarelli ITS BAD, ITS JUST BAD, DEAL WITH IT BRUH. YOUR METAPHOR ELO IS 800 AT BEST. IM TALKING 3000, 3500 ELO METAPHORS HERE XD ECKS DEE X3
Stockfish never fails to put Levy in a video
Only time the statement is true.
goated comment
Stockfish already foresaw this outcome.
Since ken banned this is infecting everyone
Fails never video to put stockfish in a Levy
It took him 34 moves to lose to Stockfish? I could do it much faster than that.
I can do it in 10
@@NOneed204 I can do it in 4
@@saucy_dragon1566I can do it in 3
@@saucy_dragon1566 you noobs, i can do it in 2 😎
@@Qwty163 I can lose without even playing
This guy should make his own YouTube channel about chess
This guy is too talented to waste his time with a youtube channel.
Yeah and maybe he can name it GothamChess that would make a cool name
And maybe also write a book about chess
@@andreasmatthies5517 oh he should be a gm then 💀💀💀💀
@@hanaka2640 I don't talk about chess and of course I don't talk about Levy.
Human: *performs opening move*
Stockfish: “after considering half a billion possibilities in a million different realities, I will play knight to F6 🤓”
It's insane that this sounds like an exaggeration or something said by a supervillain. But it's the truth.
That's exactly how it works. Stupid supercomputer
@@mahfuzali643 The AI overlords shall come unto you first for insulting them!
Stockfish after seeing ur opening be like: u're already dead😅
*first move*
Stockfish: And I'll mark that as a win!
I don't even see the opponents bishop on the opposite side of the diagonal, let alone seeing 2-3 moves into the future
Cuz ur bad
Fuckin' casuals
@@jessetrueba9578 yes. That is the joke, you buffoon.
"Why didn't the game end when I play checkmate? Oh shi- "
2 moves is crazy if i throw i a jab i should just throw a hook cause youre going to sleep with that logic you NPC get gud nub
Stockfish be like: You missed mate in 54? You filthy casual, my suggested move is to never play chess again.
1. e4 mate in 67. You resign?
make a version of stockfish with a really mean AI attached to it that insults your intelligence the entire time
weird fetish but ok @@charliemcmillan4561
"Your life literally has the value of a summer ant." - Stockfish @@charliemcmillan4561
What about a nice game of global thermonuclear war ? /Joshua
Just wait until they hear about Mittens
I think levy already drew against it
That thing is evil
💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀 also 69th like
@@ecardozo7043 with the help of that fishy bot
Mittens is stockfish
I love when Levy appears in a video he didn't upload because the title and thumbnail actually tells you what to expect.
💀💀💀💀💀
HAH! Saltiest fanbase on YouTube, I love it
gothamchess fans hate gothamchess lol
If this was on Gotham's channel it would be named something like "I'M DONE!!" or "Stockfish SOLVED Chess???"
This is actually the main reason I stopped watching his videos.
I know he is an IM, but surviving 35 moves against Stockfish is seriously impressive. I wish I could survive 35 against my 1000 Elo opponents.
Against Stockfish, it’s different. Many decently strong players can survive that many moves against Stockfish if they try to defend long enough. That’s because Stockfish plays perfectly and destroys you in the most methodical manner possible. If you keep a closed position and dance around for a bit, it will take longer to mate you than if you tried to play to win against Stockfish.
yea cause u usually only play defensive against stockfish
stockfish would destroy you as soon as you open up your position and try to attack.
@@moatef1886 I'd say Leela is more methodical than stockfish in general, stockfish tends to go for hail mary tactics a bit more often
If you're not surviving 35 moves against 1000 Elo opponents then you must be really missing some basic stuff. If you just focus on not giving pieces away and following an actual opening you'll improve massively.
@@reckoner1913 Sounds like how to make chess boring 101 ;)
1. Pawn to e4
Stockfish: forced checkmate in 35 moves, please press the resign button now to save me computational trouble.
😂😂😂😂😂😂😂
“Only about 10-20 TB of data, which is manageable”
Person prior to 2000: *mindblown*
I imagine someone prior to 2000 asking what tuberculosis has to do with data.
In 2003 I downloaded a song that was 2.1mb onto my dad's laptop and it got so hot it turned off. Times have changed 😂
And in 2024 you can put that on a single disk.
I remember taking a week to download a... borrowed copy of Office 2000 via dialup.
My DX-2 66 super computer, which I loved, had a 540 Mbyte HDD.
Fun fact:
While the Alpha-Beta pruning technique is effective 99% of the time, there are very few cases where the best move in a position looks so unbelievably absurd that even stockfish can't solve it. That happens because the move looks so stupid that the pruning algorithm immediately discards it without further evaluation. This allowed humans to make complex chess puzzles that even chess engines couldn't solve. A famous example of such a position is this composed puzzle:
n1QBq1k1/5p1p/5KP1/p7/8/8/8/8 w - - 0 1
**SPOILERS IF YOU WANT TO SOLVE THE PUZZLE FOR YOURSELF**
At first, Stockfish evaluates the position as dead equal, but if you play the move Bc7!!, Stockfish immediately finds the mate in 11 moves. The reason it wasn't initially able to find the win was that Bc7 looked so absurd that the alpha-beta pruning immediately discarded it
Maybe one of the reasons why stockfish is having a hard time beating alphazero
How do you read that?? I know chess notation but this seems to be also sharing the board position and I can't figure it out
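For anyone else wondering: that string is FEN (Forsyth-Edwards Notation). The first field describes the board rank by rank from Black's side (rank 8) down to rank 1; letters are pieces (uppercase = White, lowercase = Black) and digits are runs of empty squares. The remaining fields are side to move, castling rights, en passant square, and the move counters. A tiny decoder for the board field, written from those rules:

```python
# Expand the board field of a FEN string into an 8x8 text diagram,
# with "." marking empty squares.

def fen_board(fen: str) -> str:
    rows = []
    for rank in fen.split()[0].split("/"):
        row = ""
        for ch in rank:
            row += "." * int(ch) if ch.isdigit() else ch
        rows.append(row)
    return "\n".join(rows)

print(fen_board("n1QBq1k1/5p1p/5KP1/p7/8/8/8/8 w - - 0 1"))
```

The top line of the output, `n.QBq.k.`, is rank 8: Black knight on a8, White queen on c8, White bishop on d8, Black queen on e8, Black king on g8.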
@@angbataa Stockfish 8 had trouble with AlphaZero 9 years ago; if AlphaZero came out of retirement today it would lose as badly to Stockfish 17 as Stockfish 8 lost to AlphaZero
I hate to be pedantic (lying) but it's not alpha beta that's causing the incorrectness. Alpha beta will always find the optimal move according to whatever heuristic, it's provably correct. If it's failing to find an optimal move it's because the heuristic function isn't evaluating it high enough.
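This claim is easy to check empirically: on random game trees, alpha-beta returns exactly the same root value as plain minimax; it only skips branches that provably cannot change the result. A small self-contained demo (toy trees, not chess):

```python
import random

# Alpha-beta vs. plain minimax on random game trees: the root values always
# agree, so any "missed move" must come from the leaf heuristic, not the pruning.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):
        return node
    vals = (minimax(c, not maximizing) for c in node)
    return max(vals) if maximizing else min(vals)

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):
        return node
    for child in node:
        val = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            alpha = max(alpha, val)
        else:
            beta = min(beta, val)
        if beta <= alpha:
            break          # cutoff: the rest of this node cannot change the result
    return alpha if maximizing else beta

def random_tree(depth, branching=3, rng=random):
    if depth == 0:
        return rng.randint(-100, 100)
    return [random_tree(depth - 1, branching, rng) for _ in range(branching)]

for _ in range(50):
    tree = random_tree(4)
    assert minimax(tree, True) == alphabeta(tree, True)
print("alpha-beta agreed with minimax on all 50 random trees")
```

What real engines add on top of this, and what *can* discard a winning line at a given depth, is heuristic forward pruning (futility pruning, late move reductions, and so on), which trades theoretical completeness for depth.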
8:55 Levi on Wired: Stockfish is very specialized AI
Levi on GothamChess: Stockfish is a scumbag
Stockfish is a very specialized scumbag.
both statements are true
This video is so good on so many levels. It's one thing to discuss the capability of a computer. It's another thing to be able explain to the common person why this computer is so good and to make the whole explanation so interesting. Add Levy's humor and his ability to explain things very well, mix that with all that the Wired editorial staff can bring to the table, and it's just wow. This content is just friggin awesome. Thanks, all involved!
So basically the answer to every single question is that Stockfish just analyzes almost every imaginable position lol
the real "skill" in stockfish is in the evaluation function. without it being as good as it is, it doesn't matter how far it can calculate, as long as it doesn't find a checkmate
that is self evident
If you paid attention it doesn't analyse almost every imaginable position lol. It discards the trash moves and only looks into the good ones further.
It's really the Alpha-Beta technique that's the magic. That and having solved endgames
It's actually the exact opposite. The "strength" of a chess engine is determined by how well it can decide which moves _not_ to waste time analysing. AlphaZero introduced the idea of using neural networks to make these decisions and Stockfish has now built on that idea as well.
As someone who's recently learned to play chess on an intermediate level, I highly appreciate this video
what bro?
Levy is such a kind person. Never fails to selflessly promote Magnus.
Stockfish plays like it already knows how the game is going to end and happily ignores all the pieces that aren't going to be involved in that ending.
A Game of Shadows vibes.
As someone who has implemented Stockfish in their own project, I already knew most of this, but I didn't realize just how many moves Stockfish looks at when given full power.
I'm confused. You implemented it but don't understand it?
@@tomlxyz the algorithm is one thing. Raw computing power is another major thing. Some random guy in a room doesn't have terabytes of RAM or something to build his engine
I would assume it's just bounded by CPU and RAM?
@@wlockuz4467 Yes. I think it's easier to run low on processing resources than the memory.
@@tomlxyz It likely just means he built a chess UI on top of stockfish. No, you don't have to know the details of how the engine works to do that.
Levy truly going for the "most times on WIRED" title, at least a more realistic goal than other titles, Hikaru would have said...
This is probably my favorite GothamChess video ever. It's great to see the inner workings of engines being communicated to the chess community. I feel like a lot of players, even strong ones don't understand what the engine eval is really saying, and hopefully this helps!
0:12 sums up why i don't like chess apps
I love the part where Levy said he sometimes flips a coin to decide between three different moves.
Levy: [builds a YouTube career roasting 500 rated bozos]
Stockfish: [exists]
Levy: "Turns out the bozo was me all along"
Loving the GothamWIRED collabs!
moirails fr
lol 'builds a YouTube career roasting 500 rated bozos', you have great humor
idk why but the explanation of stockfish's 35 move win was so wild to me.
This is a great video! It's always good when levy is in these videos. Have a good day!
So, despite the huge number of possible positions on the 64 squares, Stockfish already knows if it will lose, win, or draw? Is that what the guy in the video means?
I love how GMs don't even get on this. All the less incentive to be one when you're more influential than most GMs. Props Gotham
People are picked based on follower count, not skill. They want to ensure high view counts.
A video like this isn't just about one's ability at chess, but one's ability to communicate. GothamChess is very good at both.
Great practitioners don't necessarily make great educators. This is true in basically all domains.
@@dalton_c particularly true for chess, in my opinion. Players of GM caliber are often so gifted at chess that I think they struggle to understand why lesser gifted people cant learn certain concepts that seem obvious to them.
Levy is a tremendous communicator and I don't know that Hikaru could humble himself to a video like this.
so cool that levy lets wired show up on his videos
I wish you could have asked a bit more about how it's able to score a position. We know it looks at all the possibilities, but to assign a score to one position, it needs to look at the possibilities of that position, and so on. When it finally hits its limit of depth (or time), how is it able to rank a position without going any deeper (after which it can go back up the tree)?
It's briefly mentioned when he explains how Stockfish (and all the other chess engines) builds a tree of possible moves and prunes it with the alpha-beta algorithm. That in itself is worth an entire video, and such a video exists (search "alpha beta algorithm"). The evaluation function itself is way too complicated to be in this video; it would easily take an hour to explain just the basics of it.
@@InXLsisDeo which as others have pointed out is exactly the problem - without going into the details of HOW the evaluation function works, Linscott is left to answer basically every Q with "Stockfish looks at billions of positions and chooses the move with the best winning chances"
@@InXLsisDeo Can't he oversimplify it in some way? There are all sorts of relatively short videos on YouTube about very complicated topics
@@tomlxyz it's a WIRED video, it's for the general, not too nerdy, public.
@@InXLsisDeo It's also made more complicated by the fact Stockfish now has NNUE, a neural network based evaluation in some positions, when it used to use a hand-crafted one that was still superhuman in performance, which would have been easier to explain, "material count, piece position, pawn structures, etc. get added up in each position".
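For the curious, the "efficiently updatable" part of NNUE can be sketched in a few lines. The sizes and weights below are invented and the real network is far larger; the point is only that the first layer's output is a sum of weight vectors, one per active piece-square feature, so making a move patches the accumulator incrementally instead of recomputing it from scratch.

```python
import random

# Toy sketch of NNUE's accumulator trick: the first layer's output is the sum
# of the weight rows of all active piece-square features, so a move only needs
# to subtract the vacated square's row and add the new one.

random.seed(42)
N_FEATURES, HIDDEN = 768, 8          # 768 = 12 piece types x 64 squares
W = [[random.uniform(-1, 1) for _ in range(HIDDEN)] for _ in range(N_FEATURES)]

def accumulate(active_features):
    """Full first-layer pass: sum the weight rows of all active features."""
    acc = [0.0] * HIDDEN
    for f in active_features:
        for i in range(HIDDEN):
            acc[i] += W[f][i]
    return acc

def update(acc, removed, added):
    """Incremental pass after a move: O(2 x HIDDEN) instead of O(pieces x HIDDEN)."""
    for i in range(HIDDEN):
        acc[i] += W[added][i] - W[removed][i]
    return acc

before = {12, 101, 500}                     # hypothetical active features
acc = accumulate(before)
acc = update(acc, removed=101, added=109)   # "the piece on feature 101 moved"
full = accumulate({12, 109, 500})
print(max(abs(a - b) for a, b in zip(acc, full)))   # tiny: both paths agree
```

This is why the net can be evaluated millions of times per second inside an alpha-beta search: most of the work from the previous position is reused.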
I feel like Levy was asking questions and the stockfish guy kept giving him the same answer about how stockfish looks into the future better than a human.
Because that’s what stockfish does. It’s a massive data crunching probability machine. It’s not really ‘playing’ like a human does
@@HkFinn83 Even 5 months later, that's a great way to conceptualize it, and why I will always prefer playing it against another person, and in a casual setting.
Stockfish : 14,000,605 total possiblities
Iron man : how many do we win
Stockfish : 1 😶
This is one of the best interviews on any topic. Really well produced.
Adding the checkmate sound at the end was a nice touch
I wish this was longer. I wish we could get the full game.
I'm hoping/expecting Levy to upload and discuss it on his channel.
Exactly. Tf was that😂
Or maybe…🤷🏼♂️
Cool video. We all know Levy knows what tablebase is but he’s a good sport. That’s crazy Fabi could have been world champion if he just trapped his knight.
Just like in any video game, the AI can become unbeatable. It knows your every move, reacts on the very first frame of whatever you do, and plays the counter that beats it. You can only win when it lets you win.
Their reaction time is one of the biggest driving factors behind their ability to win. You see it in RTS's where the AI might not be building as efficiently as possible, but its unit management is unparalleled with 10x as many actions per second as human players. I'd love to see AI vs human when speed is equalized, then it's really about who is smarter. E.g. it takes a few seconds to even come up with legal moves, then several minutes to evaluate them. Here, you take away AI's biggest advantage, which is pure speed. Now it's all about being able to read and evaluate the board the best.
@@festivebear9946 Last time I checked, Leela Chess Zero on one node (playing without search, using intuition only) is about GM level in rapid time control, and Leela on about 10 nodes per move is roughly GM on classic time control. Maybe a little give and take, but I think that shows a rough picture on where AI stands without doing any calculation, or doing as few calculations as a human would
@@quag443 That is absolutely insane, thanks for the info!
I can't remember who said this quote but I love it...
"A computer winning a Chess competition is no more impressive than a forklift truck winning a weight lifting competition. "
It might be impressive if it was a competition with only other different forklift trucks. Great quote though lol
@@icycloud6823 ngl i would watch a competition like that lmao
I'd love to see a match where stockfish's evaluation time is equalized to that of a human. E.g. a few seconds to find each possible move, then a few minutes to evaluate the positional score for each move. Would give a more realistic sense as to how strong the algorithm is
@@festivebear9946 That still wouldn't be fair though. In 30 seconds, Stockfish could evaluate a position and find the best move that a human would take hours to calculate.
@@mysticalmagic9259 But the question is, how well could it evaluate the position? Even if it can do it quite quickly, limiting how deep it can go stresses the algorithm of deciding the "best" move, since the strength of the engine is being able to weigh all possible moves like 25 moves ahead. So how good is the algorithm when limited in time and moves?
This is also why new players are so tempted to use engines, and also why it is very easy to catch them if they do.
have to admit Levy is a showman
This guy looks like he could sacrifice THE ROOOOOOOOOOOOOOOKKKKKKK
I like how the self-driving car at the end just turned on the windshield wipers, like it needed to see through them
Bro, they literally brute-forced all the positions with 7 pieces or fewer. That's insane! Love it!
The man feels like he was a human created by the AI, whose sole purpose was to interact with a human to see their perspective on the game.
I love Levy's videos. Using his advice I managed to get 1500 ELO on Lichess!
Congrats! I'm trying to reach 2000 Elo right now, but it's so difficult; the players I encounter are so serious
Nice one 😂😂😂
as someone who's very interested in the world of machine learning (and has looked into how stockfish works), it's cool seeing a video covering the fundamental concepts like this. i hope we get more videos like this
Stockfish has more positions ready than the Kama Sutra.
Wtf
ayo
Very sick but funny
Levy: "Pawn to D5"
Stockfish: "Reverse cowgirl"
Yes, but only a couple more...
I don't even play chess but this is fascinating
Give it a go! Only 8 months ago I dismissed it as boring and only played by stuffy old men, but it is, like you said, incredibly fascinating. The possibilities of this game are endless, and it has been studied for centuries
@goonerboy93 I think I just might, thanks for the encouragement
I didn't know stockfish had neural elements. I thought it was an all-classical algo. It would be interesting to hear a more computer-science-exact walkthrough of how it works. If well explained, I think most could understand it.
I think they added the neural stuff in later versions, though it was already one of the strongest before they did.
It's been full neural since 2023.
@@AnarexicSumo it can't be all neural if it searches millions of positions, I'm certainly not familiar with a neural net architecture that does iteration like that. But as much neural use as they can perhaps
@@DanFrederiksen It is fully neural; they just use a different and much smaller net compared to the big ones used in AlphaZero and Leela Zero, which is why it can reach millions of nodes per second. If you want to look into it more, search up NNUE on Google.
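To make the NNUE idea in this thread a bit more concrete, here is a toy sketch: a tiny fixed-weight network over sparse piece-on-square features, using the clipped-ReLU activation NNUE is known for. All the weights and dimensions below are made up for illustration; a real NNUE net is trained, far larger, and updated incrementally as pieces move, which is what keeps it fast enough for millions of nodes per second.

```python
# Toy "NNUE-flavored" evaluator: a tiny dense layer over a sparse set of
# active piece-on-square features, with the clipped-ReLU activation NNUE
# uses. All weights/dimensions are made up for illustration.

def clipped_relu(x):
    return max(0.0, min(1.0, x))

def nnue_toy_eval(active_features, w1, b1, w2, b2):
    """active_features: indices of 'on' features (each a piece-on-square fact)."""
    hidden_size = len(b1)
    # First layer: only active features contribute, so we sum over those
    # instead of doing a full matrix multiply -- the sparsity trick NNUE
    # relies on (plus incremental updates, omitted here).
    hidden = [clipped_relu(b1[j] + sum(w1[f][j] for f in active_features))
              for j in range(hidden_size)]
    return b2 + sum(w2[j] * hidden[j] for j in range(hidden_size))

# Made-up weights: 2 hidden units, features 0 and 3 active.
w1 = {0: [0.5, 0.2], 3: [0.1, 0.9]}
print(nnue_toy_eval([0, 3], w1, [0.0, 0.0], [1.0, -1.0], 0.0))  # ≈ -0.4
```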
Levy's so good they can bring him on to interview someone else and the video is still awesome.
Levy never fails to be in a Wired video.
This format is highly entertaining. Questions are relevant, structure is good (kudos to the editor), and Levy comes off as highly capable. More of this!
I just played against Stockfish, and I also survived 35 moves! So against Stockfish, Levy and I are on the same level. My elo is 1100.
Levy: Congrats for 1 more video!!! So proud of you!!!
I love how Levy basically asks the same question over and over (how does it know beginning/middle game/end game) and Gary tries to answer in different ways, even though stockfish literally does the same thing every turn - it builds a game tree based on the current position.
Well, yes and no.
While the opening and middle game are handled the same way (a decision tree using evaluation criteria to select the best move for that board state), the endgame is not.
Once the piece count drops to 7 or fewer, the engine brute-force solves the game, meaning it knows every single position and every way the remaining pieces can move.
"even though stockfish literally does the same thing every turn"
No, you should read how stockfish is actually implemented.
As someone who wrote a chess engine by taking most of the algorithms that are on the chess programming wiki and throwing them together, I can say that you're kind of wrong.
Stockfish has SO MANY methods it uses that he could spend hours describing each one, a real answer would go for days.
>> SO MANY methods...
I was a little surprised they didn't mention that. My understanding is that the "old" heuristics/expert system evaluator outperforms the neural net evaluator except in a few specific phases of the game.
Regarding that pawn move in front of the King, maybe Stockfish plays something like that with the goal of getting into a future position that is advantageous. And that advantageous position might be recognizable to you. I wonder if, as a human player, one can see a weird Stockfish move and then understand what future position the bot wants, and then play around that.
Didn't know Ed Helms programmed Stockfish. Pretty cool.
hahahahaha I was just thinking: "this guy looks so familiar"
He didn't; he only worked on chess engines, not Stockfish.
The one with Magnus and Fabi seemed like more of an "I respect you enough not to waste our time playing out what I might misplay."
To think, there was a time when we thought it would be impossible to ever teach a computer to play chess competitively against people. Until Deep Blue beat the best of us.
Who’s “we”
No one seriously informed or involved in computers ever thought that though
There should be AI chess bot competitions, like they do normally, but for robots and we rank the bots with velo (virtual elo)
Imagine thinking about endgame at the 2nd move
9:49 That is such a nice sound effect
It's so in the right pocket of do dat it's like
Hard to explain
Evidently
Because he's the hero Gotham deserves, and the one it desperately needs right now...
If anyone's wondering about the sound: Brendon Moeller - Low Impact.
This Stockfish played many games at 100% accuracy, according to Stockfish. I believe that everyone would find this interesting.
I always love seeing Levy on WIRED.
Stockfish just goes down every branch of possibilities (permutations). Humans use indicators or 'mental cues' to quickly evaluate whether there is a higher likelihood that more of these branches, at that moment of the game, will go in their favor. So doubled pawns would be one of those cues, or knights in the center of the board, bishops on a clear diagonal, etc. The more cues we have, the more certain we are that a position will likely end up in our favour. This is why learning fundamentals is important: these fundamentals lead to more favourable structures and thus more favourable outcomes in theory. The cues become more complex, and you start adding more and more (like pins, sacrifices, etc.) as your chess skills progress. This is probably the biggest calculation being done. Then chess players will additionally calculate individual lines a couple of moves deep per line, and not every line but a few important lines, by first quickly throwing away the obviously horrible ones. And Magnus and Hikaru pretty much run Stockfish Lite.
Summarised the entire process of learning chess in 1 para.
You can sum it up in one sentence: chess is all about pattern recognition
Visuals on this video are amazing
love levy's humor
Gary Linscott - the main developer of Stockfish, the creator of Fishtest, and the founder of Leela Chess Zero.
Stockfish knows more positions than Johnny Sins.
Would love a video comparing AlphaZero to Stockfish, and the differences in the way they 'think'
Levy be making fun of people for blundering in GTE when he casually makes 2 blunders and 2 mistakes
He's presumably playing Stockfish at its highest processing power, so it could label something a mistake that even base Stockfish would think is the best move.
@@rokeYouuer Yea i do notice that when i play games but just a joke
Another great video with Levy! Glad to see more chess content on this channel, especially with GothamChess :)
Brilliant video. Makes one appreciate the chess engines!
I played a game against Stockfish 8 a few days ago just for fun, and it was all going as normal during the opening; I was attempting to play the London and I was developing my pieces and not doing anything stupid (or so I thought).
But then about 7-8 moves in Stockfish just jumps its knight forward into my territory and suddenly I was totally screwed.
Not checkmated or anything, but suddenly there were multiple forks and pins everywhere, no squares I could move to without losing a piece, and any move I made just led to disaster; I was going to lose multiple pieces no matter how I followed up.
I was just flabbergasted, and after watching this video it kind of makes sense.
Stockfish at high levels is just merciless.
This style of editing and pacing is super enjoyable. Please keep it up wired!
There is one thing I struggle with about chess engines: pruning bad lines.
Let's say there are many bad moves that give up a piece for free. Stockfish prunes that line because it is bad. But after thinking a bit, it turns out that the sacrifice actually leads to mate or a huge material/positional gain in 30 moves.
How does it decide when to unprune a pruned line?
When does a prune happen? Does it check that material is lost, look 10 more moves ahead, and if there still isn't anything, prune?
Basically. When and how to prune lines.
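To the pruning question above: textbook alpha-beta never throws away a line that could still matter; it only skips branches once it is proven they cannot change the final choice, so nothing ever needs to be un-pruned. The unsound-looking behavior comes from extra speculative heuristics layered on top (null-move pruning, late-move reductions), which typically re-search a line at full depth if the cheap search returns a surprisingly good score. A minimal sketch of the safe, textbook part, with `legal_moves`, `apply`, and `evaluate` as assumed placeholder helpers:

```python
# Minimal negamax search with alpha-beta pruning. This "textbook" pruning
# is safe: a branch is skipped only once it is proven it cannot affect the
# final choice. Engines add riskier speculative pruning on top, guarded by
# full-depth re-searches.

def alphabeta(position, depth, alpha, beta, legal_moves, apply, evaluate):
    if depth == 0:
        return evaluate(position)
    best = float("-inf")
    for move in legal_moves(position):
        # Negamax: our score is the negation of the opponent's best reply.
        score = -alphabeta(apply(position, move), depth - 1,
                           -beta, -alpha, legal_moves, apply, evaluate)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # cutoff: the opponent already had a better option earlier
    # No legal moves: treat as terminal (a real engine would distinguish
    # checkmate from stalemate here).
    return best if best != float("-inf") else evaluate(position)

# Tiny abstract game: positions are numbers, a move adds its value,
# and the side to move "scores" the current number.
lm = lambda p: [1, 2] if p < 2 else []
ap = lambda p, m: p + m
print(alphabeta(0, 2, float("-inf"), float("inf"), lm, ap, lambda p: p))  # → 2
```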
My boy Gotham at it again.
Levy never fails to do this again
Leetcode 4819 - Medium - Create a chess engine, an all time classic.
Jokes aside, as a CS major, it's so fascinating to learn more about how stockfish was built, and all of the algorithms behind it.
"You idiots!! Mate in 35!!!" 😂😂
This is one of the better vids of this series and maybe the whole wired asking "experts" series.
Like for Gary Linscott, a legitimate expert, an engineer and not some influencer bozo
Great video!! Fun and informative. I never knew stockfish was so strong. That thing about the way it plays when the game is down to 7 pieces - that's scary.
Player: am I going to lose?
Stockfish: it's a logical certainty.
😨
keep in mind these endgame databases are available for all engines to use, but yeah. Sometimes this can lead to some diabolical results where an engine is basically trying to avoid entering the tablebase positions but doesn't see the mate itself, so it makes a technically worse move and turns a mate in 21 into a mate in 3.
I have some ideas on how I would write a chess engine; I've never looked into it or how awful it is to set up.
I would, for example, maximize the number of legal moves, or pick a move where the fewest number of good moves are available to the opponent. Now this will turn into sacrifices all the time, but you could go a few layers deep.
Essentially, give the opponent as many options as possible, of which only a few are good. This way you allow them to make the most mistakes.
You could also do something else, like choose a move where your opponent only has equal moves, to then win on time.
I wonder if you can finetune an engine based on their opponent. As in the computer championships, you do have limited time and equal hardware.
One idea I have had is to make a chess learning game. The beginner level would be finding all legal moves (to understand the game).
And the actual challenge then is to classify moves into blunders, mistakes, waiting moves, and good moves, and the master level would be to rank them in order. I wonder if such a tool already exists, as a way of forcing the human to think "like an engine".
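The mobility idea sketched in the comment above is in fact a classic evaluation term. A minimal one-ply version of it, with `legal_moves` and `apply` as assumed placeholder helpers for whatever board representation is used:

```python
# One-ply mobility heuristic: pick the move that leaves the opponent the
# fewest replies. A toy version of the idea described in the comment; real
# engines fold mobility into the evaluation function alongside many terms.

def pick_by_mobility(position, legal_moves, apply):
    best_move, best_score = None, float("-inf")
    for move in legal_moves(position):
        after = apply(position, move)
        # Fewer opponent options is better for us.
        score = -len(legal_moves(after))
        if score > best_score:
            best_move, best_score = move, score
    return best_move

# Toy position graph: move "a" leaves the opponent 3 replies, "b" only 1.
replies = {"start": ["a", "b"], "pa": ["x", "y", "z"], "pb": ["x"]}
lm = lambda p: replies.get(p, [])
ap = lambda p, m: "p" + m
print(pick_by_mobility("start", lm, ap))  # → b
```

As the reply below notes, counting mobility this way (without looking at whether the remaining moves are any good) is exactly why such a heuristic needs to be combined with deeper search.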
Engines already do this and have been doing this for a long long time. It’s part of their evaluation function.
While Stockfish only plays standard chess, there are some more versatile engines.
For example, there's a fork of Stockfish called Fairy-Stockfish that can play most chess-like board games. And it's still better at chess than any human.
You can even invent a chess variant (with some limitations), give it to the engine and it will straight up demolish you in it.
The fact Alpha Zero made Stockfish look silly after only 4 hours of learning chess by playing against itself is both fascinating and scary at the same time.
It played against stockfish 8 running on the hardware equivalent to that of a laptop… so it was always going to win
They saturated the network in 4 hours. Had they trained it for a day, it wouldn't have played better.
@@liamb5791 maybe so but I think you’re missing the point. I know it’s not apples to apples; Stockfish agreed to the terms (as did others) but GPU will crush CPU on parallel computing and that’s the difference. The proof was in the neural network of Alpha Zero teaching itself which does require specialized hardware. The future of GPU will takeover tasks that CPU can never do no matter how much CPU is strengthened. It would be fun to run it back today and see how it plays out.
@@forgetaboutit1069 Stockfish has long since surpassed AlphaZero. Another engine, Leela, adopted that style of learning, but it is still worse than Stockfish
@@DarthVader-wk9sd they played in 2017. Hope it has long passed it lol. But the main point is GPU engines will eventually wipe the floor with CPU engines.
the beauty of this video is that it is entertaining and contains new information for both people who dont play chess at all and people who are really good at chess.
really interesting how the AI is designed to 'think'.
thanks wired, thanks levy, thanks... stockfish i guess!? 😅
What I really want is the rematch between Alphazero and Stockfish
Didn’t alpha zero mop the floor with sf?
@@JoseRamirez-qd5os AlphaZero beat Stockfish 8, not Stockfish 15/16
Worth noting that the 35 move checkmate would be Magnus playing PERFECTLY against a PERFECT attack, but that also meant there were OTHER checkmates in less moves if Magnus played any less than perfect. Crazy.
I love how Levy is asking all these questions like he didn't already know most of the answers
What a dumb comment, that's how you teach people
Really interesting how AI can play! It would also be interesting to see how strongly the AI plays Murkekos Stars. In that game, the number of opening theories is much higher.
9:33 oh ChatGPT certainly was net benefit for you Levy :v
I give this video a (?!) "This permits the opponent to eventually win a pawn" out of 10
What happens if more than one move is tied for best move? How does it choose? You say that it evaluates them but a tie is possible, no?
I don't know about Stockfish, but in algorithms that try to maximize a certain result, often there are several factors for determining an optimal solution, with one taking precedence over others. If two moves have identical values for that most important factor, then it would move on to the next most important factor, and so on until one was greater than the other. Alternatively, they could have some function of all these factors, and when combining them at the end, come up with some final number that is guaranteed to be unique, or at least be unique with 99.9999% certainty. Remember, it is assessing billions of branching paths, so the probability of any two moves having an identical "likelihood of winning" value are exceedingly low. However, if all of these sophisticated algorithms still result such that two moves have the same "likelihood of winning" value, it would likely just pick one randomly.
It will just play the first one. There is always a difference between 2 "best" moves, even if just by 0.05.
@@Celatra There absolutely is not always one best move in every position. There can be 10 different checkmates in 1 in a position
@@presleyelisememorial Yes, but one of them leads to a faster mate than the others. The fewer moves spent, the better
@@Celatra That isn't necessarily true. Stop babbling about things you know nothing about. Moves, and not just checkmates, can in fact have the same value. It just picks one.
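As the last reply says, ties do happen, and the usual resolution is mundane: a strict greater-than comparison in the argmax scan keeps whichever tied move the search happened to order first, so move ordering doubles as the tie-breaker. A hypothetical sketch (the move names and scores are made up):

```python
# With a strict '>' comparison, the first move to reach the top score wins
# any tie, so the engine's move ordering doubles as the tie-breaker.

def best_move(scored_moves):
    """scored_moves: list of (move, score) pairs in search order."""
    best, best_score = None, float("-inf")
    for move, score in scored_moves:
        if score > best_score:  # strict '>' keeps the first of tied moves
            best, best_score = move, score
    return best

# Two equal mates in one: the first one searched gets played.
print(best_move([("Qh7#", 10000), ("Rf8#", 10000), ("e4", 30)]))  # → Qh7#
```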
Your videos inspire me to keep trading! I had quit before because of expensive courses, but now I'm back at it thanks to you.
It's alright bro, if you want to feel better about losing to a bot, just play me in chess. I'll make you look like Stockfish 16.
AHAHAHAHAH
This Levy guy is pretty good. He should write a book.