THIS is how chess cheating should be approached. Rigorous and thorough testing without slander and personal attacks. No one is denying that it is an existential threat to the game, but we need to find a way to deal with it in a civilized and professional manner.
Actually not. This guy seems WOKE. I was unable to listen to him.
Nah let's make Chess into FOX News instead... - Kramnik
Nah, let's discuss eye movements
I think that the accusers are a greater threat than the actual cheaters.
Agreed, but many aren't suspicious of their opponents even when cheating has happened, so I believe suspicion should also be encouraged, and questions should be asked and reports made.
Academics like David here are such a joy to listen to. Thanks for bringing him on!
I was absolutely riveted to this conversation!
Academics don't understand women... 😂
@@onetwo-xu2jw Do you hate academics? And maybe journalists? Love cops and anyone promising Law and Order?
@@onetwo-xu2jw oh look at that, we found the double digit IQ knuckledragger
One of the most interesting podcasts you guys have done.
So many debates in chess are based around conjecture, received wisdom or arguments from authority.
I hope Smerdon gets to implement more of these experiments in chess.
I can't agree more!
Yeah, this type of discussion really highlights how little actual data there is about chess cheating, at least in the public. Most of the public discussion is based on vibes.
@@ol-os-so I see what you did there!
Would’ve listened to a couple more hours of this. Excellent guest, and I'd love to dive deeper into all of those topics in future episodes 🤙🏼 hopefully as more data from further experiments is gathered. I feel like doing more and more of these is super important.
I love David Smerdon's academic approach to systemic issues in chess. We're lucky to have Smerdon in our game.
This episode felt too short-It's a joy to listen to David! I hope you bring him back again.
Amazing episode, watching a fellow Australian tackle both the chess world and academia in this way is super refreshing. Thanks to all parties for facilitating such an interesting episode 😊
Projection bias caused by bad play...sounds familiar. And sound data sets with expertise in the field...sounds unfamiliar. This was an exceptional episode and I hope that you have David on again after further experiments. I really hope that someone we all know catches this episode. Though I doubt they have the ability to recognize how much this undermines their actions.
What a brilliant episode. Even by the standards of this podcast, this was really a great conversation touching on some very important topics for both chess and society. It's also great to see someone approaching these topics from the point of view of data and evidence rather than purely as a source of drama.
David’s humbleness is next level. The guy was good enough to draw freaking Magnus, yet he talks about himself like he's some 1500 Elo hobbyist.
You can't judge a man by his mannerisms. You have to judge him by his ethics.
One's ethics influence their mannerisms, so you're wrong.
Well as a 1500 elo hobbyist, I take great offense to this, because Magnus has never defeated me either.
@@adomaskuzinas2137 no it doesn't..😂
@@monarkjain613 So you're saying a junkie with rotten ethics will have the same mannerisms as Jordan Peterson? :D
The first time I got drunk was going out celebrating David's final GM norm
Hahaha wish I was old enough at the time to have done this.
David is such a nice guy. I emailed him a few years ago, and he was very nice to me and generous with his time and thoughts....
22:25 Mamedyarov accused GM Igor Kurnosov, who died in a car accident several years after that. Kurnosov was away from the board a lot, but he was a chain smoker (trying to relieve the mental pressure) and was seen in public areas.
Great episode!! On so many levels, the counterpoint we need to Kramnik’s shenanigans.
Very refreshing to hear from people who are using chess to look at bigger problems in the world. Thanks for the pod!
You guys always get the best interviews, this is fascinating!
As an academic and chess enthusiast, I love this episode. Would love more guests like this in the future. Thanks.
I agree
I first became aware of Smerdon through his 2015 book covering the Nf6 Scandinavian. He's a king of offbeat openings and has a well-honed way of exploring data. He sounds like an excellent economist.
That was super interesting. I mean that unironically btw. Thanks for having him on.
At the risk of repeating yet again what has already been said here: Fabulous interview. Smerdon is brilliant and the two of you handled the conversation superbly. Thanks for all the effort that goes into these podcasts
This was a great episode. Really enjoyed getting an academic perspective on this (for some) rather emotional topic!
Classy interview all round. Well done
huge kudos for this episode guys. Always a pleasure to listen to intelligent people discussing interesting topics
Super interesting! Would love to see more of Smerdon's work in the future.
It could perhaps be interesting to run the same experiment, tell all the players there may be cheating, but actually never have any cheating take place, and see how many people still get accused.
But that would have to be one round only. If you are never the designated cheater, you would most likely know what's going on.
@@raphaellfms Not everyone gets a turn.
@33:00 Fabi cheating on the test to figure out who's cheating is so funny. Lol
It seems to me that the competitive cheater (in any sport) is someone whose competitiveness or ambition eclipses his sense of right and wrong. They simply want to win so much they'll take any risk. Lance Armstrong, Carl Lewis, those types of people. So I don't know what it tells us when we select random people to cheat in an experiment, probably just that non-cheaters feel very bad about it and can't handle the pressure they never wanted in the first place.
Opportunity plays a larger role than motive. So many people took steroids in baseball; it's not like baseball players are inherently less ethical than basketball players or whatever, it's just that it was so much easier to not get caught.
It's really, really easy to cheat in online chess.
While Lance is that kind of character, it was virtually the whole peloton that was taking EPO. So I am not sure you can draw conclusions about the character of cheaters from that era of cycling.
Yeah, I think it's more about using grey areas to gain advantages in sports. Also, when there is serious money on the line, that's what happens. There's barely any cheating at the lower levels because it simply doesn't pay well enough.
@@fauge7 in some sports the amateur level has a decent amount of cheating. Golf for example.
@@peterhardie4151 "That era of cycling" started in the 1920s and continues today. Cycling is the most doped sport.
What an episode! What a guest! 😍 Well done and thank you! 🥳🥳🥳
In the discussion of the 2nd experiment, where players looked at 7 games and predicted where the cheating was: it goes against the most profound finding of the first experiment, i.e. that you tend to think your opponent is cheating if you blunder more. When observing a game as a neutral, you're not playing, therefore not blundering, and therefore have no personal connection to the game. So the 2nd experiment's design is significantly different from the 1st experiment's, and the hypothesis should be different if nothing else. They can't both be answering the same question if a significant detail that we know predicts when someone thinks another is cheating differs between them. This may sound pedantic, but to an academic it should be obvious.
Just a reminder that the 2nd experiment in the podcast (the 7 games) was last year and was actually the initial experiment. The first one mentioned (the tournament) is the most recent one.
49:00 "Do you think the people with that bias would be receptive to the evidence?" Hm, I wonder who he could possibly be talking about here...
One of the things about checking moves with an engine is that it's not your idea, so you need to understand the meaning behind the idea in order to make use of it. I can understand the super GMs who say 1 or 2 moves a game is enough for 100 Elo, but I think for me only 1 or 2 moves a game might make the result worse, because it wasn't my idea and I don't know how to follow up that move. It may be that, as Svidler suggests, not the move itself but the fact that there is something important to find in the position is what would really be advantageous.
That’s exactly what I was thinking. Telling me a stockfish recommendation would tell me nothing at all 😂
China managed to get total domination in women's chess just by spending money on a program for it. Imagine Rex decides he actually wants super-serious women's chess, and he doesn't just pay stipends to someone like Alice Lee, but partners with a local St Louis private high school and does an all-expenses-paid thing for 15 top prospects, where chess is part of the curriculum. We'd make GMs every year. Add a similar boys' scholarship, and we'd see 2700s. But that's far more money than the 100k prize.
They should do an experiment where, after the tournament, players are allowed to discuss who is cheating. This would tell us whether discussing the games with other players increases or decreases detection accuracy. My hunch is it would lead to lower accuracy, potentially down to about 50%, because you don't wanna cross someone who's convinced their opponent cheated.
Tests like this are usually shown to be really poor as people come in with preloaded presumptions and a goal to find a cheater
This is one of my favorite episodes so far! I caught myself in that bias just a couple of days ago, actually. I suspected my opponent had cheated, analyzed the game, looked over their account, and had to conclude that I was wrong. I just played a much worse game than I thought I did, while they didn't play anything groundbreaking at all.
Superb interview, thank you.
Chess Players are wired differently: "When evidence comes we change our minds."
Er....
Regarding playing worse overall when you were asked to cheat: I'm pretty sure this primarily affects those who were asked to cheat but wouldn't cheat of their own volition. There is at least a subset of cheaters who will not perform worse overall at all if they throw in an engine move now and then. Those cheaters tell themselves that their cheating is justified (e.g., because others are cheating as well, or they would find the moves if they weren't tired, etc.) and don't feel any guilt. The play of that subset of real cheaters would not suffer from the cheating.
What an outstanding episode and an arguably more outstanding guest. I am extremely interested in following his work going forward
I'll watch this on Kramnik's channel when he reacts to it.
😂😂😂😂
He should open a separate reaction channel
I'll wait for Eric Hansen's (chessbrah) reaction to Kramnik's reaction to be able to watch it. Can't watch anything shorter than a 9-hour video.
@@angosalvo5734 Oh, even better!
30:13 LOL
The video and sound seem desynced on my end.
Loved David, please bring him back for a longer episode where you guys delve deeper into many of the topics in the video.
Fabi and David especially seem to gel so well.
Here, we have a very smart and hard-working guy (and not because he plays chess, btw). Nice episode! Really enjoyed it.
I wonder if you could devise an experiment where you have between 10 and 20 volunteers who play at various open tournaments and are randomly selected as cheaters in certain games, with the full knowledge and cooperation of arbiters and tournament organizers, but without their opponents' knowledge ahead of time to ensure that it would be a double-blind study. Obviously you would have to refund any points lost due to games that were won by the cheaters, and you're on slightly sketchy ground ethically, but it would still be interesting to see the results of a study like that. It would also be a huge data point for analysis of OTB cheating.
Could you avoid the ethical dilemma in the experiment by doing the following: you run the (openly advertised) cheaters' tournament and incentivise accurate accusations of cheating, but in reality you assign no cheaters. Accusations likely ensue regardless, at a similar rate to a true cheaters' tournament, just based on hysteria, cognitive dissonance when players blunder, etc. I think this could simulate the paranoia in the chess world at the moment.
This is a fantastic idea.
That would create a baseline. But I think right now that paranoia is already there in online chess in general.
I LOVE this idea!
So... A normal online tournament with money prizes?
Would it avoid the ethical dilemma? Yes. Would it be useful data? Not really. We already know that people are paranoid, and prompting them to be suspicious of their opponents by creating a "cheaters" tournament would only prove that people are susceptible to paranoia... which again, we already know. It could make for an interesting psych experiment if you're into that, but there would be no practical application for the results.
Great interview! Well done C^2!
Wow, such a great podcast, one hour passed like one minute !
The data set would be way more relevant if the cheaters were in control, requesting the best move in as many difficult positions as they needed, rather than just waiting around passively for the tester to send moves in a limited number of positions that the tester merely assumed were the ones where the player needed help finding the best move.
I thought so too. That would dramatically affect the results, because cheaters would always find their way out of difficult situations. And opponents would be more likely to get that ‘feeling’ of their opponent cheating.
Great idea. You could give them the option of, say, 4-10 moves per game that they could phone a friend, as it were.
It is so refreshing to hear the heavy emphasis on evidence.
They were the perfect trio to test whether an average GM with a few hints can beat, or at least be competitive against, a super GM. Nonetheless, a fantastic interview; David Smerdon is one of the most interesting chess players.
The Books Under the Goat
Unknown
Title: The Performance Cortex, Author: Zach Schonbrun
Title: Revolution, Author: Russell Brand
Title: Einstein: His Life and Universe, Author: Walter Isaacson
Unknown
Title: Bobby Fischer, Author: Harry Benson
That's the best I can do. Any improvements?
The green one is Twin Peaks: The Final Dossier
31:00 The main takeaway I had from intro-level statistics is that correlation is not causation. Fabiano is right to question the subjectiveness. There's a difference between “I play badly, so they must be cheating” and “It feels like they are cheating…”, and the information processing is messed up because of being distracted, developing counter-strategies to the cheating, or any number of other possibilities.
Exactly.
That was a great podcast, keep them coming
Here we go with a much appreciated and professional approach to the whole thing. Kramnik pointed out the problem, and now we need more work on it like this instead of destroying innocent people's lives. Kramnik should show up with work like we see here.
Exactly. Enough of Kramnik's poorly substantiated 'suspicions' and bullying.
What about Kramnik's thoroughly substantiated suspicions, like the videos of Danya looking at engine lines, mentioning computer moves, repeatedly lying about his own on-video behavior, getting ranked #1 in the world online, above Magnus, Hikaru, Alireza?
@@illarionbykov7401 Thoroughly substantiated my ass! He once mentioned a move that was also suggested by one of the computers; that is evidence of nothing.
Very interesting! I'd like to add one thought: two moves with illegal assistance can make a huge difference if the cheater can decide when to use the help. It is a different story when he/she does not know when and how often the assistance comes. So the cheater might wait for a move in a difficult position and burn time, or might not fully focus.
May I make a suggestion in fighting FGM? It's a very difficult case to make when the arguments defending this horrific practice mirror arguments made in the West to defend MGM. I believe there is little vigor in ending this practice because the moral arguments apply to both genders.
Problem is, like in most sports/games at the top level, there isn't an actual male GM title. The title is open to everyone. It's only socially referred to as the male title because of the existence of the other. There is, however, a handicapped title only available to one group.
@@mikemck4796 I wasn't talking about the Grandmaster title. I was talking about something else
The first thing to evaluate is the consequences of cheating at various levels. I would propose an experiment involving low-level computer bots and evaluating their estimated rating (Bradley-Terry) when some of their moves are augmented by Stockfish 17. This would help quantify the impact of cheating when it's limited to a handful of moves. Obviously, always playing like Stockfish gives you Stockfish.
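To make the Bradley-Terry suggestion concrete, here is a minimal, hedged sketch of how the rating gap could be estimated from head-to-head results. Everything in it is invented for illustration (the bot names, the 200-game match counts, and the win probabilities standing in for "some moves augmented by Stockfish"); it is not the experiment from the episode, just one way the proposed measurement could be set up.

```python
# Minimal sketch: fit Bradley-Terry strengths from simulated head-to-head
# results and express the gap between a plain bot and an "augmented" bot on
# an Elo-like scale. All numbers below are invented for illustration.
import math
import random

random.seed(0)

players = ["baseline_bot", "augmented_bot", "reference_bot"]
# Hypothetical win probabilities (row beats column); draws ignored for simplicity.
true_p = {
    ("augmented_bot", "baseline_bot"): 0.65,
    ("augmented_bot", "reference_bot"): 0.55,
    ("baseline_bot", "reference_bot"): 0.40,
}

# Simulate 200 games per pairing and count decisive results.
wins = {(a, b): 0 for a in players for b in players if a != b}
for (a, b), p in true_p.items():
    for _ in range(200):
        if random.random() < p:
            wins[(a, b)] += 1
        else:
            wins[(b, a)] += 1

# Standard MM iteration for Bradley-Terry:
# strength_i <- wins_i / sum_j [ n_ij / (strength_i + strength_j) ]
strength = {p: 1.0 for p in players}
for _ in range(500):
    updated = {}
    for i in players:
        total_wins = sum(wins[(i, j)] for j in players if j != i)
        denom = sum(
            (wins[(i, j)] + wins[(j, i)]) / (strength[i] + strength[j])
            for j in players if j != i
        )
        updated[i] = total_wins / denom
    norm = sum(updated.values())
    strength = {p: s / norm for p, s in updated.items()}

def elo_gap(a: str, b: str) -> float:
    """Convert a strength ratio to an Elo-like difference: 400 * log10(s_a / s_b)."""
    return 400 * math.log10(strength[a] / strength[b])

print(f"Estimated gain from augmentation: {elo_gap('augmented_bot', 'baseline_bot'):.0f} Elo-like points")
```

In the real version the win counts would come from actual bot-vs-bot games, with the "augmented" bot consulting Stockfish on a fixed handful of moves per game, and you would sweep that handful upward from zero to see how the estimated rating moves.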
The point around 40:00 probably isn't relevant to serious cheaters. Of course if you put people who normally play fair in a situation where they are allowed to cheat, using a system they haven't practised run by an accomplice they don't know, it follows they will be uncomfortable about it to the point that their legitimate play suffers. A serious cheat will have rehearsed the process ahead of time and they will be confident in their plan.
It makes 100% sense that the more you blunder, the more you blame.
The explanation is that when you blunder, you haven't seen the move(s) your opponent is going to play after the blunder, hence you think those moves were difficult to find.
Aussie!!! We love seeing our own out and about. It’s rare to hear of an Aussie in pretty much any field, but especially chess!!
Subtle Kramnik reference at 32:21. Nice!
Don't make accusations you don't have evidence for...
@@onetwo-xu2jw ah ok
I really enjoy this channel, but I believe it would help to use microphones with better sound quality. In this episode, for instance, Fabiano's voice sounds less distinct than the interviewer’s, which can be a bit distracting. His voice is clear in other videos, so it seems to be an issue with the audio capture in this episode.
I think there is no way to resolve the cheating issue without super invasive techniques like actively scanning brain activity. You can detect super deep tactics happening unnaturally, or situations where there are too many possible good moves for you to plausibly keep choosing the top engine move, but all the cheater needs to do to defeat this is choose only moves they understand and use engines that take detection techniques into account. I wonder if in 50 years chess players will be wearing hats that scan their brain activity while they play.
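For what it's worth, the "too many good moves to keep matching the engine" idea from the comment above can be sketched with python-chess. This is a rough toy heuristic, not any site's actual detection method; it assumes python-chess is installed and a Stockfish binary is on PATH, and the depth, the 30-centipawn threshold, and the PGN filename are placeholder choices.

```python
# Toy sketch: how often does a player's move match the engine's top choice in
# positions where several moves are nearly equally good (where matching is
# least explainable by "the move was forced")? Assumes a UCI engine on PATH.
import chess
import chess.engine
import chess.pgn

ENGINE_PATH = "stockfish"   # assumption: Stockfish binary available on PATH
DEPTH = 16                  # placeholder search depth
NEAR_EQUAL_CP = 30          # alternatives within 30 centipawns count as "also good"

def top_move_match_rate(pgn_path: str, suspect_is_white: bool) -> float:
    engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
    with open(pgn_path) as f:
        game = chess.pgn.read_game(f)

    matches = considered = 0
    board = game.board()
    for move in game.mainline_moves():
        if board.turn == (chess.WHITE if suspect_is_white else chess.BLACK):
            infos = engine.analyse(board, chess.engine.Limit(depth=DEPTH), multipv=3)
            scores = [info["score"].relative.score(mate_score=100000) for info in infos]
            best_move = infos[0]["pv"][0]
            # Only count positions where at least three moves are close in evaluation.
            near_equal = sum(1 for s in scores if scores[0] - s <= NEAR_EQUAL_CP)
            if near_equal >= 3:
                considered += 1
                matches += (move == best_move)
        board.push(move)

    engine.quit()
    return matches / considered if considered else float("nan")

# Hypothetical usage: print(top_move_match_rate("suspect_game.pgn", suspect_is_white=True))
```

Even then, as the comment says, a cheater who only takes hints in positions they understand would sail under a filter like this, which is the whole problem.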
Wow! That was amazing. By far I think my favorite interview in chess.
Smurfo! I love his Portuguese Gambit. Been a fan of his work since around 2014-15. Impressive draw against Magnus in the Baku Olympiad in 2016. Can't believe it's been 8 years.
Excellent episode. Really enjoyed it.
I was thinking about the moral dilemma of the experiment. You could have two tournaments with identical tasks. Everyone knows there are two tournaments, and that in one of them no one is cheating while in the other there will be cheaters. Of course, in this setup you would unfortunately have to have the same people be the cheaters for the whole tournament, so as not to give away which group you are in, but I think that is closer to real life as well.
I guess the only problem with this is, and not to sound like Kramnik here, but can you absolutely guarantee no one is cheating in the no-cheaters tournament? They might not be cheating with moves you gave them, but-
@wrjalmond They would be playing on designated computers in designated rooms without their phones. There would be no prize money for winning. I guess it is possible someone would cheat, but how likely do you think that would be in an experiment everyone knew they were part of?
Best episode yet! David's fantastic.
This is fantastic work, thanks for sharing with us!
This experiment is going to provide massive amounts of data to tune any cheating detection engine!
Kramnik should pay that guy 100k to learn how to properly tackle cheating. ;)
Nice propaganda
Would love more guests like this in the future
Can David expand on the 69% accusation success rate thing? He says that "50% would be chance" but that is dubious and depends very much on the nature of how you're measuring accuracy. 50% would only be chance if half the games had a cheater and you were asking after every round, "did your opponent cheat?" But that wasn't the methodology at all here. Players could accuse anyone at their discretion (and it cost them to do so), out of the space of all games they played. The expected accuracy by random guessing is way, WAY below 50% in this scenario, and 69% would be very significant.
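A quick way to see how far below 50% random guessing would fall under that methodology: simulate a player who accuses one opponent at random out of all the games they played, when only a couple of those games had a designated cheater. The 9 rounds, 2 cheated games, and one-accusation-per-player rule below are invented for illustration, not the experiment's actual parameters.

```python
# Hedged sketch: baseline accuracy of purely random accusations when a player
# picks one of their games to accuse and only a few games involved a cheater.
# ROUNDS, CHEATED_GAMES, and the single-accusation rule are assumptions.
import random

random.seed(1)

ROUNDS = 9          # games each accuser played (assumption)
CHEATED_GAMES = 2   # games that actually had a designated cheater (assumption)
TRIALS = 100_000

hits = 0
for _ in range(TRIALS):
    cheated = set(random.sample(range(ROUNDS), CHEATED_GAMES))
    accusation = random.randrange(ROUNDS)   # accuse one opponent at random
    hits += accusation in cheated

print(f"Random-guess accuracy: {hits / TRIALS:.1%}")   # about CHEATED_GAMES / ROUNDS, i.e. ~22%
```

Under those made-up numbers the random baseline is roughly 22%, so a 69% hit rate would indeed be far more significant than a comparison against 50% suggests.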
David is a super cool and intelligent dude! He should turn his chess research into a doco film and/or make more chess-related content! Now that would be interesting!
I think the players in the first tournament did relatively well in spotting cheating because
1) They were told there would be cheating so they were actively looking for it
2) The relative strength (or lack thereof) of the players involved: a 2600+ player playing a few engine moves is far less suspicious than a 2000 Elo one.
I remember the times when a player could lose a game fair and square and not have to feel like they had been subjected to cheating they have no effective defense against. Now it seems the threat is as bad as the act, and the psychological effects are stacking up over time. Chess engines: a blessing and a curse.
Maybe you could do the same experiment where you say that some people will be chosen as cheaters, but actually no cheaters appear. The results might indicate whether people would actually play differently than in, let's say, a regular tournament environment.
Fantastic interview. Super interesting.
Brilliant episode, thanks!
I guess a weakness of the study is that the cheaters were not motivated/self-selected cheaters.
Great and informative episode! The discussion about biases and the accuracy of guessing if someone is cheating was very interesting. Does anyone know the time control of the tournament experiment?
Great episode, very much enjoyed it thanks.
32:31 Yet you actually don't know whether any of the players were cheating of their own accord, in addition to the designated cheaters of the experiment.
He seems like a great guy
Edit after watching: it was a fascinating podcast.
That was an excellent listen! Thanks to all parties involved for thought provoking questions and answers. Hopefully we could have more players, or even top players participate in experiments!
Clicked for the cheating title, stayed for the interesting topics!
You could do it over the board if everyone played with their phones open and flipped up in front of them like when you play battleship. Turn the volumes and vibrations off. And you just get a text with the move at key moments.
Awesome pod! I certainly learned quite a bit, and his research really contextualised a lot of the cheating drama. Thanks!
Would be nice if the guess-the-cheater quiz were still available; I would love to try it myself and see how I do.
this guy was dope! 😂 where’s his pod??
Could you please specify the recording date for every video of yours? Nowadays it is important; we need to know whether you are ignoring Kramnik or not.
🤣🤣
+1
Absolutely amazing voice and podcast.
More of this guy, please!
fascinating episode and great questions from fabi and christian
Awesome interview. 👍
Great Episode!!!
What a banger experiment! Awesome stuff.
This is by far the best pod you have done in a while. Fewer whingy, whiny chess players and more guests like him, please.
Audio and video not synced
Audio isn't great overall. The guest's mic especially seems over-sensitive and distorted.
24:23 - the experiment is discussed here
This is such a constructive dialogue!
Playing against a concrete wall, against an opponent who refuses to make mistakes, will result in a blunder sooner or later, so I find this aspect of the study and its conclusions curious.