Greg thoughtfully responded to a series of comments by Statisticool. Instead of engaging in the conversation, Statisticool deleted his comments and with them Greg's responses.
Greg Glassman: "'Statisticool' posted some questions/objections to this video. I took the time to answer each of them. He then removed all of the comments/objections and my replies. His misunderstanding is far from uncommon, even for a 'professional statistician of 17 years,' so I see potential value in reposting the questions and my replies. His questions/objections are boldfaced and my replies immediately follow."
I am reposting each of Statisticool's initial comments and Greg's responses here so others with similar questions can benefit from Greg's extensive replies.
Thank you for being an example of open science, in contrast to the person you are dealing with.
I see no comments/responses here as referred to.
Also, re the 9/10 dentists: the procedure leading to the outcome "9/10 recommend Sterodent" would be interesting as an example of manipulation, e.g., did the dentist get a free box of Sensodent for a "yes" response? If you asked the 9/10 on what grounds they recommend it, you will find they have none, and even if there was "a reason" (e.g., it contains fluoride), the dentist will be echoing what his lecturers told him, i.e., the consensus view of his training college.
Remember the ad that said doctors prefer a certain cigarette brand.
I see the comments below now & notice ad hominem insults & a list of titles as a response in place of reason.
This reminds me of first listening to Coach’s lectures on what is Fitness. I thought, I don’t understand what this dude is talking about, but i have to learn more.
I also loved recess
*Statisticool:* Glassman and Briggs should read Deborah Mayo's "Statistical Inference As Severe Testing", to understand p-values and science better.
*Greg Glassman:* I did, as a matter of fact, and your predicted effect didn’t materialize. Should I assume that you’ve not read Jaynes’s “Probability Theory: The Logic of Science”? How about Gigerenzer’s “Mindless Statistics” (2004), “Surrogate Science: The Idol of a Universal Method for Scientific Inference” (2015), or “Statistical Rituals: The Replication Delusion and How We Got There” (2018)? How about Intel’s Charles Lambdin’s “Significance tests as sorcery: Science is empirical - significance tests are not” (2012)? “The Test of Significance in Psychological Research,” David Bakan (1966)? Or even The American Statistician’s publication of “The ASA’s Statement on p-Values: Context, Process, and Purpose” (2016), in which America’s oldest scientific organization seems to recognize much of what you and Dr. Mayo don’t acknowledge. But notice: I assume neither that you’ve seen or read them, nor that doing so would change your mind.
Welcome back, coach!
The nails story is one of the best stories I have heard: charming & instructive. The 9/10 dentist claim is crying out to be scrutinized & it would be a valuable exercise to review it. I bet your dad looked at the last digit & saw you had favorite numbers.
*Statisticool:* The Framingham Heart Study has measured many things...what are you talking about?
*Greg Glassman:* Framingham correlated answers to survey questions with health outcomes, and that data was used to support the mistaken notion that cholesterol is a cause of heart disease. That has been a public health disaster of incomparable degree.
Observations of the real world become measurements when tied to a standard scale with a well-characterized error. Answers to lifestyle surveys scarcely qualify as measurements of real-world observations. Nutritional epidemiology suffers greatly from this shortcoming, among others.
In a review of Uffe Ravnskov’s Cholesterol Myths, “Abacus” offers this, and I post it here because it seems you may have some interest in the topic of health metrics and public health. The Seven Countries Study, the Framingham Heart Study, and the Nurses’ Health Study are the targets of Abacus’s “Thirteen Sins of Medical Statistics”.
“The first sin is misdirection. Many studies indicated there were inverse relationships between mortality and cholesterol, especially at advanced age and among women. The researchers ignored this and stated unequivocally that the data showed a directly proportional relationship between cholesterol and mortality.
The second sin is data cherry-picking. Ancel Keys, leading advocate of the cholesterol theory, gathered data from 22 countries. He deduced that the percentage of calories derived from dietary fat is related to cholesterol and higher mortality rates by selecting only the 7 countries that supported his hypothesis.
The third sin is ignoring qualitative differences in cultural practices. In the U.S., coronary heart disease (CHD) is diagnosed in cases with uncertain causes of death 33% more often than in England and 50% more often than in Norway. As a result, the three countries are associated with high, moderate, and low levels of CHD, respectively. Yet their consumption of cholesterol is similar. Keys ignored these factors and excluded the countries that did not support his conclusion.
The fourth sin is confusing association with causation. Researchers sometimes inferred that higher cholesterol was causing CHD, when the true cause may have been age, weight, or diabetes. The author shows how you could similarly demonstrate that radio ownership is correlated with mortality rate!
The fifth sin is not using random sampling. The Framingham study included a postmortem analysis concluding that cholesterol does cause atherosclerosis. But this was after selecting only the 14% of the test subjects who died prematurely, a large proportion of whom had familial hypercholesterolemia, a rare disorder associated with high cholesterol and CHD. That relationship between cholesterol and CHD does not exist in the general population.
The sixth sin is using the wrong test to boost significance. The two-tail t test is the appropriate one in medical hypothesis testing. But, researchers often used the one-tail t test to inappropriately boost confidence level from 90% to 95%. This allowed them to claim their findings were significant when they were not.
The seventh sin is not looking at the whole picture. When testing the impact of cholesterol lowering drugs, researchers focused on the reduction in death from CHD while ignoring increase in death from other causes. Those drugs often boosted total mortality.
The eighth sin is focusing on relative rather than absolute risk. If a drug reduces mortality from 0.7% to 0.6%, the pharmaceutical industry will broadcast that it reduces the mortality rate by 14% (the change in relative risk). This overstates the benefit: in absolute terms, the drug reduces mortality by only 0.1 percentage points, saving one in 1,000 lives (the change in absolute risk). Medical studies use relative risk to boost claims of drug merits and absolute risk to minimize the implication of side effects.
The ninth sin is adding variables to get the prediction you want. Researchers never found statistically adequate evidence that high cholesterol causes CHD. So, they added smoking. They found that the combination of smoking and high cholesterol did cause CHD. But, smoking was responsible for most of the CHD.
The tenth sin is not doing a double-blind test. Many of the studies were done with doctors knowing which patients received the drug. Invariably, such studies result in an overly optimistic assessment of the tested drug.
The eleventh sin is believing the frequency of a study's citation is proportional to its quality. Within medical research, studies that demonstrate that a cholesterol-lowering drug reduces CHD risk are cited 10 to 100 times more often than those that don't.
The twelfth sin is testing the same hypothesis over and over. The scientific method consists of testing a hypothesis once; if the results reject the hypothesis, researchers should come up with a different one. Instead, medical researchers test whether lowering cholesterol reduces CHD until they get the results they want. That's not science.
The thirteenth sin is believing the consensus matters more than the source of funding. The reverse is true. The pharmaceutical industry funds the majority of studies; thus, researchers reach their financiers' consensus. The few dissenters are dismissed, but their judgment is not distorted by Big Pharma.
By uncovering statistical flaws, the author debunks the merits of the Mediterranean diet and the French Paradox. Similarly, he refutes the concept of good vs. bad cholesterol, and the related ratio between the two as a metric for CHD also falls apart.
He also refutes the merit of Dr. Ornish's draconian diet (only 10% of calories from fat). He indicates that Dr. Ornish's own study included so many variables (exercise, lifestyle, meditation) that he could not isolate the contribution of the low-fat diet. Ravnskov suggests Dr. Ornish's program would work as well without the diet component.
I recommend three other books: Charles McGee's "Heart Frauds," Lynne McTaggart's "What Doctors Don't Tell You," and Nortin Hadler's "The Last Well Person." The first covers cardiovascular treatment; the other two cover Western medicine. The books' messages converge: Western medicine is costly, overly invasive, and not always effective.”
We covered this material nicely at CrossFit.com, and that may be a good starting point for you, Statisticool, to begin looking at this topic more seriously. I would also highly recommend Ravnskov’s “Cholesterol Myths.”
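As an aside on the review's sixth sin: the one-sided p-value is exactly half the two-sided one, so switching tests can move a borderline result across the 5% threshold. A minimal sketch with an invented test statistic, using a normal approximation rather than a t distribution:

```python
from statistics import NormalDist

# Hypothetical test statistic, chosen to sit between the one- and
# two-sided 5% cutoffs (roughly 1.645 and 1.96 under a normal model).
z = 1.70

p_one = 1 - NormalDist().cdf(z)   # one-tailed p-value
p_two = 2 * p_one                 # two-tailed p-value: exactly double

print(f"one-tailed p = {p_one:.4f}")  # ~0.045: "significant" at the 5% level
print(f"two-tailed p = {p_two:.4f}")  # ~0.089: not significant
```

The same data cross the conventional 0.05 line under one convention and miss it under the other, which is the whole trick the review describes.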
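The eighth sin's relative-vs-absolute-risk arithmetic can be checked directly using the hypothetical rates from the review (0.7% vs. 0.6% mortality):

```python
# Hypothetical mortality rates from the review's example:
# 0.7% in the control group, 0.6% in the treated group.
control_rate = 0.007
treated_rate = 0.006

# Absolute risk reduction: the plain difference in rates.
arr = control_rate - treated_rate                    # 0.1 percentage points

# Relative risk reduction: the difference as a fraction of the control rate.
rrr = (control_rate - treated_rate) / control_rate   # ~14%

# Number needed to treat: how many patients to treat to prevent one death.
nnt = 1 / arr                                        # ~1000

print(f"absolute risk reduction: {arr:.1%}")   # 0.1%
print(f"relative risk reduction: {rrr:.0%}")   # 14%
print(f"number needed to treat:  {nnt:.0f}")   # 1000
```

Same trial, three honest-sounding numbers: 14%, 0.1 percentage points, and one life per 1,000 treated. Which one gets quoted depends on who is selling what.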
Welcome back coach
Thank you!
*Statisticool:* You talk down probability being in coins and dice, but probability via the bell curve was present in your nail measurements, even before you measured them. Explain?
*Greg Glassman:* I “talk down” uncertainty inhering in objects or their behavior. The point is that uncertainty does not inhere in objects nor their behavior but in our heads. You thought I was denying the utility of mapping a data set to a test statistic or model.
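Glassman's "mapping a data set to a test statistic or model" can be sketched minimally. The measurements below are invented for illustration (the video's actual nail data is not reproduced here):

```python
from statistics import mean, stdev

# Invented nail-length measurements, in inches.
measurements = [2.49, 2.51, 2.50, 2.52, 2.48, 2.50, 2.51, 2.49, 2.50, 2.53]

# "Mapping the data set to a model": estimate the parameters of a
# normal distribution from the sample.
mu = mean(measurements)
sigma = stdev(measurements)   # sample standard deviation

print(f"fitted model: Normal(mu={mu:.3f}, sigma={sigma:.4f})")
```

The fitted normal is the model in our heads; the nails themselves carry no probability, which is the distinction being drawn.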
Uncertainty is about humans being uncertain. It inheres in minds, not things. David Hume is the man.
Probability is in the mind. Randomness is in the mind. Patterns are in the mind. Hume was right that everything must be grounded in subjectivity.
A die I roll has nothing to do with mind; the nails measured, giving an approximate bell curve, have nothing to do with mind.
53:55 What's the name of the website?
brokenscience.org/
*Statisticool:* You are saying predictive power is science, and at 29:11 you say if you think about it, we trust everything based on predictive power, kids, spouse, etc. Yet at 44:15 you said there are think... [comment cut off]
*Greg Glassman:* No, and a pattern emerges here with you. Try this: respond to what I said and not a characterization of same. It will make for a better exchange and fuel any potential for either of us to learn from one another. Here is what I said science is:
Science is man’s source and repository for objective knowledge. That knowledge siloes in models. Those models map a fact to a future unrealized fact as a prediction. A fact is a measurement. A measurement is an observation of the real world tied to a standard scale with a well-characterized error. Models are ranked and graded by their predictive strength (e.g., conjecture, hypothesis, theory, and law), and that is the singular determinant of validation: predictive strength.
I offered the analogy of trust in people and institutions, saying that, as with scientific theories, rational trust comes from predictability. That doesn’t make love a science, but it may do that for insurance (actuarial science, in fact).
Do the science on the "gangsta limp"
You mean the polio he had as a kid?
*Statisticool:* Greg basically saying 'consensus is not science' and Briggs saying 'look at all these scientific studies that disagree!' (ie. do not have consensus). Contradiction? Of course, scientific consensus is agreement based on experiments, not just people merely saying 'I agree with you'. Of course, religions have less consensus and cannot measure anything about the gods they believe in or falsify, so picking nits about scientific process is amusing, since it helps us understand the world much better already.
*Greg Glassman:* Imagine the wet blanket effect for me in having to, again, address your characterization of what I said rather than what I said. And to this mischaracterization, I now find “God” somehow introduced and apparently involved. Please. Does formal training in null hypothesis testing somehow green-light straw-man creations? I could imagine that.
Start with this, Sir: NHST and publication in highly esteemed magazines (PRJs), both hallmarks of much university research, are a shitty alternative to predictive strength for validating a scientific theory, and the expectation of replication under that schema, of course, produces outcomes that cannot be replicated. To expect otherwise is irrational.
The scientismo talking point is that consensus means consensus "based on experiments" or "the data" or other such inevitable motte-and-bailey (fallacy) setups. Paradigms are viciously self-reinforcing. "The data" is a silly notion: who collected the data, how, why, and how was it interpreted, by what model, funded by whom, with what classifications, definitions, etc.? Oh, and have those definitions changed midway through the period of study? Were they cherry-picked? "Covid" was a better class in How to Lie with Statistics than the book itself.
Hard to follow; perhaps slow down in difficult parts & repeat the harder parts of a talk. More details on the gym data would be interesting. The legal system is more corrupt now & judges are more likely to stay aligned with big business (Pepsi). Saw an ad that said sugar improves testosterone.
Glassman and Briggs should read Deborah Mayo's "Statistical Inference As Severe Testing", to understand p-values and science better.
I did, as a matter of fact, and your predicted effect didn’t materialize. Should I assume that you’ve not read Jaynes’s “Probability Theory: The Logic of Science”? How about Gigerenzer’s “Mindless Statistics” (2004), “Surrogate Science: The Idol of a Universal Method for Scientific Inference” (2015), or “Statistical Rituals: The Replication Delusion and How We Got There” (2018)? How about Intel’s Charles Lambdin’s “Significance tests as sorcery: Science is empirical - significance tests are not” (2012)? “The Test of Significance in Psychological Research,” David Bakan (1966)? Or even The American Statistician’s publication of “The ASA’s Statement on p-Values: Context, Process, and Purpose” (2016), in which America’s oldest scientific organization seems to recognize much of what you and Dr. Mayo don’t acknowledge. But notice: I assume neither that you’ve seen or read them, nor that doing so would change your mind.
@@gregglassman2092 Regarding "The American Statistician’s publication of The ASA’s Statement on p-value: “Context, Process, and Purpose” (2016)" Pssst, read their "The ASA President’s Task Force Statement on Statistical Significance and Replicability" from 2021.
@@gregglassman2092 Greg, do you believe Briggs' statistical analysis of election 2020 that he did for Sidney Powell was good?