Is Most Published Research Wrong?

  • Published 23 Dec 2024

COMMENTS • 10K

  • @raznaot8399
    @raznaot8399 3 роки тому +19050

    As the famous statistical saying goes, "If you torture data long enough, it will confess to anything"

  • @qwerty9170x
    @qwerty9170x 3 роки тому +6067

    I really think undergrads should be replicating constantly. They dont need to publish or perish, step-by-step replication is great for learning, and any disproving by an undergrad can be rewarded (honors, graduate school admissions, etc) more easily than publication incentives can change

    • @jatmo6991
      @jatmo6991 3 роки тому +389

      Agreed. The peer review portion of the scientific method is its weakest link IMO.

    • @lucaslopez2091
      @lucaslopez2091 3 роки тому +296

      I agree. Undergrads replicating classic experiments can also help with their education.

    • @chucknorris3752
      @chucknorris3752 3 роки тому +81

      Undergrads do perform classic experiments ^

    • @sYnSilentStorm
      @sYnSilentStorm 3 роки тому +161

      When I was an undergrad in the physics department, we were (and still are) required to reproduce quite a few experiments that led to Nobel prizes. It's a fairly common practice, and you do this while working in a research group that is producing new research.

    • @qwerty9170x
      @qwerty9170x 3 роки тому +182

      @@sYnSilentStorm How common are undergrads in research groups / replicating newer work? Well-trodden, once-groundbreaking-now-foundational Nobel prize experiments make sense for learning, but we would benefit from a more institutionalized path for replicating newer claims - especially before graduate school, since graduate students are who populate research groups in my mind.

  • @Campusanis
    @Campusanis 6 років тому +5110

    The most shocking thing to me in this video was the fact that some journals would blindly refuse replication studies.

    • @krane15
      @krane15 5 років тому +58

      Maybe their system is flawed?

    • @spiritofmatter1881
      @spiritofmatter1881 5 років тому +63

      The Economist publishes blindly. The whole of Evidence-Based Medicine does not yield dosages for the medicines tested - that includes your painkillers, your blood pressure pills, your mood stabilisers and your anesthesia.
      But this is not science's fault, nor is it a conspiracy of the medical system.
      The scientific method is limited in its ability to give us data.
      Yet, practically, we need to understand the world. The only way out is innovative methods whose pros and cons people can actually understand, and that can get published without having to imitate the scientific method to be listened to.
      Oh, and developing right-hemisphere science. Which is what I experiment with.

    • @ewqdsacxz765
      @ewqdsacxz765 5 років тому +18

      We should boycott that unscrupulous journal. Does anyone know the name?

    • @krane15
      @krane15 5 років тому +107

      @@spiritofmatter1881 No, the scientific method can be skewed to provide misleading data; and if you don't believe there's a conspiracy in the medical/pharmaceutical system, there's a bridge in NY I can sell you for a good price. The pharmaceutical industry is mostly shady -- and medical doctors are their extended hand of national snake oil salesmen.

    • @fuNaN89
      @fuNaN89 5 років тому +130

      Journals are just newspapers for scientists.
      Publishing houses want to publish things that people want to read. And people want to read interesting and new things, not replication studies.
      That said, nowadays there are journals that accept and publish replication studies. These journals usually charge a much larger publication fee.
      Nothing is perfect. But at least everyone is trying.

  • @14MCDLXXXVIII88
    @14MCDLXXXVIII88 Рік тому +1650

    This happens because of the "publish or perish" mentality. I hate writing scientific papers because it is too much of a hassle. I love the clinical work and reading those papers, not writing them. In this day and age it is almost an obligation that EVERYBODY HAS TO PUBLISH. If you force everyone to write manuscripts, a flood of trash is inevitable. Only certain people who are motivated should do this kind of work; it should not be forced upon everyone.

    • @migenpeposhi6881
      @migenpeposhi6881 Рік тому +88

      Indeed. You are totally right. I also feel that this over-publishing phenomenon has led researchers into manipulating data because it feels like a competition. In my opinion, research is hard and sometimes frustrating, but you should always stay loyal to the facts. Otherwise, you are cheating for profit. Totally unethical. There are people who devoted their lives to this, but still they don't get as much recognition as these 'cheaters' do.

    • @karlrovey
      @karlrovey Рік тому +30

      And not only is it "publish or perish," but you also have to pay the journals to publish your work once it is approved for publication.

    • @arvidbergman
      @arvidbergman Рік тому +50

      @@migenpeposhi6881 Certain people don't realise that "Research" doesn't mean "Prove something is true" but instead "See _if_ something is true"

    • @HyperVectra
      @HyperVectra Рік тому +4

      @karlrovey I choose perish

    • @justsayin47
      @justsayin47 Рік тому

      I'll tell you exactly why. The academia is controlled by rabid looney slavemasons and rabid looney jesus. Since they are both mentally and intellectually retarded rendering them incapable of any intellectual manifestation, they force grad students to do it in the publish or perish way. They actually steal and collect it for their own records because of the following reasons: 1. They think that these are valuable intellectual property which should only belong to them. Hoarding basically 2. They will pass it off as their own intellectual property subsequently in a post apocalyptic era or a different realm 3. Being intellectually retarded, they prefer quantity over quality. More the better 4. They are basically mining your intellectual property by exploiting you for their use 5. Commercial reasons just in case your papers somehow fetch money
      This is actually good. You should publish more crap for the slavemasons to prevent then from getting hold of some actual research

  • @ModernGolfer
    @ModernGolfer 2 роки тому +2554

    As a very wise man once stated, "It's not the figures lyin'. It's the liars figurin'". Very true.

  • @josephmoya5098
    @josephmoya5098 3 роки тому +3096

    As a former grad student, the real issue is the pressure universities put on their professors to publish. When my dad got his PhD, he said being published 5 times in his graduate career was considered top notch. He was practically guaranteed to get a tenure track position. Now I have my Masters and will be published twice. No one would consider giving you a post doc position without being published 5-10 times, and you are unlikely to get a tenure track position without being published 30 or so times. And speaking as a grad student who worked on a couple of major projects, it is impossible to be published thirty times in your life and have meaningful data. The modern scientific process takes years. It takes months of proposal writing, followed by months of modeling, followed by months or years of experimentation, followed by months of poring over massive data sets. To be published thirty times before you get your first tenure track position means your name is on somewhere between 25-28 meaningless papers. You'll be lucky to have one significant one.

    • @apolloandartemis4605
      @apolloandartemis4605 3 роки тому +129

      Damn. I really want to be a researcher in the natural sciences one day with hopefully a Master's or PhD, but I must say seeing this is a little unnerving. Would you happen to have any advice for aspiring researchers?

    • @mohdhazwan9578
      @mohdhazwan9578 3 роки тому +150

      I studied in Japan, and Japan has now changed its view of research: not trying to get to the top of the world, but asking how the research can be applied and contribute to society. If you look at the rankings today, most Japanese universities are not at the top as they used to be. Now universities from Korea, HK, China and Singapore are climbing to the top rankings. But every year these universities have suicide cases.

    • @cutefidgety
      @cutefidgety 3 роки тому +64

      @@mohdhazwan9578 friend I was totally with you until you mentioned suicide. How is that even relevant in how research can be applied to society?

    • @ShapedByMusic
      @ShapedByMusic 3 роки тому +35

      Not really accurate though. None of my professors have been published 30 times and they've taught at Yale, Texas A&M, Brown, that's not really true at all.

    • @josephmoya5098
      @josephmoya5098 3 роки тому +40

      @@ShapedByMusic they might not have, but it is beginning to become a standard for new hires. 30 might be a bit of an exaggeration, but there is no way you are getting hired with under 15-20 for a tenure track position.

  • @psychalogy
    @psychalogy 4 роки тому +2291

    It’s almost impossible to publish negative results. This majorly screws with the top tier level of evidence, the meta analysis. Meta analyses can only include information contained in studies that have actually been published. This bias to preferentially publish only the new and positive skews scientific understanding enormously. I’ve been an author on several replication studies that came up negative. Reviewers sometimes went to quite silly lengths to avoid recommending publication. Just last week a paper was rejected because it both 1. Didn’t add anything new to the field, and 2. disagreed with previous research in the area. These two things cannot simultaneously be true.
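
    A minimal sketch of the bias described above - what happens to a naive meta-analytic average when only significant results get published. The true effect, group size, and 0.05 threshold are all made-up numbers for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    true_effect = 0.1            # assumed small true effect
    n_per_group, n_studies = 30, 1000
    published = []

    for _ in range(n_studies):
        a = rng.normal(true_effect, 1.0, n_per_group)   # treatment group
        b = rng.normal(0.0, 1.0, n_per_group)           # control group
        diff = a.mean() - b.mean()
        se = np.sqrt(a.var(ddof=1) / n_per_group + b.var(ddof=1) / n_per_group)
        if abs(diff / se) > 1.96:        # "significant", so it gets published
            published.append(diff)

    print("true effect:", true_effect)
    print("mean of published effects:", round(float(np.mean(published)), 3))
    # The average of the published studies comes out several times larger
    # than the truth, because only the lucky overestimates made it into print.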

    • @shreksthongg
      @shreksthongg 4 роки тому +183

      This is very frustrating to hear.

    • @notme-ji5uo
      @notme-ji5uo 4 роки тому +20

      damn

    • @BestAnimeFreak
      @BestAnimeFreak 4 роки тому +78

      "These two things cannot simultaneously be true."
      Yeah, I just wanted to say it.
      If 1. is true, 2. can't be, and vice versa.

    • @alanbarnett718
      @alanbarnett718 4 роки тому +56

      Don't get me started about meta analyses. I've never heard of one being undertaken except where the backers have some kind of agenda. And I've never heard of one the results of which didn't support that agenda. The entire concept is deeply flawed.

    • @spike4850
      @spike4850 4 роки тому +13

      BestAnimeFreak that’s what he said...

  • @etanben-ami8305
    @etanben-ami8305 Рік тому +330

    When I was in grad school for applied psychology , my supervising professor wrote the discussion section of a paper before the data was all gathered. He told me to do whatever I needed to do in order to get those results. The paper was delivered at the Midwestern Psychology Conference. I left grad school, stressed to the max by overwork and conscience.

    • @stevekru6518
      @stevekru6518 Рік тому +6

      What was the thesis? Asking to learn if topic was especially controversial or important to particular interests.

    • @ButtersCCookie
      @ButtersCCookie Рік тому +8

      Why are you the minority? How can people who have the power to create Utopia choose self interest. It's like I'm in the Twilight Zone. Everybody knows and nothing is done. I wish I was never born and I hope I never am again.

    • @MidwesternCracker_2000
      @MidwesternCracker_2000 Рік тому +5

      Why not mention the subject of the paper then, brother? You lying…?

    • @jougetsu
      @jougetsu 11 місяців тому +3

      chances are high that the paper wasn't going to reveal anything new

    • @pinkz7911
      @pinkz7911 11 місяців тому

      @@ButtersCCookie ??? nihilist looking goober

  • @MrMakae90
    @MrMakae90 8 років тому +10450

    For people freaking out in the comments: we don't need to change the scientific method, we need to change the publication incentives that drive scientists' behavior.

    • @2adamast
      @2adamast 8 років тому +106

      You *believe* that we don't need to change the method

    • @MrMakae90
      @MrMakae90 8 років тому +573

      Adamast, I did not once state my belief. I stated the point of the video, which many seem to have missed. Ironically, you have now missed the point of my comment.

    • @slendy9600
      @slendy9600 8 років тому +456

      +Adamast No, we don't need to change the method. The problem (which was explained VERY clearly in the video) is that people aren't USING THE METHOD PROPERLY. Like when someone crashes a car because they were drunk, the car isn't broken, it's just being used incorrectly.

    • @2adamast
      @2adamast 8 років тому +49

      You do state your beliefs. You think you know the point of the video, where you see other people just freak out. Personally I read: _Mounting evidence suggests a lot of published research is false._ Nothing more. There is, I admit, a short faith message at the end of the video.

    • @MrBrew4321
      @MrBrew4321 8 років тому +314

      The scientific method: you go out and observe things, develop a hypothesis, and test the hypothesis. If you run a bunch of tests and come out with the wrong deductions, that is called flawed research methodology. Flawed research doesn't imply that the concept of going out and observing, coupled with experimentation, is flawed; it just means you suck at being a scientist.

  • @2ndEarth
    @2ndEarth 3 роки тому +2010

    My favorite BAD EXPERIMENT is when mainstream news began claiming that OATMEAL gives you CANCER. The study was so poorly constructed that they didn't account for the confounding variable that old people eat oatmeal more often and also tend to have higher incidences of cancer (nodding and slapping my head as I type this).

    • @hydrolito
      @hydrolito 3 роки тому +71

      Maybe don't stand so close to the microwave oven when you cook it.

    • @kdanagger6894
      @kdanagger6894 3 роки тому +42

      Perhaps oatmeal isn't the problem. The problem could be with a change in the way it is grown and produced. Contamination, soil depletion, pesticides, etc. There are always unaccounted-for variables in scientific research that invalidate the conclusions.

    • @NoTimeForThatNow
      @NoTimeForThatNow 3 роки тому +184

      @@kdanagger6894 the true answer was a lot simpler than that.

    • @hamsterdam1942
      @hamsterdam1942 3 роки тому +68

      Don't forget about "vaccines-autism" one

    • @NoTimeForThatNow
      @NoTimeForThatNow 3 роки тому +29

      My favorite is probably the story of how it was proven that stomach ulcers are caused by bacteria and not by stress and spicy food. Big arguments for decades between the established science and its supporters vs the scientists discovering the truth. It illustrates how poorly established science treats scientists with new ideas no matter how valid.

  • @GiRR007
    @GiRR007 3 роки тому +4657

    "There is no cost to getting things wrong, the cost is not getting them published"
    It's a shame this also applies to news media as well.

    • @NaatClark
      @NaatClark 3 роки тому +4

      books are media...

    • @fernando4959
      @fernando4959 3 роки тому +92

      @@NaatClark maybe they meant news

    • @GiRR007
      @GiRR007 3 роки тому +7

      @@fernando4959 i did, i fixed it

    • @theintolerantape
      @theintolerantape 3 роки тому +44

      @@GiRR007 At this point I think it's indisputable that mainstream news is literally state propaganda.

    • @zehirmhann9326
      @zehirmhann9326 3 роки тому +6

      @@NaatClark I don't know why it wouldn't apply to books

  • @karldavis7392
    @karldavis7392 2 роки тому +170

    This has influenced my thinking more than any other video I have ever seen, literally it's #1. I always wondered how the news could have one "surprising study" result after another, often contradicting one another, and why experts and professionals didn't change their practices in response to recent studies. Now I understand.

    • @thewolfin
      @thewolfin Рік тому +10

      Weird how popular this video is recently (most top comments are from ~1yr ago)...
      None of these problems apply to the fields of virology or immunology... right?

    • @karldavis7392
      @karldavis7392 Рік тому +2

      @@thewolfin I have no idea how much or how little different areas of study are affected. I assume the very worst ones are the studies that ask people what they eat and then ask how healthy they are. Beyond that, no clue.

    • @rb98769
      @rb98769 Рік тому +9

      Yeah, I remember how eggs were bad for your health, then they were good, then they were bad again. Not even sure where the consensus on that is at this point.

    • @bridaw8557
      @bridaw8557 Рік тому +5

      Meta analyses are difficult to conduct, but help weed out bad data and contradictory findings. Not enough of these are done.

    • @karldavis7392
      @karldavis7392 Рік тому +5

      @@bridaw8557 Some meta analyses weed out bad data, while others average it in. The trick is carefully reviewing how the original studies were done.

  • @Vathorst2
    @Vathorst2 8 років тому +4958

    Research shows lots of research is actually wrong
    _spoopy_

    • @thulyblu5486
      @thulyblu5486 8 років тому +242

      Science can actually falsify science... makes more sense than you might think

    • @RichieHendrixx
      @RichieHendrixx 8 років тому +129

      Science is a battlefield of ideas, the darwinism of theories, if you will. Only the best ideas will survive. That's why the scientific method is so powerful.

    • @nal8503
      @nal8503 8 років тому +106

      Unfortunately the "best" ideas today are those that will result in profitable gadgets and not exactly those that would best propel human knowledge forward.

    • @Ludix147
      @Ludix147 8 років тому +59

      +Nal but that isn't science's fault, nor capitalism's. It is the fault of the consumers that value gadgets so highly.

    • @P07H34D
      @P07H34D 8 років тому +43

      Research shows that most statistics and published research are false, statistically speaking.

  • @danknfrshtv
    @danknfrshtv 3 роки тому +798

    Just started my PhD. This video has inspired me to call in consultants outside of my supervisory team to check my methods. I don't want to be wasting my time or anyone else's with nonsense research, and I'm honestly feeling a little nervous about it now.

    • @chertfoot1500
      @chertfoot1500 3 роки тому +10

      What is your research area?

    • @angrydragonslayer
      @angrydragonslayer Рік тому +15

      Have you had a bad time with trying to be honest in science yet?

    • @whyplaypiano2844
      @whyplaypiano2844 Рік тому +29

      @@angrydragonslayer What does this even mean?

    • @angrydragonslayer
      @angrydragonslayer Рік тому +8

      @@whyplaypiano2844 what part do you not get?

    • @whyplaypiano2844
      @whyplaypiano2844 Рік тому +40

      @@angrydragonslayer The whole comment? It isn't phrased very well. Are you being sarcastic, or serious? If you're being sarcastic, it's either because you were trying to be funny, or because you're--for lack of a better word--salty that people aren't honest in science. If you're being serious, it's either a sincere question, or you genuinely think scientists are dishonest on purpose. Explain the mentality you had when you made the comment, I guess?

  • @saeedbaig4249
    @saeedbaig4249 8 років тому +1854

    This is why statistics should be a mandatory course for anyone studying science at university.
    Knowing how to properly interpret data can be just as important as the data itself.

    • @David-ud9ju
      @David-ud9ju 6 років тому +72

      It's generally not done at undergraduate, but a massive part of a PhD is understanding the statistical analysis that is used in research. It is extremely complicated and would be way too advanced for an undergraduate stats course for, say, a biology student.

    • @Mirabell97
      @Mirabell97 6 років тому +104

      David that‘s usually not taught in undergrad in the us? Wow, that surprises me - a biology student from Germany, where we have to take a class in statistics in our bachelors.
      It might be easier to understand it during your PhD if you heard about it before

    • @forgotaboutbre
      @forgotaboutbre 6 років тому +19

      As a graduate MS student in A.I., I found my research statistics course to be probably my most relevant in terms of learning to think properly as an individual with an advanced degree. I was very much taken by surprise by statistics, pleasantly so.

    • @forgotaboutbre
      @forgotaboutbre 6 років тому +17

      Mirabell97 I took an Engineering statistics class in undergrad in America. I've also taken graduate level research statistics as a Comp Sci student, which was taught at a much much higher and more relevant level.
      There are also high school statistics classes, which are even more watered down. So as you say, many have indeed heard about it before.

    • @Mirabell97
      @Mirabell97 6 років тому +3

      forgotaboutbre glad to hear that :)

  • @callumc9426
    @callumc9426 2 роки тому +402

    As someone who studies theoretical statistics and data science, this really resonates with me. I see students in other science disciplines such as psychology or biology taking a single, compulsory (and quite basic) statistics paper, who are then expected to undertake statistical analysis for all their research, without really knowing what they're doing. Statistics is so important, but can also be extremely deceiving, so to the untrained eye a good p-value = correct hypothesis, when in reality it's important to scrutinise all results. Despite it being so pertinent, statistics education in higher education and research is obviously lacking, but making it a more fundamental part of the scientific method would make research much more reliable and accurate.
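
    A back-of-the-envelope sketch of why "a good p-value = correct hypothesis" fails; the prior (10% of tested hypotheses true), power (80%) and alpha (5%) are assumed numbers for illustration only:

    prior_true = 0.10   # assumed fraction of tested hypotheses that are actually true
    power      = 0.80   # assumed chance a real effect is detected
    alpha      = 0.05   # false-positive rate when there is no effect

    true_pos  = prior_true * power
    false_pos = (1 - prior_true) * alpha
    ppv = true_pos / (true_pos + false_pos)
    print(f"P(hypothesis is true | p < 0.05) = {ppv:.2f}")   # about 0.64
    # Even with a "good" p-value, roughly a third of such findings are false
    # under these assumptions.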

    • @romanbucharist4708
      @romanbucharist4708 2 роки тому +12

      I barely passed my statistics class, and I'm in biology. Even now I fear that I might not be able to interpret my data.

    • @Guizambaldi
      @Guizambaldi 2 роки тому

      Observational studies in the social sciences and health sciences are mostly garbage. People who don't do experiments or RCTs need to study a hell lot of statistics to get things right. And only recently we got natural experiments to help us with good research design for those areas.
      Until my masters, I was relatively well trained in traditional statistics (I'm an economist) but unaware of natural experiments. I was completely disheartened by how awful my research was, given that different specifications in my observational studies were giving me different results. I only regained enthusiasm in a much better quality PhD program that taught me much better research designs.

    • @mihailmilev9909
      @mihailmilev9909 2 роки тому +1

      @@romanbucharist4708 dang. So how's it going now?

    • @mihailmilev9909
      @mihailmilev9909 2 роки тому +1

      Also happy soon new year guys! 2023 Greetings from Florida

    • @mihailmilev9909
      @mihailmilev9909 2 роки тому +1

      @@romanbucharist4708 Also happy soon new year guys! 2023 Greetings from Florida

  • @NurseKillam
    @NurseKillam 7 років тому +1864

    Interesting. I am adding this video to my research courses. My students don't always understand why we need to be critical of research.

    • @nosferatu5
      @nosferatu5 6 років тому +28

      Modern science is science. Just because you're emotionally upset over something in the news doesn't invalidate the scientific method.

    • @alittlelifeleft8232
      @alittlelifeleft8232 6 років тому +20

      J Thorsson you can't talk about the "lack of self critical thinking" after trying to say that you're smarter than someone else because your daddy is a manager at a science institute... you can't contract knowledge lol

    • @tibfulv
      @tibfulv 6 років тому +44

      + nosferatu5
      The scientific method is indeed unsurpassed. But what this video is about (though it doesn't say so) is a relative newcomer in academia, the NHST, and its use is based on anti-science. Does fitting evidence to theory sound like science to you? I hope not; it most certainly does not to me. But that and anecdotes is what a young academic told me was the norm in many of these fields. I'd order a retest or reinterpretation of every NHST study from 1940 onwards using the actual scientific method, complete with logic and falsifications, given these results. A failure rate of 64% is abysmal, yet predicted by Ioannidis.

    • @jmanfiji
      @jmanfiji 6 років тому +16

      Don't forget something Derek left out though. Re-sampling fixes a lot of these issues. Run a sampling test, including replications, to get your standard deviations and get a p-value less than 0.05 (or even better, less than 0.01 or 0.001). Then rerun the sampling tests multiple times to see if you can repeat the p-value. THEN (most importantly) report ALL your experimental runs with p-values. If even one out of (at least) three to five separate independent runs has a non-significant p-value, take the entire study with a huge pinch of salt. Most reputable journals nowadays insist on this - the peer reviewers worth anything will, at the very least.
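
      A rough sketch of the "rerun the whole sampling test and report every p-value" idea, assuming a made-up effect size and sample size and using a Welch t-test:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)

      def one_run(effect=0.5, n=40):
          """One independent run: collect fresh data, return its p-value."""
          treated = rng.normal(effect, 1.0, n)
          control = rng.normal(0.0, 1.0, n)
          return stats.ttest_ind(treated, control, equal_var=False).pvalue

      # Report ALL runs, not just the best-looking one.
      pvalues = [one_run() for _ in range(5)]
      print([round(p, 4) for p in pvalues])
      # If even one independent run comes back non-significant, treat the
      # overall claim with a large pinch of salt.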

    • @PP-yx4rv
      @PP-yx4rv 6 років тому +7

      Your students have the same "publication incentives" as those publishing these "findings".

  • @ColeJT
    @ColeJT 8 років тому +873

    An engineer with a masters in nuclear engineering, a mathematician with PhDs in both theoretical and applied mathematics, and a recent graduate with a bachelors in statistics are all applying for a job at a highly classified ballistics laboratory. Having even been given the opportunity to interview for the job meant that each candidate was amply qualified, so the interviewers ask each the simple question, "what's one third plus two thirds?"
    The engineer quickly, and quite smugly calls out, "ONE! How did you people get assigned to interview me!?"
    The mathematician's eyes get wide, and he takes a page of paper to prove to the interviewers that the answer is both .999... and one without saying a word.
    The statistician carefully looks around the room, locks the door, closes the blinds, cups his hands around his mouth, and whispers as quietly as he can, "what do you want it to be?"

    • @janshegers7667
      @janshegers7667 5 років тому +32

      That is a good one!

    • @42Lailoken
      @42Lailoken 5 років тому +36

      i thought the punchline was going to be 5/9

    • @ImperatorMo
      @ImperatorMo 5 років тому +11

      there is no bachelor in statistics -.-

    • @common_undead
      @common_undead 5 років тому +13

      @@ImperatorMo there is right??

    • @acetate909
      @acetate909 5 років тому +11

      @@ImperatorMo
      It's a joke. I can't tell if you're making a joke as well or trying to insert asinine information into his humorous comment.

  • @Christi3443
    @Christi3443 3 роки тому +554

    As a PhD student, I can fully agree with this. I have come to hate the word "novel". No matter how correct and in-depth an analysis is, anything that doesn't turn the world upside down is always gladly dismissed with "not novel (enough)" as a killer argument. By now I've decided for myself that I don't want to have anything more to do with the academic world after the PhD. I love research, but I HATE academic publishing.

    • @boblynch2802
      @boblynch2802 3 роки тому +8

      Consider starting your own publication journal?

    • @apolloandartemis4605
      @apolloandartemis4605 3 роки тому +3

      Is there any way to pursue a research or research-like career without the problematic issues of academia?

    • @branchcovidian2001
      @branchcovidian2001 3 роки тому +20

      Academic publishing is a nepotistic and simultaneously cannibalistic _industry._

    • @TheBartgry
      @TheBartgry 3 роки тому +7

      Nah Christian, just go work in industry or big company. Better money HA

    • @AUGUSTINEMINH
      @AUGUSTINEMINH 3 роки тому +14

      I know, right. I just did some academic research on the effects of eggs on cardiovascular risk and the findings were very confusing. It frustrated me when I found one new study that contradicted many previous studies which stated that more than 3 eggs a week cause health problems, and out of nowhere it received lots of media coverage. I am still not really sure which study is correct. Another time was when I recently studied wealth inequality and its relation to the pandemic. The research process is very interesting, but I can see many biases in the papers that I read (sorry for my English).

  • @noirekuroraigami2270
    @noirekuroraigami2270 Рік тому +114

    The problem is people are supposed to be able to replicate the results by doing the experiment over again. If I can't find multiple experiments of a study, it's hard for me not to be skeptical.

    • @88porpoise
      @88porpoise Рік тому +32

      The big problem with that is noted in this video, replicating work already done generally has no rewards.
      The time, money, and need to publish to advance their careers mean even the best intentioned researchers are likely to avoid redoing someone else's study.

    • @zy9662
      @zy9662 8 місяців тому

      Some experiments are very expensive in terms of time and money but We shouldn't worry too much about our money getting into publishing false positives. At the end of the day only the true positives will be the basis for further advancements, experimental science is built on previous results and if those results are spurious, nature will stop you from discovering further real relationships. That's what this video is failing to point out, the incremental nature of scientific knowledge in the natural sciences is a natural peer review system and the best that we can ever have hoped for. So keep funding science, at the end only the true relationships will stand the test of time

  • @Neo_to
    @Neo_to 3 роки тому +863

    I had so much trouble publishing when I corrected the p-values to counteract "p-hacking", or alpha inflation. Since I tested multiple variables, I adjusted the models to minimise false positives and, lo and behold, almost all hypotheses that would have shown p < 0.05 were no longer significant.
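
    A small sketch of the kind of correction being described, using a plain Bonferroni adjustment across several tested variables; the raw p-values are invented for illustration:

    # Raw p-values for several tested variables (made-up numbers).
    raw_p = [0.012, 0.034, 0.041, 0.048, 0.20]
    m = len(raw_p)

    # Bonferroni: multiply each p-value by the number of tests (capped at 1).
    adjusted = [min(p * m, 1.0) for p in raw_p]

    for p, p_adj in zip(raw_p, adjusted):
        verdict = "significant" if p_adj < 0.05 else "not significant"
        print(f"raw p = {p:.3f} -> adjusted p = {p_adj:.3f} ({verdict})")
    # Results that looked significant one at a time mostly stop being
    # significant once alpha inflation is accounted for.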

    • @aravindpallippara1577
      @aravindpallippara1577 3 роки тому +87

      Ugh, apparently negative results are so damn untouchable - the publication system really needs to change

    • @IndigoIndustrial
      @IndigoIndustrial 3 роки тому +28

      Stick all those negative findings in the supplementary figures of a somewhat related paper.

    • @Neo_to
      @Neo_to 3 роки тому +54

      ​@@aravindpallippara1577 Just imagine, it's not compulsory to adjust the p-values. It's not mandatory to counteract alpha inflation. How much of published research must be (intentionally or not) not significant, but published as such.

    • @ryaandnice
      @ryaandnice 3 роки тому +22

      But you kept your integrity.

    • @Sol-fu4nm
      @Sol-fu4nm 3 роки тому +44

      Imagine all the human time saved by being able to get information from someone else's research without having to do it yourself.
      Now imagine all the time lost by all the people who have to re-research what has already been done, but can't learn from it because negative results don't show up in books.

  • @Deupey445
    @Deupey445 3 роки тому +1843

    Gotta love when a published research article states that most published research findings are false

    • @cinegraphics
      @cinegraphics 3 роки тому +123

      Research has found that 73.2% of all statistics are made up.

    • @genepozniak
      @genepozniak 3 роки тому +5

      @@cinegraphics Rumor, not research, found that.

    • @genepozniak
      @genepozniak 3 роки тому +41

      It's a complete misunderstanding of how science research works. "Eureka" moments are rare. Instead, the truth is eked out little by little: many rounds of test, falsify, retest, improve until the truth is arrived at.

    • @4bidn1
      @4bidn1 3 роки тому +18

      @@genepozniak Can't tell if you're taking the piss or not........

    • @genepozniak
      @genepozniak 3 роки тому +5

      @@4bidn1 If I gather your meaning correctly, no, I'm deadly serious. Think about it. If published research was the crap he said it is, where is all this successful bio-tech coming from?

  • @DaveGarber1975
    @DaveGarber1975 3 роки тому +269

    It's definitely bad in medicine. John Ioannidis has conducted "meta-research" into the quality of medical research and concluded that most medical research is severely flawed---in fact, "80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials." Wow. There are also problems with dealing with complex systems, and with challenging scientific orthodoxy into which some scientists have invested their entire careers.

    • @theninja4137
      @theninja4137 2 роки тому +4

      As an engineering student specializing in med tech, I have the strong impression that med publications are less elaborate, lower quality, and contain less explanation than engineering ones.

    • @nmarbletoe8210
      @nmarbletoe8210 2 роки тому +6

      I'd say 25% wrong is really, really good for something as complex as medicine.

    • @BobKerns4111
      @BobKerns4111 2 роки тому

      I just *knew* John Ioannidis would come up here. If you want to see "mostly wrong", take a look at his record on COVID-19 predictions - off by two orders of magnitude.
      His sensationalized contrarian kick has gotten people killed. There are many better, more thoughtful critics of the state of research.
      I'm not saying he's always wrong. But he does go for the sensational, and is often sensationally wrong, and doggedly so.
      A lot of progress has been made, especially in medicine, with pre-registration of trials, data & code repositories, etc, and I'll give him credit for helping kick-start some of that. (Preprints seem to me to be a move simultaneously in the right and wrong directions!)
      But statements like "80% of non-randomized studies turn out to be wrong" isn't even well-defined enough to be falsifiable. It's a non-scientific statement. And meta-research, like meta-analysis, is itself extremely subject to selection bias. Each need to be approached with great care and skepticism.
      A lot of what he says is not controversial. I'm not here to demolish John Ioannidis, but to urge people to steer clear of his broad, sensationalized generalizations, and look carefully at the arguments he makes. Apply the same critical standards that he urges to his own research.
      Sometimes the kettle calls the pot black-but black is still black.

    • @MrJdsenior
      @MrJdsenior 2 роки тому +5

      There is also the problem that you can test one medication, to some degree, but if you start talking about interactions between different medications in different people, most of the bets are definitely off. People discount "anecdotal" data completely, but if that data comes from doctors reporting on those medications, it definitely has value, as well, IMHO.

    • @squirrelpatrick3670
      @squirrelpatrick3670 2 роки тому +1

      The vast majority of medical research is a shell game, run by pharma. You can tell little by study conclusions: you can actually tell something by the set of study parameters. Where study parameters will produce a unwanted conclusion, the research doesn't happen or isn't published. Example: no clinical or epidemiological evidence for the safety of aluminium adjuvants in vaccines. Draw your own conclusion

  • @-30h-work-week
    @-30h-work-week Рік тому +41

    Sabine Hossenfelder: "Most science websites just repeat press releases. The press releases are written by people who get paid to make their institution look good, and who for the most part don't understand the content of the paper. They're usually informed by the authors of the paper, but the authors have an interest in making their institution happy. The result is that almost all science headlines vastly exaggerate the novelty and relevance of the research they report on."

    • @fuelks
      @fuelks 7 місяців тому

      This is unrelated to what the video is talking about

    • @Ha-nz2vy
      @Ha-nz2vy 7 місяців тому +1

      ​@@fuelkslet me introduce you to the word "implication"

    • @epicchocolate1866
      @epicchocolate1866 7 місяців тому

      She’s a fraud, and a grifter, she says what she’s been paid to say

  • @darth0tator
    @darth0tator 6 років тому +845

    we should open up a journal for replication studies only

    • @mashotoshaku
      @mashotoshaku 5 років тому +50

      With full time staff who never take any compromised funding.

    • @warwolf6359
      @warwolf6359 4 роки тому +32

      Who's gonna pay for a journal no one reads?

    • @Krystalmyth
      @Krystalmyth 4 роки тому +49

      Call it "Well yes, but also No"

    • @daveyjones3016
      @daveyjones3016 4 роки тому +2

      darth tator definitely

    • @matthewvaughan8192
      @matthewvaughan8192 4 роки тому +36

      warwolf6 Anyone who believes in the importance of replication studies? It’s not like everyone who reads it will have read every study. Far from it. It could be multi-disciplined also, which could be interesting to learn about other sciences

  • @brianjonker510
    @brianjonker510 3 роки тому +173

    It should be one of the requirements for getting a Bachelors in a field of science to do a replication study. Even with small sample sizes.
    It is a useful experience and pattern of thinking to carry into adulthood.
    Furthermore, a meta-analysis using dozens or hundreds of further experiments would shake out the incorrect p-values.
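
    A minimal sketch of how pooling replications can shake out a fluke result, using simple inverse-variance (fixed-effect) weighting; the per-study estimates and standard errors are assumptions:

    import numpy as np

    # Assumed replication results: (effect estimate, standard error) per study.
    studies = [(0.45, 0.20), (0.05, 0.10), (0.10, 0.12), (-0.02, 0.15), (0.08, 0.09)]

    est = np.array([e for e, _ in studies])
    se  = np.array([s for _, s in studies])
    w   = 1.0 / se**2                      # inverse-variance weights

    pooled    = np.sum(w * est) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
    # The single flashy but imprecise result (0.45) gets down-weighted once
    # several better-powered replications are pooled.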

    • @user-fk8zw5js2p
      @user-fk8zw5js2p Рік тому +8

      And honestly, meta-analyses should be just as important to publish as novel work, since there is so much data out there and it has never been easier to analyze it quickly.

    • @Heyu7her3
      @Heyu7her3 10 місяців тому +2

      That's essentially what they do in a lab class

  • @glenmartin2437
    @glenmartin2437 3 роки тому +280

    Years ago, I questioned some chemistry methodologies. It was very frustrating, because nobody was listening. Then a publication came out discrediting the methods used and discrediting many journal articles. Somebody had listened, or came to the same conclusions I did. Corrections were made.

    • @aunieafifahabdulmutalib7410
      @aunieafifahabdulmutalib7410 2 роки тому +10

      TRUE., I HAVE THE SAME EXPERIENCE.

    • @MrJdsenior
      @MrJdsenior 2 роки тому +4

      Vindicated!

    • @whatisahandle221
      @whatisahandle221 2 роки тому +6

      When you say you questioned, did you establish a line of communication/collaboration with any of the authors or users of the method, working to test its limits, improve it, or compare it to other methods?

    • @voskresenie-
      @voskresenie- Рік тому

      What were the methodologies, and were any significant findings overturned / discredited as a result? Or did it only affect small findings, with larger findings still being correct (or considered correct) in spite of some methodological errors?

    • @yaroslavsobolev9514
      @yaroslavsobolev9514 Рік тому +13

      You should have used an approach developed in cybersecurity research long time ago for the same issue: notify the authors that in 3 months you are going to publish your findings about all their mistakes no matter what they do. Then the authors have 3 months to retract their papers on their own and/or correct them. This solution is called "responsible disclosure" of vulnerabilities. You see, in cybersecurity the problem of "nobody listens unless you publish" has been acknowledged a long time ago. You can do this anonymously as well: from my experience, scientists are not more ethical than average human, and when you threaten their career and self-image they quite often freak out and try to hurt you back by all imaginable means -- just as many normal humans would in such a situation.

  • @kunk8789
    @kunk8789 2 роки тому +42

    “p

  • @pouncebaratheon4178
    @pouncebaratheon4178 8 років тому +605

    P values of 0.05 are a joke.
    Look, I'm going to sound biased, and that's because I am.
    This is a much bigger problem in fields like Psychology than in fields like Physics. The emphasis on constant publication and on positive results is still a massive problem. Researcher bias is still a massive problem (although still, not as much as in Psych/Sociology). The existence of tenure helps a little since researchers become able to research whatever they want rather than what the system wants.
    But we aren't claiming world-changing discoveries with P=.05. Derek brushed right past this like he was afraid of sounding biased but I'll repeat: 5 sigma is a 1 in 3 million chance of getting a false positive purely by chance. Every physicist "knew" the Higgs had been discovered years before we finally announced it and started celebrating. But we still waited for 5 sigma.
    I did some research with one of my Psych professors in my freshman year. She was actually quite careful outside of the fact that her sample sizes were pathetic. We went to a convention where we saw several dozen researchers presenting the results of their studies, and it was the most masturbatory display I could have imagined. There were some decent scientists there, no doubt, but the *majority* of them were making claims too grandiose for their P-values and sample sizes, confusing correlation with causation, and most of all *failing to isolate variables.* If a freshman is noticing glaring problems in your research method, your research method sucks.
    The next year I had a Physics prof. who had a friend of mine and his grad students run an experiment 40,000 times. There is no comparison. We need a lot more rigor in the soft sciences than we have right now. Mostly because science. (But also because they're making us all look bad...)
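
    For reference, a quick sketch of where the 5-sigma figure comes from, using the one-sided standard normal tail (the particle-physics convention):

    import math

    def one_sided_tail(z):
        """P(Z > z) for a standard normal variable."""
        return 0.5 * math.erfc(z / math.sqrt(2))

    for sigma in (2, 3, 5):
        p = one_sided_tail(sigma)
        print(f"{sigma} sigma: p = {p:.2e} (about 1 in {1 / p:,.0f})")
    # 5 sigma works out to roughly 1 in 3.5 million - a very different bar
    # from p < 0.05.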

    • @Madsy9
      @Madsy9 8 років тому +69

      And there's also the problem that experiments might be difficult to perform in fields outside physics. It can be expensive and it requires a lot of planning and logistics. Not to mention that ethical dilemmas might stand in the way, which happens a lot in medicine. In a way, the physics field is blessed by not depending on studying people, and overall physics experiments are cheap; expensive particle accelerators notwithstanding.
      One thing I think Derek missed is to emphasize that one shouldn't be looking at single studies anyway. You look at multiple studies for trends and toss out the flawed ones and the outliers. Or even better, look for meta-studies.
      I'm also unsure if changing your model / what you measure *after* you have looked at the data is p-hacking. Such a mistake seems way more serious to me, as you're basically making your model fit a specific data set. Give me any data set and I can make a polynomial fit all the points. Basically, reusing the data after changing the model should be a crime :)
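
      A tiny sketch of the "any data set can be fitted perfectly" point: fitting a degree n-1 polynomial through n points of pure noise (the data here are random by construction):

      import numpy as np

      rng = np.random.default_rng(1)
      x = np.arange(6, dtype=float)
      y = rng.normal(size=6)               # pure noise, no real relationship

      # A degree n-1 polynomial passes exactly through n points.
      coeffs = np.polyfit(x, y, deg=len(x) - 1)
      fitted = np.polyval(coeffs, x)
      print("max residual:", float(np.max(np.abs(fitted - y))))   # essentially zero
      # A perfect fit to data you already have says nothing about whether
      # the model will predict anything new.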

    • @bangboom123
      @bangboom123 8 років тому +64

      On this note, simply because you mention my discipline (Psychology), I will point out that Psychology lacks any kind of unifying theory that organises the predictions it makes. It's a lot easier to be a physicist trying to confirm the predictions of Einstein and Newton than a psychologist guessing at what the underlying mechanics of the mind are.

    • @erikziak1249
      @erikziak1249 8 років тому +6

      We are all biased, but even the ones who admit it find it hard to fight it. I guess we can never win.

    • @themightyleek
      @themightyleek 8 років тому +35

      Another issue is that Sociology, Psychology and Economics are all black boxes that we don't know nearly enough about. In Physics, we can lower the temperature to close to absolute zero, and do the experiment in a vacuum. It is currently impossible to have that level of rigour in Sociology, Psychology and Economics. We still have a while to go.

    • @-Gorbi-
      @-Gorbi- 8 років тому +9

      I don't see why you need to single out psychology, even this video gives examples of neuroscience and physiology research holding even lower rates of reproducibility. When you look at the success of psychotherapy for individuals, you will find most people find it an indispensable resource in their lives, unlike the health tips or the vague claims about tiny brain regions coming out of neurology and physiology.

  • @Boahemaa
    @Boahemaa 3 роки тому +67

    "Data doesn't speak for itself, it must be interpreted"~ and there we have it people the point of this thesis.

  • @MrFritzthecatfish
    @MrFritzthecatfish 5 років тому +894

    Publish or perish ... and quality goes to the drains

    • @Thermalions
      @Thermalions 5 років тому +61

      So much research exists purely so someone can get their PhD, or bring funds into their University to keep themselves employed. When the pressure is on, no-one really cares whether the research is useful or even reliable - just got to fill the coffers and get your research published and referenced to drive up your University's rankings.

    • @UncleKennysPlace
      @UncleKennysPlace 5 років тому +10

      @@Thermalions Well, certainly, there are millions of theses out there, all required for the PhD. No way around that. Most of them are garbage.

    • @arvind31459
      @arvind31459 5 років тому +8

      @@UncleKennysPlace I would say around 95% of them are garbage

    • @dbmail545
      @dbmail545 5 років тому +8

      It has been estimated that it takes $5 million in funding to make a Ph.D in a STEM field. The research community has been corrupted from the base.

    • @Metrion77
      @Metrion77 5 років тому +2

      money is the root of all evil

  • @jakebayer3497
    @jakebayer3497 Рік тому +63

    I wanted to thank you for speaking up on this issue. The state of science today is a travesty and I'm glad to finally hear someone acknowledge this, as I have been alone in the dark with these troubles for far too long. I know we are creating the foundation of something great, but the fact that the current state of science is not something we can rely on is simply not said or acknowledged. I'm so happy and so grateful that you have spoken about this issue and brought it to the public's attention. Thank you for your work and congratulations.

    • @HyperVectra
      @HyperVectra Рік тому +2

      The model we have is great; the problem is anything can be hacked if that is your goal. If you build a better mousetrap, nature will build a better mouse. The problem is the incentive. There's not enough money to go around for your own research and tenure is disappearing, so to do the work you want to do you need either to (a) be well known and respected in your field, OR (b) take funds to do work you don't want to do so you can do the work you do want to.

    • @zy9662
      @zy9662 8 місяців тому

      We shouldn't worry too much about our money getting into publishing false positives. At the end of the day only the true positives will be the basis for further advancements, experimental science is built on previous results and if those results are spurious, nature will stop you from discovering further real relationships. That's what this video is failing to point out, the incremental nature of scientific knowledge in the natural sciences is a natural peer review system and the best that we can ever have hoped for. So keep funding science, at the end only the true relationships will stand the test of time

  • @kcwidman
    @kcwidman 8 років тому +1865

    I feel like everyone in the world needs to watch this video. There's so much crap out there and no one ever thinks past what they want to hear. This should help.
    This should be a Ted Ed

    • @AlexKnauth
      @AlexKnauth 8 років тому +2

      +

    • @oM477o
      @oM477o 8 років тому +45

      Do you love science and all its complexity but wish it could be a little less complex, and a lot less scientific?
      Introducing TODD Talks...

    • @fatsquirrel75
      @fatsquirrel75 8 років тому +33

      You're right, people often only hear what they want, so this video would likely make that even worse. It gives people ammunition to discredit others with an informed view. People are going to see that if this is the result from honest science, then what happens to paid and biased science.
      To a wider audience I think this video would likely do a lot more harm than good.

    • @fatsquirrel75
      @fatsquirrel75 8 років тому +10

      For me, if there was a video I'd like everyone to watch, it'd be one purely on the benefits of science. The last thing we need to throw out to the general public is something that might look at first glance to highlight its flaws.

    • @EvilNeonETC
      @EvilNeonETC 8 років тому +2

      Oh I "Heard" that if you eat butter, you'll be healthier than those who don't eat butter. Therefore it is correct. /sarcasm

  • @briancreech9990
    @briancreech9990 3 роки тому +852

    This seems more like a problem with the publishing system than with the scientific method.

    • @nosson77
      @nosson77 3 роки тому +13

      Yes, you are right, but the bottom line is that any new scientific theory is completely unreliable, since there is no other way to do science today other than the peer review method.

    • @davidwebb2318
      @davidwebb2318 3 роки тому +111

      I think the problem is actually quite deeply embedded in academic research. Right from the selection of which projects get grant funding and resources onwards there is bias to show the result that the department head wants to be true. Their career, prestige and income relies on this. The careers, prestige and income of every person in every academic research department relies on only ever finding 'convenient' results.

    • @harrycooper5231
      @harrycooper5231 3 роки тому +2

      It's human nature. That's what publishing is all about, exposing the study to other scientists, and seeing if it survives.

    • @blitzofchaosgaming6737
      @blitzofchaosgaming6737 3 роки тому +19

      Publishing is about making money, so they have the exact same problem as scientists do. Go for money or go for truth. Since the publications wouldn't exist without money, they are making the only choice they can.

    • @viktorvondoom9119
      @viktorvondoom9119 3 роки тому +2

      ​@@blitzofchaosgaming6737 Publishers earn (among other ways) money by selling subscriptions. They could publish anything

  • @Talik13
    @Talik13 8 років тому +254

    I'd like to point out, as he hints at near the end, that the underlying reason for many of these "p-hacked" studies is human nature and not the scientific process itself. Stopping data collection at whatever sample size you find convenient, counter-studies not getting published, people only being interested in unique findings; these are all human failings. A manipulation of the scientific method.
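
    A small simulation sketch of the "stop collecting data whenever it looks convenient" problem: there is no real effect at all, yet peeking after every batch and stopping at the first p < 0.05 inflates the false-positive rate well past 5%. The batch size and run count are arbitrary choices:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    def peeking_experiment(max_n=100, batch=10):
        """Collect data with NO real effect, test after every batch,
        and stop as soon as p < 0.05 (the convenient-stopping strategy)."""
        a, b = [], []
        for _ in range(max_n // batch):
            a.extend(rng.normal(0.0, 1.0, batch))
            b.extend(rng.normal(0.0, 1.0, batch))
            if stats.ttest_ind(a, b).pvalue < 0.05:
                return True       # "significant" result found, stop and publish
        return False

    runs = 2000
    hits = sum(peeking_experiment() for _ in range(runs))
    print(f"false-positive rate with peeking: {hits / runs:.1%}")   # well above 5%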

    • @d4n4nable
      @d4n4nable 6 років тому +15

      There's no "the scientific method." That's a complete myth. You should read "Against Method" by Feyerabend. Even if he goes overboard in his argument (which I wouldn't necessarily agree he does), it's naive to think of a defined, precise method in which science *is* done, or *ought to be* done. It's really "anything goes," as long as you convince your peers. Hopefully truth is convincing.

    • @neurofiedyamato8763
      @neurofiedyamato8763 6 років тому +19

      @@d4n4nable That's wrong. There are no set methods, but there is a general guideline for seeking the truth. As OP said, if it weren't for bias in publishing, the system would work fine. The scientific method is more a way of thinking and a general guideline for how truth can be determined.

    • @d4n4nable
      @d4n4nable 6 років тому +7

      @@neurofiedyamato8763 You act as if epistemology were solved. There's no consensus as to how to get to "truth." There are various methodologies implemented in various fields of research.

    • @arhamshahid5015
      @arhamshahid5015 5 років тому +6

      @@d4n4nable I can just smell the narcissm from across the screen .

    • @d4n4nable
      @d4n4nable 5 років тому +5

      @@arhamshahid5015 Narcissm? Why? Because I'm pointing to a classic contribution to the philosophy of science? It's not that I wrote it. I just read it, like thousands of others. How in the world is that narcissistic?

  • @LincolnDWard
    @LincolnDWard Рік тому +22

    Science isn't the initial idea, it's the dozens of people who come along and test the idea afterwards

    • @zy9662
      @zy9662 8 місяців тому

      Agree. We shouldn't worry too much about our money getting into publishing false positives. At the end of the day only the true positives will be the basis for further advancements, experimental science is built on previous results and if those results are spurious, nature will stop you from discovering further real relationships. That's what this video is failing to point out, the incremental nature of scientific knowledge in the natural sciences is a natural peer review system and the best that we can ever have hoped for. So keep funding science, at the end only the true relationships will stand the test of time

  • @samsonlovesyou
    @samsonlovesyou 6 років тому +239

    Outstanding video. It wasn't until I really started getting into research at MSc level that I began to realise so much of the research I was appraising was deeply flawed. At undergrad, I assumed that it was ME who was flawed every time I saw a glaring error. At that level, you don't have the confidence to criticise the work of experienced researchers.

    • @Blirre
      @Blirre 5 років тому +31

      We had to write a literature review on a chosen subject for our B.Sc. I read through dozens of articles on my subject and to my horror I realized that the results weren't in line at all. It seemed that some scientists had worked with rats and some with mice and they got different results. Still, many sources quoted each other regardless. It was difficult to piece through that mess and know who to trust.

    • @alessiodebonis2710
      @alessiodebonis2710 5 років тому +9

      Industry influence is everywhere, unfortunately. Climate science is an example of that. It's sad because you grow up learning to trust others. Now it seems so confused that we are starting to rely on religion, faith, myths, and so on. In Italy the misinformation campaign is tragic 😷

    • @kevinbyrne4538
      @kevinbyrne4538 5 років тому +30

      An undergraduate whom I knew, spent months trying to replicate a chemical synthesis that had been published in a journal. He failed repeatedly. Finally he contacted the authors. They told him that there was a typographical error in the article: the concentration of one chemical was listed as being 10 times higher than it was supposed to be. With that correction, his synthesis worked on the first attempt.

    • @ephemera...
      @ephemera... 5 років тому

      Kevin Byrne Does this mean that scientific journals don’t publish errata?

    • @kevinbyrne4538
      @kevinbyrne4538 5 років тому +1

      @@ephemera... -- The errata often don't appear until months after the original article. And the errata are often buried. It would also be helpful if authors checked the galleys.

  • @JavierBacon
    @JavierBacon 8 років тому +387

    The lack of incentives for replication studies is obviously the biggest problem. The fact that some of those "landmark" studies were only attempted again recently...
    Hopefully, as people become more aware of this (it's happening), all those journals will change their mind about replications. They should release a separate issue for them, even.

    • @jarrethcutestory
      @jarrethcutestory 8 років тому +6

      Agree. At some stage we will almost need to press "reset" and start again.

    • @Khanryu
      @Khanryu 8 років тому +5

      Yea, especially since almost every article in their conclusion implies that "further research in the area is needed" :p

    • @JavierBacon
      @JavierBacon 8 років тому

      ***** Significant or not... It's always significant in some way

    • @GOBIAS.INDUSTRIES.
      @GOBIAS.INDUSTRIES. 8 років тому +3

      +JavierBacon I get what you're saying and agree. Even though your testing/conclusions don't have statistical significance, the findings are still significant. In most cases, it would still help increase our understanding of a subject if null results were published.

    • @Bourinos02
      @Bourinos02 8 років тому +1

      The best way to start is to get rid of journals telling us what is worth publishing and what isn't. Then kill the h-index/impact-factor that are genuine SHITS. Then put everything in open access, the universities have all the infrastructure necessary and could even save millions $ in subscription fees that are frankly incredibly stupid to begin with...

  • @SpruceOaks
    @SpruceOaks 3 роки тому +344

    Short answer: yes. That was a real wake-up call when I was doing my Masters degree literature review - how often university professors push publications using "academic standard" statistical analysis to come to a demonstrably wrong conclusion. It is scary, not only how often this was the case, but how often these studies would be cited and their misinformation spread through academic circles without question.

    • @davidwebb2318
      @davidwebb2318 3 роки тому +27

      Most academics doing the research are young and inexperienced in the real world. The people managing the research departments have a vested interest in only promoting research that finds 'convenient' results that will enhance their chance of getting bigger budgets next year.
      Maybe we should take people with 30 years of industry experience and put them in charge of research in academic institutions.....

    • @haraldtopfer5732
      @haraldtopfer5732 3 роки тому +16

      @@davidwebb2318 Unfortunately true. If, as a young scientist, you talk to the head of your lab or department about your work, your ideals, or your idea of good science, you will quickly be set straight. You don't know anything! No, you really don't know what is important in science. What you know even less about is what "good work" is and what is expected of you. The most important thing is neither "good science" nor a prestigious publication. At the very top of the hierarchy is an accepted grant proposal! No funding, no research. All other output must be directed towards this goal and is just a means to an end. The larger the organisation (Pareto principle), the greater the pressure to meet this requirement. Exceptions exist.

    • @davidwebb2318
      @davidwebb2318 3 роки тому +22

      @@haraldtopfer5732 I agree. Academia has become a big industry with big careers to support. The priority of the people heading up departments is to build bigger empires, secure bigger budgets and increase their personal exposure/status. This secures their jobs and the jobs of their colleagues/friends. That trumps everything else in many cases.
      It is really obvious in the climate change industry where nobody ever proposes or approves any budget for spending on anything that doesn't support the pre-existing narrative. They carefully choose and support only work that adds weight to the doom stories because this expands the 'importance' of their industry. Their future careers and their salary depends on doing it so they embrace it and steer all the research in one direction. The system is really flawed and has created a monster where half the world are intent on economic suicide to cure a problem that is relatively minor and will only have any impact over generations.

    • @aravindpallippara1577
      @aravindpallippara1577 3 роки тому +20

      @@davidwebb2318 Well, the thing is that virtually every study disputing climate change is usually very well funded itself. There is a vested interest among the folks with resources to push that narrative too, since they have profits to lose. Not to mention these studies also have to perform pretty big mental gymnastics as the evidence keeps mounting.
      Money does make the world go around after all.
      Wouldn't you agree?

    • @davidwebb2318
      @davidwebb2318 3 роки тому +8

      @@aravindpallippara1577 No, I wouldn't agree. The climate change industry is mostly based on an emotional sales pitch pushed by celebrities and political activists who haven't got the first clue about the actual data concerning the climate.
      This is obvious because the main activists are pushing the idea that humans will be extinct in under 10 years. Politicians who are too weak-minded to work out this is complete lunacy have simply demonstrated their lack of intellectual horsepower by going along with it.
      Money does not make the world go round. It is just a convenient method of exchange used to buy and sell goods and services. Of course, the political activists that are using the climate change narrative to promote their political agenda will try to persuade you that money is evil (or that only evil people have money so they should take it and give it to people they consider more worthy).

  • @FreeWaves9
    @FreeWaves9 Рік тому +6

    There is pressure to publish significant results. As a research assistant, I know for a fact my professors engage in this. I was preparing the data I had collected on a crop, and somehow the paper was published a week after I finished... it didn't make sense

    • @terrencedent3071
      @terrencedent3071 9 місяців тому

      Definitely doesn't make sense, as the peer review process alone takes months. Could it be that you were reproducing some past experiments, or gathering the same data to be used in a future publication?

  • @SuperfluousIndividual
    @SuperfluousIndividual 4 роки тому +455

    As a researcher, I find those numbers very conservative, even when I'm 4 years late to the video.
    I also feel like there's a reason missing from the false-positive results category, which is deviation from the main objective. Some true-positive results shouldn't be considered as such when you make an in-detail analysis of their methods, statistics and final findings, for the simple reason that, mid-study, some parts of the objective were changed to accommodate the findings. This is also an issue that pisses me off, especially in my research field, where there's such a huge mix of different scientific areas that it's next to impossible to verify anything at all in detail because everyone just pulls the results their way.
    As some people already mentioned here, some authors do withhold critical pieces of information for citation boosts. If people can't reproduce something from a study, the authors can neither be proved wrong by the paper's information alone (as long as it checks out in theory) nor be denied authorships and citations from other papers, which effectively boosts their 'worth'. The fact that researchers are evaluated using citation/authorship numbers is also one of the leading reasons why false positives exist in such large numbers (I don't believe false positives are only ~30% for a damn second, but this is my biased opinion) and why some papers, even though everything checks out in theory, can never be truly peer-reviewed on the practical-results side of things.
    Anyone who works in research knows there's a lot of... misbehaving in most published works, regardless of the results. Therefore I have to disagree with the claim that researchers are fixing some of the problems. It's not that we don't want to fix them; the system itself, as it stands, is essentially rigged.
    We can sift through p-hacked results. We can't, however, sift through p-hacked results if the objective is mismatched with the reported findings (if someone told me that was involuntary, I'd believe them, because I know how easy it is to deviate from it), nor through a paper which withholds critical information. And the worst part is that this is further fueled by higher-degree theses such as master's or PhDs, where it's mandatory to cite other people for the work to be 'accepted' as 'valid'.
    You have to approach published works with a very high level of cynicism, and with some time and patience on your hands, if you're even dreaming of finding a published work that remotely fits your needs and actually shows a positive result in most scientific areas.

    • @lordspongebobofhousesquare1616
      @lordspongebobofhousesquare1616 4 роки тому +27

      I hope someday a scientist gets very rich and decides to devote his/her money and time in creating a healthier scientific publishing environment.

    • @pattygould8240
      @pattygould8240 3 роки тому +25

      When I finished my undergrad, I worked compiling a database for a retired professor. One day he asked me to find an article that had been recommended by one of his peers during review. He already had the author and subject so it was pretty easy to find and got me a nod in the paper for my invaluable research assistance. The paper was on how long bones had been drawn incorrectly in every medical text forever. Someone had drawn it incorrectly once and everyone had copied the original mistake.

    • @ejipuh
      @ejipuh 3 роки тому +4

      @@pattygould8240 What happened with the paper? Is it available?

    • @pattygould8240
      @pattygould8240 3 роки тому +6

      @@ejipuh I have a copy that he gave me when it was published but it's packed away somewhere and I frankly don't remember what journal it was published in. I worked for him summer and fall 2004 so that's when it was published.

    • @pattygould8240
      @pattygould8240 3 роки тому +6

      @Luís Andrade doctors have been learning from those textbooks for over a century; the mistake in the drawing didn't have an impact, or someone would have pointed it out sooner. It took a scientist studying bones to point out the error.

  • @Qba86
    @Qba86 3 роки тому +495

    Well, as a wise man once said, "Some people use statistics like a drunk would use a streetlamp -- not for illumination but for support".
    That being said, the most frustrating bit is that the journals and financing agencies actively encourage p-hacking and discourage replicating dubious studies.

    • @PrzemyslawDolata
      @PrzemyslawDolata 3 роки тому +3

      I'm stealing this quote, it's amazing.

    • @MrJdsenior
      @MrJdsenior 2 роки тому +9

      There is nothing wrong with using statistics for support, as long as they are accurate and honest, AND you don't cherry pick them. That last part is often the biggest problem. I don't think pharma changes numbers any more, but they most definitely fund several studies and pick and choose what they want from each. That is not research, that is advertising. It's also changed a bit for them now that they have to have conclusions that at least have SOMETHING to do with the data collected. There was no requirement for that before, as I understand it.

    • @whatisahandle221
      @whatisahandle221 2 роки тому +7

      The point about discouraging replication of dubious (or any) studies is important. There just aren't incentives to duplicate or refute someone else's findings, only to come up with something "original".
      On a similar note, as an engineer who frequently volunteers at elementary through high school science fair judging, I'm constantly dismayed at the emphasis that other judges (both somewhat "lay" and professional STEM judges) place on "originality"... at the elementary and middle school level, even, not to mention the high school level! (Ok: maybe a district or regional winner at HS needs to be decently original, but...) Many people place originality and presentation skills (not to be entirely discounted, of course, but still not #1) above scientific inquiry, larger data trials, strict controls, and even just a good, solid use of the basic fundamentals of an experiment as taught in elementary science class.

    • @Qba86
      @Qba86 2 роки тому +1

      @@whatisahandle221 I believe that in experimental physics it is customary to publish independent replications of breakthrough studies in comparatively high-impact journals (as well as to cite replication studies along with the original ones in future papers). Sadly this is more of an exception that proves the rule.
      In life sciences on the other hand there are so many subfields and so much competition, that far too many "original" yet shoddy papers (methodologically speaking) get published. My subjective impression is that this problem is slightly smaller in niche and/or "old-fashioned" subfields, where the odds of getting a reviewer who knows all the ins and outs of the topic are relatively high.

    • @stevenr5149
      @stevenr5149 Рік тому

      ​@@MrJdsenior They still do-and still do so much more. It is what it is.

  • @MetalMachine131
    @MetalMachine131 3 роки тому +442

    The real problem here is the journals. They have established themselves as the primary way of publishing. There are other ways, but in the end, the journals get you recognition and jobs.
    That results in many studies being done with the intent of publishing. Scientists can't be blamed for that. After all, they not only do the research but also have to constantly beg for money.
    The actual goal of obtaining information gets lost along the way.

    • @IndigoIndustrial
      @IndigoIndustrial 3 роки тому +10

      Exactly. One high-impact publication can set up a career, and leads to 'light-touch' peer review at other good journals, soft authorships on colleague's papers and requests to be co-investigators on other people's grants. More publications leads to more funding. Even as a Co-I that doesn't actually get money from a grant, you have demonstrated 'grant funding' success. The incentives to join that group are high.

    • @FlyingPastilla
      @FlyingPastilla 3 роки тому +4

      It seems even more absurd to still have these gatekeepers publishing a limited number of papers when we live in the era of long tail economics

    • @AngryReptileKeeper
      @AngryReptileKeeper 3 роки тому +7

      Researchers also have to pay the journals to publish their work, who in turn often charge you to read them.

    • @newagain9964
      @newagain9964 3 роки тому +2

      Like all other systems and institutions, scholarly research and the academy is a game, with numerous irrational inputs and agents in pursuit of self serving interests.

    • @mohdhazwan9578
      @mohdhazwan9578 3 роки тому

      Indeed. I lost interest in pursuing a PhD because of this.

  • @Goldcrusty
    @Goldcrusty 2 роки тому +16

    Thanks for making these videos; they are such an eye-opener for me. I never thought this would be an issue at all, but now I understand.

  • @hunterterrell9930
    @hunterterrell9930 4 роки тому +97

    This is the kind of material UA-cam needs more of

    • @rodrigo-vl7bi
      @rodrigo-vl7bi 3 роки тому +1

      It's extremely ironic, but UA-cam encourages other types of content, just like journalism encourages a certain kind of result in science

  • @erichoceans
    @erichoceans 4 роки тому +481

    Why would anyone give this a thumbs down?
    Spent most of my life in research, painful yet true....

    • @jamese9283
      @jamese9283 4 роки тому +24

      Ignore most of the thumbs down. 10-year-olds and trolls will down-vote a good video just to agitate people. It doesn't mean anything.

    • @astrobiojoe7283
      @astrobiojoe7283 3 роки тому +6

      Life gave them a thumb down. Ignore 😂

    • @charanckck
      @charanckck 3 роки тому +4

      Reason 1: someone worked hard only to add to the number of papers published, not their quality, and some other person points out a mistake in those papers.

    • @silvervirio3642
      @silvervirio3642 3 роки тому +1

      Why not? Maybe they don't know about the "don't recommend" function, so they thought a thumbs down on this video would stop similar videos from being featured on their homepage.

    • @neofromthewarnerbrothersic145
      @neofromthewarnerbrothersic145 3 роки тому +5

      9 times out of 10, the answer to this question is BOTS. They have to like/dislike videos at random to try and fool the algorithm. That's all it is. I'm so tired of seeing "how could anyone dislike this GREAT video??" IT. IS. BOTS.

  • @AuliaAF
    @AuliaAF 3 роки тому +108

    In Indonesia, many supervisors in medicine would reject replication studies, expecting new studies and publications instead, and as a result we have nearly zero epidemiological data. We prefer "good-looking research" to actually researching anything. Better not to research at all than not to look good

    • @Szszymon14
      @Szszymon14 3 роки тому +1

      Your supervisors are speaking the language of gods

  • @robrobason
    @robrobason Рік тому +26

    Thanks for the analytical look at this topic. It seems timely with the recent resignation at Stanford University. It reminds me of a former colleague who shared the quip "publish or perish." In today's political world, the phrase "follow the science" is frequently and ignorantly applied, I'm glad to see science influencers such as yourself shedding light on this topic.

  • @johndriscoll7803
    @johndriscoll7803 3 роки тому +816

    “Science is the interpretation of data. The data is usually crap.”
    Liam Scheff, science journalist and author

    • @siddharthrawat7205
      @siddharthrawat7205 3 роки тому +2

      Ever heard of data wranglers?

    • @lixloon
      @lixloon 3 роки тому +8

      Science journalist and author and he doesn't know that "data" is a plural noun? FYI, "datum" is the singular.

    • @rickross9829
      @rickross9829 3 роки тому +65

      @@lixloon Why exactly did you assume he is talking about one point of datum? It's the less logical explanation. I'll just assume you're a moron who wanted to let the world know something that makes you feel smart.

    • @kholofelolebepe9637
      @kholofelolebepe9637 3 роки тому

      Sugasphere and the Lancet concur

    • @aphroditesaphrodisiac3272
      @aphroditesaphrodisiac3272 3 роки тому +37

      @@lixloon data is grammatically correct. It's not possible to interpret a single datum.

  • @polarisgemini52
    @polarisgemini52 4 роки тому +757

    When I first came across this problem, I wanted to become a scientist who simply redoes old experiments. I am still very far away from becoming a scientist, but I hope this becomes a legitimate job: having a subset of scientists who simply redo the experiments with little or no tweaking.

    • @galanoth17
      @galanoth17 4 роки тому +147

      Problem is who will pay you for it.

    • @CuriouslyCute
      @CuriouslyCute 4 роки тому +37

      We need this. Can someone start an organization that does this? Not me, I have another thing to start. :P
      Also, there are AIs that analyze experimental data regardless of the human conclusion. I think those are pretty helpful in sorting out truth from falsehood.

    • @KT-pv3kl
      @KT-pv3kl 4 роки тому +81

      there is almost ZERO funding for this important task. More money is spent each year to study the mating behaviour of saltwater mud worms. I'm not even kidding...

    • @xXWorldgamefunXx
      @xXWorldgamefunXx 4 роки тому +9

      "scientist" What does that even mean?
      You have to study a certain field and then you can get a job at a university where they'll pay you for your research.

    • @JacobRy
      @JacobRy 4 роки тому +73

      @LazicStefan If you're talking about climate change, it's real and the effects are observable outside of papers

  • @carnafillian113
    @carnafillian113 7 років тому +230

    This reminds me of trying, in college, to find trends in data by any means possible just to reach a conclusion that would result in a good research grade.
    I think when your motivation becomes solely about money or grades (or whatever other comparable unit you might think of), you lose sight of the actual purpose behind what you're doing. In my case, because of my fear of getting a bad grade, I twisted the research process to show results that would impress my teacher, but which ultimately were false and useless. This video made me realize how many systems (in education, business, science) are actually structured for their participants to waste their time pursuing arbitrary goals rather than the ones that are actually valuable. If we could make it so that a thorough and honest process was rewarded just as well as one with a flashy result, then we would have a lot more true value being generated by these systems.
    This has been on my mind in school recently so I'm really curious to hear what others think if anyone wants to reply. Great video!

    • @Danskadreng
      @Danskadreng 7 років тому +1

      Hey, can you tell me a little about this research process?
      And how the current systems are a waste of time in education, business, and science?
      I am very interested in hearing what you think could be more valueable :)

    • @forgotaboutbre
      @forgotaboutbre 6 років тому +4

      Um, if you have no motivation or imagination for your research, you probably shouldn't be doing it. I would recommend you not blame your professor or department, but rather look at yourself and ask yourself why you dont like what you are doing.
      I am currently in a research field I am passionate about and I dont have to bend over backward to get results because I come up with imaginative solutions every single day.

    • @Jake12220
      @Jake12220 6 років тому +21

      His point was simple: the rewards for being able to show the desired results are better than for getting less desirable results within these systems. In all these fields, if you can show the desired result (regardless of whether the results are valid) then you get better rewarded, be it grades, promotions, bonuses or publication.
      While in most scientific fields most errors would likely be accidental bias, in areas like testing diet supplements or doing studies funded by corporations these are well-known deliberate issues.
      Unfortunately most people have a very poor grasp of statistics, and for that matter the scientific process, so it's all too easy to make a lot of people believe false data.
      We really do need to improve the systems at all levels. There are currently moves to make things better, but we will continue to have these problems for a very long time, especially with journals not publishing studies that show other studies to be wrong and publishing studies that didn't pre-submit their methods of evaluation before commencing.

    • @forgotaboutbre
      @forgotaboutbre 6 років тому

      Okay, I have both locked down pretty well

    • @Cyberspine
      @Cyberspine 5 років тому +3

      I think that unproductive incentives are a common theme in every problem with society.

  • @davidmackie3497
    @davidmackie3497 9 місяців тому +1

    This 12 minutes should be mandatory viewing for every course that touches the slightest bit on any kind of science, engineering, statistics, political science, or journalism. Starting in junior high school.

  • @Iuwl
    @Iuwl 5 років тому +182

    Each and every time I see some article that says "According to studies by scientists...", I read it with skepticism.

    • @chriszeeman5647
      @chriszeeman5647 4 роки тому +42

      Good! Always read with skepticism. That only benefits science.

    • @alvinlepik5265
      @alvinlepik5265 4 роки тому +5

      Yes, that's the point ;)

    • @AndrewDRoyappa
      @AndrewDRoyappa 4 роки тому +21

      which means how much more skeptical we should be of everything else, "alternative news" sites, alternative medicine, health blogs, mom blogs, etc etc...

    • @ScientificReview
      @ScientificReview 4 роки тому +1

      Read with skepticism and report them to the authorities!

    • @sarenareth689
      @sarenareth689 4 роки тому +6

      And then to think that scientists are supposed to produce more truth than anyone else... you need to question everything and everyone around you

  • @atomicnolxix
    @atomicnolxix 3 роки тому +80

    When the contestants found out one of the walls would contain an erotic image, they enabled their inner chakras to get it right

  • @6iaZkMagW7EFs
    @6iaZkMagW7EFs 7 років тому +89

    Anyone who reads articles online about "new research" needs to watch this

    • @ps3master72
      @ps3master72 7 років тому +4

      or people who hear science quoted (sometimes incorrectly) by Today Show, Dr. Oz, even Time Magazine etc.

  • @elgoog-the-third
    @elgoog-the-third 2 роки тому +2

    I love your background music. It's so early-2000s-techy in the best possible way

  • @12magic
    @12magic 8 років тому +116

    One of the big problems is the big media who search for those crappy headlines: 1 chocolate bar a day or a cup of whine a day.
    The media search for those because they make good clickbait, and they will even distort the scientific research, sometimes saying "increases the chances of x" instead of "increases the chances of x by 0.01%".
    Just the way they word it makes it sound bigger than it is

    • @megumaniac
      @megumaniac 8 років тому +7

      Stop whining man.

    • @12magic
      @12magic 8 років тому +2

      +Marvin Y but it's a fact

    • @megumaniac
      @megumaniac 8 років тому +23

      +Cédric Raymond no, I'm just making a joke because you spelt wine wrong

    • @amcghie7
      @amcghie7 8 років тому +4

      But then could you not say it's the fault of human psychology, that people are drawn to unusual things, making it inevitable that the media jump on this crap? I am not saying the media are not to blame or that publishing that stuff isn't irresponsible, but I do think everyone should take everything the mainstream media publishes about science with a large pinch of salt.

    • @MrBrew4321
      @MrBrew4321 8 років тому +2

      I just hope that most people are inherently skeptical. When I hear something stupid like "chocolate makes you skinny" my reaction is "bull pucky"

  • @SustainableHuman
    @SustainableHuman 8 років тому +363

    I'm curious about the comment you made at the end that "as flawed as our science may be, it is far and away more reliable than any other way of knowing that we have."
    I'd love to see a video on:
    1) What are the "other ways of knowing that we have?"
    2) A critical evaluation on why science is better than those "other ways of knowing"
    ~ A loyal fan

    • @alveolate
      @alveolate 8 років тому +34

      well, there's using logical deduction to eliminate improbable causes.

    • @SkizzlePiano
      @SkizzlePiano 8 років тому +7

      have you ever heard of IB theory of knowledge? These are exactly the type of questions we discussed in class in high school, it really opens your mind

    • @SustainableHuman
      @SustainableHuman 8 років тому

      That's the scientific way of knowing, isn't it?

    • @SustainableHuman
      @SustainableHuman 8 років тому +2

      What does "IB" stand for?

    • @Krashoan
      @Krashoan 8 років тому +3

      International Baccalaureate

  • @bhp1719
    @bhp1719 8 років тому +357

    I've been a world-class AI researcher for almost three decades now. I have personally, during this time, witnessed much deliberate scientific fraud, including rigged demos, fake results, and outright lies. Additionally, numerous colleagues have admitted to committing scientific fraud, and I've even been ordered to do so myself. I have always refused. I will not, as a scientist, report results I know or suspect to be misleading. My family and I have been severely punished for this. So I recently returned to mathematics, where true and false still seem to reign. And lo and behold, instead of abusive rejection letters, written on non-scientific grounds, I get best-paper nominations. PS: don't believe any of the current hype around AI.

    • @carolalvarez3728
      @carolalvarez3728 7 років тому +26

      That's terrible; there are many stories like this that keep popping up. Stay strong, this crap will change soon.

    • @DdotTindall
      @DdotTindall 6 років тому +2

      Could we talk? I'd love to hear your thoughts on this, Christer

    • @Mirabell97
      @Mirabell97 6 років тому +25

      Christer Samuelsson why would I believe this?

    • @forgotaboutbre
      @forgotaboutbre 6 років тому +30

      Dang man, I think you got out of A.I. at the wrong time lol. People don't have to fudge their results anymore because the results are real and improving every day now.

    • @jacobbellamy7640
      @jacobbellamy7640 6 років тому +2

      Let me guess- it was natural language processing wasn't it?

  • @YossiSirote
    @YossiSirote 7 місяців тому +1

    One of your best. I go back and watch this one every once in a while.

  • @-syphec-3600
    @-syphec-3600 4 роки тому +182

    I'm taking a science research class and this is literally what I was thinking about with like 90% of my peer's projects.

    • @juantelle1
      @juantelle1 4 роки тому +3

      same.

    • @allenholloway5109
      @allenholloway5109 3 роки тому +12

      Even taking a basic science lab course that requires you to write up papers based on your "experiment," you run into this constantly. And even knowing this myself while taking that course, I found it hard to not let my own biases affect how I conducted the experiments. In small ways, but those small ways add up significantly.

    • @astrobiojoe7283
      @astrobiojoe7283 3 роки тому +9

      @@allenholloway5109 Experienced the same! The reinforcement of bias is so joyful. Funny how subjectivity creeps in unnoticed like this in a field which demands objectivity. Some scientists employ outright dishonest policies of manipulating images, it's unbelievable.

    • @theendicott2838
      @theendicott2838 3 роки тому +2

      @@allenholloway5109 you putting quotes around experiment made me remember an anecdote from my basic Chemistry class: we were given a substance in a vial, and we had to do an experiment to identify it. Things like weight-to-volume ratio and boiling point. Well, the school was at an elevation significantly different from sea level. When we measured boiling point, we got the exact temperature you would expect at sea level, and the teacher was shocked. We asked him if we needed to redo the experiment, but he said “no, it’s probably fine.”
      I know my table of chemistry idiots weren’t the bleeding edge of research, but I feel like it illustrates the point that there isn’t enough time/funding to actually conduct proper experiments in several cases.

    • @aaaab384
      @aaaab384 3 роки тому

      And was that 90% statistically relevant? Especially when measured by someone who uses apostrophes to form plurals?

  • @MarkARoutt
    @MarkARoutt 4 роки тому +9

    This is something I am learning a lot about while reading food and health science studies. So many variables are not taken into account in the final findings. Reminds me of the phrase, "If you look for something, you will find it"

  • @ericselectrons
    @ericselectrons 8 років тому +107

    The problem Veritasium exposes in this video is the same thing Richard Feynman spoke about during a Cal Tech speech that was published in his book "Surely You're Joking, Mr. Feynman." Richard Feynman spoke about Cargo Cult Science; which comprises practices that have the semblance of being scientific, but do not in fact follow the scientific method.
    In his speech, Feynman said,
    "We've learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature's phenomena will agree or they'll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven't tried to be very careful in this kind of work. And it's this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in cargo cult science."
    It pretty much sums up the problem within the science community. The lack of integrity as a scientist, largely influenced by the lack of freedom given to scientists at select institutions, is the downfall to most careers in science and scientific research. Feynman ends his speech by giving the students much needed advice on how to be a better scientist by saying,
    "So I wish to you-I have no more time, so I have just one wish for you-the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity. May you have that freedom."
    I couldn't have said it better!

    • @carolalvarez3728
      @carolalvarez3728 7 років тому +2

      Thanks Eric, I know & like Richard Feynman. I'm going to get that book; it sounds fascinating. Thanks for the tip.

    • @redrounin1440
      @redrounin1440 6 років тому +5

      Thank you for posting this! I'm often commenting on people's sycophantic acceptance of anything scientific. I try to point out that science is not about accepting wholesale the word of authority figures, but about being skeptical. Testing the world around you, and doing so in a way so as to limit your biases. The way some people rush to defend their favorite scientists or pet theories has prompted accusations of cult like behavior from me on a number of occasions. It's nice to hear such a man as Feynman speaking out about this.
      Of course I'm talking more about the lay persons blind acceptance of anything handed down on high from what they perceive as a high priest in the 'cargo cult'. The kind of person who parrots what they hear on NPR or PBS with _zero_ understanding of what they're talking about. You know this person, they wear a NASA t-shirt and are quick to comment on the fundamental nature of the universe despite receiving a 'D' in high school physics. They watch the Big Bang Theory and laugh along merrily. They have conjectures on the nature of black holes yet struggle to calculate a fifteen percent tip on their bill. They have a wealth of scientific and pseudo-scientific "fun facts" but no integrated understanding of any of it. You'd be hard pressed to find a single original thought floating around in their brain.
      These people are the tributaries of the 'cargo cult' of science. And unfortunately they represent the majority of people who are at least interested in science. It's their votes the government counts on when allocating money to scientific institutions. It's their views the little pop science videos all over youtube count on for their ad revenue. None of these institutions are interested in actually teaching their tributaries how the scientific method works, the value of skepticism, or a groundwork understanding of the subjects they claim to love so dearly.
      Long story short, the cult priests need the tributaries as much as the tributaries need the priests. And it's not going to end anytime soon. The only solution is to educate yourself. An undergraduate understanding of mathematics, and at least a conceptual understanding of some of the models we use in physics and engineering take hours of work but if you love these subjects put the work in and learn them. Most importantly, understand _what a model is_ and _what their limitations are_ . No real scientist claims to have a complete and true understanding of how the universe works! We do however have some _amazing models_ we can use to predict how the universe will behave.
      TLDR: Don't be a cult member. Don't take people's word for it. Put the work in and learn it for yourself!

    • @sudarshanh.v993
      @sudarshanh.v993 6 років тому +1

      That book is one of the most awesome books I have ever read so far. It perfectly describes what science is today and also shows how beautiful the pursuit of science is.

    • @rubiks6
      @rubiks6 6 років тому +3

      It's not that scientists are put in positions where they are forced to surrender their integrity, it is their _willingness_ to surrender their integrity. Integrity holds little value for many people. Notoriety, money, prestige, security - these things are esteemed higher than integrity.
      Integrity should be of infinite value but, alas, human nature is such that integrity is bought and sold and many times simply given away.

    • @Dowlphin
      @Dowlphin 6 років тому +5

      That kind of freedom can suck though.
      But basically, capitalism is the enemy of truth, since it is the way of corruption. It's all about existential fears and breeding bad character traits like greed.

  • @yourdiytechlife
    @yourdiytechlife Рік тому +4

    As a person who loves science but is not in the field, I've become quite disgusted by the lack of integrity shown by the university system. It has been corrupted to the core and needs to be cleaned out. It's become big business now and is not to be trusted when profit is the driving motivation; that's not what universities are for.
    I have no issue with for-profit companies doing research and development, as long as everyone knows where it's coming from, knows it's driven solely by profit, and treats it as such.

    • @terrencedent3071
      @terrencedent3071 9 місяців тому

      I definitely understand that feeling. As a scientist who has spent a disenchantingly long time in academia, I still have faith in individual scientists and the prevailing winds of science overall. Look how far the world has come in such a short span of time (for good and bad). That progress is built largely on a basis of good science; the bad stuff ends up getting filtered out. Universities absolutely operate for profit, but not everything that makes a profit is without merit in my eyes.

  • @StructEdOrg
    @StructEdOrg 2 роки тому +81

    This is huge in my field, Structural Engineering, as people get way too lax about sample size. Thanks to testing things like full-sized bridge girders being incredibly expensive, sample sizes of 1-3 have become all too common, and no one does replication studies... Then that mentality bleeds over to things like anchor bolts that can be had for $5 apiece at any big box hardware store. It's getting dangerous out there!

    • @LogicNotAssumed
      @LogicNotAssumed 10 місяців тому +3

      I took a course on rigging moving loads. There I learned Working Load Limit is 10% of Minimum Breaking Strength.
      That makes me feel safe.

    • @leeduke5746
      @leeduke5746 8 місяців тому

      ⁠​⁠@@LogicNotAssumed can you explain what you mean by rigging moving loads? Does this refer to loading up delivery vehicles and such or something else? Or is this 10% rule used for many different applications?

    • @Triple_J.1
      @Triple_J.1 7 місяців тому

      This is Crane operator stuff.
      There is a certain "strain" that is allowed for fatigue reasons. (Strain is material stretch vs applied stress). Exceeding that strain, while still below the breaking strength, will result in weakening of the material with repeated use causing failure below it's published minimum strength.
      E.g. steel might have a tensile strength of 110,000psi but a fatigue strength of only 63,000psi (63ksi/110ksi = only 57.27%).
      So, for conservative use, most industries require robust safety factors to account for fatigue, use, damage, etc.
      Commercial airliners are rated for +3.0g x 1.5 safety factor at maximum weight.
      Bridges vary, depending on seismic requirements, etc. But it's not a good idea to cross an old country road bridge rated for 6 tons, with a 12 ton vehicle. You might survive, but the bridge will be damaged.

    • @erenfe
      @erenfe 7 місяців тому

      @@Triple_J.1 Strain is stretch per original length, like if you stretch 2" in what was originally a 100" rod, you've got 2% strain, or 0.02
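
For anyone who wants to see the numbers from this thread side by side, here is a toy calculation. All values are the illustrative ones quoted above (the 100-inch rod stretched 2 inches, the 110 ksi / 63 ksi strengths, the rigging 10% rule), plus an assumed 10,000 lb breaking strength; nothing here is a real design value.

```python
# Toy numbers borrowed from the comments above; not real design values.
original_length_in = 100.0     # original rod length
stretched_length_in = 102.0    # length after loading
strain = (stretched_length_in - original_length_in) / original_length_in   # 0.02, i.e. 2%

tensile_strength_psi = 110_000
fatigue_strength_psi = 63_000
fatigue_ratio = fatigue_strength_psi / tensile_strength_psi                 # ~57%

min_breaking_strength_lb = 10_000   # assumed value, purely for illustration
working_load_limit_lb = 0.10 * min_breaking_strength_lb                     # the "10% of MBS" rigging rule

print(f"strain = {strain:.1%}, fatigue/tensile = {fatigue_ratio:.1%}, WLL = {working_load_limit_lb:,.0f} lb")
```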

  • @alfiaishmetova5652
    @alfiaishmetova5652 3 роки тому +274

    "The Okay, the Bad and the Erotic" actually sounds like a reasonable movie name!

    • @richardgurney1844
      @richardgurney1844 3 роки тому +4

      Yes!

    • @TROGULAR10000
      @TROGULAR10000 3 роки тому +4

      Use me. I'll sign the release form.

    • @mekamk3674
      @mekamk3674 3 роки тому

      Reminded me of "The Good, the Bad and the Ugly"

    • @slaviceno
      @slaviceno 3 роки тому +4

      Interesting how the dude also went right from "erotic" image to slight "deviation" and then to "pee" value, phrase after phrase

    • @trakkaton
      @trakkaton 3 роки тому +2

      The subtitle could be: "Get ready for a pee under one in twenty."

  • @MikeM8891
    @MikeM8891 8 років тому +673

    I have an hypothesis. I think getting in car accidents decreases your chances of dying from cancer
    ...but increases your chances of dying in a car accident.

    • @noahwilliams8996
      @noahwilliams8996 8 років тому +12

      "I shall test this! >8/ " -Hopefully some scientist out there.

    • @7781kathy
      @7781kathy 8 років тому +7

      Good analogy.

    • @IAMDIMITRI
      @IAMDIMITRI 8 років тому +78

      False. Somebody just published a paper about that. You have 100% chance to die from cancer if you where in a car accident. It was a small sample size, about 1 man. He was a truck Driver in chernobyl and he has been in small accident once. He died from cancer.

    • @Andromedon777
      @Andromedon777 8 років тому +12

      "You have 100% chance to die from cancer if you where in a car accident."
      so if you get in an accident, you will for sure die from cancer!

    • @7781kathy
      @7781kathy 8 років тому +2

      ***** xD

  • @tomsanders5584
    @tomsanders5584 11 місяців тому +1

    As an electrical design engineer I had the mantra "Everything works great until the current flows." You can design a circuit, model it in software, have your peers review it, take all kinds of steps to eliminate risk, but in the end you have to apply power and let mother nature decide if you were right or wrong. I have to say that a majority of the time there was a mistake or two in the first prototype.

  • @thelastcube.
    @thelastcube. 5 років тому +323

    This is an awesome explanation and it's going to be really fun reading everyone commenting with their confirmation bias while I read their comment with my own biases that I have about what biases they have
    Oh sweet brain, such complex things

    • @Ludifant
      @Ludifant 5 років тому +2

      Exactly why I went into the comment section of this one. Lovely.

    • @anthonyesquire9830
      @anthonyesquire9830 5 років тому +16

      This is gold. It is like playing 4D chess with your own brain. With every agreement and disagreement with your own intuitions, you fall into the trap of asking the perpetual questions of "what if my bias is the bias showing me others' bias, and what if that itself is some bias or error in judgement I fail to consider." This becomes too much and can throw people (myself) off, into a spiral of extrapolating truth. I suppose the remedy for bias is not only recognizing bias itself, but perhaps also understanding how to update beliefs based on a consistent or more reliable framework such as Bayesian thinking. So, I do think beginning by investigating biases and attempting nuance by finding multiple sides to research or a thought is a starting point. It is tiring playing mental chess and questioning yourself; however, it does sometimes provide some insight in getting closer to truth. It also makes it easier to detach from ideas when new information is presented. Well, that is my mental state. It might be a bias of its own :). If we are emotionally invested in an idea, consciously or subconsciously, we tend to be more inelastic to new and valid evidence that doesn't support our intuitions. These are just my observations and of course, given valid criticism, I shall update them. :)

    • @MadsterV
      @MadsterV 4 роки тому +6

      And this is why the method exists. When in doubt, test it yourself, never rely on trust or opinion, even your own. This is the most important part of it all, as multiple independent tests reduce the bias.
      This is why when gauging any studies, you must look for indepenent confirmation: a single study can always go wrong in subtle ways. Even the methodology could be wrong, and then while it gets confirmed by others, different studies on the same subject cast a different light on it.
      The more studies, the more confident you can be. Evidence over opinion.

  • @Darwins_Fink
    @Darwins_Fink 8 років тому +187

    this discussion is so important! thanks for making this video

    • @Ludix147
      @Ludix147 8 років тому

      +

    • @veritasium
      @veritasium  8 років тому +15

      you're very welcome!

    • @kimohearn3087
      @kimohearn3087 8 років тому

      +

    •  8 років тому +4

      That was indeed one of the best veritasium videos so far. So glad Derek tackled this problem!

  • @de0509
    @de0509 7 років тому +310

    Some people make fun of Greek philosophers because some of their ideas are nothing but thoughts and speculation. Just wait: perhaps they will make fun of us in the future for being biased

    • @tibfulv
      @tibfulv 6 років тому +18

      Yep, concerningly, 'thoughts and speculation' describes perfectly what is going on in parts of academia.

    • @MusaM8
      @MusaM8 6 років тому +13

      Philosophy is even more prone to bias.

    • @monad_tcp
      @monad_tcp 6 років тому +6

      computing science is not even a science, more like a prescription, too little testing of the hypothesis and too little formalism

    • @mosestewelde8163
      @mosestewelde8163 6 років тому +12

      @@MusaM8 Philosophy is like maths. The logic either adds up or does not after much scrutiny. Maths is pretty much the only field in research still maintaining integrity.

    • @mosestewelde8163
      @mosestewelde8163 6 років тому +9

      @@asumazilla Wrong maths does not obey the laws of nature. There are rules in maths that are based on the natural universe as we know it. If any of those are broken, then that maths is useless to us.

  • @ralphmerkle9315
    @ralphmerkle9315 Рік тому +1

    Thanks!

  • @Aelipse
    @Aelipse 8 років тому +766

    148% of people don't really understand statistics.

    • @jonathangibson9098
      @jonathangibson9098 8 років тому +39

      I absolutely 101% understand this.

    • @Bustaperizm
      @Bustaperizm 8 років тому +43

      This stat has -176% chance of being stolen by me.

    • @clapetto
      @clapetto 8 років тому +8

      But... only 37% of statistics are actually right.

    • @DjVortex-w
      @DjVortex-w 8 років тому +28

      Up to 50% of people are less intelligent than the average.

    • @Steelmage99
      @Steelmage99 8 років тому +17

      78 % of all statistics are made-up on the spot.

  • @djayjp
    @djayjp 8 років тому +233

    Yes. Results are not science until verified/replicated! This is the scientific method.

    • @djayjp
      @djayjp 8 років тому +5

      Very informative video, thank you.

    • @AwesomeSauce7176
      @AwesomeSauce7176 8 років тому +15

      Too bad the studies you see on Doctor Oz (the studies most of the sheep enjoy listening to) are never fact-checked because that would cut into profits.

    • @ThePseudomancer
      @ThePseudomancer 8 років тому +17

      Tell that to sociology majors and they'll call you a bigot.

    • @irvalfirestar6265
      @irvalfirestar6265 8 років тому +3

      Sadly though, the same can be true to a greater extent for legitimate science. Replication studies weren't getting funded much back then by the government or other sources precisely because it's not bombastic or groundbreaking enough to advance the field, so basically only a trace number of replication studies ever gets funded and published.
      In short, landmark studies didn't get fact-checked and replicated a lot because it would cut into their grant money application and prevent them from conducting the studies in the first place.
      Good thing it's changing nowadays though.

    • @boxhead6177
      @boxhead6177 8 років тому +4

      Sorry, can't afford to replicate this experiment; the client didn't give us enough of their product to do further testing beyond the results they requested we deliver. We are a private laboratory and need to be profitable.

  • @Baughbe
    @Baughbe 3 роки тому +25

    I remember the chocolate study. It may have been this one or a similar one being done about that time. I was a potential candidate for testing. However, I turned it down when I was told they would be taking deep tissue samples... not by a nurse but by a 'Trained Technician', and not in a medical facility, but in a rented room at the local university. Those red flags were enough for me to tell them no and let them keep their $600 and the portioned food they were going to give me for my meals. I was poor as heck at the time and barely scraping by. Still, having a non-medical person digging into my body for samples was not better than ramen again for dinner.

  • @miloelite
    @miloelite 2 роки тому +2

    8:18 “Data doesn’t speak for itself. It must be interpreted.”

  • @jasonliu6857
    @jasonliu6857 3 роки тому +57

    As just a stats grad student, I did this a lot to pass classes. Imagine if you made a living by publishing papers

  • @onuraydinrc0997
    @onuraydinrc0997 8 років тому +21

    Derek,
    I am convinced that the whole reason for this reproducibility problem is money. Unfortunately, science/academia is run like a business. Profit is key. Journals need to make money, institutions need to make money, funding agencies need to convince the rest of the world that the tax dollars spent on scientific research are well spent - and that means results, tangible results. So the scientific effort is reduced to the production of results - as much and as fast as possible. This creates a very destructive pressure on the researcher.
    I'm a graduate student currently working towards an academic career. I have been told by several profs in my field including my advisor that if I want to get a faculty job at a good institution after finishing my phd, I need to have at least 8-9 papers in my CV with at least 2-3 in a high impact journal. The reason is, of course, the sheer amount of competition; there are huge numbers of applicants and very few open positions. When hiring researchers, universities look at their previous research i.e., papers - and people can count much better than they can read. As a grad student, you can dedicate your 4-6 years to one fundamental problem, work on it rigorously, and -if things work out- end up publishing one or two papers with a huge impact on the field. But when you then go looking for jobs, you'll have trouble because people can count better than they can read. They'll say "Oh this guy only has 2 papers but this other guy has 15, let's hire the other guy." I know a lot of people in this situation - extremely bright grad students who cannot get faculty positions or even decent postdocs because they don't have enough papers in their CV. Many grad students who intend to stay in academia are aware of this, and there is no way you can publish at least 8 papers in 4-6 years without sacrificing rigor and/or reproducibility.
    Sorry for the long comment, but this is something that constantly bothers me and I felt a need to say something. Hope you get a chance to read this, I'd be very interested in what you think.

    • @pursuitofknowledge6119
      @pursuitofknowledge6119 2 роки тому

      8-9 papers? Heck no, rigour and precision is impossible at that level.

  • @jordanblatter1595
    @jordanblatter1595 8 років тому +319

    I intend to live forever. So far, so good.

    • @nicokuhne3255
      @nicokuhne3255 8 років тому +6

      i like that haha

    • @imveryangryitsnotbutter
      @imveryangryitsnotbutter 8 років тому +5

      Who wants to live forever?

    • @grampton
      @grampton 8 років тому +1

      If you live forever you'll see everyone you know die and then everything you know die, because the universe will end.

    • @StretchReality
      @StretchReality 8 років тому

      +RedEyes Cat dude no way! That must be a record

    • @nal8503
      @nal8503 8 років тому +2

      "If you live forever you'll see everyone you know die and then everything you know die because the universe willend."
      If the universe ends, something will come along to replace it. I'd be quite excited to see that.
      Plus, it's not necessarily true that "the universe" will end, although that's a widely spread myth so I can't really blame you for assuming that.

  • @krishnaveti
    @krishnaveti 2 роки тому +2

    Man, I'm watching this and seeing P(hit ratio = 53% | H0 = People can't see into the future) at 1:20.
    What should be evaluated is P(Can people see into the future? | Hit ratio is x%).
    We don't do statistics well at all.
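
To make that distinction concrete, here is a minimal sketch. Every number in it is an assumption for illustration only (1,000 guesses, 531 hits, a 0.531 hit rate under the precognition hypothesis, a skeptical 0.001 prior); none of these figures come from the video.

```python
from math import comb

def binom_pmf(k, n, p):
    """P(exactly k hits in n guesses), each guess independent with hit probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# All numbers below are assumptions for illustration, not figures from the video.
n, k = 1000, 531        # suppose 531 hits out of 1000 guesses (a 53.1% hit rate)
p0, p1 = 0.5, 0.531     # hit rate under H0 (pure chance) vs an assumed precognition effect

# What gets reported: P(a result at least this extreme | H0), a one-sided p-value.
p_value = sum(binom_pmf(i, n, p0) for i in range(k, n + 1))

# What the comment asks for: P(H0 | the result), which requires a prior.
prior_h1 = 0.001        # assumed skeptical prior that precognition exists at all
like_h0 = binom_pmf(k, n, p0)
like_h1 = binom_pmf(k, n, p1)
posterior_h0 = (1 - prior_h1) * like_h0 / ((1 - prior_h1) * like_h0 + prior_h1 * like_h1)

print(f"P(data | H0) ~ {p_value:.3f}")       # roughly 0.03: looks "statistically significant"
print(f"P(H0 | data) ~ {posterior_h0:.3f}")  # roughly 0.99: chance is still by far the best bet
```

Same data, two very different questions; the second is the one readers usually think the first one answers.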

  • @samharkin9981
    @samharkin9981 3 роки тому +91

    It would seem a lot of researchers are engaged in a pursuit to "prove" a hypothesis, rather than exploring scientifically for science's sake.

    • @Pesso86
      @Pesso86 3 роки тому +20

      This happens because if you don't publish, you will be out of a job. In many cases, the system does not promote high-quality science

    • @jonathanschweiss316
      @jonathanschweiss316 3 роки тому +2

      @@yaroslavsobolev6868 In short, science is provisional; it doesn't _prove_ anything is true but instead tells us what is _likely_ to be true.

    • @jonathanschweiss316
      @jonathanschweiss316 3 роки тому

      @@yaroslavsobolev6868 Interesting. I've never actually heard of instrumentalism, but maybe that's more what I was attempting (and failed) to get at.

  • @alexanderlyon
    @alexanderlyon 3 роки тому +73

    In the social sciences, the problems extend into qualitative studies. Qualitative studies are known for relatively small sample sizes (of often hand-picked participants) and researchers aren't obligated to disclose any data other than what they decide to display in the write-up of the study/article. They can tell whatever story they want to tell about how they collected and analyzed their data and there's almost no way to verify it. A researcher could look through 500 pages worth of interview data, for example, and only pull quotes from 2 pages worth of carefully selected text to craft an article. Essentially, a dishonest qualitative researcher can produce any type of conclusions they want to and even honest qualitative researchers doing the best job they can are frequently just publishing the point of view their favorite theory is already driving at. I've seen entire qualitative studies that hinge on one story from one participant and the rest is merely filler.

    • @theendicott2838
      @theendicott2838 3 роки тому +6

      Yeah, I ran into that a lot when reading papers for my Sociology class in my bachelors program. The paper had a conclusion, but no real supporting evidence to draw that conclusion from. I was very glad I was going out for Mathematics and not something that would require more classes like that, because I found it hard to swallow my reactions and write what the teacher wanted to hear to get a good grade.

    • @gamerwoman6991
      @gamerwoman6991 3 роки тому +5

      But qualitative research does not pretend to be objective. You go in with the idea that this is an interpretation heavily based on the researcher's worldview

    • @SeanKH19
      @SeanKH19 3 роки тому +1

      Other than maybe economics, social sciences in universities are pretty much a cult at this point.

    • @bd3531
      @bd3531 3 роки тому

      There is only one science, physics. Social studies or humanities are NOT science.

    • @granthurlburt4062
      @granthurlburt4062 2 роки тому

      IMHO, qualitative research is an oxymoron. I had long debates with nursing professionals who conducted "qualitative research", basically writing down "narratives" and ethnographies, i.e. things patients and nurses had said. They adamantly refused to say they would generalize from these results because "that's science. We're not doing science." Me: "So what value does this have?" Answer: "Oh, we might learn something from it. Someone might have a similar experience." They get funding for this, and it enhances their career. They ask doctors to put their names on it and the doctors say "Sure" because they need their names on publications

  • @jarrethcutestory
    @jarrethcutestory 8 років тому +55

    The problem with the approach to science is that nobody likes negative results. They aren't sexy. We always have to see "differences". And people feel pressure to find them no matter how tenuous they are. Because of this it's very difficult to correct problems in the literature.

    • @Supermanohman
      @Supermanohman 8 років тому

      And they all want big crazy findings. I made a comment above about the new study saying women are more attracted to altruistic guys. Journals know that popular media will be all over that so they will do anything they can to connect themselves to the study so they can have their name out there. Peer reviewed scientific journals are all about brand recognition just like any other business out there.

    • @ravex24
      @ravex24 8 років тому +12

      A lot of the biggest discoveries in science have been because of negative results. They are the sexiest forms of science.

  • @xitaris5981
    @xitaris5981 Рік тому +2

    I was encouraged to contribute to at least 3 published studies before finishing my undergrad and I was supposed to be the primary author on at least one. You wouldn't believe the kind of nonsensical crap people were proposing to study in hopes of meeting the 'standard'. People weren't concerned about science or making meaningful observations, they just cared about checking the box so they'd have an impressive resume for graduate school.

  • @jossbox4794
    @jossbox4794 8 років тому +138

    How can we tell this research isn't wrong?

    • @veritasium
      @veritasium  8 років тому +88

      oh the endless loop - there have been a fair number of attempted replications recently that have found pretty dismal results. When you consider they are all in agreement, that biases exist, that incentives are skewed, that .05 is not all that low, that p-hacking occurs, it is fairly unambiguous that a sizeable fraction (if not a majority) of research is actually false.
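
      A quick way to see why those ingredients add up to "a sizeable fraction is false" is to run the false-discovery arithmetic. The Python sketch below is illustrative only; the share of tested hypotheses that are true and the assumed power are made-up inputs, not figures from the video or from any study.

          # Illustrative false-discovery arithmetic with assumed inputs.
          prior_true = 0.10   # assumed: 1 in 10 tested hypotheses is actually true
          alpha = 0.05        # significance threshold (false-positive rate under the null)
          power = 0.50        # assumed: chance a study detects a real effect

          true_positives = prior_true * power          # real effects that reach p < .05
          false_positives = (1 - prior_true) * alpha   # null effects that reach p < .05

          ppv = true_positives / (true_positives + false_positives)
          print(f"Share of 'significant' findings that are actually true: {ppv:.0%}")
          # With these inputs roughly half of the positive results are false, before
          # p-hacking or selective publication make the picture any worse.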

    • @leonardokallin9135
      @leonardokallin9135 8 років тому +1

      +Veritasium Wouldn't the odds of the exception being wrong be higher than the odds of the norm being wrong? There's a reason why there's such a thing as peer review, after all. The scientific model is there to make sure you can replicate the results and methods of published papers. If something doesn't stand up to peer review, it's bad science, as it means something didn't add up.

    • @user-yd6qq5pr7c
      @user-yd6qq5pr7c 8 років тому +1

      you don't

    • @LethalSword666
      @LethalSword666 8 років тому +2

      sadly low sample sizes are a very common problem due to lack of finances or various other reasons.

    • @IceMetalPunk
      @IceMetalPunk 8 років тому +2

      That's the point: when deciding which papers to publish, the scientific method isn't being respected. There's selection bias tending toward publishing mostly positive results and not the inconclusive ones, and there's a complete lack of respect for replication since those studies are often rejected outright.

  • @philipstevenson5166
    @philipstevenson5166 3 роки тому +189

    My 30-year experience of virology and immunology (a more objective bit of biology) is that the data are generally OK but they don't mean much (or anything really) because they use model systems too artificial to translate to real life. I would estimate that at least 90% of published biological science is worthless for this reason. I can't judge for other disciplines.

    • @joesterling4299
      @joesterling4299 3 роки тому +6

      What does "experience" mean? I have over 50 years of experience with rocket science. I even watched NASA land men on the moon. But I'm not a rocket scientist. I am absolutely unqualified to challenge any part of it.

    • @philipstevenson5166
      @philipstevenson5166 3 роки тому +54

      @@joesterling4299 government funded research fellowships, university academic, running research group, etc. I.e. doing as a profession rather than just watching.

    • @trakkaton
      @trakkaton 3 роки тому +3

      Medicine: Above 90%.

    • @Azerty72200
      @Azerty72200 3 роки тому +10

      @@joesterling4299 It was worth asking though, all the more so on the internet.

    • @artemmen7357
      @artemmen7357 3 роки тому +1

      what is wrong with model systems?

  • @GenesisAria
    @GenesisAria 8 років тому +88

    Wow, I'm impressed actually. As a follower of empirical sciences (I study dielectric rotational magnetic fields, and the unified field), getting through to people that statistics and studies are frequently heavily flawed isn't always easy.
    The best method of delivering fact is delivering hard fact through retroduction. Reproducing and delivering impenetrable logic that confines a model to irrefutability. Abstract studies are pointless. If you wanna find out how to make people thin, you learn the physical chemical processes that increase body fats, and catch the problem at its root. Experimenting blindly with random combinations of living habits is unimaginably inefficient.

    • @AlekseyVaneev
      @AlekseyVaneev 8 років тому +2

      Most simple predictable models were already proposed and studied, very little is left for any scientist in the world of "simplicity". The problem now is the increasing complexity of new models, and at this point you can't really "design" an experiment, you have to design general methods that run experiments stochastically in massive amounts. Rarely you can thoroughly test a complex multi-dimensional model or design an easy and encompassing experiment for it. Living habits is actually an example of a complex multi-dimensional model.

    • @GenesisAria
      @GenesisAria 8 років тому +6

      Aleksey Vaneev Well sure, but that's because the studies are impatient. If everything was learned retroductively and factually from the ground up, each process studied meticulously, there would be no mystery and no confusion. We would know each fundamental process and be able to compound them understandably into macro multi-dimensional models as such, because each dimension is understood in full with explanations.

    • @AlekseyVaneev
      @AlekseyVaneev 8 років тому +3

      Real-world systems can never be broken down into a graph of sub-systems with known relationships. Weather or human body, or economy are such systems which cannot be completely decomposed into elements. They are like systems of equations where variable A depends on B, C, D, and each variable depends on the others, in non-linear way. It works as a complete system, we see that it works, but if you start decomposing it into elementary things, they won't add up back, mainly because you just can't standardize (tie one fact to the other) nor detect all of system's elements.

    • @GenesisAria
      @GenesisAria 8 років тому +3

      Aleksey Vaneev Nono, they absolutely can, but not with Cartesian and Euclidean modelling. The real world is not made that way; it's an incommensurate system which behaves in a fractal sense. There are numerous elements at one scale that cohere to make a compound element have more presence at a larger scale. The standard mathematical systems require relative coordinate logic, which works on paper, but causes all of the apparent problems that we all face in the world of things not adding up.
      If you don't know, Cartesian mathematics is working with x,y,z graphing, and Euclidean math starts with a point (you just decide where it starts) and then a uni-directional line. Reality works with bi-directional curved lines (spiralling into a hypotrochoid), as described in Newton's Third Law: recoiling inertia. All things in reality behave this way.
      You can't divide something that does not have Cartesian dimensionality. The prime example is a magnet. If you chop a magnet in half anywhere, it will immediately form 2 fields that have identical geometry to the original, with N/S and inertial plane. You have to work from the bottom up, figuring out how the thing itself works before you can make any assumptions on how complex constructions using this thing work (even if you know how to use it, that doesn't mean you know how it works fundamentally).

    • @AlekseyVaneev
      @AlekseyVaneev 8 років тому +2

      Well, what you are trying to say is an example attempt to generalize/standardize things. But they cannot. Some work one way, some work another way, in one dimensionality and another. That's why you can't build a model of a substantially complex system. You can do that in imagination, as a general point of view, but not in an actual model that can be computed and predicted. And without ability to predict there's no science.

  • @paulster185
    @paulster185 Рік тому +5

    Years afterwards the problem is still in no way solved, not much if anything was improved, but it's no longer talked about.
    I guess silencing critics worked well.

  • @Writeous0ne
    @Writeous0ne 3 роки тому +10

    You see a lot of peer-reviewed research that has many citations, which people consider to prove that the research is correct, but when you read the citations they actually oppose the paper, or it's just a paper on something similar.

  • @nikhilramabhadra6052
    @nikhilramabhadra6052 4 роки тому +194

    Book called "How to lie with statistics" sums up everything. 😂

    • @andrewharrison8436
      @andrewharrison8436 3 роки тому +11

      Thumbs up. That book is so out of date and so relevant at the same time.
      Everyone should get a copy and read or reread it. Ignore how old the example data is, just apply the thinking to everything you see as current news.
      How to lie with statistics by Darrell Huff first published 1954, reprinted (quite rightly) many times.

    • @pozzowon
      @pozzowon 3 роки тому +5

      Man wrote "How to lie with statistics" and then went to work for big tobacco as a statistical liar.
      Interesting story, that guy's.

    • @steveperreira5850
      @steveperreira5850 3 роки тому +2

      I want to read that book, I’m gonna Google it right now and see if I can get an audio version and listen while I’m working, but I suppose this kind of book doesn’t work with pure audio, probably a lot of graphics and formulas

  • @Hist_da_Musica
    @Hist_da_Musica 4 роки тому +21

    Publication strategies of scientific findings are pretty unscientific. The dynamics of social prestige involved in publishing are clearly incompatible with the scientific method.

  • @SofiaStark-be4jw
    @SofiaStark-be4jw 8 місяців тому +2

    Thanks, now I don't know what to do with my life. I'm a senior in high school wanting to study physics, but I have watched a ton of videos that explain research paper publication strategies, and the way academia works in general, and now I realise that the perfect knowledge-making science world I wanted to be a part of is nothing like I thought it was....

  • @tournedede
    @tournedede 6 років тому +48

    Nicely summarized! I am a scientist (engineering) and reproducibility is a huge problem. I think there is a lack of thorough scientific method/experimental design teaching as well. I had to learn on my own about all the possible drawbacks (cognitive bias, etc.) and I am still unsure I do everything correctly.
    Another important source of error can be listed for experimental science: it is literally impossible to control all variables in the environment (where the experiment is conducted), apart from very expensive facilities (ultra-clean rooms, cyclotrons...). Which means that a simple change of weather, some new vibrations (new road nearby the building), a new type of equipment (it is impossible to compare data from different groups that own the same machine, they are never the same - especially after time passes and parts need to be replaced)... will alter the data set. All in all, it should be possible to by-pass it, if you had infinite resources and time. But since we don't (and as you show, it is hard to publish both negative and reproduced results), most researchers try to do the minimum number of experiments. Sometimes not even reproducing their own data (because it will not be the same at all!).
    Well, all is not lost, as most of the time a hypothesis is quite robust to our errors, and being aware of those errors can help reduce them.

    • @phuckgewgle4751
      @phuckgewgle4751 6 років тому +2

      You may be missing the difference between what are called pure sciences and what are called applied sciences. Applied sciences are not true science, i.e. they do not apply the scientific method to arrive at conclusions through data. Often they use trends, probabilities, criteria and statistics to allow for conclusions when the factors of experimentation cannot be controlled for. I think this video is really only trying to debunk these applied sciences as not producing scientifically supported facts. The experimental or 'hard' sciences should be exempt from this critique if I am not mistaken.
      You make a great point though, one I have always maintained, but on that note I would say don't forget that science never attempts to assert it has 'proved' something through the acquisition of its data but rather simply has 'found cause to support certain conclusions over others'. The conclusion that certain well-tested hypotheses are debunked due to a margin of error in the data, such as might be produced by a variance in the machines or proximal road construction etc., is far less tenable than it being explained, or even written off as they most likely are, as the consequence of such events. But things like scientific laws are so constantly observed under their expected conditions that we have never observed instances which could cause us to conclude they were not laws of the universe. To all intents and purposes laws are 'proven', but tomorrow could reveal observations which entirely destroy those conclusions based on today's observations; thus science can't 'prove' anything because at best science only produces conclusions appropriate for today's observations.
      To add to your critique though, one of the things I like to bring to the table is something I think is missed by even most hard scientists today. That is, the current theories which account for current observations could actually be 'indiscernibly incorrect while entirely observably aligned with the measurable parts of the real universe'. That is, it is still entirely possible that our universal theories are actually only a model that can superimpose, without us noticing it doesn't do so entirely or actually, due to the possibility of our incapacity to measure or experience certain parts of the universe. We could be fooled into thinking our theories are more accurate than they are because there is no guarantee we can experience, measure or even comprehend the universe in its entirety, but we would have to do so to think there is not a possibility of an unnoticeable overlay.

    • @aaronthomas8834
      @aaronthomas8834 5 років тому +2

      Man, that is really surprising. That is something that is definitely taught in a chemistry degree track in Analytical Chemistry courses: the injection of personal bias, the bias towards measurements that end in even numbers or five, etc.; the list goes on. Being as logic- and mathematics-based as engineering is, I'm surprised to hear that. I'm sorry you had that experience, man.

    • @virvisquevir3320
      @virvisquevir3320 5 років тому +4

      Phuck Gewgle - No, of course we can't measure every possible variable in the universe. We don't know what they are. And we don't know what we don't know. There could be an infinite amount of unknown variables. All we can do is model the variables we do know about and give them the changing values as measured over time. A model is a simplified abstraction of one small part or aspect of the universe. Models are man made. We use models for the purpose of prediction and control. They are tools. They are not "truth" in any final, exclusive, complete sense. All models are provisional. As soon as a better model comes along, we will drop the old one or relegate it to certain approximations, parameters or purpose. For instance, the Apollo program brought men to the moon and back to earth using only Newtonian Mechanics, good enough, no need for Relativity or Quantum Mechanics. For different purposes, Newtonian Mechanics will not do as well as Relativity or Quantum Mechanics. And the beat goes on... Cheers!

  • @MaxLohMusic
    @MaxLohMusic 5 років тому +54

    The xkcd "Jelly Beans" comic deserves a mention. I'm so glad it became popular because it illustrates the whole issue so well, and in just one frame. It should be required reading for the whole world!

    • @jayaarmstrong5830
      @jayaarmstrong5830 4 роки тому +18

      @Fluffynator The scientists decide to test whether jelly beans cause acne. They do not get statistically significant results, so one of them suggests testing each of the 20 colours separately. When you break up the data into many categories, you increase your chances of a category showing statistically significant results by pure coincidence. This is essentially what happened with the chocolate study in the video. By monitoring many different categories of conditions (weight loss, sleep quality etc.), it was more likely that one of the categories returned a false positive. The same thing happens in the comic. One of the 20 tested colours shows statistically significant results, which is not unexpected given the number of categories they created. They publish the paper showing the (presumably) false positive with green jellybeans while the other 19 studies that correctly identified no relationship go unpublished and forgotten.
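
      A small Python simulation makes that arithmetic concrete. The group sizes and the |t| > 2 cut-off below are assumptions chosen for the sketch, not anything taken from the comic or the video:

          # 20 subgroup tests with no real effect anywhere: how often does at
          # least one come out "significant" at roughly p < .05?
          import random
          from statistics import mean, stdev

          def subgroup_is_significant(n=30):
              """Two groups with identical true means; True if |t| > 2 (~p < .05)."""
              a = [random.gauss(0, 1) for _ in range(n)]
              b = [random.gauss(0, 1) for _ in range(n)]
              se = (stdev(a) ** 2 / n + stdev(b) ** 2 / n) ** 0.5
              return abs((mean(a) - mean(b)) / se) > 2.0

          trials = 2000
          hits = sum(any(subgroup_is_significant() for _ in range(20)) for _ in range(trials))
          print(f"Experiments with at least one 'significant' colour: {hits / trials:.0%}")
          # Analytically 1 - 0.95**20 is about 64%: a spurious green-jelly-bean
          # headline is the expected outcome of testing 20 colours, not a fluke.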

  • @willdehne1
    @willdehne1 5 років тому +32

    "To support the Greater Good we present positives and suppress negative data."
    I find this corrupt/disgusting. Unfortunately this thinking is applied often in politics and science.

    • @Ludifant
      @Ludifant 5 років тому +6

      And most importantly in every human subconscious.

    • @willdehne1
      @willdehne1 5 років тому +2

      @@Ludifant Yes! Insidious. Study this with people from different cultures and upbringing in those cultures. We observe this with people born in one culture but raised in another. It goes deep.

    • @PreciousBoxer
      @PreciousBoxer 4 роки тому +3

      The scientific approach has been corrupted in order to preserve politician's power.

    • @willdehne1
      @willdehne1 4 роки тому +1

      @@PreciousBoxer On a lighter note. I recently read and studied a post by Sabine Hossenfelder on UA-cam. The subject is Gravity. What I get from that post is that the most fundamental Physics are unknown. See discussions on Cause of Global Warming. Nothing is certain. Discussion on distribution of wealth and taxation. Very contentious. Good or bad of Capitalism vs Socialism vs Communism. Very heated discussions. Some people spend their life trying to manipulate these subjects. For good or bad.

    • @PreciousBoxer
      @PreciousBoxer 4 роки тому

      @@willdehne1 That's good to hear. According to your response to me... UA-cam says "read more" but no more is available.
      Do you have a question for me? If so, I do not know. You sound intelligent and I'd like to hear more. I'm just unsure where you are going with this and how to continue with a conversation.

  • @salamatunnafiah9033
    @salamatunnafiah9033 Рік тому +15

    When I was in undergrad, I was so obsessed with doing postgrad for the sake of being "in search of knowledge". Now I am a master's student, and although I love the university library, I just feel "empty" inside, knowing that we cannot rely on "human knowledge" ... and I'm even more interested in religious studies. Looking for "pure" knowledge is hard today...

    • @bubblesbomb8949
      @bubblesbomb8949 Рік тому

      As someone interested in postgrad, would you recommend it to someone fully expecting the absurdity of human knowledge as an absolute truth?

    • @salamatunnafiah9033
      @salamatunnafiah9033 Рік тому

      @@bubblesbomb8949 Yes, just do it. I just realized that it's part of the learning journey....

  • @Mark-sc4bu
    @Mark-sc4bu 3 роки тому +16

    Beautifully presented. P-hacking has been the plague of accurate and truthful research for decades.
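
    One common flavour of p-hacking is optional stopping: keep adding participants, re-test after every batch, and stop as soon as p dips below .05. A rough Python simulation (the sample sizes, peek interval, and simple z-test are assumptions for the sketch) shows how far this inflates the nominal 5% false-positive rate even when no effect exists at all:

        import math
        import random

        def looks_significant(samples):
            """One-sample z test against mean 0 with known sd 1; |z| > 1.96 ~ p < .05."""
            n = len(samples)
            z = (sum(samples) / n) * math.sqrt(n)
            return abs(z) > 1.96

        def peeking_experiment(max_n=100, peek_every=10):
            data = []
            for _ in range(max_n // peek_every):
                data.extend(random.gauss(0, 1) for _ in range(peek_every))
                if looks_significant(data):
                    return True          # stop early and report the "finding"
            return False                 # honest null result

        trials = 5000
        rate = sum(peeking_experiment() for _ in range(trials)) / trials
        print(f"False-positive rate with peeking: {rate:.0%} (nominal threshold: 5%)")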

  • @diablominero
    @diablominero 5 років тому +9

    There's also the journal "Series of Unsurprising Results in Economics," which publishes results everyone would have expected.

  • @rogerlundstrom6926
    @rogerlundstrom6926 4 роки тому +55

    I just looked at this video again, and I realized something: When you say that they don't publish as many negative results, you still assumed that they ONLY would publish TRUE negative results, not false ones.
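
    For context on that point: some published negative results could indeed be false negatives, and how big a share depends on statistical power and on how many tested hypotheses are actually true. A back-of-the-envelope Python sketch with assumed numbers (not figures from the video):

        prior_true = 0.10   # assumed share of tested hypotheses that are actually true
        alpha = 0.05        # significance threshold
        power = 0.40        # assumed power of a typical under-powered study

        false_negatives = prior_true * (1 - power)       # real effects that get missed
        true_negatives = (1 - prior_true) * (1 - alpha)  # nulls correctly not rejected

        share = false_negatives / (false_negatives + true_negatives)
        print(f"Share of negative results that are false negatives: {share:.0%}")
        # About 7% with these inputs; the share grows if power is lower or if a
        # larger fraction of the tested hypotheses are actually true.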

    • @panner11
      @panner11 3 роки тому +14

      True, though considering how few negative results get published, on that graphic it would only account for about one checkmark, or maybe not even that.

    • @georgeelmasry9376
      @georgeelmasry9376 3 роки тому

      Academia is full of politics. Journals are trying to avoid vindictive publications.

    • @patu8010
      @patu8010 3 роки тому +1

      That's what immediately stood out to me too.

  • @amphernee
    @amphernee 9 місяців тому +1

    While getting a psych BA, I wondered why journals are pretty much unregulated. The fact that a journal can publish findings, then refuse to publish studies that disprove or refute them, is troubling to say the least.