The BROKEN system at the heart of Academia

  • Published 11 Jun 2024
  • My Website: petejudo.com
    Follow me:
    Behavioral Science Instagram: @petejudoofficial
    Instagram: @petejudo
    Twitter: @petejudo
    LinkedIn: Peter Judodihardjo
    Good tools I actually use:
    Shortform: www.Shortform.com/pete
    Ground News: ground.news/Pete

COMMENTS • 500

  • @ericpenrose3649
    @ericpenrose3649 9 місяців тому +627

    The peer review that matters is the criticism that happens after a study has been published to the wider community. Not the three sleep-deprived researchers who skimmed your paper on their lunch break.

    • @videomakville
      @videomakville 9 місяців тому +8

      Super super correct!

    • @salganik
      @salganik 9 місяців тому +28

      There are some journals where you can leave a comment on new publications. But, honestly, most papers are so niche that there are not that many comments. And, of course, one would not leave a public, non-anonymous critical comment, as it would damage the relationship between the two researchers.

    • @aklem001
      @aklem001 9 місяців тому +1

      This is what Eric Weinstein said.

    • @ericpenrose3649
      @ericpenrose3649 9 місяців тому +8

      @@salganik The more niche a paper is, the more closely its community tends to pay attention to publications in that field. If a niche paper has no criticism, that means every expert in the field who could understand it considered it carefully and had no major notes.

    • @salganik
      @salganik 9 місяців тому +9

      @@ericpenrose3649 Other researchers are usually busy and most probably will not spend their time leaving anonymous critical comments. Usually, poor-quality papers are just not cited much.

  • @zray2937
    @zray2937 9 місяців тому +271

    I've reviewed 11 papers in the last 2 years, and I don't know if I want to laugh or cry about being "top professors" and "highly paid".

    • @PeteJudo1
      @PeteJudo1  9 місяців тому +24

      Hahahahaa

    • @SiMyt848
      @SiMyt848 9 місяців тому +25

      I wanted to say the same thing! Ahah ... In astrophysics it is common to start reviewing papers during the PhD after publishing the first or second paper.

    • @anakrajinovic1767
      @anakrajinovic1767 9 місяців тому +8

      yes, I think it's very common for early-career researchers to review more papers on average than more established people because they are more available.

    • @notusingpremium
      @notusingpremium 9 місяців тому +8

      I suggested to a few researchers at my uni that they stop reviewing for free, and everyone looked at me like I was an alien. Is it wrong to ask to be paid for work? There should be a worldwide, or at least U.S./E.U./China, researchers' union that would make such demands. Journals must pay for reviewers' time.

    • @LNVACVAC
      @LNVACVAC 9 місяців тому +2

      @@notusingpremium peer review is gatekeeping...

  • @melimsah
    @melimsah 9 місяців тому +59

    I'm not in academia, but I care about science and education deeply. One thing I hear about is how important it is that experiments can be replicated, yet it's often frowned upon or flat-out impossible to replicate a study, and if you do and find that it doesn't replicate, it's assumed you're in the wrong rather than the original researchers.

    • @TheThreatenedSwan
      @TheThreatenedSwan 9 місяців тому +6

      There is a big publication (and funding) bias against studies designed to test whether something replicates and that then show it doesn't. But the studies that got caught recently, and the related ones caught before that people are now talking about again, are about things that never replicate anyway, like priming effects.

    • @noneatallatanytime
      @noneatallatanytime 9 місяців тому +9

      There is no such thing as science without replication. It is in the definition of what we take science to be, i.e. empirical knowledge. Anyone who claims otherwise is trying to sell you something.

    • @alfredoprime5495
      @alfredoprime5495 9 місяців тому +1

      @@noneatallatanytime replication invariably comes down to time and money especially when we're talking about the hard sciences. Unless your own research is so similar to the work you're trying to replicate that you don't need to spend a lot of money on new equipment or materials, there's really no incentive to try to do it.

  • @emockensturm
    @emockensturm 9 місяців тому +178

    A huge part of this that you’re missing (especially in STEM fields) is that faculty members are not only uncompensated for peer reviews, they are also not credited. They get no recognition from the community or from those in charge of their promotions. When the personal benefit of doing something is nearly zero, a cost-benefit analysis tells you to skip it. I think a big reason people don’t say no when asked to review a paper is that they are likely familiar with the journal editor asking for the review. Saying no makes you look pretty selfish.

    • @psytoolkit
      @psytoolkit 9 місяців тому +1

      If you say no to a journal, why should they review your own submissions later on?

    • @jacobfreeland6881
      @jacobfreeland6881 9 місяців тому +2

      I like the idea of crediting reviewers as such. Gives them incentive to do good work, since it will be known that they contributed to the quality of the article or, if it sucks, that they failed to do a good job.

    • @rafaelagd0
      @rafaelagd0 9 місяців тому +4

      @@psytoolkit Because you are paying them US$2k, for example?

    • @rafaelagd0
      @rafaelagd0 9 місяців тому +1

      We do it because we have to. The system is set up like this and you have to comply. That doesn't mean it is the best way. I think that an open review is more important than paying for the review. But a system of credits, where newbies get some free credits and seniors need to review to earn credits, would be a possible solution. An open public review of preprints would be another, to mention two alternatives. It is crazy to think that the supposedly highest intellectual community on the planet could not invent a better way to rank and filter articles. So much so that there are in fact many proposed alternatives. The editorial companies lobby against them every single day. I am probably responding to some Elsevier proxy anyway.
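
      A minimal sketch in Python of the credit idea mentioned above; the starting bonus, costs, and names are made-up assumptions for illustration, not part of any real proposal:

          # Toy review-credit ledger: submitting costs credits, reviewing earns them.
          # All numbers are arbitrary assumptions.
          class CreditLedger:
              def __init__(self, newbie_bonus=3):
                  self.balances = {}
                  self.newbie_bonus = newbie_bonus  # free credits for first-time authors

              def balance(self, researcher):
                  # First contact with the system grants the newbie bonus.
                  return self.balances.setdefault(researcher, self.newbie_bonus)

              def record_review(self, researcher, credits_per_review=1):
                  self.balances[researcher] = self.balance(researcher) + credits_per_review

              def submit_paper(self, researcher, submission_cost=2):
                  if self.balance(researcher) < submission_cost:
                      return False  # must review before submitting again
                  self.balances[researcher] -= submission_cost
                  return True

          ledger = CreditLedger()
          print(ledger.submit_paper("new_phd"))   # True: newbie bonus covers the cost
          print(ledger.submit_paper("new_phd"))   # False: out of credits
          ledger.record_review("new_phd")
          print(ledger.submit_paper("new_phd"))   # True again after reviewing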

    • @emockensturm
      @emockensturm 9 місяців тому +2

      @@rafaelagd0 I was a professor in engineering for 20 years and reviewed hundreds of papers.
      No, you do not have to review papers; I know plenty of people who did not. Most of the time, this wasn't because they explicitly said, "No," but instead ignored the request. Part of the issue is that if you do a good job reviewing papers in a timely manner, you're just asked to review more. Most journals would like to get three reviews for every paper, although they'll usually make a decision using two of them. While the acceptance rate differs greatly between journals, let's say one in three is accepted. So, about nine people were asked to review a paper for every accepted paper.
      If you want to know my feelings on Elsevier, I can send you a rejected proposal I submitted to NSF about 15 years ago. This resulted from chairing the faculty senate libraries committee for a couple of years. I'm intimately familiar with the games they play (and their net profit margins).
      There are reasons the system is stuck in a non-optimal equilibrium, and the activation energy needed to find a better equilibrium is huge.

  • @ferusskywalker9167
    @ferusskywalker9167 9 місяців тому +77

    There's a huge issue with confirmation bias in the papers as well. If a paper directly goes against what a reviewer believes to be true, it's going to be examined far more harshly and rigorously than something that agrees with the reviewer's beliefs.

  • @Draxis32
    @Draxis32 9 місяців тому +211

    Someone who has been in academia and gave up will likely adore this video.
    I always grew up wanting to become a scientist or someone who worked in academia. But when I finally made it, I found there was only Egos vs Egos. An eternal clash of personalities over the pettiest reasons possible. I made the terrible mistake of confusing knowledge with wisdom, and thought that the people in academia were "elevated" due to their vast knowledge.
    In reality it was the same monolithic color of human ignorance that there is in society. In sum, I find it hard to solve the BIAS problem without an external factor added in. However, that may be too costly to implement given the complex nature of scientific work.

    • @cleitoncamilojr8094
      @cleitoncamilojr8094 9 місяців тому +2

      Same

    • @andrewnorris5415
      @andrewnorris5415 9 місяців тому +19

      I've learned that from watching them on Twitter. They reveal a lot about themselves there.

    • @Velereonics
      @Velereonics 9 місяців тому +24

      People really high up in academia suffer from something I think people in creative industries, especially those who became successful as minors, also suffer from. It's sort of like their world is small. Even if they have money and have been all over the Earth, the sphere of people that matter to them, who can even understand what they're doing or their experience getting there, the number of places that have any relevance to their profession, is smaller than it is for most people, and it doesn't change as many times.
      Even if they get a new position at another university, it's still a university environment, they're working in the same field dominated by the same group of people. They're teaching people who are similar to what they were years before. They've been in a cohort their whole life.
      And I think this makes a lot of them very petty, because little perturbations to them seem enormous, and petty actions to gain some small lead don't feel as immoral.
      It's like an immaturity that they won't surmount no matter how intelligent they are. The way they became intelligent is the issue, in and of itself.

    • @Draxis32
      @Draxis32 9 місяців тому +2

      @@markhall2414 I made a similar point in a video about Noam Chomsky's criticism of intellectuals.

    • @mircopaul5259
      @mircopaul5259 9 місяців тому

      One simple way to mitigate the bias problem would be to make reviews double-blind (potentially also with respect to institution).

  • @warrenkruger1966
    @warrenkruger1966 9 місяців тому +154

    I have done a LOT of peer review (hundreds of papers). Everything you say is true, but you missed the largest cause of bias: because your own work is judged by citations, papers that cite your work a lot tend to be favored. The competitor issue is generally less of a problem with papers, as most journals allow you to exclude reviewers. I do think better peer review would occur if reviewers were compensated.

    • @salganik
      @salganik 9 місяців тому +4

      Well, I am not sure this is a huge issue. Why would you compromise your ethical standards just to get a few new citations? Yet it is true that many authors try to guess their reviewers and cite them in advance.
      But you are right that citations are a bit broken. When reviewing papers, I have seen so many unethical references: authors just cite everyone from their lab even when there are much better and earlier papers on the same topic. And each paper, even a very bad one, may produce tens of such unethical citations, breaking the whole concept of the h-factor.
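
      For readers unfamiliar with the metric being gamed here: the h-index (what the comment calls the h-factor) is the largest h such that h of your papers have at least h citations each. A small illustrative Python sketch with made-up citation counts, showing how a few courtesy citations spread across papers inflate it:

          def h_index(citations):
              # Largest h such that h papers have at least h citations each.
              counts = sorted(citations, reverse=True)
              return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

          honest = [9, 7, 5, 3, 2, 1, 1, 0]     # made-up citation counts
          padded = [c + 3 for c in honest]       # three extra "courtesy" citations per paper

          print(h_index(honest))   # 3
          print(h_index(padded))   # 5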

    • @vampir753
      @vampir753 9 місяців тому +5

      I have also thought about that. The issue with compensation is that it creates an incentive to review "as much as fast as possible", and a minority of researchers will exploit this to get additional "funding" on the side. Hence there will be a tendency for a majority of reviews to be done by a minority of researchers for the wrong reasons. I'm a researcher myself (post-doc) and haven't done many reviews yet, because I only do them when I get a good paper on a topic I'm really an expert on and have enough time to do a thorough review.

    • @Secret_Moon
      @Secret_Moon 9 місяців тому

      I have never heard of journals allowing you to exclude reviewers. The whole reviewer selection is done by the journal and is completely blind to you. You can "suggest" reviewers, but journals rarely pay heed to your suggestions anyway. And honestly, being able to exclude reviewers brings about more problems than it solves. It would turn the whole reviewing process into echo chambers.

    • @salganik
      @salganik 9 місяців тому +3

      @@Secret_Moon While, as you said, the choice of reviewers is made by editors, you are often asked to suggest reviewers and (optionally) to exclude others. Of course, editors can ignore this too, but I doubt they would bother. You can sometimes even explain the reason for the exclusion. And it sounds to me like a good option. Some people can be unreasonably harsh to your work for various reasons, including a conflict of interest, and after a single unfair revision you don't want it to happen again. Unfortunately, a lot of reviewers do not follow the guidance of the revision process and can be extremely subjective (for example, giving reasons that are not supported by publications).

    • @jsupim1
      @jsupim1 9 місяців тому

      If the reviewer doesn't like the fact that his papers aren't cited and thinks they should be, I think they can suggest adding new citations when they request edits to the paper.

  • @doctorlolchicken7478
    @doctorlolchicken7478 9 місяців тому +85

    I did peer reviews in the early 90s and it was BS then. Didn’t expect it to improve. In my field in business we are required by the government to have independent verification by a third party, which includes replicating a lot of the data analysis, or at least studying the data closely. I think for research involving data, some examination of the data should be required.

    • @salganik
      @salganik 9 місяців тому

      Each paper requires 2-3 reviewers plus 1-2 editors. Assuming 1.5 papers per year per academic, everyone would need to review 4-5 papers per year with full data analysis, and since PhD students are excluded, maybe even twice that. This is unrealistic. With such requirements, most reviewers would skip most of their reviews when they see that the research is based on large datasets (satellite imagery, numerical simulations, machine learning). It is the job of the authors to convince others that their data makes sense; we are not in elementary school.
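
      The back-of-the-envelope arithmetic in this comment, written out in Python; every figure is the commenter's rough assumption, not measured data:

          # Rough reviewing-load estimate; all numbers are assumptions.
          papers_per_academic_per_year = 1.5
          readers_per_paper = 3            # roughly 2-3 reviewers plus an editor or two

          reviews_needed = papers_per_academic_per_year * readers_per_paper
          print(reviews_needed)            # 4.5 reviews per academic per year
          print(reviews_needed * 2)        # ~9 if PhD students are excluded from the pool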

  • @lisleigfried4660
    @lisleigfried4660 9 місяців тому +132

    Making stock footage of yourself to save time editing is genius

  • @gretalaube91
    @gretalaube91 9 місяців тому +129

    Did research for 30 plus years. This is why when everybody cheers: "Because SCIENCE!" I laugh. The blind leading the blind. Oh, and don't forget egos!

  • @theysisossenthime
    @theysisossenthime 9 місяців тому +53

    I'm not involved in a scientific peer review process, but I am directly involved in engineering peer review. I see many similar issues between the two. The biggest issue I continuously struggle with is that a significant population of people that I've worked with don't see the value added by the peer review process. Even after seeing how engineering projects have literally been saved by peer review, too many just want to wave a magic wand over it and move on.

    • @TheThreatenedSwan
      @TheThreatenedSwan 9 місяців тому

      There is an economic question, but usually institutional peer review isn't that great even if it applies certain minimal standards. Pretty much all papers are peer reviewed by other people working in similar areas before they get to institutional review, and as he said in the video, inter-rater reliability is too low.
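
      Inter-rater reliability is usually quantified with something like Cohen's kappa; here is a small self-contained Python sketch with invented accept/reject decisions from two hypothetical reviewers:

          from collections import Counter

          def cohens_kappa(rater_a, rater_b):
              # kappa = (observed agreement - chance agreement) / (1 - chance agreement)
              n = len(rater_a)
              observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
              freq_a, freq_b = Counter(rater_a), Counter(rater_b)
              labels = set(rater_a) | set(rater_b)
              chance = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
              return (observed - chance) / (1 - chance)

          # Invented decisions on ten submissions (1 = accept, 0 = reject).
          reviewer_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
          reviewer_2 = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
          print(round(cohens_kappa(reviewer_1, reviewer_2), 2))   # 0.2: barely better than chance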

    • @otm646
      @otm646 9 місяців тому +4

      Within private companies I've seen very strong internal engineering reviews because it's a private company. It's their corporate reputation and money on the line.
      Are you in the public sector?

    • @theysisossenthime
      @theysisossenthime 9 місяців тому

      The same is relatively true of engineering peer review. Even though technical standards exist, the act of peer reviewing engineering projects is more art than science. Standards are valuable, but often don't address the practical risks associated with projects. Good peer review can save a company/government agency millions of dollars per year and ensure more realistic schedules. I think one of the challenges that comes up is which entity saves that money and time (especially if one entity can exploit another to make more money by going "over budget"). If we all followed better practices, in the end everyone would save money. But in the short term, on the project immediately in front of me, the cost of peer review may only save another entity money. If only everyone could see the long-term value to all of us if we just suck it up and value this process.

    • @theysisossenthime
      @theysisossenthime 9 місяців тому

      @@otm646 I have worked in both sectors. From my limited observations, I have not seen materially increased quality nor consistency from either sector.

    • @Aaron-kj8dv
      @Aaron-kj8dv 9 місяців тому +3

      I remember when they made a big deal out of this bridge that was designed by an all-female team in Florida, and then it collapsed. As a story it's funny, until you think of the real-world implications, and I remember thinking: how was this allowed to be built? I'm sure there were even really smart guys on the construction crew who built the bridge who could intuit it wouldn't work.

  • @lindasegerious9248
    @lindasegerious9248 9 місяців тому +15

    I have published research, and I was offered to be a peer reviewer for an academic journal, but on a topic where my background and level of expertise are minimal. I mentioned that. Still they insisted. In the end I never accepted, but the fact that they insisted opened my eyes about the reliability of the peer-review system.

    • @Tugela60
      @Tugela60 9 місяців тому

      That has never happened to me. Journals never say anything if I indicate that I feel I am not qualified to comment on a particular subject. They have always provided the option to decline if I felt that way.
      Many journals ask for a list of suggested reviewers from the authors, which they may or may not use. In general you will only get asked to do a peer review if the author has named you as a potential reviewer, or if you have previously published in that journal. It is expected that if the journal has accepted one of your papers, you will reciprocate by agreeing to act as a reviewer in the future.
      If you are asked to do a review, you probably fall into one of those two categories.

    • @Tassdo
      @Tassdo 8 місяців тому

      In math/CS at least (probably in other fields also), you can often indicate your level of confidence in your review (high = I'm an expert in this exact topic; mid = I'm well versed in this topic and work on closely related ones, that sort of thing). The committee can then weigh the different reviews based on this.
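
      A toy Python sketch of the weighting idea described above; the scores, confidence levels and weights are invented for illustration and are not any conference's actual scheme:

          # Weight each reviewer's score by their self-reported confidence.
          weights = {"high": 1.0, "medium": 0.6, "low": 0.3}   # invented weighting scheme

          reviews = [
              {"score": 6, "confidence": "high"},    # expert on this exact topic
              {"score": 3, "confidence": "low"},     # only loosely familiar with the area
              {"score": 5, "confidence": "medium"},
          ]

          weighted = sum(r["score"] * weights[r["confidence"]] for r in reviews)
          total_weight = sum(weights[r["confidence"]] for r in reviews)
          print(round(weighted / total_weight, 2))   # confidence-weighted mean score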

  • @RoughPerspective
    @RoughPerspective 9 місяців тому +29

    As a nobody in psych, meaning that I have yet to conclude my MS, I was offered a peer reviewer "job" by a high impact journal. Of course I never took up the opportunity because... it would be unethical to say the least...but it speaks volumes about how stringent the process is. Just wanted to put it out there, love your videos ❤

    • @Tugela60
      @Tugela60 9 місяців тому +2

      An editor is not peer review; their job is to determine whether a manuscript is relevant to the journal's subject coverage and is sufficiently important to meet the journal's criteria for publication. If the manuscript gets past that hurdle, it is sent off to independent reviewers for peer review.

  • @doctorgumby
    @doctorgumby 9 місяців тому +23

    The problems with peer review you mentioned are well known and have been for decades. I would be interested in alternate solutions. In my opinion, peer review is the best option we have. That being said, one improvement I hope is implemented soon is that verification studies are promoted alongside novel studies. You can't have a complete scientific process without rigorous verification of the results.

  • @phillustrator
    @phillustrator 9 місяців тому +6

    Edit: I wrote this comment before watching the video thinking Pete would not dare go where I went. Props to you Pete for your courage.
    Peer review for some (a lot of) researchers is just a way to either make you cite their papers, or make sure you don't publish in a "prestigious" journal. If you think about it, the fact that they are not paid for their work means they have no other incentive. Your peers are also your competitors. We need to make peer review open, not anonymous, and voluntary. Also: reviewers need to disclose ALL their conflicts of interest, not just the financial ones. Working on the same topic and being upset that you were scooped is a conflict of interest too. We also need to abolish fancy journals like Nature, because all they do is incentivize fraud. I have much more trust in a paper published in a mid-tier journal than in a Nature paper, because I have seen how the Nature sausage is made.

  • @UnsafeKibble
    @UnsafeKibble 9 місяців тому +15

    Peer review is further problematic because this third-party system is tied to advancement. Some university tenure systems will not accept a publication in an open-access journal as good enough for tenure and promotion considerations. Instead, they demand publication in a prestigious peer-reviewed journal. Some journals charge thousands to make the articles they publish available as open access.

  • @mortifinkenbein9559
    @mortifinkenbein9559 9 місяців тому +8

    One problem with peer review is the sheer number of papers that have to be reviewed. It became very obvious during the Covid-19 pandemic, when every small medical institution pushed out two papers a month because they saw a chance to make a name for themselves.

  • @Tcheera
    @Tcheera 9 місяців тому +17

    Yeah... I worked at one of the top centers of excellence in the nation for my field, on a project that had really unique datasets. Usually just from the abstract, the intro, or the participants/methods section, ANYONE in the field would know -- maybe not who the first author was, but they would know whose project it was, because the PI was literally the top author in the field -- and just about anyone affiliated would get accepted.
    I was first author on two submissions early in my PhD, before I was even a dissertator -- it was funny because I was forced to "dumb down" both publications and not use the best stats methods for my multi-time-point studies, so that people in the field would be more likely to understand them (i.e. I was going to use HLM but was encouraged to just use multiple regression, because they knew that none of the reviewers would likely be trained in HLM). And it was also clear that everyone would know whose study it was -- the field was run and reviewed by such a small cluster. Look at the dataset across studies -- based on the description, there's LITERALLY only one dataset in the whole world that could match the one I used. And I think that's kind of unfortunate in some ways, because I think my study was fine -- but I remember my professors being -- I don't know, maybe feigning being -- shocked that both of my studies got through on the first round. I remember kind of not being shocked, because I thought -- well, who is going to reject these studies by them?
    *shrug* -- even if they were all double-blind, they would be completely known. And even from the reviewer feedback, a lot of the time we would know who the reviewers were, LOL. I mean, the reviewer challenges you on contradicting studies from the past ("what about the studies that say x, y, z!") and they're so passionate -- and we've kind of already addressed them -- but it's clear that person is the reviewer, because they're upset we've found contradicting findings, so we have to put in a line giving them props and explaining how their work was INTEGRAL to our current theory, even though it was about children and not adolescents.
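
    For readers outside psych: HLM (hierarchical linear modelling, essentially a mixed-effects model) differs from plain multiple regression in that repeated measurements are nested within participants. A minimal Python sketch of the contrast using statsmodels, with fabricated data purely to show the two calls:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Fabricated multi-time-point data: 30 participants measured at 4 time points.
        rng = np.random.default_rng(0)
        subjects = np.repeat(np.arange(30), 4)
        time = np.tile(np.arange(4), 30)
        outcome = 2.0 * time + rng.normal(0, 1, size=subjects.size) + rng.normal(0, 2, size=30)[subjects]
        df = pd.DataFrame({"subject": subjects, "time": time, "outcome": outcome})

        # Plain multiple regression: ignores that observations are nested within subjects.
        ols_fit = smf.ols("outcome ~ time", data=df).fit()

        # Mixed-effects model (what HLM software estimates): random intercept per subject.
        hlm_fit = smf.mixedlm("outcome ~ time", df, groups=df["subject"]).fit()

        print(ols_fit.params["time"], hlm_fit.params["time"])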

  • @cipaisone
    @cipaisone 9 місяців тому +24

    Before leaving academia, I reviewed dozens of papers for journals in my niche area of materials science. I agree with everything you said, and I want to add another point that concerns materials science and perhaps not so much other areas.
    In materials science, works often span multiple areas of competence, as a material or phenomenon might be characterised with a number of very different methods, and the theories invoked to explain different characteristics can require very different backgrounds too. Often no reviewer can possibly have an extensive enough background to verify all the various data presented, let alone the theoretical speculation proposed by the authors. Even with 2-3 reviewers, it is often the case that none of them is able to judge some of the presented data. As such, it is very common to have works where a very large portion of the study was simply neglected by the reviewers, due to lack of knowledge.
    Although this generally concerns supporting data rather than the major results of a study, it leads to a lot of garbage (unreliable) data being published in materials science journals.
    Over time, this poisons the entire literature, making it very difficult to find reliable scientific data for even basic parameters. In fact, data on simple "reference" materials has become so corrupted that it is virtually impossible to find reliable sources in most materials science journals.
    I believe that this trend will lead to the collapse of the entire field of materials science research over some decades, as it becomes clear that most of the published scientific data are unreliable... and perhaps that would be a good change, as it will force materials science to focus on reliability rather than fanciness.

    • @bettaneron
      @bettaneron 9 місяців тому

      I am holding back my desire to name another research area experiencing the same exact problems, because I am a well-tempered person.

    • @Chris-zd7gw
      @Chris-zd7gw 9 місяців тому

      ​@@bettaneron What do you think is "well tempered" about colluding to hide pertinent facts?
      You're as dishonest as the people deliberately perpetrating this fraud.

    • @czakotmiszermawi
      @czakotmiszermawi 9 місяців тому +1

      @@bettaneron Maybe you could write its name backwards. :-)

  • @allisthemoist2244
    @allisthemoist2244 9 місяців тому +21

    Still, you'd think that peer reviewers would have caught James Lindsay rewriting Mein Kampf but swapping its language for language about men and women.

    • @avaraportti1873
      @avaraportti1873 9 місяців тому +17

      Peer review doesn't mean anything when the reviewers are hacks and the field is inherently pseudoscientific

    • @TheThreatenedSwan
      @TheThreatenedSwan 9 місяців тому +6

      @@avaraportti1873 That's why it's silly to put this thing "peer review" on a pedestal when people should be asking, what if the "peers" are crap

    • @otm646
      @otm646 9 місяців тому

      ​@@avaraportti1873That's great except that politicians and news media, plus people with access to Google use that as their gold star of what's trustworthy.
      The only way peer review is going to bring itself down a couple notches is if academia itself does it and they won't, because it's academia.

    • @grahamstrouse1165
      @grahamstrouse1165 9 місяців тому

      James, Helen & Peter were flabbergasted by their successes… and not in a good way. It was funny, but it was also deeply horrifying. This was back before James went around the bend, of course.

    • @TheThreatenedSwan
      @TheThreatenedSwan 9 місяців тому

      @@grahamstrouse1165 Stopping at 2010 liberalism is going round the bend now is it?

  • @DarianCabot
    @DarianCabot 9 місяців тому +6

    As a layman, not in academia, I find these videos eye-opening. I had always naively assumed the peer review process worked. I'm now questioning all those statistics quoted by articles and experts claiming "research has proven XYZ". I'm interested in hearing more about peer review alternatives. Thanks for your videos!

  • @corrinebresky4116
    @corrinebresky4116 9 місяців тому +8

    I never knew that double blind review was mainly optional. Sheesh!
    & Congrats on 50k 🎉 incredibly well-deserved

  • @notusingpremium
    @notusingpremium 9 місяців тому +2

    I had a journal reject my paper because "it was already published". They had found the preprint on arXiv. I didn't even bother to explain it to them; I just published in a competing journal.

  • @cdxcsfu
    @cdxcsfu 9 місяців тому +5

    This is an excellent summary of the process, particularly for people outside of academia. As a recent PhD graduate and current postdoc, however, I will say you have a bit of a misconception about who does the peer reviewing. Often it is the graduate students or postdocs within the lab who do the reviews for the head of the lab, the principal investigator (PI). As you point out, the PI’s time is valuable, so it doesn’t make a ton of sense for them to sit and read manuscripts, try to poke holes in arguments, write comments, etc. We can chat about whether that’s a good or bad thing, but having come from a highly ranked uni in my field, I don’t know a single prof who actually does their peer reviews completely on their own.

    • @franzculetto5962
      @franzculetto5962 9 місяців тому +1

      In other words, peer review is essentially slave labour... no pay, no chance to refuse, no credit for it; and if the youngest among us are coerced into it, then it is almost child labour...

    • @douglasb5046
      @douglasb5046 7 місяців тому

      I do!

  • @johnanthony9923
    @johnanthony9923 8 місяців тому +3

    I feel like these problems could be *EASILY* solved by the journals (if they really wanted to).
    1) Reward people who find errors in someone else's paper.
    2) Punish people who submit flawed papers (and *BAN* anyone who submits a significantly flawed or fraudulent paper).
    If there was enough of an incentive to review papers and enough of a cost to submit bad papers, the problem would go away. The fact that the scientific community as a *WHOLE* doesn't see the need to do this is the real problem!

  • @FBCDC
    @FBCDC 7 місяців тому +1

    The problem with competition is tough. My advisor is not well known within my field. He did collaborate with a group of researchers at Oxford. They ended up taking an idea that he suggested during a meeting for a follow-up paper and published it without him. This also affects the results of my work. We had to avoid submitting papers to a top-tier journal where one of his Oxford collaborators is an associate editor.

  • @ideeRotolanti
    @ideeRotolanti 9 місяців тому +9

    Totally agree, though Pete misses one key flaw of peer review: it is simply impossible to perform properly for empirical sciences like machine learning, because the reviewer would need to rebuild the whole software (the experimental test bed) from scratch to assess the many findings in the papers. Note that just looking at the experimental data (that is, the experimental output of a piece of software in machine learning) is not enough to assess whether the software described in a paper does indeed produce the reported results. It follows that most papers in experimental ML are accepted on the basis of an 'opinion' that has nothing to do with any form of scientific validation.
    The ugly truth is that peer review is the building block on which academic power is wielded, and it is used to build or slow down academic careers.
    I talk about these same problems with peer review in content aimed at Italian speakers, hoping to raise awareness among ordinary people.
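
    One partial mitigation that gets discussed is shipping a reproducibility artifact with the paper, so a reviewer can at least re-run the pipeline deterministically and check the reported numbers. A minimal Python sketch of the idea; the config fields, hashing scheme and dummy metric are illustrative assumptions, not a standard:

        import hashlib, json, random

        import numpy as np

        def run_experiment(config):
            # Fix every seed the code depends on so a reviewer gets the same numbers.
            random.seed(config["seed"])
            np.random.seed(config["seed"])
            # ... train / evaluate here; a dummy metric stands in for the real result.
            return float(np.random.rand())

        config = {"seed": 1234, "model": "baseline", "lr": 0.001}   # illustrative fields
        config_hash = hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

        print(config_hash[:12], run_experiment(config))   # same config -> same reported metric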

  • @kayak0000
    @kayak0000 8 місяців тому +3

    I'm an MD with an MSc in epidemiology and I do peer review for 3 Scopus-indexed journals. I take critiquing the methods and numbers very seriously, and there are times when I require people to submit their dataset for validation of the results, i.e., recomputing the p values, etc. I'm not paid enough, but that's OK since I know what I do is for the good of our profession and of our patients, who are the real consumers of these research papers. In terms of openness and accountability of the review process, we follow the standards set by indexing groups, and our reviews are placed on an open peer review website for everyone to read.
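
    Recomputing a reported statistic from a submitted dataset is often only a few lines; a Python sketch of the kind of check described above, with made-up numbers (a real check would read the authors' file and mirror their stated test):

        import numpy as np
        from scipy import stats

        # Made-up raw data standing in for the authors' submitted dataset.
        treatment = np.array([5.1, 6.3, 5.8, 7.0, 6.1, 5.9, 6.8, 6.4])
        control = np.array([5.0, 5.4, 4.9, 5.6, 5.2, 5.8, 5.1, 5.3])

        t_stat, p_value = stats.ttest_ind(treatment, control)
        print(round(t_stat, 3), round(p_value, 4))   # compare against the p value reported in the manuscript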

  • @killzonia
    @killzonia 9 місяців тому +2

    All of your points are spot on. I've even seen the peer review 'delay' used by one researcher to benefit another. In my field, there was recently an instance where a newly-appointed academic and his ex-supervisor at his previous institute published essentially the same work, with minor modifications. It was clear that the idea had been shared between them in order to give the new academic a decent publication, and the submission dates suggested that both papers had intentionally been submitted at the same time to different journals to allow both to be published without complaints about lack of novelty. Of course, there's no easy way to prove that they colluded to allow this to happen, which is why it was possible!

  • @user-kr7xg3nf4c
    @user-kr7xg3nf4c 4 місяці тому

    When I was a PhD student, it was very common for my supervisor to "outsource" peer review to his first-year (!) students. Three of my colleagues and I would regularly get papers sent to us to "comment on". Ostensibly, it was to help us think more critically about the papers we read, which made the process seem like it was aimed at helping us grow as researchers. Only later was I made aware that those were not just some interesting papers we were sent for practice; they were the papers he was meant to be reviewing, and his "reviews" consisted of taking our 4 emails and rewriting them into one single response. Some of the papers he "reviewed" I know for a fact he never read past the abstract. I found out when I was in the process of writing my thesis and saw him emailing papers out for review to the fresh PhD candidates in real time while I was visiting him during office hours.

  • @rexsprouse1744
    @rexsprouse1744 5 місяців тому +1

    I'm an Associate Editor of a prestigious language acquisition journal. I was shocked to learn that there is such a thing as peer review that is not double-blind. In my field, double-blind peer review for academic journals has been universally in place for decades.

  • @michalchik
    @michalchik 9 місяців тому +3

    Peer review is also NOT just the review done by the journal or by people working for the journal. It was never supposed to stop once the article is published. A study is not done with review once published. The scientific community still needs to discuss, debate, replicate and validate. This can take decades, if it happens at all.

  • @adamabdulrahman1932
    @adamabdulrahman1932 9 місяців тому +6

    I definitely want to hear alternatives. My biggest concern is if submitting work to journals that use alternative methods is disadvantageous for the scientists, and if there are any proposed solutions to this problem. Thanks!

  • @Saturn2888
    @Saturn2888 9 місяців тому +2

    I'm a software engineer. It's our job to do code reviews before letting anything in the codebase. If you don't do it correctly, bad things always creep in.

  • @alfredoprime5495
    @alfredoprime5495 9 місяців тому +1

    2:30 another factor of the "volunteer" aspect of peer review is that in many cases the professors delegate the reviewing to their post-docs or even a grad student.

  • @Aaron-kj8dv
    @Aaron-kj8dv 9 місяців тому +6

    One of the worst things is it seems like public intellectuals are so arrogant about being right but ironically so many are not interested in making sure the data is sound. Even if you're not a public intellectual I feel like people would WANT the data to be real because it gives us a better understanding of the world.

    • @seanrrr
      @seanrrr 9 місяців тому

      Yep, there seems to be a high correlation between the number of public talks one gives, and the number of studies they've faked. The best scientists spend their time working, with public speaking secondary. The ones that have given 5 TED talks and are featured on a dozen podcasts are obviously more interested in the fame and publicity.

    • @Tugela60
      @Tugela60 9 місяців тому +2

      That is not the issue; the issue is the public not understanding the nature of the publication process. Ideally everything should get published and then either stand or fall on its merits; it is up to the expertise of the reader to decide which.
      People seem to think that the act of publishing makes something fact, when in reality all it is is a record of work. The final evaluation of the merit of that work is up to the reader.

  • @offtowander29
    @offtowander29 8 місяців тому +1

    I was a project officer/editorial assistant for an academic journal once and helped facilitate the peer review process (I was just an intern finishing my postgrad at the time). I saw the types of comments coming in from the reviewers, and just the lack of structure in the process; I always thought it was waaaay too noisy. Sometimes when a reviewer didn't submit on time, they'd even ask ME to review it "for readability". Granted, it was within the discipline I was studying at the time (politics), but it was the dodgiest thing!! Definitely shattered my perception of academia...

  • @mattmexor2882
    @mattmexor2882 9 місяців тому +25

    Peer review isn't just subjective, it's social. The entire academic system is social. That didn't introduce so much bias when academia was aloof to outside influence. But that's changed radically since WWII. The relationship between academia and government and industry today means the social system can easily be exploited by prevailing interests, whether they be political, financial, or ideological. The current system of academia is an easily captured, unhealthy monoculture.

    • @40NoNameFound-100-years-ago
      @40NoNameFound-100-years-ago 9 місяців тому

      Totally agree. The more connections you have in academia, the higher the chance your paper gets published even without REAL peer review... I mean, the peer review process in that case is just a formality and not taken seriously.

  • @mikebaker2436
    @mikebaker2436 9 місяців тому +5

    The main problem with peer review is that it is made up of peers.

  • @justindoucette9242
    @justindoucette9242 9 місяців тому +3

    Your content is seriously interesting and informative. I really appreciate the effort and dedication to your grind. I am just an interested bystander in psychology (not a student or in the field) but the information is so helpful in terms of rational reasoning and logic.

  • @mariovelasquez1243
    @mariovelasquez1243 9 місяців тому +8

    I don't even think double-blind fixes it, because in a lot of cases fields that are too niche will have too few researchers to hide identities properly, or to keep reviewers from inferring who the paper is from given the line of research. In any case, double-blind as an option without any real incentive to take it (if you're a big enough name to benefit from single-blind, why would you ever opt out?) is doomed to be an alternative only aspiring researchers in smaller labs will take, almost as if they're playing the same sport in a different division from the big honchos. A broader conversation, and a good follow-up to this, should be had on the type of incentives that could make double-blind an industry standard, but something tells me that there's not much appetite to fix it in the peer review underbelly...

    • @zray2937
      @zray2937 9 місяців тому

      I have only reviewed one double-blind paper, and with just one glance at it, I knew exactly who the author was.

    • @40NoNameFound-100-years-ago
      @40NoNameFound-100-years-ago 9 місяців тому

      @@zray2937 I have a friend who got a call from the reviewer, who happened to know him personally; that reviewer asked my friend to write the review of his own paper and send it back to him so that he could submit it on the journal's system.

  • @picahudsoniaunflocked5426
    @picahudsoniaunflocked5426 9 місяців тому

    I'm late to the party but happy to be here to watch you hit 50G congrats! You make interesting subjects comprehensible to a general audience & that's a valued cherished service. Thank you.

  • @stopthefomo
    @stopthefomo 9 місяців тому

    ANYTHING done by humans is subject to all the flaws of human beings: hubris, greed, envy, vengeance, laziness, etc. Whether it’s organized religion (graft and coverups) or governing bodies (Olympic committee corruption anyone?) people are HUMANS and we must always accept our own frailties regardless of the “higher calling”

  • @karltaht2370
    @karltaht2370 9 місяців тому +4

    I am a bit surprised to learn double-blind submissions aren’t common. I did a PhD in computer science and helped organize a conference (conferences are more typical than journals in the CS world). All the conferences we submitted to were double-blind. That said, I always did feel it was biased haha

    • @izidorbenedicic8807
      @izidorbenedicic8807 9 місяців тому

      In some areas, it is considered so important to be first to get your results out that a lot of papers end up on a preprint server (e.g. arXiv, medRxiv or similar) as soon as they are sent to the editors. This means the preprint version could be public months before it gets into the hands of the reviewers, making a double-blind review impossible.

  • @ioannischristou2362
    @ioannischristou2362 25 днів тому

    Completely agree with everything you said. I'm an Associate Professor of Computer Science, and in my field there have been many voices arguing for an objectives-based system that would allow papers (in both journals and conferences) to pass provided they meet a number of well-thought-out criteria, including a proper experimental setup, sound analysis of results, and so on. I would go a step further and claim that maybe Large Language Models could actually replace humans in the peer review process, whereby the LLM would be called upon to decide whether all criteria are met, and even make an informed decision about novelty on the basis of what it already knows about the field. I know many people would find this "unacceptable blah blah", but the truth is that I would trust an LLM to do peer review more than a graduate student with no time on their hands who may simply want to blow off steam by rejecting a paper just because it was assigned to them by their supervisor...
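
    A rough Python sketch of what an objectives-based checklist pass might look like; the criteria list is illustrative, and ask_model() is a hypothetical placeholder for whatever LLM or human judgment would fill in each verdict, not a real API:

        # Objectives-based screening: a paper passes if it meets every listed criterion.
        CRITERIA = [
            "proper experimental setup described",
            "sound analysis of results",
            "data and code availability stated",
            "claims supported by the reported evidence",
        ]

        def ask_model(criterion, manuscript_text):
            # Hypothetical stand-in for an LLM (or human) judgment on one criterion.
            # Faked here so the sketch runs; a real system would prompt a model with the text.
            return criterion != "data and code availability stated"

        def objectives_based_decision(manuscript_text):
            verdicts = {c: ask_model(c, manuscript_text) for c in CRITERIA}
            return all(verdicts.values()), verdicts

        accepted, verdicts = objectives_based_decision("...manuscript text...")
        print(accepted)   # False: one criterion failed in this faked example
        print(verdicts)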

  • @charliem5254
    @charliem5254 9 місяців тому

    Your work is a big help to me bro, thanks!

  • @ONAROccasionallyNeedsARestart
    @ONAROccasionallyNeedsARestart 9 місяців тому

    YES! This is the video I've been waiting for/wanting from you! Thank you!

  • @ThePowerMoves
    @ThePowerMoves 9 місяців тому

    Great video man, thank you for sharing

  • @enriqueag8869
    @enriqueag8869 9 місяців тому

    I came across your channel recently and I am enjoying these kinds of videos a lot.
    I am currently doing my PhD and have encountered some of the issues you bring up on the subject of peer review. I am most concerned about bias in peer review. Research often involves competing hypotheses, and reviewing a publication from the "competing" hypothesis is a recipe for a biased review. This is also true for groups working in the same field and depending on publications for financing, grants or promotion. The conflicts of interest are evident here.

  • @jefft8597
    @jefft8597 9 місяців тому +3

    Data is not a requirement of peer review?????? Whaaaaaaat???!!!! NOOooooooo!!! UnBOILievable!!

  • @namenloss730
    @namenloss730 9 місяців тому

    I had that issue of massive delay and waste of time on my work.
    A publication I wrote was rejected a first time by someone who clearly hadn't fully read it (citing our limitations described in the article as things we failed to disclose).
    Months later it was rejected on the premise of "NO DIRECT APPLICATION IN ENGINEERING" (which I wasn't aware was our goal when doing exploratory work).
    Also on that second round, reviewers criticized us for adding something that round one criticized us for not having.
    And then finally on the third try it got through.
    It's taken a year and a half: weeks upon weeks of rewriting to fit templates, countering reviewers' nitpicks, writing rebuttals, waiting for answers, etc...

  • @silver6054
    @silver6054 9 місяців тому +4

    Not sure if this is true in many/most fields, but in some at least: there are just too many papers! The publish-or-perish culture means people write papers that are very small increments on prior papers (in some cases on their own previous papers). Now, I realize that sometimes small increments can be significant, but at other times, certainly not! I remember a stream of conference papers from one well-regarded author with incredibly small incremental changes of no significance whatsoever, which, perhaps because of some of the factors in the video, got accepted with no problem.
    With many fewer, but perhaps more important, papers, people might be able to do a better job. Or not!

    • @lobstermash
      @lobstermash 9 місяців тому

      Agreed. The 'publish or perish' culture in 'performance review' inevitably drives towards mediocrity, because the teaching, research and administration workload for the average academic is not sustainable at a high standard of performance.

    • @Tugela60
      @Tugela60 9 місяців тому

      The purpose of publishing is to document your work. Are you suggesting that work done should not be documented for future generations to consult?

    • @silver6054
      @silver6054 9 місяців тому

      @@Tugela60 Well, it's not just about documenting your work (i.e. it's not a daily Facebook status update); it should still be documenting significant work, for some varying definition of significant. If all conference and journal papers were of the form "Our work has shown this important thing" (and that was honest!), that would be fine. But some segment are still "Oh, I haven't published enough, let's see what I can write up quickly." So this leads to work at unimportant stages being published, potentially drowning out some of the really significant stuff, and so future generations might really not want to wade through too much of it. Much like I am not that interested in the fact that on Jan 4, 2011 user NotARealName reported on Facebook, presumably truthfully, that their breakfast consisted of cornflakes, toast and coffee.

    • @Tugela60
      @Tugela60 9 місяців тому

      @@silver6054 If it is trivial or minor it won't be accepted for publication. Journal space is limited, they don't accept everything, even if there is nothing wrong with it.
      Whether you like it or not the primary purpose of publication is to present the results of a study. There is no other valid reason to publish.

    • @silver6054
      @silver6054 9 місяців тому

      @@Tugela60 It's nothing to do with whether I "like it or not". Do you dispute that some academics are under great pressure to publish, whether or not there is really something worthwhile to say (yet)? This may vary from area to area, but in some there is a large range of journals of various "tiers", and it really isn't that hard to get published in some of the less prestigious ones. (The academic may not get as much credit for a lower-tier publication, but it's still better than nothing.) And I was also referring to conferences, which again vary in quality.

  • @DistantThunderworksLLC
    @DistantThunderworksLLC 9 місяців тому +2

    Very well said! Too many people are putting way too much faith in the peer review process. Science isn't about peer review, although it has its place, and it's definitely not about consensus. It's about repeatable output from experimentation. Lots of experimentation.

  • @josephnardone1250
    @josephnardone1250 9 місяців тому

    This was a very interesting and informative video to me. I had a completely different conception as to what the peer review process was.

  • @alusandrea1501
    @alusandrea1501 9 місяців тому +1

    That's interesting. In my field (education) all the top journals use double-blind review, except for their special issues or invited articles. Although the quality of the articles they will take varies wildly. I have had multiple manuscripts for which I found another journal, with no changes, after the first journal rejected them.

  • @DanTercelify
    @DanTercelify 9 місяців тому

    Great video, Pete! Clearly communicated. Can you talk more about open science and self publishing studies?

  • @ru40342
    @ru40342 8 місяців тому +1

    I have never been asked to submit any raw data file, or been questioned about the validity of my data, or asked why they can't replicate my findings, etc. Most reviewers focus more on the methodology, originality of the study, objectives and findings, even for Q1 journals.
    I'm pretty certain I could just delete some of the data to achieve significant results and simply justify it as outliers or unbalanced data, without it being detected as p-hacking.
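
    What that kind of quiet "outlier" trimming does to a p value is easy to demonstrate; a small Python simulation sketch with fabricated null data (exact numbers vary with the seed):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        # Two groups drawn from the SAME distribution: there is no real effect.
        group_a = rng.normal(0, 1, 30)
        group_b = rng.normal(0, 1, 30)

        print(round(stats.ttest_ind(group_a, group_b).pvalue, 3))

        # "Justify it as outliers": quietly drop the 3 points that hurt the difference most.
        trimmed_a = np.sort(group_a)[3:]      # drop group A's lowest values
        trimmed_b = np.sort(group_b)[:-3]     # drop group B's highest values
        print(round(stats.ttest_ind(trimmed_a, trimmed_b).pvalue, 3))   # usually much smaller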

  • @LanceHKW
    @LanceHKW 9 місяців тому +1

    You do great work!

  • @ojt3869
    @ojt3869 7 місяців тому

    I would like to see the peer reviewers having to provide a signed succinct summary of what they did and what they found. The summaries would then be published at the end of the paper for all to see.

  • @timothyrday1390
    @timothyrday1390 9 місяців тому +3

    I think tenured professors with light teaching loads could be required to do more peer review work to maintain their status. It appears that we need more brilliant minds reviewing the quality and credibility of work in a given field, not churning out more questionable research.

  • @jloiben12
    @jloiben12 Місяць тому +1

    To be fair, have you read Einstein’s On the Electrodynamics of Moving Bodies paper? That peer review process (to the extent it even happened)… well, proves this point very well

  • @gnoelalexmay
    @gnoelalexmay 9 місяців тому

    I'd be very interested to study the way peer review is used in different study-areas.
    Things like...
    1. Attempts at publication
    2. Time taken to publish
    3. Time taken to peer review
    4. Amount of revisions required
    ...and compare the data, for & against different fields/hypotheses.
    There are some study-areas that appear highly suspicious to me.
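
    If such data existed, the comparison itself would be straightforward; a toy pandas sketch with invented records, just to show the shape of the analysis (a real study would need journals' submission logs):

        import pandas as pd

        # Invented records purely for illustration.
        records = pd.DataFrame([
            {"field": "psychology", "attempts": 3, "months_to_publish": 14, "revisions": 2},
            {"field": "psychology", "attempts": 2, "months_to_publish": 11, "revisions": 3},
            {"field": "physics",    "attempts": 1, "months_to_publish": 6,  "revisions": 1},
            {"field": "physics",    "attempts": 2, "months_to_publish": 8,  "revisions": 2},
        ])

        print(records.groupby("field")[["attempts", "months_to_publish", "revisions"]].mean())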

  • @crypticnomad
    @crypticnomad 9 місяців тому +2

    I've always thought of peer review as basically just a "smell test", if you will, and the first public step in a fairly long process. I'd argue that the issue isn't so much the peer review process, but rather that there is basically little to no incentive to do replication studies, and null results rarely get published. Another problem is that people who commit academic fraud don't end up with lengthy prison sentences. I mean, public shame sucks and losing a job sucks, but adding prison on top of that, where nerds are probably not going to do well, would be a huge back-end incentive not to commit academic fraud.

  • @wetwingnut
    @wetwingnut 9 місяців тому

    I have a friend who works in neuroscience. His complaint about peer review is that it severely limits research. He believes that some questions which his colleagues have deemed to be settled are not. But his research proposals to investigate these questions are shot down on peer review.

    • @Tugela60
      @Tugela60 9 місяців тому

      Then he is not making a compelling argument on the merits. You are talking about peer review for grants, not papers btw.

  • @AngelaRichter65
    @AngelaRichter65 9 місяців тому

    I gave up on peer review in the 90s. It was obvious even back then which way the wind was blowing, and I left all of my training and love of biology behind and left the lab. I was told that the ONLY way to get papers through and grant proposals approved was to add "... and its effect on the ecology", which then became "... and its effect on the climate", and "... its effect on global warming". As a molecular biologist I would have been hard put to prove any of that, and it in no way had anything to do with my work on the cytochrome chain. I no longer cared, as I saw the entire so-called scientific world as full of nothing but egos trying to get TV deals. I spend my time working on important things like potty training my grandchildren and helping them learn prime numbers before kindergarten.

  • @cypressdrill
    @cypressdrill 9 місяців тому

    As an additional argument you could use the Sokal Squared experiment as proof that in several academic fields peer review works against objective evidence. In these cases, the more the article adheres to field orthodoxy and pushes this further, the more chance of publication and positive peer review.

  • @nerdybuddy7415
    @nerdybuddy7415 8 місяців тому +1

    Academic dishonesty is EXTREMELY common in subjects such as social "science", behavioral "science" and humanities. You can easily create your own data because no experiments can be reproduced by anyone else. Take a look at subjects like "gender studies." This is much harder to do the same in subjects like chemistry or physics.

  • @user-qb7fk7eq3l
    @user-qb7fk7eq3l 9 місяців тому +1

    In my opinion, the quality and overall tone of peer review has been declining over the last 15 years or so. This is probably due to exponential increases in the number of PhDs and PhD students in many fields, coupled with the increase in the number of open access pay-to-publish journals. The scientific community is no longer a rigorous, objective circle but a dog-eat-dog system on an industrial scale.

  • @III-zy5jf
    @III-zy5jf 9 місяців тому

    One time I used a website to research academic papers in the U.S, and every file had unreadable, broken English and meaningless graphs written by Chinese students. I don't know who to blame. I couldn't find the resources I needed.

  • @ethan20559
    @ethan20559 9 місяців тому

    peer review should be like jury duty tbh, small panel of multiple people in the field who volunteer to be "called" to review papers

  • @dominicgonzalez2995
    @dominicgonzalez2995 9 місяців тому

    Thanks for making this, it changed my view on the process.

  • @michelebelot6825
    @michelebelot6825 9 місяців тому +1

    Thanks for this video; I am in the Economics field. Just a few comments: in most Econ journals, you would not be able to review a paper from someone at the same institution. In principle one should reveal conflicts of interest. Also, I don't think that peer review is a "full tax" on the reviewer: you usually get to evaluate papers you should probably be reading anyway. I usually enjoy reading other people's work and learning from it, and I think many people do (I am not as cynical...!). If anything, it probably ensures that papers by young researchers at unknown institutions get feedback they would have a hard time getting otherwise. What I really find troubling at the moment is the relatively recent trend of journals to "desk reject" about half of the manuscripts they receive. That gives considerable power to editors, and given that they have many papers on their desk, the 'signal' is even noisier. As an editor, I try to do all I can to implement the system as well as I can. But you are right that the system is noisy, and wasteful, because one usually needs to go through several rounds of reviews before publishing. And it is not easy at all to come up with a good alternative... Academia also has some good apples. I know the recent news is depressing, but please keep some hope!

  • @WittingL
    @WittingL 9 місяців тому

    Years ago I was assigned an article for peer review; it turned out to be a paper by my direct boss and people I worked with every day. I told the journal editor that I wouldn't be able to do the peer review for these reasons. I never heard about the article again, and the journal never contacted me for reviewing again...

  • @DethWench
    @DethWench 9 місяців тому

    Congratulations on your wonderful channel! Great video. I am an epidemiologist and biostatistician. Most papers I review or read have obvious flaws in statistical approach, study design, or both. Reviewers are supposed to decline to review a paper if they don't know enough to give a good review, yet they fudge reviewing the stats, which is the only way you can tell what's up with your hypothesis! I feel like I'm basically saying most of what is published is not technically science. What do you think? Isn't that crazy?

  • @alanhe4476
    @alanhe4476 9 місяців тому +1

    A paper I've contributed to has been in review for 5 years now. As a result of being on an as-yet-unpublished paper, I have been asked to perform peer reviews regularly, despite having left academia. I do not understand the papers I'm given to review, and I cannot spend hours researching a topic I'm completely unfamiliar with for basically no benefit to myself. It would be highly presumptuous of me to criticize any of the content in a paper I'm told to review, so the best I can honestly say is "too dense, hard to understand; could use rewording to be more readable."
    The only incentive keeping academia running as a well-oiled machine is some sense of honor, which is probably why everything has rusted.

  • @queens.dee.223
    @queens.dee.223 3 місяці тому

    Tangentially related to peer review: replication studies are rare, and papers that find no effect are also rarely published. Given the sheer number of studies being conducted, some portion of them must yield "false positives" for significance by sheer luck. Not publishing "no effect found" papers means that the more researchers ask a similar question, the more likely it is that one researcher (or research group) will get results that seem significant by chance, so publishing those "no effect" results might mitigate that problem.
    But of course, that doesn't bring in dollars, so that doesn't happen.
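    A minimal sketch of that multiple-studies point, assuming a conventional 5% significance threshold and fully independent studies (the function name and numbers below are illustrative assumptions): with no real effect, the chance that at least one of N studies comes out "significant" is 1 - 0.95^N, already about 64% for N = 20.

        # Illustrative sketch: how often a true null effect looks "significant"
        # purely by chance when several independent studies test the same question.
        import random

        def chance_of_false_positive(n_studies, alpha=0.05, trials=100_000):
            # Estimate P(at least one of n_studies reports p < alpha) when no real effect exists.
            hits = 0
            for _ in range(trials):
                if any(random.random() < alpha for _ in range(n_studies)):
                    hits += 1
            return hits / trials

        for n in (1, 5, 20):
            print(n, round(chance_of_false_positive(n), 3))
        # Analytically 1 - (1 - alpha)**n: roughly 0.05, 0.23 and 0.64.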

  • @jeeed6390
    @jeeed6390 9 місяців тому +1

    “Data is not a requirement of peer review”?!
    What exactly is going on in academia.

  • @stephenkneller6435
    @stephenkneller6435 9 місяців тому

    I would say that one problem is that there are certain topics in which politics has superseded the science, and journals and reviewers will refuse to "rock the boat" regardless of how sound a paper is, and will outright refuse some authors for the same reason.

  • @hannassewingschool4874
    @hannassewingschool4874 9 місяців тому

    Keep up this important work!!

  • @socialnetworking4782
    @socialnetworking4782 7 місяців тому

    I think open science is our best bet. I'm not in academia, but I've worked in the applied sciences my whole career. From the outside looking in, it seems as though peer review, and academia in general, has a problem with trust being misused. Being transparent about everything, sharing information across the board, and requiring tests/experiments/samples to be repeated by a third party could help a lot.

  • @EricAwful313
    @EricAwful313 9 місяців тому

    Yes, I'd love to hear more about alternatives. I'd also like to know if there are any discussions on how to fix peer review. Maybe peer reviewers should get compensated somehow? Some kind of incentive would be helpful. Finding some way to alleviate the stress of ceaselessly competing for funds would be nice too.

  • @eoiny
    @eoiny 9 місяців тому +1

    There’s a lot of politics, vested interests, conflicts of interest, and gatekeeping in peer review. It’s also very susceptible to idea/theory laundering.

  • @x10mark24
    @x10mark24 9 місяців тому

    One thing I would also add is that if your paper is controversial, or in a controversial subject, the odds of it making it through the peer review process drop as well. Scientists are human too; they can fall prey to the same kinds of bias as any other professional.

  • @sphakamisozondi
    @sphakamisozondi 9 місяців тому +1

    The peer review process need to be... Reviewed

  • @lisboastory1212
    @lisboastory1212 9 місяців тому

    I would love to hear about open science, thanks!!!

  • @parthsavyasachi9348
    @parthsavyasachi9348 9 місяців тому

    I don't publish, but I do a lot of innovative work because I write research software for industrial use.
    Recently I decided to publish something that addressed quite an important problem in the field. In fact, the issue has been around for many years, and I created an innovative way to solve it.
    So I decided to publish, and uploaded the paper after showing it to someone who has published many papers in the field. I made all the changes this professor suggested, and the paper looked good.
    The paper was rejected, with the reviewer saying that the title font was too small and one reference was not formatted correctly.
    A solution to one of the most important problems in the field was rejected over the font size in a figure title.
    The reviewer didn't even read the paper.

  • @noneatallatanytime
    @noneatallatanytime 9 місяців тому +1

    As I understand it, peer review was never intended to make a paper "scientific" but to get feedback from your peers. The standard for making a paper scientific is to reproduce it, which is most likely why you don't peer review data, as that could influence reproductions. I didn't know people thought peer review was important (other than for feedback) in the scientific process, so the question I have is: how did we get to the point where peer review replaced reproduction for the purpose of making something "scientific"?

    • @xsatsuki98x
      @xsatsuki98x 9 місяців тому

      Something "scientific" must be reproducible. Unfortunately peer review can really impact your work by not being published in the way things work...

    • @Tugela60
      @Tugela60 9 місяців тому +1

      It is a filtering process, because journals can't publish everything. The peer review process allows them to select only those manuscripts that are relevant to the journal's field of interest and sufficiently important to advance the field in some way.

  • @TakeShotAction
    @TakeShotAction 2 місяці тому

    I'm extremely interested in evolutionary biology and the benefits of eating meat for our species. In almost every way, our bodies seem clearly adapted for eating meat more than vegetables/fruit. I'm curious whether a lot of issues we associate with "getting older" are actually due to nutritional problems, both during life and during gestation. I'm wary of twin studies and how much emphasis is put on them, when the nutritional and chemical environment during gestation is an extreme confound that isn't controlled for or mentioned in nearly any article I've read.

  • @HeraldoS2
    @HeraldoS2 9 місяців тому

    GG. I think your ending note covers the other issue I see with the system, which is the lack of scrutiny over scientific journals, and how, if anything goes wrong, they can just blame it on "science is a process and mistakes happen"...

  • @MrGiggitygoo31
    @MrGiggitygoo31 9 місяців тому

    I once wrote a review on non-NRTI treatment regimens and alternative regimens for HIV. The peer reviewers were so biased against it that they pretty much said, "aside from these trials showing no change in viral load, where is the proof?" I just moved on and didn't want to deal with it, as I wasn't in academia and didn't care about a publication; I just wanted the concise information out there. Fast forward 3 years, and someone else noticed that a review on the same topic was missing and wrote something similar. I think it was the Lancet that picked it up.
    Glad it was published, and really only bitter toward the peers who kept information that could have helped patients from getting out years earlier.

  • @zeitvergessen2709
    @zeitvergessen2709 9 місяців тому +1

    Can u make a video about publishing?
    How a scientist has to pay journals to review a paper, how the journal doesn't pay for the research OR the peer review (which is done by other scientists for free), and how in the end the scientist has to pay the journal again to read their own articles... it makes no sense.

  • @SmileyxKyley
    @SmileyxKyley 9 місяців тому +1

    Speaking of double-blind review being basically impossible: most labs are doing research based on their previous research, and will cite their past work in the introduction or methods to establish the background. If I say "our previous work identified compound X as a potent and selective inhibitor of protein Y" (insert citation to my lab's prior work), the reviewer has my lab's info right there in the bibliography 💀

    • @PeteJudo1
      @PeteJudo1  9 місяців тому

      Great point

    • @SmileyxKyley
      @SmileyxKyley 9 місяців тому

      @@PeteJudo1 One time I was (probably) able to identify one of the reviewers of a paper I submitted because they used their own papers as examples for types of figures/data analysis they wanted us to use and there was only one author as a common thread across all the papers 😭

  • @kevint1910
    @kevint1910 9 місяців тому

    Halton Arp vs Caltech still defines what the big telescopes are allowed to look at and what people are allowed to say about the observations they make.

  • @bobbyfeet2240
    @bobbyfeet2240 9 місяців тому +1

    My view on peer review always was "I'm going to give this paper a thumbs up unless I'm given a reason not to." I might disagree with the conclusion, but if the method is reasonable (even if not the one I'd use) and the data support the conclusion/the conclusion isn't overstated, let it through. Obviously, I was never able to really do much to look for fraud, which would be another reason to block a paper, but if a paper generally has a reasonable flow... let it get published and then let the community weigh it against other research on the topic. No one paper can ever be definitive on a topic and we need to stop acting like it can be.

  • @MatthewFrancisLandau
    @MatthewFrancisLandau 9 місяців тому

    I don't have the citation off the top of my head, but it has been shown before that peer reviewers are biased toward papers that have positive results.
    This means that peer review essentially suppresses negative results and encourages researchers to exaggerate their results to get them through peer review.

  • @snowybutt
    @snowybutt 9 місяців тому

    I'm very very interested in the open science topic, please make some videos on that :0

  • @cherylvanepps66
    @cherylvanepps66 7 місяців тому

    Agreed. I saw a number of these examples when I was studying and training for my graduate research degree. As a note: the federal governments of the UK, USA, Japan, Australia, Canada and a few other nations could and should be providing compensation for all peer-review work. Academic science in those countries could and should be much better funded, because those governments are sovereign currency issuers and have the fiscal capacity to do so. It would take the financial-incentive conflict of interest out of the equation if ample public money were available to secure funding for laboratory operations and faculty and staff salaries. Unfortunately, we are stuck in a scarcity/austerity mindset where it is popular to be cheap and to deprive our citizens of their basic needs and future prospects.

  • @BobJones-rs1sd
    @BobJones-rs1sd 9 місяців тому

    As someone who was in academia for over 15 years in the humanities, I find this video positively shocking. I had no idea that it was common practice in the sciences for authors not to be anonymized for peer review! That is SHOCKING and frankly outrageous. In a humanities field, I not only published quite a few articles and was a peer reviewer many times (including for the top journals in my field), but I was on the editorial board of a journal, as well as review committees and boards for many conferences. I was personally in charge in a few instances of situations where I would have to check metadata of files submitted to ensure ALL references to authors were completely absent and removed (and to remove them by stripping metadata if necessary).
    I have NEVER heard of non-anonymized authors for a legitimate high-level peer-review process in my discipline or in neighboring disciplines that I also published papers in. That, to me as a researcher, sounds completely unethical. The ONLY situations where an author name would generally be known would be something like conference proceedings (or things similar to that) where the paper had already been presented at a conference and the author was already known, so the review process was mostly for ensuring a standard of quality in writing for the final written publication. The other exception might be things like invited collections of papers, where the whole point was often to gather a bunch of known researchers in an area together. I think some low-level journals might have had only "single-blind" standards, but they were the exception and typically not viewed as strongly as the most "reputable" journals.
    I never knew scientific fields and journals were so incredibly lax compared to the humanities, though many of the scandals and issues with replication in recent years are starting to make more sense to me. An author's reputation should NEVER influence the perception of a paper's quality. This is absurd. (To be clear, I think most humanities disciplines also have standards for publication that are too lax, but that's a different story... at least we're not practicing nepotism and simply rubber-stamping famous names! Holy crap!) Yes, I do realize that many fields are niche, and it's often somewhat easy to narrow down the POSSIBLE authors for a given study. (I thought about that too when doing some of my own peer reviews, but in several cases I was wrong... very wrong with my guesses, one time even missing that an article was authored by a friend of mine who simply hadn't mentioned this project in our conversations. But just because someone might guess sometimes is no reason to make it obvious and definitely influence a reviewer's perception!)
    My perception of many scientific fields has sunk tremendously from watching this video. Maybe I should have known this before about scientific peer review, but I never looked at detailed submission guidelines for scientific journals, as I never submitted to them. Making your paper "anonymous" is simply standard practice before submitting for review in my former discipline, and in every adjacent humanities discipline I published in or did peer review or editorial work in. At least for any reputable journal (not paper mills).

  • @Pengochan
    @Pengochan 9 місяців тому

    The question is what to do instead, or how to improve the system, because doing away with peer review would be worse. The peer-review problem is tied to the fact that publications are effectively the currency that earns scientists reputation and research funding. That leads to a lot of publications that aren't motivated by interesting science, but only by the need to justify funding.
    That said, I have had my own frustrations with peer review, like a reviewer focusing on an aspect of the paper that isn't really at the core of interest because that's the part he understands and is interested in, or some very general request that the English should be improved because of a few uncommon expressions. Sometimes it seems the reviewer just felt he had to criticize something. I also got constructive critique in some instances, and the result was an improvement: some parts better explained or more precise.
    As for the problem of direct competition: I never had that, but then in my field it practically never happens that two groups do the exact same research.

  • @McSimPlaneta
    @McSimPlaneta 9 місяців тому

    Although I agree that all of the aforementioned problems are real, I believe their scale is exaggerated. For example, while it is true that you can sometimes reliably guess who the author of a double-blind paper is, it does not happen often. It is also worth mentioning that some of the problems differ from domain to domain: in my field of computer science, it is virtually impossible to publish in a decent single-blind venue.
    And then the question is: if peer review is so broken, what do you replace it with, especially if you do not want to make things more expensive? On the other hand, there are many known ways to improve peer review.