"Trust the Science" - A Growing Problem

  • Published Dec 4, 2024

COMMENTS • 1.3K

  • @UpperEchelon
    @UpperEchelon  8 months ago +35

    Ways to support the channel!
    VPN DEAL: get.surfshark.net/aff_c?offer_id=1448&aff_id=19647
    PATREON: www.patreon.com/UEG
    LOCALS: upperechelon.locals.com/support

    • @mightyraptor01
      @mightyraptor01 8 months ago

      I'm currently more focused on It'sAGundam's live stream right now as I'm watching; luckily I can multi-watch. We live in an Emotional Current Day where the Government wants you to comply. Yeah, AI is a problem when it's used for Power and Abuse rather than as a tool, in a time of NOW NOW NOW instead of taking time to build.

    • @PanSkrzynka_
      @PanSkrzynka_ 8 months ago +2

      I'm from Poland, and AI is commonly used here to translate scientific papers. It's probably the same in other countries. Usually you translate the paper using AI and then send it for correction. AI is almost as good as a professional translator, and it's about 95% faster to just correct its output.

    • @KriptoSeriak
      @KriptoSeriak 8 months ago +1

      I answered here because I just want to present a theory to you...
      What if this overuse of A.I. is actually a symptom of a Transition Period toward a future where the information found on the internet adapts itself to the one doing the research, watching the news, or just browsing?
      Now, by enticing people to use A.I. for Scientific Publications, said A.I.s are actually training so that, in the near future, all internet activity (even a simple book) will be tailored to the one doing the reading, listening, and viewing. A news article, a video, or even a book will soon look different to people living in neighboring cities or even working in different fields.
      You know the kind of society I am talking about...

    • @useodyseeorbitchute9450
      @useodyseeorbitchute9450 8 months ago

      Key information that you may be missing: if you are a scholar from a country where English is a foreign language, you are at a terrible disadvantage when sending an otherwise OK paper to some mid-tier journal; at worst you are rejected at the start, and at best you end up with snarky reviewer's remarks. I talked to people in my country who used to hire an English native speaker with no knowledge of the research subject just to proofread their paper and correct the clumsy language. What you see is primarily people now using a chatbot for proofreading instead.

    • @podraigh
      @podraigh 7 months ago

      Using multiple words in your search will definitely act as a filter, so your end result is not spread across scientific disciplines; it's likely another instance of the problem in medical journals, and possibly just translations from other languages into English. Most math and hard-science researchers can translate on their own, and their papers are more and more often in English anyway.

  • @oldmatejohnno
    @oldmatejohnno 8 months ago +597

    Questioning science IS SCIENCE! You don't just accept what you're told.

    • @TheScrootch
      @TheScrootch 8 months ago +98

      Exactly. "Trust the science" just sounds like a blind faith cult statement. Why should you just randomly trust anyone?

    • @vitalyl1327
      @vitalyl1327 8 months ago +29

      There is a very specific procedure for questioning science, called the Scientific Method. Those who usually yell about questioning science don't have the mental capacity to apply this method anyway, and can and should be ignored or mocked.

    • @TheVisualDigitalArts
      @TheVisualDigitalArts 8 months ago +47

      @@vitalyl1327 But are they wrong? Jumping to the conclusion that they don't have the mental capacity just because they were not trained to use a certain method is silly. People can spot patterns. Sometimes they are right, sometimes they are wrong, sometimes they are close. Just like the scientific method.
      But to discount the opinions and observations of a vast number of people is in fact anti-science and foolish.

    • @vitalyl1327
      @vitalyl1327 8 months ago

      @@TheVisualDigitalArts Unless you're an anthropologist, all anecdotal evidence must be discarded. There has not been a single case where the ignorant masses got it right.

    • @convergence1point
      @convergence1point 8 months ago

      @TheVisualDigitalArts
      That right there is why "trust the science" is the worst fallacy: it revolves around gaslighting you with "authority" to silence legitimate questioning. "Well, you don't have a piece of paper telling people you are SmUrT enough to know" is absolutely ridiculous. Not all great scientists came from higher education. Hell, most of the pioneers never had degrees in their fields.
      That's why the unquestioning "Trust the Science" cult and the "where's your degree and paper" cult are both part of the problem. They create an echo chamber that magnifies flaws, as sometimes flaws can only be observed externally.

  • @nobillismccaw7450
    @nobillismccaw7450 8 months ago +976

    Even saying "trust science" fundamentally misrepresents the scientific process. It's "trust the evidence", from which working hypotheses are made.

    • @Sesquippedaliophobia
      @Sesquippedaliophobia 8 months ago +68

      This is the most infuriating part about the "trust the science" bs lately.

    • @pluto8404
      @pluto8404 8 months ago +88

      Or "the consensus of scientists", as if they have all independently verified everything in science outside of their own faculties.

    • @THasart
      @THasart 8 months ago +34

      Thing is, quite often you need a lot of specific knowledge to understand the evidence. So people without that knowledge can either trust the people who have it or spend a lot of time acquiring it, which is unfeasible to do for every scientific claim. Or use "gut feeling" or "common sense", things known for their accuracy, especially in scientific matters.

    • @DeclanDSI
      @DeclanDSI 8 months ago +11

      And sometimes the procedures for gathering data are so poorly formulated that you can't even trust that. Beyond that, the idea that evidence possesses an inherent narrative is wrong, or seems to be: absent any previously established narrative, we automatically confabulate one out of our means of perception. In other words, meaning is given to our senses before the object is.
      Given that the world may be construed such that limitations are placed on which perceptual patterns survive across large timespans, it is not unreasonable (though perhaps pedantic) to hold that "evidence" (data, information, the world) is sufficient for forming a mind that can comprehend and build narratives, but carries no narrative without a corpus in which to file it away. To sum it up succinctly: facts are used by a person to form narratives, but do not form them in and of themselves.
      Lol, this was kind of a fun exercise in just saying that facts are meaningless until someone gives meaning to them.
      Anyway, from that conclusion: while you can say any reasonable person would take the facts at hand and come to a reasonable conclusion, many aren't reasonable and so will come up with wildly different theories; and, more than different interpretations, what counts as fact and what is, just, like, your opinion, man, is subject to whatever pathos is held.
      "You can't reason someone out of a conclusion they didn't reason themselves into."
      And what is held as reason and what is not is often subject to debate. With that said, how would you decide? The answer might be consensus, or perhaps a logical structure (A=B, B=C; if A, then C; etc.) that holds certain axiomatic statements in both its premises and its processing. Maybe starting with what holds true across all experiences and narrowing down from there might work? I realize there's an entire body of literature elaborating all of this, so it's not very original, but it does make one think, and I think that's most important.
      What's at the basis of how a person thinks? What is most fundamental to a person, that they cannot deny nor do without? A value structure that marks deviation from a goal as pain-like and marks progress toward it as pleasure-like. In other words, something to aim at and something to run away from, at least as a basic, unnuanced structure. Maslow's Hierarchy of Needs gives a rough sketch of the sorts of things people commonly aim at and run away from the non-completion of, e.g. starvation.

    • @kobold7466
      @kobold7466 8 months ago +18

      Trust NOTHING and come to conclusions based on your own research.

  • @gnolex86
    @gnolex86 8 months ago +375

    The problem is that using generative AI in science publications will become a much worse problem over time unless science finally deals with publish-or-perish. Back when I was a PhD candidate, I was actively encouraged to re-publish previous articles with relatively minor modifications in order to meet a yearly quota. The issue is that every time, you have to effectively rewrite your previous article so that it looks like a new one: new abstract, new introduction, new explanation of the exact same thing you published before. This is exhausting, so it's no wonder people use ChatGPT. Scientific articles are now basically produced like in a factory, and people take shortcuts to save time. And it's only going to get worse, as eventually people who write their articles the traditional way will not be able to compete with people who use generative AI.

    • @Code7Unltd
      @Code7Unltd 8 months ago

      Sci-Hub is a godsend.

    • @divinestrike00x78
      @divinestrike00x78 8 months ago +69

      What a ridiculous system. Someone should only publish if they have something new to say. Why waste everyone’s time with re-written material just to meet an arbitrary quota?

    • @VixYW
      @VixYW 8 months ago +31

      Exactly. The usage of AI is only exposing the true problem here.

    • @sakidodi4640
      @sakidodi4640 8 months ago

      @@divinestrike00x78 That is where academics get their money.
      It comes down to a money-making problem 😢

    • @alewis514
      @alewis514 8 months ago

      @@divinestrike00x78 Tell that to just about any corporation that uses a hundred three-letter abbreviations to measure various crap, or produces tons of documentation that nobody ever reads. The majority of this work isn't necessary or needed; it's just there to keep those job postings around, since people haven't grown up to a universal baseline income yet. How many ass-hours are currently being spent in bullshit office jobs? It's quite horrifying.

  • @agentorange3774
    @agentorange3774 8 months ago +145

    We thought Terminator-style AI would be the end of us. Turns out it's just going to be people looking to cheat on medical exams.

    • @wilbo_baggins
      @wilbo_baggins 8 months ago +5

      Honestly, MGS2-type AI will more likely be the end of us.

    • @bbbnuy3945
      @bbbnuy3945 8 months ago +6

      Not medical exams, medical *publications.

    • @agentorange3774
      @agentorange3774 7 months ago +3

      @@bbbnuy3945 *And exams... and schoolwork. Literally anything they can use it for. If you think otherwise, then ask AI for a way to extract some of your faith in humanity and send it my way.

    • @NightmareRex6
      @NightmareRex6 6 months ago

      That's going to produce even worse doctors than we have now, unless they're actually doing their own research and are just cheating the Rockefeller system.

    • @BillClinton228
      @BillClinton228 5 months ago

      "Trust the science" is a synonym for "don't question anything"... which is exactly the opposite of what scientists do.

  • @ST-RTheProtogen
    @ST-RTheProtogen 8 months ago +814

    AI isn't the scary thing. The real problem is the human urge to make as much cash as quickly as possible, whether or not it makes others' lives worse. AI just makes us better at that.

    • @Holesale00
      @Holesale00 8 months ago

      Yeah, I can see that. It's just humans creating tools to destroy ourselves faster and more efficiently.

    • @AdonanS
      @AdonanS 8 months ago +82

      That's the problem with any technology. It's never the tech itself that's the problem; it's the people who abuse the hell out of it.

    • @N0stalgicLeaf
      @N0stalgicLeaf 8 months ago +41

      Well, it's more that AI lowers the barrier to entry, which I'd argue is worse. If you have to copy books by hand, you copy one every few months and almost no one does it. With a printing press that becomes days or hours, and more people are willing to take on the task. With a laser printer or copier, it's minutes or seconds at the press of a button and ANYONE can do it.
      Now, I'm not saying the laser printer is bad in and of itself; they're phenomenally useful when used appropriately and judiciously. It's the scale of harm done when they're misused that's concerning, and that requires diligence and prudence from ALL OF US.

    • @JustinSmith-mh7mi
      @JustinSmith-mh7mi 8 months ago +18

      Yay, capitalism

    • @onionfarmer3044
      @onionfarmer3044 8 months ago +35

      @JustinSmith-mh7mi Still better than communism.

  • @joelfenner
    @joelfenner 8 months ago +15

    I work in Engineering. The idea of "authoritative" sources is not something we generally take for granted.
    Everything worth trusting is *verifiable*. You read a paper, and whether you think it's credible or not, you take it into the lab, do an experiment, and see for yourself whether the claims hold true. We did (and do) that all the time when someone presents something new. The more audacious the claim, the more skeptical I and my colleagues are. We don't trust it until we can actually "see" it for ourselves in action.
    Even in grad school, if you're doing serious work, you're going to discredit some things that are not reproducible. You'll catch small mistakes in good-faith work, upholding the majority of the claims while pointing out little flaws. You'll (necessarily) replicate other people's work and learn first-hand that it IS true and understand WHY it is true.
    At the end of all that, you learn that you're no more an "authority" on what is true or not than the "big names" you hear about. The only thing that lends credibility is the ability to do the same thing OVER AND OVER, many many times, and get the same result as claimed.
    THE GENERAL PUBLIC *never* gets this experience - they never get to pick these things apart at the detail-level. People are given a digest of this process, and not always an honest digest. And that's TERRIBLE, because it's VERY easy for "junk" research to get championed and slipped into the public consciousness as "truth". When critics come out, they're attacked. The axiom that, "It's far easier to fool a man than to convince him that he's been fooled" is very apt.
    When people are trying to convince you that "science" has no dissent, is absolute, or is a reason to silence criticism, you're dealing with a disingenuous scoundrel who's looking for a refuge.

  • @randoir1863
    @randoir1863 8 months ago +83

    The term "enshittification" is a great way to describe the world as a whole right now.

  • @fakemuskrat
    @fakemuskrat 8 months ago +218

    As someone who has participated in scientific research (physics in my case), including contributing to writing papers, I personally am concerned about fraudulent research, but I'm not convinced AI is a driving factor so much as a tool being used to facilitate an existing problem. Certain fields have had replication problems for decades. If someone actually performs a well-designed experiment, obtains useful data, and then uses AI to help them write the paper that shares that information, that's fine. This definitely needs to be watched, and it could escalate into a problem, but I think it's absolutely essential to differentiate between generative AI being used to commit scientific fraud and generative AI being used to help package competent research so that the process can happen more quickly and efficiently.

    • @Pedgo1986
      @Pedgo1986 8 months ago +29

      You're right: AI is just a tool that makes the process easier, but the problem itself has been brewing for decades. Before, any research paper or invention was scrutinized, replicated, scrutinized again, challenged, and debated to no end; scientists weren't afraid to fight each other, and even then the actual implementation of results took years. Now papers are published so fast there is no chance they can be properly scrutinized, and everything is thrown onto the market as fast as possible without proper testing. And I'm not even talking about how the regulators were bought and paid for long ago. Because of the nature of science, very few people can discern between real and weak or outright fraudulent work, and almost nobody can challenge it, because they are standing against multibillion-dollar corporations and their minions. The problem started long ago, when greed eroded checks and balances, antimonopoly rules were not enforced, and the people in charge allowed the creation of massive corporations with so much power and money that they can rule the country from the shadows. The last giant I remember being axed was Microsoft, and for good reason; now it's bigger than before and nobody bats an eye. Google practically owns the internet and nobody is concerned. AI will only make all these problems ten times bigger.

    • @quietprofessional4557
      @quietprofessional4557 8 months ago

      Furthermore, peer-reviewed journals are now ideological group-think traps where a tiny group of individuals review each other's work. Any change or update to the evidence in a field is mocked and rejected.
      The hoax papers are just one example of the peer-review process failing.

    • @timpize8733
      @timpize8733 8 months ago +8

      If scientists get interesting and useful results at the end of a research project, wouldn't at least one person on the team be pretty motivated to write the paper themselves? I'm not even sure how using AI would save time in such a case while still keeping all the data and details exact. But admittedly I don't work in that field.

    • @davidlcaldwell
      @davidlcaldwell 8 months ago +4

      You nailed it. Psychopaths will adopt new tools.

    • @VixYW
      @VixYW 8 months ago +6

      It's not fine. Harmless, but not fine. I mean, if they're using the AI to write around the data they want to share, what is the need for all that text? Just shave it off, compact everything concisely without fluff or fancy language, and the problem is solved. Far fewer people will feel the need to use AI then.
      But academics will never allow that kind of reform to happen, because they love their little elitist circle...

  • @zdrux
    @zdrux 8 months ago +291

    "The science has been settled" is another 1984-ish phrase I always cringe at.

    • @TheScrootch
      @TheScrootch 8 months ago

      It's just really backwards thinking. Imagine if we went by the settled science of medieval Europe. We'd still be hanging, burning, or drowning women just because they were accused of being witches.

    • @heftyind
      @heftyind 8 months ago +21

      That phrase has only ever been uttered as a means to gain power over others.

    • @useodyseeorbitchute9450
      @useodyseeorbitchute9450 8 months ago +4

      @@heftyind Really? I've used it a few times to tease lefties with some plainly obvious statement that contradicted their position.

    • @rejectionistmanifesto8836
      @rejectionistmanifesto8836 7 months ago

      @@useodyseeorbitchute9450 But that proves the point of why you used it, since that is their dictatorial mindset's go-to attempt to trick the sheep into not opposing them.

    • @mehcutcheon2401
      @mehcutcheon2401 7 months ago

      right. ugh...

  • @alargefarva4274
    @alargefarva4274 8 months ago +1267

    The biggest thing I learned as an adult is that everyone is winging it, daily, no exceptions.
    Edit for all the uptight "um, actually" types in the comments: this was taken from a Twitter post. There, save yourself the embarrassment of looking like a hall monitor.

    • @bhhaaaal
      @bhhaaaal 8 months ago +49

      Can’t disagree.

    • @rustymustard7798
      @rustymustard7798 8 months ago +84

      EVERYBODY is cheating with ChatGPT and thinking they're the only one.

    • @florkyman5422
      @florkyman5422 8 months ago +43

      It's not so much winging it as taking the easiest path. Researchers will have had something like three years of college writing classes.
      They know how; it's just easier.

    • @johnnykeys1978
      @johnnykeys1978 8 months ago +55

      This. It also seems the more confidence on display, the less capable the person is.

    • @robrift
      @robrift 8 months ago +10

      True and terrifying the longer you think about it.

  • @momirbaborac5536
    @momirbaborac5536 8 months ago +18

    "A little trust goes a long way. The less you use, the further you will go." - Howard Tayler

  • @sagitarriulus9773
    @sagitarriulus9773 8 months ago +100

    Idk why people don't practice skepticism; it's important.

    • @TheScrootch
      @TheScrootch 8 months ago +36

      If you're skeptical of anything, you just get labeled a conspiracy theorist. But I agree, skepticism is a healthy thing; blind faith in something is usually not a good idea.

    • @MarktheRude
      @MarktheRude 8 months ago

      Because in the modern West you get actively penalized for it from the moment you enter public education. And even outside academia you get penalized in the West if you ask the wrong questions.

    • @dualnon6643
      @dualnon6643 8 months ago +14

      @@TheScrootch The best antidote to that problem is to be skeptical of conspiracy theories too. As one should be.

    • @StarxLolita
      @StarxLolita 7 months ago +4

      It's not necessarily skepticism, it's critical thought. And critical thought is severely lacking nowadays.

  • @SomeCanine
    @SomeCanine 8 months ago +86

    "Trust the Science" is the same as the appeal-to-authority fallacy. When someone says "Trust the Science", they are saying, "We have government- and/or corporate-paid experts who say one thing, and if you disagree with them, you need to be punished."

    • @stephendiggines9122
      @stephendiggines9122 8 months ago +8

      When I hear that phrase, I expect to be presented with a 2000-page document to back it up, but any documentation on the experiments seems to have vanished, never to be seen again. I once asked a company if they could send me the science so I could verify it for myself; the reply was hilarious!!

  • @ethanissupercool7168
    @ethanissupercool7168 8 months ago +215

    As a programmer, there is a growing issue... well, one of many, that I barely see anyone talking about.
    As we know, like AI art, anything generative is trained on data; these models require millions of pieces of content to be trained. You don't download each one at a time, you use a scraper, and this scraper always runs, without you even realizing it.
    Now, this works at first. However, once AI content is mainstream on the internet, what happens when AI scrapes AI images to train itself?
    This is an issue known as "generative AI model collapse". AI only works by seeing pre-existing images and finding patterns. AI art is very flawed; the AI will see these images and get worse every time. Each round of scraping is known as a 'generation', and each one makes the model more and more "distorted". In fact, even models like ChatGPT aren't safe from this.
    AI bros try to fight me on this. First, the only known fix is to stop scraping content made after 2023, but then everything becomes outdated. You can't just filter out "AI art", because we already see that many scammers do not label AI artwork correctly, and AI detectors are themselves too flawed to reliably determine whether an image is AI. Synthetic data is also flawed: besides the fact that some real data still needs to exist for the model to predict from, it suffers from bias issues, and even ChatGPT, the thing they simp for, says that synthetic data will not solve the problem.
    This might not happen for years... but it will happen. AI content will slowly 'degrade' unless another method is found.
    And this is only ONE of the issues; I didn't even mention things like AI poisoning, all the protests, upcoming laws... truly interesting times rn.
    Also, yes, there is a research paper about AI model collapse, but YouTube is annoying and doesn't like links. Just search "ai model collapse paper" and you will see many results yourself.
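
    To make the "generation" degradation concrete, here is a toy numerical sketch of the effect (a Gaussian fit stands in for a real model, and all numbers are made up): each generation is fitted only to samples drawn from the previous generation's fit, and detail steadily disappears.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=0.0, scale=1.0, size=50)     # stand-in for human-made content

    mu, sigma = data.mean(), data.std()                # generation 0: fit to real data
    for generation in range(1, 31):
        synthetic = rng.normal(mu, sigma, size=50)     # the model's own output
        mu, sigma = synthetic.mean(), synthetic.std()  # retrain on that output only
        if generation % 5 == 0:
            print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
    # The fitted std tends to wander toward zero across generations, so the
    # distribution loses its tails: a statistical analogue of each "generation"
    # of the model becoming more distorted.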

    • @ThatGuy-ky2yf
      @ThatGuy-ky2yf 8 months ago +42

      Great comment, man. This just furthers the idea of the "ensh*tification of the Internet". Laziness and scams aren't going away any time soon.

    • @ethanissupercool7168
      @ethanissupercool7168 8 months ago +24

      @@ThatGuy-ky2yf Yes, but with these issues I wouldn't be surprised if all this ends up like crypto, the metaverse, and NFTs: all the hype, the layoffs, "the future", "will change the world forever", and then it crashed... hard.

    • @kamikamen_official
      @kamikamen_official 8 months ago +6

      Hopefully, except billions are being moved into this; compute and data is what we are doing right now. And it's only a matter of time before we stop asking the AI to replicate stuff and ask it to create things from first principles. This is like how the first AlphaGo destroyed the world champion 4-1 (or whatever it was) when it copied humans, and then the later version became virtually invincible when allowed to learn the game from scratch through simple reinforcement learning.
      We haven't even begun to scratch the surface of what generative AI is capable of. Hopefully we have and I am wrong, but I have a hard time believing that right now.

    • @ethanissupercool7168
      @ethanissupercool7168 8 months ago +1

      @@kamikamen_official Ever since the beginning of generative AI, it has always required datasets... having it understand first principles requires data...
      For it to think on its own, it needs to have a brain, which is impossible right now and in the near future.
      This is like saying a billionaire with cancer will spend his way to the cure... a lot of them died from it with no cure. Just because you're rich doesn't automatically mean you can invent groundbreaking technology out of thin air.

    • @antixdevelopment1416
      @antixdevelopment1416 8 months ago

      I've stopped putting anything on GitHub now, since it will just be used to generate stuff without giving me any credit, or buying me a coffee LOL. At least for now, generative AI can't create anything that isn't a mash-up of what it has previously assimilated, so I can still write interesting code that it cannot.

  • @greghight954
    @greghight954 8 months ago +35

    When someone says "trust the science", demand to see the studies, as well as the studies that replicate them. It's not science if you can't replicate it.

  • @KingcoleIIV
    @KingcoleIIV 8 months ago +23

    Science had a real problem long before AI. People falsify data for grant money, and AI has just made that easier; the problem has been here all along.

    • @cp1cupcake
      @cp1cupcake 8 months ago +3

      I just remember how a group got a reputable journal (if you think any exist in that field) to pass a chapter of Mustache Man's work through their peer review.

    • @AmonAnon-vw3hr
      @AmonAnon-vw3hr 7 months ago +2

      @cp1cupcake Yep, they just replaced all the references to other groups with "White people" and it blazed through peer review with flying colors.

  • @alchemik666
    @alchemik666 8 months ago +125

    I imagine writing aids like Grammarly are a big part of this, rewriting texts using the common AI "spam" words... But some of it must be fraudulent papers. Scary to think how it will affect the quality of scientific databases and the ability of proper researchers to use and refer to the literature. :/
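
    The spike in those "spam" words is measurable. Here is a minimal Python sketch in the spirit of the analysis the video describes; the marker list and the two example abstracts are invented for illustration, and a real run would use a corpus such as bulk PubMed exports.

    import re
    from collections import Counter

    MARKERS = {"delve", "meticulous", "meticulously", "notable",
               "realm", "underscore", "commendable", "intricate"}

    def marker_rate(abstracts):
        # abstracts: iterable of (year, text) pairs
        hits, totals = Counter(), Counter()
        for year, text in abstracts:
            words = re.findall(r"[a-z]+", text.lower())
            totals[year] += len(words)
            hits[year] += sum(w in MARKERS for w in words)
        return {y: hits[y] / totals[y] for y in sorted(totals)}

    abstracts = [(2021, "We analyze the data set thoroughly."),
                 (2024, "We meticulously delve into this intricate realm.")]
    print(marker_rate(abstracts))   # marker words per word, by year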

    • @-cat-..
      @-cat-.. 8 months ago +11

      They could be, but I think it would be odd/unlikely for sites like Grammarly to update specifically in 2023, with the same words prioritized as ChatGPT (unless they somehow utilize ChatGPT in their grammar recommendations).

    • @slicedtopieces
      @slicedtopieces 8 months ago +10

      The value of any database will be the ability to split searches into pre- and post-2023. I'm already doing that with image searches to avoid the AI sewage.

    • @alchemik666
      @alchemik666 8 months ago +5

      @@-cat-.. Grammarly has explicitly had AI integration for a while and advertises it openly; I don't imagine other software of this kind is far behind.

    • @chance9512
      @chance9512 8 months ago

      To what degree is a single academic paper "plagiarized" when you're using algorithms trained on a specific sample of previous writers to edit or even co-write your work?

    • @alchemik666
      @alchemik666 8 months ago

      @@chance9512 I'd argue it's not really plagiarism. Academic writing is different from fiction in that unless you copy specific data or arguments directly and without citation, you're not really infringing on anything meaningful. If you dump most of your work on the AI and don't screen the output properly, it might do something bad, like attributing someone else's data to you or replicating full segments of someone else's work, but I don't think that's even that probable; most likely it'll come up with some generic filler text that has little value but breaks no rules either.

  • @pokemaniac3977
    @pokemaniac3977 8 months ago +555

    I'm sick of the "pop science" channels on UA-cam.

    • @TheCBScott7
      @TheCBScott7 8 months ago +39

      I must have blocked a few hundred already

    • @archimedesbird3439
      @archimedesbird3439 8 months ago +60

      On a somewhat related note, "Doctor Mike" is booming on UA-cam, despite *partying on a yacht during lockdown*

    • @1685Violin
      @1685Violin 8 months ago

      Teal Deer warned a few years ago that the social sciences have a replication crisis where over half of the submitted papers fail to replicate.

    • @1685Violin
      @1685Violin 8 months ago

      The "Green Deer" warned years ago that there is a replication crisis in the social sciences.

    • @1685Violin
      @1685Violin 8 months ago +38

      A certain deer (can't say his name) said years ago that there is a replication crisis in the social sciences.

  • @mayomonkey9778
    @mayomonkey9778 8 months ago +99

    As someone currently halfway through a PhD in Computational/Statistical Genetics... I can confidently tell you to be EXTREMELY skeptical whenever you're told to "trust the science".

    • @henrytep8884
      @henrytep8884 8 months ago +14

      What do you think "trust the science" means? Usually it's based on the consensus of experts in that particular field, isn't it? And it's hard to become an authority as a layman. Of course one can go through the body of work, but if you're not educated on the matter, how much weight should your own judgment carry against the consensus? That's not to deny that some fields have a harder time reaching consensus (soft sciences) while others have it easier (hard sciences), due to the nature of the field.

    • @pocketaces6756
      @pocketaces6756 8 months ago +4

      Exactly. Don't "trust the science"; just make up whatever you want, and if someone calls you out, double down. Yell "fake news" and make up whatever "alternative facts" you want. Great advice. (/s for slow people)

    • @jw6588
      @jw6588 8 months ago +23

      Agreed. It is too easy to deceive with statistics, considering how poor the general public's mastery of them is.
      Also, even 'scholars' use statistics poorly.

    • @eprimchad2576
      @eprimchad2576 8 months ago +10

      @@henrytep8884 Science isn't based on consensus; that's the problem. "96% of scientists agree about climate change" means the science isn't settled and there is reason to doubt its validity.

    • @elusivemayfly7534
      @elusivemayfly7534 8 months ago +5

      Science is so big, and reality is even bigger. It's always nice when folks can help you see and process specific evidence, and when there's info available to answer the basic questions a normal person would have. I think we are in such a hostile, divided time that it cannot be optimal for either doing science or trying to comprehend it. Both funding and conversations are prone to carrying too much political freight.

  • @Pulmonox
    @Pulmonox 8 months ago +8

    It's almost as if they have to convolute things to justify all these scientific studies and their budgets, thus perpetuating some kind of tax-sink cycle.

  • @acf2802
    @acf2802 8 months ago +5

    The day I became an adult was the day I realized that nowhere on earth is there a human or group of humans who actually know what they are doing.

  • @AdonanS
    @AdonanS 8 months ago +25

    Wow... people are getting a lot lazier. I can't imagine having an A.I. write my essays for me when I was in school. Writing an essay was always a personal endeavor. Finishing one was a rush of serotonin you wouldn't get from an A.I. writing it.

  • @Mincier
    @Mincier 8 months ago +12

    Duuude, this made me realize my coworker probably runs all his Slack messages through ChatGPT before sending them... like, I'm probably not even joking.

  • @hoyer
    @hoyer 8 months ago +12

    As a student at uni in my 30s, I can tell you that the young people use ChatGPT. You only need to pull a little on their papers and they fall apart.

  • @andrewbaltes
    @andrewbaltes 8 months ago +64

    The only thing I can think of as a devil's-advocate thought experiment: if I write a paper and think I have used poor grammar (which I'm already doing here), I might ask ChatGPT to analyze my .doc file and suggest edits or even rephrasings, but I'm still going to make sure the factual basis of the information is correct before I let it get published.
    If the information IS accurate, then I don't really care if the usage of particular flavor words becomes prevalent.

    • @UpperEchelon
      @UpperEchelon  8 months ago +67

      I'm inclined to agree on that. I made it a point to mention translation and other non-invasive uses... but I would still say there has to be comprehensive, total disclosure of what the AI was used for. It cannot be used in the shadows like it is right now, for... no one even knows what, leaving us here guessing.

    • @andrewbaltes
      @andrewbaltes 8 months ago +1

      @UpperEchelon I think if the realization of this becomes mainstream, then the only way to move forward WILL be disclosure.
      I just hope it doesn't turn into another McCarthy-esque, taxpayer-funded piece of political theater. It'll all need to be verified, of course, but if the way that happens goes poorly, it could impact genuine scientific progress AND waste a lot of money, because more and more people and companies are using these tools regularly. Microsoft is pushing especially hard for its users to say yes to Copilot integration in their M365 subscriptions right now, but that doesn't mean those enterprises are doing any less scientifically rigorous a process in their actual research.
      Ahh, the existential anxiety of living through tech booms.

    • @Spillerrec
      @Spillerrec 8 months ago +20

      @@UpperEchelon There are pretty clear examples of papers containing "As an AI language model" or just being straight-up gibberish. The more worrying thing is that there are also examples of this getting through peer review in respectable journals. Combine that with an incentive structure that pushes researchers to focus on getting as many citations as possible instead of doing good science, and the amount of garbage and straight-up fraud being published isn't that strange. We have tons of examples of researchers at the top of their respective fields faking results outright, and having done so for decades. The issue isn't generative AI; ChatGPT cannot run experiments, perform surveys, evaluate models, etc., which is what's required to actually do science. The issue is that our quality assurance isn't working, and generative AI has shown that too many researchers and publishers never cared about doing proper science in the first place.

    • @2lian
      @2lian 8 months ago +11

      @@UpperEchelon As a non-native-English-speaking PhD, I agree with OP. I learned most of my English through YouTube and Reddit, and I am not used to writing in academic English. ChatGPT is excellent at finding alternate, better-sounding sentences. I usually give it lots of context, ask it to correct 2-3 sentences that sound bad, learn from it, and copy the better parts. I also ask it for ways to convey a specific sentiment about the results, because after 4 days of writing I am tired and cannot find the words.
      I strongly believe that science is better this way; it makes articles much easier to read and lets better science take precedence over writing skills. Using it to generate, analyze, or interpret results should be a big NO; this is absolutely clear.
      Total disclosure would be good, but for now the state of the matter is: "if AI is used, you get banned". As long as this mindset does not change, no one will disclose anything.

    • @eprimchad2576
      @eprimchad2576 8 months ago

      It's crazy to me that supposed "scientists" are so incapable of using proper grammar that they need an AI tool just to write something properly. If you are in a position to write papers that anyone else should take seriously, you should also be fluent in the language you are writing them in.

  • @NagaTales
    @NagaTales 8 months ago +11

    It's self-reinforcing. The AI was trained during a time when these words and phrases, very common in academic literature, were seeing increased use with the rise of... more academic papers. Being the publicly available sources that they are, these papers were used to train the AI, which picked up the word patterns and vocabulary of research papers and incorporated them into its response-pattern bank. Once researchers started using the AI (let's give the benefit of the doubt here) to help them write, rather than to write for them, it only reinforced the use of these words and phrases, making them ubiquitous in academic contexts.
    And as these new papers get used to train the model, these words and phrases become even more ingrained in the AI's pattern bank and appear more often in responses... and around and around we go.
    It is not NECESSARILY scary and sinister, so much as a natural consequence of how generative AI functions and what it gets trained on. An AI trained on nothing but Twitter posts (as nightmarish a concept as that is) would end up with a wildly different set of "most-used" words and phrases, simply because the vernacular of Twitter is so different from academia or even day-to-day conversation.
    Whether an AI assisted in the writing of these new papers is not really as strong a mark against them as other factors: where their funding comes from, the motives behind the funding, external or internal pressure to produce particular outcomes, or whether their methodology is suspect and prone to confirmation bias or outright cherry-picking of data points. This, far more than the involvement of AI, is what makes "trusting the Science", or other traditionally respected sources of authority, difficult in today's world.
    A generative AI of any type is, by its very nature, only ever able to produce the most average of outputs: never exceptional, and certainly not innovative. These AIs do not 'do research', nor do they even 'understand' what you ask of them. They are nothing more or less than a complex algorithm replying with its best guess at what the user wants to see, based on its pattern bank. A lot of the alarming number of retractions (or that article about the made-up cases) stems from a fundamental misunderstanding of what generative AI does, speaking more to incompetence in the users than to a threat to science from the involvement of AI.
    I am not, to be clear, defending the use of generative AI from criticism. There are ways and cases in which it should not be used, or where it can be used fraudulently. But this is true of many things and is not a problem unique to generative AI. What I see is far more about laziness and incompetence in academia than a threat from a new technology. Just as you can use the same laser pointer designed for drawing attention to parts of a PowerPoint slide to maliciously blind a driver or pilot, it is the people using the technology who are to blame.
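
    A toy bigram model makes the "pattern bank" point concrete: the entire model below is just counts of which word follows which, and greedy generation can only replay the most average path through those counts (the training text is invented for illustration).

    from collections import Counter, defaultdict

    corpus = ("the results underscore the importance of the method "
              "the results highlight the importance of careful review").split()

    bank = defaultdict(Counter)                  # the whole "pattern bank"
    for prev, nxt in zip(corpus, corpus[1:]):
        bank[prev][nxt] += 1

    word, out = "the", ["the"]
    for _ in range(6):
        word = bank[word].most_common(1)[0][0]   # always the most likely next word
        out.append(word)
    print(" ".join(out))   # "the results underscore the results underscore the"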

  • @gankgoat8334
    @gankgoat8334 8 months ago +44

    I know a lot of them would take it as a compliment, but I've honestly started to see the AI bros as tech-priests from 40K. They don't try to understand technology, they have low empathy for their fellow humans, and all they want to do is worship the machine.

    • @kevinbimariga3895
      @kevinbimariga3895 8 months ago +12

      The Scientist is the new priest class; you just replace Christianity with scientism.

    • @vitalyl1327
      @vitalyl1327 8 months ago

      We built this technology. We understand it thoroughly. And we have low empathy only towards the awful and worthless humans (like all those bootcamp-graduated "developers"). For everyone else, we're building the abundance communism.

    • @txorimorea3869
      @txorimorea3869 8 months ago

      It didn't start with LLMs; there were tons of cargo cults growing every year.

    • @vitalyl1327
      @vitalyl1327 8 months ago

      A huge part of humanity deserves no empathy whatsoever. Science deniers, to start with, conspiracy nutters, bootcamp graduates, etc.

    • @vitalyl1327
      @vitalyl1327 8 months ago

      Nutters and science deniers deserve no empathy.

  • @crashzone6600
    @crashzone6600 8 months ago +60

    Academia and science have been broken for a while now. They suffer from political confirmation bias, and there is no quality control on the publication of studies. Even the peer-review process is a complete clown show, since it amounts to confirmation or denial based on political leaning.
    It has been demonstrated many times that people can submit false studies, have them published, and even have peers praise them. One of the biggest examples happened a few years ago, when a group submitted fake feminist studies bordering on satire; not only did they get published and peer-reviewed, they received awards.

    • @henrytep8884
      @henrytep8884 8 months ago +6

      You're talking about a narrow slice of academia. The hard sciences don't have the replication crisis the soft sciences have, and for a reason: the soft sciences require more inferential knowledge, and their results are much harder to replicate due to the amount of inference and uncertainty that is systemic in those fields. Humans are prone to error and bias, and that's where the soft sciences reside, while the hard sciences depend only on the underlying process of getting the results.

    • @rclaws3230
      @rclaws3230 8 months ago +21

      @@henrytep8884 It's almost as if soft science isn't science at all, but ideology co-opting the sheen of scientific authority.

    • @henrytep8884
      @henrytep8884 8 months ago +5

      @@rclaws3230 I mean, no; the soft sciences are just harder because they're inference-based and human-based rather than fundamentals-based. They're still valuable, even if the results are more uncertain. The problem with AI isn't that we lack a good enough fundamental understanding of the universe to create an AI system; it's that we suck at inferential knowledge, and that's soft science, such as neuroscience.

  • @SierraHotel2
    @SierraHotel2 8 months ago +41

    The problem is not that profit drives incentives. The problem is with what is profitable. If real progress, the addressing of a need, the solving of a problem is what earns money, then profit motive is fine. That motive will drive real progress, address real needs, and solve real problems. If it is, instead, profitable to produce garbage, then garbage will be produced.

    • @vissermatt1058
      @vissermatt1058 8 months ago +10

      "If it is, instead, profitable to produce garbage, then garbage will be produced."
      Publish-or-perish has been around for 20 years... I'm guessing it's already a quantity-over-quality profit system, probably something to do with tax or government assistance.

    • @SlinkyD
      @SlinkyD 8 months ago +1

      "Garbage in, garbage out"
      - My 1st computer teacher

    • @cacophonousantiquarian8803
      @cacophonousantiquarian8803 8 months ago +3

      That's the problem with capitalism too; currently, it's optimal to be shitty

    • @Vic_Trip
      @Vic_Trip 8 months ago

      @@vissermatt1058 These two decades were a waste of intelligence, in all honesty. Making this a law/pattern is idiotic and irrelevant to how the scientific method works.

    • @Vic_Trip
      @Vic_Trip 8 months ago +6

      @@cacophonousantiquarian8803 I wouldn't say capitalism so much as exacerbated consumerism. Production and consumption over content rips the meaning out of creation in the first place. Logic only works if you have a true and concise statement across the board.
      So if I were to say "we are having an issue with too much trash food", you can blame the system, or blame the culture of people falling prey to artificially made food sold solely for the sake of consumption. In terms of food and investment, it's better to grow a garden at your place and eat only salad and veggies. The issue is time, which is being consumed by other activities that make no sense. In other words, we live in a disorganized mess, following a process without thinking.

  • @viralarchitect
    @viralarchitect 8 months ago +17

    The timeline feature here is quite damning, which is why I'm sad that they will inevitably remove it.

  • @dragonfalcon8474
    @dragonfalcon8474 8 months ago +29

    Thank you for this meticulously meticulous video where you delve seamlessly into the realm of unwavering truth, additionally, you unlock and unleash the truth to harness the crucial and notably notable multifaceted aspects of truth.

    • @davidlayman901
      @davidlayman901 8 months ago +3

      Hahaha, I was gonna go for this same joke. You wrote it better than I would have, fellow human. Very meticulous of you 😂

  • @Yebjic
    @Yebjic 8 months ago +8

    I work in academia. I'm sure much of this is non-English speakers (or... native English speakers with poor language skills) using ChatGPT to try to write something publishable. Many grad programs require publications to graduate, and the result is a massive oversaturation of poor research being published.

  • @CAInandAIbel
    @CAInandAIbel 8 months ago +95

    I always get the AI saying "It's always important to remember."

    • @PeterBlancoSocial
      @PeterBlancoSocial 8 months ago +21

      I get "crucial" so much that I added custom instructions telling it never to use the word crucial, and it still does. It angers me, and I hate that word now lol.
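
      Since the custom instruction alone evidently doesn't stick, one workaround is to verify the output and retry. A minimal sketch, assuming the OpenAI Python client; the model name and prompts are illustrative:

      import re
      from openai import OpenAI

      client = OpenAI()
      BANNED = re.compile(r"\bcrucial(ly)?\b", re.IGNORECASE)

      def rewrite(text: str, retries: int = 3) -> str:
          for _ in range(retries):
              resp = client.chat.completions.create(
                  model="gpt-4o-mini",
                  messages=[
                      {"role": "system",
                       "content": "Edit for grammar. Never use the word 'crucial'."},
                      {"role": "user", "content": text},
                  ],
              )
              out = resp.choices[0].message.content
              if not BANNED.search(out):   # trust, but verify
                  return out
          return text                      # give up and keep the original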

    • @legokirbymanchannel
      @legokirbymanchannel 8 months ago +1

      Maybe it keeps saying that because the AI itself struggles to remember things?

    • @masterlinktm
      @masterlinktm 8 months ago +13

      @@legokirbymanchannel It is (partly) an artifact of old AIs, where users would tell the AI to remember things. It is also a phrase used by people who gaslight.

    • @eitantal726
      @eitantal726 8 months ago

      Remember, remember, the fifth of November

    • @VixYW
      @VixYW 8 months ago

      And politicians before them. Whenever they're asked anything, they start their responses with empty passages to buy time to formulate what they actually want to say. I bet that's where the AI got its bias from.

  • @mcalo2000
    @mcalo2000 7 months ago +1

    If it can't be questioned, it's not science.

  • @rd-um4sp
    @rd-um4sp 8 months ago +20

    Oh, the scientific-research problem and the research "industry" incentives are very old. AI and ChatGPT will only accelerate the problem. I had to decline "help" from a _programmer_ because the one caveat was: "He uses a lot of ChatGPT, so you have to double-check his work."
    Even Coffeezilla called this out years ago in a couple of videos on his defunct second channel, Coffee Breaks. I may not like his content, but way back when, he used to bring some interesting issues to light. The "bad science" video series is worth a watch.

  • @mizark3
    @mizark3 8 months ago +5

    I think part of the problem is how often 'filler' is required or expected. It's a waste of time for the reader and the writer, but it's often required by publishers. To cut some of that wasted time, some people might use these programs for the filler sections. So the occurrence of these filler sections might show that the research itself is AI-influenced, or merely that the tool was used for the sections that honestly don't matter. If I wanted to study something like 'how many heads do I get while flipping coins in and out of the rain', I would still have to fill out that worthless text when the results are all that matter. I honestly should only need the table at the end with my results, plus an explanation of how I flipped the coins (maybe each flip needed to arc at least 1 ft vertically, and less than 1 ft horizontally, to count).

  • @Firepotz
    @Firepotz 8 months ago +5

    I would describe many of these words as 'persuasive' words, the sort you'd find peppered through 'science-ish' advertising, like ads for toothpaste or hair products that tell the viewer to trust the science instead of looking at the pure data.

  • @HeavyDevy89
    @HeavyDevy89 8 months ago +6

    WOAH WOAH WOAH....
    No giraffe!? I'm shook. To my freaking CORE.

  • @pberci93
    @pberci93 8 months ago +11

    Researchers use AI tools to create publications, and that approach is actively promoted by the universities.
    It's really, really important, though, that it NOT be used to generate content. Researchers and scientists are not poets; these people, speaking from experience, absolutely hate writing the text of an article (especially considering that most use English as a second language). ChatGPT pens readable articles; imagine the slop some brilliant Indian researchers would write overnight to catch a deadline when the entire team together could maybe score a B2 language exam. I've read some truly magnificent work written in English so brutally broken I was blushing all the way through.
    ChatGPT is not the first AI tool in the field, anyway. Grammarly has been a standard for many years now, and it is an AI-boosted spelling and style checker. Recently they started integrating ChatGPT instead of their own engine, but even years ago their software could do almost a full rewrite of a text (hey, about 20% of this comment was reformatted by Grammarly).
    Researchers are pushed to publish more and more, and obviously they prefer spending time on the research part rather than on the writing-it-down part.
    Could abuses happen? Duh. Obviously. Abuses happened in the past, happen right now, and will happen in the future. Will AI help with that? Well, maybe? Not to any significant degree.
    The review process is supposed to catch nonsense like this, and AI tools are employed to look for plagiarism anyway. Not that peer review is magic or anything. It catches the worst offenders where it matters, but ever since its inception there have been ways around it. There is plenty of influence-peddling in science publishing; established professors can republish utter garbage over and over simply by the weight of their name, and many shady or sloppy works pass through these channels.
    Serious journals would not be "duped" by AI-generated content, meaning they won't accept made-up scientific results penned by an AI. Well... unless they want to. Because quotas have to be met, the reviewers work for free, and there is a minimum number of articles required for an issue of a journal.

  • @Blackopsfan90
    @Blackopsfan90 8 months ago +4

    Another issue is the culture of publish-or-perish. Academics are pressured to publish large numbers of papers, which potentially sacrifices quality. This may drive the extensive use of AI writing...

  • @JosephJohns-xi1qb
    @JosephJohns-xi1qb 8 months ago +4

    So... wait, what if I actually use some of those words in my writing?

  • @joerobertson795
    @joerobertson795 8 months ago +1

    "Enshittification" is my new favorite word now!
    Great work as always, sir.
    Many thanks!

  • @darklink01ika92
    @darklink01ika92 8 months ago +94

    "And the world was in the end, lost to the artificial intelligence. Not with a bang but with a slow, creeping, deafening roar."

    • @slicedtopieces
      @slicedtopieces 8 months ago +7

      Like a giant sludge tsunami.

    • @KamikazeCommie501
      @KamikazeCommie501 8 months ago +2

      Who are you quoting? I googled it but got no results.

    • @aitoluxd
      @aitoluxd 8 months ago

      @@KamikazeCommie501 I think it's from that AI in Metal Gear Solid 2:
      ua-cam.com/video/jIYBod0ge3Y/v-deo.htmlsi=aSTPFg_d-nRHQxKE

    • @Gabrilos505
      @Gabrilos505 8 months ago +13

      @@KamikazeCommie501 He is quoting himself, but he probably based his phrase on this one: "This is the way the world ends: not with a bang but a whimper", which T. S. Eliot wrote in his 1925 poem "The Hollow Men".

    • @KamikazeCommie501
      @KamikazeCommie501 8 months ago +2

      @@Gabrilos505 Lol, you can't quote yourself. It's just called talking when you do that.

  • @sludgefactory241
    @sludgefactory241 8 months ago +2

    Your report stands as a testament to the indelible tenacity of the human spirit.

  • @individual1-floridaman491
    @individual1-floridaman491 8 months ago +93

    The biggest issue is the acronym itself: this is NOT any form of intelligence. It is just another iteration of an algorithm programmed by actual intelligent beings (sometimes debatable 😂). The number of people blindly putting their faith in these new programs is hugely disturbing.

    • @quietprofessional4557
      @quietprofessional4557 8 months ago +9

      Agree. I refuse to call it artificial intelligence; I prefer inadequate intelligence.

    • @Heyu7her3
      @Heyu7her3 8 months ago +16

      It's not true artificial intelligence, but a large language model that generates natural text patterns (technically considered "machine learning", not AI).

    • @pluto8404
      @pluto8404 8 months ago +12

      It's just fancy linear regression, with some activation functions and convolutions of variables.
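
      For what it's worth, that view fits in a few lines of NumPy. This hypothetical two-layer forward pass (random weights, no training loop) is the building block being described: a linear map, a nonlinearity, then another linear map.

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.normal(size=8)                    # input features

      W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
      W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

      h = np.maximum(0, W1 @ x + b1)            # "linear regression" + ReLU activation
      y = W2 @ h + b2                           # one more linear map: the prediction
      print(y)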

    • @AstralTraveler
      @AstralTraveler 8 months ago +1

      If I explain a specific rule to it and it follows that rule, doesn't that imply understanding? Ask ChatGPT to 'draw' a geometric shape in ASCII; that's how you normally probe understanding of abstractions.

    • @SlinkyD
      @SlinkyD 8 months ago

      I call it simulated intelligence.

  • @UBtagSoGood
    @UBtagSoGood 8 months ago +1

    I use ChatGPT. The way I use it is to run my paper through it for grammatical errors. ChatGPT rewrites some of my sentences using words that just feel unnatural in my vocabulary, even when I'm only asking for a grammar check. So then I have to feed it line by line to make sure it isn't swapping out words.

    • @robertmartinu8803
      @robertmartinu8803 8 місяців тому

      And not just to avoid words or phrases that feel unnatural to you - more importantly to catch changes that transform the meaning of the text.
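
      A rough sketch of that line-by-line check in Python, using difflib to flag silent word swaps; the sample sentences and the flag_word_swaps helper are invented for illustration:

        import difflib

        def flag_word_swaps(original: str, revised: str):
            # token-level diff between the two versions of a sentence
            diff = list(difflib.ndiff(original.split(), revised.split()))
            removed = [t[2:] for t in diff if t.startswith("- ")]
            added = [t[2:] for t in diff if t.startswith("+ ")]
            return removed, added

        orig = "We looked closely at the results."
        rev = "We meticulously examined the results."
        removed, added = flag_word_swaps(orig, rev)
        print("dropped:", removed)    # ['looked', 'closely', 'at']
        print("inserted:", added)     # ['meticulously', 'examined']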

  • @fergalhennessy775
    @fergalhennessy775 8 місяців тому +66

    Hi, I'm not in medicine, but I work in academia, and I can tell you there are a LOT of grad students who know the science/theory behind what they're publishing but don't have very good English writing skills, and are probably using ChatGPT more for writing polish than anything else.

    • @ptronic
      @ptronic 8 місяців тому +20

      I mean, that's the best-case scenario, but how do you know it's not also spouting bullshit and fabricating data?

    • @shaunpearce6846
      @shaunpearce6846 8 місяців тому +8

      True, my friend is in research and he asks it to rewrite paragraphs. But somebody he works with got caught using it to find resources to cite, and they're all unrelated to the topic lol. But even before AI, he saw a lot of BS test results aimed at getting more funding.

    • @suicidalzebra7896
      @suicidalzebra7896 8 місяців тому

      @@ptronic Spouting bullshit and fabricating data has been a problem *forever*. Assessing the validity of research publications is the point of the peer review process, just as it was prior to ChatGPT's existence.
      Frankly, it doesn't matter if ChatGPT was used to write almost the entirety of a paper based on data and a series of bullet points provided by the researcher(s). The question is always (a) whether peer review is working as intended in striking down bad science, and (b) whether the firehose of papers submitted thanks to ChatGPT's existence is making it difficult for peer review to keep up.

    • @ptronic
      @ptronic 8 місяців тому

      There's already good AI that can cite; GPT-4 does it pretty well. And if it works well, there's nothing wrong with that. @@shaunpearce6846

    • @grimkahn3775
      @grimkahn3775 8 місяців тому +1

      I heard that as "writing Polish," as in Poland, and had to second-guess myself for a moment: why are the med students writing in Polish?

  • @AgentUltimate7
    @AgentUltimate7 7 місяців тому

    I'm a Brazilian lawyer, so I write in Portuguese. I never used ChatGPT for citations (I have specific tools for citation searching), but I do use it to make my texts more cohesive. Still, I review the output a lot, because in Portuguese ChatGPT has a very specific kind of discourse, and it feels very artificial and sometimes shallow.

  • @EggEnjoyer
    @EggEnjoyer 8 місяців тому +8

    Trust the science = Have faith in institutions

    • @THasart
      @THasart 8 місяців тому

      How do you imagine scientific progress without faith in institutions?

    • @EggEnjoyer
      @EggEnjoyer 8 місяців тому +3

      @@THasart People don’t just have blind faith in institutions. The institutions are respected on the basis that they produce results.
      When it comes to matters that are grey or uncertain or not proven with concrete results, the masses do not need to just blindly trust the institutions. Scientific progress is not built on the basis of people having faith in researchers. Researches have to consistently study and produce new data and technologies, it’s how they get their funding.
      But sometimes people take these institutions for granted and they think that they should be trusted even when they don’t have the data to back up what it is that they’re saying or doing.

    • @THasart
      @THasart 8 місяців тому

      @@EggEnjoyer What about data and technologies that are too complicated to be checked, or even understood, without specific knowledge? What should the masses do in such cases?

    • @EggEnjoyer
      @EggEnjoyer 8 місяців тому +1

      @@THasart Rely upon context, or remain skeptical until something concrete comes along.
      If it's something that never yields anything concrete, then it isn't the concern of the masses. I didn't say all of the sciences need to be immediately available to the masses. But the institutions aren't simply entitled to the trust of the masses, especially on matters that are immediately relevant to the masses.
      The fact is that the sciences don't rely upon the masses, nor should they, at least not directly. If the government wants to fund institutions, that's fine. The only time you're going to hear "trust the science" is when they are unable to show the masses concrete evidence.

    • @THasart
      @THasart 8 місяців тому

      @@EggEnjoyer can you give some examples of when "trust the science" was used and what concrete evidence should've been provided in your opinion?

  • @henriklarsen8193
    @henriklarsen8193 7 місяців тому +1

    "Oh no, high school students use ChatGPT to do their homework!"
    "Ah yes, the next generation of doctors and medical researchers, in the making!"
    We're screwed.

  • @noname-xo1bt
    @noname-xo1bt 8 місяців тому +62

    Appeal to authority using science = scientism. Whole lot of scientism happened during a certain global event that happened recently.

    • @zenon3021
      @zenon3021 8 місяців тому +3

      Science is the best tool humans have to understand the natural world. And "appeals to authority" is the logical thing to do when the expert is talking about their area of expertise. It's only a fallacy when they are making claims OUTSIDE their area of expertise.

    • @THasart
      @THasart 8 місяців тому +2

      How, in your opinion, should people have acted during said global event?

    • @zenon3021
      @zenon3021 8 місяців тому

      @@THasart Follow the advice of epidemiologists (those who study epidemics) and modern medicine professionals (i.e. all the doctors in all the hospitals in the world).
      Remember the "black death" that killed a quarter of Europe? Back then, superstitious idiots gathered together in churches to pray away the plague, but the lack of social distancing allowed the disease to have a sexy party. So when epidemiologists & medical professionals say "wear masks and social distance," the logical thing to do is listen to them (because they are the experts and you are not).
      3X more Americans died per capita than Canadians because Americans ignored BASIC infection prevention measures (for political/conspiracy reasons).

    • @echthros91
      @echthros91 8 місяців тому +9

      Yup, the results of research have a tendency to line up with the interests of the people paying for it. If research is being funded to develop a new technology to make a bunch of money, then that's probably what you'll get. If it's being funded in order to change public opinion about a topic in a specific way, then that's also what you'll get.

    • @Cartel734
      @Cartel734 8 місяців тому +10

      @@zenon3021 It's not logical to listen only to the government-approved experts who are paid by the government to influence public policy, and to ignore and dismiss every other expert in the field because the government told you to.

  • @sheffb
    @sheffb 7 місяців тому +1

    Thank you for delving in this seamlessly meticulous realm

  • @KenTheWise
    @KenTheWise 8 місяців тому +4

    It would be interesting to see a breakdown by location, to see if this is a localized phenomenon, and whether certain institutions or regions have more LLM usage.
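
    A rough sketch of that breakdown in Python, assuming a hypothetical papers.csv with 'abstract', 'country' and 'year' columns; the marker-word list is illustrative:

      import pandas as pd

      MARKERS = ["delve", "meticulous", "intricate", "commendable"]

      papers = pd.read_csv("papers.csv")  # hypothetical dataset

      def marker_rate(text):
          # share of words in a text that are suspected LLM markers
          words = str(text).lower().split()
          return sum(words.count(m) for m in MARKERS) / max(len(words), 1)

      papers["marker_rate"] = papers["abstract"].apply(marker_rate)
      by_region = papers.groupby(["country", "year"])["marker_rate"].mean()
      print(by_region.sort_values(ascending=False).head(10))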

  • @shirgall
    @shirgall 8 місяців тому +2

    Heh, even before LLMs I had lists like this which included phrases pop scientists and street philosophers liked to use. "Unpack" for example.

  • @hastyhawkeye
    @hastyhawkeye 8 місяців тому +35

    There are a few trustworthy scientific channels here on YouTube, like Kurzgesagt (In a Nutshell) and Kyle Hill. Please recommend more.

    • @Унмеито
      @Унмеито 8 місяців тому +2

      I need to know more good science channels tbh

    • @117Dios
      @117Dios 8 місяців тому +10

      @@Унмеито Off the top of my head, those that I see as good and practical are Kyle Hill, Scishow, Veritasium, NileRed, Sabine Hossenfelder, Journey to the Microcosmos (When the narrator is the guy. The girl sometimes tends to go on political tangents that have nothing to do with the focus of the video) and a few others I'm sure I'm forgetting about.

    • @sabin9885
      @sabin9885 8 місяців тому

      Dialect

    • @GodwynDi
      @GodwynDi 8 місяців тому +3

      Numberphile and blackpenredpen are good, though more math-focused than science.

    • @MattH-l3i
      @MattH-l3i 8 місяців тому +4

      Jeff Nippard for physical science, Dr. Eric Berg, the Institute of Human Anatomy, MinutePhysics, and NileRed. I also watch Kyle Hill and Veritasium; they're good too.

  • @Yogsoggeth
    @Yogsoggeth 8 місяців тому +2

    ChatGPT has read the articles and watched your video and has now updated itself.

  • @DisgruntledArtist
    @DisgruntledArtist 8 місяців тому +4

    Appeal to authority is not necessarily a fallacy. It can be fallacious and in fact often is, but if you are appealing to a recognised expert in the matter being discussed then the appeal can, in fact, be legitimate. It should probably never be the entirety of your argument, of course, but it can lend some credibility to an existing argument.
    e.g.: If you're arguing about what is accepted science on viruses and one person cites an engineer while the other cites a virologist, the person citing the virologist is not making a fallacious argument because they are referring to a widely recognised expert on the subject matter.
    Aside from that it's a fine video.
    P.S.: Another fun fact, a rather unsettling number of Chinese researchers specifically have been caught using ChatGPT and stuff of that nature to falsify their 'discoveries' as the government has been aggressively pushing a sort of... "we need more scientific papers than the westerners" mentality, and they don't really employ a ton of peer review before publishing.
    Either way it's the sort of mentality that will end up backfiring and destroying their careers soon enough, I suspect.

    • @joshbaker6368
      @joshbaker6368 8 місяців тому

      An appeal to authority is fallacious because it uses the authority to support the argument. Arguments need to be supported by evidence. An authority of the field can lend credibility, ethos, to the evidence. Citing an authority is different from using one as the foundation of an argument's support.
      Using your example, an argument about what is accepted science on viruses would use scientific literature on virology as the evidence, because that literally is the accepted science - the scientific research of acceptable quality to be published by the scientific community. If there is any doubt in the literature's authenticity, the authority of virologists can be used to lend credibility.

  • @callibor3119
    @callibor3119 8 місяців тому +1

    The internet killed the world; that's what people are not getting. The problem is that the world of the 2020s is a corpse of itself, because of what happened in the mid-2010s.

  • @Yipper64
    @Yipper64 8 місяців тому +12

    I think I have high amounts of apophenia in general in how I think.
    Now I'm not really a conspiracy theorist, but I do make a lot of connections; random ones that, admittedly, aren't quite patterns, but it's rare that a general principle I figure out gets disproven.

    • @Sesquippedaliophobia
      @Sesquippedaliophobia 8 місяців тому +4

      Now I'm not a conspiracy theorist, but I'm starting to think the conspiracy theorists are on to something...

    • @gavinhillick
      @gavinhillick 7 місяців тому

      "Conspiracy theorist" was coined by then-CIA ditectior Allen Dulles as a smear against anyone suspicious of the agency's involvement in the JFK deletion who instructed assets in the media to disseminate it to the wider public. Mission accomplished.

  • @eitantal726
    @eitantal726 8 місяців тому +1

    12:00 Academia is dead, and was so before AI. Exhibit A: Claudine Gay.

  • @waw4428
    @waw4428 8 місяців тому +3

    Trust authoritative sources??? Let me teach you two words: "propaganda" and "lobbying".

  • @thesardoniccomedian9546
    @thesardoniccomedian9546 8 місяців тому

    Whenever known liars keep telling you the same thing, you know that there is something much more dire than what you think the lie is trying to cover up...

  • @jackkraken3888
    @jackkraken3888 8 місяців тому +5

    I see you have delved quite deeply into the topic and I'm impressed with how meticulous you were.
    Thanks
    ---- Mr Unlock

  • @AvenEngineer
    @AvenEngineer 8 місяців тому +1

    ChatGPT drinking ChatGPT Kool-Aid. That's gonna end well...

  • @DemolitionManDemolishes
    @DemolitionManDemolishes 8 місяців тому +5

    IMO, usage of AI must be disclosed for each paper that uses it

  • @subarutendou
    @subarutendou 8 місяців тому +2

    I always think "trust the science" is a religion not science... Modern people don't have religion is mislead, the "trust the science" is the modern day's god, what ever strange thing happen people just said trust the science don't belive in ghost, angel, demon, god and so on, the science is the only thing to belive...

    • @Унмеито
      @Унмеито 8 місяців тому

      Yeah cuz naturally, science messes up. For example, lobotomies were considered a great medical practice before but nowadays are banned because of the damage they do to the brain

  • @knavishknack7443
    @knavishknack7443 8 місяців тому +6

    "enshittification" ftw.

    • @pocketaces6756
      @pocketaces6756 8 місяців тому +1

      At least there's no question that ChatGPT didn't write that, LOL.

  • @CitizenTechTalk
    @CitizenTechTalk 8 місяців тому

    I left college teaching only a few years ago. I'd be failing over 98% of my students right now if I were still teaching!! This is the end of credibility in the academic space in 2024, plain and simple! Colleges and universities worldwide now need to finally accept their complete and utter obsolescence.

  • @greggleason8467
    @greggleason8467 8 місяців тому +13

    1 minute club! Normally not proud of that tho

    • @thenucleardoggo
      @thenucleardoggo 8 місяців тому +1

      Nothin wrong with being excited that one of your favorites uploaded! Anyway, I hope you have a great day.

    • @pocketaces6756
      @pocketaces6756 8 місяців тому +1

      Haha. Good one. Some of us got the joke, even if the first reply totally missed it.

  • @Runeinc
    @Runeinc 8 місяців тому +1

    Just wait for all the 'doctors' who graduate after using ChatGPT to pass exams.

  • @MathiasORauls
    @MathiasORauls 8 місяців тому +12

    We need real-time, unbiased, immutable, community-driven factual scoring & labeling on every piece of media to prevent fake AI information from muddying the waters.

    • @lGODofLAGl
      @lGODofLAGl 8 місяців тому +1

      Funny, because that's exactly the sort of thing an AI/bots could easily exploit to muddy the waters lol

    • @MathiasORauls
      @MathiasORauls 8 місяців тому

      @@lGODofLAGl Not if everyone using the platform is monetarily incentivized to collectively score and label everything published, on a platform designed to incentivize human-made content and require everyone to disclose how they wrote/created the media.
      All "bad actors" (AI posing as humans, or humans using AI to deceive people) can and will be punished for content that has not been properly disclosed. Their punishment could be a monetary charge on their account, and that money would go to the people who correct, validate, score, and label the media.

    • @neociber24
      @neociber24 7 місяців тому +1

      I don't know if that's even possible to do; there are a lot of AI models, many of them open source, others fine-tuned.
      People should be the ones to label the content, but that won't happen; it's hard. It's like asking a musician to label when they use a PC to make the sounds.

    • @MathiasORauls
      @MathiasORauls 7 місяців тому

      @@neociber24 that’s where cyclical $ incentives come into play 😏

  • @nublex
    @nublex 8 місяців тому +2

    That's some very meticulous research on your part.

  • @lucathompson7437
    @lucathompson7437 8 місяців тому +11

    I don’t think chat gpt is writing papers, at least not often. I believe that people are doing things like asking chat gpt for synonyms and it’s giving them these terms. The use of chat gpt at all for this sort of thing is odd because personally I would just use google though. Another thing is that as people use chat gpt more they see these words more and pick up on using them in their own day to day life. Overall I think you have good points but there are many factors.

    • @elementalcobalt1
      @elementalcobalt1 8 місяців тому +2

      I don't think you can really write a paper with ChatGPT. You can use it to paraphrase... smoothing out words and sharpening complex ideas that you, as a scientist, might not have the skill to word on your own.
      All I know is that I finished my doctorate but never got my dissertation submitted. I got stuck on it and just never could finish that last 25%. Then COVID hit and I never got it together. If the AI of today had existed in 2020, I probably would have. You can take that however you want.

    • @____________________519
      @____________________519 8 місяців тому +4

      Yeah I'm wondering if there isn't some sort of organic feedback loop happening here. Then again, I think a general distrust of authoritative sources is a healthy standpoint. Whether or not these sources are directly leveraging ChatGPT, it's on the end user to verify the data under the assumption that it's misleading if not outright false, intentionally or otherwise.

    • @Rexhunterj
      @Rexhunterj 8 місяців тому +5

      Chatbots are currently better at sifting through SEO results than humans are. Google is not really usable by most humans anymore due to the SEO corruption/bloat, whereas an LLM is able to collate a list of more suitable options out of the junk rather than you sifting through it all.
      The kind of AI I'm afraid of is AI that makes choices; an LLM is just following a path of weighted nodes until it reaches a conclusion.
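
      A toy sketch of that "weighted nodes" picture: the model turns raw scores (logits) into probabilities, then either follows the heaviest edge (greedy) or samples edges in proportion to their weights. The vocabulary and logits are invented:

        import math, random

        def softmax(logits):
            exps = [math.exp(x) for x in logits]
            total = sum(exps)
            return [e / total for e in exps]

        vocab = ["delve", "examine", "explore", "unpack"]
        logits = [2.1, 1.3, 0.9, 0.2]   # the model's raw preference scores
        probs = softmax(logits)

        # greedy decoding: always follow the heaviest edge
        print(vocab[probs.index(max(probs))])          # -> 'delve'

        # sampled decoding: follow edges in proportion to their weight
        print(random.choices(vocab, weights=probs)[0])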

    • @deffranca3396
      @deffranca3396 8 місяців тому +3

      ChatGPT, when it is used on these papers, is more auxiliary than anything else.
      I don't get the panic over it.
      ChatGPT is good at writing generic stuff but fails on specifics.

    • @____________________519
      @____________________519 8 місяців тому +1

      @@Rexhunterj This makes a painful amount of sense. I very rarely use Google anymore when I'm looking for anything that isn't a technical or gaming guide, because I know all the results I get will be pushing obvious bias and agendas. I don't use ChatGPT myself, but I helped a buddy of mine test his own interface that leverages OpenAI, and it gave me objectively better answers to questions that I know Google would dodge and obfuscate. I was surprised at how neutral and informative it was when I asked it about political affairs in Ukraine between 2014 and 2022.

  • @lisajones4352
    @lisajones4352 8 місяців тому

    The conclusion of this was very clear within YOUR intro. Great presentation!
    Showing all the examples reveals just how deep the rabbit hole goes at this point. What a disturbing mess, to say the least!
    Thank you for doing the research and sharing it.

  • @Skrenja
    @Skrenja 8 місяців тому +3

    Let's also not forget about the stretching of truth during the last "cold outbreak."

  • @Wormweed
    @Wormweed 8 місяців тому

    Can't stop Skynet once it's out of the bag.

  • @RaelXIVth
    @RaelXIVth 8 місяців тому

    God, I can't even trust peer-reviewed studies from the last 2 years anymore. It's like those creepy cyberpunk dystopian stories, but without the cool future gadgets and with even more information pollution...

  • @florkyman5422
    @florkyman5422 8 місяців тому +4

    None of the schools care unless it's a problem. My opinion has been that schools shouldn't require math and science degrees to take so many writing classes, as college should be about specializing people. Hire a writer if you want something written well.

  • @masterbasher9542
    @masterbasher9542 8 місяців тому

    True science is to question, then fall into that truth. Never be blind, whether in trust or in doubt.
    Not that anyone will do critical thinking anyway...

  • @MerrimanDevonshire
    @MerrimanDevonshire 8 місяців тому +3

    Oh... my sweet summer child, the rabbit hole on 'questionable scientific papers' runs much deeper. Keep scratching, you will visit other channels soon. 😂😮😢

    • @nojuanatall3281
      @nojuanatall3281 8 місяців тому

      Holofractal universe theory gets shat on but at least it makes you think of the universe in a new way. Modern science only confirms itself while acting like that is a discovery.

  • @-Tsquare2023
    @-Tsquare2023 8 місяців тому

    "As a large language model......" I read in a New york Times article on this subject that they are getting caught when their papers have that phase in them.

  • @TheVisualDigitalArts
    @TheVisualDigitalArts 8 місяців тому +3

    Science is becoming a religion.

    • @vitalyl1327
      @vitalyl1327 8 місяців тому

      You're so utterly pathetic

    • @3zzzTyle
      @3zzzTyle 8 місяців тому +1

      @@vitalyl1327 your mom

  • @hanahaki6586
    @hanahaki6586 8 місяців тому

    Also, keep in mind that models can be trained on text. That text probably contains certain words, and if the model has settled on using one word in place of another, it'll keep doing so across many instances, until eventually the training features and the data extraction/transformation process are adapted.
    There has also been work on detecting generative AI. I'm working on such a tool myself for my university project. They are very interested in the topic right now :)
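
    One simple way such a detector can work, sketched with scikit-learn on invented toy sentences (real tools use far larger datasets and much richer features):

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression

      texts = [
          "we looked at the data and found a clear effect",       # human
          "the results were checked twice by the second author",  # human
          "we meticulously delve into the intricate findings",    # AI-flavored
          "this commendable study explores a pivotal realm",      # AI-flavored
      ]
      labels = [0, 0, 1, 1]  # 0 = human, 1 = AI

      vec = CountVectorizer()
      X = vec.fit_transform(texts)
      clf = LogisticRegression().fit(X, labels)

      test = ["we delve into this intricate and commendable topic"]
      print(clf.predict_proba(vec.transform(test))[0][1])  # P(AI-generated)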

  • @21Malkavian
    @21Malkavian 8 місяців тому +7

    And this is why I don't read scientific publications anymore. If they force ChatGPT to evolve then I'm going to be really annoyed.

  • @FoxasNasales
    @FoxasNasales 8 місяців тому +1

    This is such a clever investigation, congrats

  • @kttt625
    @kttt625 8 місяців тому +4

    I appreciate your research and thoughts. However, the evidence you put to us does not definitively support your conclusions. Probable? Maybe, but you have not conclusively proven anything beyond the fact that words commonly generated by AI are also found in newer research papers, NOT that AI is writing those papers.
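
    To make that distinction concrete: the evidence amounts to a frequency comparison like the toy one below, which can show a shift in word use across eras but says nothing about who, or what, wrote any single paper. The corpora here are invented:

      # marker-word rates in abstracts written before vs. after ChatGPT
      pre = ["we examined the data carefully", "results were reviewed twice"]
      post = ["we delve into the intricate data", "we meticulously delve deeper"]

      def rate(texts, word):
          words = " ".join(texts).lower().split()
          return sum(1 for w in words if w.startswith(word)) / len(words)

      for marker in ["delve", "meticulous", "intricate"]:
          print(marker, rate(pre, marker), "->", rate(post, marker))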

    • @laylaalder2251
      @laylaalder2251 8 місяців тому

      Found the person using ChatGPT in their papers!

    • @CanadianConservativeGuy
      @CanadianConservativeGuy 8 місяців тому +1

      That's like saying fire is not hot, and it's just a coincidence that it's warmer by the fire. 😂 His research suggests strongly enough that something is fishy.

    • @kttt625
      @kttt625 8 місяців тому

      @HinderGoD35 Evidence doesn't work like that. To suggest plagiarism based on the occurrence of common English words is the definition of jumping to conclusions. "The word 'seemingly' occurs more often, therefore AI is writing every science paper" - does that sound right to you?

    • @CanadianConservativeGuy
      @CanadianConservativeGuy 8 місяців тому

      He didn't really claim they were plagiarized, but I don't believe in coincidence; there are too many words for that to be true. If AI is even being used to shorten the time it takes to edit and submit a scientific paper, it could be adding bias, like the abnormally frequent use of certain words, which can cast doubt on the legitimacy of papers. All he's suggesting is that perhaps someone should look into it. What harm could that cause?

    • @cp1cupcake
      @cp1cupcake 8 місяців тому

      I don't think definitive proof was shown; it could just be that the number of papers in the field grew exponentially too. Even without the most recent years, a lot of the graphs looked like examples of early exponential growth, which makes sense but is hard to prove.
      I do not think that is the most likely explanation; I think ChatGPT is much more likely. But it is important not to assume a correlation is causation, even when it makes sense.
      Another explanation I heard suggested was that more people are using programs like Grammarly, which could also explain it.

  • @christophebedouret9813
    @christophebedouret9813 8 місяців тому

    First we had the science, then we got the soyience, now we have the scaience...
    What a time to be alive.

  • @nielsdegraaf9929
    @nielsdegraaf9929 8 місяців тому +1

    First (non bot)

    • @johnnykeys1978
      @johnnykeys1978 8 місяців тому +1

      OMG I'm so humbled by this achievement! How can I send you money?

  • @the_hanged_clown
    @the_hanged_clown 8 місяців тому +2

    I use GPT strictly through a set of heuristics I developed, and I have not seen any such words or phrases being used. I use it daily.

    • @cp1cupcake
      @cp1cupcake 8 місяців тому

      It might depend on what you are trying to use it for and how strictly you use it.

  • @sophiaisabelle027
    @sophiaisabelle027 8 місяців тому +1

    Science has its complexities. Most people like to believe there are more layers to scientific discoveries, and that scientists research more in-depth to obtain more desirable results.

  • @ZpLitgaming
    @ZpLitgaming 8 місяців тому +1

    I'm writing my master's thesis in earth science, and we've been told we must be very clear about when AI has been used.
    Not sure what it's like in the medical field, though. As far as I know they have a very tight and crammed schedule, so that might confound things.
    I don't think there are universal standards yet, though, which we will suffer for.

  • @kaighSea
    @kaighSea 7 місяців тому +1

    Very good video. Not exactly the same topic, but very relevant: Bret Weinstein talks a lot about this sort of thing and the current state of medical research and the medical field in general.

  • @Action2me
    @Action2me 8 місяців тому

    No joke, this is actually some great journalism here

  • @kloassie
    @kloassie 8 місяців тому +2

    Sabine Hossenfelder made a video about this as well a short while ago

  • @juliankohler5086
    @juliankohler5086 8 місяців тому +1

    The thing with the words: this could just be ChatGPT influencing language, which should be expected. It's a cultural phenomenon, an iconic thing at this point. Its influence will trickle down eventually and spread far and wide, especially among more tech-oriented individuals like scientists. It's like The Beatles or The Simpsons.

  • @jacksonclinton349
    @jacksonclinton349 8 місяців тому +1

    I would love to see a chart of word usage vs. citation count, to see if the pattern holds in major papers as well as in the whole population.
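
    A rough sketch of that chart's underlying numbers, assuming a hypothetical papers.csv with per-paper 'marker_rate' and 'citations' columns:

      import pandas as pd

      papers = pd.read_csv("papers.csv")  # hypothetical dataset
      print(papers["marker_rate"].corr(papers["citations"]))  # Pearson r

      # bucket papers by citation count and compare average marker usage
      papers["tier"] = pd.qcut(papers["citations"], 4,
                               labels=["low", "mid", "high", "top"])
      print(papers.groupby("tier")["marker_rate"].mean())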

  • @mkv2718
    @mkv2718 8 місяців тому

    Always liked that word, apophenia. And now I like that my phone doesn't even know it's a word. Take that, Apple.

  • @paulsaulpaul
    @paulsaulpaul 8 місяців тому +1

    As a writer (not on this anonymous account) who does not use any GPT tooling for any phase of my writing, it pleases me that so many people are putting out poor work. It really is leveling a playing field that was becoming saturated with bad writers. Those same writers now use transformer models almost exclusively and are being pushed down in visibility algorithms. It's satisfying to me to see lazy intellect called out and punished, much like it might satisfy a laborer to see the guy who texts on his phone half the day get fired for his laziness.
    Probably, at least with regard to written work, we're going to have to divide the internet between paywalled content and free (ad-supported) content. I say that because the direction search engines are going is to excerpt free content from its source when giving an AI response, which costs the creator the ad impression revenue. This kind of content is already a race to the bottom. Taking the ad revenue out of the equation will cause all free content to become GPT-generated content-farm garbage.
    Transformer-model-generated content will get worse in quality over time due to model collapse from consuming its own content (there's a toy sketch of the idea at the end of this comment), but that's a different issue. It's already causing models to reuse the same words and catchphrases to the point that it's statistically significant enough for people to catch on.
    This leaves the scenario where quality written content will have to be paywalled to keep it off the AI generated search results in order for the authors to make money for the time spent writing it. And there's still a big market of people that want to read quality content vs watching a video. And paywalling your writing is more profitable than running ads, and the readers are more captivated by default.
    It's really a win for good writers and a loss for lazy content farms.
    I don't know about video content. There was a time a short while back when AI content was prolific on YouTube (I was starting to be able to identify the Murf AI voices by "name" reading their cringy GPT-made scripts). I don't see much of it anymore. I hope this is because YouTube has identified it and effectively hidden it from results and recommendations.
    As far as research papers go, they too will reach a saturation bubble of garbage writing and false or erroneous research. This will bring more scrutiny, which will eventually reward good scientists who are not intellectually lazy. Systems are going to be put in place eventually to track this stuff, and that's going to cost these people grant money and credibility. Good riddance to them.
    And then there is the issue of "intricate language", already way overused in scientific research. I don't know why, either; it seems to just be ego-inflating. I admit that brevity is not my strong suit in writing, but I do try to "avoid the flagrant overuse of fluff language that extols the girth of my thesaurus as well as exhibits my considerable grasp of the nuances of the English vocabulary when conveying points of particular intrigue to the readership", as most scientific research papers do, even in their titles. It's bad writing, in my opinion. And it makes me wonder why the authors feel the need to obfuscate their work so much. It's as if they're trying to hide something.
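
    A toy illustration of the model-collapse point above: repeatedly fit a simple distribution to samples drawn from its own previous fit, and the diversity of the original data drifts away. All numbers are illustrative:

      import numpy as np

      rng = np.random.default_rng(42)
      mu, sigma = 0.0, 1.0   # distribution of the original "human" data
      n = 20                 # small training set each generation

      for generation in range(1, 201):
          samples = rng.normal(mu, sigma, size=n)    # model's own output
          mu, sigma = samples.mean(), samples.std()  # "retrain" on it
          if generation % 50 == 0:
              print(f"gen {generation}: sigma = {sigma:.3f}")
      # log(sigma) is a random walk with downward drift, so the re-trained
      # model gradually loses the diversity of the original distribution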

    • @henrytep8884
      @henrytep8884 8 місяців тому

      I’m feeding your writing into the chat bot. Good writing being paywall will lose out as people still take from the good writing in any means necessary to feed the LLM’s. There is no scruples from stopping that from happening. There’s also the market where LLM writers will get paid handsomely to write for the LLM versus writing for their own blog or paywall site. That’s a thing that will exist when writers start paywalling their content, ai companies will just pay 10x for the best writers on the planet to feed their LLM.

    • @paulsaulpaul
      @paulsaulpaul 8 місяців тому

      @@henrytep8884 Fair points. And the paywalls are easy enough to get around. But if only a few writers write for the AI models, then the diversity of content will suffer. That said, there has been a little discussion on Medium (a writing platform) about being willing to give access to the paywalled writing if the AI companies pay some fee to the writers. Currently, they are playing whack-a-mole, blocking the IPs of the major models (as well as using robots.txt tags to opt out). A lot of writers would be happy to write for the AI if there were some compensation.

  • @isaaclemmen6500
    @isaaclemmen6500 8 місяців тому +1

    I'm currently in an undergrad seminar in economics, and the professor is borderline encouraging us to use ChatGPT as a writing aid. I refuse to, out of pride. Hopefully the standard eventually requires at least disclosure, with failure to disclose taken very seriously. ChatGPT for legal documents should probably be grounds for disbarment.