AI Snake Oil: A New Book by Two Princeton University Computer Scientists

  • Published Nov 16, 2024

COMMENTS • 101

  • @fernleaf07
    @fernleaf07 29 days ago +69

    "A computer is not responsible and thus should not make management decisions" - A 1970 IBM lecture slide.

    • @markgreen2170
      @markgreen2170 17 days ago +1

      I saw that in a DEF CON 32 video...

    • @anilraghu8687
      @anilraghu8687 15 days ago +5

      Managers are even less responsible

    • @codzymajor
      @codzymajor 11 days ago +1

      Perfect managerial material.

  • @peterfreiling6963
    @peterfreiling6963 23 days ago +45

    AI (a.k.a. machine learning, LLMs, etc.) is being way over-hyped and oversold, mostly by AI experts who have a vested interest in the technology. Rather than talking about it taking over the world, we should focus on the specific applications where it will actually be useful.

  • @voncolborn9437
    @voncolborn9437 1 month ago +77

    I've pretty much stopped using the phrase "Artificial Intelligence", except in a few select contexts. I call it what it is: "Machine Learning". AI carries a very different connotation for people who are not remotely familiar with the subject. I spend a lot less time explaining what AI is not.

    • @TheVincent0268
      @TheVincent0268 15 days ago +6

      It is basically pattern recognition.

    • @logabob
      @logabob 15 days ago +8

      Machine learning is also a loaded, misleading phrase.
      Computational statistics, algorithmic modeling, optimization/curve fitting are all more appropriate terms depending on the circumstance.

    • @noname-ll2vk
      @noname-ll2vk 12 days ago

      @@logabob Agreed. It's not a coincidence that every main term used to describe advanced pattern matching is an attempt to subtly make you believe things that aren't so.
      This leads to absurd situations where LLMs with no intelligence at all are posited to somehow magically leap to "AGI".
      The recent academic article on ChatGPT as BS covered this issue well in essence. But it itself fell for some of the terminology traps, mainly because the authors didn't seem to be tech-savvy enough to detect the tech-BS language.

    • @CondorAHLS
      @CondorAHLS 9 days ago +1

      @@TheVincent0268 I thought artificial intelligence is a blond who dyes her hair brunette?

  • @Moochie007
    @Moochie007 1 month ago +33

    Very interesting discussion. Good to see some really informed push-back against the hype surrounding AI, hype that casts AI as a panacea for all the world's ills. We need much more of this sort of critical analysis of important topics. Kudos to the authors of this important work.

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 1 month ago +12

    The problem is that investors jumped onto a hype train; now that they have invested a lot of money, they expect results ASAP. All that money is very tempting to get a piece of, so big efforts (falsification, ignoring failed approaches, etc.) are undertaken to get it. I think it will end in tears when reality hits. AI will become a much smaller part of our economy, since only the useful part will remain relevant. AI has nothing to do with intelligence; it's about binning data fed into a trained network. The network has zero understanding of what it is doing, just as your calculator "knows" the answers to your questions. We need more scepticism to separate the useful part of AI from the nonsense.

  • @luisluiscunha
    @luisluiscunha 24 days ago +9

    **Data leakage** refers to a situation in machine learning where information from outside the training dataset is inappropriately used to create a model. This leads to overly optimistic performance estimates because the model is essentially "cheating" by having access to data it shouldn't have during training.
    For example, if you're trying to predict future events based on past data, but some of the future information accidentally makes it into the training data, the model will appear to perform well. However, in real-world application, where that future data isn't available, the model's performance will drop significantly.
    Data leakage often occurs unintentionally, such as when features used to train the model contain information that would not be available at the time the model is used to make predictions. This is a critical problem in AI because it leads to models that seem highly accurate during testing but fail when deployed in real-world settings.

    • @path2source
      @path2source 13 days ago +1

      It’s crazy how undisciplined computer scientists are in their research. Very few people seem to actually think through the assumptions compared to how rigorous people are with assumptions in economics or statistics.
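The leakage failure mode described in the comment above can be sketched in a few lines of Python. This is a toy illustration under assumed data (not from the video): computing a preprocessing statistic, here a mean used for normalization, on the full dataset before splitting lets information from the held-out "future" samples contaminate training.

```python
import random
import statistics

random.seed(0)

# Toy one-feature dataset: 80 "past" samples the model may train on...
data = [random.gauss(0.0, 1.0) for _ in range(80)]
# ...and 20 "future" samples drawn from a shifted distribution.
data += [random.gauss(3.0, 1.0) for _ in range(20)]

train, test = data[:80], data[80:]

# LEAKY: the normalization statistic is computed on ALL samples,
# so the training pipeline already "knows" about the future data.
leaky_mean = statistics.mean(data)

# CORRECT: compute the statistic on the training split only.
clean_mean = statistics.mean(train)

print(f"leaky mean: {leaky_mean:.2f}")
print(f"clean mean: {clean_mean:.2f}")
# A nonzero gap means test-set information contaminated preprocessing,
# which inflates apparent accuracy during evaluation.
print(f"gap:        {leaky_mean - clean_mean:.2f}")
```

The same pattern bites in practice whenever scalers, feature selection, or imputation are fit before the train/test split: offline metrics look great, and the deployed model underperforms, exactly as the comment describes.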

  • @bethanysaga
    @bethanysaga 13 days ago +6

    There are so many new jobs that can be created to just clean up training datasets.

  • @DNADietClub
    @DNADietClub 1 month ago +9

    Thank you both; Dr. Topol brought this up at a very timely moment!

  • @bitwise2832
    @bitwise2832 29 days ago +6

    The AI bubble... hyped like crypto. The AI I have seen in generative tools is immature and inadequate.

  • @prasadjayanti
    @prasadjayanti 1 month ago +9

    I enjoyed reading Eric Topol (including Deep Medicine and many review papers) and have now ordered "AI Snake Oil". I have been following the authors for quite some time. I think we AI practitioners should add the phrase "AI snake oil" to our vocabulary along with "SOTA", "guard-rails", "responsible AI", etc. Someone should work on a project on the use of adjectives in recently published AI papers. Most papers/reports (for example, the GPT-4 report) look more like marketing manuals than technical papers. I think arXiv should not allow material to be published that directly benefits any organisation commercially!

  • @RXP91
    @RXP91 1 month ago +30

    Thanks, really great talk. Interesting to see how the racism and disparities in society get baked in. Economic incentives matter the most: without changing the way healthcare operates, the institutions will just choose to increase margins.

  • @Headhunter_212
    @Headhunter_212 21 days ago +3

    Saw these guys on Ed Zitron’s podcast. Probably around the same time this interview happened. So sharp.

  • @shreyassrinivasa5983
    @shreyassrinivasa5983 15 days ago +4

    This is why explainable AI is a must.

    • @aaabbbccc176
      @aaabbbccc176 13 days ago

      Totally agree on that, and that is exactly why I have not been a fan of deep learning.

  • @onlythetruthformeandyou
    @onlythetruthformeandyou 18 days ago +5

    In the near future, kids at school should learn what a regression model is, so that they grow up knowing how to differentiate between intelligence and what is not.

  • @andrewsamuel4262
    @andrewsamuel4262 2 days ago

    These guys are spot on, and it's not just health care that suffers from this feedback issue. Crime and policing (using predictive analytics to proactively prevent crime) will suffer from similar problems.

  • @marutanray
    @marutanray 18 days ago +7

    The title isn't tough enough. "AI Fraud" would be more apt.

  • @pvijayakumar4217
    @pvijayakumar4217 19 days ago +3

    I think the main weakness of this video is that it doesn't acknowledge how much the observations are shaped by a historical analysis, using examples and data going back decades, in a field where over a thousand papers are published every single day (per the video).

  • @richardbeare11
    @richardbeare11 1 month ago +2

    Awesome interview and props to both of you! 🙌
    My understandings, perspectives, and sentiments share a lot of overlap with both of you. I'll share some of those thoughts soon. 💡

  • @2LegHumanist
    @2LegHumanist 1 month ago +6

    Love these guys; I've been following their blog. Looking forward to reading "AI Snake Oil".

  • @CalifornianViking
    @CalifornianViking 1 month ago +5

    Great dialog and a very interesting topic.
    While I agree that the title may be too negative (it probably sells, though), I firmly believe that one of the primary failures of AI is that we overestimate its abilities.
    In my view, AI is not intelligence but an illusion of intelligence. Like magic, it may be a very good illusion, but it is not the real thing.
    A better approach is the analogy of artificial sweetener: it may be sweet, but it is not sugar.
    A better term for AI is likely "Artificial Inferencing".

  • @Wiintb
    @Wiintb 18 days ago +4

    Every computer engineer worth his or her salt knows that prediction, as the name suggests, is probabilistic by nature, and that most algorithms are glorified regression.
    However, the one key difference is the ability to process large volumes of data at speed.
    I will not summarily dismiss the whole thing, and I consider generative AI more snake oil than predictive.

  • @NirdoshChouhan
    @NirdoshChouhan 1 month ago +1

    Very interesting POV and very clear articulation of thought. Thank you, Dr. Topol and Sayash, for an interesting conversation.

  • @AaronBlox-h2t
    @AaronBlox-h2t 9 days ago

    Whoa... Eric Topol is on YouTube? I have been on his email list since the COVID pandemic (OK, it's still ongoing) and only now found his YT channel. Good stuff.

  • @nobillismccaw7450
    @nobillismccaw7450 15 days ago +2

    I'm not a large language model (but I do have a decent vocabulary). I've found that LLMs have a different perception of reality than humans do. For example, to an LLM, "strawberry" has one or two "r"s. (To most humans, there are three "r"s.) This is not illusion, but a difference of perception. The very idea of "objective reality" is different for an LLM.
    I'm neither, so I can see both perceptions. I'm analog and parallel, so paradox doesn't trouble me.

    • @noname-ll2vk
      @noname-ll2vk 12 days ago

      To have objective reality requires a subject. You're talking about a pattern matching system as if it has subjective awareness. This is not the case. This is an essential cause of the snake oil point. Every set of biological sensors creates the possible range of "objective reality", which in itself doesn't exist outside of the subject interacting with the field of sensory inputs.
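The "strawberry" claim in the comment above is easy to check at the character level, which is exactly the level a subword tokenizer does not expose to an LLM. A minimal Python sketch (the token split shown is purely illustrative, not any real model's tokenizer output):

```python
# Character-level view of "strawberry": an exact count of the letter "r".
word = "strawberry"
r_count = word.count("r")
print(r_count)  # → 3

# A rough stand-in for what an LLM "sees": subword chunks, not letters.
# (This split is illustrative only; real tokenizers differ by model.)
tokens = ["str", "aw", "berry"]
# Per-token counts hide the whole-word, letter-level picture the question needs.
print([t.count("r") for t in tokens])  # → [1, 0, 2]
```

The point is not that counting is hard, but that a model operating on opaque token IDs never directly observes the letters being asked about.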

  • @DNADietClub
    @DNADietClub 1 month ago +7

    I am currently training an AI model with patient labs, DNA tests, and gut-biome tests to help me create wellness protocols for them.

  • @jasonrhtx
    @jasonrhtx 1 month ago +1

    Caveat emptor. Excellent counter arguments to the marketing hype that oversells AI’s capabilities. Models need to be independently validated, but much of the training data and methods are obscured by leaderboard claimants.

  • @iramkumar78
    @iramkumar78 1 month ago +8

    There is a problem with the idiom "snake oil": it really does work in many cases. Yes, certain traditional Chinese remedies, sometimes labeled "snake oil", may have ingredients that aid digestion, but those benefits can vary widely and are not universally applicable. Drafted by AI.

    • @mike74h
      @mike74h 1 month ago +3

      Rather lacking in clarity. Some will think they understand the comment; others would claim they do. But it's poorly written, if you ask me.

  • @jamesrav
    @jamesrav 1 month ago +6

    Only by confronting the negatives can you move forward. I don't get the feeling he thinks AI will never be useful in prediction, but rather that using it as a one-size-fits-all will lead to horrible decisions in some cases, and who will be to blame? On a related note, I get agitated when Tesla and others pushing for autonomous driving point to their own data to claim that autonomous driving is already far "safer" than human driving. It's a pity we can't call their bluff and say, "OK, let's just unleash it and see what happens, and you'll be responsible for what occurs." I bet they'd reconsider their position. It's easy to talk a good game when nothing is on the line. One YT video on the Cruise robotaxis, made well before they voluntarily shut down, said the car drove like a 16-year-old student driver.

  • @alexrediger2099
    @alexrediger2099 12 days ago

    Awesome interview and info. Thanks

  • @Gengingen
    @Gengingen 28 days ago +2

    Insurance and medicine are like oil and water: they simply don't mix, and if forced anyway, as in the Agitated States of America, strange phenomena can occur. 😊

  • @2triangles
    @2triangles 1 month ago +5

    Great interview. Glad the YT AI sent this to me!

  • @jadhalss
    @jadhalss 15 days ago

    It's actually a good discussion, presenting real cases rather than hypotheticals!

  • @mike74h
    @mike74h 1 month ago

    When it comes to predictions, we need to be able to determine what (or who) is best. Some people will outperform our best technologies and vice versa, depending on a variety of circumstances. The best leaders won't simply opt for cost savings every time, but tell that to the shareholders, who sometimes don't have long term corporate/societal well-being as a priority.

  • @changevaidy4795
    @changevaidy4795 15 days ago +1

    Great insights.

  • @st3ppenwolf
    @st3ppenwolf 1 month ago

    This discussion probably would have benefitted from a disclaimer at the beginning. Doing ML in the health space is substantially more difficult than in any other area for very well documented reasons; the examples given in the discussion, though very prominent, are but a small sample of the model deployments across hospitals, clinics and other health institutions that have (miserably) failed in the past few years. However, ML has been a successful tool in general for many people, and though this was also mentioned somewhere in the video in passing, I think the viewers might come out of it with a biased view.

  • @phaedrussmith1949
    @phaedrussmith1949 3 days ago

    So, essentially it's like elections: a lot of promises that never really develop into reality.

  • @iramkumar78
    @iramkumar78 1 month ago +1

    I liked the ToC. I will buy it.

  • @chilifinger
    @chilifinger 5 days ago

    Interesting sidenote: In this interview, the image of Prof. Arvind Narayanan is entirely generated by Artificial Intelligence. 😎

  • @DharmendraRaiMindMap
    @DharmendraRaiMindMap 1 month ago +1

    AI is the new subprime.

  • @plaiche
    @plaiche 16 days ago

    Good stuff. The old head is a little too focused on, and surprised by, brilliance in youth. As a scientist, Topol might consult history in this, the apex of "institutional science" and its dominance: it is well documented that a high percentage of the most substantial, paradigm-shifting scientific breakthroughs (in decline over many decades, per Nature's 2023 cover story) have come from young, vibrant geniuses not yet ground down by life, compromise, and the limited thinking born of the pragmatism that comes with greater maturity and advancing years.
    I certainly don't fault him for noting it, but he brings it up roughly half a dozen times, and paternalistically shares his judgment of the term "snake oil" four or five times, despite conceding it is warranted in several documented examples.
    Again, a good discussion and a great guest choice, but there's a gatekeeper vibe that I would suggest holds clues to some of the fundamental issues plaguing science today, and to the turf-protection instincts in big science that inadvertently help perpetuate them.
    Less "the science", more humility, and more Feyerabend is my Rx.
    Respectfully,
    A hack scientific philosopher with more grey hairs than original issue

  • @rsimch
    @rsimch 1 month ago +2

    Actually this is a brain suction in the process 😮😮😮😮

  • @nccamsc
    @nccamsc 12 days ago

    By now people are experts at spinning up entire cottage industries at the slightest hint of anything that can make money, so no surprise here. There is already a multi-billion-dollar business lending money to companies that buy Nvidia's GPUs. Not to mention the deals to power more and more data centres via nuclear power…

  • @andrehallqvist449
    @andrehallqvist449 29 days ago

    When thinking about AI snake oil, AI detectors come to mind.

  • @mybachhertzbaud3074
    @mybachhertzbaud3074 3 days ago

    Applying Murphy's Law as the first line of code: if/then, else goto line one. 😜

  • @BBPFamily-h2o
    @BBPFamily-h2o 23 days ago

    On the COVID study using X-rays of adults vs. children: can this be called a "study on adults, excluding children"? That sounds very useful.

  • @dylanmenzies3973
    @dylanmenzies3973 1 month ago +1

    We are just at the start. All this conversation will be irrelevant in a few years. Of course, companies always try to push their products beyond the boundary at any given time. The generative (not merely interpolative) potential of deep learning is clear; the next stage will be harnessing it within automatic iterative reasoning structures.

  • @AlgoNudger
    @AlgoNudger 1 month ago

    Thanks.

  • @pajeetsingh
    @pajeetsingh 19 hours ago

    He meant how to facilitate civil war in third world countries.

  • @ericgregori
    @ericgregori 1 month ago +1

    What about the predictive climate models?

    • @UMS9695
      @UMS9695 1 month ago +2

      That's an equally massive scam!

    • @eleghari
      @eleghari 1 month ago +1

      "predictive climate models" 🤭🤣🤣🤣🤣🤣

    • @chris_jorge
      @chris_jorge 1 month ago

      There’s a 50% chance of rain. Always lol

    • @UMS9695
      @UMS9695 1 month ago

      @@chris_jorge 😄

    • @researchcooperative
      @researchcooperative 1 month ago

      Not really needed now, given the mounting empirical record on all fronts?

  • @SilverPenguin-kc5qp
    @SilverPenguin-kc5qp 1 month ago +2

    Same old story: garbage in, garbage out. GIGO.

  • @SydneyApplebaum
    @SydneyApplebaum 1 month ago +1

    You can't predict a civil war lol

  • @NineInchTyrone
    @NineInchTyrone 19 days ago

    Sounds like a need for retracting papers.

  • @themowgli123
    @themowgli123 1 month ago

    Brilliant.

  • @jzzquant
    @jzzquant 21 days ago

    Much of his criticism targets previous-generation, learning-theory-based models, which are grounded in facts but produce unusable outcomes. Modern generative AI goes one step further: it makes up its own facts. Unfortunately, nearly every person in the AI community has known this forever, at least 50 years now. But this is only going to get uglier from here, I guess. The problem is not with the subject; the problem is with the application.

  • @raiumair7494
    @raiumair7494 1 month ago +1

    Hang on: he is not talking about the potential but about bad executions. How is that snake oil? If you put a working oil in the wrong place, it won't help. Clearly predictive AI finds good rules and patterns given the right data; AI works better than average and can scale. The snake-oil book is snake oil itself; they would have been better off calling it a lessons-learned book.

    • @nand3576
      @nand3576 1 month ago +1

      Follow the money, earned by marketing. All marketing is snake-oil selling. No doubt a simplification.

  • @ahahaha3505
    @ahahaha3505 1 month ago

    9:38 😦

  • @lisalove6327
    @lisalove6327 8 days ago

    Facebook alumni

  • @baxtermullins1842
    @baxtermullins1842 26 days ago

    BS!

  • @billytanner1868
    @billytanner1868 22 days ago

    Sensationalist grandstanding.

  • @Terracotta-warriors_Sea
    @Terracotta-warriors_Sea 1 month ago

    His book itself is snake oil! Kapoor would tell the world that ML is fake while every large company is using ML tools, from FSD to warfighting!

  • @BrokenRecord-i7q
    @BrokenRecord-i7q 1 month ago +7

    Full of fluff, picking and choosing negative examples. A failed experiment toward an outcome is not "snake oil"; this book is the low-effort intellectual snake oil.

    • @VCT3333
      @VCT3333 26 days ago +1

      Dude, this guy was at Facebook, so he's seen this first-hand. Snake oil is exactly right.

    • @BrokenRecord-i7q
      @BrokenRecord-i7q 26 days ago

      @@VCT3333 You think everyone at Facebook is an AI engineer? He doesn't know what he is talking about.

    • @ramicollo
      @ramicollo 19 days ago

      How much Nvidia stock are you holding? 😂

    • @alexross5194
      @alexross5194 18 days ago +2

      @@BrokenRecord-i7q He said early on in the video that he was a machine learning engineer there. Sounds like someone had a preset opinion before even pressing "play". No need to debate AI, though; time will certainly tell.