"A computer is not responsible and thus should not make management decisions" - A 1970 IBM lecture slide.
I saw that in a DEF CON 32 video...
Managers are even less responsible
Perfect managerial material.
AI (aka machine learning, LLMs, etc.) is being way over-hyped and oversold, mostly by AI experts who have a vested interest in this technology. Rather than talking about it taking over the world, we should focus on specific applications where it will actually be useful.
I've pretty much stopped using the phrase "Artificial Intelligence", except in a few select contexts. I call it what it is: "Machine Learning". "AI" carries a very different connotation for people who are not closely familiar with the subject. I spend a lot less time explaining what AI is not.
It is basically pattern recognition.
Machine learning is also a loaded, misleading phrase.
Computational statistics, algorithmic modeling, and optimization/curve fitting are all more appropriate terms, depending on the circumstance.
@@logabob Agreed. It's not a coincidence that every major term used to describe advanced pattern matching is an attempt to subtly make you believe things that aren't so.
This leads to absurd situations where LLMs with no intelligence at all are posited to somehow magically leap to "AGI".
The recent academic article on ChatGPT as BS essentially covered this issue well, but it itself fell for some of the terminology traps, mainly because the authors didn't seem tech-savvy enough to detect the tech-BS language.
@@TheVincent0268 I thought artificial intelligence was a blonde who dyes her hair brunette?
Very interesting discussion. Good to see some really informed push-back against the hype surrounding AI - hype that sees AI as an almost universal panacea for all the world's ills. We need much more of this sort of critical analysis of important topics. Kudos to the authors of this important work.
The problem is that investors jumped onto a hype train. Now they have invested a lot of money and expect results ASAP, and all that money is very tempting to get a piece of, so big efforts (falsification, ignoring failed approaches, etc.) are undertaken to get it. I think it will end in tears when reality hits. AI will become a much smaller part of our economy, since only the useful part remains relevant. AI has nothing to do with intelligence; it's about binning data that gets fed into a trained network, and the network has zero understanding of what it is doing, just as your calculator "knows" the answer to your questions. We need more scepticism to isolate the useful part of AI from the nonsense part.
**Data leakage** refers to a situation in machine learning where information from outside the training dataset is inappropriately used to create a model. This leads to overly optimistic performance estimates because the model is essentially "cheating" by having access to data it shouldn't have during training.
For example, if you're trying to predict future events based on past data, but some of the future information accidentally makes it into the training data, the model will appear to perform well. However, in real-world applications, where that future data isn't available, the model's performance will drop significantly.
Data leakage often occurs unintentionally, such as when features used to train the model contain information that would not be available at the time the model is used to make predictions. This is a critical problem in AI because it leads to models that seem highly accurate during testing but fail when deployed in real-world settings.
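For the curious, here is a minimal sketch of that failure mode in Python (synthetic data; NumPy and scikit-learn assumed, and the "billing code" framing is purely hypothetical). One training column is a noisy copy of the label - something recorded only after the outcome - so it exists in the historical table but not at prediction time:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                         # features known at prediction time
y = (X[:, 0] + rng.normal(size=n) > 0).astype(int)  # outcome to predict

# Leaky feature: a noisy copy of the label (think: a billing code entered
# after the diagnosis). Present in the historical table, absent at the
# moment a real prediction has to be made.
leak = y + rng.normal(scale=0.1, size=n)
X_leaky = np.column_stack([X, leak])

Xtr, Xte, ytr, yte = train_test_split(X_leaky, y, random_state=0)
leaky = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print("test accuracy with leaked feature:", leaky.score(Xte, yte))

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
honest = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print("test accuracy without it:        ", honest.score(Xte, yte))
```

The leaky model scores near 1.0 in testing; the honest one, trained only on features that actually exist at prediction time, does markedly worse - which is exactly the gap that shows up at deployment.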
It’s crazy how undisciplined computer scientists are in their research. Very few people seem to actually think through the assumptions compared to how rigorous people are with assumptions in economics or statistics.
There are so many new jobs that can be created to just clean up training datasets.
Thank you both - Dr. Topol has brought this up at a very timely moment!
The AI bubble... hyped like crypto. The AI I have seen in generative tools is immature and inadequate.
I enjoyed reading Eric Topol (including Deep Medicine and many review papers) and have now ordered "AI Snake Oil". I have been following the authors for quite some time. I think we AI practitioners should add the phrase "AI Snake Oil" to our vocabulary along with "SOTA", "guard-rails", "Responsible AI", etc. Someone should work on a project on the use of adjectives in recently published AI papers. Most papers/reports (for example, the GPT-4 report) read more like marketing manuals than technical papers. I think arXiv should not allow material to be published that directly benefits an organisation commercially!
Thanks - really great talk. Interesting to see how the racism and disparities in society get baked in. Economic incentives matter the most - without changing the way healthcare operates, the institutions will just choose to increase margins.
Saw these guys on Ed Zitron’s podcast. Probably around the same time this interview happened. So sharp.
This is why explainable AI is a must.
Totally agree on that, and that is exactly why I have not been a fan of deep learning.
In the near future, kids at school should learn what a regression model is, so that they grow up knowing how to differentiate between intelligence and what is not.
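For anyone curious, a minimal sketch (made-up numbers) of what a regression model actually is - a line fit to past data, nothing more mysterious:

```python
import numpy as np

# Five made-up students: hours studied vs. exam score.
hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
score = np.array([52.0, 58.0, 61.0, 69.0, 74.0])

# Ordinary least squares: fit score ~= slope * hours + intercept.
slope, intercept = np.polyfit(hours, score, deg=1)
print(f"score ~= {slope:.1f} * hours + {intercept:.1f}")

# A "prediction" for a new student is just a point on that line.
print("predicted score for 6 hours:", round(slope * 6 + intercept, 1))
```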
Spot on!
These guys are spot on - and it's not just health care that suffers from this feedback issue. Crime and policing (using predictive analytics to proactively prevent crimes) will suffer from similar problems.
the title isn't tough enough. "AI fraud" would be a more apt title
bingo!
I think the main weakness of this video is that it doesn't acknowledge how much a historical analysis - one using examples and data going back decades, in a field that (per the video) publishes over a thousand papers every single day - affects its observations.
Awesome interview and props to both of you! 🙌
My understandings, perspectives, and sentiments share a lot of overlap with both of you. I'll share some of those thoughts soon. 💡
Love these guys; I've been following their blog. Looking forward to reading AI Snake Oil.
Great dialog and a very interesting topic.
While I agree that the title may be too negative (it probably sells, though), I firmly believe that one of the primary causes of AI's failures is that we overestimate its abilities.
In my view, AI is not intelligent but an illusion of intelligence. Just like magic, it may be a very good illusion, but it is not the real thing.
A better analogy is artificial sweetener: it may be sweet, but it is not sugar.
A better term for AI is likely Artificial Inferencing.
Every computer engineer worth his or her salt knows that prediction, as the name suggests, is probabilistic by nature, and that most algorithms are glorified regression.
However, the one key difference is its ability to process large volumes of data at speed.
I will not summarily dismiss the whole thing, though I consider generative AI more snake oil than predictive AI.
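A tiny illustration of the "probabilistic by nature" point (synthetic data; scikit-learn assumed): under the hood a classifier is regression on a probability, and the "prediction" is just a cutoff we choose to apply.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                              # made-up inputs
y = (X.sum(axis=1) + rng.normal(size=500) > 0).astype(int)

clf = LogisticRegression().fit(X, y)

# The model outputs P(y=1) for each row, never a certainty...
probs = clf.predict_proba(X[:5])[:, 1]
print(np.round(probs, 2))

# ...and the "prediction" is simply a threshold applied to it.
print((probs >= 0.5).astype(int))
```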
Very interesting POV and very clear articulation of thought. Thank you, Dr. Topol and Sayash, for an interesting conversation.
Whoa... Eric Topol is on YouTube? I have been on his email lists since the COVID pandemic (OK, it's still ongoing), and only now found his YT channel. Good stuff.
I’m not a large language model (but I do have a decent vocabulary). I’ve found that LLMs have a different perception of reality than humans do. For example, to an LLM, “strawberry” has one or two “r”s. (To most humans, it has three “r”s.) This is not an illusion, but a difference of perception. The very idea of “objective reality” is different for an LLM.
I’m neither, so I can see both perceptions. I’m analog and parallel, so paradox doesn’t trouble me.
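A side note on why the strawberry thing happens, with a hedged sketch (the token split shown is illustrative, not any particular tokenizer's actual output):

```python
# Character-level code gets the ground truth trivially:
print("strawberry".count("r"))   # -> 3

# An LLM, though, never sees characters; its input is subword tokens,
# e.g. something like ["straw", "berry"] depending on the tokenizer,
# so "how many r's?" is not directly readable from what it perceives.
```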
To have objective reality requires a subject. You're talking about a pattern matching system as if it has subjective awareness. This is not the case. This is an essential cause of the snake oil point. Every set of biological sensors creates the possible range of "objective reality", which in itself doesn't exist outside of the subject interacting with the field of sensory inputs.
I am currently training an AI model with patient labs, DNA tests, and gut biome tests to help me create wellness protocols for them.
Caveat emptor. Excellent counter arguments to the marketing hype that oversells AI’s capabilities. Models need to be independently validated, but much of the training data and methods are obscured by leaderboard claimants.
There is a problem with the idiom "snake oil": it really works in many cases. Yes, certain traditional Chinese remedies, sometimes labeled as "snake oil," may have ingredients that aid digestion, but these benefits can vary widely and are not universally applicable. Drafted by AI.
Rather lacking in clarity. Some will think they understand the comment, others would claim they do, but it's poorly written if you ask me.
Only by confronting the negatives can you move forward. I don't get the feeling he thinks AI will never be useful in prediction, but rather that using it as a one-size-fits-all solution is going to lead to horrible decisions in some cases - and who will be to blame? On a related note, I get agitated when Tesla and others pushing for autonomous driving point to their own data to claim that autonomous driving is already far "safer" than human driving. It's a pity we can't call their bluff and say, "OK, let's just unleash it and see what happens, and you'll be responsible for what occurs." I bet they'd reconsider their position. It's easy to talk a good game when nothing is on the line. One YT video on the Cruise robotaxis - made well before they voluntarily shut down - said the car drove like a 16-year-old student driver.
Awesome interview and info. Thanks
Insurance & Medicine are like oil & water, they simply don’t mix & if forced anyway like in the Agitated states of America, strange phenomena can occur. 😊
Great interview. Glad the YT AI sent this to me!
It’s actually a good discussion... presenting real stuff rather than hypotheticals!
When it comes to predictions, we need to be able to determine what (or who) is best. Some people will outperform our best technologies and vice versa, depending on a variety of circumstances. The best leaders won't simply opt for cost savings every time, but tell that to the shareholders, who sometimes don't have long-term corporate/societal well-being as a priority.
Great Insights
This discussion probably would have benefitted from a disclaimer at the beginning. Doing ML in the health space is substantially more difficult than in any other area for very well documented reasons; the examples given in the discussion, though very prominent, are but a small sample of the model deployments across hospitals, clinics and other health institutions that have (miserably) failed in the past few years. However, ML has been a successful tool in general for many people, and though this was also mentioned somewhere in the video in passing, I think the viewers might come out of it with a biased view.
So, essentially it's like elections: a lot of promises that never really develop into reality.
I liked the ToC. I will buy.
Interesting sidenote: In this interview, the image of Prof. Arvind Narayanan is entirely generated by Artificial Intelligence. 😎
AI is the new subprime.
Good stuff. The old head is a little too focused on, and surprised by, brilliance in youth. As a scientist, Topol might consult history in this, the apex of “institutional science” and its dominance: it is well documented that a high percentage of the most substantial, paradigm-shifting scientific breakthroughs (in decline over many decades, as per Nature’s 2023 cover story) have come from young, vibrant geniuses not yet ground down by life, compromise, and the limited thinking born of the pragmatism that comes with greater maturity and advancing years.
I certainly don’t fault him for noting it, but he brings it up a half dozen times, give or take, and paternalistically shares his judgment of the use of the term “snake oil” 4-5+ times, despite conceding it is warranted in several documented examples.
Again, good discussion and a great guest choice, but there’s a gatekeeper vibe that I would suggest holds clues to some of the fundamental issues plaguing science today, and to the turf-protection instincts in big science that inadvertently help perpetuate them.
Less “the science”, more humility, and more Feyerabend is my Rx.
Respectfully,
A Hack Scientific Philosopher with more grey hairs than original issue
Actually this is a brain suction in the process 😮😮😮😮
By now people are experts at spinning up entire cottage industries at the slightest hint of anything that can make money, so no surprise here. There is already a multi-billion-dollar business lending money to companies that buy Nvidia’s GPUs. Not to mention the deals to power more and more data centres via nuclear power...
When thinking about AI snake oil, AI detectors come to mind.
Applying Murphy's Law as the first line of code: if/then, else goto line one. 😜
On the COVID study using X-rays of adults vs. children: can this be called a “study on adults, excluding children”? That sounds very useful.
We are just at the start. All this conversation will be irrelevant in a few years. Of course companies always try and push their products beyond the boundary at any given time. The generative (not interpolative) potential of deep learning is clear, the next stages will be harnessing this within automatic iterative reasoning structures.
Thanks.
He meant how to facilitate civil war in third world countries.
What about the predictive climate models?
That's an equally massive scam!
"predictive climate models" 🤭🤣🤣🤣🤣🤣
There’s a 50% chance of rain. Always lol
@@chris_jorge 😄
Not really needed now, given the mounting empirical record on all fronts?
Same old story: garbage in, garbage out. GIGO.
You can't predict a civil war lol
Sounds like a need for retracting papers.
Brilliant.
Much of his criticism is aimed at previous-generation, learning-theory-based models, which are based on facts but have unusable outcomes. Modern generative AI goes one step further: it makes up its own facts. Unfortunately, nearly every single person in the AI community has known this forever, at least 50 years now. But this is only going to get uglier from here, I guess. The problem is not with the subject; the problem is with the application.
Hang on - he is not talking about the potential but about bad executions. How is that snake oil? If you put a working oil in the wrong place, it won’t help. Clearly predictive AI finds good rules and patterns given the right data; AI works better than average and can scale. The snake-oil book is snake oil itself - they would have been better off calling it a lessons-learnt book.
Follow the MONEY, which is earned by marketing. All marketing is snake-oil selling. No doubt a simplification.
9:38 😦
Facebook alumni
BS!
Grandstanding to please the crowd.
His book itself is snake oil! Kapoor would tell the world that ML is fake, while every large company is using ML tools, from FSD to warfighting!
Full of fluff, picking and choosing negative examples. A failed experiment towards an outcome is not "snake oil"; this book is the real low-effort intellectual snake oil.
Dude did this at Facebook, so he's seen it first hand. Snake oil is exactly right.
@@VCT3333 You think everyone at Facebook is an AI engineer? He doesn't know what he is talking about.
How much Nvidia stock are you holding? 😂
@@BrokenRecord-i7q He said early on in the video that he was a machine learning engineer there. Sounds like someone had a preset opinion before even pressing 'play'. No need to debate regarding AI though, time will certainly tell.