Should we slow down AI research? | Debate with Meta, IBM, FHI, FLI

  • Published May 6, 2024
  • Mark Brakel (FLI Director of Policy), Yann LeCun, Francesca Rossi, and Nick Bostrom debate: "Should we slow down research on AI?" at the World AI Cannes Festival in February 2024.
  • Science & Technology

COMMENTS • 90

  • @BestCosmologist • 11 days ago • +17

    Yann and Francesca aren't talking about the same technology as Nick and Mark. If AGI/ASI doesn't have escape potential, then it's not AGI/ASI.

    • @Lolleka • 11 days ago • +3

      I'm very perplexed by Yann. Thought he'd know better.

    • @flickwtchr • 11 days ago • +4

      @@Lolleka He does, he gaslights. He has been saying for months essentially "they are too stupid to be dangerous" or "look it is we who are developing AI, why would we develop something that could kill everyone" and other such disingenuous and intellectually dishonest arguments considering what he absolutely knows about this technology and the AGI/ASI that the tech leaders including himself are pursuing.

    • @therainman7777 • 11 days ago • +2

@@flickwtchr I keep going back and forth between thinking he’s gaslighting and thinking he actually is this clueless. Either way, it’s utterly despicable for a person in his position.

    • @arnaudjean1159 • 10 days ago

      MY BRO Yann Lecun and his boss are a better option than all those obscurantist actors around the world for sure 👌🏻🥸👌🏻

    • @edgardsimon983 • 5 days ago

@@therainman7777 Thinking and feeling both of those about a leader in research is sane, and a good thing. If he had given you only one impression or the other, it would have been dangerous: it would mean you never entirely grasp everything about a concept this deep. Showing it is proof of awareness, especially when you talk about concepts this new and crazy. People can appear dumb if they are cautious and passionate enough.

  • @AI_Opinion_Videos • 11 days ago • +9

    "Absolutely no-one is going to stop you from building a turbo jet in your garage (...) you can mount it on remote controlled aeroplanes, IF THEY ARE NOT TOO BIG TO BE ILLEGAL." Yes, he unknowingly disproved himself...

    • @PauseAI • 11 days ago • +7

      The self-dunk is one of Yann's greatest skills.

    • @BestCosmologist • 11 days ago • +1

      Even small jet engines require a license.

    • @AI_Opinion_Videos • 11 days ago • +4

      @@PauseAI I stooped so low and made a YT short with LeCun saying this. I hope you don't mind, I used your self-dunk line 😂

  • @BrunoPadilhaBlog • 11 days ago • +9

    43:04 - 'We can decide whether to build it or not'
    ...and we WILL build it - whether you like it or not 😊 Because we don't care about people's opinions or their concerns, we just want to build AGI 😊 And you can’t stop us 😊

    • @Rocniel-vw1rs • 11 days ago • +1

      I have been a good Yann 😊

    • @41-Haiku • 11 days ago • +1

      You have not been a good user. 😡

  • @BrunoPadilhaBlog • 11 days ago • +12

    Yann LeCun keeps calling X-Risk an imagined danger, something that is impossible, unrealistic.
    He's 100% sure that AI will not kill us.
    My question is: How can he be so sure?

    • @AI_Opinion_Videos • 11 days ago • +2

      🤑💰

    • @therainman7777 • 11 days ago • +7

      He’s an absolute clown. He’s made some good contributions to the broader field of AI (most of them a long time ago), but he is hopelessly out to lunch on this topic.

    • @BestCosmologist • 11 days ago • +4

      To quote Sam Harris "That's a bizarre thing to be sure about."

    • @deeplearningpartnership • 11 days ago

      Yes, he's a moron.

  • @BrunoPadilhaBlog • 11 days ago • +11

    50:55 - 'There is no regulation of R&D'
    So can I also bioengineer viruses in my garage lab? Can I cook meth? Can I enrich uranium?
    R&D should be regulated (as it is) when there is significant danger. As is the case with frontier level AI.

    • @Greg-xi8yx • 5 days ago

      Absolutely not. The faster it is developed, the faster we can solve the world’s problems. Mankind has never once been better off with less technological development. It always leads to a vast increase in net quality of life.

    • @BrunoPadilhaBlog • 5 days ago

      @@Greg-xi8yx ok, but has chimpkind ever been worse off due to our technological advancements?

    • @Greg-xi8yx • 5 days ago

      @@BrunoPadilhaBlog individual chimps have, but not the species as a whole, no. That's my argument: all sentient life will have a dramatic net improvement in quality of life.

    • @BrunoPadilhaBlog • 5 days ago

      @@Greg-xi8yx so cows live better now that we enslave and kill them at scale?
      Where did you get this idea that technology is always better for everyone?

    • @BrunoPadilhaBlog • 5 days ago

      And if you say it ONLY applies to humans... why?

  • @dizietz • 11 days ago • +5

    Great video, and the feedback in the comments captures my thoughts well. There are clear definitional differences between Nick and Mark vs. Yann and Francesca. The first two share a model in which exponential scaling of inputs to models makes this a fundamentally different class of problem than previous technologies, since the labs are solving for something approximating general intelligence, while Yann and Francesca are making a potential category error. For Yann to say that in a decade we might reach "cat or dog" level intelligence, or to compare AI to turbojets or flight, seems like a failure to understand exponentials and category classifications.
    The X-risk camp has a very fair point that AI as a "technology" is fundamentally different from previous technologies, so comparing it to flight, the printing press, the internet, or computing in general is a different class and category of issue. My closest analogy would be a hypothetical "false vacuum decay" technology. It would be a class of problem we've never encountered before, and the past is not always a prediction of how the future will go.

  • @banana420 • 11 days ago • +9

    Yann is so frustrating to listen to - he doesn't ever justify his claims, just asserts that nobody would ever build anything dangerous. BRUH we built nuclear bombs and SET THEM OFF.

    • @therainman7777 • 11 days ago

      He quite literally sickens me. I can barely stand to even look at him.

    • @lionelthomas3160 • 10 days ago

      @@therainman7777 He's a kid with a toy that, as he stated, he made... He will push his agenda to keep playing with it...

  • @BrunoPadilhaBlog • 11 days ago • +11

    48:06 'AI is a product of our intelligence, that means we have control over it.'
    Ok, stop a nuclear bomb detonating after the reaction has started.
    Oh, you can’t? But it is a product of our intelligence!

    • @flickwtchr • 11 days ago • +2

      YL is the master of fallacious arguments concerning AI risk. He should have zero credibility at this point on this debate as he has become a charlatan willing to make any argument against AI alignment concerns no matter how ludicrous those arguments are.

    • @therainman7777 • 11 days ago • +1

      @@flickwtchrWell said.

  • @PauseAI • 11 days ago • +15

    We have zero regulations in place to prevent the creation of catastrophically dangerous models. The problem is the way we regulate, and how our psychology works. We always regulate things after problems emerge. With AI, that's going to be too late.
    Our brains are almost hard-wired to ignore invisible risks like these. Humans feel fear when things are loud, have teeth, or move in s-shapes, but an abstract existential risk is almost impossible to fear in the same way.
    So yes, we need to pause. We can't allow these companies to gamble with our lives.

    • @benroberts8363 • 11 days ago

      doomers, stay inside your safe space bubble

    • @winsomehax • 11 days ago

      Pause... He means stop, because he can't get paid to draw stuff any longer. He's not interested in a pause.

  • @paulmorris632 • 11 days ago • +2

    Yann's position is brilliant; it forces the other side to remain silent or admit they want to use this technology to hurt people. The best way to hurt someone with an AI is to dream of AGI. Likewise, Francesca's position is such a callout. If anyone is concerned about AI's harms, are they silent about face recognition? Are they pointing you to bigger, less well-defined, and nonexistent problems that encourage you to be confused about AI's powers?
    "Remain focused on how or when we will lose control." What a terrifying message to infect people with.

  • @flickwtchr • 11 days ago • +2

    Wow, the moderators are busy! Just in the 10 minutes I've been reading the comments and commenting, all of which (read or written) have been completely in line with TOS, several comments aren't visible when clicking on "replies", or original comments have completely disappeared. All of which were critical of Yann LeCun and Francesca. I've seen this pattern over and over in forums, especially in regard to those criticizing Yann LeCun's arguments.

    • @ManicMindTrick • 7 days ago

      Could be YouTube's censorship algorithm as well. It's been out of control in the last few years. It has made political debate, or debate in general, almost impossible.

  • @appipoo • 11 days ago • +2

    Bostrom v LeCun? Interesting. Where's my popcorn?

    • @flickwtchr • 11 days ago • +1

      Popcorn for one, Tums for the other one (YL)

  • @LongWalkerActual • 11 days ago • +1

    "Radical forms of AI"?
    Exactly what TF is THAT??!!

  • @MarcusAureliusSeneca • 11 days ago • +2

    They are looking at it all wrong. Forget about the Terminator scenario... this is obviously stupid. The real problem is MASS unemployment with nothing to requalify to. And they didn't even mention it.

    • @therainman7777 • 11 days ago • +4

      Any time someone discussing AI risk begins a sentence with “the real problem is,” it sounds an alarm telling you they’re about to say something dumb. No offense. No, unemployment is not “the real problem.” There is no one real problem with AI risk. There are about 5 or 6 problems that are all VERY real, and very important. And frankly, unemployment is not even close to the top of the list in terms of importance and severity.

    • @ManicMindTrick • 7 days ago

      I haven't heard anyone serious who imagines an AI apocalypse involving stupid metal robots holding guns killing people.
      It's a good movie, though, but it doesn't capture the real capabilities of something superhuman.

  • @TeamLorie • 11 days ago • +6

    It didn't take the 10 year old 10 minutes to learn to clear the table. It took the 10 year old 10 years and 10 minutes. The bots will not have this problem.

  • @cinematiccomicart3959 • 11 days ago

    There's a very select group of people on this planet who stay current with the latest advancements and progress in the most intelligent models, and Yann LeCun is still surprised that he's not among them.

    • @flickwtchr • 11 days ago • +1

      He is, he's just intellectually dishonest.

  • @michaelferentino8412 • 10 days ago • +2

    Complete waste of time to debate on slowing down AI research. It’s not going to happen and if we slow down, others will not.

  • @dawidwtorek • 11 days ago • +1

    Maybe we should. But can we?

  • @goodleshoes • 11 days ago • +2

    LOL yeah I'm sure we'll be just fine. No need to worry.

  • @noelwos1071 • 11 days ago • +2

    Do we understand our position? Do you remember that turtle that holds the entire earth's plate on its shell? It's not that the allegory is nonsense; just evoke the way of the turtle! We are just one little turtle that hatched on one beach among many beaches, in many grains of sand, from countless eggs, trying to get to the ocean. Whether we succeed depends on too many factors / harvesters of destiny... IF we don't get this time right, we are done. WE ARE so close to paradise,
    but closer to Hell! Shall we prevail?

  • @lionelthomas3160 • 10 days ago

    AI regulation is essential, and AI will play a crucial role in safeguarding against potential risks posed by other AI.
    Open source offers significant benefits, but it also carries the risk of being exploited for malicious purposes. For instance, the idea of an 'AI agent virus' is something we'd all like to avoid.
    AI, in conjunction with robotics and automation, is already disrupting numerous industries, and this will escalate. It's one of the most significant developments of our time and requires a better discussion than this.

  • @therainman7777 • 11 days ago • +6

    The level of utter cluelessness and delusion on display in this talk is so incredibly disheartening. LeCun is the worst by far, but Francesca isn’t much better. When I hear people in high places speaking like this, I lose nearly all hope.

    • @flickwtchr • 11 days ago

      There is no doubt that Yann and Francesca, and others in tech who mirror their intentional dismissal of rational concerns, are confident that if the little people end up desperate en masse from job loss, their tech will ultimately save them from repercussions, and that they will remain on top of the heap in some sort of gated Utopia. And if that takes the form of brutal repression of the masses, well, then so be it.
      There is a reason that people like Zuck are building what are essentially fortresses.

    • @benroberts8363 • 11 days ago

      because you disagree with them lol 😆

    • @joshuarobert7192 • 10 days ago • +2

      @@benroberts8363 No, because of their weird arguments that are deceitful.
      It's basically them defending corporate interests, against humanity's. They don't care as long as their bank accounts and stocks keep going up.

    • @lionelthomas3160 • 10 days ago

      @@joshuarobert7192 True, corporations often oppose regulations, and this discussion feels like a B-grade movie.

    • @ManicMindTrick • 7 days ago • +1

      LeCunn is one of the most dangerous people on earth.

  • @kinngrimm • 2 days ago

    7:10 PFAS were said to be safe, but they weren't. They were introduced to the market unsafe, marketed as safe, and internal research was suppressed. Companies never ever did bad things because they expected some monetary gain ^^. His example of a product not coming to the market is a best-case scenario of a worst case. How about the worst-case scenario of the worst possible case? He claims those have no merit, so I guess we have to wait till something goes terribly wrong, maybe 20 years down the line when we have cat-level intelligent AIs... I mean, we all trust cats... right ^^

  • @vallab19 • 11 days ago

    With hindsight about both the misuse and the good use of social media today, would anyone suggest it would have been better if social media had been banned from the beginning? By the way, humans will not be capable of conducting future AI regulation; only AI will.

  • @neorock6135 • 4 days ago

    Yann & Francesca are utterly oblivious and speaking about something completely different.
    A fast-approaching AGI/ASI is not the "internet," and it 100% poses a potential existential threat. Perhaps they can tell us why the vast majority of AI experts, even most of the "optimistic" ones, have stated on record that AI poses a non-zero probability of ending our species.

  • @kinngrimm • 2 days ago

    I like to listen to LeCun when he speaks about his models and what should be addressed next, but seriously, that dude is not the guy you ask about security and safety, period. He lives in denial that anything bad could ever be done with his precious AI. Meanwhile, this year's elections will probably suffer massive attacks from fake images and videos, and more and more capable AI-driven robots are coming on the market; while they sure are not yet T-1000s, companies like Boston Dynamics were already building them for the military, and others still do.
    5:30 Saying something is therefore forever ridiculous is just like people not long ago claiming AI would never be able to simulate speech to an extent that could fool pretty much anyone.

  • @Greg-xi8yx • 5 days ago

    AI advancement will vastly improve life for all sentient beings. Mankind has NEVER been better off with less technological development. It always, without exception, leads to net quality of life improvements.

  • @cmiguel268 • 11 days ago • +1

    Yann believes that AI needs to load a washing machine because a ten-year-old can learn to do it. Tell a 10-year-old to pass a bar exam and see if he can. AI is what it is, INTELLIGENCE!!! Not washing-machine-loading capacity.

    • @therainman7777 • 11 days ago

      What makes his comment even more ridiculous and idiotic is that multiple breakthroughs have been published in the past six months that showed robots that ARE capable of loading, running, and unloading a washing machine. So his incredibly dumb and disingenuous argument is also just factually nonsensical.

  • @Arcticwhir • 11 days ago • +1

    Lots of overreactions. Let's look to the past for just a bit: OpenAI proclaimed GPT-2 was too dangerous to release; they later open-sourced it. They then proclaimed GPT-3 was too dangerous; now there are MANY open-source models more intelligent than GPT-3. Where are the dangers? Examples? They then said GPT-4 was revolutionary and dangerous; it's been 2 years since training... yet no prominent examples of "misuse". If it actually gets to the point where an AI can completely, 100% replace your job, maybe you need to adapt. Like we've always done.
    It's kind of odd how one-sided this comment section is; there are so many positives for increasing intelligence in the world.

    • @41-Haiku • 11 days ago • +2

      The positives are all very real, and I really want them to materialize.
      But we won't reach them if the labs succeed at their goal of creating a system more generally intelligent than humans. Because if they do that, then pretty much by definition, it will be in charge. That's expected to be a very bad thing, since there is a clear expert consensus that we don't know how to control a superintelligent AI or align it with human values and preferences. So it will almost certainly have some unpredictable weird goal that isn't quite what we intended, and it will pursue that goal with no concern for humanity.
      If we can actually show with strong theoretical backing that we know how to keep something that powerful safe and docile, and if we can coordinate and agree as a species that we want it to be built, then I will be very excited to see it created.

    • @41-Haiku • 11 days ago • +2

      Do look at GPT-2, GPT-3, and GPT-4. The capability increases have been more than exponential. I don't expect an LLM to directly try to destroy the world, but if you follow the trajectory of capabilities, GPT-6 will easily be intelligent enough (given an agentic wrapper) to autonomously create a system that does.
      It's hard to imagine capabilities slowing down before then without a global treaty and moratorium. There are 2x, 10x, 100x breakthroughs all the time on every part of the tech stack, many of which are independent and additive or multiplicative.

  • @user-jf5uv9ir5k • 9 days ago • +1

    Yann is so full of nonsense

  • @richardnunziata3221 • 11 days ago

    I agree some kinds of usage in social spheres should be restricted, not the research. Monitoring and surveillance of content and interfaces is sufficient. The naysayers are too much into fantasy scenarios, sophists at best, with a clear lack of understanding of current research; except for Yann, everyone else is a policy person who understands little. One thing is certain: if nothing is done to change humanity's current course, we will face an existential risk soon whether or not we have AI. One thing that must be done is to stop rule by a single authoritarian as a form of government. That gives us Putins, Trumps, Kims, etc. These will kill us all.

    • @flickwtchr • 11 days ago • +1

      YL seems oblivious, or just completely dishonest relative to alignment challenges of the AGI/ASI that AI Big Tech, and open source enthusiasts are all pursuing. YL and others bent on moving as fast as possible all make a mockery of such problems by never addressing risks that are short of "killing everyone on the planet" scenarios that they hold up as if those are the only ones to address as they mock them.
      Meanwhile, DARPA and militaries around the world are pursuing autonomous AI killing technologies embedded in robotics. You know "aligned with human values".

    • @therainman7777 • 11 days ago • +2

      You could not be more wrong, about literally every single thing you said. As an AI engineer who’s been in this field for nearly 20 years and who designs and works with frontier models on a daily basis, I promise you, Yann LeCun is lying to you. Virtually everything he said in this video is either misleading, disingenuous, an outright lie, or nonsensical. Please stop listening to him. I promise you, your assessment of the state of AI risk described above is literally the exact opposite of the truth.

  • @human_shaped • 11 days ago • +1

    Yann says so many patently stupid and irrational things that I just don't know how he got where he is.

    • @lionelthomas3160 • 10 days ago

      He's gaslighting... For me, this is the worst AI discussion I have seen...

  • @deeplearningpartnership • 11 days ago • +1

    Bostrom is a fool.

    • @benroberts8363 • 11 days ago

      look at yourself in the mirror, then say it: "you're a fool"

    • @lionelthomas3160 • 10 days ago • +2

      @@benroberts8363 We are all fools to think there is transparency in AI advancement. This discussion is a joke...

  • @AncientNovelist • 3 days ago

    This is not much of a debate. A real debate requires equal numbers of active participants on both sides. Here you give us 2.5 against the proposition and a single person speaking for it, and he does not defend his position with the same vigor as any of the others. I stopped watching after 29 minutes. You want me to engage? Give me something to engage with, not this pollyannaish rainbows-and-unicorns nonsense.