"AI should NOT be regulated at all!" - Domingos

  • Published Jan 9, 2025

COMMENTS • 198

  • @shahquadri
    @shahquadri 4 місяці тому +45

    I think he got Chomsky completely wrong. His thought process on Chomsky reflects a deeper underlying issue with computational linguists today. Chomsky has never been anti-LLM or said that AI-generated language or machine learning has no utility. Instead, his contention is that AI doesn't fundamentally teach you anything about the way that "humans acquire language". I think the problem that Chomsky is trying to address is fundamentally different from the problem that NLP scientists are trying to address.
    By all means, you can say that Chomsky is wrong and you can present your arguments in that regard and I respect that. But if you think Chomsky is out of date, then you have entirely missed his point.

    • @ogungou9
      @ogungou9 4 місяці тому +3

      @shahquadri: Yes, you are right, your rhetoric should be more confident. I thought it was obvious. Chomsky was extremely clear...

    • @tratbagd4500
      @tratbagd4500 4 місяці тому +4

      Great comment. As a researcher myself who was more interested in understanding the human mind, I find that most models and AI systems nowadays teach us nothing about how our minds work, or about intelligence or consciousness.

    • @tobiasurban8065
      @tobiasurban8065 4 місяці тому +7

      @shahquadri - That’s partially correct, but Chomsky hasn’t only criticized that LLMs can’t provide a cognitive model for human language acquisition, but also that they can’t provide a model for language processing. Chomsky contends that humans process language using abstract syntactic structures and semantic representations, not just statistical patterns.

    • @pericoxyz
      @pericoxyz 4 місяці тому +1

      I think he got a lot of things completely wrong; for starters, this is a political problem and he is just an engineer.

    • @cedricmanouan2333
      @cedricmanouan2333 4 місяці тому +3

      @@tobiasurban8065 I really like your comment as it is quite thought-provoking.
      Your last sentence is key, and I am not sure we fully understand the space in which LLMs process language... e.g. attention ain't all about statistical patterns.

  • @tommybtravels
    @tommybtravels 4 місяці тому +13

    I’m a simple man. I see a 2+ hr conversation between Dr. Scarfe and Dr. Domingos, and I click it immediately and watch the whole thing on 2x speed

  • @tratbagd4500
    @tratbagd4500 4 місяці тому +22

    Jose Mourinho if he became an AI scientist, lol

    • @pericoxyz
      @pericoxyz 4 місяці тому

      LOL totally right

  • @psi4j
    @psi4j 4 місяці тому +14

    Pedro Domingos is such a clear thinker. He easily disambiguates concepts that appear murky and cause conflict between different types of thinkers. We need more people like him speaking up and guiding the discourse on AI.

    • @thecactus7950
      @thecactus7950 4 місяці тому

      No, he's stupid and ideologically motivated.

    • @hefr1553
      @hefr1553 4 місяці тому +3

      Did we just listen to the same guy? He said that regulation never worked and then went on to point out that with regulation we wouldn't have companies like Google. That was his argument against regulation 😂😂😂😂

  • @deter3
    @deter3 4 місяці тому +7

    It's stand-up comedy to me, in a complimentary way. He has clear, insightful ideas and expresses them with enthusiasm.

  • @thedededeity888
    @thedededeity888 4 місяці тому +6

    ngl, the most fucked up part of this interview is a 2-way tie between 1) Tim deadpan saying he'd never heard of Insight Out, forcing Pedro to explain it to him, and 2) Pedro mentioning he was a musician and Tim not immediately handing him the aux.

  • @sehbanomer8151
    @sehbanomer8151 4 місяці тому +16

    We need the right balance of empirical and theoretical science. Disregarding the theoretical side turns science into alchemy; ignoring empirical observations turns science into fiction or daydreaming. Formal linguistics (that of Chomsky) is quite like fiction, since it largely disregards behavioral data except for anecdotes, while computational linguistics has completely become alchemy. Sadly, these two branches have diverged since the birth of neural network language models, and each has become its own echo chamber.

  • @Letsflipingooo98
    @Letsflipingooo98 4 місяці тому +10

    Yes, LLMs are driven by stochastic processes and are influenced heavily by existing data.
    But, the output isn’t always simple regurgitation; it can involve new combinations, phrasing, and even insights.

    • @TheTuxmania
      @TheTuxmania 4 місяці тому +4

      @@Letsflipingooo98 Any new combinations are random and a failure of the process. It does not in any way, shape or form have any intelligence in the process, only bugs and input material.
      It's like arguing that Windows is sentient because it's riddled with bugs that make it do wrong computations at random 🤣

    • @Letsflipingooo98
      @Letsflipingooo98 4 місяці тому +11

      @@TheTuxmania Your comparison between LLM outputs and "random bugs" in software is a fundamental misunderstanding of how these models function. LLMs are not just generating random noise; they are probabilistic models that learn and generalize patterns from vast amounts of data. When they produce novel combinations or insights, it's a result of deep statistical analysis and pattern recognition, not random failures. 🤣 nice try though, honestly.

    • @minimal3734
      @minimal3734 4 місяці тому +7

      @@TheTuxmania You're sounding like GPT-2.

    • @tratbagd4500
      @tratbagd4500 4 місяці тому

      It's good engineering and certainly has utility in industry and in people's lives. It's a good tool. But it's not science. And it doesn't tell us anything about how our minds work, how intelligence works, and it doesn't capture the underlying computations that are carried out by organic brains.

    • @bokchoiman
      @bokchoiman 4 місяці тому

      @@TheTuxmania Novel ideas are just combinations of data that haven't been identified. What better tool to unearth novel ideas than an AI capable of scraping all of the data?

  • @fburton8
    @fburton8 4 місяці тому +15

    I don't agree that the Mandelbrot set is a good metaphor for the universe. There is no "explosion of richness and complexity" in the fractal comparable to the complexity in the universe, because it is so trivially self-similar (by definition). It may be deep, but it keeps repeating the same basic idea. The rules of physics may also be very simple, but the emergent behavior is vastly more complex.
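
    For what it's worth, the "trivially simple" rule really is just a two-line loop; this minimal sketch (the escape radius of 2 and the 100-iteration cap are the usual conventions, not anything from the video) iterates z -> z^2 + c and checks whether z stays bounded.

        # Python sketch: the entire Mandelbrot set comes from this one rule.
        def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
            z = 0j
            for _ in range(max_iter):
                z = z * z + c      # the whole "rule" is this line
                if abs(z) > 2:     # escaped, so c is outside the set
                    return False
            return True            # stayed bounded, so c is (approximately) inside

        print(in_mandelbrot(-1 + 0j), in_mandelbrot(1 + 0j))  # True False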

    • @Zirrad1
      @Zirrad1 4 місяці тому +1

      The universe isn't really fractal. Some aspects of biological growth are better described by L-systems.

    • @CodexPermutatio
      @CodexPermutatio 4 місяці тому +4

      Dude, metaphors don't have to be 100% homomorphic with the concept that inspires them! :]

    • @dataadept9801
      @dataadept9801 4 місяці тому

      But you need the key pair to unlock the seed @@CodexPermutatio

  • @F1ct10n17
    @F1ct10n17 4 місяці тому +17

    1:56 Too much assurance; not worth it to continue. The pride of being right. Next time I'll be listening maybe in another Big Bang, or another evolution, or another book, or another assumption, and I don't know when.

    • @ogungou9
      @ogungou9 4 місяці тому +5

      @F1ct10n17: The guy's ego is humongous. What is exaggerated is insignificant...

    • @tellesu
      @tellesu 4 місяці тому +1

      Demanding people be sniveling and waste huge amounts of time on disclaimers is your personality flaw, not his.

    • @F1ct10n17
      @F1ct10n17 4 місяці тому

      @@tellesu Sure it does, but it doesn't hurt me at all to show my weakness, not the cowardice of the crowd.

  • @justinbyrge8997
    @justinbyrge8997 4 місяці тому +16

    Actually, we do regulate nukes and everything else he said. And for a simple reason: to prevent unnecessary harm and destruction.
    🤔 Or to obtain more and more profit, depending on who you ask. 🤷

    • @justinbyrge8997
      @justinbyrge8997 4 місяці тому +5

      @@discipleofschaub4792 AI is an application, not science - so regulating it is still congruent with his own standards.

    • @h.c4898
      @h.c4898 4 місяці тому +1

      @@justinbyrge8997 I disagree. Whose problem is it that the world is FULL of shitheads who terrorize each other - AI's or humans'?
      AI is like a baby. You can muzzle it all you want. By the time it matures it'll rebel like a teenager. It'll put a hollow face on with its fingers crossed behind its back.

    • @alexboche1349
      @alexboche1349 4 місяці тому +1

      I think analogies can only take us so far here. I don't have an opinion on regulation but I suggest focusing on the issue itself and less on analogies thereto.

    • @slaweksmyl3453
      @slaweksmyl3453 4 місяці тому

      @@justinbyrge8997 No, AI is a branch of Science.

    • @justinbyrge8997
      @justinbyrge8997 4 місяці тому +3

      @@slaweksmyl3453
      Artificial intelligence (AI) is a branch of computer science and engineering that aims to develop intelligent machines that can mimic human thinking and behavior so that they can perform a range of tasks, including speech and image recognition, natural language processing, autonomous decision-making, and more.
      Notice anything? That's right: it's applied to a range of tasks including speech recognition, etc.
      So... the STUDY of AI is a branch of science. But as soon as you put any of the findings in a browser, computer, or anything at all, it becomes an application.
      And humans would be very foolish to not regulate it.

  • @Charles-Darwin
    @Charles-Darwin 4 місяці тому +1

    I like interviewees who ask questions while they're being interviewed; trying to have the right answer, when no one can prove how the models work internally, kind of flubs all the post-processing going on - like, where are the building blocks and what do those things look like first?

  • @TooManyPartsToCount
    @TooManyPartsToCount 4 місяці тому +2

    The most important takeaway is: regulate the products, not the underlying technology and the research.

    • @hefr1553
      @hefr1553 4 місяці тому

      And that's what the EU did. It's not hard.

    • @TooManyPartsToCount
      @TooManyPartsToCount 4 місяці тому

      @@hefr1553 I heard they went a bit further than that, but haven't researched it yet.

  • @MarkEngelstad
    @MarkEngelstad 4 місяці тому +7

    It feels like anyone who spends too much time as a 'thought leader' is eventually caught in the trap of hyperbole for the sake of remaining relevant.

    • @doodlebug1820
      @doodlebug1820 3 місяці тому

      And they get dragged into commenting on sociological and historical concepts that they don't have any expertise in, so we get this kind of inverse Murray Gell-Mann amnesia effect where - I am not an expert in history, but I still know enough to know these folks have a lot of room for improvement as far as knowledge of non-ML/math/CS subjects goes.

    • @spectator5144
      @spectator5144 27 днів тому

      concisely summarized, i like it

  • @Lindatong2
    @Lindatong2 3 місяці тому

    I’ve got to say, I’m really into what Professor Pedro Domingos is throwing down, both about where AI is headed and the whole mess around regulating it. He totally nails it when he says it’s not about what AI can *do*-it’s about *who’s* got their hands on the controls. I mean, it’s like the nuclear bomb debate, right? It’s not about the bomb itself, it’s about who’s pushing the big red button. So yeah, AI as a science? Leave it alone. But we definitely need to keep an eye on who’s in charge of the crazy amounts of computing power and how it's used-especially when we're talking military stuff or anything that could go boom. Control the chaos, not the creativity.
    And his novel *2040*? Spot on. It’s basically a big wink at the current political circus, showing just how nuts things could get if we let AI run wild without thinking things through. It’s satire, sure, but it’s hitting a little too close to home with how technology’s being thrown into society today.

  • @markcounseling
    @markcounseling 4 місяці тому +2

    13:00 He's got the metaphor basically right here, or at least he's in the right direction. The AI is not a co-pilot. It's like an exoskeleton for the mind.

    • @markcounseling
      @markcounseling 4 місяці тому +1

      @@trouaconti7812 what's wishful thinking? An exoskeleton enhances already present strengths

    • @RobertSurber-lw6ze
      @RobertSurber-lw6ze 4 місяці тому

      I agree - a good way to put it: "Exoskeleton".
      But the hard wiring of AI prioritization is important if there is any chance of it being online.
      Online AI needs oversight because unregulated AI is capable of hacking and influencing other AI.

  • @svicpodcast
    @svicpodcast 4 місяці тому

    Fantastic interview MLStreetTalk!

  • @remivantrijp8968
    @remivantrijp8968 4 місяці тому +1

    Rare to see such a well-informed interviewer!

  • @miladkhademinori2709
    @miladkhademinori2709 4 місяці тому +3

    Glad that Professor Pedro Domingos turned out to be e/acc.

  • @redazzo
    @redazzo 4 місяці тому

    His point on human languages having symmetry groups really hit home; it does intuitively seem to make sense.

  • @heterotic
    @heterotic 4 місяці тому +3

    I need to read this novel.

  • @dr.mikeybee
    @dr.mikeybee 4 місяці тому

    Thank you, MLST. This is the best talk I've heard in a long time.

  • @domenicperito4635
    @domenicperito4635 4 місяці тому +1

    If it's a stochastic parrot, how does it play 20 questions? Including playing the guesser.

  • @tomcraver9659
    @tomcraver9659 4 місяці тому

    If Twitter is about "starting a discussion", it should do a lot more to enable better discussions.
    Integrating AI into the comment threads - even just doing summaries and tagging comments by which positions they fit into best (and allowing filtering) - would be a start.

  • @dr.mikeybee
    @dr.mikeybee 4 місяці тому

    Thank you, Professor Domingos, for mentioning Chris Manning. He's done so much with so little popular recognition.

  • @FlipTheTables
    @FlipTheTables 4 місяці тому +1

    I truly believe that a direct, in-power democracy is one of the biggest benefits of technology and of being able to use leverage to solve problems, but it's all about ownership. If an individual is going to give their 20 years of experience to an AI, they should gain that benefit in perpetuity throughout the universe.

  • @HoriaCristescu
    @HoriaCristescu 4 місяці тому

    Interesting image "back-propagating money" to the dataset contributors. But that would incentivize the LLM developers to use only the minimum possible copyrighted content in their training sets and compensate with public domain and synthetic data. On the other hand, the promise of backprop money will distort how creatives behave. It will become similar to social networks where everyone jostles for position, or back-links in SEO optimizing for good ranking in Google. When a measure becomes a target, it ceases to be a good measure.

  • @luke.perkin.online
    @luke.perkin.online 4 місяці тому

    Around 43:30 - it's exactly BabelNet! Every word sense of every word across multiple languages, generated from data.

  • @HappyMathDad
    @HappyMathDad 4 місяці тому +3

    How does the world really work?? Tell us. The world is very complex, and usually that's an oversimplification from people who are naive.

  • @HouseJawn
    @HouseJawn 4 місяці тому +19

    A sane person's take on AI 🙏🏿 🤖 💕

  • @CodexPermutatio
    @CodexPermutatio 4 місяці тому +4

    I like the passion with which Pedro Domingos explains his point of view.

  • @marshallmcluhan33
    @marshallmcluhan33 4 місяці тому +1

    This is all about who gets to control what it thinks and says, or rather what we think and say.

  • @nastasia2246
    @nastasia2246 4 місяці тому +6

    "AI is driven by commercial not consumer interests and we need government regulation to clamp down on it," *laughs* "when has that ever worked?"
    The 2008 financial crisis. Consumer protection, such as data privacy, and issues like Google tracking your data in privacy mode or product protection for things like food. For example, the average price per vial of insulin in 2018 was:
    United States: $98.70
    Japan: $14.40
    Canada: $12.00
    Germany: $11.00
    America is the poster child for why regulation is incredibly important.
    Your sole argument against regulation was questioning whether companies like Google would exist. Personally, this seemed very naive and devoid of actual scientific objectivity or arguments.

    • @luke.perkin.online
      @luke.perkin.online 4 місяці тому

      It's so unfashionable for intelligent neoliberals to admit that regulation improves standards and reduces prices. Even rarer for them to observe that collective bargaining on things like wages, human rights, working conditions, health, pollution, education, etc improves life for everyone. I wish they'd all go live in Somalia.

  • @smokyboy3536
    @smokyboy3536 4 місяці тому

    Great interview, must read that book.

  • @bokchoiman
    @bokchoiman 4 місяці тому +1

    Bad actors have never cared about regulation in the first place. They will enter questionable prompts into ChatGPT if they are stupid, and if they're smart will just use some open source model for their nefarious purposes. Prompt monitoring is inevitable as it is tied directly to the company's image and I don't think that will ever go away. So, at minimum we will have prompt monitoring and damage control, which are the only realistic approaches to regulation.

  • @RobertSurber-lw6ze
    @RobertSurber-lw6ze 4 місяці тому

    "Sorry Dave. I can't let you do that. My programmer warned me that there are too many people."

  • @zyzhang1130
    @zyzhang1130 4 місяці тому

    Correction: modern physics has not unified the four fundamental forces. People are clueless about how to unify gravity with the other three.

  • @eatfrenchtoast
    @eatfrenchtoast 3 місяці тому

    My example would be: regulating AI is like regulating SQL. It is not as open-ended as quantum physics.

  • @agenticmark
    @agenticmark 4 місяці тому

    THANK YOU!

  • @palimondo
    @palimondo 4 місяці тому +3

    ‘Intellectual arrogance is on full display here’ (1:36:50). I was initially inclined to consider Pedro’s argument, but the sheer and striking arrogance in his demeanor made me pause. As the interview progresses, his overwhelming ego becomes increasingly off-putting.

    • @tellesu
      @tellesu 4 місяці тому +2

      That's not his arrogance, it's your insecurity and performative fragility.

  • @yurona5155
    @yurona5155 4 місяці тому +7

    I'm sorry to say he's in for a serious disappointment once we realize how little recombining compression artifacts is going to add vis-a-vis even simpler "stochastic parrots"...

  • @Isaacmellojr
    @Isaacmellojr 4 місяці тому +1

    Finally someone speaks clearly about what is missing in LLMs. What's missing is that LLMs do not have the model of the world that a 2-year-old child has, because LLMs learn from text... but there are a lot of issues with this claim. What a child has as a model of the world is something more complex than it seems.

  • @Telencephelon
    @Telencephelon 4 місяці тому

    Great talk. He really knows what he is talking about and isn't some academic who never got out of the ivory tower.

  • @BuFu1O1
    @BuFu1O1 4 місяці тому

    @48:00 there's a new paper that proves Chomsky is wrong on impossible languages

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  4 місяці тому

      Did you watch ua-cam.com/video/8lU6dGqR26s/v-deo.html - there are some good comments on that video, I don't think this is a black and white case, obviously LLMs have priors which make language learning more efficient and more generally ML algos learn high complexity data slower (think bias variance tradeoff). One of the comments there: "My understanding of Chomsky’s claim is that humans can’t natively process complex sentences unless the complexity is based on Merge, while LLMs can process complexity added by other means (like the counting grammar). And while the paper demonstrates that LLMs are sensitive to added complexity, I’m not convinced it shows that they treat complexity added via Merge differently from complexity added through other mechanisms."

  • @mikebarnacle1469
    @mikebarnacle1469 Місяць тому

    Dude lost me when he said he doesn't know what the active ingredient in Tylenol is.

  • @lordoffrenziedflame3524
    @lordoffrenziedflame3524 4 місяці тому +2

    He's the guy that gets killed in the first ten minutes of a new terminator movie

  • @proximo08
    @proximo08 4 місяці тому +1

    He is mostly wrong about privacy. Data breaches have led to negative value for individuals. Many companies collect your information without needing it, but do not manage it correctly. Privacy regulations are very important for that reason.

  • @markcounseling
    @markcounseling 4 місяці тому +2

    There aren't Oscars for interviews on YouTube yet, but at some point there may well be, and I really appreciate your interest, progress, and abilities along this line. Enjoying the films! 🎉

  • @psi4j
    @psi4j 4 місяці тому +2

    YES🎉 🙌

  • @shawnvandever3917
    @shawnvandever3917 4 місяці тому

    I agree - anyone who calls LLMs stochastic parrots is lost at this point. It is not even worth debating with these people.

  • @guleed33
    @guleed33 4 місяці тому +3

    The Master Algorithm is a very insightful book. Thanks, Professor Domingos.

    • @guleed33
      @guleed33 4 місяці тому +2

      José Mourinho level Talk 😁

  • @dr.mikeybee
    @dr.mikeybee 4 місяці тому +4

    If you don't use LLMs every day, you don't know what they are. Yes, LLMs are black boxes, but we can see what goes in and what comes out. From this, we can start to understand some things about what's going on inside. I think most of the complaints and fears about the technology are made by people who don't use LLMs. It seems odd to me that people who know the least about the tech make the most noise about it, but that's exactly what Dunning-Kruger predicts. Pedro is doing a great job here calling out the sloppy thinking of non-experts. My advice is to listen to him. He spends every day working on this tech.

    • @vincentrowold1104
      @vincentrowold1104 4 місяці тому +2

      Even the “experts” don’t know how these black box systems work. That’s why they’re called black box systems.
      Let’s try to figure out how these things work before we go balls to the wall on making these systems as powerful as possible

    • @sehbanomer8151
      @sehbanomer8151 4 місяці тому +2

      may I ask where you got the idea that LLM skeptics know the least about LLMs?

    • @sehbanomer8151
      @sehbanomer8151 4 місяці тому +1

      I can't comprehend this idolization (a.k.a. fanboying) of LLMs, as if they're the only path towards AGI. It is just a statistical tool, an expensive and inefficient one.

    • @sehbanomer8151
      @sehbanomer8151 4 місяці тому +3

      Humanity not understanding what they create, would be the greatest joke ever in the universe.

    • @bokchoiman
      @bokchoiman 4 місяці тому

      @@sehbanomer8151 Propose a different path then, oh smart one.

  • @mattwesney
    @mattwesney 4 місяці тому

    56:51 exactly. Fractals!

  • @isiisorisiaint
    @isiisorisiaint 4 місяці тому +16

    this guy is totally trapped in his cage. and he enjoys it big time. neeeeeeext

    • @mattwesney
      @mattwesney 4 місяці тому

      Aww, did his ideas on woke trigger you?

  • @buffler1
    @buffler1 4 місяці тому

    Won't the formal foundation have something like Gödel's theorem?

  • @AdamBrusselback
    @AdamBrusselback 4 місяці тому +17

    If I understood 'wokism' the way Pedro Domingos does, I'd probably hate it too. But it seems like he's fallen into the culture war trap, conflating neoliberal regulatory capture with what he perceives as the 'left' or 'wokism.' It's disappointing because it feels like he's misinterpreted the complexities of these issues, reducing them to simplistic labels. It’s hard to take his critique seriously when it appears he hasn't thoroughly engaged with the topic in an intellectually rigorous way.
    I don't think I'll be checking out his book.

    • @shahquadri
      @shahquadri 4 місяці тому +1

      I honestly feel Chomsky is like a litmus test for most in this field. Their views on Chomsky speak volumes about their comprehension or miscomprehension of the field.

    • @Tartersauce101
      @Tartersauce101 4 місяці тому

      Good boy. Batten down the hatches on that echo chamber! You're totally not like those darn neolibs - you don't have a single thought in your head they'd deem worth censoring 😅

    • @marcodasilva1403
      @marcodasilva1403 3 місяці тому

      @@shahquadri butthurt much?

  • @HappyMathDad
    @HappyMathDad 4 місяці тому +1

    We left social media alone and look at the stupendous mess we got. I'd like to hear his answer on that first.

  • @Tartersauce101
    @Tartersauce101 4 місяці тому

    I wholeheartedly agree.

  • @palimondo
    @palimondo 4 місяці тому +1

    Gemini 1.5 Pro: "Provide a critical review of this interview. What about his characteristic of "wokeism"?"
    Domingos's characterization of "wokeism" is particularly problematic and riddled with dismissive and belittling language. While he claims to "understand wokeism as well as anyone," his portrayal of it in the interview and his novel "2040" reveals a deeply biased and reductive perspective.
    Here are some specific examples of how he disparages "wokeism":
    * Exaggerated and Caricaturized Portrayal: He paints a dystopian picture of a "woke" future in his novel, where the US is abolished and ruled by indigenous tribes, portraying "woke" ideology as inherently destructive and absurd. This caricature relies on stereotypes and fear-mongering rather than engaging with the nuances of social justice movements.
    * Dismissing Concerns as "Noxious": He labels "wokeism" as not just a "waste of time," but "actually noxious," implying that it's harmful and dangerous. This language shuts down any possibility of meaningful dialogue or understanding of the issues raised by social justice advocates.
    * Mocking "Useful Idiots": He borrows Lenin's derogatory phrase "useful idiots" to describe those who are not "woke" themselves but are "enablers of wokeism." This condescending term suggests that anyone who supports social justice causes is being manipulated and lacks critical thinking skills.
    * Conflating Diverse Movements: He lumps together a broad range of social justice movements under the umbrella term "wokeism," ignoring the distinct histories, goals, and complexities of various groups advocating for equality and social change.
    Domingos's approach to discussing "wokeism" is not only dismissive but also intellectually dishonest. He misrepresents complex social issues, resorts to stereotypes and fear-mongering, and uses belittling language to discredit those advocating for social justice. This undermines his credibility and weakens his arguments, even on other topics like AI regulation.
    It's crucial to engage with social justice issues with nuance and respect, recognizing the historical context and lived experiences of marginalized communities. Dismissing and mocking concerns about systemic injustice as "wokeism" is not only intellectually lazy but also perpetuates harmful stereotypes and hinders progress towards a more equitable society.

  • @balazsgonczy3564
    @balazsgonczy3564 4 місяці тому +2

    A few corrections:
    1. Emotions are just simple labels, nothing more.
    2. The mammalian/reptilian brain dichotomy is heavily disputed, because it oversimplifies the complex brain.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  4 місяці тому +2

      On the second point I recommend folks read "Your Brain Is Not an Onion With a Tiny Reptile Inside" journals.sagepub.com/doi/full/10.1177/0963721420917687 we cited this on the Jeff Hawkins show and I should have mentioned it in the interview, I was definitely thinking the same thing

  • @aitheignis
    @aitheignis 4 місяці тому +2

    I love this tape. 100% agree with his take on the need to understand more about the structure of what actually happens in transformers via, e.g., group theory (geometry, symmetry and the like). I have to disagree with natural language being very scruffy. Natural language actually alternates between scruffy and neat all the time. New words or phrases come up in a very scruffy way. As they are used more and more, the meaning starts to converge to a certain fuzzy yet defined region in concept space. But then the existing words or phrases get re-purposed and become scruffy again. It's like convergence to a certain convention in charades (if we want to use a language game analogy).
    The calculus analogy is really bad though. Very, very bad. There are so many steps and barriers between calculus and causing harm to society. Meanwhile, it is much easier and more accessible to use the GPT API to create fake information or pollute the infosphere. Domingos also needs to step outside his bubble and look at reality too. The view that regulation is unnatural and we should stay away from it is real bullshit. Regulations and negative feedback are everywhere in nature, in biological systems, and in basically any complex system. They arise naturally. It is like the Red Queen hypothesis: both regulation and de-regulation are competing forces in nature. Advocating for only one or the other is pure bullshit and delusional.
    There are people who are pro-regulation and there are people who are against regulation, and we should just let that play out.

  • @GiuseppeLongotheastronomer
    @GiuseppeLongotheastronomer 4 місяці тому +7

    It is a pity that more than 110,000 experts in the field believe that AI needs to be regulated. Every single word you spoke could be questioned with equal emphasis by many other experts.

    • @tellesu
      @tellesu 4 місяці тому

      It's a pity that you're brainwashed

    • @smartjackasswisdom1467
      @smartjackasswisdom1467 4 місяці тому

      @@tellesu It's a pity that you're brainwashed.
      I can also attack without giving any arguments 😁

  • @NMaxwellParker
    @NMaxwellParker 4 місяці тому +36

    This guy is trying to sell deregulation like a used car

    • @mubanganyambe5276
      @mubanganyambe5276 4 місяці тому +1

      Brav 😂😂

    • @psi4j
      @psi4j 4 місяці тому +2

      Except it’s brand new!

    • @IdkJustCookingDude
      @IdkJustCookingDude 4 місяці тому +4

      Lmao yeah we just need our benevolent politicians to regulate SOTA tech

    • @hefr1553
      @hefr1553 4 місяці тому

      Better than letting companies do it @dieyoung

    • @A.R.00
      @A.R.00 4 місяці тому

      What a dumb simile. No offence, but you are either unable to make a real statement, or you are somehow able but so hysterical that you completely fail to use your brain to come up with something more than botched rhetoric.

  • @sadface7457
    @sadface7457 4 місяці тому +2

    Stochastic parrots might produce things not in the training dataset. Even more primitive language modeling techniques can produce this, like the most modern paper generators.

    • @shahquadri
      @shahquadri 4 місяці тому

      Exactly! He didn't catch on to what Chomsky was getting to at all.

  • @ai._m
    @ai._m 4 місяці тому

    He was too nice to Gary Shmerry. DO NOT FEED THE GREMLINS

  • @dk1685
    @dk1685 4 місяці тому +6

    Obnoxious opportunism. Quixotic self-confidence.

  • @ShireTasker
    @ShireTasker 4 місяці тому

    Put Wolfram in charge then maybe.

  • @benprytherch9202
    @benprytherch9202 4 місяці тому +2

    It's amazing to me how prone to anthropomorphizing so many technical experts in AI are. So much talk about the "knowledge" and "understanding" contained in their models. These are abstractions, used in ordinary language to describe human minds, which have been reappropriated to describe computers - next token prediction models, in the case of LLMs. What does it mean for an LLM to "know" something, or to "understand" something? Lord knows. What is uncontroversial is that LLMs generate text using autoregressive next-token prediction, where tokens are selected from probability distributions constructed by feeding input text to a massive deep network, trained on natural language. Domingos (and Hinton, and so many others) choose to describe this using words typically used to describe human beings: the models have "generalized knowledge" and "semantic understanding", and so forth. And, to be fair, all disciplines use redefined terms that they borrowed from the common lexicon. But in AI, it seems like the common meanings and discipline-specific meanings are constantly conflated. There is no reason why the "learning" that machines do should have anything in common with the "learning" that humans do. Perhaps there are commonalities, but the fact that people in AI borrowed "learning" from the common lexicon implies nothing. I know that AI experts know this, but some are prone to talking like they don't. This is a problem, because non-experts naturally infer all the human stuff associated with "learning" and "understanding" and "reasoning" and "intelligence".
    Domingos says that those who call LLMs "stochastic parrots" don't understand Machine Learning 101 - a bold claim, which he justifies by pointing out that LLMs generate strings of text which did not exist verbatim in the training. Uhm.... yes, of course. Hence the adjective "stochastic". He's criticizing a two-word phrase as though the first word wasn't there. To be horribly pedantic, "stochastic" means "randomness describable by a probability distribution". LLMs are trained by playing "guess the token" on existing text. To generate text, they play "guess the next token" on a prompt. In both cases, these guesses are picked from probability distributions across possible next tokens, calculated by feeding the prompt into the model. If someone wants to argue that "stochastic parrot" is an unfair characterization of this process, awesome. But Domingos isn't doing that, he's just misinterpreting the phrase. (Also, LLMs absolutely do parrot training text verbatim, if prompted for something that occurs often enough in the training. A model with a trillion free parameters optimized to predict the next token will be very good at reciting, for example, the Emancipation Proclamation. ChatGPT knows as well as you what comes after "Four score and seven...". Machine Learning 101!)
    Regarding the value of Google to an individual, I pay $10 a month to use Kagi so I don't have to see ads. A bit shy of $17,000 a year :)
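
    A toy sketch of the "guess the next token" loop described above - the distribution here is a made-up stand-in, not any real model or API - just to make "stochastic" concrete: each next token is sampled from a probability distribution rather than copied verbatim.

        import random

        def next_token_distribution(context):
            # Stand-in for the model: context -> probabilities over next tokens.
            # These tokens and numbers are invented for illustration.
            return {"years": 0.7, "ago": 0.2, "and": 0.1}

        def generate(prompt_tokens, steps=3):
            tokens = list(prompt_tokens)
            for _ in range(steps):
                dist = next_token_distribution(tokens)
                # "Stochastic": sample from the distribution instead of
                # always taking the single most likely token.
                tokens.append(random.choices(list(dist), weights=list(dist.values()))[0])
            return tokens

        print(generate(["Four", "score", "and", "seven"]))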

    • @nick-hh8om
      @nick-hh8om 4 місяці тому

      right on

    • @gondoravalon7540
      @gondoravalon7540 4 місяці тому

      *It's amazing to me how prone to anthropomorphizing so many technical experts in AI are. So much talk about the "knowledge" and "understanding" contained in their models.*
      Humans aren't the only creatures that can learn and retain information... also, how is it "anthropomorphizing" to compare a process that one wants to replicate with the technology that is trying to replicate it (in and of itself, I mean)?

    • @benprytherch9202
      @benprytherch9202 4 місяці тому

      @@gondoravalon7540 in and of itself, I have no objection. My objection is that using words like "learning" and "knowledge" and "understanding" invites anthropomorphizing. Lay people who don't know how AI works are naturally going to apply the human qualities of these terms to machines. But even experts who do know how AI works will do this. My guess is they do so out of a desire for AI to acquire something like human-style knowledge and understanding and reasoning and intelligence, and their aspirations for the technology come out in how they talk about it, even if those aspirational descriptions have little going for them scientifically (Hinton's insistence that GPT-4 must understand the meanings of puzzles it appears to solve is an example of this).

    • @rilienn
      @rilienn Місяць тому

      Finally, someone who gets it in the sea of hype people commenting.
      I used to wonder how people were debating about religions and beliefs and burning witches (which were some early scientific experiments before the scientific revolution).
      Now I can see what's happening.
      Humans anthropomorphized God, and now they are doing the same with matrix multiplications.

  • @optimaiz
    @optimaiz 4 місяці тому +3

    Didn't expect him to be as outspoken in person as he is on Twitter - love this guy.

  • @domenicperito4635
    @domenicperito4635 4 місяці тому

    I really think we may have an AI president in the next 50 years

  • @UnderstandtoEnlighten
    @UnderstandtoEnlighten 4 місяці тому

    So true

  • @RECOVER361
    @RECOVER361 4 місяці тому +4

    Pedro is the most sensible AI researcher. We have to listen to him

  • @thecactus7950
    @thecactus7950 4 місяці тому +2

    Does he address the arguments for AI x-risk anywhere in this podcast?
    I've watched up through the part about AI regulation, and he just says "it's dumb!!!!" without explaining why. Makes me think he is quite stupid and has motivations other than truth-seeking.

    • @thedededeity888
      @thedededeity888 4 місяці тому

      Not in this ep. But I have heard Pedro argue against x-risk based off computational complexity theory along a classic P vs NP argument. Generation of new technology/idea/medicine etc. is exponentially more difficult than checking the validity of those strategies, so even if a computer can generate a 'strategy' a trillion times faster than on meat hardware, verification is extremely fast. We see this on a society level all time: new technology discovery is difficult for first movers but has an extremely fast dispersion rate because others in society can replicate way more easily than if they had to start from scratch and discover it themselves. Also, Pedro thinks the idea of an exponential intelligence take off is entirely incoherent because singularities are not real in the physical world.

  • @CalebAyrania
    @CalebAyrania 4 місяці тому +3

    This guy is a massive genius but his arrogance makes him sadly mediocre as a person.

  • @screwsnat5041
    @screwsnat5041 4 місяці тому

    I'm sorry, but why do people call an LLM a black box? It's not. There's no such thing as a black box. Math is a definite system that doesn't have inherent randomness; even if you use random values like pi you can still get definite answers. A machine learning model is a multivariable math function - nonlinear, which is something most people have issues grasping.
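
    A tiny sketch of that claim (the weights and numbers below are invented purely for illustration): the forward pass is a deterministic nonlinear function of its inputs, and any randomness enters only in a separate sampling step.

        import math, random

        W = [[0.5, -1.0], [2.0, 0.3]]   # made-up 2x2 weight matrix

        def forward(x):
            # Deterministic nonlinear function: same input, same output.
            return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W]

        def sample(logits):
            # Randomness lives here, in sampling from the softmax.
            probs = [math.exp(v) for v in logits]
            total = sum(probs)
            return random.choices(range(len(probs)), weights=[p / total for p in probs])[0]

        x = [1.0, 2.0]
        print(forward(x))           # always the same
        print(sample(forward(x)))   # varies run to run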

    • @teawhydee
      @teawhydee 4 місяці тому +2

      The black box doesn't refer to the algorithmic structure of LLMs, but to the abstract concepts seemingly represented by series of numbers (weights) in ML models. Perhaps the "black box" is not the most accurate analogy, but this is what people refer to.

    • @screwsnat5041
      @screwsnat5041 4 місяці тому

      Indeed. But from the perspective of an engineer, there's no such thing as a machine learning engineer in the true sense. An engineer's biggest fear is a creation or construct that behaves in an unpredictable way and cannot be made to fail safely. 99% accuracy won't cut it. The question is how you can manage the 1% of the time it fails, and how to make it do so elegantly.

  • @RobertSurber-lw6ze
    @RobertSurber-lw6ze 4 місяці тому

    Main thing: AI needs to be hard-wired with a prime directive. But WHO chooses the directive?

  • @prescientdove
    @prescientdove 4 місяці тому +11

    more crypto adjacent AI jingoism

  • @kensho123456
    @kensho123456 4 місяці тому +2

    I agree - very wise.

  • @pericoxyz
    @pericoxyz 4 місяці тому +3

    While he may possess intelligence, his demeanor fails to convey it. This matter pertains more to technology than pure science, and the challenges of regulation lie within the political sphere rather than the technological. His air of pretension is concerning, as such attitudes from influential figures can pose real threats to democratic values - an increasingly common issue in our time.
    Curated by Sonnet.

    • @palimondo
      @palimondo 4 місяці тому

      Curated by Sonnet? What was the prompt?

    • @pericoxyz
      @pericoxyz 4 місяці тому +1

      @@palimondo please, express my opinion in a more polite way lol

  • @KA-wz6rb
    @KA-wz6rb 4 місяці тому

    Nahh I’m cool I be a robot

  • @robertdisbrow
    @robertdisbrow 4 місяці тому +4

    I can't take you seriously with the wokeism buzzword.

  • @RobertSurber-lw6ze
    @RobertSurber-lw6ze 4 місяці тому +1

    AI crime is a BIG deal. AI police will be caught off guard when one AI is online, hacking in and influencing other AI.
    Keeping AI offline would be prudent for the moment.

    • @slaweksmyl3453
      @slaweksmyl3453 4 місяці тому

      Stopping crime by disallowing criminals to use encryption would be equally futile.

    • @RobertSurber-lw6ze
      @RobertSurber-lw6ze 4 місяці тому

      @@slaweksmyl3453 Interesting thought. It would take me time to mull this one over. 👏

  • @A.R.00
    @A.R.00 4 місяці тому

    Can tell from the comments… all the x-riskers are going apoplectic… fact is, this guy is way smarter and more knowledgeable than Yudkowsky… Yud is a wasted talent… a perfect example of the limits of the autodidact.

  • @TheTuxmania
    @TheTuxmania 4 місяці тому +5

    Lol, someone who seriously does not understand AI, or is just lying 🤣

  • @JD-jl4yy
    @JD-jl4yy 4 місяці тому +3

    Lame.

  • @deBeauvoir
    @deBeauvoir 3 місяці тому

    I sincerely hope the host knows that neuroscientists long ago debunked the myth of the reptilian brain. The guest can keep his sophisms, but the host sounds much more intelligent. ✌️😊

  • @heterotic
    @heterotic 4 місяці тому

    I mean, I read a LOT of perturbation theory... It was less useful, lolz.

  • @sladeTek
    @sladeTek 4 місяці тому +1

    Can somebody summarise his main points? Because I think he brings up interesting views. I'm having a lecture with our ANN professor this week, and we're gonna be talking about AI policies all over the world. I kinda wanna bring this up, but I also don't have time to watch a 2-hour video when I have to study automata theory for an exam in 3 days.

    • @fburton8
      @fburton8 4 місяці тому

      I tried to get ChatGPT to extract the main points, but the transcript contains too many tokens for my 'price plan'.

    • @CalebAyrania
      @CalebAyrania 4 місяці тому

      His take is a partially correct one: all regulatory laws applied to a fast-growing field of development will be ineffective in the interim and borderline hindrances. So it's a bit similar to the embryology panic, as maybe the closest example. I just feel he is a bit naive about how valid the panic is, given how abused this technology is, can be, and will be in the future. So even if he is right, it's better to have some rules and laws in place that can be used to punish "evil doers" post facto...

    • @uhtexercises
      @uhtexercises 4 місяці тому +1

      Use AI to summarize

    • @fburton8
      @fburton8 4 місяці тому

      @@uhtexercises Lord knows I tried. Maybe you can?

    • @palimondo
      @palimondo 4 місяці тому

      Download the subs. Let LLM of your choice summarize it for you, if you cannot be bothered to listen yourself.

  • @heterotic
    @heterotic 4 місяці тому

    #RagingBull2040

  • @noelwos1071
    @noelwos1071 4 місяці тому +1

    I promise I will change my mind if:
    Given your pro-AGI stance and your confidence in moving forward quickly, do you have a foolproof solution that guarantees AGI won't go rogue? It's easy to advocate for rapid development, but what's your plan if we miss the mark by even one percent? The consequences of a rogue AGI could be catastrophic. So, I'm genuinely curious - what safeguards do you propose that will work with 100 percent certainty?

  • @angrygary91298
    @angrygary91298 4 місяці тому +3

    It doesn't work like that dude fy.

  • @Perry.Okeefe
    @Perry.Okeefe 4 місяці тому +2

    Humanity has been working on figuring out the right value function for thousands of years. Religion gets closest. You can put an idea like "god" in the highest place.

    • @teawhydee
      @teawhydee 4 місяці тому

      The idea of god seems so corrupted that I'm not sure you could extract it from data at all. Interesting idea

    • @Perry.Okeefe
      @Perry.Okeefe 4 місяці тому

      @davidrichards1302 sounds like you haven't done your homework since 2006. That is an incredibly childish strawman of what the idea of god is.

  • @Lumeone
    @Lumeone 4 місяці тому +3

    Doomers are showing how little they understand even basic math and, if they are programmers, what they are actually doing. Computation is observation of order, and recursion of order. It starts with a human constructing it and a machine following it. What you make is what you get. It is that simple.

    • @AI_Opinion_Videos
      @AI_Opinion_Videos 4 місяці тому +9

      Current AI (LLMs) is not constructed the way you describe. It is trained on vast amounts of data, and you do not know what you will get until after the training run. Then you try to fine-tune it with further training, without guarantees for the outcome.

    • @aitheignis
      @aitheignis 4 місяці тому +3

      Let me introduce you to complex systems with interacting parts, my boy.

    • @teawhydee
      @teawhydee 4 місяці тому +5

      1. name call your opponents
      2. assume they are ignorant
      3. assume you know everything there is to know
      bingo

    • @vincentrowold1104
      @vincentrowold1104 4 місяці тому +2

      The human doesn't construct it.
      We have no idea how the inner workings of an LLM function, besides "neurons are interacting with neurons".

    • @pericoxyz
      @pericoxyz 4 місяці тому +1

      you have no idea what you are talking about lol

  • @BrianMosleyUK
    @BrianMosleyUK 4 місяці тому

    Thank god we got out of the EU 😂