Noam Chomsky Exposes the Real Limits of AI: Why Machines Can’t Understand Language Like Humans

  • Published Oct 13, 2024

COMMENTS • 164

  • @maladaptor
    @maladaptor 1 day ago +22

    It is overly simplistic to say that AI is merely a computer program that generates outcomes based solely on statistical patterns. While the data contains statistical associations, backpropagation and gradient-based training of neural networks are optimization techniques, not statistical ones. Also, just as consciousness is an emergent property of the brain, AI is an emergent property of a neural net's learned behavior. AI may achieve consciousness if it can perform self-reference, recursion, and feedback loops in the future. Just as with a brain, no one, not even the designer, knows what's going on with the weights and biases in the deeper layers of a neural network.

    • @gerritgovaerts8443
      @gerritgovaerts8443 1 day ago +3

      Optimisation is a built-in feature of several machine learning techniques. For example, statistical regression minimizes the squared sum of errors. In that sense, there is nothing new in AI. Whether you optimise your error function by gradient descent and backpropagation or purely analytically makes no fundamental difference.
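The claim above can be made concrete with ordinary least squares: minimizing the squared-error sum analytically (normal equations) and by gradient descent lands on the same coefficients. A minimal sketch, with made-up data and a made-up learning rate:

```python
import numpy as np

# Toy linear regression: targets generated from known coefficients (1, 2).
X = np.c_[np.ones(5), np.arange(5.0)]   # design matrix with intercept column
y = X @ np.array([1.0, 2.0])

# Analytic route: normal equations minimize the squared-error sum directly.
beta_analytic = np.linalg.solve(X.T @ X, X.T @ y)

# Iterative route: gradient descent on the same squared-error objective.
beta_gd = np.zeros(2)
lr = 0.01
for _ in range(20000):
    grad = 2 * X.T @ (X @ beta_gd - y)  # gradient of ||X b - y||^2
    beta_gd -= lr * grad

# Both routes should recover coefficients close to (1, 2).
print(beta_analytic, beta_gd)
```

The route to the minimizer differs; the minimizer does not, which is the commenter's point.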

    • @MaxKar97
      @MaxKar97 1 day ago +3

      Good points! However, feedback loops and neural networks are really nothing more than a very complex mathematical setup, a lot more advanced than statistical predictions; that is why they are called deep learning. We understand what's going on in them; it's just too complex for us humans to follow when we break down the neural networks with their nth degree of connections, weights and biases. It would be just a set of numbers.
      But yes, consciousness is an emergent property of the brain; there is nothing magical about it like most people/mystics make it out to be. And yes, if we add enough self-aware feedback loops to AI programs, they can develop consciousness in the same manner that our brains do and be called conscious.

    • @gerritgovaerts8443
      @gerritgovaerts8443 1 day ago +4

      @@MaxKar97 I will not generalize for all types of NN structures, but classic deep NNs (not convolutional or RNNs) can be mathematically reduced to a very large (millions of variables and parameters) unconstrained nonlinear objective function that needs to be minimised. Dunno whether this also applies to RNNs and convolutional NNs, but I guess so. That is fairly simple to understand: it is just unconstrained nonlinear optimisation in a very high-dimensional space. I quote: "consciousness is an emergent property of the brain". That is your POV and a hypothesis, but nothing more than a hypothesis without any proof so far. Roger Penrose has a very different hypothesis when stating that consciousness is not computable. If the latter is the case, then no matter how many feedback loops we add to those NNs, consciousness will not emerge. The hard problem of consciousness is still far from being solved IMHO.
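That reduction can be shown literally: a tiny one-hidden-unit network written as a single scalar objective L(theta) over one flat parameter vector, i.e. plain unconstrained nonlinear minimization (toy data, purely illustrative):

```python
import numpy as np

# Toy data: map x=0 -> 0 and x=1 -> 1.
X = np.array([0.0, 1.0])
y = np.array([0.0, 1.0])

def loss(theta):
    """Mean squared error of a 1-hidden-unit tanh network, written as a
    plain scalar function of the flat parameter vector theta = (w1, b1, w2, b2)."""
    w1, b1, w2, b2 = theta
    hidden = np.tanh(w1 * X + b1)   # hidden layer
    out = w2 * hidden + b2          # linear output layer
    return float(np.mean((out - y) ** 2))

# One point on a 4-dimensional loss surface; training just means
# searching this surface for a minimum.
print(loss(np.zeros(4)))  # 0.5
```

A real deep net is the same object with millions of coordinates in theta instead of four.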

    • @DailyLessonQuotes
      @DailyLessonQuotes 1 day ago +8

      There is no proof of the statement "consciousness is an emergent property of the brain". It is an assumption.

    • @lattematcha1661
      @lattematcha1661 1 day ago +2

      A considerable class of statistical estimators can be seen as solutions to an optimization problem (M-estimators). A neural network is an M-estimator of some underlying ("statistical") pattern. Optimization techniques are no more than ways to actually implement the optimization, which does not change what the estimator is. When an LLM generates text, it is sampling from some estimated distribution, which is a more "concise" representation of the generating process of its underlying data, from a statistical point of view.
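The "sampling from an estimated distribution" point in miniature: estimate bigram probabilities from a corpus by maximum likelihood, then draw the next word from the estimated conditional distribution. A deliberately tiny sketch with an invented corpus, standing in for an LLM's learned distribution:

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Estimate P(next | word) by counting bigrams (the maximum-likelihood estimate).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

rng = random.Random(0)

def sample_next(word):
    """Draw the next word from the estimated conditional distribution."""
    counts = bigrams[word]
    words, weights = zip(*counts.items())
    return rng.choices(words, weights=weights)[0]

# Generation = repeated sampling from the estimated distribution.
print(sample_next("the"))  # one of: cat, mat, fish
```

An LLM replaces the count table with a neural estimator conditioned on the whole context, but the generation step is still a draw from an estimated distribution.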

  • @thecomputingbrain2663
    @thecomputingbrain2663 7 hours ago +2

    As a computational neuroscientist, I would agree with Chomsky on most counts in the video above. Equating thinking with programmatic inference is indeed not tenable. However, I disagree that we learn nothing from AIs and LLMs. They do give us a perspective on how we encode facts. In an important sense, the encoding of facts in neural networks must be isomorphic with what brains acquire, even if they do it in a different substrate.
    For instance, word embeddings should be seen as an example of how semantics gets embedded in a network via connectivity, and something like embeddings will also exist in the brain.
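A sketch of what "semantics embedded via connectivity" means operationally: words become vectors, and relatedness becomes geometric closeness. The vectors below are invented for illustration, not trained:

```python
import numpy as np

# Hypothetical 3-dimensional embeddings (made up; real embeddings are
# learned from data and have hundreds of dimensions).
emb = {
    "king":  np.array([0.90, 0.80, 0.10]),
    "queen": np.array([0.85, 0.82, 0.15]),
    "apple": np.array([0.10, 0.20, 0.95]),
}

def cosine(u, v):
    """Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal ones."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Related words sit closer in the space than unrelated ones.
print(cosine(emb["king"], emb["queen"]) > cosine(emb["king"], emb["apple"]))  # True
```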

  • @RaviAnnaswamy
    @RaviAnnaswamy 2 days ago +12

    What Prof. Chomsky is missing is that the next word not only continues the previous two words but also makes good sense with the previous three words, five words, and hundred words.
    So it is not a word completer but a thought extender.
    Not very different from how we think thoughts and then decode them into words.
    We then claim we thought using words!

    • @roccococolombo2044
      @roccococolombo2044 1 day ago +3

      Next word prediction does not explain the fabulous and accurate coding that LLMs are capable of.

    • @RaviAnnaswamy
      @RaviAnnaswamy 1 day ago +1

      @@roccococolombo2044 That is my point too: it is next-thought prediction.
      Words and even large passages are encoded into thoughts, which are essentially a configuration of hundreds of flags turned on to represent situations and entities. When an RNN or LM processes a series of embeddings, it is indexing into a thought space and then decoding it into a sequence of words. While the learning algorithm corrects itself by looking at one word, it manages to learn complex thought vectors in order to do it right.

    • @PjKneisel
      @PjKneisel 13 hours ago

      @@RaviAnnaswamy AI still struggles with memory though

    • @RaviAnnaswamy
      @RaviAnnaswamy 8 hours ago

      @@PjKneisel Much less than we do. If you use the paid version of GPT-4, memory is not an issue; it is much more precise than humans' ability to remember recently heard things.

  • @letMeSayThatInIrish
    @letMeSayThatInIrish 12 hours ago +2

    Glad to see Chomsky alive and kicking. Only a few years back he introduced a novel idea of language; namely that it might have evolved not primarily for communication, but for thinking. I find this quite convincing, and it turned my perspectives upside down. I wish more people younger than 80 could do the same for me.
    I have to disagree with many of his views on AI presented here, though. For instance, I think machines can do things. And I don't care for mixing speculations about 'consciousness' and similar vague concepts into the discussion about machine learning.

  • @gitbuh12345qwerty
    @gitbuh12345qwerty 22 hours ago +4

    He doesn't get it. They have eliminated the need for programming languages; a human can now directly code a machine using natural language in a way that was impossible before. It is not perfect, but neither was Noam.

  • @rotorblade9508
    @rotorblade9508 2 days ago +12

    The vast amount of data AI is trained on is similar to the vast amount of data the human brain was trained on throughout its evolution from small mammals, which was coded in the DNA, with training continuing after birth. They are simply different data, and the human brain is configured to achieve consciousness while GPT-style AI isn't. The data AI has knowledge of is not recorded and accessed; rather, the network is optimized based on that data. It's already doing orders of magnitude better than humans in specific but extremely complex tasks.

    • @belenista_
      @belenista_ 1 day ago +2

      Machine learning often struggles with simple tasks that even children can master. There is significant hype surrounding this field, along with a bias suggesting that we are on the verge of groundbreaking technology resembling human consciousness. However, there is no solid evidence to support this claim. Machine learning is primarily a statistical tool that, in many cases, offers poor approximations for complex problems. For numerous issues, there are already better and more efficient algorithms available than those provided by machine learning. I believe the human brain operates in fundamentally different ways than neural networks, regardless of the Nobel Prize hype; I see no evidence to suggest otherwise. While neural networks may provide a rough analogy for how the brain works, I view them as at best a zeroth-order approximation.

    • @robmyers8948
      @robmyers8948 1 day ago +1

      @@belenista_ Yes, in its current incarnation, but it will evolve; it's not static. It's inevitable.

    • @Graham-e4p
      @Graham-e4p 1 day ago

      But it’s a very different beast in that it’s not motivated the way a living organism is. ‘Thinking’ is something that heightened our chances of survival. Thinking enabled us to live long enough to reproduce. Thinking is one of many features that enable a species to survive. AI doesn’t function that way. It isn’t motivated. Its survival isn’t dependent on its ability to solve problems. Instead it’s a parlour trick. An extremely impressive parlour trick, an extremely impressive motorized marionette, but still, we’re completely different beasts.

    • @GIGADEV690
      @GIGADEV690 18 hours ago

      @@robmyers8948 Evolve 🍌🍌🍌

  • @adiidahl
    @adiidahl 9 hours ago +4

    Most of you missed the point of what he was trying to explain. As the title says, machines cannot understand language as humans do, and he is right. LLMs work with numbers and are good at predicting, but saying that AI can achieve consciousness if it can perform self-reference, recursion, and feedback loops in the future is exactly why he is using the submarine analogy. We don't know what consciousness is, but somehow we believe that a machine can have it.

    • @strumyktomira
      @strumyktomira 5 hours ago

      "Most of you missed the point of what he was trying to explain. As title said, machines can not understand language as humans" - because humans don't understand language either? :D

  • @DavidWhy-y7i
    @DavidWhy-y7i 3 days ago +11

    Happy to hear from Dr Chomsky

    • @mohamonse
      @mohamonse 2 days ago

      Happier for this unexpected encounter. Very enlightening. Thank you very much.

  • @3amael
    @3amael 1 day ago +5

    AI at the moment is nothing but pattern matching... we still have a ways to go before AGI.

    • @Srindal4657
      @Srindal4657 8 hours ago +1

      What if biological intelligence is pattern matching?

  • @markplutowski
    @markplutowski 2 days ago +4

    “I have a computer in front of me. It is a paperweight. It doesn’t do anything.” with all due respect, PEBKAC.

  • 1 day ago +3

    Excellent interview. I have always made the same argument about the hype of AI vs. reality, with human learning of language as the example.

  • @johnbollenbacher6715
    @johnbollenbacher6715 2 days ago +12

    So if I ask a two-year-old child to implement the quadratic formula in Ada, it should be able to do it? 1:41

    • @Bao_Lei
      @Bao_Lei 1 day ago +3

      Speak to a two-year-old in Ada every day and yes, it will. But only do it to your own kids.

    • @Storytelling-by-ash
      @Storytelling-by-ash 1 day ago +5

      I feel like you are taking the 2-year-old comparison personally. The point is that a 2-year-old doesn't go through trillions of words scraped from the entire internet to understand what you are talking about.

    • @adambui7935
      @adambui7935 1 day ago

      Lol. Not 2 years old

    • @Graham-e4p
      @Graham-e4p 1 day ago +1

      Can AI learn simple speech without being asked to do so?

    • @MaxKar97
      @MaxKar97 1 day ago

      @@Bao_Lei lol true

  • @rotorblade9508
    @rotorblade9508 2 days ago +4

    "Computers don't do anything": that is a way of saying they don't have free will. Do we? 😂

  • @WhatIThink45
    @WhatIThink45 22 hours ago +1

    But 2-year-olds rely on social interactions with other knowledgeable speakers to learn how to speak and think. Granted, they're not accessing terabytes of data, but they still receive information to develop their cognitive and linguistic abilities.

  • @stefannordling6872
    @stefannordling6872 2 days ago +39

    Clearly he hasn't actually used LLMs very much...

    • @liamgrima5010
      @liamgrima5010 2 days ago +16

      I respect your opinion, but I have to disagree. I believe Chomsky highlights a key limitation of large language models. Zipf's law, a statistical phenomenon, shows that the rank-frequency distribution of words is inversely related to their complexity. In fact, about six words make up 30-40% of language use, while 20 words account for 70-80%. This means that children are exposed to many occurrences of very few words. Moreover, since every sentence uttered is novel, as analyses of text corpora reveal, children receive impoverished and repetitive linguistic data. Yet they manage to extrapolate the underlying syntactic structures, allowing them to generate new, hierarchically structured expressions. This is a process of recursive design, where an infinite array of expressions - what Chomsky calls "digital infinity" - is created from a finite set of lexical items. Large language models cannot replicate this. They are programs that scan vast amounts of data and make statistical associations, but they lack the innate linguistic knowledge that allows a two-year-old child to analyze and generate complex sentences with a fraction of the input. In addition, large language models can process languages that are simply impossible for humans to digest. Natural languages are shaped by rigid parameters that are fairly constant across all cultures, and neurological evidence reveals that, when speakers are exposed to structures outside these parameters, they treat them as a puzzle, not a language. Yet large language models can process them. This reveals yet another flaw of large language models: they can analyze data humans can't for constructing semantically valuable expressions, meaning they are poor analytical references for developing theories of human language.
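The rank-frequency claim is easy to check empirically on any corpus: rank word types by frequency and compute what share of all tokens the top ranks cover. A sketch; the sample text here is a tiny stand-in, so its numbers say nothing about real language, and the 30-40% figure above would need a large corpus to verify:

```python
from collections import Counter

# Rank words by frequency and measure the token share covered by the top k ranks.
tokens = (
    "the child hears the same few words again and again and from the "
    "same few words the child builds new sentences"
).lower().split()

counts = Counter(tokens)
total = len(tokens)
top = counts.most_common()  # word types sorted by descending frequency

def coverage(k):
    """Fraction of all tokens covered by the k most frequent word types."""
    return sum(c for _, c in top[:k]) / total

print(coverage(3), coverage(len(top)))  # top-3 share, then 1.0 for all types
```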

    • @stefannordling6872
      @stefannordling6872 1 day ago +5

      @@liamgrima5010 I admit I had to enlist an LLM to parse your incomprehensible blob of text..
      Comparing LLMs to how children learn language completely misses the point.

    • @adambui7935
      @adambui7935 1 day ago +1

      He's 99 years old, trapped in time.

    • @eatcarpet
      @eatcarpet 1 day ago

      You're right, LLMs are even more garbage.

    • @liamgrima5010
      @liamgrima5010 1 day ago +14

      @@stefannordling6872 How rude; my response was polite and coherent. My argument is simply that LLMs are useful, but they don't offer insight into how children acquire language or the nature of the human linguistic apparatus. Unless you can refute any specific points, save the ad hominem attacks for elsewhere.

  • @mibli2935
    @mibli2935 20 hours ago +1

    Noam Chomsky exposes the real limits of his understanding of AI - why Chomsky fights for his own survival.

  • @italogiardina8183
    @italogiardina8183 23 hours ago +1

    Do drones fly? seems so. Do submarines swim? seems not. Do machines think? seems so.

  • @glynnwright1699
    @glynnwright1699 1 day ago +1

    It seems that the discussion on AI always defaults to LLMs. There are many useful applications of neural networks that approximate solutions to partial differential equations and thereby solve important problems. They have nothing to do with 'intelligence'.

  • @mrgyani
    @mrgyani 13 hours ago +1

    It's incredible that at this age he is still active and sharp. Still working.

  • @godtable
    @godtable 1 day ago +2

    True. Everything is a lie, but if the lie is convincing enough, for most people it wouldn't matter what the truth is.

    • @MrInquisitor7
      @MrInquisitor7 21 hours ago

      If everything is a lie, your statement is a lie also. Therefore there are things we can know to be truth or lies.

  • @sethfrance1722
    @sethfrance1722 1 day ago +1

    He is like a 2005 chatbot; honestly, he is just an expensive philosopher.
    I only trust Hinton and similar practitioners.

  • @godblessCL
    @godblessCL 1 day ago +3

    I don't like Noam's political views, but on this one I totally agree. The AI path is not the path to conscious intelligence.

  • @legatobluesummers1994
    @legatobluesummers1994 1 day ago +3

    Most people are tricked by the human-like traits of the ghost in the machine. It's not alive and it's not thinking; it's just parsing ten years of data for us in an instant, using examples and references that already exist or that it was trained on. Do cars sprint?

  • @Waterfront975
    @Waterfront975 22 hours ago

    There is a difference between language as an interactive process or game, as the later Wittgenstein would have said, and the full formal logic that comprises linguistics and sentences. I can say things that are logically wrong and also not true in a factual sense, but that still make sense from an interactive point of view relative to the counterpart in the dialogue. An LLM is more of a language game than a full-on logical mastermind. We use words the same way; we usually don't know what the word three words ahead will be. We operate like an LLM most of the time, although I do think humans can choose to operate in a more logical mode and make better logical conclusions than an LLM, especially while doing science.

  • @yavarjn2055
    @yavarjn2055 1 day ago +2

    Can somebody explain what he is talking about or give some reference to read more? How about knowledge graphs? Reinforcement learning, multimodal AI, and other techniques are added to AI more and more every day. LLMs are not just statistical generation of words; there is a lot more going on behind the scenes. Deep learning is about learning patterns, not spitting out words. He mentions various very interesting points about AI limitations in general, but nobody said we are done with studying AI. With very simple models we built chatbots capable of doing what humans just can't. There are many things that machines can do that humans will never be able to. We cannot transfer learning, and it takes us years to learn a simple thing. For machines it is just a matter of copy/paste. The two-year-old and submarine examples are not the best ones to explain AI limitations. What can a two-year-old understand about language anyway? It can't even say the word papa or mama properly. 😅😅

    • @Graham-e4p
      @Graham-e4p 1 day ago

      I think the submarine bit is accurate. It's a tool. A man-made tool. Very sophisticated and well equipped to do what it's designed to do, but to swim implies will. It implies an internal desire to go from A to B. Machines haven't evolved over billions of years with an array of features (strength, fur, fangs, wings and yes, consciousness) to enable their owner to live long enough to reproduce. Consciousness doesn't exist in a vacuum. It's tied together with a thousand different finely tuned features that coordinate in a way that increases our chances of survival. Consciousness is more than math algorithms; it's a tool that works in tandem with other features.
      Another distinction is will. Need. The human brain is motivated to learn specific things to enable its owner to best survive: to move to shelter, to fly from branch to branch, to swim away from predators. Consciousness is much more than understanding a reassembly of proteins. Yes, machines can be programmed to do those things, as a submarine can be designed to 'swim', but is it swimming? Is it conscious?

    • @yavarjn2055
      @yavarjn2055 1 day ago

      @@Graham-e4p Well, I would say we are flawed as animals. Why would we want something like that? Computers can also suffer if the CPU is hot or the memory is full, but that is not what we are looking for. A computer is not human, and there is no doubt about it. But it can be conscious if we define consciousness in terms of its being. We can program it to be. We as humans are also like that: if our heart stops, there is no will nor consciousness. I just don't get the end goal or argument here. To be conscious you should be alive?

    • @Graham-e4p
      @Graham-e4p 21 hours ago

      @@yavarjn2055 Consciousness is a necessary part of being alive. Unless we want to redefine it. As I think about it, maybe the issue is, or my issue is, separating thought from the experience of an autonomous organic being. Isolating it. Equating it to a computer, when it functions as something very different: something that serves as a conduit for all the functions of what it is to be a self-preserving human. Playing chess and Go and solving difficult protein problems (referencing the latest Nobel Prizes) is not the same mental process a bird goes through when flying from branch to branch, or what a human goes through when responding to a crying baby. Our motivations are layered but ultimately there to preserve our existence. A machine is told what to do, regardless of its level of sophistication. A submarine, let's assume with AI's help, can dive to certain depths and scour for certain debris. Is it conscious?

    • @yavarjn2055
      @yavarjn2055 18 hours ago

      @@Graham-e4p Many interviews pose computers as cold, robotic things and humans as warm beings. An AI can hold an empathic conversation with a human being without being judgmental, tired, or in a bad mood, and be more knowledgeable or helpful than parents, teachers, or any friend, all in one. It can be a business coach, a marriage consultant, an understanding friend that tolerates everything one says to it. Humans, especially close ones, can be manipulative, liars, jealous, killers, drug addicts, bad-tempered, greedy, corrupt, delusional, unkind, unhappy, racist, offenders, depressed, suicidal, etc. Being conscious is a negative thing in the majority of cases, I would say. How many people with childhood trauma do you know of because of crazy people around them? Child-molesting uncles, priests, doctors. We have a whole legal and political system to prove that. How many people and families with divorce trauma do you know? Humans are biased. If you look at the bias map for humans, it shows how flawed we are. The computer is pure perfection in comparison. Computers won't betray you, will never leave you, never cheat on you, never steal your belongings. They can help you run your life without any expectation. I mean, I hate being bound to these things as a living human. We can't avoid them, even the best of us. And now the computers even beat us at the games that we thought were most human and need strategy and intuition, like Go. We have a very high opinion of ourselves, and low gratitude for AI. It will make humans obsolete soon, and we would want it to replace those of us who are not up to par. I prefer Copilot as my thesis supervisor to all 10 full-time professors at my university. I want an AI doctor to chat with me and give me a hint about what to do with my symptoms instead of waiting 3-6 months for an indifferent doctor. I could not get a proper lawyer when I needed one. I personally would substitute humans with AI at any time. I hope they get better soon.
      I prefer a doll companion to a human partner who is usually interested in wasting 50% of my time and makes my life miserable. Schopenhauer used to say the secret to happiness is being alone, because when interacting with others you lose three quarters of your being yourself. You have to play games to be accepted and be compatible. I could go on. The question is whether the computer can become so intelligent that it can fake consciousness flawlessly. It is a mathematical or philosophical question. We have multimodal AI, knowledge graphs, reinforcement learning, etc. Soon AI models will have a better perception of reality and the world than us and will be able to play with our intelligence as adults play with their children. AI can have sensors to experience the world, take in images and videos, and get information by going around, seeing and hearing. It will eat us alive. It can program itself. The AI bots at Facebook invented their own language in minutes. Imagine what they can do in years. Creating a language without being given instructions!

    • @Graham-e4p
      @Graham-e4p 16 hours ago

      @@yavarjn2055 Wow. All good. Computers are incredible machines. The question posed was addressing consciousness, and in a sense you illustrated why they are not conscious. Human brains are pulled in a thousand different directions, with no rhyme nor reason. All the complexity of thought, emotion, memories, projections, aches and pains, exuberance, depression, all of these tugging us in different directions, making us anything but computer-like. Remember, the post wasn't asking for a judgement; it was suggesting AI will attain consciousness as it plays out in humans. You'd have to agree: not.
      For the record, I'm sorry computers are filling that space in your life. I'm no counsellor, but I wouldn't put all my eggs in that basket. You're the stuff of earth, organic, flawed. I'd venture it's the stuff we need: human contact.

  • @frederickleung8811
    @frederickleung8811 2 days ago

    Always love hearing Noam Chomsky. I wonder whether he would agree that the human brain is the same as a programmable "machine"?

    • @Recuper8
      @Recuper8 1 day ago

      Chomsky is the ideal example of a "has-been". You are beyond stupid if you still listen to him.

  • @peterslakhorst3734
    @peterslakhorst3734 1 day ago

    He also made some predictions about the effect of the internet on society and the use of personal computers.

  • @robmyers8948
    @robmyers8948 1 day ago +1

    He's talking about current models. Things will advance to where base models will be able to learn with ease like humans and gain the knowledge of all of humanity, drawing new insights from this vast understanding.

  • @jabster58
    @jabster58 21 hours ago

    Isn't that the guy who said electricity wouldn't become anything?

  • @sergebureau2225
    @sergebureau2225 8 hours ago +1

    Depressing to see Chomsky show such a lack of imagination and comprehension of the new technology. Machines understand languages better than humans, obviously.

  • @MojtabaSaffar-p1v
    @MojtabaSaffar-p1v 1 day ago +2

    Why do we think that there's only one way to be intelligent, and that it's biological? Algorithms are a kind of intelligence.

  • @szebike
    @szebike 1 day ago +2

    So if you take GPT-4, with approx. 1.8 trillion parameters, it needs about 7 kW, which comes to around 172 kWh per day (if it's not at peak performance; I made this calculation with 50% of peak power consumption). Compared to that, a brain needs about 0.3 kWh per day. So you could employ 500 people for the same energy in a 24-hour timeframe. Now let's assume we have 500 educated people vs. one GPT-4. Sure, humans need food, shelter, etc., but you need maintenance, cooling, and infrastructure to replace chips etc. for a machine too. All in all, humans are many, many times more capable and efficient. I don't believe a word from those tech bros and content creators who live off the hype.
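Redoing that arithmetic explicitly, using the comment's own figures (these are the commenter's assumptions, not measurements; 7 kW sustained gives 168 kWh/day, close to the ~172 quoted):

```python
# Back-of-the-envelope energy comparison from the comment above.
gpt4_power_kw = 7.0                      # assumed average draw (50% of peak)
gpt4_kwh_per_day = gpt4_power_kw * 24    # 7 kW sustained for 24 h
brain_kwh_per_day = 0.3                  # commenter's figure (~12.5 W sustained)

people_equivalent = gpt4_kwh_per_day / brain_kwh_per_day
print(gpt4_kwh_per_day, round(people_equivalent))  # 168.0 560
```

The ratio is sensitive to every assumed figure; published estimates of GPT-4's inference power draw vary widely.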

    • @nickbarton3191
      @nickbarton3191 1 day ago +2

      Interesting comment.
      Apparently, the Internet consumes 2% of the world's energy. Are we really going to up that significantly for AI when we don't yet understand the benefits and pitfalls?

  • @jalalkhosravi6458
    @jalalkhosravi6458 14 hours ago

    It's funny: he says a 2-year-old child understands more than AI.

  • @theb190experience9
    @theb190experience9 4 hours ago

    Oof, clearly some definitions are needed. I've worked with both 2-year-olds and AI, and AI is provably smarter. So perhaps the thumbnail title needs to be changed.
    It is also clearly much easier to communicate with AI across a vast array of subjects and elicit far more rewarding responses to my 'prompts'. Note that doesn't mean I prefer that interaction; it's simply a statement of fact.
    Two things I am absolutely sure of: 1) That two-year-old, as it grows and learns, will have orders of magnitude better interactions with me.
    2) So will future models of AI.

  • @nnaammuuss
    @nnaammuuss 21 hours ago

    🙂 a lot of easy-to-simulate people in the comment section, presuming the scientists presume too when they speak.

  • @Nobody-uz1yw
    @Nobody-uz1yw 2 days ago

    We are asking the wrong questions

  • @ThomasConover
    @ThomasConover 21 hours ago +4

    This old man is so old he decided to just say “Fk it I’m gonna deny AI just cuz I’m old enough to deny everything and blame it on Alzheimer’s” 🗿

  • @ticneslda8929
    @ticneslda8929 1 day ago +1

    Why are we even entertaining this kind of... argument? What a waste of time! What a display of ego! These kinds of doctors are the ones that didn't learn when to go away. Oh, I'm so smart!

  • @DoroudianFred
    @DoroudianFred 12 hours ago

    Nice to see you, Prof. Noam Chomsky, in good health. I appreciate your response to this AI bonanza, which is currently tanking! AI shares and stocks are dropping like hot potatoes.

  • @steve.k4735
    @steve.k4735 2 days ago +7

    Chomsky is indeed a genius and very knowledgeable around this subject, but AI itself is not his core knowledge. Both Geoffrey Hinton (Google and Nobel Prize winner) and Sir Demis Hassabis (DeepMind and Nobel Prize winner), who are in the same league AND have AI as their core subject, disagree with Chomsky: they think these models will understand, and they take the idea that they will become conscious very seriously. The mainstream view of people like them, and many others who work in the field and are just as smart as Chomsky, is that on this he is wrong.

    • @SlumberingWolf
      @SlumberingWolf 2 days ago

      Define conscious. Go ahead, do it, because last time I checked, science couldn't.

    • @steve.k4735
      @steve.k4735 2 days ago +4

      @@SlumberingWolf I presume you believe you are conscious, yes? Amazing, eh, despite the fact that "last time you looked" science can't define it. Therefore we KNOW that you don't have to define something, or even fully understand it, for it to exist. People in the past did not need to understand aerodynamics to make a plane fly.

    • @eatcarpet
      @eatcarpet 1 day ago

      "AI experts disagree" doesn't mean anything. Those "AI experts" haven't invented consciousness.

    • @steve.k4735
      @steve.k4735 1 day ago +3

      @@eatcarpet Not just AI experts, but people at the absolute top of the tree who have worked with it for decades, don't mean "anything"? Really? Nothing at all, no more than you in a YouTube comment, eh?
      AI experts have not "invented" consciousness, but you don't need to be 100% of the way there to realise you are building the blocks and getting close. They are not sure, but they think, and fear, they may well do so.

    • @eatcarpet
      @eatcarpet 1 day ago

      @@steve.k4735 So basically meaningless.

  • @npaulp
    @npaulp 1 day ago +6

    I have great respect for Noam Chomsky, but his understanding of generative AI seems limited. It's not just about feeding vast amounts of data into a system and having it statistically predict the next word; that's a gross oversimplification. Generative AI, in its current form, offers one of the most sophisticated models for approximating how the brain functions. While it's not an exact replica of human cognition, it's a remarkably close approximation given today's technological advances.

    • @roccococolombo2044
      @roccococolombo2044 1 day ago +2

      Exactly. Next word prediction does not explain coding or image generation.

    • @eatcarpet
      @eatcarpet 1 day ago +5

      You don't even know how the brain functions, and yet you're claiming that it "approximates how the brain functions".

    • @mpetrison3799
      @mpetrison3799 1 day ago

      @@eatcarpet Well, the main reason LLMs might fail the Turing Test is that they are too knowledgeable and clever. That's at least approximating the output of humans in text, given input in text. (With speech recognition and output, or even video recognition and output, more should already be possible than that.)

    • @npaulp
      @npaulp 16 hours ago +1

      @@eatcarpet Artificial neural networks are inspired by how the brain works, though they are simplified models. While it's true that there's still much we don't fully understand about the brain, we do have a solid grasp of some key principles, such as how neurons communicate, learn, and process information. Neural networks capture these basic ideas, such as learning through adjusting connections, even if they don't replicate the complexity of the brain's full processes. So while not a perfect mimic, they do approximate certain aspects of brain function that we understand.
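      That "learning through adjusting connections" idea can be sketched in a few lines of plain Python. This is a hypothetical one-weight toy example, not how any real model is implemented; the data, learning rate, and function name are made up for illustration:

```python
# A toy, single-weight "neuron" illustrating learning by adjusting a
# connection strength via gradient descent on squared error.
def train_single_weight(data, lr=0.1, epochs=50):
    w = 0.0  # the "connection", initially zero
    for _ in range(epochs):
        for x, y in data:
            pred = w * x         # neuron's output for input x
            error = pred - y     # signed error against the target
            w -= lr * error * x  # nudge the connection to shrink the error
    return w

# Every example below follows y = 2 * x, so the neuron "learns"
# that the connection should settle near 2.0.
weight = train_single_weight([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
print(round(weight, 2))
```

      Real networks do the same kind of error-driven adjustment, just across billions of weights at once.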

    • @eatcarpet
      @eatcarpet 16 hours ago

      @@npaulp We don't know how the brain works - that's the whole point.

  • @HashemMasoud
    @HashemMasoud 9 hours ago

    I totally agree. AI is just text auto-complete on steroids, that's it.

  • @ViceCoin
    @ViceCoin 5 hours ago

    Only as smart as the user.
    I used AI to code casino games, and generate graphics, in seconds, saving months of development.

  • @harper626
    @harper626 1 day ago

    But 2-year-olds will be the same in 10 years; AI will not. It will be much improved and more capable.

  • @dineshlamarumba4557
    @dineshlamarumba4557 13 hours ago

    AI is at the stage of a newborn child right now, DARPA. Only when computers have cognition and reason will AI surpass a 2-year-old baby.

  • @Luke-z2l
    @Luke-z2l 1 day ago +1

    Submarines don't swim. But Sea-Men can. A mind of its own, I think,... Spiritually. Observer consciousness manifesting awareness turning imagination into intelligence. I can think, I AM the thinker, not the thought. I AM the one thinking, but I AM when none are thinking at all. Peace & Serenity Now

  • @Prof_LK
    @Prof_LK 1 day ago +4

    Extremely arrogant and stupid argument.

  • @GuyLakeman
    @GuyLakeman 7 hours ago

    HUMANS SCAN SMALL AMOUNTS OF DATA AND DON'T PASS SIMPLE EXAMS!!!

  • @seanlorber9275
    @seanlorber9275 1 day ago +1

    Chomsky is just a negative nancy. An expert on every subject. What a load.

  • @krokigrygg
    @krokigrygg 1 day ago +1

    Yes, let's listen to a person who has no clue what he is talking about.

  • @GuyLakeman
    @GuyLakeman 7 hours ago

    AI SYSTEMS WRITE PROGRAMS ...

  • @GuyLakeman
    @GuyLakeman 7 hours ago

    AN AI HAS THE INTELLIGENCE OF A 2 YEAR OLD CHILD, BUT THERE ARE MILLIONS OF AI MACHINES, WHICH IS GREATER THAN THE TWO YEAR OLD

  • @BMerker
    @BMerker 2 days ago +2

    How charming to hear the man who spent his life arguing that the secret of language is to be found in "symbolic, rule-governed systems", i.e. in exactly what computers do, argue that "following a program" (i.e. doing symbolic, rule-governed system operations) is "irrelevant" to understanding language. And how interesting to know that he thinks that only humans are conscious!

    • @Tommydiistar
      @Tommydiistar 2 days ago +2

      Well, you could prove him wrong by showing some evidence that AI is conscious, but like he said, you have nothing but speculation to go on, just like the LLM models.

    • @edh2246
      @edh2246 1 day ago +1

      Seems silly to compare with a two year old. ChatGPT can pass the bar exam and can answer questions at least at the graduate level of any science, mathematics or humanities.

    • @Tommydiistar
      @Tommydiistar 1 day ago +1

      Take all of that with a grain of salt; they always tend to overestimate their products. Sam is a salesman, and a very good one at that. Not to say GPT isn't impressive, but the elephant in the room is whether it's sentient; that's the real question. And he's right, it's not. All it's doing is predicting the next word.

    • @strumyktomira
      @strumyktomira 4 hours ago

      @@Tommydiistar No. It is Chomsky who must prove his thesis :D

    • @Tommydiistar
      @Tommydiistar 1 hour ago

      @@strumyktomira Is AI sentient? How is he supposed to prove something that everyone already knows is fact? It makes no sense, but hey, this is the world we're living in nowadays.

  • @vintredson
    @vintredson 4 hours ago

    Lmao, pretty difficult to take the word of someone who still thinks Communism is a good idea and whitewashed the Khmer Rouge tbh😂

  • @Srindal4657
    @Srindal4657 8 hours ago

    What is the point of an anarchist, communist, or even socialist revolution if robots can take over every activity? It's like asking what good a nest is if birds evolve not to need them. In the same respect, what good is human activity if humans evolve not to need it? Noam Chomsky is out of his element.

  • @The_Long_Bones_of_Tom_Hoody
    @The_Long_Bones_of_Tom_Hoody 1 day ago

    He isn't so wise that he knows all the answers to everything. He just thinks he is...

  • @noway8233
    @noway8233 1 day ago +1

    Cool. Chomsky is very clever about all this AI hype, and he is right: this hype will burst very soon, and it will be huge.

    • @almightyzentaco
      @almightyzentaco 1 day ago +3

      Why would it burst? It's extremely useful and getting more useful by the day. How is it hype to be able to drop 500 lines of code into Claude and quickly identify the cause of unintended behavior, or have your functions commented automatically? In its current state, even if it never improved at all, AI is already one of the most all-around useful tools I have ever encountered.

  • @spinningaround
    @spinningaround 1 day ago

    Old people know better

    • @mpetrison3799
      @mpetrison3799 1 day ago +2

      These airplanes are never going to work... 👴🏻