Keynote: Yann LeCun, "Human-Level AI"

  • Published 24 Nov 2024

COMMENTS • 81

  • @Paul-rs4gd
    @Paul-rs4gd 2 days ago

    LeCun is spot on saying that pixel-based generative models don't learn deep world models. They generate images that 'look' good, since they are trained on appearances, but they don't capture the underlying structure. A great example I saw recently was a 'space babe' in a spacesuit that had no seal between the helmet and the suit - the AI generated something more like a motorcycle helmet, because it had no idea that the suit needed to hold air. Another example was a video showing a first-person viewpoint entering a library. Each frame was consistent, but it was plain that the inside was larger than the outside view allowed - the AI had no mental map of the library.
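
A minimal, purely illustrative sketch of the distinction behind this comment: a pixel-space objective must account for every pixel, while a JEPA-style objective (the approach LeCun advocates in the talk) predicts in representation space, where unpredictable appearance detail can be abstracted away. All names, shapes, and weights below are hypothetical.

```python
# Toy contrast between pixel-space prediction and representation-space
# (JEPA-style) prediction. Nothing here is from the talk; it is a sketch only.
import numpy as np

rng = np.random.default_rng(0)

def encode(frame, W):
    """Toy 'encoder': a fixed linear map from pixels to an abstract state."""
    return np.tanh(W @ frame.ravel())

frame_t  = rng.random((8, 8))                      # observed frame at time t
frame_t1 = frame_t + 0.01 * rng.random((8, 8))     # next frame, mostly unchanged

W = rng.standard_normal((16, 64)) / 8.0            # hypothetical encoder weights
P = np.eye(16)                                     # hypothetical latent predictor

# Pixel-space objective: reproduce every pixel, including irrelevant detail.
pixel_loss = np.mean((frame_t1 - frame_t) ** 2)

# JEPA-style objective: predict the *representation* of the next frame, so the
# encoder is free to discard detail that carries no information about the world.
z_t, z_t1   = encode(frame_t, W), encode(frame_t1, W)
latent_loss = np.mean((P @ z_t - z_t1) ** 2)

print(f"pixel-space loss:  {pixel_loss:.6f}")
print(f"latent-space loss: {latent_loss:.6f}")
```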

  • @xianglin4046
    @xianglin4046 1 month ago +1

    Thank For Sharing, You Speak Of Truths, My Brother French Bread.. I Loves Bread

  • @SapienSpace
    @SapienSpace 1 month ago +4

    @17:53 Those are overlapping fuzzy membership values, as used in fuzzy logic. Richard Hamming, who worked on the Manhattan Project, talks about fuzzy logic in his "Learning to Learn" lectures. @25:38 The Joint-Embedding Predictive Architecture seems very similar to this as well. The optic nerve from each eye splits and routes the signal to both hemispheres of the brain; that is a biological instance of the same concept.
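
For readers unfamiliar with the analogy above, here is a small, purely illustrative sketch of overlapping fuzzy membership functions and a centroid defuzzification step; the set names and breakpoints are invented for the example and do not come from the talk.

```python
# Overlapping triangular membership functions and centroid defuzzification.
# Purely illustrative values; not taken from the talk or from Hamming's lectures.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function: rises on [a, b], falls on [b, c]."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

xs   = np.linspace(0, 10, 101)
sets = {"cold": (-5, 0, 5), "warm": (2, 5, 8), "hot": (5, 10, 15)}

x = 4.0                                            # one crisp input...
degrees = {name: float(tri(x, *abc)) for name, abc in sets.items()}
print(degrees)                                     # ...is partly "cold" AND partly "warm"

# Centroid defuzzification: clip each output set at its firing degree, take the
# union, then collapse the fuzzy result back into a single crisp number.
clipped  = [np.minimum(tri(xs, *sets[name]), degrees[name]) for name in sets]
combined = np.maximum.reduce(clipped)
crisp    = float(np.sum(xs * combined) / np.sum(combined))
print(f"defuzzified output: {crisp:.2f}")
```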

    • @nullvoid12
      @nullvoid12 1 month ago +1

      @@SapienSpace No fuzzy logic is mentioned in the entire video. On the other hand, all truths represented in fuzzy logic lie on a continuum; their complexity and approximate nature make them impossible to work with in critical conditions.

    • @SapienSpace
      @SapienSpace 1 month ago

      @@nullvoid12 Yes, fuzzy logic is not mentioned in this video, but Hamming discusses it in his 1995 "Learning to Learn" lecture series on YouTube. I suspect fuzzy logic got "thrown under the bus" as terminology because it is self-incriminating: it admits the opposite of high accuracy, and few like to admit low accuracy. But nothing is "perfect", everything has a tolerance, and fuzzy logic accepts that tolerance. Looking at nature, the optic nerve from each eye splits between the two hemispheres, and the brain merges the two signals (fuzzy membership functions from each eye). Admitting fuzziness is like being on the intelligent side of the Dunning-Kruger curve. The generative "AI" process is much like a layered fuzzification and defuzzification process.

    • @nullvoid12
      @nullvoid12 1 month ago

      @@SapienSpace It's useful in certain cases with the help of fuzzy decision trees, I'll give you that. But there's no notion of proof in fuzzy logic, hence no essence of truth; it's all subjective all the way down. With no proper logical foundation, it can't take us anywhere. Cheers!

  • @BeckieBlanchette
    @BeckieBlanchette 1 month ago +24

    judgmentcallpodcast covers this. Keynote on AI's essential characteristics.

  • @agentxyz
    @agentxyz 1 month ago +8

    He made the Kool-Aid, but seems nervous about drinking it

  • @dhamovjan4760
    @dhamovjan4760 1 month ago +2

    Interesting, but nearly the same talk as in previous years. However, redundancy is essential for learning. One novelty was the guardrail objective on the slide at 13:50.
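
The guardrail objective mentioned here fits LeCun's objective-driven framing: actions are chosen by minimizing a task cost plus guardrail costs evaluated on states predicted by a world model. Below is a deliberately tiny sketch of that idea; the dynamics, costs, and action set are invented stand-ins, not the formulation on the slide.

```python
# Toy "objective + guardrail" planner: exhaustively search short action
# sequences and pick the one minimizing task cost plus guardrail penalties
# applied to every predicted state. All functions and numbers are hypothetical.
import itertools

def world_model(state, action):
    """Toy dynamics: the state is one number and actions nudge it."""
    return state + action

def task_cost(state, goal=5.0):
    return (state - goal) ** 2                     # distance from the goal at the end

def guardrail_cost(state, limit=4.0):
    return 0.0 if abs(state) <= limit else 1e6     # huge penalty outside the safe zone

def plan(state, horizon=6, actions=(-1.0, 0.0, 1.0)):
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(actions, repeat=horizon):
        s, cost = state, 0.0
        for a in seq:
            s = world_model(s, a)
            cost += guardrail_cost(s)              # guardrails checked at every step
        cost += task_cost(s)
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost

# The goal (5.0) lies outside the guardrail (|state| <= 4.0), so the planner
# settles at the boundary instead of violating the constraint.
print(plan(0.0))
```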

  • @novantha1
    @novantha1 1 month ago

    With regard to the amount of data a human has been exposed to versus the amount an AI model has been trained on: it would be really interesting to normalize the amount of data against the number of neurons available at the time that data is incorporated into the model. If a lot of data is incorporated when there are fewer than, say, one billion neurons in a human, I think the information that can be extracted from it is different from what a 100B-parameter AI model could extract from the same data.
    Likewise, the amount of data absorbed when a human has, say, a hundred billion neurons is very different from what an 8B-parameter model can learn (assuming, of course, that neurons and parameters are broadly equivalent, which they appear to be).
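
As a rough illustration of the normalization proposed here (ignoring the timing aspect the comment raises), one can divide total training data by model or brain size. Every number below is an order-of-magnitude assumption, not a figure from the talk.

```python
# Back-of-envelope "data per parameter" comparison. All quantities are loose,
# illustrative assumptions; only the orders of magnitude matter.
llm_tokens     = 2e13      # assumed LLM training-set size, in tokens
llm_params     = 1e11      # assumed model size: ~100B parameters
child_bytes    = 1e14      # assumed visual input reaching a child by age ~4
child_synapses = 1e14      # assumed synapse count (very rough)

# Units differ (tokens vs bytes); the point is only that the ratios diverge.
print(f"LLM:   ~{llm_tokens / llm_params:.0f} training tokens per parameter")
print(f"Child: ~{child_bytes / child_synapses:.0f} input bytes per synapse")
```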

  • @z_enigma
    @z_enigma 1 month ago

    interesting insights.

  • @jmirodg7094
    @jmirodg7094 1 month ago

    Enlightening presentation!

  • @lemurpotatoes7988
    @lemurpotatoes7988 1 month ago

    I like hierarchical RL, but I think the "right" way to do it would require learning an almost algebraic structure that describes how big tasks should decompose into little ones. We'd also need to guarantee that the side effects of the different subtasks played nicely with one another, which has a similar flavor to the guardrail idea (I don't really like the guardrail idea, but I do think that computationally bounded agents should satisfice their values.)

    • @lemurpotatoes7988
      @lemurpotatoes7988 1 month ago

      The no-side-effects requirement is also important for out-of-distribution generalization - ancillary features in the new domain must not break what was already learned. I think better incorporation of constraints into high-dimensional problem solving may be one of the keys to AGI.
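
A toy sketch of the idea in these two comments: a larger task decomposes into subtasks, and each subtask's side effects are checked against shared constraints before it runs. This is a hypothetical illustration, not hierarchical RL proper and not anything proposed in the talk.

```python
# Hypothetical task decomposition with a side-effect constraint check.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Subtask:
    name: str
    effects: Dict[str, float]          # how this subtask changes shared state

def violates(state: Dict[str, float], constraints: List[Callable]) -> bool:
    return any(not ok(state) for ok in constraints)

def run_hierarchically(task, state, constraints):
    for sub in task:
        tentative = {k: state.get(k, 0.0) + sub.effects.get(k, 0.0)
                     for k in set(state) | set(sub.effects)}
        if violates(tentative, constraints):
            print(f"skipping {sub.name}: it would violate a shared constraint")
            continue
        state = tentative
        print(f"ran {sub.name}: state = {state}")
    return state

# "Make coffee" decomposed into subtasks whose side effects must respect a
# shared battery budget (the constraint that ties the subtasks together).
task = [Subtask("boil water", {"battery": -30}),
        Subtask("grind beans", {"battery": -50}),
        Subtask("pour", {"battery": -5})]
constraints = [lambda s: s["battery"] >= 20]       # never drain below 20%

run_hierarchically(task, {"battery": 100}, constraints)
```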

  • @Ikbeneengeit
    @Ikbeneengeit 1 month ago

    He's thinking about how we think

  • @benshums
    @benshums 1 month ago +2

    When was this?

  • @ronvincent5645
    @ronvincent5645 1 month ago +3

    OODA is more sophisticated than what is presented here.

  • @fj103
    @fj103 1 month ago +2

    Not convinced

  • @ordiamond
    @ordiamond 1 month ago +3

    These machines may surpass our intelligence, but we can still control them. How? They are objective-driven; we give them goals.
    So Yann considers giving them goals a way to control super-intelligent AI. I don't see how that's going to work. Yann only says this in the last minute of the talk; he needs another keynote to discuss mainly that part.

    • @Steve-xh3by
      @Steve-xh3by 1 month ago

      Given that "control" as humans commonly use it is a provably nonsensical concept (reality doesn't work that way), it is literally impossible for us to "control" anything. Humans can't even be said to "control" their own behavior in any meaningful sense. Brains follow the laws of physics. Self-awareness is not a control center.

    • @lemurpotatoes7988
      @lemurpotatoes7988 1 month ago

      He doesn't have anything to say about it; I've looked repeatedly. Read Paul Christiano if you're interested in viable routes to safety.

  • @telebiopic
    @telebiopic 1 month ago +5

    Not everybody espouses the same good morals and civic virtues. We need regulation in this space so that we don’t suffer from runaway corporate greed & corrupt security apparatus. The founding fathers never imagined the rise of technocrats.

  • @TyronePost
    @TyronePost 1 month ago

    32:30 “… Repository of all human knowledge… more of an infrastructure than a product…” Key takeaways and early understandings of what it will be like to coexist alongside more Super-Genius life-forms than you could possibly imagine. 👏🏾👏🏾👏🏾

  • @hedu5303
    @hedu5303 1 month ago

    Good to see he is able to talk about AI and not only about Elon or politics…

  • @iganmak
    @iganmak 1 month ago

    For a long time I have considered model-based thinking the next required step in AI development. But I'm not sure that training models to predict changes in world state is a good way to do that. I'd rather use analogue microchips to actually model the world-state transitions caused by actions. We could start by generating as complex a representation as the available hardware allows (for example, one chip), let it run, collect state data at certain time intervals, and track maximums and minimums continuously. This first step could already be delivered to production use cases.
    The next step would be to implement hierarchy. Here the first representation should be as simple as is meaningfully possible; then take the intervals with an unacceptable level of uncertainty and go deeper into detail until the uncertainty is acceptable.
    Of course we'd need models to encode and decode the representations. But is that so hard?
    I think a 10-year timescale for this research is prohibitively long. Clumsy, energy-hungry, but working systems based on existing architectures will appear much earlier. Text-based systems are already capable of generating representations, even if not super accurate. Video-generation models can already be used, to some extent, for predicting physical changes from applied actions. It would only take generating high-quality, purpose-optimized, specialized datasets to achieve pretty decent results. So I think that traditional "pure" scientific processes with decades-long planning would not be very productive for this task.
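
A toy sketch of the hierarchical refinement described above: roll the world state forward with a cheap coarse model, and re-simulate only the steps whose uncertainty is unacceptable using a finer, more expensive model. Everything here is hypothetical; real analogue hardware would replace these placeholder functions.

```python
# Coarse-to-fine rollout: refine only where the coarse model is too uncertain.
import numpy as np

rng = np.random.default_rng(0)

def coarse_step(state):
    """Cheap, low-fidelity transition; returns (next state, uncertainty estimate)."""
    return state + 1.0, 0.5 + rng.random()

def fine_step(state):
    """Expensive, high-fidelity transition with small uncertainty."""
    return state + 1.0 + 0.01 * rng.standard_normal(), 0.05

def rollout(state, steps=10, tolerance=1.0):
    states = [state]
    for _ in range(steps):
        nxt, unc = coarse_step(states[-1])
        if unc > tolerance:                        # go deeper only where needed
            nxt, unc = fine_step(states[-1])
        states.append(nxt)
    return states

print([round(s, 2) for s in rollout(0.0)])
```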

  • @Paulus_Brent
    @Paulus_Brent 1 month ago

    Fascinating, but I don't think this will lead to AGI. Understanding is much more than just predicting.

    • @Paul-rs4gd
      @Paul-rs4gd 2 days ago

      Can you give an example of where understanding cannot be broken down into the ability to predict?

    • @Paulus_Brent
      @Paulus_Brent 2 days ago

      @@Paul-rs4gd Think of Searle's Chinese room thought experiment. One can predict everything, and yet understand nothing.

  • @AlgoNudger
    @AlgoNudger 1 month ago

    AGI? 😂

  • @FruitPrut
    @FruitPrut 1 month ago +1

    Why is the confidence level so low?

    • @webgpu
      @webgpu 1 month ago +1

      What makes you think he's not confident?

    •  1 month ago +1

      His latest bets on world representations based on JEPA haven't really taken off.

    • @detective_h_for_hidden
      @detective_h_for_hidden 1 month ago

      I believe he said we would get news about their progress next year? TBF he always said JEPA is just the beginning and it would take time.

    • @webgpu
      @webgpu 1 month ago

      Why am I not surprised by a progressist's failed estimate...

    • @webgpu
      @webgpu 1 month ago

      happy, good, like it

  • @SydneyApplebaum
    @SydneyApplebaum 1 month ago +11

    If it isn't Twitter psycho Yann LeKook

  • @MitchellPorter2025
    @MitchellPorter2025 1 month ago +13

    Yann has good ideas on how to make "human-level AI", but his ideas about the consequences are extremely unrealistic - I mean the part about how it will still be humanity's world and we'll all just have AI assistants. Human-level AI means nonhuman beings that are at least as smart as humans, making their own choices, and it almost certainly means nonhuman beings much smarter than any human.

    • @ganeshnayak4217
      @ganeshnayak4217 1 month ago +1

      Why is a world where human-level AI exists under humans unrealistic? As long as there is no concrete proof of consciousness in these systems, his arguments are pretty valid.

    • @drxyd
      @drxyd 1 month ago +3

      The core error is this move towards agentic systems. So long as we use AIs as calculators we can avoid the worst harms by filtering the ideas of AI through human judgement.

    • @imthinkingthoughts
      @imthinkingthoughts 1 month ago +2

      @@drxyd But then we also avoid the best potential outcomes.

    • @dibbidydoo4318
      @dibbidydoo4318 1 month ago +4

      Yann says that having independent agency isn't necessary for human-level intelligence; it's only necessary for creatures that came from _evolution_ and evolved to create their own goals.

    • @chastetree
      @chastetree 1 month ago

      You don't seem to be able to imagine intelligence without animal instincts. No machine is interested in survival, reproduction, self-actualization, or anything you care about (unless some human programs it to be).

  • @blackcorp0001
    @blackcorp0001 1 month ago

    FAFO

  • @sizwemsomi239
    @sizwemsomi239 1 month ago +3

    Yann doesn't know what he is talking about.

  • @alsaderi
    @alsaderi 1 month ago +1

    Lone chad 💯👏. Get them with their end-of-the-world theories 🦾🤖💙