Yann LeCun: Dark Matter of Intelligence and Self-Supervised Learning | Lex Fridman Podcast


COMMENTS • 612

  • @lexfridman
    @lexfridman  2 years ago +85

    Here are the timestamps. Please check out our sponsors to support this podcast.
    0:00 - Introduction & sponsor mentions:
    - Public Goods: publicgoods.com/lex and use code LEX to get $15 off
    - Indeed: indeed.com/lex to get $75 credit
    - ROKA: roka.com/ and use code LEX to get 20% off your first order
    - NetSuite: netsuite.com/lex to get free product tour
    - Magic Spoon: magicspoon.com/lex and use code LEX to get $5 off
    0:36 - Self-supervised learning
    10:55 - Vision vs language
    16:46 - Statistics
    22:33 - Three challenges of machine learning
    28:22 - Chess
    36:25 - Animals and intelligence
    46:09 - Data augmentation
    1:07:29 - Multimodal learning
    1:19:18 - Consciousness
    1:24:03 - Intrinsic vs learned ideas
    1:28:15 - Fear of death
    1:36:07 - Artificial Intelligence
    1:49:56 - Facebook AI Research
    2:06:34 - NeurIPS
    2:22:46 - Complexity
    2:31:11 - Music
    2:36:06 - Advice for young people

    • @Satoshi-Nakamoto.
      @Satoshi-Nakamoto. 2 years ago

      Interesting topics

    • @missh1774
      @missh1774 2 years ago

      That WhatsApp Bot does a funny lil trick. The pic changes. Seen it happen in another chat space too.

    • @missh1774
      @missh1774 2 years ago

      @@BeckyYork but if you just suppose for a moment that you are already a copy of a previous reality ... wouldn't the notion of you being a clone act as a "safe keep" for the best parts of yourself?

    • @missh1774
      @missh1774 2 years ago

      @@BeckyYork the mind is capable of so much more if one would allow it the freedom to do so. I do like this reality too.

    • @TheAeroman90
      @TheAeroman90 2 years ago

      I think Professor George Karniadakis might have some interesting insight regarding NN and physics applications.

  • @ikust007
    @ikust007 2 years ago +283

    That gentleman must have created for himself one of the most fantastic jobs ever: to meet brilliant minds and to LEARN every time. Bravo!

    • @willbadr4335
      @willbadr4335 2 years ago +12

      More importantly, spread all this learning to everybody else through video interviews!!

    • @TimeLordRaps
      @TimeLordRaps 2 years ago +5

      It's truly beautiful I hope one day I'm brilliant enough to be considered, even though Lex ignores me on Twitter haha, so I guess I'm here to bring awareness to him, I make funny jokes about bucky balls like how lex has to handle even my best jokes. I know this doesn't make sense to anyone else, so Nostrovia to family.

    • @sheridixon190
      @sheridixon190 2 years ago

      @@TimeLordRaps drop Twitter account. I want to follow you.

    • @moormanjean5636
      @moormanjean5636 2 years ago

      So has Lex, much respect to both.

    • @daarom3472
      @daarom3472 1 year ago

      wish he'd still do AI podcasting :(

  • @stevehutchison7968
    @stevehutchison7968 2 months ago +5

    This just came up on my YouTube feed two years later. Wow, what an extraordinarily prescient discussion.

  • @labordaze
    @labordaze 2 years ago +71

    I think this was my favorite Lex podcast. No other (super popular) podcaster has the technical proficiency to go so deep into a discussion of computer vision. This is why I'm subbed.

    • @kwillo4
      @kwillo4 2 years ago +14

      Check out Machine Learning Street Talk, they go deeper and Yann was also on there

    • @labordaze
      @labordaze 2 years ago +2

      @@kwillo4 Thanks for the suggestion!!

  • @SallyErfanian
    @SallyErfanian 2 years ago +75

    Many people including me are indebted to the perseverance of people like Yann LeCun. Luckily for me, I got to meet him and thank him. What an inspiring person.

    • @harryseaton7444
      @harryseaton7444 2 years ago +1

      How so? Does he also research medicine or something?

    • @SallyErfanian
      @SallyErfanian 2 years ago +5

      @@harryseaton7444 I work in CV/ML/AI.

    • @harryseaton7444
      @harryseaton7444 2 years ago +1

      @@SallyErfanian so his work has made a difference in your work life then? Or just his ideas being educational

    • @gogigaga1677
      @gogigaga1677 2 years ago +3

      HE IS THE GOAT OF ARTIFICIAL INTELLIGENCE

    • @gogigaga1677
      @gogigaga1677 2 years ago +5

      @@harryseaton7444 no, he is a pioneer in the field of Artificial Intelligence, a true legend in the field

  • @TrueMilli
    @TrueMilli 1 year ago +8

    I get really scared when "Chief AI Scientists" are that bad at predicting AI capabilities.
    LeCun 57:55:
    You take an object, place it on a table, and then push the table. It's completely obvious to you that the object will be pushed along with the table, because it's sitting on it. I believe there is no text in the world that explicitly explains this. So, if you train a machine, as powerful as it could be - let's say your GPT-5000 or whatever - it's never going to learn about this phenomenon.
    ChatGPT (GPT 4):
    If you push the table gently, the object might stay in place due to friction, although it may slide or wobble slightly. If you push the table with a greater force, the object might slide or fall over, especially if the object is top-heavy or not very stable.

    • @robertvolek8360
      @robertvolek8360 1 year ago +1

      I asked it to compare a ball and a box in that scenario, to describe the order of events if the acceleration of the table were to increase over time... it blew me away

    • @sydneyfong
      @sydneyfong 1 year ago +1

      Yet the same LeCun: "ChatGPT is 'not particularly innovative'"

    • @robnobert
      @robnobert 19 days ago

      Yann is the epitome of a research con artist. And it's CRAZY how many "intelligent" people like Lex can't see this. But Lex is an overrated midwit himself so I guess that really shouldn't be too surprising in his case. Don't get me wrong, as a PERSON - I like Lex. But as far as rating his intelligence? It's absolutely batsh*t to me people think he's some kinda super smart guy. He's A TEACHER. TEACHERS ARE NEVER EXPERTS >> FULL STOP. THAT'S WHY THEY TEACH INSTEAD OF ACTUALLY MAKE SOMETHING.

  • @zebrawien
    @zebrawien 2 years ago +14

    In my opinion, one of the best of your podcasts. As a side note, I've watched them all.

  • @Executor73
    @Executor73 2 years ago +40

    Thanks for keeping it real Lex. Can't thank you enough. You and the guests you choose have been opening my mind in the most magnificent ways.

  • @McSwey
    @McSwey 2 years ago +3

    The beauty of this channel. Finally, someone who can talk to so many people about so many advanced things.

  • @binod8720
    @binod8720 2 years ago +47

    Blessed to finally have Yann on your podcast. A most deserving figure in the field of modern computing and AI.

    • @boboobrob
      @boboobrob 2 years ago +6

      He was on much earlier in the Lex Fridman Show. This is his second time on.

    • @rajjubhaiwala4508
      @rajjubhaiwala4508 2 years ago +1

      Don’t forget François Chollet

    • @arnaudjean1159
      @arnaudjean1159 2 years ago +1

      And Joscha Bach on computing. These men have to be the smartest because, like Max Tegmark said, we have to be proactive on this subject.
      It is the most important revolution in human history.

  • @paris_mars
    @paris_mars 2 years ago +12

    I really liked this conversation. This guy's awesome.
    As a kind of related aside, the auto-generated CC are amazing for someone with such a strong French accent.

  • @generichuman_
    @generichuman_ 1 year ago +2

    It's interesting coming back to this now. I put Yann's example of the smartphone on the table through GPT-4 and of course it got the right answer:
    "If the smartphone was on the table and you pushed the table 5 feet to the left, the smartphone would also move 5 feet to the left, assuming it stayed on the table during the push. So, relative to where it started, the smartphone is now 5 feet to the left."
    It's just interesting that people at the bleeding edge of this technology didn't realize how competent these systems could get using only text.

  • @professord8888
    @professord8888 2 years ago +28

    I've loved everything about the Lex Fridman podcast since day one except that it _marked_ the end of the artificial intelligence podcast. However, among the many things I learned from today's episode is the fortuitous fact that the AIP lives _inside_ the LFP.

  • @Alex-ck4in
    @Alex-ck4in 2 years ago +9

    Saw his name and HAD to click the video!! I cited his work in my undergrad thesis, he is a walking legend 👏

    • @Christian-ry3ol
      @Christian-ry3ol 2 years ago

      I'm a CS undergrad and understood almost nothing of what was said

    • @arnaudjean1159
      @arnaudjean1159 2 years ago

      Maybe you should rewind his answers frequently, because Yann thinks and talks fast like all great scientists.
      That's how I understood everything.

  • @Mostafa-cv8jc
    @Mostafa-cv8jc 2 years ago +7

    Can't get enough of him, hope this series (with LeCun) goes to round 20!

  • @TimeLordRaps
    @TimeLordRaps 2 years ago +2

    This is worth multiple watch-throughs. For understanding learning, note what you find different on each watch to begin to learn your own instincts.

  • @norabelrose198
    @norabelrose198 1 year ago +11

    58:27
    LeCun: "GPT-5000 would never learn that a phone sitting on a table will move with the table when you push it"
    GPT-4: *in depth physics explanation about the conditions in which the phone would move with the table and when it would slide off*

    • @peterc1019
      @peterc1019 1 year ago +2

      This guy has become a massive AI Safety skeptic. Not great to hear him making confidently wrong predictions like this

    • @littlestewart
      @littlestewart 1 year ago

      It’s really to think about it

  • @haakoflo
    @haakoflo 2 years ago +6

    I see Yann, and I like him immediately. Geoff may be the grandfather of the field, but Yann still has ideas that are super interesting going forward.

  • @bartlx
    @bartlx 2 years ago +10

    Very interesting talk. I like when Lex and his guests put the bigger questions in the balance when talking about current and next technology. I wonder when this was recorded, though? 1:54:31

  • @marzx13
    @marzx13 2 years ago +19

    Yes! About time to do a second round. Really looking forward to this

  • @13mrkasper
    @13mrkasper 2 years ago +6

    Love the words of wisdom at the end of every podcast Lex! They really tie an elegant bow to the whole conversation. Generally just love your podcasts! Been following since you started and I am forever grateful for the amount of uploads as well as the wide variety of topics you bring up in them. Keep up the good work!

  • @odoylrulz1
    @odoylrulz1 2 years ago +17

    Lex, thanks for putting together high quality interviews with rock stars of the nerd-verse. I appreciate these videos a lot 😬, keep it up 👍

  • @tchlux
    @tchlux 2 years ago +5

    It seems like an important concept is undervalued in ML right now: objectives.
    Building a world model is good, but it's far better to have a world model that predicts whether or not X will happen (for some finite set of objectives X). Our objectives are what determine every action we take. All animal brains are capable of forming a *minimal* world model (not exhaustive!) that can effectively predict actions and observations that relate to a few important objectives:
    - do not get hurt
    - eat food & reproduce
    - explore
    In order to achieve these goals, brains must be capable of forming intermediate "objectives" (ideal perceptions) that can be created, reordered, remembered, reevaluated, ... Solving a prediction problem is easy with time and data, but creating the *right* prediction problem is the hard part we don't know how to do.

  • @peterszilvasi752
    @peterszilvasi752 2 years ago

    When Lex talked about death and how we try to ignore or hide from it, and how everything we do is centered around that... I got goosebumps.

  • @MitchellSchooler
    @MitchellSchooler 2 years ago +3

    I will follow your videos for a long time. You seem to me to be a good guy, rational and aware. I wish you success, good sir.

  • @SevenFootPelican
    @SevenFootPelican 2 years ago +9

    Lex, this was a phenomenal conversation! This is why I keep coming back to your podcasts. Keep up the incredible work.

  • @leafarst
    @leafarst 2 years ago +10

    LeCun is a real genius. Good to see him in our own time.

    • @robnobert
      @robnobert 19 days ago

      Is he though!? -- for all his "research" his AI ideas have actually had VERY FEW practical implementations. The mark of a good idea is one that actually provides VALUE to REAL THINGS that you can make. Yan is remarkably lacking in this department. Anyone who actually does real AI development knows how full of it Yann is... Lex is more or less the same. Teachers teach, mostly because they can't do. If Lex is such an "AI expert" name ONE THING that he's done to significantly advance AI capability besides just talk about it. These are not geniuses. Lex and Yann are the EPITOME of "midwit" -- smart people that are just BARELY smarter than an average idiot -- enough to convince the average idiot they're geniuses. But they're not. They're just barely above regular intelligence and contribute really nothing to field besides chit chat.

  • @robbiewalsh6965
    @robbiewalsh6965 2 years ago +11

    Lex you gotta try and talk to Gabor Mate, I think you guys would have a very deep and quite frankly important conversation.

  • @user-xs9ey2rd5h
    @user-xs9ey2rd5h 2 years ago +5

    You're really doing everyone a favor by bringing him on, so awesome to hear from such an important figure of the machine learning community

  • @Augustinrouchon
    @Augustinrouchon 2 years ago +3

    Thank you for these conversations. They keep my brain working.

  • @jovialpunch
    @jovialpunch 2 years ago +5

    STOKED, 3 hour podcast with Tom Arnold! I fkn loved him in True Lies!

  • @xgalarion8659
    @xgalarion8659 2 years ago +2

    Feels good to have someone so deep in the field be optimistic about the future of AI!

  • @karthikeyakethamakka
    @karthikeyakethamakka 2 years ago +4

    I actually implemented Barlow Twins for FTU segmentation in tissue images. By the way, object localization is extremely useful in biomedical imaging applications.

  • @snwbrus
    @snwbrus 2 years ago +2

    Great podcast session, I learned a lot during this conversation. Thank you, Lex Fridman and Yann LeCun!

  • @agentx2316
    @agentx2316 2 years ago +15

    This was an amazing interview and, most of all, it reminds me of the bigger concerns and areas that exist and loom over the rather useless scraps of so-called 'news' that have nothing to do with changing the actual global world and global community.
    Thank you very much, Lex, for your inspiring and probing podcasts.

  • @DJmates
    @DJmates 2 years ago +7

    You are as impressive as always, Lex. Wow oh wow! Thank you so much for doing what you do!

  •  2 years ago +2

    Wow, one can extract multiple dystopian novels from this conversation and turn them into best sellers!
    Love you both, thanks for continuing to push the envelope!

  • @gr82moro
    @gr82moro 2 years ago +12

    I think the major missing piece of AI is "abstraction"; the human brain relies on highly abstracted concepts to think, express and understand the world. Abstract concepts are the basic building block for achieving higher intelligence and will improve the efficiency of learning significantly.
    For example: a person can learn knowledge easily by reading a book. His brain doesn't learn, think, or understand the content via the combination of characters in the book, but that's what current AI does (like sequence models).
    Without higher levels of abstraction, AI will hit a bottleneck soon.

    • @ilikecommenting6849
      @ilikecommenting6849 2 years ago +1

      I think this is a typical gross overestimation of what humans do. Next thing you're gonna try and convince me that humans have free will.

    • @anhta9001
      @anhta9001 1 year ago +1

      How do you know AI doesn't use abstract concepts?

  • @MrSchweppes
    @MrSchweppes 2 years ago +1

    Please invite Andrej Karpathy and Sam Altman.

  • @AliRashidi97
    @AliRashidi97 2 years ago

    Great talk! I wish there was a written version of this conversation.

  • @amandajrmoore3216
    @amandajrmoore3216 2 years ago +2

    Amongst these testing times in our world, an oasis of knowledge easing the start to my day. Thanks Yann, and of course Lex, as always.

  • @prasannakukade300
    @prasannakukade300 2 years ago +1

    The great thing about great people is that when you listen to them you can sense the experience they carry.

  • @tnmygrwl
    @tnmygrwl 2 years ago +1

    I would've loved to hear a discussion around the interpretability of convolutions, self-attention and MLPs.

  • @soumojitguhamajumder3143
    @soumojitguhamajumder3143 1 year ago

    As a data scientist, who works on various areas in data science, this podcast was amazing to hear. Loved his response at 17:50 about intelligence and statistics.

  • @jayxavier6930
    @jayxavier6930 2 years ago +1

    01:46:52 "I think the Chinese Room Argument is a ridiculous one..." As someone who winced at, and was underwhelmed by, LeCun's critique of nativism and innate ideas, this was music to my ears!

    • @meatskunk
      @meatskunk 2 years ago +1

      Out of curiosity, why is it a ridiculous argument? Unfortunately LeCun doesn’t really say why here, he just kind of handwaves it away, as others like Hassabis and Dennett have also done in the past.
      Hassabis basically said “it doesn’t matter if something only appears intelligent, it’s enough for what we’re aiming to achieve” … which is fair and valid, especially to avoid getting bogged down by semantics - but it doesn’t address the underlying criticism that Searle first raised. LeCun seems to suggest here that the sum of all human experience can be reduced to a mechanistic “solution” - just not in the foreseeable future, but in a blind-faith eventuality, which is itself an unsatisfying non-answer.

    • @jayxavier6930
      @jayxavier6930 2 years ago +1

      ​ @meatskunk Thanks for your comments -- and I hope it was clear that I was partly joking. Of course, I don't really believe "ridiculous" is a fair characterization of Searle's position, however much I may have misgivings about it (more on that later). After all, anyone who convinced Putnam to walk back his commitment to computation/functionalism deserves eminent respect. And if I've missed something in Searle, I'm happy to be corrected.
      It seems to me the greatest liability or limitation in CRA is that it entirely inverts the relationship of processing and output to consciousness, or the personal and the subpersonal. Recall the premise: the man in the room is fed instructions, which he enacts. In short, he understands, has some conscious understanding of, the instructions -- the processing. But the problem for computational studies allegedly arises when the man in the room doesn't understand, has no conscious understanding, of whatever "content" the instructions are meant to yield, i.e. the output. So he has a personal grasp of the instructions, but no more an understanding of his output than he'd be able to consciously introspect into sub-personal processes (say, cardiovascular activity or involuntary memory).
      As should be clear from the above, whatever CRA is evoking, it's the diametrical opposite of whatever is being claimed in computational, or at least computational-leaning theories of language processing (Chomsky), perception (Marr) or thought (Fodor). In all of these and like other studies, the emphasis is that our computations are inaccessible to introspection (subpersonal). In short, in direct opposition to the man in the room, we are not personally aware of the operations whereby we process external stimuli. To wit, the man has the lived experience, conscious and phenomenological, of the blow-by-blow whereby he walks through certain instructions (e.g. "I am now matching x to 2 on this look-up table"). By fitting contrast, no sentient being, in real time, has personal access, or is required to consciously plan and think out, say, the nodes in a Chomsky tree diagram, when speaking a sentence in ordinary language (!).
      To briefly spell this out: nobody, when speaking "John expects to hurt himself," has to consciously think, in order to speak the sentence in real time, "ok, in enacting the operations of TGG, I need to displace 'John' from 'hurt himself' and raise it; but, in doing so, I also need to leave a trace, or a PRO, from its displaced position, and decide, to top it off, which is it: trace or PRO?" Unlike the man in the room, we're not aware of the operations we are enacting -- we just do it, all day, and every day, all the time. (See The Minimalist Program).
      So, it's not clear, as Catherine Elgin once noted, what Searle's little thought experiment is meant to show. That's not to say there aren't compelling arguments against computational or functional studies of the mind or brain -- Ned Block (in my view) perceptively adapts some ideas from Nelson Goodman; von Neumann, as early as 1958, was sounding the alarm. Heck, even Noam, way back in 1957, posed powerful thought experiments as to why mental language processing, pace machine models, WASN'T probabilistic, statistical, or a posteriori (based on what a speaker-hearer had heard or been fed before).
      To sum up, there are good claims to be made against pushing studies of the mind/brain too far down the rabbit hole of machine processing. It's just that, Searle ain't it.

  • @dilyarbuzan9138
    @dilyarbuzan9138 2 years ago +2

    Lex is killing it! Appreciate the work, brother

  • @speedysmithy
    @speedysmithy 2 years ago +2

    Well said about the "Printing Press" by Yann LeCun

  • @josephmacdonald1255
    @josephmacdonald1255 2 years ago +4

    Thank you for a great discussion. I did check out the sponsors.
    I rarely post information and hope the following does not contravene protocols for this system.
    It is very important to use machines to discover what is known and not known and we should continue to do so.
    Yann made it clear self-supervised learning is one of many types of AI tools. He also made it clear different tools are for different purposes. It was a casual conversation with lots of personal observations which could not be either proved or disproved. Who cares, I do not. It was like a flaw in an otherwise good paper. You do not have to agree on those points, however, a big take away is using a model and in my opinion what self supervised learning is good for and what it is not good for.
    As an example, he did not say it, but due to the paucity of data and the time involved, it is frustrating and expensive for domain experts to train systems to do what experts already know how to do, particularly if it only involves text. This is relevant if the relevant information is easily and well represented by text alone, which as he pointed out is often not the case.
    Starting with self-supervised learning would frequently slow down the development of useful analytical tools for end users who do not have the same expertise as the expert doing the supervision. In effect it is machine learning's version of the knowledge acquisition gap which constrained the expert systems of the past. Sometimes the tool is worth using, sometimes it is not, and over time that can change.
    The real future benefit of machine learning is to help monitor and guide (assist) the work of experts in many knowledge domains simultaneously. They can do this by learning from each expert with a much more limited form of machine learning which is beyond the scope of this post.

  • @BJJ_Richie
    @BJJ_Richie 2 years ago +2

    “The fear of death”
    and the awareness thereof I call,
    as I get older and older
    “The reality of our mortality”

  • @sirousmohseni4
    @sirousmohseni4 2 years ago

    Have you ever listened to a 2.5-hour-long podcast twice, back to back?
    I just did.
    I might listen to this once more.

  • @avi3681
    @avi3681 2 years ago +5

    So Yann says that AI human companions will have emotions and consciousness, but that nevertheless we will own them as our "intellectual property", and we can back them up and erase their memories at will. Good for you Lex for pushing back against the blithe moral horror of this vision.

  • @zorqis
    @zorqis 2 years ago

    What clear and eloquent thinking. Always a joy to listen to.

  • @brenalddzonzi7334
    @brenalddzonzi7334 3 months ago

    This is basically a whole semester of self-supervised ML; the knowledge is golden.

  • @seanreynoldscs
    @seanreynoldscs 2 years ago +1

    One feature of a cat is that it catches things that move... even a little bit of yarn, or a laser pen dot... movement is key.

  • @AndruXa
    @AndruXa 1 year ago

    it's a privilege to hear LeCun talk about ML

  • @citizizen
    @citizizen 2 years ago

    We watch the outside. So what if we watched inside phenomena as well (hands, eyes, etc.), to connect different purviews?

  • @richardperry4379
    @richardperry4379 2 years ago +1

    This is difficult. Cats spend much time alone. I loved my Cat, Bilbo. People need good company.

  • @davidbjoern
    @davidbjoern 2 years ago +7

    "Do you think YouTube has enough data to learn how to be a cat?" - Great questions as always Lex 😄

    • @bzqp2
      @bzqp2 2 years ago

      What's a better source to learn how to be a cat than YouTube!

  • @SeedsofJoy
    @SeedsofJoy 2 years ago +2

    Would love to see you get Geoffrey Hinton

  • @erikERXON
    @erikERXON 2 years ago +11

    Damn. I was always saying that intelligence/IQ is just a collection of experiences/statistics; no one ever really wanted to agree. Good to know I'm not alone.

    • @abbasfakih5151
      @abbasfakih5151 2 years ago +1

      Yeah, maybe some subset of intelligence will be trivialized

    •  2 years ago +5

      I never understood why so many people feel the need to invoke mysticism to explain consciousness. People can't even tell what AI systems they designed themselves do once they are trained. Why would brains be different?
      It's just a very complex "complex system" that has evolved through a _very_ long series of random events... _Of course_ it's hard to make sense of when you look at it.

    • @FourOneNineOneFourOne
      @FourOneNineOneFourOne 2 years ago +3

      There was a paper published last year that mathematically proved infinite-width neural networks are equivalent to kernel machines, which isn't that much more than a collection of statistics/a lookup table based on all possible features. The primate brain also works the same way, correlating a lot of shapes/colors/sounds/other things we can test 1:1 with individual neurons, except when you ramp up the complexity it eventually breaks down and (presumably) continues in a more heuristic/approximation mode. The point where the brain exits its purely statistical work is known/debated as the "grandmother neuron" (due to the now-disproven belief that there should be a single neuron whose sole purpose is to identify your grandmother, based on the image you're seeing, and nothing else), sometimes also called the "Jennifer Aniston neuron" (for similar but funny reasons).
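
For readers unfamiliar with the kernel-machine framing in the comment above, here is a minimal pure-Python sketch of what "prediction as a weighted lookup of stored examples" means. This is an ordinary Nadaraya-Watson kernel smoother chosen for illustration, not the construction from the paper the comment mentions; all names and parameter values are this sketch's own.

```python
import math

def rbf_kernel(x, xp, gamma=1.0):
    """Similarity between two inputs; nearby points get weights near 1."""
    return math.exp(-gamma * (x - xp) ** 2)

def kernel_predict(x, train, gamma=1.0):
    """Nadaraya-Watson estimate: a similarity-weighted average of stored
    outputs. The 'model' is literally the training set plus a kernel,
    i.e. a soft lookup table over remembered examples."""
    weights = [rbf_kernel(x, xi, gamma) for xi, _ in train]
    total = sum(weights)
    return sum(w * yi for w, (_, yi) in zip(weights, train)) / total

# Memorize a few (input, output) pairs sampled from y = x^2.
train = [(x / 2, (x / 2) ** 2) for x in range(-6, 7)]

print(kernel_predict(1.0, train, gamma=10.0))  # close to 1.0
```

The point of the framing: nothing here "understands" squaring; the prediction is pure statistics over stored examples, which is what the comment means by a kernel machine being "not much more than a lookup table".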

    • @erikERXON
      @erikERXON 2 years ago +2

      @@FourOneNineOneFourOne Nice reply. Thanks.

    • @jeffharrington8883
      @jeffharrington8883 2 years ago

      I have always called it probabilistic thinking. Every decision usually has a range of outcomes.

  • @aaroncamss4053
    @aaroncamss4053 2 years ago

    At the gym right now and this episode got me on edge

  • @DeepFindr
    @DeepFindr 2 years ago +1

    Thank you, great conversation :)

  • @prabhavkaula9697
    @prabhavkaula9697 2 years ago +1

    Thank you so much for the interview.

  • @privateequityguy
    @privateequityguy 2 years ago

    I love what Lex does. 🙏
    I read this yesterday and it opened my eyes: *”You don’t get what you want in life, you get who you are!”*
    Really think about it 😉

  • @hermes_logios
    @hermes_logios 2 years ago +1

    All learning is conducted through the matrix of prior learning.
    In the earliest moments, learning is written in the broadest strokes (which becomes the system through which later learning is understood).

    • @barrypickford1443
      @barrypickford1443 1 month ago +1

      Can be mapped almost like the development of a tree: trunk/branch/twig/leaf, then stabilise until death. Violence in my trunk-to-branch phase means I'm an anxious person in adult life. Perhaps 😊

  • @TimeLordRaps
    @TimeLordRaps 2 years ago

    Paused at 17:45 because if I am this prolific I gotta switch to a laptop and sleep so I'll see yall in the morning. Nice sharing ideas.

    • @TimeLordRaps
      @TimeLordRaps 2 years ago

      I'm only comprehending this action of previous me in the current sense, however causally these things only matter in past tense to anyone including myself. If you thought of that as significant why? If not why not?

  • @arc8dia
    @arc8dia 6 months ago

    14:57 Run inference on the neural network in reverse. When given a concrete output, you will see a distribution of probable inputs.
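
The comment's idea of running inference "in reverse" can be sketched in a toy, hypothetical setting: with a many-to-one function over a small enumerable input domain, the preimage of a concrete output under a uniform prior is exactly a distribution over probable inputs. This is only an illustration of the concept, not how one would invert a real neural network; the function `f` and domain are this sketch's own.

```python
def f(x):
    """A toy many-to-one 'forward model': several inputs share one output."""
    return (x * x) % 7

def preimage_distribution(output, domain):
    """'Reverse inference' by enumeration: collect every input consistent
    with the observed output, with a uniform prior over the domain."""
    hits = [x for x in domain if f(x) == output]
    if not hits:
        return {}  # no input explains this output
    return {x: 1 / len(hits) for x in hits}

print(preimage_distribution(2, range(7)))  # {3: 0.5, 4: 0.5}
```

Real networks are not invertible like this, which is why practical approaches approximate the posterior over inputs (e.g. with generative or sampling-based methods) rather than enumerating it.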

  • @ryanmcginness2888
    @ryanmcginness2888 2 years ago +2

    If AI is so smart, why do I have to click pictures of CAPTCHA images to login to my bank account? Why can't AI simply identify pictures that contain a stoplight or a bus?

    • @rpcruz
      @rpcruz 2 years ago +1

      Machine learning works well for specific tasks. You could build an image classifier that breaks a certain captcha. It would be a hell of a lot of work, but you could do it. But if they change "identify the cars" to "identify the birds" you need to do it all over again. There is no universal AI. The closest thing to universal AI we have are text recognizers like GPT.

  • @Lolleka
    @Lolleka 2 years ago

    Statistics is not intelligence, it is the correct mathematical representation of human knowledge.

  • @lisamuir4261
    @lisamuir4261 21 days ago

    I thought this would be a direction to try within a controlled safe zone: having a robot interact outside of the video learning. With as much information and knowledge as already enlisted, see if it can learn the way a teenager does driving. Sooner or later "hands-on" will be applied. Lex is always on it. 1:21:43 Perhaps confusion comes from a safety shutoff if a person is driving, so maybe its hangup is there, such as talking or texting while driving. (Position swap)

  • @kid808able
    @kid808able 2 years ago

    Here's a deep question: how is this possible? A guy in a suit, asking questions of the most intellectually blessed people in the world, one after the other. I just joined Lex about a month ago. Why isn't this mainstream, as it would have been about 20 years ago? Thanks to everyone who contributes! I've learned more in the last month than I have since high school.

    • @gordonjay2461
      @gordonjay2461 2 years ago

      Dude's dad is likely close to the big dogs in tech and Washington.

  • @nesquickk2754
    @nesquickk2754 2 years ago

    Grateful that we can watch it for "free"

  • @maryjanewhite5710
    @maryjanewhite5710 2 years ago +3

    Struck by how much of AI recapitulates intensive, early intervention using applied behavioral analysis (ABA) as used to recover very young children with autism, teach them language and joint attention. (Mother of a roboticist)

  • @arvisz1871
    @arvisz1871 2 years ago

    One of the best, if not the best, episodes from the Lex Fridman podcast 👍

  • @terrysmith3532
    @terrysmith3532 2 years ago +2

    I think we now see how Mark is going to respond to tougher questions, when he does get on.

  • @robfielding8566
    @robfielding8566 2 years ago +1

    heh... I also went through an expressive-music-instrument phase of fighting against MIDI, doing OSC, ChucK/Csound; and hobby helicopters. The former sent me through an education on iOS music instruments and embedded hardware, in which I learned more than I did in school in some areas.

  • @barrypickford1443
    @barrypickford1443 1 month ago

    What LeCun said about the limitations of language was interesting. Now more than ever, our labelling, for example, keeps failing to map our reality effectively.

  • @BryanHoward
    @BryanHoward 2 years ago +1

    Good conversation with Yann.

  • @eaf888
    @eaf888 2 years ago

    WOW! I was just listening to Tom Brands interview

  • @maksymbabaiev3653
    @maksymbabaiev3653 2 years ago +1

    Great stuff, but I would really appreciate a Rumble channel also ;)

  • @fredt3217
    @fredt3217 2 years ago

    The car door example the mod talks about at around 29:30 is the perceived state dinging back and forth between models.
    I can show you a diagram of how it works... it's not that hard to understand...

  • @fredt3217
    @fredt3217 2 years ago

    High level and low level are the same.
    Would you be able to move your car without knowing there is a pothole there?
    All inputs go into the perceived state. Some just take priority due to negative associations attached.
    And I can't get over how the neural network processes of the mod match my old best friend...

  • @krunchykarrot6537
    @krunchykarrot6537 2 years ago

    @2:02:05 I still find a large aspect being overlooked. "Different operating incentives" — exactly, Lex

  • @IanMott
    @IanMott 2 years ago

    I got an idea on how to solve this; I'd love to get both your feedback on it.

  • @hamsade
    @hamsade 2 years ago +2

    You looked sleepy, Lex! Get some sleep, man! ;) Nice talk! Really enjoyed it. Thanks!

  • @ericadar
    @ericadar 2 years ago

    Anyone have a link to Karpathy's car door talk @ MIT? Also, it would be very cool if Lex moderated a panel discussion on AGI: LeCun, Y. Bengio, Hassabis, Hinton, Koch, Marcus, Chalmers ...

  • @Nahte001
    @Nahte001 2 years ago +1

    Fascinating watching this after he released H-JEPA; the whole time you can feel him dancing around energy-based models, but he obviously didn't want to explicitly leak the core principles of his unreleased work

  • @yuntaller
    @yuntaller 2 years ago +1

    This talk is quite good, you know.

  • @fredt3217
    @fredt3217 2 years ago

    If you push a table, we associate that all the objects will move with it. Have you ever seen a table react differently outside of a magic show?
    It's an association in your mind...

  • @pjcollazo8318
    @pjcollazo8318 2 years ago

    52:45 nice job holding that burp in haha

  • @VinBhaskara_
    @VinBhaskara_ 2 years ago +2

    The thing I don't get is that Lex keeps pushing his own beliefs and ideas. Let Yann (and other guests) speak!

  • @sopwafel
    @sopwafel 2 years ago +1

    Nice! I was very excited when I saw the name.
    Any hope for another Aubrey de Grey episode?

  • @prof_shixo
    @prof_shixo 2 years ago +1

    Yann's opinion still underestimates the importance of language in human intelligence. Yes, babies can learn basic skills without language (assuming no parental supervision, which would be communicated in language), yet adults cannot learn any advanced concepts like physics or chemistry without language, which is a very efficient knowledge-transfer mechanism for humans, saving us from going through a large number of trials (which could be very costly in some situations) to figure out the laws of these concepts. In some extreme scenarios, language (whether English or math) is all we have to explain the world; just look at what Einstein did to communicate the theory of relativity while at the time there was no means to prove some of its findings.

  • @richardperry4379
    @richardperry4379 2 years ago

    May I suggest what may be the difference between empathy and sympathy.

  • @louisboyer3472
    @louisboyer3472 2 years ago

    A podcast with Demis Hassabis would be great!

  • @esjuve
    @esjuve 2 years ago

    Thanks for this great conversation, it's a real gift.

  • @tracys3096
    @tracys3096 2 years ago +2

    So essentially AI doesn't really have a sense of self-preservation. A thousand trials seems an extraordinarily high number of trials required for it to learn this. Perhaps this is a better test of sentience.

  • @tobikro
    @tobikro 2 years ago +1

    Great conversation, thank you so much!

  • @MayavanAMC
    @MayavanAMC 2 years ago

    Can someone please point me to the video of Andrej Karpathy talking about car doors?

  • @adityavarshney6690
    @adityavarshney6690 1 year ago

    "Started at the bottom, now we here" lex too good 🤣