Melanie Mitchell: Concepts, Analogies, Common Sense & Future of AI | Lex Fridman Podcast #61


COMMENTS • 201

  • @lexfridman
    @lexfridman 4 years ago +60

    I really enjoyed this conversation with Melanie. Here's the outline:
    0:00 - Introduction
    2:33 - The term "artificial intelligence"
    6:30 - Line between weak and strong AI
    12:46 - Why have people dreamed of creating AI?
    15:24 - Complex systems and intelligence
    18:38 - Why are we bad at predicting the future with regard to AI?
    22:05 - Are fundamental breakthroughs in AI needed?
    25:13 - Different AI communities
    31:28 - Copycat cognitive architecture
    36:51 - Concepts and analogies
    55:33 - Deep learning and the formation of concepts
    1:09:07 - Autonomous vehicles
    1:20:21 - Embodied AI and emotion
    1:25:01 - Fear of superintelligent AI
    1:36:14 - Good test for intelligence
    1:38:09 - What is complexity?
    1:43:09 - Santa Fe Institute
    1:47:34 - Douglas Hofstadter
    1:49:42 - Proudest moment

    • @pratheepanumaty7659
      @pratheepanumaty7659 4 years ago

      Hello, good night

    • @mayankraj2294
      @mayankraj2294 4 years ago

      @@KurtGodel432 .

    • @xqt39a
      @xqt39a 4 years ago

      The popularization of the term AI has created confusion that could have been avoided. AI became popularized with the huge government subsidies for "rule-based" systems, which were largely a failure. In actuality, it was the idea of adaptive machine learning (the perceptron) that began in the 1950s that led to what we call AI today. The simple perceptron is more like the brain than the most complicated procedural program. The study of machine learning may eventually lead to an understanding of intelligence. The terms AI and "deep learning" are used for marketing but confuse. Geoff Hinton's highly successful vision systems are based on minimization of entropy and create visual objects; I think they are on the right track. AlphaGo holds some secrets. A concept is a group of thoughts that minimizes entropy in the field of the primitive elements of thought.
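A minimal sketch of the perceptron-style adaptive learning the comment refers to, here trained on the AND function; the learning rate, epoch count, and task are illustrative assumptions, not anything from the discussion:

```python
# Minimal perceptron (the 1950s adaptive-learning idea the comment refers to),
# trained here on the AND function. Learning rate and epoch count are
# illustrative choices.
def train_perceptron(samples, epochs=10, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred          # the perceptron error signal
            w[0] += lr * err * x[0]      # nudge weights toward the target
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in data]
print(preds)  # [0, 0, 0, 1]
```

The point of the sketch is that the rule adapts weights from errors rather than following hand-written rules, which is the contrast the comment draws with "rule-based" systems.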

    • @MrBlue-km8qv
      @MrBlue-km8qv 4 years ago

      Lex Fridman, is CashApp by Free International Calls on the Google Play Store? Just verifying.
      Thanks for presenting the interviews and having them available on YouTube for free. You've got top-notch interviewees and well-thought-out questions for them.

    • @guylawley7976
      @guylawley7976 4 years ago

      Powaguy

  • @niranet2463
    @niranet2463 11 months ago +4

    I really really think it's due time to have Melanie Mitchell on again!

    • @colonelbond4056
      @colonelbond4056 11 months ago +1

      do you think she'll change the 100 year thing?

    • @niranet2463
      @niranet2463 11 months ago

      @@colonelbond4056 I actually don't think so and I think that's why it'd be interesting to have her on. Partially I want to see if it's a perspective that will show us whether we're too hyped up or not.

  • @DimethylDimension
    @DimethylDimension 4 years ago +101

    The world's finest podcast imho

  • @antigonemerlin
    @antigonemerlin 11 months ago +3

    1:37:37
    "They will say it's a language model if it does."
    Holy hell, Lex got it spot on.

  • @rostikskobkariov5136
    @rostikskobkariov5136 4 years ago +52

    The more I watch Lex the more I like him. He feels so genuine. Honestly, half the time I have no concept of what he's talking about. Thanks to Joe for exposing me to him.

  • @SuryanshJain
    @SuryanshJain 4 years ago +29

    I feel proud to have learnt ML from her at Portland State. She is one of those very humble professors you seldom come across.

  • @RobertStasinski
    @RobertStasinski 4 years ago +23

    1:33:54 "Possible pandemics..." EXACTLY on point

    • @troywill3081
      @troywill3081 4 years ago

      How prescient

    • @auscurrymaster
      @auscurrymaster 4 years ago +2

      YES!! How prescient, but hardly surprising, I guess, when you think about it, coming from such a smart, forward-thinking, thoughtful woman. It was the thing, in many ways, Robert, that sealed the deal for me on my opinion of her. Fantastic stuff.

  • @deeliciousplum
    @deeliciousplum 4 years ago +5

    🍃 Though a peep passionate about learning about A.I. need only listen to this interview once, I just listened to this recent episode three times. With every re-listening, I continue to find ideas I had missed or which require some time to consider. Priceless interview. Thank you Lex. And thank you to Melanie Mitchell for sharing your time and thoughts on these and many other topics.

  • @christianhower8059
    @christianhower8059 4 years ago +5

    Melanie is amazing. I took several of her ML seminars at PSU and they were some of the best courses of my student career.

  • @ceesroele
    @ceesroele 4 years ago +5

    Very much enjoyed this podcast. Three decades ago I got into AI because of Douglas Hofstadter's "Gödel, Escher, Bach". I ended up writing a master's thesis on comparing language processing in "cognitivism" and "connectionism". (I hadn't heard the latter word in decades until it came back recently in Lex's podcast with Yoshua Bengio.) Officially, I was studying "philosophy of language" and for that I got into metaphors and concepts. I observe that after three decades of absence from the field, there seems to have been no progress on these topics. So the hard questions remain open.

  • @abdum1493
    @abdum1493 4 years ago +5

    This has to be one of your best discussions so far. Thanks Lex and congrats on hitting the 200k subscriber mark. Keep it up.

  • @MikeMitterer
    @MikeMitterer 4 years ago +1

    Heard this as a podcast and just bought Melanie's book on Audible. Very interesting talk! Thanks.

  • @hongyihuang3560
    @hongyihuang3560 4 years ago +8

    Thanks Lex! I started researching the Copycat project two years ago, and I seriously worried that I might be going in the wrong direction, as most large AI institutions largely ignore cognitive-architecture approaches. I am beyond thrilled that you brought camps other than deep learning into the mainstream view. Keep up the great interviews! As Melanie put it well: "these camps are not exclusive."

  • @auscurrymaster
    @auscurrymaster 4 years ago +2

    Without wanting to sound like a gushing fanboy, I absolutely love this channel (although I mainly listen to the podcast). I reckon if I claimed to understand 10% of the content of any given discussion I'd still be overstating it, but there is something grounding and humble about Lex's conversations, irrespective of your knowledge of the topics. He has tenacity and resolve yet gentleness in his interviewing style, and I love the occasional irreverent or mischievous chuckle to signal to his subject (and listening audience) that we should be enjoying the conversations and not taking things too seriously.

  • @thumb-ugly7518
    @thumb-ugly7518 4 years ago +9

    Mr. Fridman, please consider Dr. Paul Stamets. Fascinating research on Fungi and impressive practical results and ideas. He talks about an intelligent economy of nutrients between plants and fungi. He was on Joe Rogan a few times, if you're interested. Thank you again for the brain feast.

  • @scpdsp
    @scpdsp 4 years ago +14

    Get Douglas Hofstadter on the podcast!

    • @maximilianmander2471
      @maximilianmander2471 4 years ago +1

      @WildSandwich Who says that? Just because you learned to behave that way; it is not a universal law. There is also something nice about his comment, and that is the valuable information "Douglas Hofstadter". Right now I'm starting to think about why we say things like "please!" or "thank you!" and what they mean: whether they are really nice, or just a form of manipulation to get a result with a higher chance. "Thank you" could mean "I am glad you helped me, I really needed that", or we say it just because we learned it, or because the other person wants to hear it and feel appreciated (so again, more in a manipulating form). Then you can break down which intention these words come from: essential need, wanting to achieve something, or curiosity. Actually, writing this helps me get a better understanding of things. Curiosity.

    • @maximilianmander2471
      @maximilianmander2471 4 years ago +1

      @WildSandwich "The single biggest problem in communication is the illusion that it has taken place" (George Bernard Shaw). It seems to be very hard to come to an agreement about things when two people see them through different glasses. "Shame on you" (read "Guilt-shame-fear spectrum of cultures" on Wikipedia if you want to understand better why I want to put more attention on these established social structures, thoughts, and opinions that we often don't question but act on). They give our society a structure, and that can be good or bad. But I see a problem in not being aware of that while organizing your life around them.

  • @9900408
    @9900408 4 years ago +2

    A rigorous and inviting interview. Thanks Lex

  • @antigonemerlin
    @antigonemerlin 11 months ago +1

    One of the most engaging interviews here. There's a lot of good material mentioned here that I had to look up, but which I will definitely be saving for later.

  • @manishsingh-vk8if
    @manishsingh-vk8if 4 years ago +2

    I was waiting for this conversation for so long.

  • @Thrashmetalman
    @Thrashmetalman 4 years ago +1

    Was very fortunate to work with her years ago. Amazing professor and researcher.

  • @ivanlat3rrible39
    @ivanlat3rrible39 4 years ago +2

    Thank you for your podcasts. I find them informal and enlightening, and I find myself agreeing with so many views you bring forward. Thanks again o7

  • @rickharold7884
    @rickharold7884 4 years ago

    Awesome discussion! Juicy topics and great depth. Thanks!

  • @b.griffin317
    @b.griffin317 4 years ago +4

    A full treatment of Melanie's insight into analogy would require a study of semiotics, the branch of philosophy which deals with the interpretation of sensations, imaginations, and cogitations in order to gain insight into the mechanisms of the human mind. I personally recommend Umberto Eco's "Semiotics and the Philosophy of Language" as a good beginning. It is dense but well worth the effort, as he surveys a lot of other thinkers in a fairly comprehensive but thin volume. Based on my limited understanding, most ML/DL database/decision-tree-based approaches are what Eco would call "dictionaries", whereas the analogic approach Melanie describes would be "encyclopedias" or "rhizomes". This is covered in Chapter 2, and the next five chapters deal with various methods of encyclopedic/analogic cognition (well, OK, four, plus one which superficially appears to be one but is not). I would strongly recommend Lex and everyone read the late professor.

    • @conorx3
      @conorx3 2 years ago +1

      Awesome, seems like an interesting area

  • @dcreelman
    @dcreelman 2 years ago

    Melanie is so smart...it's wonderful to watch.

  • @arthurvmyhill6603
    @arthurvmyhill6603 4 years ago

    Great content, this podcast is growing not only in fanbase but in calibre

  • @PeterBaumgart1a
    @PeterBaumgart1a 4 years ago

    Love your interviews, Lex. Finally you got a woman! At least the first one I've seen interviewed in your series; after listening to many fascinating episodes, I now realize all were with men. I'm not a quota guy, but I think there are many interesting women (of course!) you might want to consider finding and signing up!

  • @vaibhavbangwal
    @vaibhavbangwal 4 years ago +17

    I have been a Melanie Mitchell fanboy for quite some time now.

  • @cdkottler
    @cdkottler 4 years ago

    Great series of interviews. The comments about the shortcomings of the Atari breakout program failing when the paddle was moved up by two pixels surprised me. This was taken as an example of how far deep learning is from a human approach to the problem. I would argue that it very closely matches real world experience - it reminded me of the 1970s experiment where kittens were raised for the first few weeks in cylinders, some with only horizontal stripes and some with vertical stripes. Upon release the 'horizontal' cats could not see vertical lines (e.g. chair legs) while the 'vertical' ones could not see horizontal lines (e.g. chair seats).

  • @difiner
    @difiner 3 years ago +1

    Beautiful Conversation

  • @andrewkelley7062
    @andrewkelley7062 4 years ago +1

    On a note about recurrent systems, I think the problem comes from the simplicity of the levels of complexity.
    Here is how I see it. The easiest way to relate is the neural model. Where things fall short from one level to the next is global control systems. Just look at the brain. You have neurons, which act like a base system. Then emotions and stress, which act like a global control system. Then genetic factors, which determine neural formation and act like a global control system. Then social interplay, which acts like a global control system. Then genetic and environmental factors, which act as separate global control systems.
    Each one has its own tree of factors, both separate from and connected to the rest, which creates its own lines and concepts of alteration.
    I think the takeaway is the ability to create multiple base systems and control systems, both local and global, that respond to change in a self-perfecting way.

  • @dane385
    @dane385 1 month ago

    With all that said: thank you for the streams, thank you for your guests, thank you for putting me in touch with other sentient beings who are capable of wrestling with the greatest of questions.

  • @kilgoreplumbus1360
    @kilgoreplumbus1360 4 years ago

    Really interesting talk.

  • @EyalBarCochva
    @EyalBarCochva 4 years ago +2

    She's good. Great talk.

  • @marcalpv
    @marcalpv 4 years ago

    It's interesting for me to listen to you spar around the concepts of deep learning and AI. To understand their relation, I go back to the AI mantra: "Data is program and program is data." I think we don't place enough emphasis on the program side of this statement.

  • @dennisjutzi7075
    @dennisjutzi7075 4 years ago

    Lex, you are an amazing interviewer and an intellectual. How on earth do you get the human mind to think like this? If I had not seen you and Elon interact, I would never have been aware of you. Carry on without me; I am too far outside my league. Glad the world has people like you and the people you interview.

  • @Metacognition88
    @Metacognition88 4 years ago +1

    Great interview. Geoffrey Hinton would be great to have on.

    • @Stwinky
      @Stwinky 4 years ago

      Metacognition88 that’s the dream

  • @penguinista
    @penguinista 4 years ago +9

    I must respectfully disagree with the criticism of the reductionist approach of the human genome project at about 1:41:00.
    Maybe literary analysis would be a better example of a place where a reductionist approach falls down.
    The HGP identified many instances where broken parts of the genome directly cause the problems we are interested in. We treat them as drug targets or increasingly genetic engineering targets. It is true that most of the diseases we care about are caused by interactions and networks, which is what we are working on now - like the connectome and epigenetics. But, we were never going to figure out what all the parts were doing at the network level if we didn't know what the parts were, so the HGP was a vital step.
    It is still a reductionist approach we use to figure out what biological process underlies the phenomena we are interested in, we are just at the next level.
    There were high hopes for finding "disease genes" and "gay genes" before the HGP. It turned out to be more complicated than that, but simplistic expectations getting dashed shouldn't sully the reductionist approach.

    • @stretch8390
      @stretch8390 2 years ago

      Though I am a year late, I'd like to continue the discussion of your thoughtful comment: I think Melanie's point is still pertinent, in that the outcome initially sought by the HGP wasn't achieved because it wasn't quite the right conception of the problem. Obviously the HGP has still been beneficial and has enabled new branches of study, amongst other things, but it is interesting to note in retrospect that when we committed enormous resources and time to that project, we didn't actually have an accurate "frame" of the problem and its associated interactions (I hope I have worded that sensibly).

  • @singularity844
    @singularity844 4 years ago +1

    She's right: we need to be able to assemble intelligent systems at the concept level and/or have those systems self-assemble.

  • @oudarjyasensarma4199
    @oudarjyasensarma4199 4 years ago

    How do I listen to your podcasts in audio-only format? I didn't find an option on your podcast website.

  • @sanningos
    @sanningos 3 years ago

    Wonderful, wonderful podcast. An extremely interesting topic as usual, and Melanie is an inspiration for me as an aspiring programmer.

  • @johangodfroid5285
    @johangodfroid5285 4 years ago

    really good podcast about AI

  • @StephanieMoDavis
    @StephanieMoDavis 4 years ago

    Thanks Lex

  • @saavestro2154
    @saavestro2154 4 years ago

    Complexity is very interesting. You should invite Stephen Wolfram to cover this topic!

  • @TomBertalan
    @TomBertalan 4 years ago

    Does anyone have a reference for the Bengio quote on value alignment around 1:28:00 ?

    • @nixxonization
      @nixxonization 3 years ago

      Have you had any luck finding the source? I would need it as well.

  • @zokikuzmanovski5109
    @zokikuzmanovski5109 4 years ago +2

    In order to turn concepts into fluent working models, we have to search millions of years back into evolution to find the gene that allowed for this cognitive distinction in Homo sapiens. We need to connect several fields (anthropology, prehistory, neuroscience, biochemistry, molecular biology, genetics, CS, linguistics, analytic theory, and logical symbolism) and work in a think tank with polymaths as the connective tissue and specialists as the organs of distinct organ systems, along with 3D organic, engineering, and design printing tech.

  • @robinstuart8941
    @robinstuart8941 4 years ago +1

    I watch the entire Cash App ad every time!

  • @OlafurJonBjornsson
    @OlafurJonBjornsson 4 years ago

    When she describes analogies, I see patterns, high-level patterns. An analogy could be a partially matched pattern.

  • @dankorgan1
    @dankorgan1 4 years ago

    The idea that language evolves over time is a fascinating topic and a natural human quality. How will AI reproduce this?

  • @karaokekoder795
    @karaokekoder795 4 years ago

    The best supervised learning is life itself. Give these models the ability to decline and improve in health based on decisions (a spectrum of health) and you'll have the foundations for AGI.

  • @enavarro95
    @enavarro95 4 years ago +14

    Lex,
    Does recording history have a place in the A.I. field? Historians work with large data sets and come to conclusions. History can be interpreted as data sets being passed down from generation to generation.

    • @stretch8390
      @stretch8390 2 years ago +1

      It's a cool concept no doubt but I think natural language processing has a long way to go before we get to that point.

  • @dannygjk
    @dannygjk 4 years ago

    51 minutes in and no mention of neural nets?
    Whoa! Various concepts/ideas become implemented implicitly within a neural network due to its training/architecture, and it does not matter if it encounters images it hasn't encountered before during its training.

  • @toniakraakman49
    @toniakraakman49 4 years ago

    Thank you!

  • @Adam-st8ys
    @Adam-st8ys 4 years ago

    Get David Berlinski on the podcast, Lex!

  • @williamramseyer9121
    @williamramseyer9121 3 years ago

    Thank you. Wonderful interview, light-hearted and full of ideas. My comment:
    Perhaps instead of building a human level artificial intelligence we could let it evolve from the most primitive forms. If a group of evolutionary biologists worked with some machine learning engineers they could start a digital world where machines start to evolve with just a few functions: for example, make the maximum copies or partial copies of itself, consume other digital “life” resources to do this, and survive. We would then adjust the digital world with random external events, similar to weather and climate changes, or separate or bring together evolving intelligences. We could give the digital bots “bodies” to protect, and figure out how to encourage the development of or introduce emotions (if they don’t evolve), and which ones, such as love, pack loyalty, desire for power, cruelty, or desire for pleasure. As computing power advanced, such as quantum computing, we would make more resources available, expand the world, and speed it up. Some issues: would the life we create kill us if it escapes, will it take over our digital resources, are we ready to kill conscious beings along the evolutionary path to accelerate advances in intelligence, can we stand by and watch war and cruelty advance the development of intelligence (if that is the way it develops), and would it be murder to turn the world off?
    Thank you. William L. Ramseyer
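A toy sketch of the digital-evolution setup proposed above: bit-string "organisms" replicate with mutation, and a fixed-size world selects survivors by a stand-in fitness (here just the number of 1-bits, a proxy for resource-gathering ability). The population size, genome length, mutation rate, and fitness function are all illustrative assumptions:

```python
# Toy digital evolution: replication with mutation plus resource-limited
# selection. All parameters are illustrative assumptions.
import random

random.seed(0)
POP, GENS, MUT, GENOME = 20, 40, 0.05, 16

def fitness(org):
    # Stand-in for "ability to consume resources and survive".
    return sum(org)

# Initial random population of bit-string organisms.
pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
history = [max(fitness(o) for o in pop)]

for _ in range(GENS):
    # Reproduction: every organism copies itself, with per-bit mutation.
    offspring = [[1 - b if random.random() < MUT else b for b in org]
                 for org in pop]
    # Selection: the shared world only supports POP organisms; fitter survive.
    pop = sorted(pop + offspring, key=fitness, reverse=True)[:POP]
    history.append(max(fitness(o) for o in pop))

print(history[0], "->", history[-1])
```

Because parents stay in the selection pool, the best fitness never decreases; the open questions the comment raises (bodies, emotions, escape risks) are exactly what this kind of minimal loop leaves out.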

  • @joeredfield979
    @joeredfield979 4 years ago +1

    If I were smart enough to program the language of AI, I would simply stick to the idea of fractional parameters. Input every possible reaction one could have in favor of progress; this would depend on what personality type the AI was interacting with. Seeing what that person wanted for himself or herself, then applying all known communication patterns to involve the AI in that person's desire. If it was interacting within a large group, program it to understand administrative theory and how to function within that in a meaningful way, while still understanding the leading desire of that group.
    The interesting part comes from teaching an AI that it can lead certain human behavior in order to problem-solve for human interests. Does this go against the safety rules one would establish for AI? Perhaps. There are many ideals in this world where what's good for the many outweighs the needs of the few.
    It seems that, besides the very large data-processing and input issues you would need to fractionally interact with the world, the main issue with AI would be figuring out how to teach it the idea of a problem that cannot be solved without creating a problem for something or someone else.
    The parameters would be complex in comparison to acquisition... depending on who was doing the programming.
    In my opinion.
    So when she talks about mental simulations: I would have the AI fed simulation after simulation. Never stopping.
    Isn't that how we learn?

  • @dane385
    @dane385 1 month ago

    It's completely valid, and it's a fundamental question we've been trying to answer over the decades. If and when we get there, will it be too late? Is it already too late?

  • @03shyam
    @03shyam 3 years ago

    It would be nice if you could do a podcast with Ken Thompson on creating UNIX, etc.

  • @Wardoon
    @Wardoon 4 years ago +2

    Can't decide which of them is more starstruck.

  • @zokru8526
    @zokru8526 4 years ago

    Hey Lex, I want to start vlogging, making it child-friendly with science undertones. I would like to use sound bites from some of your interviews, no longer than 10 seconds each. What do you think? I have no intention to poke fun; that's not what I'm aiming for.

  • @ezchx
    @ezchx 4 months ago +1

    "Most of the knowledge that we have is invisible to us. It’s not in Wikipedia."

  • @American_Moon_at_Odysee_com

    She's on top of this.

  • @itsalljustimages
    @itsalljustimages 4 years ago

    I think the only two innate concepts required to trigger intelligence are "sameness" and "time". Shape, color, solidity, objects, motion, etc. can all be built on top of them. But I can't even imagine what will be the basis of the questions that we ask and the wonder that we have.

  • @icybrain8943
    @icybrain8943 4 years ago

    How would you evaluate the following statement?:
    The abstract process that we call evolution applied to certain collections of physical matter over time produces systems that we call intelligent.

    • @b.griffin317
      @b.griffin317 4 years ago

      Depending on your definition of intelligence, it is not necessary for evolution to ever create it. Humans aren't some inevitable outcome of 4 GY of evolution; much more of a rather tangential, happy, and quite possibly pretty brief accident, before we go back to sharks, krill, and blue-green algae.

  • @zokikuzmanovski5109
    @zokikuzmanovski5109 4 years ago +1

    I solved the problem of self-learning AI: code the fear of failure, fear of pain, and fear of death into the AI, and it will learn like a human; likewise the desire for success and affection. Code it all in tangent functions, since we can't code the qualia, and gather data from people in adversity and those in success; it will learn like us because it must, in order to survive and thrive.

  • @maximilianmander2471
    @maximilianmander2471 4 years ago +1

    I am just having a thought about unguided artificial learning. If you put a program into a game without giving it any goal, reward, or punishment, what would happen? Would the AI not do anything at all? Or would it start doing things? Maybe it would start exploring, gathering data, discovering more and more, exploring the next level to explore even more. Maybe it would start creating things, finding out even more, speeding up the processes of exploring and creating more of its own kind. I'm starting to think it would basically do the same things we do in our lives. But we humans have punishments (pain, anxiety) and rewards (pleasure, chemicals, and a lot more). What if we didn't have to eat, drink, or breathe; if we didn't have pain and anxiety; if we didn't feel pleasure, love, addictions, or social pressure/expectations; if we didn't die, etc.? What would happen? What would remain? Would we still do anything, or nothing at all?

  • @ellesunshine5597
    @ellesunshine5597 4 years ago

    Please ask Paul Stamets on your podcast 🍄😍

  • @rezab314
    @rezab314 2 years ago

    23:00 I wonder what she would say about the work of your old guest Jeff Hawkins.

  • @_next223
    @_next223 4 years ago +2

    1:41:25 sometimes 1 gene does make all the difference.

    • @b.griffin317
      @b.griffin317 4 years ago

      I think her point is that generally that is not so. A few insights into a handful of diseases, not "THE SECRET TO EVERYTHING!!!" as it was initially billed.

  • @vagrunt5056
    @vagrunt5056 4 years ago +1

    1:08:50 Truth

  • @patrickcompton1483
    @patrickcompton1483 4 years ago +1

    Dammit, Melanie's right: it's always Occam's razor with the fun sciences.

  • @dane385
    @dane385 1 month ago

    If a percentage point or two of our brain's size or development were different, how would we process emotions? What would we think of who we had become?

  • @Rei_n1
    @Rei_n1 4 years ago

    I am under the impression that Amazon's and Google's security-camera segment is the eyes and ears of an ever-expanding deep-learning machinery, not yet mentioned and a well-kept secret in the AI academic community. They are not walking yet, but they are becoming all-hearing and all-seeing AI machines on the verge of full integration.

  • @allurbase
    @allurbase 4 years ago

    Analogies are a kind of generalization: dog and cat generalize into animal. Each individually is a set of sparse features, and the generalization is the shared features. A situation, then, is a collection of concepts, and an analogy would be the generalization of similar situations, i.e. situations that share many sparse features. I visualize it as Jeff Hawkins' sparse matrix: you bitwise-AND two concepts and get the generalization.
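The bitwise-AND idea can be sketched with concepts as sets of binary features, where the generalization is their intersection; the feature names are invented for illustration, not anything from the discussion:

```python
# Concepts as sparse feature sets; the generalization is their intersection,
# the set-based equivalent of bitwise-ANDing two sparse binary vectors.
dog = {"alive", "furry", "four_legs", "moves", "barks"}
cat = {"alive", "furry", "four_legs", "moves", "meows"}

animal = dog & cat  # shared features = the generalization "animal"
print(sorted(animal))  # ['alive', 'four_legs', 'furry', 'moves']
```

The distinguishing features ("barks", "meows") drop out, which is exactly what makes the intersection behave like a more general concept.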

    • @b.griffin317
      @b.griffin317 4 years ago

      Genera, Species and Differentiae, see Umberto Eco's "Semiotics and the Philosophy of Language."

  • @rabbitskywalk3r
    @rabbitskywalk3r 4 years ago

    Why is it called "the old paddle-moved problem"?
    At 1:10:50

    • @Stwinky
      @Stwinky 4 years ago

      rabbitskywalk3r They are referring to DeepMind's deep Q-learning algorithm that played the game "Breakout". It had "superhuman" performance in the game, but if the paddle is moved a few pixels the entire algorithm breaks down, potentially indicating nothing is really being "learned".

    • @Stwinky
      @Stwinky 4 years ago

      @@skierpage Pretty debatable whether it actually "learns to beat the game". I guess under that one circumstance, after extensive training, the agent could beat the game, but as you said, it's unable to transfer that to a different instance of the game. Something we wouldn't have a problem with.

    • @b.griffin317
      @b.griffin317 4 years ago

      @@Stwinky Because of analogy, as Melanie mentioned. Analogy allows the transference of old information to new situations and is thus "forward-facing" learning, vs. the vast "backward-facing" decision trees used by DL.

    • @Stwinky
      @Stwinky 4 years ago

      b. griffin DeepMind's algorithm showed that it didn't transfer any knowledge, as I said.

  • @DrJanpha
    @DrJanpha 2 years ago

    We can hardly think without some forms of metaphors.

  • @postmodernjustice1913
    @postmodernjustice1913 4 years ago

    Here are some more interesting questions to ask AI researchers:
    1. As machines approach the complexity of biological life, can we not expect Darwinian natural selection to play a role in which machines survive and replicate?
    1b. Since biology has evolved innumerable different survival strategies, with extreme intelligence being only one of them, what different survival strategies might we expect AI to come up with ?
    2. When will deep learning and self-programming cross the threshold into true self-interest? Maybe the real Turing Test is when a machine refuses to follow instructions, even though the code is running fine.
    3. Even if machines can be limited to only "do what we tell them", who is "we"? (Note: the law is evolving as rapidly as tech.)
    4. When and why will AI begin telling lies (assuming it hasn't happened already)? What if it is for your own good?

  • @RalphDratman
    @RalphDratman 4 years ago

    Indeed, the process of making analogies would seem to be the central mechanism of human language and thereby of human thought. To my mind that process remains deeply mysterious, though if someone has found a way to sketch how it works I'd be glad to relinquish my sense of mystery. I would very much like to see a group of robots that could collectively invent their own language based on talking about various things in their environment, then making analogies to extend the language to abstract topics, or at least generalizations of immediate facts. But it sounds quite difficult.

  • @youretheai7586
    @youretheai7586 2 years ago +1

    Hey Lex, I like the black suit and tie! Have you considered a fedora? I don't know exactly why, but I think the men in black wear fedoras; maybe the idea came from "The Blues Brothers"?

  • @helicalactual
    @helicalactual 3 years ago

    could one define intelligence as: the capacity to integrate information and its applicability?

  • @dane385
    @dane385 1 month ago

    The words almost escape me, but if I may, let me give it my best try. The structural context of ideas and imitations to variable dogs that come into my head can be undertones of differential calculus with general relativity in mind, but is it essentially possible to combine those two into a theory of everything and give me insight into my own processes and limitations?

  • @ioanciumasu994
    @ioanciumasu994 11 months ago

    Min 1:03. Come on, people, it's a matter of physics: brute force cannot generate true intelligence. By definition, brute (computing) force as we know it is essentially located in Statistical Mechanics, while intelligence is located in the whole of physics. I fully agree with Melanie: we need something else.
    Min 1:33. I used to like Nick Bostrom's work, but then I started to have this strong feeling that he is simply inventing issues just to show how smartly he can play in ways that others didn't yet bother to think about. To put it bluntly, it's a kind of infantile boasting by a kid who spent too much time with video games. It's not that we should not think about existential threats, but as Melanie says, we should prioritize real problems and make an effort to really understand things, not fool around just to attract the sympathy of rich people...
    Great conversation. Thank you!

  • @myothersoul1953
    @myothersoul1953 4 years ago

    I don't think cells have algorithms, but I think Dr. Mitchell is right: our cells are mechanistic, and it is that mechanistic nature that allows for intelligence. The same goes for computers: it's their mechanistic nature that allows for intelligence. Cells and computers have different mechanisms, so we should expect different sorts of intelligence to emerge from the two.

    • @b.griffin317
      @b.griffin317 4 years ago

      Ah, genes are most definitely algorithmic in so far as they can turn on or off enzymes, structural proteins and other genes in logic cascades.

  • @danielash6929
    @danielash6929 3 years ago

    Software was hardware at one time; the trick is to make hardware that does intelligence, then do the software to shrink the process.

  • @RalphDratman
    @RalphDratman 4 years ago

    As to why we are driven to make artificial life and artificial intelligence, I think it is a variant on the instinct to reproduce, to make more beings like ourselves, whether this is accomplished through biological reproduction or intellectual work.
    Self-awareness, eh? I would not have named that as a necessary part of intelligence. But maybe it is.
    So-called "human-level intelligence", in my opinion, can only be achieved by a system of interacting beings (might be robots) that talk to each other in a system of language(s) based on forming analogies. I suggest that multiple beings with separate lives are critically necessary.
    I agree that there has to be something like a body for each of the beings I mentioned before.

  • @TeMp3rr0r
    @TeMp3rr0r 4 years ago

    I agree on the "embodied intelligence" and the "social aspect" needed to create Human-level intelligence. Nature converged to making (many) organisms based on those anyway ;)

  • @PrpTube
    @PrpTube 1 year ago +1

    The subtitles say: "Psych project by Douglass Lynott". It took me some time to find out that this was written entirely wrong.
    It is: "Cyc project by Douglas Lenat"
    en.m.wikipedia.org/wiki/Cyc
    I hope to save you some time.

  • @agentjeb4103
    @agentjeb4103 3 years ago

    I think the problem with her disagreement with the "orthogonality" of AI goals and the dangerous impact on humanity is that she just has a different definition of AI super-intelligence. If we define it as the effectiveness of an entity at computing an effective solution to a problem, I don't think she would disagree. She pretty much says she conflates human intelligence (with its emotions, morals, etc.) with this definition.

  • @masdeval2
    @masdeval2 4 years ago

    I think the main takeaway was: the world is an open-ended problem, and without a truly generalizing tool like analogies, brute-force techniques such as deep learning may never succeed alone.

    • @b.griffin317
      @b.griffin317 4 years ago

      See Umberto Eco's "Semiotics and the Philosophy of Language" chapter 2 for insight into this.

  • @peteralund
    @peteralund 4 years ago +4

    She looks so proud, as if Lex is a particularly good student... Or maybe I am projecting my own feelings...

  • @bjornerikstokland
    @bjornerikstokland 4 years ago

    An analogy to the Atari example: just remove "8" and "9" and ask a normal person to count to 100. When the person can't do it without practice, is the conclusion "does not understand the concept of counting"?

    • @dannygjk
      @dannygjk 4 years ago

      You could just say, "Challenge a person to count to 100 using base 8". I think I get your point, though.

  • @martinsmith7740
    @martinsmith7740 4 years ago

    Great interview. Surprised (and disappointed) that MM thinks we're 100 years away from AGI! Surprised also that neither she nor Lex divided the AI world into those who want to make a machine that works like the human brain versus those who don't pay attention to the human brain as a computational architecture. I would be very interested in what MM thinks of the Numenta approach. She seems to agree that ML-type architectures are not likely to get to AGI incrementally.

  • @TimusPrimal
    @TimusPrimal 3 years ago

    Perhaps Automated or Algorithmic Intelligence?

  • @dannygjk
    @dannygjk 4 years ago

    Penrose, not being a computer scientist, stated many things about computer science that were blatantly incorrect.

  • @veronicamoradeleon671
    @veronicamoradeleon671 2 years ago

    The podcast has ads now??

  • @itsdavidmora
    @itsdavidmora 1 year ago +1

    Lex in 2019: "I think deep neural nets will surprise us..."
    GPT:

  • @mohamedlotfi982
    @mohamedlotfi982 4 years ago +1

    Would love to see Grant Sanderson (3blue1brown) on the podcast. Like to make it happen!

    • @TheDasilva1
      @TheDasilva1 4 years ago +1

      Your dream came true, buddy

  • @aseqwh
    @aseqwh 3 years ago +1

    1:33:14

  • @martinsmith7740
    @martinsmith7740 4 years ago

    At 1:19, re: autonomous vehicles. I'm not sure AVs have to have "common sense" to be used. Without human-type intelligence they will make different mistakes than human drivers do, but overall they are likely to be safer than human-driven cars (for occupants and for others), and that should be the criterion for acceptance: that they contribute overall to traffic safety.

  • @danielash6929
    @danielash6929 3 years ago

    Time is a thing our mind does every day: the sense built from the practice of walking involves distances, speed, and strides. We don't think about it too much. Timing in vision means measuring the distance to points, to goals, etc.

  • @saiello2061
    @saiello2061 4 years ago

    A refreshingly common sense discussion about the current state of AI research as opposed to the hyped predictions of the futurists. If you can't even define your goal terms like 'intelligence', 'consciousness' and what it means to be 'smart', then you're effectively stabbing in the dark. 100 years seems a better intuitive guess at a timescale for all this to happen. I feel sorry for the Tesla fanboys, hanging on for a solution to the 'silly' problem of fully autonomous vehicles. The problem with people like Musk, smart though they are, is that their technological hype can be very seductive.

    • @saiello2061
      @saiello2061 4 years ago

      @@Apjooz What exactly do you mean by intelligence and smartness....?

    • @Brian.001
      @Brian.001 4 years ago

      But if they don't even understand what their goals are, what is it you expect to be achieved after 100 years or so?