#57 - Prof. MELANIE MITCHELL - Why AI is harder than we think

  • Published 19 May 2024
  • Patreon: / mlst
    Since its beginning in the 1950s, the field of artificial intelligence has vacillated between periods of optimistic predictions and massive investment and periods of disappointment, loss of confidence, and reduced funding. Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected.
    Professor Melanie Mitchell thinks one reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself.
    Framing [00:00:00]
    Dartmouth AI Summer Workshop [00:07:02]
    Letitia Intro to Melanie [00:09:22]
    The Googleplex situation with Melanie and Douglas Hofstadter [00:14:58]
    Melanie paper [00:21:04]
    Note on audio quality [00:25:45]
    Main show kick off [00:26:51]
    AI hype [00:29:57]
    On GPT-3 [00:31:46]
    Melanie's "Why is AI harder than we think" paper [00:36:18]
    The 3rd fallacy: Avoiding wishful mnemonics [00:42:23]
    Concepts and primitives [00:47:56]
    The 4th fallacy [00:51:19]
    What can we learn from human intelligence? [00:53:00]
    Pure intelligence [01:00:14]
    Unrobust features [01:02:34]
    The good things of the past in AI research [01:11:30]
    Copycat [01:17:56]
    Thoughts on the "neuro-symbolic camp" [01:26:49]
    Type I or Type II [01:32:06]
    Adversarial examples -- a fun question. [01:35:55]
    How much do we want human-like (human-interpretable) features? [01:43:44]
    The difficulty of creating intelligence [01:47:49]
    Show debrief [01:51:24]
    Pod: anchor.fm/machinelearningstre...
    Panel:
    Dr. Tim Scarfe
    Dr. Keith Duggar
    Letitia Parcalabescu and Ms. Coffee Bean ( / aicoffeebreak )
    Why AI is Harder Than We Think - Melanie Mitchell
    arxiv.org/abs/2104.12871
    melaniemitchell.me/aibook/
    www.santafe.edu/people/profil...
    / melmitchell1
    melaniemitchell.me/
    #machinelearning

COMMENTS • 126

  • @LucasDimoveo
    @LucasDimoveo 2 years ago +90

    This podcast is shockingly high quality for the viewership. I hope this channel grows much more!

    • @marceloleal5127
      @marceloleal5127 2 years ago

      y6

    • @marceloleal5127
      @marceloleal5127 2 years ago

      m
      0

    • @rodi4850
      @rodi4850 2 years ago +2

      It is a very good channel! Sadly channels that get views are the ones that are easily digested and have short content.

  • @ddoust
    @ddoust 2 years ago +15

    Without a doubt, MLST is the best channel for AI practitioners - every episode is mandated work time viewing for our team. Their instinct for the right guests, the quality of the panel and the open minded ventilation of competing key issues is exemplary. Friston, Chollet, Saba, Marcus, Mitchell and Hawkins are among the spearhead thinkers for the next (and final) breakthrough. If I might humbly recommend three more: David Deutsch, Herb Roitblat and Cecilia Heyes.

  • @CristianGarcia
    @CristianGarcia 2 years ago +12

    "Machine Learning practitioners were often quick to differentiate their discipline"
    How differentiable are we talking?

  • @oncedidactic
    @oncedidactic 2 years ago +9

    Letitia was an excellent addition to the show! I love the varied perspective she brings; it really complements the panel. As always, I loved Keith’s contributions as well, and together they bring a formidable physics lens. Kudos on having such an eminent guest, and thank you for all your hard work. It makes a fantastic show.

    • @haldanesghost
      @haldanesghost 6 months ago

      Her point about intelligence being limited by physical law totally caught me off guard, and I found it incredibly thought-provoking. She was a great addition for sure.

  • @JohnDoe-ie9iw
    @JohnDoe-ie9iw 2 years ago

    I wasn't expecting this quality. So happy I found this channel

  • @teamatalgo7
    @teamatalgo7 2 years ago +3

    One of the best talks on the topic; congrats to the team for pulling together such amazing content. I am hooked on MLST now and binge-watching all the videos.

  • @marilysedevoyault465
    @marilysedevoyault465 2 years ago +1

    And it is so great that you have Mr. Duggar interacting in your interviews, giving a voice to philosophy!

  • @sabawalid
    @sabawalid 2 years ago +16

    Another great episode guys!!! Keep 'em coming.

  • @ChaiTimeDataScience
    @ChaiTimeDataScience 2 years ago +6

    MLST releases a new episode on Sunday. Time to start my Monday Chores :D

  • @MixedRealityMusician
    @MixedRealityMusician 8 months ago

    Thank you for these conversations and ideas. As a musician who is looking to go into computer science and AI, there are so many questions and worries around creativity and art and it takes a lot of humility and curiosity to approach these questions with an open mind.

  • @user-xs9ey2rd5h
    @user-xs9ey2rd5h 2 years ago +6

    Awesome episode, I'm looking forward to the one with Jeff Hawkins as well, I've learned so much from this podcast and am very glad you guys are doing what you're doing.

  • @abby5493
    @abby5493 2 years ago +1

    Wow! Love this video! Awesome quality and so interesting 😍

  • @2sk21
    @2sk21 2 years ago +3

    Very enjoyable way to spend a summer Sunday. You have had some great guests lately

  • @jordan13589
    @jordan13589 2 years ago +9

    Has the Jeff Hawkins episode not yet been released? I was confused by references to a previous discussion with him.

    • @TimScarfe
      @TimScarfe 2 years ago +12

      We will release it next; we need to get it checked with them before we publish. Sorry for the confusion

  • @marilysedevoyault465
    @marilysedevoyault465 2 years ago +2

    Tim Scarfe, you are such an amazing pedagogue! I wish everybody would be as good as you when explaining something!

  • @bertbrecht7540
    @bertbrecht7540 2 years ago +5

    I am 20 minutes into this video and am so inspired. Thank you so much for the hard work you all put into creating this.

  • @alexijohansen
    @alexijohansen 2 years ago +3

    Thanks for doing these!

  • @nembobuldrini
    @nembobuldrini 2 years ago +2

    Great content, guys! And I'm enjoying the framing and debrief discussion very much. The idea of factoring in time and energy efficiency reminded me of a recent talk by Robin Hiesinger on the growing of neural networks (which in turn reminded me of Ken Stanley and co.'s work on Hyper-NEAT - BTW, that was a great show from you guys as well!). It would be interesting to hear your take on that.

  • @Avichinky2725
    @Avichinky2725 1 year ago

    Completed the entire video in 4 days. I have been practicing Machine Learning for the last 5 years and this video gave me knowledge about the things that I never encountered during my tenure. Great Podcast.

  • @bethcarey8530
    @bethcarey8530 2 years ago +1

    Agreed, 'shockingly high quality for the viewership' - you can tell the effort that goes into these productions FOR the audience's digestion and appreciation of such complex concepts across so many global experts. Thank you Letitia, Keith, Tim and of course Melanie.
    I particularly love one of your 'fallacies', Melanie: that 'narrow AI is on a continuum with general AI'. ML achieves so much good for focused and valuable problems, case in point DeepMind's protein structure mapping. But conflating that with a step toward general AI does a major disservice to the advancement of that goal.

  • @imerovislam
    @imerovislam 2 years ago +3

    Lex Fridman's Podcast led me here,
    I'm really glad.
    Wonderful content!

  • @minma02262
    @minma02262 2 years ago +2

    It is 12.3 am here and this street talk is 2.3 hours. Yes, I'm sleeping at 3 am today.

  • @EricFontenelle
    @EricFontenelle 2 years ago +2

    1:10:48
    You love you some François Chollet 😂😂

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +1

    This was very interesting. Thank you.

  • @EricFontenelle
    @EricFontenelle 2 years ago +2

    I wish you did more editing with the group talk. Love the channel and material.

    • @oncedidactic
      @oncedidactic 2 years ago +2

      With respect to taste and preferences, disagree! I really value the free flowing convo.

  • @BROHAMMER_OK
    @BROHAMMER_OK 2 years ago +3

    Great episode as always.

  • @crimythebold
    @crimythebold 2 years ago +3

    So. Intelligence must be measured in watts then... I'm so relieved that we did not create a new unit for that 😉

  • @danieleallois4633
    @danieleallois4633 2 years ago

    Amazing show guys. Keep it up please :)

  • @ugaray96
    @ugaray96 2 years ago +2

    The problem lies in which research has more financial interest (in the short term): probably downstream tasks such as translation, summarisation, object detection and more. If there were more financial interest in research on general intelligence, we would be seeing a whole different panorama

    • @SatheeshKumar-V
      @SatheeshKumar-V 2 years ago +1

      Well said. That’s a profound thought many fail to see today. I feel the same.

  • @bigdoor64
    @bigdoor64 2 years ago +2

    Hi Tim, I feel your pain regarding this audio quality. Check out Descript if that happens again. It might be a better alternative to painfully denoising/enhancing a guest's voice. Not free software, of course.

  • @johntanchongmin
    @johntanchongmin 2 years ago

    Thanks for this video!

  • @satychary
    @satychary 2 years ago +1

    Hi Tim, excellent episode!
    Around the 3:00 minute mark, you switch from talking about the brittleness of expert systems in the 80s, to NNs in the 90s. In between, from 1984-1994, the Cyc project happened - the largest attempt ever, to distill common sense from humans. It didn't succeed, meaning, the system (Cyc) did not become intelligent the way it was hoped [robust, flexible, generalizable etc.].
    IMO, the missing "glue" is not common sense, rather, it is experience - which can only be acquired via an appropriate body+brain combination, by directly interacting with the environment.

  • @kennethlloyd4878
    @kennethlloyd4878 2 years ago

    When AI researchers speak of concepts, they often only refer to 'mental concepts' without acknowledging more abstract structures. That is a barrier to understanding and leads to anthropocentric models.

  • @caiinfoindia1511
    @caiinfoindia1511 2 years ago

    Nice episode guys!

  • @CandidDate
    @CandidDate 2 years ago

    AGI right now is experiencing growing pains in the form of criticism of the current methods. It is up to us, so we had better get this right.

  • @Hexanitrobenzene
    @Hexanitrobenzene 1 year ago

    Replying to the note on audio quality: sampling does not happen in the microphone; it happens in an analog-to-digital converter on some IO chip in a computer. My guess would be that the audio driver settings were somehow incorrect.

  • @dougewald243
    @dougewald243 7 months ago

    Seems that the Turing Test is sufficiently ambiguous that much debate has emerged as to whether or not it's been passed.
    Do we have an updated & refined version of it?
    Do we need a Turing Test 2.0?
    A TT2?

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +1

    We easily see what large models don't learn -- in other words how they are not like us. What we don't even begin to see is all the things they do learn that we can never learn without them, because we lack the cognitive ability. This is what we should be discussing.

  • @NelsLindahl
    @NelsLindahl 2 years ago +4

    That audio did sound like a bad speakerphone recording. The content however was great.

  • @doubtif
    @doubtif 2 years ago

    That short story about adversarial examples (at 1:36:00 or so) sounds like one of the central plot lines in Infinite Jest. I wonder if Hofstadter is aware of it.

  • @TEAMPHY6
    @TEAMPHY6 2 years ago

    Love the sci-fi womp womp sound effects on the predictions

  • @balayogig6030
    @balayogig6030 2 years ago

    If possible, catch Dileep George from Vicarious AI, if you are planning any human-inspired AI episodes. Thanks.
    One of the nicest channels for machine learning and AI in general.

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +1

    Getting to intelligence is a function of overcoming local entropy. So organizations of atoms developed that could do that well. At this point, we are designing new organizations of atoms that don't need to overcome entropy on their own. We do that for them. Therefore, these new organizations, our computers, don't need to concentrate on overcoming entropy -- survival. They can have all their resources devoted to problem solving.

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +1

    We can do analogies in model form in transformers with pairs of sentences, and this can be done in the same model as a masked model just using different heads.

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +1

    It seems to me that context extraction, filtering, and summarization is an important area of AI that is not getting enough attention. For example, we don't have a good English to SPARQL model.

  • @someonespotatohmm9513
    @someonespotatohmm9513 2 years ago

    About the temporal dynamics of learning: wouldn't RL fit this description, because at every timestep the network is updated and "learning", and it "determines" what the next observations are? Or am I misunderstanding something here? Because to me it doesn't sound like it matters whether you learn on an already existing dataset or a dataset you "discover" as you learn; you can implement it such that both are the same.

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +1

    There are 2.5 billion seconds in 80 years. If you look at one node per second, how could you ever hope to comprehend GPT-3, which has 175 billion parameters? It would take 70 lifetimes just to inspect every node.
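    The arithmetic in this comment can be checked with a back-of-the-envelope Python sketch; all the figures below come from the comment itself, not from the episode:

    ```python
    SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ≈ 3.16e7 seconds per year
    seconds_in_80_years = 80 * SECONDS_PER_YEAR  # ≈ 2.52e9, the "2.5 billion seconds"

    GPT3_PARAMS = 175e9                          # 175 billion parameters
    lifetimes = GPT3_PARAMS / seconds_in_80_years

    print(round(lifetimes))                      # → 69, roughly the "70 lifetimes" claimed
    ```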

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 2 years ago

    It’s almost comical to hear such outlier predictions in hindsight but of course, those predictions were taken with much gravity when boldly made by such titans.

  • @dr.mikeybee
    @dr.mikeybee 2 years ago

    Specialized message passing is a kind of abstraction that vitiates the need for a large section of network.

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +1

    Why do we assume that AGI can be built with a smaller architecture than the human brain? We won't know what our models can do until we get models some orders of magnitude larger. Evolution has had 4 billion years to create the human brain, so we should assume it is very well optimized. It's certainly very well optimized for energy usage. I truly believe we can't discount connectionist ideas until we get significantly larger models.

  • @Hypotemused
    @Hypotemused 2 years ago

    It’s the best. I’d say ML News (with Yannic Kilcher) and Lex Fridman’s podcast are on the same level. Different styles but top notch insights. But Yannic’s ML News is the funniest for sure. His sarcasm and wit run deep 🥸

  • @michaelguan883
    @michaelguan883 2 years ago +2

    How would three entities of "pure intelligence" divide $10 among themselves? Of course, $10 cannot be evenly split three ways.

    • @michaelguan883
      @michaelguan883 2 years ago

      From Melanie's "Why AI is Harder Than We Think": Nothing in our knowledge of psychology or neuroscience supports the possibility that “pure rationality” is separable from the emotions and cultural biases that shape our cognition and our objectives. Instead, what we’ve learned from research in embodied cognition is that human intelligence seems to be a strongly integrated system with closely interconnected attributes, including emotions, desires, a strong sense of selfhood and autonomy, and a commonsense understanding of the world. It’s not at all clear that these attributes can be separated.

    • @dr.mikeybee
      @dr.mikeybee 2 years ago

      It's an interesting question. If we have an intents model that chooses actions, we could have various symbolic actions that could be chosen. If you are talking about end-to-end models, the correct prompt on a large enough model could get the correct answer. As you know, we can only retrieve what has been encoded; so if that sort of scenario exists in the data, it can be found.

    • @michaelguan883
      @michaelguan883 2 years ago +1

      @@dr.mikeybee Actually, I was not thinking of technical aspects, but viewing "pure intelligence" from the perspective of society, economy, morality, etc. There are many dilemmas in our society, and I doubt that "pure intelligence" can ever exist, because it would still need to face those problems. In my example above, will a "pure intelligence" entity try to eliminate the other two entities so that it can maximize its income? Or will it take only $3.33 to be "fair"? Who is going to take the 1 cent left over? How can one entity protect its own wealth? Is it good to unite with other entities? You know what I mean.

    • @Hexanitrobenzene
      @Hexanitrobenzene 1 year ago

      3 perfect mathematicians would just give up :)
      3 AIs could just take the last cent perfectly randomly; the real world always includes many such situations, so it would even out.
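      The indivisible remainder driving this thread, and the random tie-break suggested above, can be sketched in a few lines of Python (a toy illustration only, not a claim about how any real agent would act):

      ```python
      import random

      # $10 = 1000 cents split three ways: 333 cents each, 1 cent left over
      share_cents, leftover = divmod(1000, 3)
      print(share_cents, leftover)  # → 333 1

      # the "take the last cent perfectly randomly" resolution
      winner = random.randrange(3)  # index of the entity that gets the extra cent
      shares = [share_cents + (i == winner) for i in range(3)]
      assert sum(shares) == 1000    # nothing is lost, but the split is never exactly even
      ```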

  • @dr.mikeybee
    @dr.mikeybee 2 years ago

    Distorting data is a good way to get rid of statistical shortcuts. For example, occlusion in images can remove watermarks and other incorrect correlations in images that might be creating over-fit models. I saw a recent example of a cancer model that got the majority of its cancer samples from a particular lab, so all the images were the same aspect ratio, whereas the non-cancer images were different. So the model was predicting from aspect ratio. LIME can help uncover this sort of wrongheadedness.

  • @QuaaludeCharlie
    @QuaaludeCharlie 1 year ago +1

    We have nothing to replicate the breath of life, biological replication, the pineal gland. Once we have these traits developed, A.I. will be a living being.

  • @XOPOIIIO
    @XOPOIIIO 2 years ago +1

    The fact that we are constantly failing in AI development could be explained by anthropic principle. Like constantly failing to develop the fatal technology is the necessary prerequisite to our existence.

    • @dr.mikeybee
      @dr.mikeybee 2 years ago +1

      I think the biggest reason we've failed at AGI is that we haven't gotten powerful enough systems, but we're getting closer. Moreover, it's tough to say we're failing. I would say we're succeeding pretty quickly.

  • @NextFuckingLevel
    @NextFuckingLevel 2 years ago +2

    I feel dirty for watching this for free

  • @EricFontenelle
    @EricFontenelle 2 years ago

    1:07:03
    I almost spit my coffee out thinking about that dem debate where president biden says, “No man has a right to raise a hand to a woman in anger other than in self-defense, and that rarely ever occurs.”
    “So we have to just change the culture, period,” Biden said. “And keep punching at it and punching it and punching at it.”

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +1

    We shouldn't confuse intelligence with intelligent agents. Dumb agents can fetch correct answers. In fact, we probably don't want really smart agents.

  • @dr.mikeybee
    @dr.mikeybee 2 years ago

    Adversarial systems have a serial temporal framework.

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +1

    Why would we ever want synthetic self-motivated agents? Motivation is essential for survival. It's not essential for synthetic intelligence.

    • @dr.mikeybee
      @dr.mikeybee 2 years ago +1

      If we are creating and choosing agents by an evolutionary algorithm, survival might be something to optimize, but I don't recommend ever doing that. Moreover, I would say survival as an objective function should be prohibited in our designs. That is to say, if we give an agent the ability to choose objective functions programmatically, survival is one that should never be a choice.

  • @AICoffeeBreak
    @AICoffeeBreak 2 years ago +5

    Second!

  • @jgpeiro
    @jgpeiro 2 years ago +3

    Hey, is there a problem with the audio?

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  2 years ago +1

      We made a comment on this at 00:25:45

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago

      Yes. 😥 See chapter "Note on audio quality [00:25:45]"

    • @jgpeiro
      @jgpeiro 2 years ago +1

      @@AICoffeeBreak no no, my audio was completely muted... But after replaying the video a few times, now it works. I don't know what happened

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago

      @@jgpeiro I understand now what you are saying. Someone else reported having *no* audio too, and I had the problem of no image for a while.
      I guess it might have been a glitch from YT, since the video was too fresh and available only in SD and not HD quality at the moment we saw these errors.

    • @jgpeiro
      @jgpeiro 2 years ago

      @@AICoffeeBreak thanks for the explanation

  • @satychary
    @satychary 2 years ago

    All forms of intelligence [plants, animals, group, synthetic...] can be defined to be 'considered response'.

  • @OneFinalTipple
    @OneFinalTipple 2 years ago +1

    When will you release the Hawkins vid?

  • @_ARCATEC_
    @_ARCATEC_ 2 years ago

    I got this 💓

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 2 years ago

    Nice forest

  • @singularity844
    @singularity844 2 years ago

    A machine with the same general intelligence as a human should have extremely similar biases.

  • @Hypotemused
    @Hypotemused 2 years ago

    A shame about Melanie's audio. Someone ought to call Dr. Krakauer and make him send mics to all SFI staff. Come on David, you're running an Airbnb for Nobel prize winners. Give 'em a damn mic.

  • @alcoholrelated4529
    @alcoholrelated4529 2 years ago +1

    what is "go-fi"?

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  2 years ago +2

      "Good old-fashioned AI" i.e. symbolic AI methods, this is what AI used to mean before the statistical / empirical methods i.e. machine learning. Symbolic basically means not data-driven, rather trying to create an AI using code and explicit knowledge.

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 2 years ago

    But replace music with art which also is about emotion. Can ML create art?

  • @kitastro
    @kitastro 1 year ago

    SCP type example here 1:36:30

  • @chrislecky710
    @chrislecky710 2 years ago

    Humanity's intelligence is not based on logic gates; it's based on frequency, because frequency has a scale, and that scale allows for more information per connection than is possible with logic gates. Quantum computers are the only current technology that has the potential to crunch that much data at once. A new type of computer framework will need to be designed to make such things possible.
    The issue is that at first glance such a framework will not be coherent at the beginning, as quantum AI will need to explore every possible variation to create coherence, which is similar to a newborn baby who can only perceive abstract shapes, shades and colours. Coherence will then form from AI exploring every possible variation of frequency of every connection presented. For example, your nervous system is able to process a light touch and something painful using the same nerves because there is a variation in frequency that your brain processes. It's how our entire body works, including our brain.

  • @oualadinle
    @oualadinle 2 years ago +1

    10

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +1

    To keep agents ethical, chosen actions need to be passed through a policy network.

  • @charcoaljohnson
    @charcoaljohnson 2 years ago +1

    Morse Code in the intro: QDEFHLY

  • @stevenhines5550
    @stevenhines5550 1 year ago

    Watched the Chomsky interview. Can't get past his estimation that after half a century this discipline has accomplished nothing. I am left to wonder, why all this effort and investment of intense brainpower? I suspect it has more to do with inventing systems which subjugate human dignity to power in service to the ruling class.

  • @machinelearningdojowithtim2898
    @machinelearningdojowithtim2898 2 years ago +2

    First!

  • @geoffansell4388
    @geoffansell4388 2 years ago +1

    Letitia Parcalabescu is so blemishless I thought she was AI generated at first.

  • @osman7900
    @osman7900 1 year ago

    It is an irony that despite all the progress in AI, it is still not possible to repair and enhance voice recordings.

  • @rohankashyap2252
    @rohankashyap2252 2 years ago +2

    Third

  • @annaibanez2499
    @annaibanez2499 1 year ago

    LOL

  • @da-st6ux
    @da-st6ux 2 years ago +1

    fifth!

  • @RavenAmetr
    @RavenAmetr 2 years ago

    I feel that the last "fallacy", rather than addressing intellectual laziness, is an example of it.
    The body is necessary for intelligence? Cool, but what exactly does that mean?
    A "brain in a jar" is not intelligent, or conscious? That would be a bold statement.
    A virtual body or robot body would cause the emergence of intelligence? I don't think that's the point.
    Physical body constraints? Then which constraints are necessary, and in what way? Why can't they be programmed?
    Yes, I saw the video with Prof. Bishop. Nothing makes sense there.
    I've only learned that anyone who tries to explore human cognition from a computational standpoint is a quasi-religious idiot, and anyone sane must avoid even thinking about it if they want to get precious Bishop's approval.
    Sorry for the sarcastic tone of that last part.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  2 years ago +1

      Have you looked at Godel, Escher, Bach? Bishop cited it as one of his biggest inspirations. As well as intelligence being a process/emergent -- Bishop points out the "observer-relative problem" for computationalism. I don't think anyone is saying that you couldn't reproduce the emergent intelligence, rather they are saying that the nature of the intelligence is strongly determined by the entire chain i.e. the environment, the agent, how many sensors it has, how it interacts with the environment. So there is something "uniquely human" about our own intelligence. But as we discussed at the end, perhaps the uniqueness of the intelligence doesn't matter -- if a common language emerges or we even "discover" universal knowledge primitives like transitivity.

    • @RavenAmetr
      @RavenAmetr 2 years ago

      ​@@MachineLearningStreetTalk
      Thank you for the response. No, I haven't; I should probably add it to my reading list.
      The "observer-relative problem" is an interesting and, AFAIK, really old one, and yes, I do see it as a problem.
      If I knew the solution, I would gladly share it, but I don't, and I believe nobody does. There's also no solid proof that there cannot be a computational solution for it.
      I don't find the "pixie" thing or Bishop's other analogies convincing or even relevant. Nevertheless, I find it arrogant to state incomputability based on such "proofs".
      In regards to something "uniquely human": I'm not sure if my intelligence is uniquely human, or human intelligence is uniquely mine ;)
      I'm quite sure that I am uniquely myself, and I can make a bold guess that you are too. But isn't our uniqueness a "red herring"? I don't see how my uniqueness helps me to be sentient.
      By the way, do you know this guy: ua-cam.com/video/Xv_JJ2ZuDJM/v-deo.html
      Would be awesome to see an interview with him.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  2 years ago +1

      @@RavenAmetr Robin Hiesinger is great, we would love to get him on the show. Thanks for the suggestion

    • @DontfallasleeZZZZ
      @DontfallasleeZZZZ 2 years ago

      "A "brain in a jar" is not intelligent, or conscious? That would be a bold statement."
      Is the “brain in a jar” really not embodied? Sure, it may be in a jar now, but if it’s based on a human brain, its design is the result of millions of years of evolution, a very embodied process. It depends how much causality you are willing to ignore.
      “Physical body constraints? Then which constraints are necessary and in what way? Why they cannot be programmed?”
      Maybe they can, but for that, you need programmers, using their embodied intelligence to create the program. What the program does is a direct causal result of what the programmer’s fingers do on the keyboard.

    • @RavenAmetr
      @RavenAmetr 2 years ago

      ​@@DontfallasleeZZZZ
      I think you and I are talking about embodiment in different contexts. Feel free to clarify what you intend to prove, and I will try to clarify my side.
      Embodiment in the given context is a way of saying that everything is important. And that is probably correct, but such an attitude is just not helpful, non-informative, and lazy.
      It solves nothing and describes nothing. It is just "it is what it is"; there's nothing to learn, nothing to discuss.
      On the other hand, we could go another way and describe what it is not, what makes it different, what makes it special.
      Is my message clear?

  • @magnuswootton6181
    @magnuswootton6181 2 years ago

    well, you're not thinking hard enough.

  • @muzzletov
    @muzzletov 2 years ago +1

    Complete BS; adversarial examples exist in humans as well. We got trained over thousands of years, yet we're still susceptible to "adversarial examples". The issue is a rather fundamental one: you always have a bias, no matter what structure you are. The definition of a structure is even biased in itself. I don't know what you're even hoping for, but I guess it's some kind of sensationalism to attract more viewers, which I have no problem with; I enjoy the concept, but I don't like the sensationalism.