#56 - Dr. Walid Saba, Gadi Singer, Prof. J. Mark Bishop (Panel discussion)

  • Published 2 Jun 2024
  • It has been over three decades since the statistical revolution took AI by storm, and over two decades since deep learning (DL) helped usher in the latest resurgence of artificial intelligence (AI). However, disappointing progress in conversational agents, natural language understanding (NLU), and self-driving cars has made it clear that these empirical, data-driven methods have not lived up to their promise. DARPA has suggested that it is time for a third wave in AI, one characterized by hybrid models - models that combine knowledge-based approaches with data-driven machine learning techniques.
    Joining us for this panel discussion are polymath and linguist Walid Saba (Co-founder, ONTOLOGIK.AI), Gadi Singer (VP & Director, Cognitive Computing Research, Intel Labs), and J. Mark Bishop (Professor of Cognitive Computing (Emeritus), Goldsmiths, University of London, and Scientific Adviser to FACT360).
    Moderated by Dr. Keith Duggar and Dr. Tim Scarfe
    Pod version: anchor.fm/machinelearningstre...
    / gadi-singer
    / walidsaba
    / profjmarkbishop
    Introduction [00:00:00]
    Bishop Intro [00:03:09]
    Gadi Intro [00:05:06]
    Walid Intro [00:06:37]
    Gadi Opening Statement [00:08:30]
    Bishop Opening Statement [00:12:21]
    Walid Opening Statement [00:16:08]
    Round Robin Kickoff [00:18:49]
    Self-supervised categories as vectors [00:25:57]
    The context of understanding electric sheep? [00:28:12]
    Most unique human knowledge is not learnable [00:37:16]
    Two modes of learning: by observation and by deduction [00:41:09]
    Hybrid directions [00:46:24]
    Monte Carlo tree search and discrete overlays [00:51:44]
    What's missing from artificial neural networks? [00:54:40]
    Closing Statement: Bishop [01:02:45]
    Closing Statement: Gadi [01:06:09]
    Closing Statement: Walid [01:08:48]
    Rapid Round: When will we have AGI? [01:10:55]
    #machinelearning #artificialintelligence

COMMENTS • 79

  • @dosomething3
    @dosomething3 2 years ago +15

    Walid knows how to bash DL better than all of us✅✅✅

  • @dosomething3
    @dosomething3 2 years ago +9

    Obviously Walid Saba is my favorite 🤩. GO WALID!!!

  • @AICoffeeBreak
    @AICoffeeBreak 2 years ago +2

    Just finished watching this. Thanks, MLST, for organising and broadcasting these discussions! 💪

  • @paxdriver
    @paxdriver 2 years ago +15

    What a fantastic show, once again. Tim, you are such a boss podcast producer. Thank you so much.

    • @paxdriver
      @paxdriver 2 years ago

      @@blokin5039 I wouldn't even consider touching you, no worries ;p

    • @paxdriver
      @paxdriver 2 years ago

      @@blokin5039 I'm stumped. Wanna cameo on my next rap album? (it's funny because I actually have a rap album 😜)

    • @maloxi1472
      @maloxi1472 2 years ago +1

      @@paxdriver looks like @Blokin left the chat 😅

    • @paxdriver
      @paxdriver 2 years ago

      @@maloxi1472 lol, that's always the response I get when I ask people to check out my indie rap album 😂 I'm used to it

  • @machinelearningdojowithtim2898
    @machinelearningdojowithtim2898 2 years ago +20

    First! Are you guys excited about Jeff Hawkins coming on the show next week or what? 😉

  • @bethcarey8530
    @bethcarey8530 2 years ago +2

    I've listened to this episode 3 times now and increased my exercise bike cycling to fit the length of these podcasts, so thanks Keith & Tim :-). You asked for suggestions, Tim, and if you're doing another panel discussion like this, I'd suggest John Ball with Luis Lamb & Walid.

  • @welcomeaioverlords
    @welcomeaioverlords 2 years ago +4

    Great discussion gentlemen.

  • @NelsLindahl
    @NelsLindahl 2 years ago +1

    The most unbiased podcast... wonderful aspirational goal...

  • @nauman.mustafa
    @nauman.mustafa 2 years ago +4

    I am a hardcore connectionist, and I think we will reach AGI in our lifetimes (hopefully), whether via neural nets or something else. But one of the main things I see people fail to understand is that intelligence != consciousness. While we may be able to achieve AGI, I believe it won't be conscious.

  • @braineruption
    @braineruption 2 years ago +1

    Loved this show, very thought-provoking. I've been checking out some of your previous shows recently; all have been high quality, but I particularly liked your one with Chollet. As an old-school software engineer trying to break into ML, your shows make me feel like I'm keeping up with the bleeding edge!

  • @Lumeone
    @Lumeone 2 years ago

    Great debate. Lovely passionate science minds learned to reason, argue, disagree, and investigate abstract images of abstraction. Learning is universal.

  • @mh49897
    @mh49897 2 years ago

    This was a great panel discussion and much-needed discourse as we figure out what to explore next in AI.

  • @abby5493
    @abby5493 2 years ago +1

    Awesome panel discussion 😍

  • @conrado8881
    @conrado8881 2 years ago +1

    More panels please

  • @henriquepeixoto
    @henriquepeixoto 2 years ago +1

    Amazing! The only problem with you guys is that with every new episode I add 100 more papers to my master's! hahahaha

  • @arvisz1871
    @arvisz1871 2 years ago

    Great discussion, more please! Very interesting to hear contradictory points of view; some mediation or guidance on where the differences arise would also be welcome. P.S. Great channel!

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +2

    The central issue here is whether we will create AGI or whether it will be discovered by machine learning. I seriously doubt humans' ability to do anything that complex unless it's self-assembled.

  • @robbiero368
    @robbiero368 2 years ago +3

    Interesting timing with the announcement of Tesla V9 FSD arriving this Saturday

  • @troycollinsworth
    @troycollinsworth 2 years ago +6

    AI is missing massively concurrent asynchronous processing with feedback loops and dynamic connection evolution during inference. There is no artificial hardware that can do this. Maybe AGI will only be possible with biological systems.

    • @TimScarfe
      @TimScarfe 2 years ago +4

      Wait until next week's show before you decide that!

    • @oncedidactic
      @oncedidactic 2 years ago +2

      @@TimScarfe you have our attention…

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +4

    How many times do we need to see end-to-end systems outperform hybrid systems before we stop saying hybrid systems are what we need to aim towards? Yes, at the current time, because of computational limitations, we need to build systems based on symbolic agents that use models in something like microservices.
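
A minimal sketch of what "symbolic agents that use models in something like microservices" could look like in practice. Every name and the routing logic here are hypothetical illustrations of the general idea, not anything described in the episode:

```python
# Hypothetical sketch of a symbolic dispatch layer over learned "model
# services"; all names are illustrative, nothing here is from the episode.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    condition: Callable[[str], bool]  # symbolic guard over the raw query
    service: str                      # which model "microservice" handles it

class HybridAgent:
    def __init__(self) -> None:
        self.rules: List[Rule] = []
        self.services: Dict[str, Callable[[str], str]] = {}

    def register_service(self, name: str, model: Callable[[str], str]) -> None:
        self.services[name] = model

    def add_rule(self, condition: Callable[[str], bool], service: str) -> None:
        self.rules.append(Rule(condition, service))

    def run(self, query: str) -> str:
        # Symbolic dispatch: the first matching rule routes the query
        # to a (stand-in for a) learned model.
        for rule in self.rules:
            if rule.condition(query):
                return self.services[rule.service](query)
        return "no applicable rule"

agent = HybridAgent()
# Stand-ins for trained models sitting behind service endpoints.
agent.register_service("calculator", lambda q: str(eval(q, {"__builtins__": {}})))
agent.register_service("chat_model", lambda q: "(learned model would answer here)")
agent.add_rule(lambda q: all(c in "0123456789+-* ()" for c in q), "calculator")
agent.add_rule(lambda q: True, "chat_model")  # fallback rule

print(agent.run("2 + 3 * 4"))               # -> 14, routed to the calculator
print(agent.run("what is understanding?"))  # -> handled by the chat model
```

The point of the design is that the dispatch layer stays symbolic and auditable while each service can be an opaque learned model, which is one reading of the hybrid architecture the panel debates.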

  • @RinnRua
    @RinnRua 2 years ago +1

    Interesting that there seemed (I've only listened to this discussion 3 times so far, so I can't be totally sure) to be a bit of a consensus at the end over the need for modularity. Ben Goertzel's SingularityNET approach to facilitating an AGI booting up its own cognition/consciousness spontaneously (like a baby human becoming an adult) might be the right path to take; this intuitively appeals to me, but I have to point out that my 'opinions' are always ultimately based on a deep learning approach to understanding my own consciousness over many decades.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk 2 years ago +3

      We have Ben on the show in a month or so! Will be good! "I’ve only listened to this discussion 3 times so far" -- wow! Nice!

    • @RinnRua
      @RinnRua 2 years ago +1

      @@MachineLearningStreetTalk Fantastic‼️ I really enjoyed and appreciated hearing your 3 guests on the show today - a great practical demonstration of modularity - now I'm imagining how crazily great it would have been if Ben Goertzel had been in as the 4th guest with the others 😶

  • @brettforbes6070
    @brettforbes6070 2 years ago +1

    So there are two missing elements here: one is dynamics, and the other is the value and use of approximate models. Dynamics is built into the world, and humans simulate outcomes based on very few examples in order to determine possible outcomes. This is far more than just knowledge structures, as it involves dynamic rules about how the world works, utilised to identify the likely causal outcome. With regard to approximate models (e.g. curve fitting), there is a long history in science of using approximate methods successfully, yet there is no history of converting those approximate methods into exact methods by feeding more data to them. In short, current NLP methods are doomed by a glass ceiling they cannot break through. Language is only approximately statistical.

    • @charlesfoster6326
      @charlesfoster6326 2 years ago

      What specific prediction would you be willing to wager on? You speak of a ceiling but the evidence to date supports the hypothesis that there are continued benefits to be had from the scaling of dumb statistical models, with no ceiling in sight. Almost none of the benchmarks constructed in NLP are safe anymore, for instance.

    • @brettforbes6070
      @brettforbes6070 2 years ago

      @@charlesfoster6326 In my view the benchmarks are a joke, a self-serving set of standards that do not show that the NLP system understands the meaning of human-derived sentences, and GPT-3 is a bigger joke; its fragility and shallowness are obvious. As I say, 300 years of science shows that approximate methods cannot be made exact regardless of the amount of data you throw at them, and it's easy to phrase/rephrase sentences so that NLP cannot understand them. You should follow Walid Saba if you want examples, as he has shown many. A founding problem here is the Chomsky linguistic model, and it's pretty clear that Role and Reference Grammar is a better approach for pan-language decomposition. There is nothing wrong with approximate approaches; as I say, they have a long history of successful utilisation. At issue is the idea that approximate approaches can be incrementally improved until they reach an exact answer, when in fact this has never been shown. A further problem is that curve-fitting approaches are seen as the whole of AI and generally useful, when in fact curve fitting is astoundingly useful for some narrow tasks and completely useless at many others.
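
A small numerical illustration of the "approximate methods hit a ceiling" claim debated in this thread (my own toy construction, not an example from either commenter): fit a fixed-capacity model to data from a function outside its hypothesis class, and the error plateaus at the best-approximation error no matter how much data is added.

```python
# Toy illustration (my construction, not the commenters'): a fixed-capacity
# approximate model hits an error floor no matter how much data it gets.
import numpy as np

rng = np.random.default_rng(0)

def max_fit_error(n_samples: int, degree: int = 3) -> float:
    # The target exp(x) is NOT a polynomial, so a degree-3 polynomial
    # can only ever approximate it on [-1, 1].
    x = rng.uniform(-1.0, 1.0, n_samples)
    coeffs = np.polyfit(x, np.exp(x), degree)  # least-squares fit
    grid = np.linspace(-1.0, 1.0, 1001)        # dense held-out grid
    return float(np.max(np.abs(np.polyval(coeffs, grid) - np.exp(grid))))

for n in (10, 100, 10_000, 1_000_000):
    print(f"n = {n:>9}: max error = {max_fit_error(n):.6f}")
# More data drives the fit toward the BEST degree-3 approximation of exp(x),
# whose error is nonzero; no amount of data makes this approximate model exact.
```

Note that this only demonstrates the ceiling for a model of fixed capacity; the scaling argument in the parent comment is precisely about whether growing the model alongside the data escapes it.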

  • @dosomething3
    @dosomething3 2 years ago +3

    Wait!!!! WHERE IS YANNIC KILCHER??????

    • @TimScarfe
      @TimScarfe 2 years ago +5

      Lightspeed Kilcher doesn't countenance DL bashing! He will be back the episode after next! Don't worry he isn't going anywhere 😁😁

  • @jameslewis3442
    @jameslewis3442 2 years ago +2

    Great discussion, particularly the different points of view. Walid and Gadi seem locked into their worldviews, but what's needed now is revolution, not evolution. Regarding the model/hybrid approach: if a better model were the answer, we would have found it by now. We humans develop over time; we learn and forget and sometimes remember. How can a model, really a snapshot developed in an instant, capture all that?

    • @sabawalid
      @sabawalid 2 years ago

      Great point, James.

  • @kalabumdra564
    @kalabumdra564 2 years ago +2

    You guys should invite Joscha Bach

    • @TimScarfe
      @TimScarfe 2 years ago +3

      We tried to invite him on several times. Funnily enough, Gadi is his boss at work! We did one better :P

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +1

    If an optimized model has 100 billion parameters, how long would it take a human to create it by hand? It's impossible.

  • @marilysedevoyault465
    @marilysedevoyault465 2 years ago

    I think a lot of our incredible common sense comes from the chronological scripting of sequences in our brain. Chronology has "logy" in it. I think an AI learning from a lot of reality-based videos with chronological actions in them would have a good base for common sense. Even children can do maths by using their sequenced memory. This is why I share this again, in relation to Jeff Hawkins' research. I'm sorry if it is annoying, and sorry for the mistakes, because I'm a French speaker, and maybe it isn't of any use at all, because I'm no specialist, only an artist I guess. But I'm sharing this little hypothesis: let's say all the minicolumns in an area learn the same thing, sequences of events in chronological order. Everything a human went through or learned related to this area (say, visual memory) is there in every minicolumn: all the sequences respecting the chronology, as if events were stored in each minicolumn in tiny layers. Obviously there is some forgetting, but a lot is there.
    Now let's talk about predictions and creativity. When making a prediction or creating a mental image, could different minicolumns jump to different layers of the chronology (different moments of life), seeking identified sequences of the same object, all for the sake of prediction? The intelligent part would be to melt all these similar sequences from different moments of life into one single prediction. Let's say I saw a cat falling when I was ten years old, and I saw many cats falling on television, and many cats falling on Facebook. Some minicolumns would bring back the cat from when I was ten, other minicolumns a cat from Facebook, and other minicolumns a falling cat from television, and by melting all these sequences together I could predict or hope that my own cat would land on its feet while falling. Is this what is meant when it is said that minicolumns vote?

  • @dezatron
    @dezatron 2 years ago

    We never understand each other!

  • @willd1mindmind639
    @willd1mindmind639 2 years ago

    The main problem is that brain learning involves memory, which is based on biochemical substances in neurons that preserve aspects of the signals passing through them. This is why you can recall the taste of something or the feeling of something long after having experienced it. That system treats each signal as discrete, and the higher-order parts of the brain use collections of various signals representing "features" to reason and infer things about the world based on prior signals and new ones.
    Machine learning has no concept of a distinct signal or groups of signals being analyzed to come to a reasonable conclusion, because each model is designed to map signals into layers of lower fidelity, leading to distinct values representing some classification based on a human-defined label vector. Human learning sees a circle and recognizes it as a distinct characteristic of the visual signal without any labels being provided in advance, and long before any concept of a formula to generate circles is even thought of.

  • @mohamadhijazi1794
    @mohamadhijazi1794 2 years ago +1

    Humans learn for different reasons: survival, fun, or power. What are the reasons for a machine to learn? Why learn at all?

    • @nauman.mustafa
      @nauman.mustafa 2 years ago +1

      The sad thing is: the AI community today is just too arrogant, thinking they will be able to solve everything.

  • @mgostIH
    @mgostIH 2 years ago

    On the argument of wanting more "logic-driven AI", what do the panelists think of approaches currently being researched in the domain of making discrete problems differentiable?
    For example, OptNet ( arxiv.org/abs/1703.00443v4 ) specifically treats quadratic optimization, and they show it performing much better at solving sudoku than classical networks; there is also quite some promising work on neural SAT solvers ( arxiv.org/abs/2008.02215 shows how MapleSat blows the classical solvers out of the water in the results on page 10).
    Is this the kind of work they refer to when talking about hybridization?
    This seems an interesting field in my opinion, but at times it seems they completely reject the idea of the "learnability" of data and never mention any of the actual research on that topic, or completely dismiss famous work on neural network heuristics like AlphaGo.
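
For readers who haven't met that line of work, here is a minimal sketch of the basic trick underneath it: relaxing a discrete choice so gradients can flow through it. This uses PyTorch's built-in Gumbel-softmax and is only a generic illustration of differentiable discreteness, not OptNet or a neural SAT solver:

```python
# Generic illustration of differentiable discreteness via Gumbel-softmax
# (a PyTorch built-in); NOT OptNet or a neural SAT solver, just the
# underlying trick of relaxing a discrete choice so gradients flow through.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

logits = torch.zeros(4, requires_grad=True)   # scores over 4 discrete options
values = torch.tensor([0.0, 0.0, 1.0, 0.0])   # payoff of each option
optimizer = torch.optim.Adam([logits], lr=0.1)

for _ in range(200):
    # hard=True: the forward pass emits a one-hot (truly discrete) choice,
    # the backward pass uses the soft relaxation (straight-through gradients).
    choice = F.gumbel_softmax(logits, tau=0.5, hard=True)
    loss = -(choice * values).sum()           # maximize the chosen payoff
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(logits.softmax(dim=0))  # probability mass concentrates on option 2
```

With hard=True the decision made at inference time is genuinely discrete, while training still gets usable gradients; the papers cited above formalize more careful versions of the same compromise.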

  • @SecondSight
    @SecondSight 2 years ago +1

    I'm a nobody, but I have some thoughts (could be wrong)... Speaking about brain vs data/environment etc., I think you basically have two levers you can pull. One lever is data and the other lever is algorithms (in a very general sense). Data would be things like environment, embodiment, human culture etc., and algorithms would be the human brain. What's tripping people up, and what might be true, is that given enough computing resources (possibly infinite) you can create all algorithms with enough data, and vice versa: enough data contains all algorithms. So what's happening in deep learning is that you are compensating for a lack of algorithms with more data; if we had better algorithms we could do with less data, and vice versa. I think, though, that ultimately algorithms are more powerful than data, but at the same time it depends on the information density and arrangement of the data, especially in deep learning. If the data is very good and dense, maybe it produces more algorithms. The size of the search space is unknown, I think, and without a good model of the algorithms you don't know how good the data is, maybe.

    • @nauman.mustafa
      @nauman.mustafa 2 years ago

      You are right. The deeper problem is that the AI community today is very arrogant. While DL has achieved much, people pile so many assumptions on top of it that it has become more like a collection of cults than a scientific community.

  • @jonasfrey3515
    @jonasfrey3515 2 years ago

    At 40:40, the claim that transitivity is not learned contradicts the previous point. He argues a 2-year-old already knows it, but so does an ape.

  • @luke2642
    @luke2642 2 years ago +1

    'Oh sure it can do x & y, but it can't do z.' There will always be more z's. The ability to build and use a knowledge tree will be achieved sooner or later. Also, the main difference between a chimp and a human is that we have 3x the brain power in neocortical columns. And finally, our brain *is* a computer, by any reasonable definition. No argument based on limitations of computation is convincing.

    • @nauman.mustafa
      @nauman.mustafa 2 years ago

      Ahh, the assumption: the larger the brain, the better. P.S. Elephants have a much larger brain. Arrogant AI community of today.

    • @luke2642
      @luke2642 2 years ago

      @@nauman.mustafa An elephant's cortex has 1/3 the neurons of our cortex despite its size. The big difference between humans and animals is the size and connectedness of our cortex.

    • @nauman.mustafa
      @nauman.mustafa 2 years ago

      @@luke2642 Ahh, the biology where stories become unprovable theories, then laws/facts, and eventually cults like Darwinism. Thanks, but no thanks.

    • @luke2642
      @luke2642 2 years ago

      @@nauman.mustafa Do you mean you have a better explanation for the evidence than the theory of evolution by natural selection? Oh yes, what a cult. You must find 150 years of scientific consensus really annoying. It must be much easier for you to know the answer before looking at the evidence?

    • @nauman.mustafa
      @nauman.mustafa 2 years ago

      @@luke2642 Yes, the 'science' where one theory becomes a fact/law not because it has been tested/proven to be true but because it became popular.

  • @Redstoner34526
    @Redstoner34526 2 years ago

    Hi

  • @badhumanus
    @badhumanus 2 years ago +3

    I'm sorry, but anyone who believes that either GOFAI or deep learning or a hybrid of the two will lead to AGI has not been paying attention. Deep learning is designed to do the one thing we don't want in AGI: it optimizes specific functions. This is anathema to generalization. Unlike DNNs, the brain can generalize border, color, parts, and position for any object, even if it has not seen anything like it before. It's not even a matter of generalizing out of distribution. The brain is learning something about the world that has nothing to do with specific objects.
    Consider that a DNN is just a rule-based expert system on steroids. Adding a billion rules will not make it generalize. Imagine if a honeybee were using DL to navigate and find honey. It would have to learn every single flower, tree, insect, and environment. It would need multiple samples of each. This is preposterous since the bee has only about 1 million neurons. The bee gets around the problem by having the ability to generalize.
    Generalization should be the only focus of AI research. Thinking that mixing GOFAI with DL has a chance to solve this pressing problem is absurd.

  • @SimonJackson13
    @SimonJackson13 2 years ago

    The meaning of life? It's all the mean bad stuff that gets made to happen.

    • @SimonJackson13
      @SimonJackson13 2 years ago

      All the genetic computer units get connected by fit to learning correctness?

    • @SimonJackson13
      @SimonJackson13 2 years ago

      "The convergence of the multiple series for different integral forms have bounds. These could be considered some sophisticated parallel to attractor convergence in fractals. As they have a possible intersection as well as a pseudo digital behaviour (time analytic of halting problem applied to divergence) they can be used to represent some digital manifold, while maintaining series differentiability."

  • @neelsg
    @neelsg 2 years ago

    It seems like Bishop is projecting when he says it is a religious view to think computation can lead to intelligence. This separation between what he refers to as mind and the computation within the brain is popular among religious people, but it is simply not supported by observable evidence.

    • @davidw8668
      @davidw8668 2 years ago

      Really, so within which religious community is it popular?