Why AI is Harder Than We Think (Machine Learning Research Paper Explained)

  • Published 15 May 2024
  • #aiwinter #agi #embodiedcognition
    The AI community has gone through regular cycles of AI Springs, where rapid progress gives rise to massive overconfidence, heavy funding, and overpromising, followed by those promises going unfulfilled and the field sliding into periods of disillusionment and underfunding, called AI Winters. This paper examines the reasons for the repeated periods of overconfidence and identifies four fallacies that people commit when they see rapid progress in AI.
    OUTLINE:
    0:00 - Intro & Overview
    2:10 - AI Springs & AI Winters
    5:40 - Is the current AI boom overhyped?
    15:35 - Fallacy 1: Narrow Intelligence vs General Intelligence
    19:40 - Fallacy 2: Hard for humans doesn't mean hard for computers
    21:45 - Fallacy 3: How we call things matters
    28:15 - Fallacy 4: Embodied Cognition
    35:30 - Conclusion & Comments
    Paper: arxiv.org/abs/2104.12871
    My Video on Shortcut Learning: • Shortcut Learning in D...
    Abstract:
    Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment ("AI spring") and periods of disappointment, loss of confidence, and reduced funding ("AI winter"). Even with today's seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.
    Author: Melanie Mitchell
    Links:
    TabNine Code Completion (Referral): bit.ly/tabnine-yannick
    UA-cam: / yannickilcher
    Twitter: / ykilcher
    Discord: / discord
    BitChute: www.bitchute.com/channel/yann...
    Minds: www.minds.com/ykilcher
    Parler: parler.com/profile/YannicKilcher
    LinkedIn: / yannic-kilcher-488534136
    BiliBili: space.bilibili.com/1824646584
    If you want to support me, the best thing to do is to share out the content :)
    If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
    SubscribeStar: www.subscribestar.com/yannick...
    Patreon: / yannickilcher
    Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
    Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
    Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
    Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
  • Science & Technology

COMMENTS • 279

  • @YannicKilcher
    @YannicKilcher  3 years ago +24

    OUTLINE:
    0:00 - Intro & Overview
    2:10 - AI Springs & AI Winters
    5:40 - Is the current AI boom overhyped?
    15:35 - Fallacy 1: Narrow Intelligence vs General Intelligence
    19:40 - Fallacy 2: Hard for humans doesn't mean hard for computers
    21:45 - Fallacy 3: How we call things matters
    28:15 - Fallacy 4: Embodied Cognition
    35:30 - Conclusion & Comments

    • @nickhockings443
      @nickhockings443 3 years ago +1

      @Paolo Bernasconi (mostly agreeing with you) The class of "recurrent" artificial neural networks (e.g. Hopfield networks), which hold state in their current activation (as opposed to solely in their synaptic weights and activation functions), does provide the "ticking brain" you mention. So do most artificial cognitive architectures (regardless of whether they use DL).
      In neuroscience there is the concept of "motor chauvinism" www.ted.com/talks/daniel_wolpert_the_real_reason_for_brains/transcript?language=en i.e. "the purpose of having a nervous system is to decide how to move", or conversely "embodiment defines the problem which intelligence exists to solve". Hence without some dependence on the world, an intelligence (artificial or natural) lacks a reason to (1) take an interest (2) prefer one answer over another. The available sensorimotor interaction defines what inference is possible, while needs and vulnerabilities define priorities. There is a great deal of research in neuroscience on action selection and learning of action selection. Such action selection is dependent on (i) emotional state and physiological monitoring, on top of basic drives (ii) forward prediction of the expected outcomes of alternative actions.
      At least some artificial cognitive architectures do take account of these insights from neuroscience. See IEEE cognitive robotics workshop 2021 transair-bridge.org/workshop-2021/ or review paper (Kotseruba Tsotsos 2018) link.springer.com/content/pdf/10.1007/s10462-018-9646-y.pdf

    • @vlogsofanundergrad2034
      @vlogsofanundergrad2034 3 years ago

      @yannic Can you share the highlighted paper with us? It will be very helpful for us when we want to look back at the research work later...

  • @SirCaptainCrumpet
    @SirCaptainCrumpet 3 years ago +35

    *Draws a blob* "This is a brain" *Writes "Brain" with the middle three letters overlapping*

  • @kaihuchen5468
    @kaihuchen5468 3 years ago +62

    @Yannic Thank you for sharing this thought-provoking video!
    As an AI practitioner who has lived through more than one full cycle of the AI Spring-Winter waves, I feel that the recent advancements are very real, but the hype is as unreal as always. I agree with the author regarding the irrational exuberance of many of today's AI practitioners, but I disagree with the author's conclusion, where she falls into the age-old trap of attempting to address the "commonsense" problem without context or boundaries, where the problem becomes much too nebulous to solve. And no, bringing in the cognitive scientists at this stage won't help either.
    A more practical approach to tackling the commonsense problem is to do it in a much more specific real-world domain, which, if successful, should give us a foothold for applying the solution in a broader context. For example, we can attempt to solve the commonsense problem in self-driving cars, where the problem is about coming up with a set of background catch-all knowledge/rules that measurably improves the behavior of a self-driving car in unseen and unforeseen circumstances.
    On the topic of self-driving cars, Musk certainly overstated things when he said level 5 FSD would be here by the end of 2021. Tesla's approach of forcing neural networks to learn long-tail events won't generalize well, which means that we will find its FSD cars making seemingly incomprehensible mistakes from time to time.
    On the topic of AlphaGo, I thought the author's constant comparisons with humans were unnecessarily snide. While a lot of recent advancements in AI are inspired by how the human mind works, most AI practitioners make no attempt to fully duplicate the human mind in every way. AlphaGo's contribution is that it is able to do a kind of unsupervised reinforcement learning that beat a top human professional for the first time, and also that the core mechanism is applicable in other domains. For example, I was able to take the core reinforcement learning algorithm in AlphaGo and use it for the unsupervised learning of a self-driving car in a simulated, physics-realistic 3D world, and it works great. The fact that AlphaGo does not reason like a human is beside the point.

  • @sdmarlow3926
    @sdmarlow3926 3 years ago +46

    "Redefine into existence" is my favorite part. It's something startups and tech giants do on a regular basis.

    • @sdmarlow3926
      @sdmarlow3926 3 years ago +3

      If embodiment were key, then why are dolphins and elephants not giving TED talks? Our minds evolved within a body, and there is a strong link between them. We can't reach consciousness from birth as just a brain in a jar, and an adult mind would likely fall apart after being in a jar for a few months. As long as we genuinely understand how minds work, there is nothing to suggest we can't solve the two main issues with AGI: bootstrapping and grounding.

    • @TechyBen
      @TechyBen 3 years ago +1

      @@sdmarlow3926 A bit of both. A brain in a jar is (theoretically) fine provided there is communication. See people with deaf-blindness as an example. Often they will say communication was the key to them understanding both the world and themselves.
      Dolphins and elephants have neither the communication nor the physical interaction to build those understandings (co-dependently or independently).

    • @KristoferPettersson
      @KristoferPettersson 3 years ago

      @@sdmarlow3926 A barrier of some kind might be important, because all narratives of intelligence require a means of forming a potential which separates true from false. However, I think we do this quite naturally when we talk, do math, or just observe the world. Embodiment might be very important but trivial.

    • @mickmickymick6927
      @mickmickymick6927 3 years ago +1

      And governments probably even more

  • @MachineLearningStreetTalk
    @MachineLearningStreetTalk 3 years ago +22

    We have the honour of welcoming Melanie as a guest on our show in late July -- so please give us any questions to ask her. We are super excited to have this opportunity!

    • @michaelwangCH
      @michaelwangCH 3 years ago +1

      Tim, can you explain some models, e.g. DINO, intuitively? Why does a model like DINO work without any labeling at all?

  • @JoaoVitor-mf8iq
    @JoaoVitor-mf8iq 3 years ago +13

    The goal of trying to create intelligence is imaginative by itself, and I agree that many people are just building something very specific and call it intelligent just "because".

  • @patf9770
    @patf9770 3 years ago +33

    I think something often overlooked is the value of priors. Humans almost never actually do zero-shot learning.
    Say you've never played a video game before and you decide to try a 1st person shooter. You're going to have all sorts of priors about the organization of objects in 3d space and the causal time dependence of events, let alone the concepts of an enemy or ammo.
    The fact that language models can be finetuned on images proves this!

    • @Kraft_Funk
      @Kraft_Funk 3 years ago +3

      The operation of convolution itself has the prior of locality, which is why it works great with images.
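      A minimal sketch of that locality prior (illustrative only; assumes PyTorch is installed): the parameter counts show how much structure convolution bakes in compared to a dense layer over the same grid.

```python
# Locality + weight sharing as a prior: compare parameter counts on a 32x32 grid.
import torch.nn as nn

dense = nn.Linear(32 * 32, 32 * 32)               # every output pixel sees every input pixel
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)  # every output pixel sees only a 3x3 neighbourhood

print(sum(p.numel() for p in dense.parameters()))  # 1,049,600 parameters
print(sum(p.numel() for p in conv.parameters()))   # 10 parameters (9 weights + 1 bias)
```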

    • @funkyj77
      @funkyj77 3 years ago +7

      Been having the same thought recently. And also just think about the fact that humans are actually exposed to a huge amount of data, because we see/hear/read things nearly every second. This makes me think that maybe data-driven methods are the right direction for robustness problems, although the research community seems to favor approaches that are less data-dependent.

    • @arvind31459
      @arvind31459 3 years ago +3

      @@funkyj77 and also a lot of memory and intuition stems from millions of years of evolution hardcoded into genes, and it is still evolving

    • @wiczus6102
      @wiczus6102 2 years ago +2

      "memories... hardcoded into genes" You cannot code thought genetically.
      It's like saying your 1-million-parameter NN can be represented as 3 lines of code (86B neurons vs ~20k genes). Sure, if you had 10k lines of code available, maybe you could code a SIMPLE NN (like cats vs dogs), but the numbers just don't fit.
      The things that can be hardcoded are breathing (medulla oblongata), the beating of the heart, increases in neurotransmitters based on chemical/physical stimuli. The simple things. Things that require 3-4 proteins. The thought that there is some secret, super-convoluted million-protein architecture that decodes memories in our brains is just crazy.
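      A rough back-of-envelope version of that counting argument (the synapse count and per-synapse storage below are order-of-magnitude assumptions, not measurements):

```python
# Can the genome plausibly hard-code synaptic "memories"? Rough orders of magnitude only.
genome_bases = 3.2e9             # base pairs in the human genome
genome_bytes = genome_bases / 4  # 2 bits per base -> ~0.8 GB, and that covers the whole body

synapses = 1e14                  # commonly cited order of magnitude for the adult brain
bytes_per_synapse = 1            # an extremely generous lower bound

print(f"genome capacity:  {genome_bytes / 1e9:.1f} GB")
print(f"synaptic weights: {synapses * bytes_per_synapse / 1e12:.0f} TB")
# ~5 orders of magnitude short: the genome can encode priors and reflexes, not individual memories.
```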

    • @patf9770
      @patf9770 2 years ago +1

      @@wiczus6102 I've been thinking about this and I think while you can't encode "memories", you can encode reflexes since automatic reflexes aren't learned at the individual level (think the patellar reflex).
      Also, some behaviors must be genetically encoded. Think about an indoor dog that still has the urge to dig.
      So some priors must be baked in while others are learned from experience.

  • @Thrashmetalman
    @Thrashmetalman 3 years ago +1

    Hey this was my thesis advisor. always nice to see her work pop up.

  • @sieyk
    @sieyk 3 years ago +12

    The reason giving an AI a definition of "understand" is so hard is that the explanation would amount to the solution, which we can't figure out.

    • @brendawilliams8062
      @brendawilliams8062 1 year ago +1

      AI travels by code at a great distance forwards for calculating. Humans may travel a close distance backwards to bounce forwards and spin. Night next to day. Short distance is tragic for Turing

  • @exmachina767
    @exmachina767 1 year ago +3

    Regarding what NNs learn, it seems to me that the loss function plays a big role. Similar to what happens with regularization, some constraints need to be added if you want to avoid learning spurious features. It seems to me that human minds do apply this kind of constrained learning all the time, sometimes to a detrimental extreme (ever heard of lateral thinking? It's a way to relax that extreme constraining). The question is then which kinds of constraints are conducive to learning with good generalization capabilities, and how do you effectively incorporate those constraints. Another possibility is that the current architectures are simply not able to extract/learn sufficiently good features to generalize (e.g., maybe we are missing sufficiently good abstraction mechanisms so most features learned by NNs are surface-level and therefore not great for generalization.)
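    A minimal sketch of that kind of added constraint - a task loss plus an auxiliary penalty term (the L1 penalty and its weight below are placeholder choices; picking the right constraint is exactly the open question, and PyTorch is assumed):

```python
# Task loss plus a constraint term that discourages reliance on many large weights.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))

task_loss = criterion(model(x), y)
constraint = sum(p.abs().sum() for p in model.parameters())  # L1 sparsity penalty
loss = task_loss + 1e-3 * constraint  # the weighting (and the penalty itself) is the hard part
loss.backward()
```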

  • @_bustion_1928
    @_bustion_1928 1 year ago +1

    This is a great video and a good paper that gives some thoughts to process. My personal opinion on machine learning is this:
    I think of machine learning right now as a relatively new and very fancy tool, a powerful rival of calculators. We create models for specific tasks, and most of the time they do really well. However, when we try to create a general approach we seem to either fail or meet certain boundaries (computational, data, biases, etc.). From such a summary, it seems that there is a problem in our human approach.
    - Specific task. For example, we have images of cats and dogs, and we need to label them. So, we need to create a model that maps an image to the corresponding label; proceed to the development phase. This task is clearly defined; question and solution are clearly defined.
    - Artificial intelligence. Okay, what makes up intelligence? Problem solving? Connecting previous experience with new experience? Filtering different kinds of experience? Is there a possibility of simulating intelligence? The point here is, there are a lot of unanswered questions, and each requires a specific solution.
    (Now I feel like I'm comparing oranges to tomatoes.)
    We can solve specific tasks, but it seems we fail when it comes to making something like AI. My personal opinion is that we have not developed enough of an understanding of our own intelligence to replicate its mechanisms. Additionally, it almost feels like the usual approach to problem solving is not working when it comes to solving the problem of AI, though I am not going to argue about this one.

  • @jamiekawabata7101
    @jamiekawabata7101 3 years ago +7

    Thank you Yannic, this is a great video. At 11:36 it seems you are arguing that for general intelligence, additional _architectures_ must be sufficient, while the paper is questioning whether additional _layers_ are enough and the problem is only one of scale. I would agree with both: new architectures are wide open and in principle they can do anything, but just scaling a bigger GPT-3 (or any other existing system) is unlikely to reach general intelligence (but some people disagree with that).

    • @YannicKilcher
      @YannicKilcher  3 years ago +2

      Good point!

    • @NextFuckingLevel
      @NextFuckingLevel 3 years ago +3

      Achieving general intelligence or not, scaling up GPT-3 to a trillion parameters is worth a shot.

  • @andrewminhnguyen9446
    @andrewminhnguyen9446 3 years ago +14

    "Paraplegics have intelligence."
    There was this guy named Stephen Hawking...

    • @wiczus6102
      @wiczus6102 2 years ago

      One of the most intelligent people

  • @beilkster
    @beilkster 3 years ago +1

    First time viewer, great analysis. I subscribed

  • @pneumonoultramicroscopicsi4065

    Your channel is a goldmine

  • @saichaitanya7115
    @saichaitanya7115 3 years ago +2

    For a machine learning learner like me, who just completed Andrew Ng's course on Coursera: are there any papers one can recommend to beginners to get a better understanding of the topic? Implementing them on their own would also make their portfolio more presentable. I would be delighted to get a list of those, so I could start hustling right away.
    I admire your work so much.

    • @Rhannmah
      @Rhannmah 3 years ago

      go on reddit and take a look at reddit.com/r/learnmachinelearning
      lots of practical tutorials in there!

  • @willsnyder8735
    @willsnyder8735 3 years ago +1

    I think it will be sooner rather than later. My reason: not only is the spring effect taking place currently, but many brilliant minds are working hard at this, possibly more people now than ever. As far as funding to press forward goes, what AI is currently capable of doing is making money. I feel like we're about to see some big things in this development, maybe not what some would hope, but certainly world-changing.

  • @aniekanumoren6088
    @aniekanumoren6088 3 years ago +1

    3:48 lmao you caught yourself there

  • @DasGrosseFressen
    @DasGrosseFressen 3 years ago

    Fallacy 3 seems pretty clear to me... It is about ascribing the actions and characteristics which are related to intelligence (whatever that is) to procedures in the algorithms. Since concepts attached to intelligence are not formally defined, it is difficult to attach them to formal procedures that could be gross simplifications of what intelligence means.

  • @ralfgustav982
    @ralfgustav982 3 years ago

    In his book "Gödel, Escher, Bach" Douglas R. Hofstadter argues that a necessary condition for a system to even be able to develop some sort of general intelligence is to have some sort of "self-reference" (have a sense of self). Self-reference can be used to compute the relation of the system to the world, predict effects of potential action sequences, model the theory of mind of other agents (and their theory of mind of oneself) etc.. I think that reinforcement learning is already on a good track. See the paper "Experience grounds language"

  • @narendrapatwardhan68
    @narendrapatwardhan68 3 years ago +9

    One of the biggest problems with ML is people's tendency to expect it to magically work. Most "AI" effort goes into trying to design fancy models as opposed to using well-studied ones. Most people do not collect a "correct" dataset or even scrutinize the one taken from the web, and then complain that AI isn't there yet.

    • @franciskusxaveriuserick7608
      @franciskusxaveriuserick7608 3 years ago +2

      In my opinion this is kind of a direct result of the overhype and the relatively sudden increase in demand for "AI experts". It's a very fast-developing field, and yes, I also still don't know a lot of what is going on; with all these online courses available, people can claim to be experts easily. A lot of models and architectures are in fact still quite a black box, which explains why a lot of people approach it in a very weird manner. I am still very new to this whole thing, but I noticed that some people who keep trying to design fancy models generally don't understand that architecture design is not just an easy trial-and-error process that can somehow easily solve the accuracy problem they have with their current model. I know that those insanely knowledgeable researchers in top research institutes "sort of" know what they are doing when they are modelling something new and crazy, but most of the time laymen like me and many other people don't have that. And people fail to realise, again, that modelling is not everything; you also need to make sure it works in real-life inference. Analyze the problem at hand, get well-established models that work well (check Papers With Code if you want relatively new, cool models that a lot of people think work quite well already), and train them with good labelled datasets; that can for sure solve a lot of things, though don't expect it to be some sort of highly generalizable intelligent being.

    • @sidkapoor9085
      @sidkapoor9085 3 years ago

      @@franciskusxaveriuserick7608 just because the bar for entry has been lowered doesn't mean the ceiling isn't just as high.

  • @ssssssstssssssss
    @ssssssstssssssss 3 years ago +7

    I think "AI winter" conjures up the wrong image. It's not like it disappeared. Even today, fuzzy logic controllers are being applied. Even if the excitement about what this generation of AI can deliver dwindles, it will continue to have a big impact on industry. There just won't be the same level of excitement.
    I think the most concerning thing today is the trend toward more and more parameters. This indicates a progression toward a local minimum.

  • @wiczus6102
    @wiczus6102 2 years ago

    I think understanding is when you can derive each element of the model that leads from the input to the outcome, rather than just learning the inputs and outputs.
    For instance, I could ask you what 9*9 is. A person who doesn't understand multiplication could say it's 81 based on the multiplication table. A person who does understand multiplication would know they'd have to add 9, 9 times, so they'd go 9,1 -> 18,2 -> 27,3 ... -> 81,9 (and then you could define addition, etc.). The first one is learning empirically, the second one is learning rationally.
    The thing is that we can't fully learn rationality; you have to learn rationality empirically. So deep down there is no human or machine understanding. We both understand in the same way, but a machine has the huge drawback of having significantly fewer neurons. I guess some algorithmic changes can also have an effect, but I think ANNs with CNNs and RNNs are basically all there is to a BNN.
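    The multiplication example, spelled out as code (purely illustrative):

```python
# "Empirical" knowledge: a memorized lookup table.
times_table = {(9, 9): 81}

def multiply_memorized(a, b):
    return times_table[(a, b)]   # fails on anything it hasn't seen

# "Rational" knowledge: derive the answer from a simpler operation (repeated addition).
def multiply_derived(a, b):
    total = 0
    for _ in range(b):           # 9 -> 18 -> 27 -> ... -> 81, as in the comment
        total += a
    return total

print(multiply_memorized(9, 9), multiply_derived(9, 9))  # 81 81
print(multiply_derived(7, 13))                           # also works on unseen inputs
```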

  • @shimondoodkin
    @shimondoodkin 1 year ago

    Understanding is: reasoning based on correctly recognized associations.
    I think there is more to it, like recognizing intention and processing the reasoning together with intention.

  • @anetakahleova5705
    @anetakahleova5705 3 years ago +6

    I love you soooo much for summarising these papers, you are awesome 💜💜💜💜 I adore your consistency and diligence 🙈💜 and your presenting skills as well! 😁🧠🦾

  • @robertstudy9903
    @robertstudy9903 3 years ago

    I really enjoy your channel. Please keep updating :))

  • @luke2642
    @luke2642 3 years ago +17

    The process of forming a single brick is not the same as building a house. Narrow AI is like making bricks. Bricks make up ~50% of the structure of a house; the same is probably true for narrow AI vs AGI.

    • @juanok2775
      @juanok2775 3 years ago

      This is exactly why I call bullshit on Tesla's claims of L5 FSD. If they can do L5, that means you can cheaply train any robot to manufacture anything you want.

    • @luke2642
      @luke2642 3 years ago +2

      @@juanok2775 I'm not so sure that's a fair comparison. You're confusing specific solutions with general solutions. Driving is often described as long tail. It's 99% solved by time/mileage but that 1% just needs hand crafted solutions, or remote operators. It's an engineering challenge for sure, but not comparable to an AGI factory that can design and make any robot for any purpose.

    • @christopherhong7004
      @christopherhong7004 3 years ago +1

      Disagree, narrow AI and AGI are very different goals

    • @Laszer271
      @Laszer271 3 years ago +2

      @@christopherhong7004 but theoretically you could build AGI if you just had narrow AI for ANY task AGI can encounter and also some module to synchronize and control all those narrow AIs

    • @christopherhong7004
      @christopherhong7004 3 years ago

      @@Laszer271 but how are you defining "any"? In what context?

  • @badwolf8112
    @badwolf8112 3 years ago

    I think that if you look into understanding, you can find some useful info that can shine a light on what it means. I don't think she used the comparison of the computer's understanding to ours badly.
    Stephen Wolfram wrote an essay on understanding. There's also an oxford and wiki pages about it. From skimming them, my impression is that there are certain criteria of when someone understands something. One is being able to reason about it, as in know what it does or how it behaves in different circumstances. Another possible one is having a clear mental image.
    Wolfram once said a strong indicator is a feeling. Feynman put an emphasis on constructing ideas at a high level, as in explaining them well enough that someone else could understand it, and there's research that shows that doing this helps retain information.
    All we know about AI's "understanding" right now is that it's an automated self-configuring program that can tweak itself according to rules we give it. But as far as we're aware (and we might say, beyond any reasonable doubt) it's just a nifty automaton. "understanding" is part of *Psych*ology and Philosophy. We might invent a term for it for machines but we'll probably want it to be so precise that it won't be like human understanding (as many things in philosophy and psychology have many angles and aren't so precise).
    However, maybe we can make an automaton which appears indistinguishable from a person. And there is this idea that consciousness might be a patterns thing more than a biology thing, or at least that it can be achieved through the material computers are made of.
    Personally I think we can achieve human-level AI but whether it has consciousness is another question.

  • @andreassyren329
    @andreassyren329 3 years ago +1

    Here's my reaction to this. Not a critique on the paper, but reflections on it.
    > Follow society debate about how ML models combined with social media algorithms significantly impact election outcomes.
    > Frantically trying to come up with policies to deal with ML models allowing anyone to impersonate others appearance and voice on video.
    > Newspapers across the globe rush to report on the (fairly rare) accidents involving the cars with autopilot that _already exist today_.
    > See Boston Dynamics Atlas perform a dance number with a robot dog
    Maybe we're just not measuring our progress by the same metrics by which we incentivize ourselves? Because the above certainly looks like progress to me. (Noting that "progress" here is distinct from "progress towards the society we want".)
    This might seem like an unnecessarily sensationalist comment, but hear me out.
    At what point do we question whether our expectations are just not adjusted? Do we base our expectations too much on science fiction and too little on what problems we actually solve when we go to work?
    I claim (with about the level of proof as is customary for UA-cam comments, none 😉) that the following two phenomena are core to what we, as a society, will _actually_ make progress towards.
    * We make quick progress when we recognize that practical problems are solved when we combine results from several different fields.
    * We make quick progress if you can make an obvious successful business out of the outcome.
    I don't think it makes sense to expect Deep Learning to solve practical problems like AGI end-to-end, in the same way we should not expect advances in control theory to singlehandedly make things like Boston Dynamics Spot possible. To make Spot we needed progress in control theory, materials science, structural mechanics, miniaturization of power supply, miniaturization of compute, developments in actuator strength and precision, as well as deep learning to interpret the surrounding from sensor signals.
    Why should we expect AI research alone to give us AGI when all we actually need it to do is perform data retrieval, or make sure the car can get you from A to B without accidents or breaking traffic rules? AGI is cool, but I'd say the monetary gains will happen much earlier than AGI - not to mention that AGI could be a liability from a business standpoint.
    Maybe we would solve AGI quicker if we lived in a world that incentivized us to do so. But maybe those are just not the incentives we have when we go to work. And in either case, why do we expect the solution to be an ML model, and not a larger system of parts, some of which might very well be iterations on the language models and image-processing models we see today?

  • @jeroenput258
    @jeroenput258 3 years ago

    Is this paper published somewhere or is it just on Arxiv?

  • @snippletrap
    @snippletrap 3 years ago +3

    The point about AlphaGo not having goals is that it's not an adaptive, autonomous agent that dynamically updates its goals based on context. It is programmed to do one thing and one thing only.

    • @cocorico454
      @cocorico454 3 years ago +3

      And yet it does. It has one major goal, which is to win the game, of course. But it can learn many subgoals depending on the state of the game, like winning a specific part of the board or capturing specific pebbles. Its subgoals are just not explicit, and AlphaGo cannot explain them.

  • @maxwellclarke1862
    @maxwellclarke1862 3 years ago +1

    Re. Progress - even if some research is not on the path to AGI (ie. it ends up looking superfluous) it can still be progress because it explores (and hopefully eventually discards as irrelevant) a branch of the tech tree.

  • @urfinjus378
    @urfinjus378 2 years ago +1

    You are quite right in arguing about fallacy 3, the misuse of the term "understanding", because there is no evidence that humans have it either.
    Nobel prize winner Ivan Pavlov, the founder of conditional reflex theory, wrote that he and his colleagues once agreed never to use words like "understand", "mention", etc. in relation to the dogs whose behaviour they studied. The reason was that when they used these terms they could never agree on anything. Instead they used something like: the receptors of the dog received a signal from the environment, then it was recognised by the brain, then the nervous system transferred the signal to the muscles of the dog...

  • @marceorigoni6614
    @marceorigoni6614 2 years ago +1

    To me, current machine learning, especially supervised learning, is like "common sense" or intuition; we are building systems that can mimic the common sense/intuition of living entities (not only humans).
    But of course, just by improving the common sense you will not get AGI, and for some problems and skills, like self-driving, you can't just base your whole thinking process on common sense and intuition, because you will make unpredictable mistakes.
    So, to me (and probably what is being done), you have to have some other system that will learn when to trust the common sense and when to try something else, do nothing, etc.
    I have some ideas on how to do that, but I of course lack the resources and knowledge to try much. I would put the emphasis on knowledge, lol.

  • @excitedbox5705
    @excitedbox5705 2 years ago

    The body intelligence seems more like an uncertainty factor. The way you feel affects how much value you place on what you know to be a fact, etc. This means it most likely even works to reduce accuracy, but is helpful for learning through discovery of new outcomes or new methods. Ie. Your arm hurts/is tired, so you decide to use your legs to knead dough, and this way learn that you can use your legs as a tool. Stupid example, but for survival this could be useful and doesn't just apply to physical things. Your emotions towards your community helps the whole group survive, even if it isn't optimally beneficial for you. This could be simulated in AI by either performing an algorithm twice (once with known data and once with a degree of randomization or rounding). Or using a confidence factor and similarity and relationships to other domains in problem-solving. Ie. if certainty is below a certain % try solving the problem with algorithms from related domains or using data from related subjects and seeing if this gives a higher confidence score.
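    One way to read that last suggestion as code - a confidence-gated fallback to solvers from related domains (the solver interface and the threshold are hypothetical placeholders, not a real system):

```python
# Hypothetical sketch: if the primary model is unsure, consult related-domain solvers
# and keep whichever answer comes back with the highest confidence.
def solve_with_fallback(problem, primary, related_solvers, threshold=0.8):
    answer, confidence = primary(problem)        # each solver returns (answer, confidence)
    if confidence >= threshold:
        return answer
    candidates = [(answer, confidence)]
    for solver in related_solvers:               # e.g. models trained on related domains
        candidates.append(solver(problem))
    return max(candidates, key=lambda c: c[1])[0]
```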

  • @dansken610
    @dansken610 3 years ago +4

    Interesting topic

  • @pratik245
    @pratik245 2 years ago

    A possible analogy for thoughts, with concepts working in an infinite-dimensional space, is that if we sense something, the thought we build upon it is different for different humans. How can the same absolute experience have relative meanings for different people? Precisely because human understanding is also not absolute about everything; otherwise we would be divine entities by now. It is based on the dimensions each human mind is able to connect to, based on its experience or priors. Similarly, machines can have their priors based on what we train them upon and what they can find out about it. It won't and can't be perfect, but the more we expect to find the physical functions that are possibly operating on them, the better our predictions or outcomes will be.
    There is certainly an absolute function that operates on that infinite-dimensional space, far beyond humans' ability to comprehend. It carries the answer to everything, but we are also part of this answer; how we can even perceive beyond it is the question. The better we get, the closer we come to what we call superintelligence, though we are only approaching that infinite-dimensional mystery at an infinitesimally small pace. The thing we term genius in humans (whether in chess, problem solving, pattern recognition, etc.) is basically being able to calculate the best possible paths in the infinite-dimensional space our consciousness operates on. If we generalize those best-path solutions, we can expect machines to be better than humans at everything we do where complex, undefined spaces are not involved. So, to say that AI will hit a winter is to miss what AI can really do: we cannot emulate it to be human in every capacity, but when we segregate the domains and assign it a set of them, it will be better than humans in everything.
    The question often asked about AI failing to recognize some innocuous pictures comes down to the machine not being able to find all the dimensions that humans can, because it could not find the connection in the finite computing space it operated in. It does not mean machines are dumb, but simply that their understanding is not as fine-tuned as humans'. However, it is important to note that increasing compute does not mean we will find that dimension, because this space is infinite; we need to find what dimension the machine is missing, and whether there is a physical function that can define and refine it. Also, machine understanding is governed by humans' limited understanding of intelligence and built on computing bits; both constrain the domain we work in. If machines were to develop their own intuitions, it would depend solely on their world of computing, which is far inferior to how human computing can work; ours is more efficient and multi-dimensional than we think it is. ENOUGH SAID 😂. I am sorry I could not make it funny.

  • @davr9724
    @davr9724 3 years ago +16

    Valid points; personally I don't think that an AI winter is coming soon.

    • @y__h
      @y__h 3 years ago +1

      If an AI Winter ever comes again, the world will be a very bleak place.

    • @billykotsos4642
      @billykotsos4642 3 years ago +3

      Deep Learning is everywhere. It is not going away any time soon.

    • @jcald111
      @jcald111 3 years ago +8

      Probably not an AI winter, but a lot of disappointment when the loftier expectations fail to materialize.

    • @rpcruz
      @rpcruz 3 years ago +1

      @@y__h AI winters are actually great for AI. It means the previous understanding was replaced by something closer to reality.

    • @lavs8696
      @lavs8696 3 years ago +1

      Why not? There are already massive doubts that AGI is possible using most current methods. Most of these companies, like DeepMind, are bleeding money by overpromising; I doubt investors will tolerate this for much longer.

  • @calaphos
    @calaphos 3 years ago +3

    Interesting arguments. I feel like all of those fallacies are largely about the anthropomorphism of "AI", or rather of current deep learning systems. The name "artificial intelligence" already goes in that direction: some nebulous concept associated with human behavior.
    I don't think this even matters in the end, in the original context of an AI winter. Even if these things are just good function approximators and not able to achieve AGI in the way we imagine it, they are still useful. It doesn't matter whether things like protein fold prediction or counting cars in satellite images are intelligent from a human perspective; they are still worth a lot for lots of use cases. I personally don't think we will achieve AGI anytime soon, but that doesn't mean that funding will dry up.

  • @machinelearningdojowithtim2898
    @machinelearningdojowithtim2898 3 years ago +1

    "GPT-3 is undeniably progress towards AGI". I disagree. As you cited from Kenneth Stanley, it's "progress", sure, but the objective is deceptive. It's progress in the wrong direction. We are caught in a local minimum in the maze. We need an entirely different approach, i.e. statistical models for NLU are probably a dead end. I think the final solution will be more discrete than continuous, or at least discrete in the inner loop.

  • @ChocolateMilkCultLeader
    @ChocolateMilkCultLeader 1 year ago

    How did I miss this masterpiece

  • @pratik245
    @pratik245 2 years ago

    Where I used "computing space", please read it as "computing space-time".

  • @sheggle
    @sheggle 3 years ago +54

    No the model doesn't take shortcuts, it does exactly what you bloody asked it to do. It's just that you have no idea how to formulate the question.

    • @kazumik4941
      @kazumik4941 3 years ago +26

      "Haha, look at this stupid AI doing exactly what we told it to, instead of what we want! What an idiot!"

    • @andrewminhnguyen9446
      @andrewminhnguyen9446 3 years ago +15

      "Find the shortest path to the local minima in this loss landscape given this input."
      "Silly AI, that's a dumb answer!"

    • @MCRuCr
      @MCRuCr 3 years ago +2

      I think this shows the discrepancy between what "normal" people think AI is vs what the actual researchers think it is.
      No magic, just a lot of number crunching

    • @wiczus6102
      @wiczus6102 2 years ago +1

      The shortcut refers to the fact that you can try to teach an AI to recognize dogs, but it's easier for it to pick up on the fact that people who have dogs tend to use Android and people with cats use iPhones. That makes the model take a shortcut: instead of recognizing by ear and snout shape, it recognizes by camera quality.
      Yes, you're right that it's the fault of the engineer, but this problem is called shortcut learning.
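      A toy illustration of that kind of shortcut, with a single spurious feature standing in for "camera quality" (entirely synthetic data; assumes scikit-learn and NumPy):

```python
# The label is almost perfectly predictable from a spurious feature in the training set,
# so a linear model learns the shortcut and fails once the correlation breaks at test time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
true_feature = rng.normal(size=(n, 1))                                  # e.g. snout/ear shape
label = (true_feature[:, 0] > 0).astype(int)                            # dog vs cat
spurious = label.reshape(-1, 1) + rng.normal(scale=0.01, size=(n, 1))   # "camera quality"

X_train = np.hstack([true_feature + rng.normal(scale=2.0, size=(n, 1)), spurious])
clf = LogisticRegression().fit(X_train, label)

X_test = np.hstack([true_feature, rng.permutation(spurious)])  # correlation broken
print(clf.score(X_train, label))  # near-perfect, via the shortcut
print(clf.score(X_test, label))   # collapses toward chance
```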

  • @Rrabelo
    @Rrabelo 3 years ago +3

    We need more philosophers working in artificial intelligence. Some questions in the article go beyond the technical area and touch on what intelligence actually means.

  • @jones1351
    @jones1351 3 years ago

    What Chomsky et al. have said on the matter rings true with me. The bulk of the field are searching for the keys under the wrong street lamp. Along the way we build bigger and better 'bulldozers' but we're not on the path to understanding 'intelligence', let alone developing machines with general intelligence.
    How do we 'teach' a machine to perform a task (basic, inherent 'common' sense) that is still a mystery to us? If we don't know how WE do it, how in the world do we 'teach' it to a machine?
    Absent this crucial 'ingredient' we're tossing darts in a darkened room - and we're not even sure the board is in that room.

    • @SianaGearz
      @SianaGearz 1 year ago

      Why is it a problem fundamentally? If we do something and we don't know how we do it, we have learned it regardless; and this is the same thing we do, having the system learn. What we are really doing is figuring out the correct incentives for it to do what we want it to do. One thing to keep in mind is that our brain is well oversized for any given task, but it's also just the right size given nobody is good at everything and it takes a lot of brain just to sustain us alive. The simplified neurons we simulate are by all reason less efficient, so we're not reaching the same capacity, and yet we're achieving impressive results. And not after 25 years of learning but rapidly.
      What is the alternative? You can't just probe into a human brain and figure out how it works exactly, ethically available tools are limited. And as much as we could figure out on other organisms, some of it unethically, we have done. When you put numerous subsystems (thousands or more, millions, billions) that are simple together you get emergent behaviours, that are not reasonably predictable, and by all reason we'll never have the power to analyse them until we have such a system fully recreated artificially, so we won't know about good AI design, or indeed intelligence, until it has emerged by our hand. So ultimately the approach at hand, as inefficient as it may seem, is the only one.

  • @pratik245
    @pratik245 2 years ago

    That's why AI equivalence with humans is not about building humans: all tasks that can be represented as mathematical functions (not strictly, and even as combinations) in the physical world will be attainable with our current understanding of AI and computers. However, correlating concepts to create thoughts remains very challenging, and that is essential for deep thought. But the better we get at fine-tuning which dimensions we need for, say, a specific set of general tasks, the better our AI will be. So if we want to build, say, an army of robots with aerial attack, night vision, and close-combat capability, or an army of robots to do a task like banking or teaching computer programming that does not require deep design decisions, we know those dimensions can be traced/replicated by AI.

  • @ZyTelevan
    @ZyTelevan 3 years ago

    Regarding the discussion about the goal, in this case I would agree with the author. The *goal* of AG is certainly not to *beat the best human player* . If it has a goal, then it would be something like making the move that maximizes its chance of winning the game. It doesn't care about defeating human champions in particular. It is, however, its purpose. That is, it was constructed with the goal of beating the champion - the engineers had the goal, not AG itself.

  • @nauy
    @nauy 3 years ago +7

    I can already see a winter coming. There seems to be lots of work being done in deep learning, but very few, if any, fundamentally new ideas. I think many of the tools we use today are too crude and inefficient. Things like vector representation and optimization using gradient descent and back propagation are actually holding us back. We’ll never get to full self-driving if we don’t even have basic stuff like good representations of the physical world and stable and efficient learning licked.
    There is something to the embodied brain argument, but it’s not what the proponents make it out to be. It is true that in the biological brain, computation is often anatomy. But it’s just superb engineering rather than some inextricable computation. E.g. It is well known that the human visual system performs a log-polar transform going from the retina to the cortex (see space variant sensor work from Eric L Schwartz ca 1990). The fact that this complex computation is implemented via simply varying the density of the rods and cones in the retina (ingeniously efficient) doesn’t make this function any less abstract. We can reimplement it anyway we want in silicon.
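    For reference, a minimal NumPy sketch of a log-polar resampling of an image grid - a simplification of the retino-cortical mapping mentioned above:

```python
# Sample an image on a grid that is uniform in (log r, theta): dense near the
# centre ("fovea"), sparse in the periphery, analogous to the retino-cortical map.
import numpy as np

def log_polar_sample(image, out_shape=(64, 64)):
    h, w = image.shape
    cy, cx = h / 2.0, w / 2.0
    log_r = np.linspace(0.0, np.log(min(cy, cx)), out_shape[0])
    theta = np.linspace(0.0, 2 * np.pi, out_shape[1], endpoint=False)
    rr, tt = np.meshgrid(np.exp(log_r), theta, indexing="ij")
    ys = np.clip((cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip((cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return image[ys, xs]          # nearest-neighbour lookup, for brevity

print(log_polar_sample(np.random.rand(128, 128)).shape)  # (64, 64)
```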

    • @christopherhong7004
      @christopherhong7004 3 years ago +1

      I don't think there will be a winter in AI, just a winter in human-level AI

    • @wiczus6102
      @wiczus6102 2 years ago

      But you don't need that many advances now, you need applications.

    • @nauy
      @nauy 2 years ago

      @@wiczus6102 So you want lots of crappy AI applications displacing human labor? That’s worse than not having any AI applications at all.

    • @wiczus6102
      @wiczus6102 2 years ago +2

      @@nauy
      1. Why would AI be crappy? The current technology is sufficient to improve the models to a point where they can be considered "not crappy".
      2. Human labor gets displaced by "crappy" AIs all the time. You think a robot arm in a factory works with any convoluted algorithm?
      You think defect sensors work with something more than just a conv net and a decision tree?
      3. You don't need Skynet to keep people interested. Autonomous cars and factories is plenty to keep people interested.

  • @descai10
    @descai10 1 year ago

    I think "understanding" can be estimated by how well something can model the causal links that cause something to be true. Simply knowing it is true is not understanding, you have to know *why* it is true. And by that definition I think AIs do indeed understand, they make associations between data which is the same way that humans form understanding. That being said, I think AI still likely has a lot of unexplored territory that could produce AI magnitudes better than current models. More specifically, I think AI that continuously feeds back into itself and builds upon its own understanding based on what it already knows would learn significantly faster than current AI systems. In contrast with the current systems that simply try random things until the correct answer is found. Transformers are a step in the right direction since it directs their attention towards more effective variables, but this is still controlled externally rather than by the AI itself.

  • @davidtoluhi8686
    @davidtoluhi8686 2 years ago

    To your point along the lines of “it ultimately depends on the hardware”: doesn’t it follow that if you want to create something better than human intelligence (as our great big tech overlords have declared) you have to modify the hardware, hence, the architecture?

  • @DamianReloaded
    @DamianReloaded 3 years ago +4

    I was thinking recently that neural networks simply can't recognize something as an unknown object. I mean it as in "to predict the bounding volume of a previously unseen object". Humans, lower intelligence animals and insects can do that quite easily (some times imperfectly). In the context of what it means to "understand" I believe this is key. In the context of human level intelligence we are missing the highest levels of abstraction in which the mind can presume unknown objects will be affected in the same way by the world model regardless of their characteristics/label and update the world model with new exceptions. A general cause-and-effect framework for machine learning algorithms to work with(in). Transformers have shown the capability of working with unseen words/concepts tho. From what I've seen them doing, It's my intuition that transformers may have the potential to work out the "symbolism" behind different kinds of signals. I can imagine for example, a self driving AI, seeing a toy lying on the street and then "asking" a language model what kind of "things" it can expect to have to avoid next. The language model may answer: "Nothing, a child, a truck" and an algorithm then calculate the probabilities of having to slow down to better avoid a future obstacle. The language model in turn could be trained with automatically annotated driving footage. "the car drives and there is a toy up ahead [a child appears out of nowhere to pick it up] [a truck had let it's cargo of toys fall] [it was a toy that fell from a balcony] . We often say that a self driving car will never get drunk or asleep, but the reality is that state of the art machine learning algorithms are at best a drunk/sleepy person on the wheel with all the common sense turned off.

    • @nauy
      @nauy 3 years ago +1

      “In the context of human level intelligence we are missing the highest levels of abstraction in which the mind can presume unknown objects...”
      I think quite the opposite is true. Artificial systems are missing the low levels of abstraction that allow them to reason about the behaviors and implications of objects that the higher levels cannot identify. If we see some small object being blown about on the road, what does it matter if it's a piece of paper or a leaf or some other thing? Its relevance to the car, passengers, or other cars on the road is still the same - it's a small, light thing that is inconsequential. Contrast that with a big lump of something unidentifiable sitting in the middle of the road. Both are unidentifiable, yet they have completely different implications. The "unknown" token is not the way to go about it. Systems that put object recognition, rather than object "relevance", front and center are the wrong way to go about it.

    • @DamianReloaded
      @DamianReloaded 3 years ago

      @@nauy I think we are talking about the same thing but disagree on what higher level means. I meant higher-level as in programming languages, where high-level features are an abstraction of many small ones. The problem for a CNN to "understand" that there is an unknown object in front of it is that an unknown object doesn't have specific features the NN can remember. If you wanted it to say "well, I don't know what this is, but I understand how it fits in the world, therefore it has this width, this height and this depth and is probably this far apart", you'd need some sort of fallback that would search for some very generalized features that may hint at the object's shape. State-of-the-art visual recognition algorithms can't do this right now, AFAIK.

  • @BlockDesignz
    @BlockDesignz 3 years ago +5

    "It was like claiming that the first monkey that climbed a tree was making progress towards landing on the moon" - I appreciate what this claim is trying to get at, but it's pretty flawed. To follow the analogy, the monkeys would have a perfectly built rocket on their doorstep, just as humans do in their attempts to create AGI (in the form of the human brain, to be concrete).

    • @mohammadsultan935
      @mohammadsultan935 3 years ago +1

      I don't see why this is relevant? We're not trying to create human brains, we're trying to replicate some of its features. Point is, a fundamentally different technology, other than ML will be required to achieve AGI.

    • @BlockDesignz
      @BlockDesignz 3 years ago

      @@mohammadsultan935 A plane doesn't fly the same way as a bird, but birds were highly inspirational to aviation.

    • @Milan_Openfeint
      @Milan_Openfeint 3 years ago

      @@BlockDesignz How does a plane fly differently from a bird? They certainly glide in the same way, and that's the hardest part of flying.
      And I also agree your argument goes sideways. How is a chess computer a step towards replicating a human brain? How does climbing a tree help navigate a rocket?

    • @wiczus6102
      @wiczus6102 2 years ago

      I think the author points out, that this is a potential scenario. So a monkey might try climbing the tree, but shouldn't get very optimistic about getting to the moon with this.

  • @leinarramos
    @leinarramos 3 years ago +23

    The irony of writing about a fallacy on wishful mnemonics and then using terms like "common sense" so loosely

    • @GeekProdigyGuy
      @GeekProdigyGuy 3 years ago +2

      The wishful mnemonics are "wishful" because they anthropomorphize machines, they try to attribute something to machines which isn't there. The reason they mention "common sense" is precisely to point out that machines *don't* have it, *despite* such wishful mnemonics, so there is no irony there. That was literally the argument they were making.

    • @patrolin
      @patrolin 3 years ago +7

      ​@@GeekProdigyGuy the paper is wishing for human level intelligence and common sense, without actually explaining what it is supposed to be
      it is just as shallow as the wishful mnemonics

  • @mmehdig
    @mmehdig 3 years ago +1

    When we are talking about a data-driven method, there is no such thing as a "dataset problem". There is a methodology problem if one cannot find the "dataset problem" methodologically.

  • @dimonenka
    @dimonenka 3 years ago +1

    I feel like the _goal_ of what is treated like AI breakthroughs like GPT-3 these days is to inflate the bubble and attract more hype, despite knowing how far we are from AGI, which can explain some of the 'fallacies'. These breakthrough are often based on throwing in more compute, which does not bring us closer to AGI (analogous with how better tree-climbing does not land us on the moon), but researchers like to pretend it's a big deal. The results of these breakthroughs are amusing but useless. Using wishful mnemonics is also not a fallacy when employed by top researchers. David Silver knows that AlphaGo does not _think_ , and he knows that the general public does not necessarily know, and he does not make an effort to resolve this confusion. It's a deliberate attempt to deceive. This is also true on a small scale. Whenever I write a paper I often feel like I'm supposed to try to sell it. Whenever I read a paper, especially by companies like DeepMind, I often feel like I'm being pitched a sale. It's natural selection, really. With 10k papers (and growing) submitted to a typical A* conference, incremental improvements, and lack of reviewers, review process becomes noisy. Reviewers don't have time to read into papers, the world moves too quickly. So of course a paper that sells itself well is more likely to be published. I like this quote from a DeepMind paper as an example of sales pitch: "The fact that DQfD outperforms all these algorithms makes it clear that it is the better choice for any real-world application of RL where this type of demonstration data is available.".

  • @pon1
    @pon1 2 years ago

    I think we can see evolution as a kind of extended AI network, we don't start as a blank slate, our brains are "pre-trained" by evolution. The layers of neurons that we have in current AI are too perfect and regular, they are on a row to row basis without really any structure (or rather, one big repetitive structure). Also, the brain isn't just one big network, it consists of several networks that are specialised to handle different things and then interconnected. So I think we have ways to go in order to achieve general AI that resembles human intelligence.

  • @Hyrtsi
    @Hyrtsi 1 year ago

    Very nice criticism on AI hype. In general we have gained amazing results. It's just that we are hitting the limits of compute power. The models are way too big. That's the next thing to fix.

  • @ytrew9717
    @ytrew9717 3 years ago

    Do many people try to understand what "understanding" means? Who are they? I wonder if they have discovered basic principles behind it. Does anyone know about these basic principles/hypotheses? I'd like to understand which problems AI can't solve.
    And are little animals, like flies, capable of solving them? If so, I guess we could simulate their neural networks and see how they do it, right?

  • @ahadicow
    @ahadicow 3 years ago +2

    "what doesn't have intelligence is someone who has been to the guillotine....there is no intelligence there"
    Best sentence I read in science.

  • @CosmiaNebula
    @CosmiaNebula 1 year ago

    From then on, explainable AI will have a language module to generate symbolic strings that correlate with other parts of its output, just like humans do.

  • @daverei1211
    @daverei1211 3 years ago

    But it takes decades for human intelligence to tune itself on the human biological hardware platform. When we are first born there are some limited survival algorithms, and maybe some kickstart discovery algorithms to obtain feedback, for example reaching out to try to grab something you think you can see - and learning from that for more tuning. The "can I touch the bird in the tree" problem (determining that a visual object is at a distance). We develop layers of integration, tuning, and refinement through simulation during dreams. When you have kids and watch their cognitive development, you see that this just takes decades: building human-level AI will require a lot of training.

  • @bhargav7476
    @bhargav7476 3 роки тому +1

    The only research paper I completely understood

  • @pratik245
    @pratik245 2 роки тому

    In an infinite-dimensional space, human minds can sense the best probability of anything. The better the senses, the better the perceptions; the concept of intuition as a sense is the mysterious dimension hiding somewhere in the domain of consciousness.

  • @elipersky1591
    @elipersky1591 3 роки тому +62

    There is a fallacy running through this paper (especially in the Fallacy 3 section) that concepts like 'understanding' or 'being intelligent' are binary properties; that you either understand or you don't, you either have intelligence or you don't. This is not reasonable: there is not a meaningful definition of 'understanding' that goes beyond the question of 'does the system behave as if it understands?'. If the answer is 'a bit' or 'sometimes' then the system understands a bit or sometimes. The more complex and abstract the problems are that the system can respond appropriately to, the more we can say it is understanding. Do dogs have understanding? Do fish? Yes, in various shades of grey which are much lower than human level. Computer programs can very comfortably be placed on this spectrum. We should stop putting humanness on a pedestal and acting like it is anything more than highly distributed and abstracted computation. We need to abandon the John Searle Chinese Room rubbish which presupposes the premise that no computer can be described as having understanding, and instead respect the spectrum of intelligence that exists. This leads to much more meaningful insight on the relationship between artificial and biological intelligence. It's also much less of a downer.

    • @andrewminhnguyen9446
      @andrewminhnguyen9446 3 роки тому +18

      I would go one step further and even argue that to suggest dog and fish understanding are shades of grey "lower than humans" is an example of the very same problem.
      I think a relevant question to ask is "Understanding in relation to what?" I think we have to be careful how we define "understanding" because one can conceivably say that fish "understand" aquatic environments much better than humans do, or dogs "understand" the relationship between scent and territory (and therefore dominance, resource availability, and danger) much better than humans do.
      That we cannot communicate with creatures about their mode of intelligence and/or understanding is not evidence of absence per se. I think recent research on avian and bee reasoning capacity may be suggestive of this.

    • @paulcassidy4559
      @paulcassidy4559 3 роки тому +2

      Great comment. The reply from Andrew Nguyen was great too. Thanks for the food for thought!

    • @bntagkas
      @bntagkas 2 роки тому

      Like with most things, there is a 0-to-1 situation, but what happens is, the 0 is just 0: you don't understand / aren't intelligent,
      but the 1 isn't just 1; it has flavors/levels: you understand this but not that, you are intelligent at playing a musical instrument but you don't understand physics at all, etc.

  • @gabrielwong1991
    @gabrielwong1991 3 роки тому

    Causality is still a problem in AI, something like the Lucas critique in economics, where many things are affecting each other at the same time (endogeneity, as they call it).

    • @easydoesitismist
      @easydoesitismist 3 роки тому

      Causality is a feature of the mind, not the world. If it rains and car accidents increase, did the rain cause or contribute to those accidents? It's the heap problem.

    • @gabrielwong1991
      @gabrielwong1991 3 роки тому

      Let's be honest: current ML (NN) models are only good at finding features... that is why they are so vulnerable to adversarial attacks, i.e. adding white noise to a picture changes the prediction. We need to let the computer learn and have a model of the Data Generating Process (DGP) behind the data, and as you said there are many factors that can change it, sometimes two or more factors endogenously affecting each other.
      In my field (econometrics) we have a thing called an Instrumental Variable that helps alleviate this endogeneity problem. I understand that the current frontier of NN research on adversarial attacks is employing this idea, and those models are much more robust to adversarial attacks... so it is important to identify causality rather than chase predictions that are easy to crack when the data changes a little bit. Microsoft is one current example: www.microsoft.com/en-us/research/blog/adversarial-machine-learning-and-instrumental-variables-for-flexible-causal-modeling/
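
      For concreteness, here is a minimal numerical sketch of the instrumental-variable idea (two-stage least squares on simulated data). Everything in it is invented for illustration and is not taken from the Microsoft post above.

      # Toy two-stage least squares (2SLS): the instrument z moves x but affects y only
      # through x, so regressing y on the fitted values of x removes the bias caused by
      # the unobserved confounder u. All data here is simulated.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000
      z = rng.normal(size=n)                         # instrument
      u = rng.normal(size=n)                         # unobserved confounder
      x = z + u + rng.normal(size=n)                 # endogenous regressor
      y = 2.0 * x + 1.5 * u + rng.normal(size=n)     # true causal effect of x is 2.0

      ols_slope, _ = np.polyfit(x, y, 1)             # biased (~2.5): x is correlated with u

      b1, b0 = np.polyfit(z, x, 1)                   # stage 1: x ~ z
      x_hat = b0 + b1 * z                            # the part of x driven by the instrument
      iv_slope, _ = np.polyfit(x_hat, y, 1)          # stage 2: y ~ x_hat, recovers ~2.0

      print(f"OLS: {ols_slope:.2f}   2SLS/IV: {iv_slope:.2f}   (true effect: 2.00)")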

  • @666andthensome
    @666andthensome 3 роки тому

    I think what people mean by "understanding" is the combination of consciousness and what machine learning people refer to as the "human ability to generalize".
    An algorithm is just dead math, in comparison.
    Take chess.
    Yes, an algorithm and a computer can now beat humans all the time.
    But the algorithm and computer emerge bootstrapped to the people who *understand* computers, chess, math, and have a self-reflective consciousness that allows them to look across domains, plunder them, and then turn up solutions to automating chess play.
    Another observation of mine -- machine learning fans tend to have the personality type that, well, likes machines. And so, I often think they are fooled into believing that what a machine is doing is what they, as humans, are doing -- but what this paper is pointing out is: probably not.
    That's why as cool as GPT-3 and, say, AlphaGo are, they can appear to be doing something much like what we do (and even better), but really, they are missing the mark in major, major ways.
    Just because they are powerful, we shouldn't be too impressed, because they still are quite brittle, and doing something much more narrowly than what humans do.
    And they also tackle areas that are actually quite a bit more well-formulated; chess, in particular, is very tightly parameterized. Language has rules and patterns too, but things get tricky there, because individual expression is a thing, innovation in language is a thing, and then there's all the hidden unconscious psychology humans seem to draw on when they speak.
    Once you start to get into areas where we ourselves cannot agree, or lack insight, such as our own psychology, machine learning just falls apart.
    And there are a lot more black boxes in human experiences than we tend to appreciate, methinks.
    Evolution built a lot of complex stuff in us before we were fully conscious, so it's no wonder so much of it is nearly impenetrable to us.

  • @Lee-vs5ez
    @Lee-vs5ez 3 роки тому

    When humans 'understand' something, they relate it to things they have seen before. If they see something completely new, they don't understand it at first either. Isn't that what NNs do with training/unseen data?

  • @pierreboyer9277
    @pierreboyer9277 3 роки тому

    I think something fundamental is missing in the machine learning approach. Humans have memory, which is a way to store information, and our brain's algorithms use it. I think neural networks are only components in a larger system-level algorithm.
    If you come across this comment, don't hesitate to reach out, as I'd like to discuss it further.
    Cheers

    • @jimimased1894
      @jimimased1894 2 роки тому

      "a larger system algorithm" are you referring to Panpsychism?

  • @zyxwvutsrqponmlkh
    @zyxwvutsrqponmlkh 2 роки тому

    29:00 By this reasoning, people who are locked in, paralyzed or otherwise physically impaired should have substantially different cognition than able-bodied humans. Early understandings of deafness would seem to support this, as deaf people of the time were typically measured as more mentally feeble than hearing people; it was later shown that the mental deficiencies were not based on the lack of hearing but rather the lack of language. If you give a deaf child language, they don't have these mental deficiencies. Raise a hearing child without language and they will likely have profound mental deficiencies even in areas not obviously related to language. Anyhow, a paralyzed person does not lose the ability to understand abstract concepts. I don't agree with the supposition that an AI would need a body to understand either. But perhaps what an AGI truly needs is language and an inner monologue.
    If a three-year-old can learn to speak and converse by observing less than 2000 hours of speech, a general AI should not need orders of magnitude more data to achieve a similar result.
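
    As a rough, hedged back-of-the-envelope comparison (the speech rate and the GPT-3 token count are outside assumptions, not from the paper or the comment above):

    import math

    words_per_minute = 150            # rough conversational speech rate (assumption)
    hours_of_speech = 2000            # figure from the comment above
    child_words = words_per_minute * 60 * hours_of_speech   # ~18 million words

    gpt3_tokens = 300e9               # GPT-3 was reportedly trained on ~300B tokens

    ratio = gpt3_tokens / child_words
    print(f"child: ~{child_words / 1e6:.0f}M words, GPT-3: ~{gpt3_tokens / 1e9:.0f}B tokens")
    print(f"ratio: ~{ratio:,.0f}x, i.e. about {math.log10(ratio):.0f} orders of magnitude")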

  • @PierreH1968
    @PierreH1968 3 роки тому +2

    If Elon Musk is off by 2 years on the availability of FSD, it is such a small amount of time, negligible compared to the lack of progress seen on ICE cars for the last 80 years. Also, I heard that their AI model is being retrained using sensors closer to human vision, reducing the weight of the radar input, which the training was emphasizing too much so that it became like a crutch, to the detriment of the vision input.

    • @PierreH1968
      @PierreH1968 3 роки тому

      And the undeserved bad press Tesla got also put pressure on them to focus on the long tail of super-rare accidents, hence requiring more time.

    • @ytrew9717
      @ytrew9717 3 роки тому

      I saw that that's what the new version coming out (this week?) is about: v9.

  • @AndreGeddert
    @AndreGeddert 2 роки тому

    About Fallacy 4, the "body problem": I don't see the problem here. Even if we assume it's true and building a human-level AI requires a body, you should easily be able to achieve this with artificial actuators and sensors. There should be no significant difference for intelligence between natural and artificial actuators and sensors.
    What I think is much more relevant is the pressure that the existence of the body puts on the evolution of intelligence. Cells evolved before neurons. Cells have needs in order to live, like nutrients, heat, etc. So the existence of non-neural cells set the goals, or the will, if you like, for the intelligence that evolved later.

  • @scottmiller2591
    @scottmiller2591 3 роки тому +3

    1) The early AI winter was catalyzed by Minsky's "Perceptrons," but the book didn't cause it. Minsky in no place said MLPs couldn't be trained. When I read his book, I took it as a challenge to researchers to develop multilayer training methods. The real cause of the AI winter was political machinations behind the scenes by GOFAI (good old-fashioned AI, essentially human-crafted warehouses full of if statements) against the uppity connectionists. The GOFAI group took advantage of the overselling of connectionist approaches and "Perceptrons" to say "See, we told you it would never work" in order to capture a larger fraction of the DoD budget, although in the end this maneuver probably hurt them as well.
    2) +1 to Mitchell for using "begging the question" correctly - the first instance in the wild I've heard in several years.
    3) There definitely is a component of "wishful mnemonics," as in the notion of calling deep learning AI for marketing purposes, then using the term AGI to replace what used to be AI. I wouldn't blame the researchers, however. "Neuron" and "attention" are other useful examples. Neurons in deep learning do nothing like what actual neurons do, and we have no idea what attention really is, but these are useful analogies for describing, implementing and analyzing systems.
    4) The notion of "goal" is quite rigorously defined mathematically by Bell, Sutton, et al., with a great deal of effort to reasonably describe what "agents" "want."
    5) "Rational," as used by economists, game theoreticians and reinforcement learning researchers means completely different things. The fact that a specialized field of study uses an English word differently from the vernacular, or that different specializations use words differently, should come as no surprise.
    6) The real current danger from AI is not AI safety (nefarious actions by evil AIs). Rather it is marketeers (AKA politicians) trying to dishonestly influence people's choices (or simply usurp them) by "dumbing down" the answer to implement policy, or more commonly, never understanding the problem and its application to policy in the first place (nefarious actions by evil/stupid people using AI as a tool). This category of actor is incapable of understanding the best policy is sometimes to do nothing. This will probably cause the next AI winter, if the lack of GPUs from the COVID-19 supply chain disruptions hasn't already started one (by the same category of people).

  • @willd1mindmind639
    @willd1mindmind639 3 роки тому

    For most lay people the problem comes from the marketing around the technology. But in reality, it is an apples-to-oranges comparison. Silicon chips do not fundamentally work the way a brain does. In many ways they are diametrically opposite, as brains do not have data types or predefined execution pathways for logic, whereas microchips do. Likewise, there is no computer capable of self-programming like the human brain, nor is there any equivalence between using large clusters of servers to train a model for machine learning and how the brain trains on any task. But those are technical details that most times will not be included in any sort of marketing summary seen by the average Joe. For the most part, the "practical" application of machine learning today is as a portable framework for writing code to do certain kinds of tasks based on various machine learning algorithms. There are many companies pushing this model in order to develop new coding frameworks, platforms, algorithms, hardware and software to support it. Underlying most of these frameworks is the assumption of connectivity to large data centers with large pools of hardware for compute, storage and memory, with access to large volumes of data. And yes, in selling these platforms and services many of these companies do use the term "artificial intelligence" quite a lot.
    The issue of "understanding" comes down to the fact that these algorithms cannot generate a "discrete" coherent internal representation of anything. Meaning, for example, a neural network representation of the picture of a cube has a bunch of bytes representing pixel values arranged, sorted and filtered into statistical mathematical patterns according to the math functions used in training. But there is no "discrete" internal representation of a cube as consisting of 6 faces, which have properties such as opposite faces being parallel to each other, with the same size and so forth. And each of those aspects of a cube, as an object within 3D coordinate space, which we understand as humans, is in itself a discrete element, such as each face, the coordinates of each corner and the volume encompassed by the cube. Another example is the classic character recognition task. No neural network has a discrete internal representation of a character, such as the letter 'a'. In order to recognize a character, you have to provide a vector of ASCII characters during training and map those to images of characters. The neural network model will have a statistical mapping of patterns of pixel values to the labels or corresponding characters from the vector. But there is no intrinsic discrete representation of what the letter 'a' is within this model. Meaning, if you were to visualize the contents of the trained model, you would not see any recognizable characters. To generate such an internal coherent representation of individual 'entities' of the real world and their "attributes" or "features" would require a fundamentally different neural network architecture.
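
    To make the character-recognition point concrete, here is a small sketch (toy random data standing in for real character images; shapes and hyperparameters are arbitrary assumptions). After "training", the model is nothing but two matrices of floats; there is no discrete symbol for the letter 'a' anywhere to inspect.

    # Toy character classifier: flattened pixels in, one of 26 letter labels out.
    import numpy as np

    rng = np.random.default_rng(0)
    n_pixels, n_hidden, n_classes = 28 * 28, 64, 26
    X = rng.normal(size=(512, n_pixels))               # fake "images"
    y = rng.integers(0, n_classes, size=512)           # fake labels ('a'..'z')

    W1 = rng.normal(scale=0.01, size=(n_pixels, n_hidden))
    W2 = rng.normal(scale=0.01, size=(n_hidden, n_classes))

    for _ in range(100):                               # plain cross-entropy gradient descent
        h = np.maximum(X @ W1, 0)                      # ReLU features
        logits = h @ W2
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)              # softmax over the 26 labels
        g = (p - np.eye(n_classes)[y]) / len(y)        # gradient of the loss w.r.t. logits
        grad_W2 = h.T @ g
        grad_W1 = X.T @ ((g @ W2.T) * (h > 0))
        W1 -= 0.1 * grad_W1
        W2 -= 0.1 * grad_W2

    # Everything the model "knows" about the letter 'a' lives in these blobs of floats:
    print(W1.shape, W2.shape)                          # (784, 64) (64, 26)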

  • @Kerrosene
    @Kerrosene 3 роки тому

    Has AI transcended what could be called "animal level"? Must we then investigate the parts of our intelligence that distinguish us from animals, which may be self-consciousness and ego, in order to develop a general AI?

  • @davidtoluhi8686
    @davidtoluhi8686 2 роки тому

    I think the “body based intelligence” criticism she offers is better interpreted as a criticism of biomimicry as a whole: “we’re gonna replicate intelligence by replicating the brain”, well, there are a couple of gotchas you have to be aware of with this type of reasoning ... I agree with you that she missed the mark by attributing it to the body (the paraplegic point was excellent), however, I still think those gotchas can be characterised as emergent properties of intelligence, and I think “common sense” fits perfectly into this: I believe common sense is an emergent property of intelligence, similarly, things like language are emergent intelligent characteristics (personally speaking, I think there are a few aspects of NLP that can be improved by appreciating this) ... but more specifically, (as an attempt to define emergence) my point is that there are certain intelligence properties that arise from intelligent entities interacting together and then get propagated back to all the individual intelligent entities (I think language is a good example of this... or common sense).
    Another example of this emergent property is history and documentation: we humans tend to build on what past humans have done and documented - there is no Feynman without Einstein, no Einstein without Newton, etc we learn what these people have done and improve on it, discuss it, process it and emit another “token” of intelligence (so to speak ... forgive my careless description)
    Does some sort of language emerge from neural networks interacting? This is not clear ... also, if it does, how do we interpret it, formalise it?
    I think one thing AI has missed when it comes to some NLP areas is that it uses natural language as some sort of yardstick ... I believe this is a step in the wrong direction; languages evolve to be more specific and useful (from natural language to mathematics to programming languages to neural networks lol) ... I understand the impulse to go in the other direction, but it is not clear to me that this is very useful.
    Anyways, great breakdown of the paper, I really appreciate these videos and the effort you put in.

  • @pladselsker8340
    @pladselsker8340 3 роки тому

    You understand something if you can predict its behaviour given a certain starting state.

  • @jabowery
    @jabowery 3 роки тому +1

    First of all, there have been 3 "connectionist summers", as distinct from "AI summers": 1) Prior to Minsky and Papert's book "Perceptrons", which put the field into its first connectionist winter -- and initiated a "symbolist summer" (from which Minsky and Papert benefitted handsomely -- and which lasted quite a while), up until 2) my colleague Charlie Smith gave Hinton support from the Systems Development Foundation, as well as giving Werbos, Rumelhart and McClelland support, which resulted in the second connectionist summer -- and sent the "symbolists" packing (including "expert systems" but also stuff like "The Fifth Generation Computer" initiative by the Japanese), up until Support Vector Machines came along in the later 1990s and were proclaimed (not sure by whom exactly) to obviate the rest of the field, resulting in the second connectionist winter, up until 3) Moore's Law enabled Hinton et al to demonstrate that, basically, with a few new tricks and lots of brute force one could show financially valuable results, which was the 3rd connectionist summer that we're still in.
    In each case, the field died because people got the practice cart before the Algorithmic Information Theoretic horse that Solomonoff and Kolmogorov had established by the mid 1960s. This is usually excused because algorithmic information is "uncomputable" but that is actually no excuse. The real reason is the usual one: If funding sources aren't sufficiently insistent on principle, people will forget principle and go for the money. We're about to do that again.
    Secondly, here's the horse:
    "Artificial General Intelligence" is a term that serious AI theorists had to come up with at about the time the second connectionist winter set in to distinguish their work from the not-so-serious field of "Artificial Intelligence". Unfortunately, as with most attempts at protective sequestration, disease vectors will not be denied access to all that nice juicy credibility, so you'll find it hard to discover what the serious AI theorists had come up with by searching for "AGI" or "artificial general intelligence".
    Here's my attempt at an "AGI" cheat-sheet conveying the essentials:
    Ideal induction compresses all data you have to the smallest of all programs that outputs that data exactly.
    Ideal deduction/prediction runs that program until it outputs all that data plus data you didn't have (which are its "deductions/predictions").
    Ideal decision appends an action to your original data and does induction again (re-compresses it). It then runs that new program to yield predictions. This it does for each possible action. Decisions take place by assigning to each consequence of each action a value/utility measure, and picking the action with the highest value (maximum utility).
    There are assumptions that go into the above but they are pretty hard to argue with if you subject yourself to the same degree of rigor/critique in offering your own alternatives as to what AGI means.
    PS: Ideal induction and ideal deduction/prediction of AGI comprise what might be called "The Scientific Method" in the absence of running controlled experiments -- which is why I advocate replacing political arguments over social policy that appeal to "the science", with a contest to compress a wide range of data relevant to policy decisions.
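
    For what it's worth, here is a toy sketch of that decision recipe, with zlib as a crude stand-in for the uncomputable ideal compressor; the history string, actions, outcomes and utilities are all invented for illustration, so treat it as the shape of the computation only, not a faithful approximation of algorithmic information.

    # Compress history + action + outcome, turn the extra code length into a rough
    # probability, then pick the action with the highest expected utility.
    import zlib
    from math import exp

    def code_len(s: str) -> int:
        return len(zlib.compress(s.encode()))

    # Hypothetical experience log of previous days.
    history = "rain umbrella dry . rain no-umbrella wet . sun no-umbrella dry . " * 30
    context = "rain "                              # today it is raining

    actions = ["umbrella", "no-umbrella"]
    utility = {"dry": 1.0, "wet": -1.0}            # value assigned to each consequence

    def expected_utility(action: str) -> float:
        base = code_len(history + context + action + " ")
        # Fewer extra compressed bytes ~ more predictable continuation (very roughly).
        w = {o: exp(-(code_len(history + context + action + " " + o + " ") - base))
             for o in utility}
        total = sum(w.values())
        return sum(w[o] / total * u for o, u in utility.items())

    scores = {a: round(expected_utility(a), 3) for a in actions}
    print(scores, "-> choose:", max(scores, key=scores.get))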

  • @willcowan7678
    @willcowan7678 3 роки тому

    I am sure GPT-3 has a level of understanding. There is a video of some guy "interviewing" GPT-3, where he asks "can cats fly rockets" and GPT-3 responds with "yes, if it evolves enough". If this is not understanding of the concepts underpinning this statement (evolution advances organisms, a cat is an organism, a cat is not advanced enough to fly a rocket, evolution could hypothetically make it capable), then it is pretty close...

  • @swordwaker7749
    @swordwaker7749 3 роки тому

    On the point of "adversarial examples": well, on a well-made dataset these adversarial examples are usually artificially constructed. We humans also have our own adversarial examples, for example: en.m.wikipedia.org/wiki/Cornsweet_illusion.
    On the self-driving car, I can see that it is very possible even in the short timeframe. Humans tend to overestimate their driving capability.

  • @tomti1023
    @tomti1023 3 роки тому

    Highly constructive paper - the media should read this and digest it fully before publishing another piece of news on AI

  • @robertulrich3964
    @robertulrich3964 3 роки тому

    Intelligence is creation. Creating a tool that makes a new tool is called a meta-tool. Until robots can create something to solve problems, they are just heuristics and algorithms. Humans imagine, delve into future possibilities, and can explore things not yet created. Dreaming is the main function of creation, and creation is the main function of imagining the future, i.e. consciousness.

  • @MrSwac31
    @MrSwac31 3 роки тому +1

    Let's make sure people don't get excited because chaos goes brrrrrr

  • @michaelringer5644
    @michaelringer5644 3 роки тому

    It almost sounds like Yannic is against AGI.

  • @-slt
    @-slt 3 роки тому +1

    Wow you're fast! :)
    Let me say a few sentences about each fallacy, but before that, this very interesting article from David Graeber also asks why we did not get all the cool sci-fi technologies that had been promised to us: 'Of Flying Cars and the Declining Rate of Profit', here: thebaffler.com/salvos/of-flying-cars-and-the-declining-rate-of-profit
    Also sorry for my English.
    Fallacy 1: Well, we learn new skills and form intelligent behaviors gradually, so it might not be very wrong to assume small steps will lead somewhere. But yes, we might get stuck in a local minimum as well; no matter how much better we get with smaller and smaller steps, the "real" minimum might be far away and out of reach with this method. Right?
    Fallacy 2: Yes yes yes, it's absolutely valid.
    Fallacy 3: And dangerous.
    Fallacy 4: I think she is referring to Hinton and others who argue that when the number of computational parameters of a system surpasses the level of the human brain, we might be able to achieve intelligence. I do agree with you, Yannic, we are simply biased about intelligence because we have so few intelligent beings in nature to learn from, but I think Melanie Mitchell also criticizes it from inside this narrow view of intelligence: "Even if intelligence can only be achieved with a computational system like what we humans have, it might not only depend on the brain; the body is important too."

    • @-slt
      @-slt 3 роки тому +2

      But about that man named Musk. Dear Yannic, he talks a lot, and just like a language model with no end-of-sequence token, he also produces gibberish about science and technology, but he is not a simple ordinary man. He is one of the richest people on the planet, and he is rich because many, many, many people worked and he used them (in the holy name of capitalism). People like Musk are responsible for climate change, pandemics, hunger and poverty around the world, and also many coups d'état and wars, all in the name of "progress" but in fact for money and power. So please let's not confuse him with a single ordinary human being. :)
      Musk talks a lot about technology and claims lots of unbelievable things, most of which are false, and I think he knows some are false but uses them for PR purposes to attract attention and investments. Take a look at his (his companies') projects. The only success is SpaceX rocket launches. It's indeed interesting, but only until everybody else does it. SpaceX used public funding and the labor of many scientists, engineers and workers. It's more just to call that their achievement, not that of a billionaire who got rich from slavery under apartheid in South Africa. (Yeah, I really dislike him :)) ) But anyway, he talks about brain implants as if they were USB sticks, talks about transferring thoughts as if we had the smallest clue about what a thought even is or how it is represented in the brain, talks about "curing autism"! His SpaceX sends tens of satellites to orbit, claiming worldwide internet access for many, but it seems it's only made for a few hundred thousand people and cannot serve more. Hyperloop? Nothing. Boring Company? Nothing at all, only advertisements and video clips. Tesla? Yeah, Tesla works hard to sign contracts with other car producers so they can buy off their dirty carbon emissions, but it's also making EVs, which has good and bad sides, from exaggerated claims on emission reductions to exploiting nature and labor for rare earth elements.
      And then these men (the world's billionaires have more wealth than 4.5 billion people* and emit more carbon than half of the planet**) talk about carbon capture, brand new worlds, the future, progress, and then when a pandemic hits, Musk calls the science bs and Bill Gates defends holding the patent on a Covid vaccine! Ah.
      * www.oxfam.org/en/press-releases/worlds-billionaires-have-more-wealth-46-billion-people
      ** www.oxfam.org/en/press-releases/carbon-emissions-richest-1-percent-more-double-emissions-poorest-half-humanity

    • @azertyQ
      @azertyQ 3 роки тому +1

      @@-slt thank you for writing all that so I wouldn't have to :P
      This guy knows his AI stuff, but then goes and simps for Muskrats.

    • @-slt
      @-slt 3 роки тому

      @@azertyQ :))

    • @victoriachudinov9580
      @victoriachudinov9580 3 роки тому

      @@azertyQ Agreed. Yannic is very, very good on technical stuff, but all his social and philosophical takes tend to be just pure techbro cringe. I guess this is what you get from overspecialization in a field.

  • @hambabumba
    @hambabumba 3 роки тому

    AlphaGo's goal is not winning but minimizing a loss function.

  • @Supreme_Lobster
    @Supreme_Lobster 2 роки тому

    To me adversarial attacks would be like messing with someone's retina and then throwing "you are not properly identifying objects" at them as if it was valid criticism lol

    • @exmachina767
      @exmachina767 Рік тому

      But when given adversarial examples, NNs don't say "I don't know what this is". Instead, they say things like "this is object X". The same thing happens with language models when asked nonsense: they try very hard to give you a (nonsense) answer instead of realizing the question is invalid and telling you so.
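
      A tiny sketch of why that happens with a standard classifier head (the weights below are random stand-ins for a trained model, and the shapes are arbitrary assumptions): softmax always spreads probability over the fixed label set, so even pure noise gets a confident "this is object X".

      # With no "none of the above" output, the model must bet on one of its known classes;
      # with high-dimensional inputs the logit gaps are large, so the bet looks confident.
      import numpy as np

      rng = np.random.default_rng(0)
      W = rng.normal(size=(784, 10))           # stand-in for a trained 10-class image model
      noise = rng.normal(size=784)             # nonsense / adversarial-style input

      logits = noise @ W
      probs = np.exp(logits - logits.max())
      probs /= probs.sum()

      print(f"predicted class {probs.argmax()} with probability {probs.max():.2f}")
      # It cannot answer "I don't know" unless such an output is explicitly built in.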

  • @fredericfournier5594
    @fredericfournier5594 3 роки тому +1

    For me, the vectorization of words is already a way to introduce common sense, by attaching more information to a word. It's already a step in the direction of understanding. The ability to describe a video with words is a step toward describing your life. I mean, we can keep going with many examples. The difficulty is probably finding the right way to use the data, and in what order, like for a kid.

  • @enric-x
    @enric-x Рік тому

    Watching this a year later is definitely ironic

  • @bluestar2253
    @bluestar2253 3 роки тому +3

    Late 1980s: "AI is dead. Long live neural networks!"
    2020, AI: "I'm baaack!"

  • @JP-re3bc
    @JP-re3bc 3 роки тому

    Excellent paper. Poor comments on your part.

  • @444haluk
    @444haluk 3 роки тому

    This is because computer scientists don't have the discipline of neuroscientists. Neuroscientists are mostly after small things; they say what they do is small, and they always think in increments and in knowledge piling up, slow but steady. Computer scientists, who hate to debug their code (due to the hell of object-oriented programming), daydream about the future, don't understand intelligence concepts the way a neuroscientist does, and are therefore always too optimistic or too pessimistic. Basically, computer scientists are either bipolar or have bipolar bosses.

  • @444haluk
    @444haluk 3 роки тому

    32:04 Yannic, you are confusing the implementation levels and the order of the functions. Humans are organic: we need to feed our neurons and compete in the world for resources, so efficient energy distribution is a top priority; if we don't eat, we deteriorate and cease to be. That's why humans have what they have: biological neurons. Robots have synthetic neurons and still need to feed their neurons, compete for resources and think about energy efficiency. Before a robot uploads its intelligence to a server (like Ultron), it needs to have a body that is human-shaped. And if everyone around it is not compassionate enough, it needs to be energy efficient and needs to look like the human nervous system, with the spine, the stem and all: 15 W for all jobs: climb, run, think, jump, throw. But the order of the functions is different. We know that even though Turing machines and lambda calculus are equivalent, when we say things like "I have a memory of this" as if it were a (Turing-style) state, that memory is embedded in the parameters of the neurons; the brain is applying lambda calculus and, being equivalent to a Turing machine, talks about itself as if it were a Turing machine. If you think in terms of Turing machines you may conclude sensors are secondary, but for lambda calculus sensors are a top priority, because the abstract pure functions can only perform well if they are fed the right amount of richness. In short, the sensors are the top priority when humans are compassionate toward robots; stems and spines are the top priority when humans aren't compassionate.

  • @hikaroto2791
    @hikaroto2791 3 роки тому +1

    Totally in accord with your analysis.

  • @nickhockings443
    @nickhockings443 3 роки тому +1

    Since these issues were raised, here are some definitions which are likely to be helpful if the goal is to make "human-like" intelligence.
    "common sense" : The prior knowledge and inference from that knowledge, that humans expect of all competent human beings. NB these expectations are highly social group specific. One of the most widely demanded "common sense" skills is reliable intuitive physics for human scale problems. This is based on experience common to most human infants, and is the base from which most human learning is bootstrapped.
    "understanding" : The ability to predict the tangible consequences of abstract concepts. Demonstrations of understanding include: (i) recognising homonyms and paraphrasing, (ii) anticipating novel future sensor data, from minimal (possibly abstract) descriptions of a scene, (i.e. zero-shot learning), and more generally (iii) extrapolations from minimal information prove to be correct.
    Note that both of these relate to relevant prior knowledge that enables generalization over a larger class of problems than the training data for given task. They imply the system grounds its models in uncertain beliefs about an external world governed by causal laws. Note also that the words and concepts of human abstractions are based on tangible concepts from human-scale physics.
    What humans exhibit is not "universal intelligence" of the kind forbidden by the No-Free-Lunch theorems (Wolpert & Macready 1997)(Wolpert 2012), but what could be called "proximal, extensible intelligence". That is that they can use their existing prior knowledge to understand adjacent problems and extend their prior knowledge to generalize over a larger set of problems. This relies heavily on inferring causal relations, which requires training data that includes action selection, see the Held & Hein (1963) "Two Kitten Experiment" or Judea Pearl's work on causal inference.

  • @christianleininger2954
    @christianleininger2954 2 роки тому

    It's funny that around 8:00 you say the AI only needs to learn that well-dressed people are not criminals. Well, that is what a lot of people (maybe not that smart) do ;)

  • @XOPOIIIO
    @XOPOIIIO 3 роки тому +6

    But today we are in AI global warming.

  • @pierreboyer9277
    @pierreboyer9277 3 роки тому +1

    I think that 'something fundamental is missing' doesn't mean that the missing thing cannot be coded in hardware. It can just mean that it's a neuron architecture/algorithm which is completely missing in current NNs.

    • @yifeiwang5768
      @yifeiwang5768 3 роки тому +1

      Yep, I agree with you. Personally I think sometimes we don't find the exact way to formulate a task, or the current objective/loss function is not enough to guide the model in the correct direction.

  • @adamrak7560
    @adamrak7560 3 роки тому

    We may want to create a human-like AGI because it would be easier to understand and make safe than something which is completely alien to us.

    • @drdca8263
      @drdca8263 3 роки тому

      @Robert w OK, but if "alien-ness" is a scalar, not a boolean, then couldn't that same justification be a good reason to prefer an AGI which is *less* alien over one which is *more* alien?

  • @TechyBen
    @TechyBen 3 роки тому

    Who is "we"? ;)

    • @telecorpse1957
      @telecorpse1957 3 роки тому

      We the people :)

    • @nauy
      @nauy 3 роки тому

      You and the mouse in your pocket.

    • @drdca8263
      @drdca8263 3 роки тому

      society?

  • @____2080_____
    @____2080_____ 3 роки тому

    I would suggest that researchers in artificial intelligence would do far more to advance the field by looking at things beyond our typical Westernized reductionist view. Today, much of learning is trying to reduce "understanding" into smaller and smaller elements. We have never considered understanding from a different philosophical standpoint of a macro, larger, interconnected view. Our limitations in artificial intelligence every other decade are akin to a wind-up toy being aimed at a corner wall and being surprised by how that stops the toy's progress. What the paper's author hints at, yet is blind to herself, is this reductionist philosophical tradition. What if we were to program computers from a Zen point of view? What if we could find a math that doesn't reinforce our reductionist bias? The more we are able to understand ourselves, what we don't know, and what blinds us about the way we believe, the more we can have the breakthroughs we seek.