Why humans learn so much faster than AI

  • Published Jun 4, 2024
  • - Link to edited game versions: rach0012.github.io/humanRL_we...
    - Link to the Paper:
    openreview.net/pdf?id=Hk91SGWR-
    "Why are humans such incredibly fast learners?"
    This is the core question of this paper.
    By leveraging powerful prior knowledge about how the world works, humans are able to quickly figure out efficient strategies in new and unseen environments.
    Current state-of-the-art Reinforcement Learning algorithms however, usually don't have strong priors and this is one of the fundamental challenges in current research on Transfer Learning.
    If you want to support this channel, here is my patreon link:
    / arxivinsights --- You are amazing!! ;)
    If you have questions you would like to discuss with me personally, you can book a 1-on-1 video call through Pensight: pensight.com/x/xander-steenbr...

COMMENTS • 107

  • @TheBenjaminsky • 4 years ago • +18

    9:09 Dude, really good point about human priors affecting our understanding of quantum mechanics. Also, I'm listening to this on studio monitors and the intro music woke up the astronauts on the ISS.

    • @alexeypolevoybass • 2 years ago

      So I'm not the only one with studio monitors here. Nice.

    • @Vedranation • 26 days ago

      I was wearing headphones and holy shit, my ears were blasted.

  • @henrikf8777 • 4 years ago • +21

    My two cents is that the "core algorithms" in our brain aren't necessarily better; it's just that we're born with a lot of really well-optimized algorithms already. Human babies can already detect faces, look into people's faces and, as you said, imitate people. Recognizing objects in 3D like this and mapping them to the agent's own body would, on a blank slate, require lots of training. I don't think we should look to mimic these well-optimized modules (like making an algorithm that learns by imitation) but instead get to the core of what allows them to be created in the first place.

    • @4AlexeyR • 1 year ago • +1

      I agree with that. It is a much more complicated process. Yes, one point of view is that humans (and other creatures) have some pre-defined, well-tuned algorithms, and that fact is part of the miracle of life. But the human brain is a more complicated system: it has its own schedule of development over time, and so we can see different stages of learning and understanding, in quality and quantity. You can find literature about this; for example, a child starts to identify similarity of objects at different scales only from about age three, and there are many other developmental milestones like this. It looks like there is a very wise program, or plan, for how a mind learns step by step according to its state at a given time.

  • @alexanderkurz2409 • 5 months ago

    5:03 "to test the presence and influence of different kinds of human priors" ... this is pretty cool ...

  • @JKarioun • 6 years ago

    This is great content, thank you for doing those videos, looking forward to the next ones!

  • @ericfeuilleaubois40 • 6 years ago • +1

    Damn great video! Carry on! Makes it very easy to get into these advanced subjects :)

  • @pfever • 6 years ago

    I just found your channel, pretty cool! Looking forward to seeing more videos of yours =)

  • @davidm.johnston8994 • 6 years ago

    Nice video man, just subscribed :-)
    If I may give you a bit of criticism: watch out for the audio levels, because the ending music and some "pop" sound FX at the beginning were much louder than the speech.

  • @NikosKatsikanis • 5 years ago

    Great videos on making complex stuff approachable

  • @maskedman6890 • 2 years ago • +1

    Well, there is a saying that you have to have done a task for 10,000 hours to be actually great at it. By the time humans turn 18 years old, they have already been alive for 155,520 hours, so in that time a person could be great at no more than about 15.5 things.
    But another variable is the near-involuntary actions we perform, like walking, running and balancing. Once we have practiced those actions, they become muscle memory, so you can read while walking or sing while running, and the multitasking doesn't hurt the performance of the other task.
    So we could probably be great at more than 15.5 tasks, maybe even 50 or 100, if we were being productive 100% of the time.
    The point is that humans take years to perform this many tasks efficiently, so robots/simulations would also need years of training to become a conscious AI or something, assuming it's even possible for a single program/robot to accumulate all the tasks it learns over the years.
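    A quick sanity check of the arithmetic in this comment; the figures 155,520 and 15.552 only come out if one assumes a 360-day year:

    ```python
    # Hours alive at age 18, using the comment's implicit 360-day year.
    hours_alive_at_18 = 18 * 360 * 24
    print(hours_alive_at_18)                      # 155520

    # Number of 10,000-hour "mastery" blocks that fit in that time.
    print(round(hours_alive_at_18 / 10_000, 3))   # 15.552

    # With real 365.25-day years the figure is slightly higher.
    print(round(18 * 365.25 * 24 / 10_000, 2))    # 15.78
    ```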

  • @EdViaja • 4 years ago • +2

    This is the nicest channel about AI (personal account) I've seen on YouTube. Continue with this work, it is excellent. RL has big potential in real-world problems.

  • @maheshpatel2005 • 1 year ago

    Very nicely explained... thanks a lot

  • @zmajoslavomirdedamrazizovi4290 • 6 years ago • +70

    I liked and subbed, but the intro and outro scenes blew out my eardrums and gave me a mini heart attack, so I had to frantically lower the volume, as well as adjust it during and after the video with the babies...

    • @ArxivInsights • 6 years ago • +11

      Good feedback, I'll definitely check this in my next edits before exporting!

    • @zmajoslavomirdedamrazizovi4290 • 6 years ago • +1

      I noticed this with a few other youtubers I follow, in their early videos... I have high hopes for your channel :D

    • @ArxivInsights • 6 years ago • +10

      Fixed the audio levels in the new video :p Thanks for the good feedback! ;-)

    • @zmajoslavomirdedamrazizovi4290 • 6 years ago • +1

      Yeah, I saw, man, it's awesome :D
      I think I was the second viewer, but I didn't want to be the first comment again :)
      I hope you realize that when you reach 100k subs, I'm gonna be telling people that I fixed your channel and that I'm the reason :D (kidding, to my friends only)

  • @bernardfinucane2061 • 6 years ago • +22

    One reason why reinforcement learning works so well on games is that there is a more or less infinite amount of data to work with, and reinforcement learning needs that. But the technique will have to be improved to deal with real world problems.

    • @rooster443 • 6 years ago • +1

      Same capabilities as our prior knowledge and processing speeds

  • @MrJorgeceja123 • 6 years ago

    Awesome video and explanation mate! Can you do one on inverse reinforcement learning or GANs in general? That will be great for the community! Thanks!

  • @swapnilmeshram9991 • 6 years ago • +1

    Great video. You are really doing something different from others who present knowledge on AI. The depth of knowledge is amazing.

  • @inkwhir • 6 years ago

    Wonderful video!!! Thank you for sharing this paper!

  • @ativjoshi1049 • 6 years ago

    Elegant and clear explanation. Great speaker and nice content.

  • @bernardfinucane2061 • 6 years ago • +15

    The concept of "priors" is what deep learning is about. The shallow layers are the priors of the deep layers. In image classification, edges are the priors of shapes, shapes are the priors of patterns, patterns are the priors of objects, and so on. It would be interesting to do transfer learning to see how well a network trained with supervised learning on internet images could use its priors on an RL task like this.
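    The transfer experiment this comment proposes can be sketched in a few lines. This is only a toy stand-in, not the paper's setup: the "pretrained" backbone is a frozen random projection standing in for a supervised-trained network, only a small policy head is trained on the new task, and all names here are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    class FrozenBackbone:
        """Stand-in for a vision network pretrained with supervised learning.
        Its weights are frozen, so its learned "priors" are reused as-is."""
        def __init__(self, obs_dim, feat_dim):
            self.W = rng.normal(size=(obs_dim, feat_dim)) / np.sqrt(obs_dim)

        def __call__(self, obs):
            return np.maximum(obs @ self.W, 0.0)  # frozen ReLU features

    class PolicyHead:
        """Small trainable head: the only part updated on the RL task."""
        def __init__(self, feat_dim, n_actions, lr=0.1):
            self.W = np.zeros((feat_dim, n_actions))
            self.lr = lr

        def action_probs(self, feats):
            z = feats @ self.W
            p = np.exp(z - z.max())
            return p / p.sum()

        def reinforce_update(self, feats, action, probs, reward):
            # REINFORCE: grad of log pi(a|s) w.r.t. W is feats (x) (one_hot(a) - probs)
            grad = np.outer(feats, -probs)
            grad[:, action] += feats
            self.W += self.lr * reward * grad

    backbone = FrozenBackbone(obs_dim=16, feat_dim=32)
    head = PolicyHead(feat_dim=32, n_actions=4)

    obs = rng.normal(size=16)
    feats = backbone(obs)             # reuse the pretrained "priors"
    probs = head.action_probs(feats)
    action = rng.choice(4, p=probs)
    head.reinforce_update(feats, action, probs, reward=1.0)
    ```

    The design choice mirrors the comment: the backbone's features play the role of priors, and only the head has to be learned on the new game.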

    • @alexanderkurz2409 • 5 months ago

      "The shallow layers are the priors of the deep layers." Yes, and I will remember this quote. But humans also have priors that are not learned (as Chomsky has famously been arguing since the 1950s).

  • @faizanahemad • 5 years ago • +2

    Hi Xander. Great video. An orthogonal question: how do you find such interesting papers? What do you use as a source, and how do you filter what to read and what to ignore?
    Possibly a vlog on how to actually find good recent papers and how to decide what to read?

    • @ArxivInsights • 5 years ago • +1

      Great question; since many people have asked, I wrote a blogpost on this a while ago: blog.ml6.eu/catching-the-ai-train-c0c496959999
      But it takes a while to create your own healthy filter bubble in our digital media mess. Give it some time, and remember: all those recommendation engines don't train themselves, so give feedback whenever you have that option! :p

    • @faizanahemad • 5 years ago

      @ArxivInsights Thanks Xander! Went through your blog, great ideas. And keep making these vids, they are awesome.

  • @HeduAI • 5 years ago • +1

    What an awesome video!!!! Thank you! The edited games examples blew my mind...

  • @sedi4361 • 4 years ago • +1

    So what I conclude from this: it's crucial to combine computer vision (what we see, plus knowledge of what we see) with deep reinforcement learning to generate something close to primitive intelligence.

  • @athul164 • 5 years ago • +1

    This is gold! Thank you for this, and keep up the good work! :)

  • @ankaandrews1093 • 1 year ago

    This video is excellent!!

  • @bjbodner3097 • 6 years ago • +1

    Thanks for another great video! Keep doing what you're doing:)
    Loved it!

  • @miladbanan2445 • 5 years ago

    Thank you

  • @sausage4mash • 6 years ago • +1

    Very interesting, thank you. I arrived here after AlphaZero beat the Stockfish chess engine; I was blown away by its games. To me it showed signs of a deep understanding, playing moves whose benefit would be beyond a brute-force calculation, but maybe I'm jumping to conclusions. Guess I'm very curious.

    • @ArxivInsights • 6 years ago • +3

      Well, I think deep learning is radically shifting what we actually mean by "deep understanding". Chatbots (despite being incredibly 'stupid' and narrow) are pretty close to passing the Turing test (depending on the skill of the judges, of course), a benchmark devised by one of the godfathers of AI almost 70 years ago. It's very fascinating to see the 'requirements' for true intelligence shift as Machine Learning progresses to display ever fancier tricks and skills ;)

  • @vanderkarl3927 • 1 year ago • +1

    Man that ending music was loud lol

  • @williamkyburz • 5 years ago

    You forgot to pop up some of the great universities in Belgium, like Ghent. Wish you the best in your Ph.D. studies. Looking forward to reading some of your research. Peace

  • @garrett7754 • 5 years ago • +3

    Loved the video and explanation! But I wish you had offered a solution for teaching AIs priors. That might delve too far into the current problem of transfer learning, where there might not be an answer yet.

    • @ArxivInsights • 5 years ago • +1

      Exactly, we currently don't really have a good solution to the prior / transfer learning problem although many many people are working on this since it is a fundamental problem in all application areas of AI. I might do a follow up video in a year or so when there are exciting new developments!

  • @basti7848 • 6 years ago • +3

    LOL, the website says "You have already participated in our experiment before, hence you cannot take part in the experiment again. Thanks again." the very first time I visited it.

    • @ArxivInsights • 6 years ago • +1

      Bastian Schwickert I noticed it too :p The updated links are here: rach0012.github.io/humanRL_website/

    • @basti7848 • 6 years ago • +1

      All I had to do was clear the cookies to access the site, but still thanks for the updated links :)

  • @verdiergun • 5 years ago

    Great great great video!

  • @omarlopezrincon • 6 years ago • +3

    greeeat !!! you only need to normalize the volume of the whole video hehehe

  • @rakhilsoman9299 • 5 years ago

    So, in an RTS game with action-based supervised feedback, can I claim it will develop new tactics over time, or do I need RL to claim that?

  • @lorenzoblz799 • 5 years ago

    I always wonder how many other "abstract" priors are actually learned by the model. For example, playing Breakout, I think the model does learn what a trajectory is, what a bounce is, even what time, space, movement and causation are. It even "identifies" itself with the white paddle; not in a sophisticated, introspective/self-conscious way, but in a very raw, "instinctive" way. If you place the paddle in a certain location and wait for the ball to arrive, there are a lot of "concepts" you are relying on. Looking at this from the opposite perspective, there are many more priors that humans bring to the game: space, time, the idea of a "puzzle", winning/losing, dependencies (you need this to open that). Liked the paper and the video, thanks.

    • @ArxivInsights • 5 years ago

      Hmm, I'd beg to differ on that point, actually. It has been shown quite convincingly that current Deep Learning systems are unfortunately not doing much more than simple curve fitting (although these 'curves' might be very complex, like recognizing images, for example). Check out this nice blogpost: www.vicarious.com/2017/08/07/general-game-playing-with-schema-networks/
      They clearly point out that standard RL policies completely miss the underlying point of the game; they simply (over)fit actions to pixels through learned feature extractors. This is one of the biggest open problems right now in Deep Learning: how do we go from powerful, gradient-based curve-fitting models to more 'general intelligence' type systems...

    • @lorenzoblz799 • 5 years ago • +1

      The article discusses the possibility of the model having a "human-like conceptualization" of the world. And of course this is not true: the "world" the model "lives" in is a flat grid, not our physical world; there is no reason, and no possibility, for it to discover concepts similar to ours. Bricks are not "human bricks" to the model, just "those things at the top that work in that way". Its "umwelt" (to use a big word) is completely different. Time and space in the "mind" of a wasp are different from the ones we use, but in my opinion they are nonetheless there, with a similar role.
      I also disagree with the author's concept of "small changes": these changes are small from the point of view of a human, who has a very specific conceptualization of the game, but they may be huge for the model, like changing fundamental physical constants would be for us humans (gravity, the "rate" of time, etc.). In our minds we have the concept of bullets and their speed because we have seen different examples of them; otherwise we probably would not even have two categories to express these two concepts. Like: let's make the time 10% taller.
      Dismissing intelligent-like behaviour as "instinct" has been done a lot with animals, erroneously more often than not. Even AlphaGo Zero is no more than "curve fitting", but I would not dismiss it as a dumb "pattern matching" thing. And we may also say that the human mind is nothing more than "electric signals", but this does not make us more or less dumb.

  • @albertomartel6508 • 6 years ago • +1

    Hey, awesome video; this channel is underrated (by number of subscriptions). Keep up the good work.

  • @WildAnimalChannel • 6 years ago

    Good video. I learned something. Which is a rarity.

  • @jackalstrategy9675 • 6 years ago

    Nice video! Do one on google AI deepmind and openAI's race to conquer dota and starcraft2 please!

  • @alexanderkurz2409 • 5 months ago

    3:12 This reminds me of Chomsky's critique of AI and LLMs. Any comments?

  • @asddassl9453 • 6 years ago

    For future videos you might want to tone down the intro and outro music a bit; it's much louder than your talking. Otherwise, a great video!

    • @ArxivInsights • 6 years ago

      Many people made this comment :p I screwed up there xD! This is fixed in all later videos :)

  • @palfers1 • 5 years ago

    It's almost a century since the birth of QM. "New ways of thinking" indeed!

  • @Arsonade • 6 years ago • +27

    Your videos are awesome but please normalize your volume!

    • @ArxivInsights • 6 years ago

      Adam Chess Haha I know, small mistakes were made in the first vids, the other ones have it normalized :)

    • @Arsonade • 6 years ago

      Arxiv Insights, cool. Keep up the good videos. I think you've got a great format here

  • @juleswombat5309 • 3 years ago

    Yes, rather interesting. Deep RL is much too slow a learner, and frustrating to apply to many practical uses and to generalise to real problems. So bootstrapping with some priors would be attractive to shorten training times, even if it is considered a return to feature engineering.

  • @TheAcujlGamer • 3 years ago

    Great fucking video!

  • @snippletrap • 6 years ago

    Killer channel, watched all your vids. But try to treat the acoustics of your recording space! Around 6:24 - :25 you can really hear the flutter echo.

    • @ArxivInsights • 6 years ago

      Yeah, I know! I was using a very bad mic at that time, switched to a clip-on in the later videos so it's a lot better there :)

  • @damnit258 • 4 years ago

    So is it good to let our prior understanding of the world "fully" influence our future decisions?

  • @JoshuaAugustusBacigalupi • 5 years ago

    Ironically, the same analogy he used to explain our struggle to understand quantum mechanics very likely applies to our struggle to understand cognition itself.

    • @ArxivInsights • 5 years ago • +1

      Great point. I guess it would make sense if it turns out that a brain cannot by itself fully understand how it works... We might need some Machine Learning algorithms to show us :p

  • @MadScientist512 • 6 years ago • +1

    I think he's got that first part backwards with the statement "Why are humans so good at this?" He says that the AI took 36 hours to solve a simple level, but that's actually how long it took the AI to learn from scratch how to play games, with no prior experience, and then solve the level with no knowledge of the objective; he even mentions that people have an advantage due to prior experience. This puts that figure in a completely different light, making one wonder how long it would take a kid who has never played games to learn the same thing, without assistance or knowledge of the objective. I know some older people who'd probably never get there :) Chess and Go AIs routinely learn the game from scratch just by playing themselves, and they surpass top human level after only a couple of hours; they have to be tested against other AIs because humans are too slow to catch up, which is kind of uncomfortable. It'd be interesting to see Elon Musk apply some OpenAI to platformers; hopefully it's just a matter of time.

    • @ArxivInsights • 6 years ago

      It's true that I should have placed the "36 hours" statement in richer perspective. The RL agent starts from scratch, so you can't compare this with a human playing, very true! On the other hand, humans are able to transfer those prior forms of knowledge to new games they've never played before, and this is where AI currently fails quite miserably. No matter how many games you train the AI on, once you apply it to a new game, you basically have to start from scratch...
      And with AlphaGo, the "couple of hours" it took to train their latest system is what we call "wall time": the total time from starting the Python script to it having finished training. But you can throw in as much compute as you have available, so you can't compare wall time with human learning time, since there could be thousands of worker threads training simultaneously!
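      The wall-time point can be made concrete with a back-of-the-envelope calculation; the worker count and frame rate below are hypothetical illustrations, not figures from the video:

      ```python
      # Back-of-the-envelope: wall-clock time vs. total experience consumed.
      # With N parallel actors, T hours of wall time corresponds to N * T
      # hours of simulated play, so wall time alone overstates sample efficiency.

      FPS = 60  # assumed emulator frame rate (hypothetical)

      def experience_hours(wall_hours: float, n_workers: int) -> float:
          """Total hours of simulated experience gathered across all workers."""
          return wall_hours * n_workers

      def frames(wall_hours: float, n_workers: int, fps: int = FPS) -> int:
          """Total environment frames consumed across all workers."""
          return int(wall_hours * 3600 * n_workers * fps)

      # "A couple of hours" of wall time with 1,000 workers is actually
      # thousands of hours of play:
      print(experience_hours(2.0, 1000))   # 2000.0 hours of experience
      print(frames(2, 1000))               # 432000000 frames
      ```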

    • @MadScientist512 • 6 years ago

      Arxiv Insights That's even worse: you can't just discount wall time, or turn the time it took to go through 4 million frames into "36 hours" simply because that's how long it would take a human; that's just how computers work. And you can't multiply the thread count by the time taken, or some such, to get a human-equivalent figure; humans aren't single-core CPUs executing code. It's real-time human vs. real-time AI; that's how it's been done since before Deep Blue defeated Garry Kasparov, and it's the measure of super-human AI performance.

    • @ArxivInsights • 6 years ago

      Well, your argument is valid as long as you are talking about very scalable simulation environments where it's easy to spin up 1,000 threads that do the same thing in parallel. As soon as you arrive at the real world (say, training a robotic arm to pick up objects), real-world sample efficiency becomes very important. It's true that Google can use hundreds of robotic arms in parallel, but they are Google. For industrial applications of RL, real-world sample efficiency is a rather important metric!

  • @loopuleasa • 6 years ago

    cool channel

  • @TheOneMaddin • 6 years ago • +2

    Overall a good video, but I feel the note about QM was unnecessary. Physicists understand QM pretty well, and it's not strange when you have the right mindset. It is just presented to the public as a very strange and unintuitive subject, simply because that is how you get attention for it. All those old quotes about how "QM cannot be understood" are very questionable today.

    • @ArxivInsights • 6 years ago • +1

      Good point! Perhaps I should not have specifically mentioned quantum mechanics... Nonetheless, I still feel AI is great at finding new solutions in domains where humans have strong priors that are not necessarily optimal (like AlphaGo, for example).

    • @wunkewldewd • 5 years ago • +1

      I think it was a good analogy. If you notice, he didn't actually say that scientists don't understand QM, he just showed the Feynman quote saying that. But his point is totally correct -- even though we obviously can use QM effectively, it's still very "unnatural" to humans because it goes against a lot of our intuitions/priors/experiences/etc. So you might have a different definition of "understanding", but I think it's fair to say that between concepts like entanglement, tunneling, and a probability density rather than a definite position, it's much weirder than more classic fields like CM, EM, SM, etc. It's not that you can't understand it, but it definitely is weirder.

    • @CosmiaNebula • 5 years ago

      The right mindset takes a lot of training to acquire, and QM is strange not as a marketing stunt but because that's how it is for humans. I mean, what kind of mindset would find both of these situations natural?
      1. Double-slit experiment with electrons: an interference pattern appears.
      2. Double-slit experiment with footballs: no interference pattern.
      And many more, such as the delayed-choice experiments, which violate intuitive causality.

  • @stephenkamenar • 5 years ago • +1

    0:24 1 minute? you mean 1 second?

    • @Fire6 • 3 years ago

      Yeah xDD

  • @codyheiner3636 • 5 years ago • +1

    Maybe the monkey experienced a significantly different environment during its upbringing and so learned object permanence more quickly? How well controlled was that experiment?

  • @itsjacobhere • 2 years ago

    Is it just me or is the intro and closing music super loud?

  • @MaxLohMusic • 6 years ago • +1

    Monkeys figure out object permanence faster than humans... more powerful models sometimes take longer to train... maybe those AIs aren't as stupid as we thought.

  • @muhammadhelmy5575 • 1 month ago

    4:00

  • @m07hcn62 • 2 years ago

    I just got smarter after watching this.

  • @versag3776 • 5 years ago • +1

    Good point, teach AI quantum physics with a reward for creating teleportation devices

  • @georgplaz • 6 years ago

    In other words: human evolution and society's nurture were overfitting us...

  • @triularity • 2 years ago

    Just curious.. how much money was spent on this research to validate the obvious? =)

  • @judgeomega • 6 years ago

    But an adult human has many years of training built in. When comparing AI to human performance, you need to compare a newborn human to the AI. Things look a bit different in that frame.

    • @ArxivInsights • 6 years ago

      Exactly, hence the whole idea of prior knowledge. The difference is that our brains somehow succeed in rapidly transferring this inbuilt knowledge to new tasks and problems we encounter in the world. So far, neural nets can't quite do this...

  • @otonanoC • 4 years ago

    How could you make a video like this and not mention MONTEZUMA'S REVENGE? No existing AI agent can play that game. And it's not just a matter of "human priors".

  • @boqsc0 • 2 years ago

    The example with games is nonsense; it's a cultural preset that allows people to perform well. It would take me hours or days to figure out what the game is actually about and what to do if I had no previous preset with which to search for and initiate the actions you expect and propose in that example.

  • @JohnSmith-lf5xm • 6 years ago

    Very nice video until mentioning quantum physics... WTF...

    • @ArxivInsights • 6 years ago • +2

      How do you mean? I think the analogy is very striking: natural selection has provided us with perceptual systems that are good at discerning predators, modeling the linear trajectories of moving objects, etc., not so much at reasoning transparently about quantum entanglement or wave-particle duality. We are inherently limited by our built-in intuitions about how the world works. In a field very far removed from ordinary perception (as in quantum mechanics, where mere measurement affects a system's state), this becomes a significant hurdle if we want to objectively explore the laws of nature. I truly believe Machine Learning will play an increasingly important role in man's scientific endeavor, because it is mostly unconstrained by our prior assumptions and evolutionary baggage.

    • @JohnSmith-lf5xm • 6 years ago • +1

      Your channel covers the development of a science that works at scales of nature where any quantum BS is not applicable. Brains operate by electro-chemical interaction between cells. Cells run their metabolic processes using the ATP molecule; that's it! If you do not eat, your mitochondria don't make ATP, so cells cannot perform their duties and die. Game over... sorry... that is all. It's not that your soul stays playing video games in another dimension because of the warp of space-time, no! Or that Go is being played in all possible outcomes in multiple parallel universes... no, none of that nonsense. Those THEORIES of quantum entanglement and wave-particle duality are just scientific misunderstandings... think of it like when the most advanced minds thought that the earth was flat, or that the earth was the center of the universe. The plain truth is that still no one knows what an atom looks like, nor an electron or a photon... for some years all of science went down the path of quantum mechanics, and it has now reached a point where it doesn't even make sense... hopefully other theories can come and save the day. Look at this video and see how crazy things can get if you believe those nonsense theories are possible: ua-cam.com/video/EtJKBonXst4/v-deo.html
      And then look at, for example, this video, which shows that other mathematical models can explain real phenomena without bending rationality:
      ua-cam.com/video/WIyTZDHuarQ/v-deo.html
      Your channel is amazing. I've only watched two videos and they've already helped me with my work. Thanks!

    • @ArxivInsights • 6 years ago • +2

      Well, I completely agree with you that the brain is just a physical system (although there are some very interesting ideas out there that disagree with this; check out this video if you really wanna dive down the rabbit hole: ua-cam.com/video/oadgHhdgRkI/v-deo.html ). And I also agree that nobody really knows what atoms, electrons or photons really look like; they are mathematical models that try to explain experimental measurements, nothing more. My only point is that we all have strong evolutionary biases that might sometimes cause us to miss good solutions/explanations, and I think Machine Learning can provide a totally new approach to doing modern science!

    • @JohnSmith-lf5xm • 6 years ago

      Thanks for the reply. I just want to post this last video, which kind of sums up my view too: be careful with extrapolating subatomic-particle mathematics to macro-scale reality (like ourselves). ua-cam.com/video/8DGgvE6hLAU/v-deo.html

    • @alexfloyd5730 • 6 years ago • +4

      I think he was just making a point that humans use their common sense understanding of the world to help make decisions, and sometimes that common sense understanding is wrong. We can use our rationality to overcome our biases but that doesn't make it easier. A system without these built in biases may find unintuitive solutions to problems or it might even find it faster than a human would if those biases affect us too greatly.

  • @SurferDudex99 • 4 months ago

    Lmao, this must be a joke. Anyone who supports this theory has no understanding of the exponential nature of how AI learns.

  • @user-cn4qb7nr2m • 6 years ago

    The result was absolutely obvious, and this is not hindsight. Waste.