Modeling the Human Mind in Code

  • Published Apr 18, 2024
  • A bird's-eye view of the similarities and differences between the human brain and neural networks. Also, some thoughts on whether conscious machines are possible.
    Models of the Mind by Grace Lindsay: amzn.to/3Q4Hbf6
    Keyboard: Glove80 - www.moergo.com/collections/gl...
    Camera: Canon EOS R5 amzn.to/3CCrxzl
    Monitor: Dell U4914DW 49in amzn.to/3MJV1jx
    SSD for Video Editing: VectoTech Rapid 8TB amzn.to/3hXz9TM
    Microphone 1: Rode NT1-A amzn.to/3vWM4gL
    Microphone 2: Sennheiser 416 amzn.to/3Fkti60
    Microphone Interface: Focusrite Clarett+ 2Pre amzn.to/3J5dy7S
    Tripod: JOBY GorillaPod 5K amzn.to/3JaPxMA
    Mouse: Razer DeathAdder amzn.to/3J9fYCf
    Computer: 2021 Macbook Pro amzn.to/3J7FXtW
    Lens 1: Canon RF50mm F 1.2L USM amzn.to/3qeJrX6
    Lens 2: Canon RF24mm F1.8 Macro IS STM Lens amzn.to/3UUs1bB
    Caffeine: High Brew Cold Brew Coffee amzn.to/3hXyx0q
    More Caffeine: Monster Energy Juice, Pipeline Punch amzn.to/3Czmfox
    Building A Second Brain book: amzn.to/3cIShWf
  • Science & Technology

COMMENTS • 42

  • @codetothemoon
    @codetothemoon  1 month ago +4

    There are a few places in the video where I refer to backpropagation as a means of training neural networks, but it's actually just one component of the training algorithm (albeit a key one).
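
To make that distinction concrete, here is a minimal, hypothetical training-loop sketch in PyTorch (an illustration only, not code from the video, and the layer sizes and data are made up): backpropagation is just the gradient-computation step, while the forward pass, loss calculation, and optimizer update are the other parts of the overall training algorithm.

```python
# Toy sketch: backpropagation is only one step of the training loop.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 1),
)
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(32, 4)   # toy inputs
y = torch.randn(32, 1)   # toy targets

for epoch in range(100):
    optimizer.zero_grad()        # reset gradients from the previous step
    pred = model(x)              # 1. forward pass
    loss = loss_fn(pred, y)      # 2. compute the loss
    loss.backward()              # 3. backpropagation: compute gradients only
    optimizer.step()             # 4. parameter update (gradient descent), a separate step
```

Only step 3 is backpropagation; the surrounding loop and the optimizer make up the rest of the training algorithm.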

  • @TheNoirKamui
    @TheNoirKamui 1 month ago +6

    The one thing that worries me is that it could create digital, and therefore almost limitless, suffering.

    • @codetothemoon
      @codetothemoon  1 month ago +1

      i'm assuming you're referring to machine consciousness - given some assumptions about the nature of consciousness, it may be a valid concern

    • @TheNoirKamui
      @TheNoirKamui 1 month ago

      @@codetothemoon Yes, almost from a supernatural standpoint. Until now, you could torture and cause suffering to highly intelligent beings only through grown humans. It is not that easy to acquire those. And we lament the torture of even single humans, let alone when it happens to millions and goes down in history. Theoretically, we could cause the same, or maybe a greater, amount of suffering by spinning up a docker container, lol. I wonder if the universe would care. I guess we will find out. It does seem kinda unavoidable.

    • @Stanlix01
      @Stanlix01 1 month ago

      That is exactly my worst fear on the subject of AI

  • @dreamsofcode
    @dreamsofcode 1 month ago

    This is such a good video. Your cadence and knowledge density are pretty much perfect. Great work, dude.

  • @LaCarteRouge
    @LaCarteRouge 1 month ago +4

    that consciousness talk brought me back to my college philosophy 101 class 😅

    • @codetothemoon
      @codetothemoon  1 month ago +1

      been on a bit of a philosophy of mind kick recently, and it appears to be seeping into my software development videos 😎

  • @RogerValor
    @RogerValor 1 month ago +2

    Backpropagation is the one keyword that would make such systems superior in certain areas, like learning.
    But even the most renowned experts in the field admit that neural networks are only one piece of the puzzle on the way to AGI, let alone consciousness.
    I would therefore rather focus on the issues we create with this particular tool than on Skynet.
    Thank you for this good overview video; it is certainly an interesting book to look at.

    • @codetothemoon
      @codetothemoon  1 month ago +1

      thanks, glad you liked it! agree they are likely just one piece to the puzzle. but I suspect they may remain a critical component in AGI, probably in combination with some other "tricks". but I am just speculating.

    • @carlosmspk
      @carlosmspk 1 month ago

      I'm not a data scientist or anything resembling one, but I'm fairly confident I've read that back propagation has serious drawbacks. One example I recall is the fact that as you propagate back through layers, the impact of the error derivative becomes "blurry". There's also the vanishing/exploding gradient problem. It's the best solution we've got thus far, but the biological approach actually seems to fix this: it's localized, so it shouldn't matter how "far back" in the neural network you move, because all that matters are the adjacent neurons (in ANNs that would be the neurons in the layer before and after), and because it's a matter of yes/no (did you fire, or did you not fire), gradients shouldn't be a problem. Of course, there are no proposals on how to adapt the biological approach to be useful in any way or form, but it's just a thought.
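
To illustrate the "localized" learning rule this comment gestures at, here is a toy, hypothetical Hebbian-style update in NumPy (my own sketch, not from the video or the book, with made-up sizes and thresholds): each weight changes based only on the activity of the two neurons it connects, with no error signal carried back through earlier layers.

```python
# Toy sketch of a local, Hebbian-style weight update (hypothetical illustration).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4, 3))     # weights from 4 input neurons to 3 output neurons

def local_update(x, w, lr=0.01):
    pre = x                                # pre-synaptic activity (did each input neuron fire?)
    post = (w.T @ x > 0.5).astype(float)   # post-synaptic activity: simple threshold, fire or not
    # Hebbian rule: strengthen weights between co-active neurons ("fire together, wire together").
    # No error signal from any later layer is needed here.
    w = w + lr * np.outer(pre, post)
    return w, post

x = (rng.random(4) > 0.5).astype(float)    # a binary input pattern
w, post = local_update(x, w)
print("post-synaptic firing:", post)
```

Whether a rule this simple could ever compete with gradient-based training is exactly the open question the comment raises; the sketch is only meant to show what "localized" means.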

  • @ChristopherFranklinSr
    @ChristopherFranklinSr 1 month ago +5

    Bio computers may end up happening in the future, which is kind of terrifying.

    • @codetothemoon
      @codetothemoon  1 month ago +3

      yeah nobody saw large language models coming, it's fun to wonder what other surprises we might see in the near future

    • @Deathington.
      @Deathington. 1 month ago +3

      There's also the rise of optical computing for silicon to quickly interface with the biological.
      The future is going to be wild.
      I can't wait to be a cyborg.

    • @ChristopherFranklinSr
      @ChristopherFranklinSr 1 month ago +2

      I just think about the worst of humanity and whether scientists would deem something unethical or not. Even if they do, someone else won't. I think of Cloud Atlas with the restaurant scene, or Vault-Tec types of situations with robo brains, and whether it comes down to slavery. We need to be better as humans and tread carefully.

    • @aleclippe6213
      @aleclippe6213 1 month ago +1

      Terrifying that it may happen; even worse to think it hasn't already happened somewhere in the universe.

  • @DaveParr
    @DaveParr 1 month ago +1

    As a senior data scientist I'm thrilled at how effectively and comprehensively you've tackled this topic. I actually learnt a few things from the biological and historical perspective. I respect that you're cautious to recommend books, but consider me sold.

  • @martin_nav
    @martin_nav 1 month ago +3

    Wait you have 2 glove80s?

    • @codetothemoon
      @codetothemoon  1 month ago +3

      I actually have 3 - a linear, tactile and clicky. most of the time I'm using the linear one.

  • @SaaSLTDDeals
    @SaaSLTDDeals 1 month ago

    Wow, the complexity of neural networks compared to biological ones is mind-blowing. The future of AI and consciousness is a fascinating topic to explore!

  • @maheshkar772
    @maheshkar772 26 days ago

    You’re 100% right. We cannot know if another person is conscious, so we can't know if a machine is conscious. Also, are consciousness & mind the same? Thoughts, feelings, intellect, ego, memory: are they all mind? Are they consciousness? Have you heard of the hard problem of matter & the hard problem of consciousness? Panpsychism is but one flavor.

  • @FastRomanianGypsies
    @FastRomanianGypsies 1 month ago

    This is great. The temporal (and spatial) component you mentioned has been modeled in neural networks that have "spike trains", i.e. a spiking neural network. This is absolutely required in biological neural network computation; you cannot hope to replicate a human brain if you abstract it out. For example, how we determine the location of sounds below 2000 Hz depends on this temporal accuracy. When a sound wave hits your head from either your left side or your right, it hits one ear sooner than the other. There is a structure in the medial geniculate nucleus (MGN) where you have an array of neurons with two upstream connections of linearly differing axon length. These neurons ONLY fire when a spike from both upstream axons arrives at the same time. If you heard the sound directly in front of you, the MGN neuron whose upstream axons are of equal length will fire. If you heard the sound to your left, the MGN neuron with a really long left upstream axon and a really short right upstream axon will fire, because the velocity of the spikes is the same but the left one appeared before the right one.

    Also, something else an aspiring AI dev should keep in mind if they're trying to recreate the brain is that there are significantly large areas of the brain that are formed without any learning involved. Early sensory processing regions have circuitry that's predetermined, not learned, and are not plastic in the adult brain (so their weights do not change). This is so that the incoming sensory signals can be broken down into their individual components. Your visual system is visually mapped, where the bottom right of your primary visual cortex corresponds to the top left of an image. Meaning, the first pixel, an (x, y) of (0, 0), is at the bottom right of the back of your head, behind your right ear. However, those neurons aren't encoding RGB as 3 bytes of information; there are red-vs-green (1.0, -1.0) color-specific neurons, light-intensity-mapped neurons (0, 1.0), and center-surround neurons (what you get after convolving an image with a Laplacian of Gaussian kernel). In higher brain regions (V2, V3, V4, and MT/V5), neurons combine their outputs just like a multi-layered neural network in order to then derive the RGB values in each part of your visual field.

    You do this same process when using LLMs without realizing it! You're not feeding the LLM with words; you are first breaking down the words by "tokenizing" them. In your brain, you break down speech into "phonemes", your brain's "tokens". Only then do you form words from those tokens to do language comprehension. The problem here is that not everyone maps phonemes the same way in their brains, and the phonetic mapping varies drastically across languages.

    Rational thought, however, is not language comprehension. You need another network besides an LLM to do rational thought, a network that is trained to map perception to mathematical principles like quantity, size, rate of change, equality, and units. Rational thought exists without language. This is why Linus Torvalds thinks that AI is more hype than economically viable and that Devin is a scam: no matter how many nodes and layers you add to a language model neural network, it will never develop rational thought from training on text data. The day that someone comes up with a Rational Thought Model (RTM) is the day AI can start replacing people in the workplace.
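
The delay-line coincidence detection described above can be sketched in a few lines. The following is a hypothetical toy model in NumPy (an illustration of the idea only, with made-up delays and spike times, not code from the video or the comment): each detector fires only when spikes from the left and right ear arrive on the same time step after traveling axons of different effective lengths.

```python
# Toy sketch of delay-line coincidence detection for sound localization.
import numpy as np

N_STEPS = 40
ITD = 2  # interaural time difference: the sound reaches the left ear 2 steps earlier

# Spike trains from each ear: a single spike, offset by the ITD.
left = np.zeros(N_STEPS)
right = np.zeros(N_STEPS)
left[10] = 1
right[10 + ITD] = 1

# An array of coincidence detectors, each receiving the two inputs through
# axons of different lengths, modeled here as integer delays in time steps.
# Detector d delays the left input by d steps and the right input by (MAX_D - d).
MAX_D = 6
fired = []
for d in range(MAX_D + 1):
    delayed_left = np.roll(left, d)
    delayed_right = np.roll(right, MAX_D - d)
    # The detector fires only if spikes from both ears arrive on the same time step.
    if np.sum(delayed_left * delayed_right) > 0:
        fired.append(d)

# With ITD = 2, only the detector whose left axon is longer (delay 4 vs. 2) fires,
# which is how the array encodes "the sound came from the left".
print("detectors that fired (left-axon delay):", fired)
```

The delays here are arbitrary toy values; the point is only that which delay pair produces a coincidence encodes the direction of the sound, and that this depends on precise spike timing rather than the static activations of a typical ANN.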

  • @cwirz
    @cwirz 1 month ago

    I'm working on a software project written in Rust that does exactly what you explained about spiking neural networks.

  • @user-gs5fc4oy8h
    @user-gs5fc4oy8h 28 days ago

    Am I the only one that pictures a Galton board for the neural network?

  • @DenjiXMakima
    @DenjiXMakima 1 month ago

    How to program a liquid neural network?

  • @Kabodanki
    @Kabodanki 1 month ago

    Belief and probability are two different things.

  • @thesimplicitylifestyle
    @thesimplicitylifestyle 1 month ago

    If a thing looks like a duck, quacks like a duck, acts like a duck, and thinks like a duck...

  • @letsgetrusty
    @letsgetrusty 1 month ago +3

    Very interesting comparison! It's a fascinating time to be a software dev.

  • @addmoreice
    @addmoreice 1 month ago

    ANNs are to bio-brains as airplanes are to birds.
    They both fly, and we can learn from one to assist in building the other, but they solve the problems in some seriously different ways.

  • @_justarandomone_8884
    @_justarandomone_8884 1 month ago +2

    Neural networks simulate only one, albeit a prominent one, of the many aspects of the human brain.

    • @codetothemoon
      @codetothemoon  1 month ago

      what are some other key aspects you suspect might be useful to try to replicate in machines?

  • @willi1978
    @willi1978 1 month ago

    For a machine to learn like people do, it would need to learn on its own in real time. When it can think on its own, then morality comes into play: what would such a machine want to do?

  • @Leonhart_93
    @Leonhart_93 1 month ago +2

    Yeah, it's pretty arrogant to think that these ML models are that similar to how the human brain works, considering the sheer inefficiency when compared side by side.
    They require supercomputers to train and a lot of energy to run. In the end, they are very limited by the algorithms and hardware humans design, which can be very different from human brains.

    • @ChristopherFranklinSr
      @ChristopherFranklinSr 1 month ago

      I don't think it's arrogance so much as misunderstanding and terminology, if you meant the human race at large thinking that way. Just like "the cloud" is just a marketing term.

    • @codetothemoon
      @codetothemoon  1 month ago +1

      sure, but why is it arrogant to point out the similarities? you point out another very important difference, in addition to the ones I covered in the video

    • @ChristopherFranklinSr
      @ChristopherFranklinSr 1 month ago +2

      @@codetothemoon I don't think they meant you, just it general.

    • @Leonhart_93
      @Leonhart_93 1 month ago +1

      @@codetothemoon Everyone that I see online likes to make the argument "but don't humans work the same way", so I liked that you pointed out the differences for once.
      And well, these ML neural networks were designed in the first place to be an analogy for the human brain, the only model we knew that actually works.
      But of course it's not only a very basic analogy limited by our current understanding, but one also limited by current hardware and software.
      Better hardware might actually be bio-computers, but that's far into the future.

  • @CasimiroBukayo
    @CasimiroBukayo 1 month ago

    Hold on to your papers! What a time to be alive!

  • @themax2go
    @themax2go 1 month ago

    i think i cracked it

  • @olsuhvlad
    @olsuhvlad 1 month ago

    45 Philip findeth Nathanael, and saith unto him, We have found him, of whom Moses in the law, and the prophets, did write, Jesus of Nazareth, the son of Joseph.
    46 And Nathanael said unto him, Can there any good thing come out of Nazareth? Philip saith unto him, Come and see.
    47 Jesus saw Nathanael coming to him, and saith of him, Behold an Israelite indeed, in whom is no guile!
    48 Nathanael saith unto him, Whence knowest thou me? Jesus answered and said unto him, Before that Philip called thee, when thou wast under the fig tree, I saw thee.
    49 Nathanael answered and saith unto him, Rabbi, thou art the Son of God; thou art the King of Israel.
    50 Jesus answered and said unto him, Because I said unto thee, I saw thee under the fig tree, believest thou? thou shalt see greater things than these.
    51 And he saith unto him, Verily, verily, I say unto you, Hereafter ye shall see heaven open, and the angels of God ascending and descending upon the Son of man.
    (Jn.1:45-51)