A Fruitful Reciprocity: The Neuroscience-AI Connection

  • Published 15 Dec 2024

COMMENTS • 10

  • @მომავლისთრეიდერი

    The moment-to-moment switching in AI videos looks exactly like how DMT entities change their forms and shapes.

  • @willd1mindmind639 • 1 year ago +5

    I think this way of looking at the brain to model computer neural networks omits a key difference between brains and computers. Brains have discreteness built in, which makes learning to identify patterns and shapes, along with the relationships between them, much easier. Computers have no intrinsic means of generating discrete elements to distinguish one element from another, such as in a collection of pixels. Therefore the computer can never match the way the brain learns, because it lacks that discrete data encoding based on biomolecular values. (To see this best, look at the cells in the skin of a camouflaging octopus.)
    So the fundamental behavior of a computer neural network is to build a model that approximates the base classifier or set of classifiers (dog, cat, human) you want to use for identification. Without that base classifier, there is no way to identify anything in a computer imaging pipeline. That is why unsupervised learning doesn't work: there are no base models to compare against. The contrastive approach seems to work here, but even then it lacks the fidelity and flexibility of the way human brains work. Local aggregation is a mathematical approximation, totally different from how brain neural networks work. A child can still distinguish two dogs by the type of fur, the color of fur, and other discrete characteristics that a computer neural network has no way of understanding innately, because these unsupervised methods are still generalizing a high-level classifier such as "dog" rather than really understanding all the characteristics and elements that make up a dog: legs, tail, fur, ears, snout, tongue, etc.
    Ultimately, all computer neural networks operate on a mathematical model that tries to generate discreteness through classifiers built by computational processing. That imposes a cost which doesn't exist in biology, at a far lower degree of fidelity and detail. Brains don't have built-in, previously trained classifiers for things.
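The commenter's point about needing a "base classifier" can be sketched in a toy example (an illustration of the general idea, not any actual system): a nearest-centroid classifier can only name an input because labeled examples defined the classes beforehand; the labels themselves come from outside the data.

```python
# Toy sketch: a classifier can only label an input relative to the
# labeled "base" examples it was given; the class names are external.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_label(x, centroids):
    """Return the label whose centroid is closest (squared distance)."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(x, centroids[lbl])))

# Hypothetical 2-D feature vectors for labeled training examples.
labeled = {
    "dog": [[1.0, 5.0], [1.2, 4.8]],
    "cat": [[4.0, 1.0], [4.2, 0.9]],
}
centroids = {lbl: centroid(vs) for lbl, vs in labeled.items()}

print(nearest_label([1.1, 5.1], centroids))  # -> dog
```

Strip the labels away and the same numbers can still be clustered, but nothing in the data says which cluster is "dog" — which is the gap the comment is pointing at.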

    • @doudouban • 1 year ago

      A child can see, touch, and smell with live feedback, while AI faces cold image data. I think if we gave it a fully functioning body and improved algorithms, machines might learn much faster and could be improved much faster.

    • @willd1mindmind639 • 1 year ago +2

      @doudouban It is a difference in how data is encoded. In the brain, for example, each color captured by the retina has a specific, discrete molecular encoding separating it from other colors, which means the visual image in the brain is a collection of multiple networks of these discrete, low-level molecular values. There isn't any "work" required to distinguish one color from another, or one "feature" from another, based on these values. In a computer neural network, by contrast, everything is a number, so you have to do work to convert those collections of numeric values into some kind of "distinct" features.
      Most of the reason current computer neural network frameworks still use pre-existing encoding formats for imagery is that they are designed to be portable and to operate on existing data formats. The other reason is that algorithms like convolutions are based on pixels in order to work.
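The "work" the comment describes can be made concrete with a minimal convolution sketch (a toy illustration; the image, kernel, and values are made up): to a program an image is just a grid of numbers, and even a simple feature like a vertical edge only becomes "distinct" after explicit arithmetic over every pixel neighborhood.

```python
# Toy sketch: an "image" is just numbers; extracting a feature
# (here, a vertical edge) requires explicit per-window arithmetic.

# 3x5 grid of intensities with a vertical edge between columns 1 and 2.
image = [
    [0, 0, 9, 9, 9],
    [0, 0, 9, 9, 9],
    [0, 0, 9, 9, 9],
]

# Vertical-edge kernel: responds where right neighbors differ from left.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, ker):
    """Valid (no-padding) 2-D correlation of img with ker."""
    kh, kw = len(ker), len(ker[0])
    out = []
    for r in range(len(img) - kh + 1):
        row = []
        for c in range(len(img[0]) - kw + 1):
            row.append(sum(img[r + i][c + j] * ker[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

print(convolve(image, kernel))  # -> [[27, 27, 0]]
```

The large responses mark windows straddling the edge and the zero marks a uniform region; the brain, on the comment's account, gets that kind of distinction for free from its encoding rather than by computing it.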

    • @narenmanikandan4261 • 8 months ago

      @willd1mindmind639 I see what @doudouban is saying in relation to your earlier comment. If we did give the model the capability to sense things physically, this would greatly increase the learning speed, since a large share of AI use cases rely on physical interaction (such as the five senses). Then, I guess, the real challenge lies in data that isn't necessarily physical, such as code and text.

  • @hyphenpointhyphen • 1 year ago

    I like the parsimony approach. Not sure if I get this right, but couldn't a working type of memory then selectively grant access to lower-level *-topic maps in parallel, for feedback in so-called higher brain functions? The foundational model would deliver the mappings and basic functionality for higher brain functions to access and optimize (learn) target functions, whichever are useful in a social context, thus, in light of evolution, stabilizing genetics.
    A few more months and CAPTCHAs won't work anymore.
    If those evolutionary parameters are hard-coded, shouldn't there be genes, markable/knockable during development, that determine the connection strength?

  • @AlgoNudger • 1 year ago

    Thanks.

  • @jerryzhang3506 • 1 year ago

    👏👏👏

  • @richardnunziata3221 • 1 year ago

    Not unlike the evolution of eye saccades.

    • @hyphenpointhyphen • 1 year ago

      Care to explain? Do you mean as error correction of flow?