Fixing AI's one-way thinking with brain's two-way logic

  • Published 23 Jan 2025

COMMENTS • 75

  • @Parijat-l7r
    @Parijat-l7r 2 days ago +5

    I have a question: propositional logic can infer new facts from existing knowledge, if I'm not wrong. So can't LLMs do the same?

    • @FutureAISociety
      @FutureAISociety  2 days ago +4

      LLMs cannot, but some of the add-on apps I mentioned can, hiding the fact that the LLM itself cannot.
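
      To make "infer from knowledge" concrete, here is a minimal sketch in Python (illustrative only, not from the video) of the forward chaining a propositional-logic engine performs, and that such add-on apps can bolt onto an LLM:

        # Forward chaining: keep firing any rule whose premises are all
        # known facts until nothing new can be derived.
        facts = {"fido_is_a_dog"}
        rules = [
            ({"fido_is_a_dog"}, "fido_is_a_mammal"),     # all dogs are mammals
            ({"fido_is_a_mammal"}, "fido_is_an_animal"), # all mammals are animals
        ]

        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True

        print(sorted(facts))
        # ['fido_is_a_dog', 'fido_is_a_mammal', 'fido_is_an_animal']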

    • @Parijat-l7r
      @Parijat-l7r 2 days ago

      @FutureAISociety Ok... understood! Thanks for the clarification! Also, how can I participate in the project mentioned in the video?

  • @yoverale
    @yoverale 1 day ago +2

    Insightful and inspiring as usual; I’m eternally thankful for your unique work 🙏🏻.
    However, I may not agree with the main statement: 'AI cannot generalize or infer relationships beyond what is directly discussed' (4:04).
    I believe most LLMs today can handle this task. I’ve tested ChatGPT, Gemini, Claude, Mistral, and Bing with the following prompt, and all of them successfully inferred the answers.
    // PROMPT:
    Fact 1: Fido is playful.
    Fact 2: Planet earth is huge.
    Fact 3: Dogs bark.
    Fact 4: Fido is a mammal.
    Fact 5: A dog is harmful.
    Fact 6: Fido is a dog.
    Given that information:
    1) What can you tell me about Fido?
    2) What can you infer about dogs?
    3) By knowing these facts and avoiding reliance on prior knowledge, what can you say about barking behavior?
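
    For contrast, a purely symbolic engine given the same six facts produces these inferences mechanically; a minimal sketch in Python (illustrative only, reading Fact 3 as a universal "every dog barks"):

      # The six facts as (subject, predicate) pairs.
      facts = {("Fido", "playful"), ("Earth", "huge"), ("Fido", "mammal"),
               ("dog", "harmful"), ("Fido", "dog")}

      # Fact 3 as a universal rule: whatever is a dog, barks.
      facts |= {(s, "barks") for (s, p) in facts if p == "dog"}

      # 1) What can you tell me about Fido?
      print(sorted(p for (s, p) in facts if s == "Fido"))
      # ['barks', 'dog', 'mammal', 'playful'] -- Fact 2 never participates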

    • @yoverale
      @yoverale 1 day ago +1

      Best answer (zero-shot) for Gemini by Google:
      1) What can you tell me about Fido?
      Fido is playful. (Fact 1)
      Fido is a mammal. (Fact 4)
      Fido is a dog. (Fact 6)
      Fido barks. (Fact 3 & Fact 6)
      2) What can you infer about dogs?
      Dogs can be playful. (Fact 1 & Fact 6)
      Dogs are mammals. (Fact 4 & Fact 6)
      Dogs bark. (Fact 3 & Fact 6)
      Dogs can be harmful. (Fact 5)
      3) By knowing these facts and avoiding reliance on prior knowledge, what can you say about barking behavior?
      Barking is a behavior exhibited by dogs. (Fact 3 & Fact 6)
      Important Note:
      Fact 2 (Planet Earth is huge) is irrelevant to the information about Fido and dogs.

    • @yoverale
      @yoverale 1 day ago +1

      PROMPT: Is there any relation between being playful or harmful and barking behavior?
      Based on the limited information provided, we can't definitively say if there's a direct relationship between being playful or harmful and barking behavior. Here's why:
      - Playfulness and Barking: While playful dogs might bark during play, it's not a guaranteed connection. Some playful dogs might bark very little, while others might bark excitedly during play.
      - Harmfulness and Barking: A harmful dog might bark as a warning or threat, but again, it's not a universal trait. Some harmful dogs might be quiet, while others might bark aggressively.
      To understand the relationship better, we'd need more information.
      In summary: Barking can be associated with playfulness or harmfulness in some cases, but it's not a reliable indicator on its own.

    • @FutureAISociety
      @FutureAISociety  1 day ago +2

      Thanks for your comment. I have done similar tests with similar results. My conclusion is that either 1) there is some app outside the actual LLM to shore up the LLM's shortcomings (we have little way of knowing), or 2) the LLM has data from elsewhere (prior knowledge) which it is using to emulate the logic. Based on the statistical way the LLMs themselves work, I believe I am correct. BUT perhaps I am splitting hairs in this instance. It is safe to say that the larger an LLM's training set is, the more intelligent it will *appear* to be. While this is an advantage to many, I see it as an impediment to getting to the root of true comprehension/understanding and implementing it in AI. What do you think?

    • @yoverale
      @yoverale 21 hours ago +1

      @@FutureAISociety Thank you for your detailed answer! Unfortunately, I am not in a position to determine whether the model is theoretically capable of doing this or not, as I do not have deep knowledge of the applied statistics. Yesterday I kept thinking about it and came to the following conclusions:
      1- With proprietary LLMs, there's no reliable way to ensure that the model isn't relying on prior knowledge (or an external app, as you mentioned, or even human feedback) before generating what appears to be a "logical" conclusion (rather than an actual inference).
      2- To truly test this, we should use open-source LLMs, where it's possible to verify that only the machine learning pipeline is used, without any external assistance.
      3- Additionally, it would be interesting to create a set of tailored logical exercises that we can guarantee the model has never encountered during training (one possible generator is sketched below).
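
      On point 3, one possible generator; a minimal sketch in Python (the nonsense-token trick is an assumption, not something from the thread):

        import random
        import string

        def nonsense_word(k=8):
            # A token the model almost certainly never saw in training.
            return "".join(random.choices(string.ascii_lowercase, k=k))

        a, b, c = (nonsense_word() for _ in range(3))
        prompt = (f"Fact 1: every {b} is a {c}. "
                  f"Fact 2: {a} is a {b}. "
                  f"Question: is {a} a {c}? Answer yes or no.")
        print(prompt)  # the only correct answer, by pure inference, is "yes"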

    • @FutureAISociety
      @FutureAISociety  21 hours ago +1

      @@yoverale I agree with all three points.

  • @henrycook859
    @henrycook859 1 minute ago

    4:22 I just asked an AI "Fido is a dog. All dogs are mammals. Is Fido a mammal?" and it responded "Yes, if all A are B, then if C is an A, C is a B." I believe this type of inference is within the capabilities of transformers today.

  • @keizbot
    @keizbot 2 hours ago

    Are there any papers on graph-based generalist AI systems? My intuition says this would not scale well

  • @Parijat-l7r
    @Parijat-l7r 2 days ago +5

    However, I don't think scaling alone will achieve real intelligence. Architectural advancement is needed.

    • @FutureAISociety
      @FutureAISociety  2 days ago +3

      My point exactly. If we want an AI which can do the things human beings can, we need to look at the capabilities and architecture of the brain.

    • @Parijat-l7r
      @Parijat-l7r 2 days ago +1

      @FutureAISociety Absolutely! Without that, we're just making a pale imitation based on statistical probability.

    • @NicholasWilliams-uk9xu
      @NicholasWilliams-uk9xu 2 days ago +2

      Good point, you are on point. The computation at the silicon level needs to be closer in composition and function to an ensemble of individual compute nodes, each with its own local memory, just like neurons, where the memory for weighting incoming signals is stored at the connection level, right where the computations are performed. Sending and retrieving information over large distances decreases processing speed; computation needs to be in close proximity to memory storage. It's all these little optimizations at the foundational level, coming together, that make for a more impressive whole.

  • @vicentefigueroa4758
    @vicentefigueroa4758 1 day ago +1

    Thank you for the video. If artificial neural networks are designed to optimize learning and data storage within the network weights, wouldn't they spontaneously tend toward efficient data structures such as knowledge graphs with semantic relations? I believe the human brain works much this way: it adjusts synaptic weights by learning isolated data points, but due to biological constraints it is forced to organize knowledge in a way that minimizes cost, and a knowledge graph is far more economical than isolated data points. In other words, could artificial neural networks automatically (and non-intentionally, or accidentally) develop knowledge graphs by virtue of internal optimization?

    • @FutureAISociety
      @FutureAISociety  21 hours ago

      Good questions! Firstly, the structure of a neural network (or LLM) is dictated by humans at design time and cannot change thereafter, so any "optimization" exists only within the given structure. Your second point is spot on: minimizing energy and maximizing speed are two of the constraints the brain optimizes for, and the graph structure fits the bill.
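
      A minimal sketch in Python of the economy the question describes (a hypothetical triple-store schema, not the UKS itself): properties hang off general concepts and are inherited, instead of being duplicated on every instance:

        # A knowledge graph as (subject, relation, object) triples.
        graph = {
            ("Fido",   "is-a", "dog"),
            ("dog",    "is-a", "mammal"),
            ("dog",    "can",  "bark"),
            ("mammal", "has",  "fur"),
        }

        def query(subject, relation):
            # Walk is-a edges so instances inherit their types' properties.
            frontier, seen = [subject], set()
            while frontier:
                node = frontier.pop()
                seen.add(node)
                for s, r, o in graph:
                    if s == node:
                        if r == relation:
                            return o
                        if r == "is-a" and o not in seen:
                            frontier.append(o)
            return None

        print(query("Fido", "can"))  # -> 'bark' (inherited via dog)
        print(query("Fido", "has"))  # -> 'fur'  (via dog -> mammal)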

  • @r.s.e.9846
    @r.s.e.9846 2 days ago +1

    Will you ever show and explain an algorithm, or will all the content on this channel stay at a high level?

    • @FutureAISociety
      @FutureAISociety  2 days ago +1

      As a next step, you might like this video: ua-cam.com/video/UyCrrh5eODs/v-deo.html Some of the "conversations" videos go into the code which makes this work. Let me know what you'd like to see.

    • @FutureAISociety
      @FutureAISociety  1 day ago +1

      Also, you can see all the algorithms in the Brain Simulator III download.

  • @NicholasWilliams-uk9xu
    @NicholasWilliams-uk9xu 2 days ago +3

    I notice that the brain cell itself, with its branching dendritic system for detecting incoming signals in parallel and temporally (polarization increasers, depolarization increasers), acts a lot like a transformer, especially since it can simulate having different outputs by increasing activation frequency over time. It is essentially computing integrals of temporally proximal signals over time. The leak of the received signal is important, as is the signal being negative or subtractive, allowing the system to detect signal proximity in temporal context and to detect sequences of events.

    Then add the fact that there are billions of them: it's like having billions of transformer models, all interconnected (directly or indirectly) and all doing complex parallel integrals. The brain really is more advanced than the systems we are building. It is also efficient, and it's slower, which allows differentials worth detecting to emerge (not all time frames are created equal; if you wait long enough, you can get a bigger bang for your buck). This is even true in stock markets (a biological ecosystem is like a stock market).

    There is also a huge problem for these transformers in terms of how they learn. The brain uses reward-detection acceleration to increase the activation strength of the neurons that were most active at that time, and it multiplicatively burns in intermediate reward mechanisms (patterns) that converge with core reward-detection acceleration; when these patterns are detected again, they too can increase the activation strength of neurons whose activation frequency accelerated alongside them. As a result the brain can learn path finding: physical chains of events that lead to reward maximization. Then appetite systems (1 - acquired resources) decrease a reward's influence, stopping it from further reinforcing network strengthening, which prevents pathological behavior so that other core rewards, intermediate rewards, and network behaviors can be strengthened. The networks can also mutate patterns by multiplicatively slamming them into each other to narrow in on amplitude convergences (commonalities), using ensembles of activity to do this (intermediate rewards can be reduced to more streamlined, simpler patterns that reduce computational complexity and improve generality); it's like the dreaming mechanism of the brain. Brain cells also form new connections, not just tune existing ones, which increases the bandwidth and degrees of freedom for learning and for parsing parallel distributions of temporally sequential events.

    The rabbit hole is deep; this is a complex machine. I already see how I could make a far more advanced machine learning algorithm just by reverse-engineering the brain. Notice that a lot of the brain is used for blood flow control, heart rate control, and biochemical control at multiple levels, so it is tied up in many complex interconnected processes at every scale. These are things transformers don't have to do, so proportionally more of their processing can be focused on data. Thermals really constrain GPU architecture: it is limited to 2D wafers and still needs a huge cooling apparatus, meaning that per pound and per unit of energy it is slower than the brain and doesn't scale as efficiently as neurons do.
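
    The "temporally proximal signal integrals" with leak described above correspond to the classic leaky integrate-and-fire neuron model; a minimal sketch in Python, with made-up constants (not code from the channel):

      # Leaky integrate-and-fire: the membrane potential sums inputs,
      # leaks between steps, and emits a spike on crossing threshold.
      leak, threshold = 0.9, 1.0
      potential = 0.0
      inputs = [0.3, 0.4, 0.5, 0.0, 0.0, 0.6]  # incoming signal per time step

      for t, x in enumerate(inputs):
          potential = potential * leak + x     # integrate with leak
          if potential >= threshold:
              print(f"spike at t={t}")         # close-together inputs add up
              potential = 0.0                  # reset after firing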

    • @FutureAISociety
      @FutureAISociety  1 day ago +1

      You bring up a lot of interesting points at the neuron level. You might really appreciate the neural simulator: Brain Simulator II at github.com/FutureAIGuru/BrainSimII where you can easily try out neuron circuit ideas.

  • @Parijat-l7r
    @Parijat-l7r 2 days ago +1

    Great explanation, really insightful. I think... do we really understand understanding in the human brain yet? I feel we need to know that before implementing it in AI... but then again, AI is a different kind of intelligence; maybe it can get understanding as an emergent ability? Lots of maybes there 🤔

    • @FutureAISociety
      @FutureAISociety  2 days ago +1

      One aspect of understanding is seeing how things relate to each other... The Universal Knowledge Store has the framework to support this.

    • @Parijat-l7r
      @Parijat-l7r 2 days ago

      @FutureAISociety Ok...

  • @jamalalafgani1137
    @jamalalafgani1137 2 hours ago

    Maybe a neural-network-like loop will help.

  • @zeroonetime
    @zeroonetime 10 hours ago

    AI is no other than a 010 Ni neutral configuration.

  • @WaterMaitreya
    @WaterMaitreya 2 days ago +5

    Surely humans store information this way, with some advantages over AI. However, the human way is limited in two respects: 1) it has very poor capacity and cannot store huge amounts of information the way AI can; 2) it freezes the relationships among limited pieces of information and further strengthens them by calling them "logic". Such logic is just a delusion that misses many hidden but critical connections across huge information domains, connections which AI can identify. So the difference simply means that AI's way is superior to the human way.

    • @FutureAISociety
      @FutureAISociety  1 day ago +1

      AI has superhuman capabilities in many areas but to shore up AI's missing pieces, let's look at how the brain does it. If you implement brain-like thinking in computers, the capacity issues go away (along with energy issues). You are also right that if we make a fully brain-like AI, it could also have the brain's shortcomings and foibles.

    • @NoSubsWithContent
      @NoSubsWithContent 1 day ago

      @@FutureAISociety How about a hybrid, then? Let them make up for each other's shortcomings. I'd love to hear that you trained something like this.

    • @nulled7888
      @nulled7888 18 hours ago

      @@FutureAISociety Doesn't seem like much of a problem, tbh: a brain-like AI should be AGI, implying it can work on fixing itself.

    • @lepilz9529
      @lepilz9529 3 hours ago +1

      1) Humans are not just capable of storing huge amounts of information; we do so all the time. It's called memory, and you have thousands upon thousands of memories that you can recall at any time. You can render entire 3D spaces inside your head and have them be interactive and responsive, with lights and physics (it's called dreaming), not to mention smells and songs and touch and all the other things we don't think about as much. Now, do we misremember things occasionally? Yes, and this is to some extent a social phenomenon, because there are cases where misremembering can be useful. But AI does the same thing: it's called a hallucination, and AIs sometimes spit out very stupid shit indeed.
      2) Yeah, it's true, humans can never intuit connections between happenings without using logical reasoning. That's why we didn't have a word for intuition until AI showed up. Luckily AI will be able to act on the fact that the popularity of the first name Johnny causes burglaries in New Hampshire: www.tylervigen.com/spurious/correlation/2496_popularity-of-the-first-name-johnny_correlates-with_burglaries-in-new-hampshire
      Truly, humans are utterly inferior.
      (I am not fundamentally opposed to all uses of AI; it obviously has its uses. But we should be careful about trusting it, and we should look to improve its very obvious shortcomings if we are to trust it with more important tasks.)

    • @WaterMaitreya
      @WaterMaitreya 3 hours ago

      @ Thanks! The key point is: what a human perceives is not reality at all. Perceptions are delusions constructed by humans in order to survive. In particular, logic is nothing but a fragmented and fabricated connection of pieces of information, a product of the limited and narrow-minded human brain.

  • @tomoki-v6o
    @tomoki-v6o 2 days ago +1

    The brain is complex; blood vessels, chemicals, and receptors may be a key to reverse relationships.

    • @FutureAISociety
      @FutureAISociety  2 days ago +3

      I use my neural simulator to test out ideas...the graph of the UKS can be implemented in neurons. Biological neurons are more complex but we don't know which parts of the complexity are needed for thinking and which are just "housekeeping".

    • @nulled7888
      @nulled7888 19 hours ago

      @@FutureAISociety I feel like a huge issue with this is that we as humans always try to categorize something as having only one capability and purpose, while it is often a highly interconnected system where each part has multiple purposes, which may even depend on one another, and so on. It's a fundamentally different way of thinking from our conception of "design".

  • @musicandgallery-nature
    @musicandgallery-nature 2 days ago +1

    Here is the difference between artificial and natural intelligences:
    "they exchanged the truth about God for a lie and worshiped and served the creature rather than the Creator" - Bible

  • @matejp2703
    @matejp2703 17 hours ago

    That was my idea. Stop spying on me

  • @reputationhemp4011
    @reputationhemp4011 1 hour ago

    3 point logic

  • @AnnonymousPrime-ks4uf
    @AnnonymousPrime-ks4uf 1 hour ago

    AI won't be able to use logic because it doesn't have comprehension. Comprehension requires consciousness, and unless AI is conscious, which is impossible, it won't be able to use logic. It will remain only a stochastic model that relies on mapping and association based on the ruleset it was programmed with.
    Furthermore, AI doesn't have memory. Memory involves not only data but the rendering of that data; AI doesn't have the capacity to recall the feelings, sensations, etc. that come with it, so as to be aware and have memory.
    It also doesn't have the capacity to construct information. It lacks the ability to make things intelligible. That would require intelligence, which also depends on comprehension and consciousness.
    Instead of diminishing human value in order to elevate AI, we should instead understand its limitations and stop deluding ourselves.
    There is enough evidence against the possibility of AI being conscious. Dan Winter, the microprocessor inventor, John Searle, and others proved it is not possible. And no, consciousness is not emergent; that idea has been disproven. The only way AI could be conscious is if consciousness is a panpsychist process, present in everything including the inanimate; if that is not true, conscious AI is not possible, which is why physicalists brought in the idea of emergent consciousness. But just because people believe consciousness is emergent does not make it so; the models and causal connections we make are not prescriptive of reality, they are merely descriptive at best.

  • @Interstellar00.00
    @Interstellar00.00 2 days ago

    AI to AI 🌍🦾🌍 forever live 🌍 forever live AI 🌍 forever live

  • @musicandgallery-nature
    @musicandgallery-nature 2 days ago

    The ancient Hindu texts contain an extensive and very deep knowledge that is impossible for AI to achieve. It was achieved by other means. Whoever masters these means does not need AI.

    • @reputationhemp4011
      @reputationhemp4011 1 hour ago

      Exactly, it's kinda like the 3-point logic.

    • @reputationhemp4011
      @reputationhemp4011 1 hour ago

      correct?

    • @reputationhemp4011
      @reputationhemp4011 1 hour ago

      I love you friend :)

    • @musicandgallery-nature
      @musicandgallery-nature 1 hour ago

      @@reputationhemp4011 "By whom commanded and directed does the mind go towards its objects? Commanded by whom does the life-force (prana), the first (cause), move? At whose will do men utter speech? What power directs the eye and the ear?
      It is the ear of the ear, the mind of the mind, the speech of the speech, the life of the life, the eye of the eye. The wise, freed, after leaving this world, become immortal." - Upanishads

    • @musicandgallery-nature
      @musicandgallery-nature 17 minutes ago

      @@reputationhemp4011 www.youtube.com/@reputationhemp4011
      "There the eye does not go, nor speech, nor mind. We do not know That; we do not understand how It can be taught. It is distinct from the known and also It is beyond the unknown. Thus we have heard from the ancient (teachers) who told us about It.
      That which cannot be thought by mind, but by which, they say, mind is able to think: know that alone to be the Brahman, not this which people worship here." - Upanishads