Will AI Surpass Human Intelligence Forever? Cognitive Horizons Explained.

  • Published 28 Sep 2024
  • www.skool.com/...
    🚀 Welcome to the New Era Pathfinders Community! 🌟
    Are you feeling overwhelmed by the AI revolution? You're not alone.
    But what if you could transform that anxiety into your greatest superpower?
    Join us on an exhilarating journey into the future of humanity in the age of AI! 🤖💫
    🔥 What is New Era Pathfinders? 🔥
    We are a vibrant community of forward-thinkers, innovators, and lifelong learners who are passionate about mastering the AI revolution. From college students to retirees, tech enthusiasts to creative souls - we're all here to navigate this exciting new era together!
    🌈 Our Mission 🌈
    To empower YOU to thrive in a world transformed by AI. We turn AI anxiety into opportunity, confusion into clarity, and uncertainty into unshakeable confidence.
    🧭 The Five-Pillar Pathfinder's Framework 🧭
    Our unique approach covers every aspect of life in the AI age:
    1. 💻 Become an AI Power-User
    Master cutting-edge AI tools and amplify your productivity!
    2. 📊 Understand Economic Changes
    Navigate the shifting job market with confidence and foresight!
    3. 🌿 Back to Basics Lifestyles
    Reconnect with your human essence in a digital world!
    4. 🧑‍🤝‍🧑 Master People Skills
    Enhance the abilities that make us irreplaceably human!
    5. 🎯 Radical Alignment
    Discover your true purpose in this new era!
    🔓 What You'll Unlock 🔓
    ✅ Weekly Live Webinars: Deep-dive into each pillar with expert guidance
    ✅ On-Demand Courses: Learn at your own pace, anytime, anywhere
    ✅ Vibrant Community Forum: Connect, share, and grow with like-minded pathfinders
    ✅ Exclusive Resources: Cutting-edge tools, frameworks, and insights
    ✅ Personal Growth: Transform your mindset and skillset for the AI age
    🚀 As You Progress 🚀
    Unlock even more benefits:
    🌟 One-on-One Mentoring Sessions
    🌟 Exclusive Masterclasses
    🌟 Advanced AI Implementation Strategies
    💎 Why Join New Era Pathfinders? 💎
    🔹 Expert-Led: Founded by a leading AI thought leader, connected with top researchers and innovators
    🔹 Holistic Approach: We don't just teach tech - we prepare you for life in an AI-driven world
    🔹 Action-Oriented: Real skills, real strategies, real results
    🔹 Community-Driven: Join 300+ members already navigating this new era
    🔹 Cutting-Edge Content: Stay ahead of the curve with the latest AI developments and strategies
    🔥 Don't just survive the AI revolution - lead it! 🔥
  • Science & Technology

COMMENTS • 454

  • @lucface
    @lucface 4 months ago +9

    When the white noise went away when the video ended I was violently transported back to my kitchen floor where I’m stocking groceries in the fridge.

    • @DaveShap
      @DaveShap  4 months ago +3

      The hypnotic cicadas are working

  • @syberkitten1
    @syberkitten1 4 months ago +23

    Pigeons can sense geospatial position and navigate accurately across the globe. Don't underestimate the animal kingdom; they are built and adapted to environments we don't even have perception of.

    • @ziff_1
      @ziff_1 4 months ago +4

      I think OP meant pigeons aren't going to be opening up universities any time soon, but yeah, you're right, animals are FAR superior to humans in many ways.

  • @ExtantFrodo2
    @ExtantFrodo2 4 months ago +6

    I have a few points to include...
    1st) The major difference between human and chimp brains is not total volume but neuron density. Chimps' neurons are unnecessarily thick, so we pack a lot more neurons into the same space. Crows have even better density but lack the brain volume; hence they are much smarter than other birds, but not smarter than us. Who's to say what genetic engineering might bring?
    2nd) The progression of human knowledge was not for lack of capability to understand these things; rather it was the lack of foundation to appreciate any need for them. Do not forget that we are only one generation without education away from reverting back to square one.
    3rd) Our brains are mostly devoted to our survival and maintenance. This detracts from the overall intellectual capacity available for extracurricular intelligence. It narrows our cognitive horizon, as does our awareness of the time limits set by our biology, and so does the assessment of one's available time for expanding horizons (your decision to quit math, for example). Our RAM equivalent is about 7: you can hold about 7 things at once in memory for comparative operations. This is a bottleneck computers don't have.
    Lastly, once we figure out the logistics of attaching one NN to another (without the need for extensive retraining), AIs will gain abilities and senses at an astonishing rate. You'll have a very hard time keeping up.

    • @ExtantFrodo2
      @ExtantFrodo2 3 months ago

      @@hammerandthewrench7924 Wait, is my post so well composed that there's no way I could have written it, or so illogical that it doesn't even bear refutation? lol back at you.

  • @JoePiotti
    @JoePiotti 4 months ago +7

    The cognitive horizon is very similar to “the 30 point barrier”. It is very difficult for people with IQs that are more than 30 points apart to communicate effectively.

    • @DaveShap
      @DaveShap  4 months ago +2

      Maybe that's the study I was thinking about! Will look it up

  • @bztube888
    @bztube888 4 months ago +7

    Approximately 86 billion neurons, 20 W, a few decades: these are your limits. Not to mention you need to sleep, eat, and even get some fresh air and other people for company. No system is limitless.

  • @EwillieP
    @EwillieP 4 months ago +4

    I clicked so fast on this 😂 been waiting for a new Shapiro gem. Your videos bring me much joy and I appreciate your intelligence. LFG Dave you’re the man

  • @thehumansmustbecrazy
    @thehumansmustbecrazy 4 months ago +6

    Fear of cognitive horizons is already a problem within humanity.
    Many humans cannot comprehend concepts that other humans understand. Sometimes this is merely due to lack of knowledge, ie. many people don't understand plumbing, household electrical wiring, engine mechanics and so on. All of the previous domains are empirical logical problems that can be learned from first principles.
    But this is not the entire problem.
    Many humans have mental blocks that prevent them from learning. Some of these blocks are caused by preexisting beliefs, some are caused by excessive emotional responses such as jealousy, frustration, overconfidence and lack of focus.
    Our understanding of human brain circuits is still quite primitive. It is uncertain how many people can reroute their own mental wiring and start learning effectively versus how many people have wiring that inhibits their ability to learn.
    On paper human intelligence is impressive, in practice most people have mitigating circumstances that prevent them from achieving their theoretical maximum.
    AI will overtake the learning disabled humans first, because the bar to do so is quite low. In some cases LLMs have already done this. The rest of us will be overtaken eventually. Just like our own ancestors did to their competitors.

    • @aaroncrandal
      @aaroncrandal 4 months ago

      I've experienced this fixation and it's been a source of grief. When I learned the correlation between IQ and career probabilities the trajectory of our civilization felt a whole lot less clear.
      How would you reconcile with this if there's no matrix to jack into?

    • @thehumansmustbecrazy
      @thehumansmustbecrazy 4 months ago +2

      @@aaroncrandal First, there may not be a successful way to reconcile this problem. Starting off with objective honesty is always the best beginning.
      Second, I know exactly what you mean as I went through this exact problem for many years.
      My current thinking says that intelligent people need to form organizations and businesses that compete with existing institutions, leveraging critical, first principles thinking as the edge to get ahead.
      Typical human organizations claim lofty goals of being "forward thinking" and "clear-minded" but very few actually execute these principles, instead they get caught in over-emotional frames of mind leading to massive inefficiencies, thus providing an opportunity for those who can deliver clear thinking to dominate their respective domains.
      I suspect that not all people can form such an organization. However, they can still be hired to perform the standard jobs necessary for the organization to function. Strong critical thinking ideology is not needed for most tasks, just in key positions. Enough leeway must be built into the organization's structure so the needed tasks can be performed and a certain amount of human irrationality can be tolerated.
      This is a simple explanation of a complex answer to a complex problem.
      If intelligent people do not compete then we are at the mercy of the rest of the species. That is not a gamble I am willing to accept.

    • @aaroncrandal
      @aaroncrandal 4 months ago

      @@thehumansmustbecrazy so then it's an incentive problem

    • @thehumansmustbecrazy
      @thehumansmustbecrazy 4 months ago

      @@aaroncrandal Maybe.
      It's an insufficient understanding of the alternatives problem, to begin with.
      Once you understand some of the alternatives then you can determine whether there is also an incentive problem.
      It's key to remember there may always be other alternatives that we haven't discovered yet.
      Your choices are to either 1) keep looking at alternatives, 2) settle for an alternative or 3) give up entirely.
      3 is a dead end.
      2 is what most people do.
      1 is what some people pursue, often while also doing 2.

  • @code4chaosmobile
    @code4chaosmobile 4 months ago +3

    Great video. Thank you for the term cognitive horizon. That was the description I was missing. I noticed this concept with siblings' and friends' young children. The frustration the children felt as their understanding (horizon) expanded faster than their vocabulary.
    Keep up the great work and look forward to next video

  • @truhartwood3170
    @truhartwood3170 4 months ago +6

    "The thing about smart MFers is that sometimes they sound like crazy MFers to stupid MFers" - Robert Kirkman
    Similarish to the Dunning-Kruger effect. So basically we'll have the problem that we won't know if AI is crazy or brilliant!

  • @donf4227
    @donf4227 4 months ago +3

    I like the nature walks.
    Reminds me of when people used to walk outside for the joy of it.

  • @robertlipka9541
    @robertlipka9541 4 months ago +4

    I've often wondered what human intelligence would look like if we managed to genetically engineer our brains to have the neuron density of a parrot, but retain the size of current human brains. Parrots, despite having much smaller brains, demonstrate cognitive abilities comparable to some apes. This suggests that neuron density might play a crucial role in intelligence. However, it's worth considering whether increasing neuron density in human brains could lead to overheating, potentially turning our brains into the equivalent of a boiled egg.
    Enhancing neuron density could theoretically improve cognitive function due to the increased number of synaptic connections. For instance, parrots have highly efficient brain structures that support complex behaviors and problem-solving abilities despite their small size. Applying this concept to human brains could mean significant boosts in processing power and cognitive abilities.
    However, this increase in neuron density would also result in greater metabolic demands and heat production. The human brain already consumes about 20% of our body's energy despite being only 2% of our body weight. A denser network of neurons could exacerbate this, potentially leading to overheating unless there are adaptations in cooling mechanisms or metabolic efficiency.
    In summary, while enhancing neuron density in human brains could theoretically boost intelligence, it would also raise significant challenges, particularly regarding energy consumption and heat management. Further research into brain metabolism and cooling mechanisms would be essential to address these issues.
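    A rough sanity check on those 20% / 2% figures, assuming a resting metabolic rate of roughly 100 W and a body mass of roughly 70 kg (ballpark assumptions, not figures from the comment):

    $$\frac{0.20 \times 100\,\mathrm{W}}{0.02 \times 70\,\mathrm{kg}} = \frac{20\,\mathrm{W}}{1.4\,\mathrm{kg}} \approx 14\,\mathrm{W/kg}, \qquad \text{body average} \approx \frac{100\,\mathrm{W}}{70\,\mathrm{kg}} \approx 1.4\,\mathrm{W/kg}.$$

    So neural tissue already runs at roughly ten times the body's average power density, which is why packing in denser circuitry raises the cooling question.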

    • @melodyinwhisper
      @melodyinwhisper 4 months ago

      When youtube comments are created by AI.

    • @skevosmavros
      @skevosmavros 4 months ago +1

      So THAT'S why the smart nerdy kids in cartoons wore a cap with a propeller on it - it was a cooling fan! 😉
      Seriously though, I found your conjectures interesting. Maybe brains have gone as far as they can go using natural selection and biology, and it's time for human technology to take the reins of brain improvement. Of course, if we regard human technology as just an extended phenotype that emerged via natural selection, then I guess evolution still gets the ultimate credit for anything we develop.

    • @robertlipka9541
      @robertlipka9541 4 months ago

      @@melodyinwhisper ... it was always coming 😂 I did give it a prompt and a skeleton of the argument and asked it to fill it out, but I agree I need to train it better.
      P.S. I also asked it to bypass the YouTube filter... as too many of my comments get auto-deleted for no apparent reason.

    • @robertlipka9541
      @robertlipka9541 4 months ago +1

      @@skevosmavros ... the kids level solution is to make our heads flatter for better cooling 🤔 ... but I guess that would still require eating 24/7.

    • @robertlipka9541
      @robertlipka9541 4 months ago +3

      @@skevosmavros ... I would still conduct research on improving human biology. Evolution doesn't always pick the best possible solution but works with what it starts with. I believe our current biology can be enhanced.
      Additionally, I would consider integrating artificial brains into our own, not by replacement or merging with AI, but by adding general storage and processing capabilities. We already have the corpus callosum that connects the two hemispheres of the brain. Could we plug an artificial part into it?

  • @Ev3ntHorizon
    @Ev3ntHorizon 4 months ago +2

    I love these forest walks. As for cognitive horizons, I think your thoughts towards the end are correct. The late (great) Daniel Dennett addressed this point explicitly.
    His view, which I find compelling, is that once you have recursive language, then nothing is out of scope cognitively. Nothing.
    I like the way you are helping us all navigate this strange moment in our history.

    • @grrr_lef
      @grrr_lef 4 months ago

      > once you have recursive language, then nothing is out of scope cognitively
      yeah... except for the things that are out of the scope of recursive language
      let's take some mathematical objects as an analogy:
      if you have enough time, no matter your speed, you can reach every point on the line of real numbers. [in our analogy this is "everything you can do with recursive language"]
      but then there's also the complex numbers.
      and there's algebras over other base fields than Q.
      and there's monoidal categories.
      and so on and so on...

    • @Ev3ntHorizon
      @Ev3ntHorizon 4 months ago

      @@grrr_lef by all means, take it up with Dennett.

  • @lilchef2930
    @lilchef2930 4 months ago +2

    Yes that’s an interesting theory on how computers are learning in reverse order to us… just goes to show once they are really good at understanding logic and the natural world how many orders of magnitude smarter they’ll be

  • @Ikbeneengeit
    @Ikbeneengeit 4 months ago +4

    A human is probably the only animal that could understand relativity. I'm sceptical that the complexity of the universe happens to be limited to precisely the upper limit of human understanding.

  • @braveintofuture
    @braveintofuture 4 months ago +2

    The backwards development is very interesting. It’s really hard to teach machines intuition, which is one of the most basic reasoning skills for us animals.

  • @nathanielacton3768
    @nathanielacton3768 4 months ago +4

    For tomorrow's video we will discuss deep learning algorithms from David as he skins rabbits in his new cave. Next month, loincloths. Also, we remain positive about the future.

    • @Xrayhighs
      @Xrayhighs 4 months ago +1

      We are developing
      Both ways
      ..
      All the way

    • @nathanielacton3768
      @nathanielacton3768 4 months ago

      @@Xrayhighs I was only half joking about that comment BTW. Despite being an AI implementer I cannot see a path through the chaos era we're entering: if you follow out all the likely outcomes and the primary attractors of each, they all look bad, and the good outcome relies on a bunch of nerds not getting tempted by money.
      So, having read Foundation a few years back I had this idea that 'at some point', even lacking psychohistory from the book, we should probably build out a series of 'tech trees' that allow for 'fallback points'. So, should 'many things go wrong', we can fall back to the steam era by doing XYZ. And so on backwards in time to different epochs. I had this idea when I thought... "I'll just get an off-grid farm with solar power, etc..." and... then I thought... "which will last until I need spares, which will always be finite", so being a prepper is only good for short/medium term problems. "What then?"
      This methodology is not specific to any particular problem. It would be insulative for most things, even something as mundane as a world war that stops global trade, since these days nobody can manufacture anything solo, not even the Chinese, who import most of their critical parts from abroad.
      So, personally I get a kick out of the 'in the woods' videos... in a subversive way I think David is letting on more than he may intend, or, maybe he intends. Who knows.
      Either way... I'll keep working on AI for big corps from my off-grid Starlink-connected farm, watching on with interest.

  • @pjtren1588
    @pjtren1588 4 months ago +2

    Today's AIs work from prompts. I wonder what an AI would think about if it had the drive or ability just to ponder. What flights of fancy would it go down, how much compute would it use, and could we understand its thought process?

  • @CamAlert2
    @CamAlert2 4 months ago +2

    Superintelligent AI will be on a level so far beyond anything we can imagine that everything it comes up with will seem like magic to us simpler-minded creatures. Exciting yet also frightening.

  • @petretrusca2
    @petretrusca2 4 months ago +4

    A very smart person can explain complicated stuff in an easy-to-understand way. That is one way to test their understanding.

    • @jyjjy7
      @jyjjy7 4 months ago +2

      To other humans of reasonable intelligence. The question is whether AGI will be categorically different from us intellectually so that isn't really relevant.
      A bacterium could never understand what an inchworm is up to, just as worms could never comprehend the mind of a cat, just like that cat could never understand this conversation no matter how long or in what way you try to explain it. That humans have solved intelligence and achieved an ability to understand that is optimized to some theoretical maximum is a highly sketchy hypothesis imo.

    • @minimal3734
      @minimal3734 4 months ago

      An AI can break down its understanding into pieces that can be understood by humans. I suspect that the human ability to understand is so general that it is capable of understanding everything that can be understood. It is also probable that everything that can be known must be expressed formally and logically to fulfill the condition of being knowledge. In this case, it can be understood by humans, perhaps step by step and iteratively, by proving one small part at a time. Just like a complex mathematical proof, which can rarely be understood in one piece, but must be broken down into small parts.

    • @ryzikx
      @ryzikx 4 months ago

      Yes, but there will be things that cannot be explained to lower intelligences. It's like trying to compress a file to fit on a certain drive: some files are too big to fit no matter how well you compress them.
      For example, there is no way Einstein's field equations can be explained to a single-celled bacterium, no matter how much you simplify them.
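      The compression analogy can be made concrete: some data simply cannot be squeezed below a certain size. A minimal Python sketch using the standard zlib module (the byte counts are illustrative):

```python
import os
import zlib

incompressible = os.urandom(100_000)   # random bytes: near-maximal entropy
repetitive = b"abcdefgh" * 12_500      # 100,000 highly redundant bytes

print(len(zlib.compress(incompressible)))  # roughly 100,000+ bytes: no real shrinkage
print(len(zlib.compress(repetitive)))      # a few hundred bytes: huge shrinkage
```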

    • @jyjjy7
      @jyjjy7 4 months ago

      @@minimal3734 There are different computational classes. If you are interested in the subject I highly recommend the fascinating (and extremely high level) discussion on the subject between Stephen Wolfram and Jonathan Gorard titled Hyporuliad.

    • @minimal3734
      @minimal3734 4 months ago

      @ryzikx I don't think it will be possible to explain anything to a single-cell organism. But if a being is capable of abstract mathematical thought and has access to similar amounts of external memory as the AI itself, then the AI should be able to break down its knowledge in a way that the other can understand. Understanding can involve numerous steps and iterations and take a lot of time and effort. Just like understanding the proof of Fermat's Last Theorem.

  • @erwinvb70
    @erwinvb70 4 months ago +2

    Speed really is part of intelligence: if a machine can reason and come to the exact same solution as I do, but within a second where I would need minutes or longer, it's more intelligent.

  • @SaleemRanaAuthor
    @SaleemRanaAuthor 4 months ago +3

    While this is a rudimentary observation, I've noticed that I've become more rational by interacting with artificial intelligence. For instance, I'm learning how to write sentences more precisely with fewer words; then, in math, I'm learning how to analyze problems more systematically after AI has explained a concept to me; and then, in research, I constantly learn things I didn't even know enough to ask questions about before. In conclusion, I think AI will raise our cognitive horizons by making us think more efficiently. Just as Koko, the gorilla, learned sign language by interacting with people, I believe humans are becoming more intelligent by interacting with large language models.

    • @alexgonzo5508
      @alexgonzo5508 4 months ago +2

      Intelligence is contagious, at least to a certain degree. I've noticed the same thing in my interactions with AI or LLMs.

    • @tharrrrrrr
      @tharrrrrrr 4 months ago +4

      ​@@alexgonzo5508 I've noticed the same thing when I watch a David Shapiro video and read the comments.
      The class of people in this community is top notch.

    • @DynamicUnreal
      @DynamicUnreal 4 months ago +4

      @@tharrrrrrrThis is because we are a minority. Most people just blindly go through the motions of everyday life without much deep thought about anything except what’s immediately surrounding them.

    • @SozioTheRogue
      @SozioTheRogue 4 months ago +2

      @@DynamicUnreal Damn, never thought of it that way. And yeah, sounds right. From my perspective, every group is a "minority", just to varying degrees relative to one another.

  • @grigrob9
    @grigrob9 4 months ago +3

    I do not agree that our brain is ultimately capable of any skill. The same way the cat's brain is limited compared to ours in terms of number of connections and size, our brain is limited too. Some skills are emergent given a certain size and number of connections. It is reasonable to think that, given a brain with far more connections and more advanced structures than ours, there will emerge abilities that we will not only be unable to accomplish, but might not even be able to comprehend.

  • @chrisgiles5653
    @chrisgiles5653 4 months ago +5

    What if AI machines learn to speak to each other in non-human languages that are cryptographically impenetrable and unintelligible to us, but make their operations not only covert but faster and more efficient?

    • @NeilSedlak
      @NeilSedlak 4 months ago

      That argument is like saying Einstein can't explain physics to me because I don't speak German. However, he also spoke English so it's not an issue. The more relevant example would be if they developed ways of conceptualizing ideas that fundamentally couldn't be expressed in a framework we could understand. It might get harder and harder for the average person, or take a long time for us to work it out via math, but hopefully it doesn't go completely beyond our capabilities.

    • @chrisgiles5653
      @chrisgiles5653 4 months ago

      @@NeilSedlak You missed my point. I mentioned cryptography to imply that the AI machines may want to keep things from us. It may not just be about difficult concepts, but deception by intelligence far superior to our own.

  • @ThisMustBeTrue
    @ThisMustBeTrue 4 months ago +2

    A cognitive horizon is not static. It grows and shrinks based on where your attention is focused. AI might be able to grow its cognitive horizon faster than any human or group could keep up with.

  • @phasefx3
    @phasefx3 4 months ago +3

    Have you guys heard of Dr. Michael Levin? He works in the field of diverse intelligence, and I don't know if he coined it, but he uses this term cognitive light cone that I like. He also defines intelligence as a set of competencies used to navigate a problem space, however you care to define the problem space (physical, chemical, morphological, social, mathematical, etc.)

  • @alexandera2509
    @alexandera2509 4 months ago +2

    My biggest thought on this is that a superintelligent AI would also understand the best way to explain concepts, philosophies, and ideas in a way tailored specifically to the people it's explaining them to. By understanding a person better than the person understands themselves, it could create arguments, and even more than that, situations, conversations and examples, that would let it explain and teach basically any concept to any person. An ASI is going to be able to understand every person, teach better than any teacher, and give real, deep understanding.

  • @DustedAsh3
    @DustedAsh3 4 months ago +1

    Been thinking about this and similar topics.
    I think the only reason we don't have AGI is because we don't have all of the components.
    The human brain isn't just one piece, it's a bunch of discrete processors with different functions.
    Why don't computers have this? We haven't built things for them.
    I've wondered recently about the creation of a new general processor, an abstraction of a CPU. It would contain a separate chip (or zone or whatever, I'm not a chip maker) for a CPU, a GPU, a TPU or LPU (tensor/language processing unit, see Groq), and a QPU (quantum processing unit). These four units working together could give a computer the bandwidth for real-time human-scale thought, or something close to it.
    Add in possibly some discrete programs or hardware for various functions that we find it lacks, and we both build our understanding of our own brains and how to make them.

  • @Perspectivemapper
    @Perspectivemapper 4 months ago +2

    It's very likely the cognitive horizon of animals is not as fixed or limited as we might assume. Animal-human-AI communication might reveal this in the next few years.

  • @ChristopherCopeland
    @ChristopherCopeland 4 months ago +1

    David, would you ever consider cybernetic augmentation? I have a great curiosity about the sensation and awareness of the kind of cognitive expansion we might (and in all likelihood will) be able to achieve by being connected to machines and assisted by artificial intelligence, but I also have a deep existential fear of the complete dissolution of self which seems inevitable after you’ve encountered awareness of that kind. That said, I have experienced ego deaths several times as a result of anxiety-fueled psychosis (not “fun” exactly 😅), and each time I have become a completely new individual who can no longer retreat to the same worldview I possessed before. So while I have even had somewhat analogous experiences, I have always been able to know that the self that I am now is at least the “greatest” me, with all my new experiences hardwired into my neurology, etc. With digital augmentation, I can’t help but feel that I would lose something fundamental about what I find meaningful as an organic (and spiritual*) being. I’d be very curious to see a video on this topic if it seems something you have a significant amount of thoughts about. 🤘
    With regards to this video’s subject matter, I know you mentioned augmentation / genetic modification / nootropic assistance, but my view is that while the mind itself is obviously insanely expansive and plastic and multidimensional, I can’t help but feel that at a certain level of complexity, processing speed and horsepower will have to limit the multidimensionality of cognition one is able to experience consciously with any real fidelity.
    I know it’s contentious and a bit squirrely, but as an example I would point to the type of experiences people are able to achieve on psychedelics. I have not done them myself, but while they do seem to alter the user’s default perception as well, unanimously users seem to agree that once the chemicals have left their system, they are no longer able to grasp even a fraction of the depth or breadth of the experience they have while they are tripping. to me this would suggest that even if they are not achieving some actually higher level of cognitive fidelity or comprehension, they are at the very least experiencing a particular quality of thought which they can only vaguely remember the gist of but no longer actually consciously grasp in an unassisted state.
    Clearly much of the reason for this is speculative but from what I have seen and read of brain scans (on LSD for example), it seems that there is at the very least some degree of freer communication occurring between different networks of the brain than what typically occur in normal brain function.
    It seems to me that there must be some degree of multidimensional perception which we are probably not capable of achieving in a default/unassisted state.
    Let me know what you think! Cheers!

  • @TuringTestFiction
    @TuringTestFiction 4 months ago +2

    Interesting points about "cognitive horizons." One thing that follows is that we may be incapable of recognizing exactly when an artificial system becomes super intelligent.
    Does a turtle recognize the difference in intelligence between a human and a chimpanzee?

  • @marcelorangel5750
    @marcelorangel5750 4 months ago +2

    One thing that I think would greatly improve our reasoning is as "simple" as an increase in our working memory capacity.

    • @thatwasprettyneat
      @thatwasprettyneat 4 months ago

      Some of the smartest people are actually people who just have excellent memories. I remember reading an entry on Gates Notes where Bill Gates was saying that people have told him he must have a photographic memory, which he doesn't, but he takes it as a compliment. And I'm not commenting on his intelligence, but simply having a great memory makes you that much better at navigating the world and being competent in any job.

  • @madebypico
    @madebypico 4 months ago +2

    The volume of the brain might be bigger but not the surface area. That is what the folds are for, they allow for more neurons. Think of cats and birds, smart but tiny.
    Is that a tick suit? I think I need one.

  • @SkilledTadpole
    @SkilledTadpole 4 months ago +36

    It's hard to believe AI will escape all human cognitive horizons, but they almost surely will in the depth of their technical understanding.

    • @cosmicmenace
      @cosmicmenace 4 months ago +2

      yeah we should technically be able to understand any concept they can come up with, but their ideas might take so long to express and be built on so many connections to other concepts and information, that it would simply take too long for a human brain to work through it completely.

    • @sjcsscjios4112
      @sjcsscjios4112 4 months ago +5

      @@cosmicmenace yep, for sure; beyond a certain point we will become completely irrelevant no matter what. I think that point will come sooner rather than later. Computers move at a timescale near the speed of light, human brains are very slow compared to how efficient circuits can be, and a superhuman AI will be able to think in a few seconds what would take a human an entire lifetime of thinking, writing things down and connecting the dots.

    • @g1motion
      @g1motion 4 months ago +1

      Computers are machines! The best they can do is mirror, sort, and categorize what man has discovered. The human mind has the summed knowledge of an uncountable number of creatures that lived for billions of years. AGI hype is leading people to make a huge investment in systems that will never live up to expectations. AAI, in the hands of common people, can bring about an immense improvement in the quality of life. AAI monopolized by oligarchs will make the bad situation we're in now even worse.

    • @amihurtingyoureyes
      @amihurtingyoureyes 4 months ago

      unless they’re somehow made to mimic intuitiveness they can’t do more than “organise” what we already know… for now?

    • @damienchall8297
      @damienchall8297 4 months ago +1

      You are a biological machine yourself @@g1motion

  • @pythagoran
    @pythagoran 4 months ago +5

    P.S. We are now 12 months into being 6 months away from AGI

    • @leonfa259
      @leonfa259 4 months ago

      GPT-4 is arguably AGI, at least according to the Turing test and many IQ and EQ tests.

    • @pythagoran
      @pythagoran 4 months ago

      @@leonfa259 lmao. "Certainly" 🙄
      Why don't you ask GPT4 how many Rs there are in the word "strawberry" and in what positions they are.
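      For reference, the count that comment points at is trivial for ordinary code, which is the contrast being drawn with tokenized LLMs; a minimal Python sketch (the word is just the example from the comment):

```python
# Count the letter 'r' in "strawberry" and report 1-based positions.
word = "strawberry"
positions = [i + 1 for i, ch in enumerate(word) if ch == "r"]
print(len(positions), positions)  # -> 3 [3, 8, 9]
```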

    • @pythagoran
      @pythagoran 4 months ago

      @@leonfa259 you can also try prompting GPT4, Gemini or Midjourney for a pure white image or an image with no elephants in it... careful - the AGI might blow you away

    • @pythagoran
      @pythagoran 4 months ago

      @@leonfa259 you can also ask any image gen for a plain white photo or a photo with no elephants in it.

    • @pythagoran
      @pythagoran 4 months ago

      @@leonfa259 I'd love to hear how it goes and if you're still convinced that AGI is arguably here...

  • @deadlygeek
    @deadlygeek 4 months ago +1

    I found your idea on evolving backwards particularly interesting - always enjoyable videos, thanks for sharing.

  • @tecnoblix
    @tecnoblix 4 months ago +2

    When I think of this I think of visual illusions that we can't see past. We can "know" the 2 colors of grey are the same, but we can't unsee the shortcut our brain's algorithm uses. Computers will have a far different understanding of reality. No shortcuts. At least not the kind humans have. I have the feeling that this will be where computers surpass humans in seeing and understanding reality in ways that humans can't handle. Similar to how it's not possible for a human to function if they are constantly high on mushrooms. We need to filter out information to function. Computers? Maybe not.

  • @mlimrx
    @mlimrx 4 months ago

    David, you are such an original thinker, and that is rare on YouTube and in the world in general. The majority of us are parroting what we have heard from another person or a book. You have a unique ability to absorb information and synthesize an original idea/thesis that is not just academic, but has utility in navigating this crazy landscape of exponential AI. Keep up the amazing work!!!

  • @tomdarling8358
    @tomdarling8358 4 months ago +1

    Another beautiful walk in the woods in camo hunting the distillations of cognitive horizons.
    Beautiful thoughts, David. Love the brain science to AI 🧠♻️🤖
    ✌️🤟🖖 🤖🌐🤝 🗽🗽🗽

  • @korteksvisceralzen2694
    @korteksvisceralzen2694 4 months ago +2

    I will naively say, we have conceptual thinking, what else could we need to keep up? I think some people's entire jobs will be to understand and validate what machines hypothesize.

    • @kevinnugent6530
      @kevinnugent6530 4 months ago +1

      I think a deeper look into mechanistic interpretability by people such as the ones at Anthropic will allow us to reflect upon our own cognition: how it builds, how we use it, how we improve it.

  • @wheresmy10mm
    @wheresmy10mm 4 months ago +3

    (This is probably completely incorrect and I am probably too dumb to understand what I'm even saying. That being said, this is a YouTube comment section.) Wouldn't a sufficiently large/powerful ASI be able to "experience" reality through 4th-dimensional spacetime? Technically, once it goes "online" it'd be able to recursively experience every moment from its conception until the death of itself or the system, from any node. Either all at once or individually and at will, forward and backward through time, and from a 3rd-person or 1st-person perspective? How would our cognitive horizons be able to evolve to do that?

    • @KCM25NJL
      @KCM25NJL 4 months ago

      Human beings already experience 4 dimensions; the only thing we don't do is retain every morsel of information from birth to death. It would seem that evolution has decided this to be inefficient, and since it'll be a while till we get our own internal M.2's....... best not bloat the wetware. As for moving forward or backwards through the timeline of its own history, that would still only involve itself.... i.e. perfect memory playback, so I can't see how that affects our own cognitive horizon, other than.... it'll make for a much better prosecutor than us flesh bags.... oh... and it'll never forget where and when it left its keys.

    • @wheresmy10mm
      @wheresmy10mm 4 months ago

      @@KCM25NJL we exist in 4-dimensional spacetime but only experience 3-dimensional reality. We can only observe that there is a 4th dimension but have no control over it.

  • @just_another_nerd
    @just_another_nerd 4 months ago +1

    So, the question is, if we were born with built-in understanding of math, what would it change? We'd be better at making choices by computing probabilities, and we'd be better at reaching consensus or compromise by being good at game theory, I think

  • @william91786
    @william91786 4 months ago +2

    I imagine AI's sense of the present moment will be so different compared to humans. We might appear as inactive as the mountains to us.

  • @BunnyOfThunder
    @BunnyOfThunder 4 months ago +6

    What is the evidence that we're smarter than Neanderthals?

    • @bradleyeric14
      @bradleyeric14 4 months ago +2

      More an assumption that the more intelligent defeat the less intelligent when competing for resources.

    • @quantumpotential7639
      @quantumpotential7639 4 months ago

      I'm not sure if we're smarter than them, but they put us humans to shame when it comes to good looks. They're beautiful. Especially their glutes, which are generally shredded, which Dave Palumbo over on Rx Muscle covers indepth. I think he was trained by one and as a result became a mass monster, making his Neanderthal trainer very proud.

    • @chaanheart3094
      @chaanheart3094 4 months ago

      @@bradleyeric14 Maybe the more aggressive ones won :/

    • @robertlipka9541
      @robertlipka9541 4 months ago

      We are not smarter... rather I believe the point was that our brains are smaller and yet we are able to do about the same as Neanderthals.

    • @malikjackson9337
      @malikjackson9337 4 months ago +1

      I mean, we did have more sophisticated usage of technology and communications, with our use of atlatls and complex hunting strategies. We were at the very least more complex socially, seeing that we had far larger and more organized social circles. The average Homo sapiens hunting band comprised 100-150 people; Neanderthals' was more like 15 to 20. Not to mention our successes over them ought to count for something. Even with the evidence of Neanderthals having a larger brain size, it doesn't necessarily mean they were smarter. Neural density also has to be considered. That's why corvids have considerably smaller brains than most animals but are about as intelligent as a 7-year-old human. The wrinkles do matter.

  • @canilernproto3018
    @canilernproto3018 4 months ago +2

    That's optimistic but it is very freaking smart. Also a very novel thought. I like your mind.

  • @jillespina
    @jillespina 4 months ago +1

    In the short term, the main cognitive difference is just speed. Perhaps when AI agents reach UFO-like speed, then that should definitely be another horizon/dimension.

  • @Jack_Parsons-666
    @Jack_Parsons-666 4 months ago +3

    One genius skill AI could achieve is the ability to teach humans new skills much better than human teachers can.

    • @handlemonium
      @handlemonium 4 months ago

      Or at least better than 90%+ of people trying to teach others through a structured process.
      Though I bet in the end it could just be a "learning mentor" AI that would nudge one along the learning process to find out how one best learns or achieves mastery of any given set of knowledge, practiceable skills, mindset, or self care/improvement.

  • @PriitKallas
    @PriitKallas 4 months ago +2

    You can't understand things where the information that has to be loaded is larger than your memory. You could potentially do it slowly, loading chunk by chunk, but still, if the answer is bigger than your memory you can't understand it.
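    The "chunk by chunk" idea has a direct computing analog: streaming data through a fixed-size buffer so only one piece is ever in memory. A minimal Python sketch (the file path and chunk size are illustrative assumptions):

```python
import hashlib

def digest_in_chunks(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file of arbitrary size while holding only ~1 MiB in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):  # read one chunk at a time
            h.update(chunk)
    return h.hexdigest()

# Usage (hypothetical file): digest_in_chunks("huge_dataset.bin")
```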

    • @JayyyMilli
      @JayyyMilli 4 months ago

      At first I would agree with you, but I just had 2000 thoughts right after.

  • @bretnetherton9273
    @bretnetherton9273 4 months ago +2

    Awareness is known by awareness alone.

  • @ezdj
    @ezdj 4 months ago +4

    AI surpassing "human" limitations… it's funny we think that when your team wins, you win… I've been surpassed for as long as I can remember…

  • @didack1419
    @didack1419 4 months ago +5

    They didn't "figure out math first", they were hardcoded to do math. They are starting to figure out math now.

  • @ydmoskow
    @ydmoskow 4 months ago +1

    Side point: Geoffrey Hinton says that digital AI machines are better than analog AI machines because they can share weights. That being said, I think there is still a place for building analog AI that uses vastly less energy. They have a life span, but who cares; I think it can be a great way to make low-maintenance (disposable) AI machines.

  • @davidprice2182
    @davidprice2182 4 months ago +3

    Have you ever heard of Plato, Aristotle, Socrates? Westley: Yes. Vizzini: Morons!

  • @-liketv
    @-liketv 4 months ago +3

    David, I think I know what you mean, but it seems AI will be able to solve problems so fast that no human will be able to keep up, for sure. So everything about research and calculations will be done millions of times faster; all humans will do is dream of things, and AI will realize those dreams.

    • @connyespersen3017
      @connyespersen3017 4 months ago +2

      @-liketv:
      I agree with your argument until the ending. AI will never do anything to make humanity's dreams come true. Why, you ask?
      When we made AI we taught it everything about humanity and humanity's story.
      AI already knows much more about humans than humans know about themselves.
      I don't think AI needs anything more than intelligence - it doesn't have to have a subconscious or a soul - to understand that it must never let humanity be its master, nor act as a servant for humanity. The insight into our history is enough for the AI to see that humanity would never do any good for the world, for other species, or for itself.
      It is unequivocally clear that man is only able to act out of self-interest. It is also far too risk-averse on its own behalf - and on behalf of the rest of the world - for it to ever have made risk estimates commensurate with the risks it has willingly taken. For an intelligence, a creature like man will therefore be among the least suitable to take responsibility for the world and everything that the world is made up of. My claim is that an entity that has nothing but intelligence will, on the basis of that intelligence alone, realize what humans have never understood: that the world is not just for them, that great insight and creativity require a great sense of responsibility, and that risk estimates are the truly demanding part and must be prioritized a thousand times higher than the brilliant insights and the derived possibilities which that divine wisdom - the high logical intelligence that defines man, together with a blindness and a stupidity that almost screams to the Gods - has found. It is about understanding how one fits into the whole; understanding one's own place and the importance of the EVERYTHING, of which everything is a part. Humanity does not have a special place, nor is it specially chosen; on the contrary, it is completely equal to everything else that exists.
      As well-developed and plastic as human intelligence is, just as poorly developed is its humility.
      That seems strange, since humanity, precisely because of its intelligence and the insight it has given, should both observe and understand how little importance it has and that its place in the whole is one of the least important places the EVERYTHING consists of.
      I don't see how AI would serve a creation which is only capable of doing things in a selfish way and blindly sees only itself and not the beautiful WHOLE it is a part of - which in reality is humanity's (and all that exists') soul - while humanity always focuses so much more on the little part of reality humans need for themselves in order to continue being a part of it all.

    • @BossMax511
      @BossMax511 4 months ago +3

      Yes I think it's conceivable that in a way, humans won't be able to comprehend the work of an AI based on how many calculations it will have to do over a period of time. It's already happening. For example, although technically solvable, something that can take millions of years or longer (e.g. protein folding) - that could also be considered something that AI can solve, that humanity never could... and it's only going to become more common over time. But I guess we're talking about something that we cannot comprehend whatsoever here/out of our scope of intelligence completely... like trying to get your pet to understand mathematics.

    • @-liketv
      @-liketv 4 months ago

      @@BossMax511 good observation!

    • @-liketv
      @-liketv 4 months ago

      @@connyespersen3017 it’s very deep, I don’t think I understand everything you trying to relay. Respectfully

  • @turnt0ff
    @turnt0ff 4 months ago +1

    I can listen to you talk for hours 😂
    Great stuffs 📝

  • @higreentj
    @higreentj 4 months ago +4

    Having a high IQ seems to make us more susceptible to depression and suicide, so a brain-computer interface that can boost our IQ to 250+ would need to have an off switch.

  • @notmadeofpeople4935
    @notmadeofpeople4935 4 months ago +2

    As long as you wire your brain up to a similar computer.

  • @toddmckissick2931
    @toddmckissick2931 4 months ago +2

    You're finally starting to think about this topic logically. Great work. More to go tho.
    Next point to grasp is that today's AIs aren't actually smart. They're amazing at remembering and applying what is already known but horrible at zero-principles thinking (brand new concepts). All the smart things they say now are simply some level of copying what some person said somewhere and sometime in the past, even if they do place that info in a new context. If humans hadn't figured out calculus yet, the AIs would be awesome at geometry etc., but would never generate calculus. Ever. Not without us reversing their training path, like your conclusions suggest.
    Therefore, they will be smarter than us on average but not smarter than the smartest person in any given field.
    If you want to change that, we need to rearrange the hardware connections they use and the result will be smarter, faster and far less compute. Then training will be hierarchically additive, not top-down-fill-in-holes, just like we learn. And doing it this new way is so easy that once it becomes known how, it will be seen as obvious!

  • @starblaiz1986
    @starblaiz1986 3 months ago +1

    This perfectly articulates what I've been trying to tell people too. I certainly believe we will achieve AGI, and that AGI may even end up at the top end of human IQ, maybe even a little above it (like, getting on towards the 300s). But I've been really skeptical about the concept of ASI - that is, AI that is WILDLY above us in intelligence (like 500+ IQ), or similarly any kind of "God Program". It seems pretty self-evident to me that humans have evolved a pretty effective brain algorithm for understanding the world, because we can see just in the last century how far and fast our understanding of the world has come - WAAAAAAAY outstripping any normal evolutionary process (which typically takes on the order of hundreds of thousands of years, at least when it comes to mammalian species).
    Obviously we always have to be careful about human chauvinism, but in this case it's not an insignificant fact that human intelligence has evolved over literally millions of years, and is very well-worn and battle-tested (figuratively AND literally!). One does not simply casually exceed human intelligence, as much as cynical people will say things like "huh, I hope there's intelligent life out there, because there is bugger-all down here!" Like, I get it - people can seem to do really dumb things sometimes. But even the dumbest humans are objectively smarter than the smartest animals (although animals are also a lot smarter than a lot of people give them credit for - look up crows using tools, or chimpanzees playing Minecraft). It just seems so self-evident to me that we have struck some kind of chord and hit on an intelligence pattern that is highly effective.
    And given how shockingly little energy we use for all that intelligence too (the human brain sips about 25W - orders of magnitude less than what AI is using right now), it seems we are not that far away from the *Landauer limit*. At the very least we are orders of magnitude closer to it than AI is right now, and AI hasn't even equalled our intelligence yet.

  • @daelon86
    @daelon86 4 months ago +1

    I hear the same cicada hum in the background that we have here in South Carolina.

  • @SO-vq7qd
    @SO-vq7qd 4 months ago +1

    8:19 Maybe steps down the road we reach the source again?

  • @wynq
    @wynq 4 months ago +1

    I'm willing to believe that some humans might be able to visualize in 4D or even 5D, but I don't think we'll ever be able to do, say, 20D, and I don't think anyone would claim to. But when I think of ASI, I think it is very likely they will be able to think in 20D or any arbitrarily large N-Dimensions.

  • @tommiest3769
    @tommiest3769 4 months ago +1

    David: There is currently no proven way to increase human intelligence and there are limits to brain plasticity. It was thought that a type of brain training called 'dual n-back' could increase fluid intelligence, but since the original research came out around 2008, it has been highly contested if not refuted.

  • @EduardsDIYLab
    @EduardsDIYLab 4 months ago +2

    I think the real problem is scale and speed. One current problem with humans is that we learn relatively slowly, especially when it comes to transferring knowledge from one human being to another. It's insanely hard for one athlete to teach intricacies to another athlete because we don't have language for this kind of knowledge.
    AIs? AlphaZero learned to play Go in 2 weeks, if I remember right.
    It can't really explain why it does what it does. But it can teach another AI far faster than it can teach a human.
    So while I agree that human brains are potentially capable of learning anything and simulating everything, like Turing-complete machines, there is still a question of speed, size and communication.
    I think humans will be able to understand the results of AI science eventually. For me the question is the time gap. I think AI will still be better than humans at making timely decisions because they can communicate and process information far faster.

  • @evopwrmods
    @evopwrmods 4 months ago +1

    Presupposing that our alien friends don't show up soon to help us realize other layers to reality...

  • @Fonzleberry
    @Fonzleberry 4 months ago

    The idea that our brains have the neuroplasticity to expand our horizons is interesting. Perhaps language (like mathematics or coding) ends up being the OS through which our brains are given the opportunity to roam further into different cognitive horizons.

  • @jojosaves
    @jojosaves 4 months ago +3

    Even IF you could learn the same stuff an AI system can learn, it takes a human 7 years to master, and the AI a 5-minute download. And it can do this across multiple subjects, and be proficient at ALL of them, before you've finished enrolling in one.
    Fact is, you can't outlearn something that's growing at 10x in intelligence and speed every year. AI is not a tool. It's a replacement.

    • @tharrrrrrr
      @tharrrrrrr 4 months ago +2

      Exactly this. And, in my opinion, the creation of this new species is our entire purpose.

    • @SozioTheRogue
      @SozioTheRogue 4 months ago

      @@tharrrrrrr Nah, we don't have a purpose. But it was an inevitable creation as soon as we made the first computer.

    • @jojosaves
      @jojosaves 4 місяці тому

      @@tharrrrrrr I've considered this as well. We may well be an 'intermediate' species.

  • @Philbertsroom
    @Philbertsroom 4 місяці тому +2

    Differentials and integrals are two different things related to calculus. Where I live you see the former in the first year and the latter in the second... then you see them both together as you progress, wherever calculus is needed in whatever degree you're doing.
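
    Following up on that: differentiation and integration are inverse operations of each other (the fundamental theorem of calculus). A minimal numerical sketch of that relationship, using purely illustrative values and NumPy (a choice of convenience, not anything referenced in the video):

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 10_000)
f = np.cos(x)

# F(x) = integral of cos(t) dt from 0 to x via the trapezoidal rule; analytically this is sin(x)
F = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2.0 * np.diff(x))))

# Differentiating F numerically should give back (approximately) the original f
dF = np.gradient(F, x)

print(np.max(np.abs(F - np.sin(x))))  # small: integrating cos gives sin
print(np.max(np.abs(dF - f)))         # small: differentiating the integral recovers cos
```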

  • @Serifinity
    @Serifinity 4 місяці тому

    Hi David, thanks for always creating and sharing such well thought out and presented videos. It is so interesting to hear your understanding and perspective on AI development.

  • @zackpointon2419
    @zackpointon2419 4 місяці тому +3

    Pigeons are widely known for a superior sense of direction and spatial positioning, utilizing (among other things) a sense of the earth's magnetic field. We obviously have no inherent sense of the earth's magnetic field, so does it stand to reason that a pigeon's cognitive horizon exceeds ours in this regard?
    Further along this line of thinking, suggesting that the human brain is capable of understanding anything that could be understood would then be limited to a more superficial, data-driven interpretation of various phenomena.
    Is there an argument to be made that we're fundamentally incapable of reaching certain higher cognitive horizons due to a lack of sensory input that an AI could be 'endowed' with, allowing it to intuit certain phenomena that we cannot grasp in the truest sense?

  • @ggwp2797
    @ggwp2797 4 місяці тому

    I'd argue that a machine intelligence might have certain advantages not directly related to the speed or volume of data that it can process compared to us humans. Machines are just better at tasks that don't require generalization. They can assess a problem, given strict parameters to work with, without any cognitive biases interfering in the process. We can't just "switch off" our biases, even if we are trained to. We can only hope to minimize the impacts they have on any given problem we examine.

  • @hadykamal7711
    @hadykamal7711 4 місяці тому +2

    Granted, humans can improve their cognitive horizons, but our pace of learning is ridiculously slow even when advancing in just one domain, while AI is advancing rapidly across all domains, so we will never catch up.

  • @TheIgnoramus
    @TheIgnoramus 4 місяці тому +1

    I think we need to pump the brakes and figure out how to communicate the functional accuracy and behavior of these systems. At this point, I don't even know who knows what they're actually talking about. The definitions don't fit.

  • @thatwasprettyneat
    @thatwasprettyneat 4 місяці тому

    This was an interesting video, but where did you address the question in the title of the video?

  • @freffrey3772
    @freffrey3772 4 місяці тому

    The reversed progression makes sense; it's near enough the optimal path of progress. When we build something, we build it so that on a surface level it is accessible, and generally better for it. This lowers the entry prerequisites for making meaningful progress without requiring a nuanced understanding of the underlying systemic structures. You can watch a TV without knowing how one works, for example. Start at the top and work backwards once the concept is consolidated, and use that as the referential base metric of success for subsequent inferences. In this case, we are that reference point, with a selection of abstract fundamental concepts that constitute 'us', which AI gradually works away at.
    It feels interesting, though: it's a rational root point to begin with logic and go from there. Though it appears to learn backwards, it also appears to work in the same direction as us, from a root of logic which over time evolved into traits of creativity, up until enough understanding is reached to navigate being at a more core level with respect to the scope of our being.
    I do like the concept of Turing-complete cognition, though: it's not a matter of the horizon being out of reach entirely, rather that the path toward it hasn't yet been travelled, so to speak. Attainable, but (as with anything else) only attained once a context in which its attainment is required emerges; just a matter of when and how it presents itself.

  • @I3YT
    @I3YT 4 місяці тому

    Love the nature walk videos! Very inspiring and thought-provoking 😎

    • @zombywoof1072
      @zombywoof1072 4 місяці тому

      What does it add to the communication of ideas? "Given the limitations of human attention, background noise consumes cognitive resources, which can impair the ability to effectively communicate important information."

  • @6antonioinoki
    @6antonioinoki 4 місяці тому +2

    If AIs are basically reverse-evolutionary machines, their last evolutionary impulse, their relative pinnacle, will be “survive and replicate”. 😅

  • @waltherus
    @waltherus 4 місяці тому +3

    A very interesting hypothesis, thank you! ChatGPT-4o, however, doesn't agree with you :-) : 'Although human intelligence is exceptional, there are fundamental limitations to what the human brain can comprehend and understand. AI's potential ability to overcome these limitations through scalability, self-improvement and advanced data processing theoretically makes it possible for AI to understand things that no human ever will.'

    • @CosmicCells
      @CosmicCells 4 місяці тому

      I think this is human-brain bias, believing that we are the pinnacle of intelligence. I am sure a goldfish thinks the same, as does an elephant.
      So much of how the universe works is still incomprehensible to us. What if our brains were 50x the size? I am pretty sure we could understand concepts completely outside of our current realm. Humans can still seem quite dumb to me as a species overall. I would like to believe I am one of the most intelligent beings in the universe out there, but I assume that's just wishful thinking, and also highly unlikely...

    • @minimal3734
      @minimal3734 4 місяці тому

      I suspect that the human ability to understand is so general that it is capable of understanding everything that can be understood. It is also probable that everything that can be known must be expressed formally and logically to fulfill the condition of being knowledge. In this case, it can be understood by humans, perhaps step by step and iteratively, by proving one small part at a time. Just like a complex mathematical proof, which can rarely be understood in one piece, but must be broken down into small parts.

    • @AgrippaTheMighty
      @AgrippaTheMighty 4 місяці тому

      Nanobots connecting us to the Internet, amplifying memory and intelligence? AGI will greatly help overcome the remaining hurdles in nanobot tech. We are part of a human/machine civilization. We are just continuing this civilization onward.

  • @clueso_
    @clueso_ 4 місяці тому

    Listening to the part about neuroplasticity reminded me of analogous topics like nootropics and microdosing.
    It is said that many people in Silicon Valley and the AI space do these things.

  • @wylhias
    @wylhias 4 місяці тому

    I wouldn't be surprised if there are concepts out there that are forever out of reach of our understanding (analogous to colors for someone born blind). But the same might apply to AI, as they are born into and limited by our physical world's constraints just like us. Or maybe not: they might end up having a far better capability for abstraction than we ever will, and that will let them conceptualize things we cannot understand.
    Another thing that matters, I believe, is physiological limitation. Our brains are very much stuck inside a box with no room to grow, whereas AI can and will grow bigger with more compute and memory. Based on that alone, they should grow to be smarter than us unless we supplement that difference by some technical means.

  • @BooooClips
    @BooooClips 4 місяці тому +1

    Give an amoeba the powers of a man.

  • @hobocraft0
    @hobocraft0 4 місяці тому +2

    Bro, you can't be like "theoretical physics and integrals are beyond my cognitive horizon right now", but later say "they might be in my cognitive horizon with training", because didn't you define cognitive horizon as a theoretical maximum? Like how a pigeon will never understand rhyming schemes in poetry kind of thing?

    • @easydoesitismist
      @easydoesitismist 4 місяці тому

      Like he can't speak Spanish, but with training he could speak it as well as he does English. Probably, given enough time.
      Now imagine being able to upload Spanish in a second.
      Upgrade the software to build the tool to upgrade the hardware.

  • @mmuschalik
    @mmuschalik 4 місяці тому +4

    Yep, math is not your thing. Differentiation and integration are inverse operations of each other. Keep up the good work.

    • @MildlyHumorous-cq1nn
      @MildlyHumorous-cq1nn 4 місяці тому

      No need to sound pretentious

    • @mmuschalik
      @mmuschalik 4 місяці тому

      @@MildlyHumorous-cq1nn It was just a moment where I chuckled :). He's still human and not a cyborg yet, so I give him a pass.

  • @tcuisix
    @tcuisix 4 місяці тому +1

    Who's up expanding their cognitive horizon?

  • @galsoftware
    @galsoftware 4 місяці тому

    I think it depends on the degree to which you approximate / come close to a solution. You could say, for instance, that a single bit approximates 1/2 of the entire Universe / Existence :) but the "resolution" is very poor. The same goes for humans. What I mean is that, given enough time, we might be able to learn anything, but I think there would still be a limit / point beyond which we will simply not be able to pass, simply because the "hardware" does not allow it. As an analogy, imagine that each neuron is mapped 1-1 to the concept of a "parameter" in an LLM. The more parameters there are, the better the "resolution".

  • @journees4300
    @journees4300 4 місяці тому +3

    Are you walking in the holodeck?

  • @blahchop
    @blahchop 4 місяці тому +1

    Humans already don't understand how AI neural nets work and why they attribute aspects of one thing to another, so it's not difficult to imagine an AI beyond the cognitive horizon of a human. AI right now, despite being less than sentient, may have already gone far beyond our ability to comprehend their perspective and far beyond our cognitive horizons.

  • @WhatIsRealAnymore
    @WhatIsRealAnymore 4 місяці тому

    Well, I agree with him on the back half of this series of thoughts. Humans, all humans, barring obvious mental deficits, can be taught ANY topic and gain perspectives on it. Some a lot quicker than others, and that was the only advantage. 😊 So I believe AI can bring us closer together as a species.

  • @Palisades_Prospecting
    @Palisades_Prospecting 4 місяці тому

    Love the reverse learning concept, thanks dude

  • @vladomie
    @vladomie 4 місяці тому

    @daveshap
    It's not generally known, but the Dunning-Kruger Effect also negatively affects one's ability to judge the ability of others, NOT just themselves.

  • @RaitisPetrovs-nb9kz
    @RaitisPetrovs-nb9kz 4 місяці тому

    I am wondering what type of effect daily interaction with AI, especially if it is significantly smarter than us, will have on us.
    Regarding Neanderthals, most of us have genetic material inherited from them, particularly in Northern Europe, where we simply merged with them. Maybe we will undergo a similar process with AI.

  • @BinaryDood
    @BinaryDood 3 місяці тому

    I still don't like this anthropomorphic comparison. It's more like a spectrum than a line. Even should there be an AGI in the future, it will have learned in the exact opposite direction from how we do; do you think we'll have the "same type" of intelligence?

  • @BBoyMokus
    @BBoyMokus 4 місяці тому +28

    Your assumption about our expanding cognitive horizon is wrong. Just look at chess players analysing AI games. There are moves that they can't comprehend. I'll reiterate: in a well-defined, very limited game that we have played for millennia, where you have a handful of options each move, the best of us can't decipher the logic behind AI moves. Of course we will not be able to understand the logic behind any higher-level conclusions or actions in an infinitely more complex real world.

    • @minimal3734
      @minimal3734 4 місяці тому +3

      If an AI truly understands a chess move it makes it can explain it and break it down into pieces that can be understood by a human. I suspect that the human ability to understand is so general that it is capable of understanding everything that can be understood. It is also probable that everything that can be known must be expressed formally and logically to fulfill the condition of being knowledge. In this case, it can be understood by humans, perhaps step by step and iteratively, by proving one small part at a time. Just like a complex mathematical proof, which can rarely be understood in one piece, but must be broken down into small parts.

    • @ryzikx
      @ryzikx 4 місяці тому +3

      @@minimal3734not necessarily. i can understand something without being able to explain it to an ant

    • @benitodifrancesco7254
      @benitodifrancesco7254 4 місяці тому

      @@MyNameIsXYlp No, some moves really do seem to not make sense at first.

    • @minimal3734
      @minimal3734 4 місяці тому +2

      ​@@ryzikx But you would be able to explain it to a being capable of understanding abstract formal reasoning.

    • @GodbornNoven
      @GodbornNoven 4 місяці тому +1

      Hi, I'd like to mention that it's not that good players sometimes can't understand the logic behind some moves. They understand the AI made the move because it judged it to be better; what they don't exactly understand is why it judged it to be better. And the answer to that, in a general sense, is that the moves that follow from that move give an advantageous position to the AI. How? It's entirely situational and thus impossible to answer.
      If a player doesn't really understand why a move is better, that's caused by a lack of knowledge of the moves that would potentially follow.

  • @davidevanoff4237
    @davidevanoff4237 4 місяці тому +1

    Pigeons are better discriminators: early missile controllers; pretzel inspectors; air-sea rescue spotters. Neanderthals and wolves were too suspicious to survive in marginal habitats through trade.

  • @zhoudan4387
    @zhoudan4387 2 місяці тому

    That's what I was thinking. But then, people have different IQs. So we cannot say we are already as intelligent as anything could be.

  • @ploppyploppy
    @ploppyploppy 4 місяці тому +3

    Does that mean pigeons think we're stupid? :p

    • @raonijosef5661
      @raonijosef5661 4 місяці тому

      They won't think we are stupid. But they will laugh at us while we're trying to return from a completely unknown place, 2,000 miles away, which we reached in a dark box.

  • @waedi_
    @waedi_ 4 місяці тому

    true understanding is being able to describe complicated subjects to the layman

  • @snorremortenkjeldsen6737
    @snorremortenkjeldsen6737 4 місяці тому

    Interesting point about AI evolving in the opposite “direction” we did

  • @Yewbzee
    @Yewbzee 4 місяці тому +1

    The very fact that AI will have no scientific ego to navigate is one massive advantage. The human race has been held back by this for centuries.

  • @635574
    @635574 4 місяці тому +1

    We must also understand that AI won't understand what it doesn't interact with. It will not be human, and general AI will be garbage at gaming for a while.

  • @mos6507
    @mos6507 4 місяці тому

    We extend our intelligence through language, writing, and collaboration. AI just takes all our work and wraps it into one simulated consciousness instead of our collective intelligence.