Michael Graziano on consciousness, attention schema theory, AI | Thing in itself w/ Ashar Khan

  • Published 16 Nov 2024

COMMENTS • 18

  • @stephengee4182 · 7 months ago +2

    Any theory of consciousness must be able to explain how anesthetic and hallucinogenic compounds work.

  • @djoe-tf8hn · 8 months ago +2

    The squirrel case does not answer the hard problem of qualia: at the least, there are experiences of color, whereas the squirrel is more of a psychology case. Believing in something is not the same as experiencing color. Even if the color is the brain's mistake, you still need to explain how the color arises in experience.

  • @deepzan1 · 8 months ago +1

    Man the interviewer could use a bass equalizer in his larynx

  • @ingenuity168 · 1 year ago

    Please up the volume a tad.

  • @WeMakeSuperLuckyFace · 9 months ago

    I couldn't understand what the argument is. I don't see how this invalidates the hard problem. I think the hard problem might be fundamentally unsolvable.

    • @markvosslpcc · 9 months ago

      His argument is that the brain creates simplified models of reality, which we call qualia, but such qualia are not entirely accurate or real. They're "just a model" of "deeper information". But, as countless philosophers have pointed out, thinkers like him seem unable to grasp that their cherished material brain is itself a "simplified model". The brain makes the brain, according to their worldview. It's not logical.

  • @blakejon · 1 year ago +8

    I remain unconvinced. Even if my mental model is inaccurate -- i.e. you erroneously believe your own mental model is a phenomenal experience -- it doesn't explain why that inaccurate model feels like something. It's side-stepping the question without explaining how the brain can make a mental model feel like something, just asserting that the "feels like something" is inaccurate. The inaccurate mental model -- you're wrong about how your mind works -- is on a different order than thinking you have a squirrel in your skull, a factual question about the world. Your mental model could be completely wrong about the facts of the world around you, but it still feels like something no matter how hallucinatory. How?

    • @one2zero471 · 1 year ago +6

      I am sympathetic to your statement. I think, though, that he uses the word belief much the same way Keith Frankish does, in that the model of attention is a belief that you are conscious. I am not sure, but I suspect this means that the belief IS the model or IS the consciousness. So a belief, being a conceptual or propositional attitude, is linguistic and computational in nature, and this is where we can grasp the idea that consciousness is just data. Thus, it's an illusion that consciousness is anything other than data. The issue I have - which I think you share - is that, even if this is true, it STILL does not explain why it feels like something. The magic he claims to remove is still there.

    • @blakejon · 1 year ago +2

      @@one2zero471 Agreed. The video didn't make it clear to me how our mental models being based on data -- something I agree with -- explains the experiential aspect of consciousness. I would argue that there are mental models in the autonomic nervous system, based on information about the world, that don't require a conscious experience to function. What makes our sensory attention different, such that it allows/requires its mental models to feel like something?

    • @larawhite5890 · 1 year ago +4

      From the view of spiritual enlightenment, there is no one there who is experiencing, no one who "feels to be me", i.e. no squirrel. There is just experience experiencing itself. The body-mind creates a model of itself as an autonomous I in order to be part of a social structure. This is how I understand it so far.

    • @raresmircea · 1 year ago +3

      Just like everyone here seems to believe, I too believe I’m a predictive model the organism makes of itself (a layered model comprised of several aspects like attentional self, volitional self, bodily self, narrative self, social self). The world I see around me is only a model of the world that’s generated within my skull. And as various individuals across history have discovered, these two models are in fact joined into one unitary object within which the self-other distinction is just a seeming partition, whose seeming can collapse, leaving behind a unitary, awake, allocentric (instead of egocentric), responsive field (the so-called enlightened state). But the problem is explaining: *1. how can this model manifest qualities* (like "🔴"), & *2. how is binding implemented.*
      I don’t deny that my inner experience of red is in a way illusory, because the outside red light seems to be just a non-qualitative oscillation with a certain frequency, or at least this is how the retinal sensors register it (as pure mechanical oscillation). But the biggest question of all still remains, *how can the brain generate this quality out of supposedly non-qualitative stuff* (quantum fields, particles, pure information, constructive mathematics, whatever)?
      And a similarly big question: how can distributed "feature" processors (spatially & likely also temporally separated bits of the brain responsible for predicting "edges", "colors", "sounds", "touch sensations", etc.) either come together themselves or stitch together their output in such a way as to create a single locally & globally bound entity? All the answers to this problem are either ignorance or idiocy. The only theory which has an intuitive answer that makes any sense is Integrated Information Theory. But in my opinion IIT fails to account for the qualitative aspect, saying that the different qualities of phenomenal experience are just different spatio-temporal integrated informational structures. Why would abstract structure = felt quality, though? And why should one structure give rise to "🔴" while another gives rise to "sweet"? I do suspect that theories like IIT will indeed identify mathematical structures that correlate with different qualities, and this will absolutely revolutionize the world, because we would have the ability to at least "nudge" brain activity away from pain & suffering and towards intensely positive & interesting states of being (possibly new exotic types of qualia). But we won’t have an explanation of *why* color, sounds, emotion… and not something else. And we won’t have an explanation of *how* color emerges from something non-color.
      I don’t think we should accuse people of being p-zombies because this is the first step towards discarding their ideas & worse, to dehumanize them, but honestly this is the sole explanation I can think of when I keep encountering people just talking past the most obvious & most important aspects of reality. You struggle to point towards the character of experience & to formulate your question as clearly as possible and then they just speak past it as if it wasn’t there.

    • @blakejon · 1 year ago +1

      @@raresmircea I've had similar experiences trying to discuss this subject. People seem to not share my qualitative experience. Learning about aphantasia has me wondering if some folks really don't have a first-person experience, or at least one that I wouldn't recognize. It's a conundrum.

  • @Nonconceptuality · 1 year ago

    The brain doesn't do anything