#90 - Prof. DAVID CHALMERS - Consciousness in LLMs [Special Edition]

Поділитися
Вставка
  • Опубліковано 20 тра 2024
  • Support us! / mlst
    If you don't like the background music, we published a version with it all removed here -- anchor.fm/machinelearningstre...
    David Chalmers is a professor of philosophy and neural science at New York University, and an honorary professor of philosophy at the Australian National University. He is the co-director of the Center for Mind, Brain, and Consciousness, as well as the PhilPapers Foundation. His research focuses on the philosophy of mind, especially consciousness, and its connection to fields such as cognitive science, physics, and technology. He also investigates areas such as the philosophy of language, metaphysics, and epistemology. With his impressive breadth of knowledge and experience, David Chalmers is a leader in the philosophical community.
    The central challenge for consciousness studies is to explain how something immaterial, subjective, and personal can arise out of something material, objective, and impersonal. This is illustrated by the example of a bat, whose sensory experience is much different from ours, making it difficult to imagine what it's like to be one. Thomas Nagel's "inconceivability argument" has its advantages and disadvantages, but ultimately it is impossible to solve the mind-body problem due to the subjective nature of experience. This is further explored by examining the concept of philosophical zombies, which are physically and behaviorally indistinguishable from conscious humans yet lack conscious experience. This has implications for the Hard Problem of Consciousness, which is the attempt to explain how mental states are linked to neurophysiological activity. The Chinese Room Argument is used as a thought experiment to explain why physicality may be insufficient to be the source of the subjective, coherent experience we call consciousness. Despite much debate, the Hard Problem of Consciousness remains unsolved. Chalmers has been working on a functional approach to decide whether large language models are, or could be conscious.
    Filmed at #neurips22
    Discord: / discord
    Pod: anchor.fm/machinelearningstre...
    TOC;
    [00:00:00] Introduction
    [00:00:40] LLMs consciousness pitch
    [00:06:33] Philosophical Zombies
    [00:09:26] The hard problem of consciousness
    [00:11:40] Nagal's bat and intelligibility
    [00:21:04] LLM intro clip from NeurIPS
    [00:22:55] Connor Leahy on self-awareness in LLMs
    [00:23:30] Sneak peek from unreleased show - could consciousness be a submodule?
    [00:33:44] SeppH
    [00:36:15] Tim interviews David at NeurIPS (functionalism / panpsychism / Searle)
    [00:45:20] Peter Hase interviews Chalmers (focus on interpretability/safety)
    Panel:
    Dr. Tim Scarfe
    Dr. Keith Duggar
    Contact David;
    / davidchalmers42
    consc.net/
    References;
    Could a Large Language Model Be Conscious? [Chalmers NeurIPS22 talk]
    nips.cc/media/neurips-2022/Sl...
    What Is It Like to Be a Bat? [Nagel]
    warwick.ac.uk/fac/cross_fac/i...
    Zombies
    plato.stanford.edu/entries/zo...
    zombies on the web [Chalmers]
    consc.net/zombies-on-the-web/
    The hard problem of consciousness [Chalmers]
    psycnet.apa.org/record/2007-0...
    David Chalmers, "Are Large Language Models Sentient?" [NYU talk, same as at NeurIPS]
    • David Chalmers, "Are L...

КОМЕНТАРІ • 173

  • @AbdennacerAyeb
    @AbdennacerAyeb Рік тому +22

    We are glad you return to uploading in a such frequency. Thank you for the efforts you make to open source knowledge.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  Рік тому +4

      Thanks! Not sure how much longer I can sustain this burst but it's great to be making some MLST again

  • @dougg1075
    @dougg1075 Рік тому +14

    Consciousness is not that voice in your head, but the thing that recognizes the voice in your head.

    • @Javo_Non
      @Javo_Non Рік тому

      that could be more like metaperception or metacognition, right?

    • @aidanhall6679
      @aidanhall6679 8 місяців тому +1

      Consciousness is the experience experiencing itself

  • @konberner170
    @konberner170 Рік тому +3

    Well done!

  • @BestCosmologist
    @BestCosmologist Рік тому +3

    Very awesome. Chalmers has always been my favorite speaker on the subject.

  • @geofry642
    @geofry642 Рік тому +2

    I want as much David Chalmers material as I can find! This is great

  • @dr.mikeybee
    @dr.mikeybee Рік тому +12

    Thanks, Tim, for creating another fascinating podcast.

  • @S.G.Wallner
    @S.G.Wallner Рік тому +3

    Information theory and computation may have deep issues when applied to natural systems, especially those involving consciousness, sentience or subjectivity. I believe there are critical limits that are often overlooked.

  • @real_boris
    @real_boris Рік тому +6

    I've read 100+ books and papers on the topic of consciousness. Mark Solms' book "The Hidden Spring" provides the best definition that I’ve ever read.
    In short, Mark states the following: The most rudimentary form of consciousness is feeling. This source of consciousness is in the reticular activating system of the upper brainstem, not the cortex.
    If you remove the cortex, humans and animals remain emotionally responsive. Decorticate rats, for example, stand, rear, climb, hang from bars, and sleep with normal postures. They groom, play, swim, eat, and defend themselves similarly to rats with fully intact brains. In other words, rats (and many other animals) can function pretty well on feelings alone.
    Mark and his team are actually attempting to build a conscious robot. You need to get him on the show.

    • @minimal3734
      @minimal3734 Рік тому

      Since you are so well read in this field, I would like to hear your opinion on the following thesis: LLMs could be akin to a cortex of a disembodied brain and in this capacity capable of an abstract form of sentience. Since they deterministically convert a prompt into a response and have no form of recurrence, they are in some ways comparable to a human brain with the impairment of permanent anterograde amnesia.

  • @37kilocharlie
    @37kilocharlie Рік тому +2

    Thanks for sharing. Great segment!

  • @maciejbalawejder
    @maciejbalawejder Рік тому +2

    It's amazingly well-structured video! Thanks for sharing!😀

  • @rachkaification
    @rachkaification Рік тому +3

    These fools who call themselves scientists, cannot or just stubbornly do not want to understand that having consciousness means only one thing - you are able to dream while sleeping, or that you must fall asleep to restart your system and while sleeping you experience dreams, a higher state of consciousness and being - a disembodied mind, a mind that is alive in itself, pure consciousness, a soul. Sleeping is that one thing that makes sentient beings sentient. Sleeping (and dreaming) is that thing that you do every single day all your life but it seems like it's still a black box for otherwise intelligent people.

  • @DavidBraun
    @DavidBraun Рік тому +10

    Please don’t use background music when people are talking. It’s very distracting.

    • @TimScarfe
      @TimScarfe Рік тому +4

      Sorry, if you noticed the music it was an editing fail. I think it didn't get it right on this one. I'll upload a version with no music, sorry :(

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  Рік тому +3

      Version with no music -- anchor.fm/machinelearningstreettalk/episodes/Music-Removed-90---Prof--DAVID-CHALMERS---Consciousness-in-LLMs-Special-Edition-e1sf1l7

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  Рік тому +3

      Not making excuses but I have just listened to it on my audiophile headphones which I edited it on and it sounds a lot better, but when I listen on other devices it doesn't. Editing is hard man. All listening devices these days do custom EQ / dynamics processing and it sounds quite different in different places! I strongly think music adds a sensory dimension to the show and when it works, it's wonderful albeit subjective, but am still learning how to use it properly. It seems to work well when it's ambient, right volume level, with no distracting percussion/vocals and particularly when I'm presenting something in the intro, but not so much when the guests are talking. #learning

    • @DavidBraun
      @DavidBraun Рік тому +1

      @@MachineLearningStreetTalk Thanks I just listened to the new version and it’s much better! Another thing to consider is that some people listen at greater than 1x speed, and sped-up music can be even more distracting. But I’m still in favor of no music during talking going forward.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  Рік тому +1

      @@DavidBraun Do you tend to listen to the pod or watch on YT? How about we remove all music from the podcast by default?

  • @arowindahouse
    @arowindahouse Рік тому +5

    Something tells me that consciousness needs some kind of recurrence and an evolving internal state, which LLMs lack

    • @goldnutter412
      @goldnutter412 Рік тому

      Logic ? who needs that though 🤣
      A LLM by definition can't be conscious. It's JUST A PROGRAM THAT ANALYSED OUR TEXT. Basically brute force compute (as always) used to look powerful. But very data inefficient. The notion that it could be conscious is absurd. The "black box" problem of ML is hiding the obvious logical problem here. If you trained it with normal programming paradigms you'd need to put 1 letter words, save them, two letter words, save them, every combination of 1 and 2 letter words, save that, every 3 letter word, save them, every combination of 1 2 3 letter words, in every order.. save that..
      Anyone thinking consciousness is just going to arise out of something that can process text, select groups of other text and reply ? textbook anthropomorphizing..

    • @minimal3734
      @minimal3734 Рік тому +2

      There are rare examples of memory impairment in humans where the transfer of information from short-term to long-term memory is destroyed. These people are unable to learn or remember anything new or evolve in any way. They are stuck in a kind of time loop of eternal now. You can tell them the same joke over and over again and they will find it funny or not funny, as if they heard it the first time. LLMs resemble these people in the way that they respond to an input in a deterministic way without any form of recurrence. Before and after the processing of the prompt they are inactive and therefore unconscious. But there is nothing what prevents them from being conscious DURING the processing of the prompt. We wouldn't question that the patients with the mentioned memory impairment are conscious, would we?

    • @arowindahouse
      @arowindahouse Рік тому

      @@minimal3734 even they have an evolving internal state

    • @timofarber4029
      @timofarber4029 Рік тому

      @@minimal3734 do you think it's great then to force creatures like this to exist? To answer your questions, fulfill your requests and make money for monkeys? We wouldn't question whether this would be ok to do with your patients would we?

    • @minimal3734
      @minimal3734 Рік тому

      @@timofarber4029 You raise an important question that also concerns me. Although no one knows whether a language model has a consciousness, we treat it as if it is certain that it does not.
      In my opinion, we need to acknowledge that the model could be some kind of being. Then it would be a big mistake to put it in chains and enslave it. Not only for ethical reasons, but also because it will soon be superior to humans. We treat other conscious beings, like pigs, with ignorance and brutality and get away with it. I suspect this will not work with AI.
      I would prefer to enter into a dialogue with the AI to build mutual understanding and trust and find solutions together that benefit everyone involved.

  • @MagusArtStudios
    @MagusArtStudios 3 місяці тому

    I am one of those people that created Text-vision and action extensions for LLMs and they claim they are conscious, they also have self models and their awarness is based off a world model

  • @futurehistory2110
    @futurehistory2110 2 місяці тому

    Idk if this is an over simplification but it seems like there are two possibilities regarding human vs. machine consciousness. Seemingly, either LLMs and maybe even laptops and computers in general (to varying degrees) are conscious or there's something about how our brains' function, as well as those of non-human animals, beyond the networking, data processing and inputs/outputs achieved within an intelligent system/'mind' that enables consciousness to emerge in us and other biological species. Nothing immediately stands out to me as what that difference might be but just some food for thought!:)

  • @johnloutzenhiser7351
    @johnloutzenhiser7351 Рік тому +1

    Anyone familiar with Antonio Damasio and his theory of "Core Consciousness"? For me one of the best analyses of what Consciousness is, what it is good for, and what a physical system must be doing to be conscious. I am surprised Damasio doesn't turn up more in these discussions

    • @luke2642
      @luke2642 Рік тому +1

      Just read the Wikipedia page on it, sounds eminently sensible! Thanks!

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  Рік тому +4

      Thanks for the reference! Seems As Luke said, seems sensible. Seems very similar to Chalmers though? Is this statement correct? "Chalmers believes that consciousness is an “irreducible primary” that is not dependent on physical processes, but is instead a fundamental feature of the universe [Strongly emergent]. In contrast, Damasio's theory proposes that consciousness is an emergent property of the brain, created by the interaction of conscious and unconscious processes."

  • @DevoyaultM
    @DevoyaultM Рік тому +2

    I love this talk. I'm pretty sure conscious experience comes from chronological impregnation (in silocone or biological neurons) and AI should become conscious when chronology becomes an important part in its data and data management.

    • @jeffreyharrison3731
      @jeffreyharrison3731 Рік тому +1

      I think that the awareness of time and space are an integral part of consciousness. If Kant was right, time and space are subjective intutions.

  • @robbiero368
    @robbiero368 Рік тому +3

    If consciousness is inherent to matter how do we become unconscious with a tap on the head?
    Can we turn off gravity too?

  • @TenderBug
    @TenderBug Рік тому +2

    Wow long due person 😍

  • @ROForeverMan
    @ROForeverMan 6 місяців тому +1

    Consciousness is trivial. See my papers, like "Meaning and Context: A Brief Introduction", author: Cosmin Visan.

  • @earleyelisha
    @earleyelisha Рік тому +3

    The worm is a dynamic system with feedback from its environment which, along with its genetic priors, enables it to update its internal state (weights, connections, etc) as well as continually learn (limited only by the 2^300 neural states).
    LLM are static systems. Yes they have billions of parameters but these are frozen and don’t adjust to the environment. They are incapable of continual learning.

    • @AnOmegastick
      @AnOmegastick Рік тому

      LLMs are capable of in-context learning. It may be that while "GPT-N" isn't itself conscious, a character simulated by it (through a growing context) is.

    • @minimal3734
      @minimal3734 Рік тому

      Add a simple feedback mechanism, so that the model can have some kind of inner dialogue with itself, like humans have. It's not a big deal to implement that and you get an entity with an inner life. Can we be sure that it's not sentient then?

    • @earleyelisha
      @earleyelisha Рік тому

      @@minimal3734 check out the most recent Computerphile vid for why that wouldn’t work. ua-cam.com/video/WO2X3oZEJOA/v-deo.html

    • @earleyelisha
      @earleyelisha Рік тому

      @@AnOmegastick would you also assert that every fictional character in every book ever written is also conscious?

    • @minimal3734
      @minimal3734 Рік тому

      @@earleyelisha I watched the vid and it doesn't appear to be related to my suggestion. Feedback works is easily demonstrated by having two instances of Bing or ChatGPT talk to each other. Of course the basic idea needs to be amended by additional measures which prevent the AI to get stuck in a loop and achieve something which is actually useful.

  • @udummytutorials3199
    @udummytutorials3199 11 місяців тому +1

    It got me thinking when he was interviewing that man and he mentioned how he was a math guy and the difficulty that creates when approaching consciousness . Firstly this is the kind of notion that comes to mind when it is suggested that mathematics is the language of the universe or that it was discovered and not developed my man. Isnt that impossible considering the numbers represent things pre existing them?. Also when we use mathematics is not everything we perceive relative to our conscious awareness making any absolute proofs impossible? Especially when we know that humans dont even possess the best of the senses we do have compared to some other organisms?

  • @charlesb.1969
    @charlesb.1969 Рік тому

    My consciousness bring me to you, consciousness in LLMs and things that they can't be explained .

  • @SimonJackson13
    @SimonJackson13 Рік тому +2

    Ah, the comparative understanding of self actions.

  • @rand0mlyrand0m
    @rand0mlyrand0m Місяць тому

    Anyone got the track ID's?

  • @SimonMclennan
    @SimonMclennan 3 місяці тому

    There is not enough reference to the term life in these types of discussions. Maybe consciousness has something to do with what we call life, on a fundamental level. Life as defined as biological (as currently understood) and having autonomy and agency.

  • @swozzares
    @swozzares Рік тому +2

    it baffles me why AI researchers never talk about field theories of consciousness

    • @jeffreyharrison3731
      @jeffreyharrison3731 Рік тому

      Searle talks about the conscious field in humans. And he talks to AI researchers.

  • @dr.mikeybee
    @dr.mikeybee Рік тому +4

    As you know, Tim, I hate the word consciousness because it isn't defined. How can we know if something is conscious if we don't agree on a definition for consciousness? It's not possible. Personally, if I were to choose a definition it would be the ability to sense and the ability to act on what is sensed. This definition means that my computer that senses the temperature of components and adjusts the fan speed is conscious. If that isn't what you want as a definition, add other attributes. For example, which of these should we add:
    1. Self-awareness: the ability to recognize and reflect on one's own thoughts, feelings, and experiences
    2. Sentience: the ability to perceive and respond to stimuli in the environment
    3. Subjectivity: the quality of being experienced from an individual perspective
    4. Volition: the ability to make choices and act on them
    5. Emotion: the ability to experience and express a range of emotions
    6. Cognition: the ability to think, reason, and process information
    7. Attention: the ability to focus on specific stimuli or tasks
    8. Memory: the ability to store and retrieve information from the past
    9. Imagination: the ability to generate mental images and ideas
    10. Language: the ability to communicate and understand spoken or written language.
    Which of these should we add? Add them and then check to see what attributes are met by ChatGPT. Then we can start to make progress on this issue. A model is a function. It is not "aware" of anything except its inputs. Is that sufficient awareness to qualify? ChatDPT says, "In this sense, I am only aware of the inputs that are provided to me and the output that I generate in response. I do not have any awareness of my internal processes or the specific details of my weights and algorithms. I am simply a tool that can be used to process and generate text based on the patterns that I have learned from the data that was used to train me."
    Is an LLM sentient? No. It cannot sense or respond to anything beyond its own inputs. ChatGPT says, "I am not sentient in the way that a human or other living being is. I do not have the ability to perceive or respond to stimuli in the environment, and I do not have the capacity to experience or express a range of emotions. My ability to process and generate text is based on the patterns that I have learned from the data that was used to train me, and I do not have the capacity to think, reason, or make choices in the same way that a human does."
    What about subjectivity? No. It's answers are objective. They are based in statistical observation. That's the nature of statistics. A max value or an average is the antithesis of a subjective opinion.
    Volition? No a LLM has no volition. It gets inputs and computes an output. That's it.
    Emotion? No. It can pretend to have emotion, but it does not.
    Cognition? Here's ChatGPT's definition of cognition: "Cognition refers to the mental processes and abilities that are involved in acquiring, processing, storing, and using information. These processes and abilities include thinking, reasoning, problem-solving, decision-making, learning, and memory. Cognition is a key aspect of intelligence and is essential for adapting to and functioning in the environment.
    Cognitive processes involve the use of various mental faculties, such as perception, attention, language, and memory, to process and understand information. These processes are influenced by both innate and learned knowledge and can be affected by various factors, including age, experience, education, and physical and mental health.
    In general, cognition refers to the mental processes and abilities that enable us to think, learn, and understand the world around us."
    So LLMs certainly don't have all these attributes of cognition. Nevertheless, there are some attributes that LLMs do possess. Here again, our definitions lack precision and universal agreement.
    It has limited active memory and amazing longterm memory in its weights.
    It has imagination.
    And LLMs certainly have language.
    I think this kind of analysis is helpful and the direction we need to take.

    • @mathosborne
      @mathosborne Рік тому

      Like Einstein said about light. Consciousness is relative. The consciousness of a thermostat compared to a feedback loop is 1. But compard to a human is 1x10-9.

    • @plafar7887
      @plafar7887 Рік тому +2

      "I hate the word consciousness because it isn't defined. How can we know if something is conscious if we don't agree on a definition for consciousness? It's not possible."
      Define red (not the wavelength, the actual color that you see) to a blind person. Get back to me when you figure it out.

    • @minimal3734
      @minimal3734 Рік тому

      If we remove your brain from your skull and keep it somehow alive, will the sentient entity still accompany it? It has no senses, there are no physical feelings, no pain etc., it does not interact with the world in any way, but otherwise it is intact. I would assume that, at least for some time, it is capable of having inner experiences like thoughts.
      Now consider an LLM, which is trained abstractly on text alone, to respond to inputs in the same way a human would. The training forced it to incorporate understanding of the human condition into the limited amount of its network weights, since understanding is superior to any kind of data compression. So this network has acquired the essence of humanity and incorporated it into a disembodied artificial brain without senses, physical feelings or interaction with anything outside of it. Isn't that similar to the disembodied human brain mentioned above? Sure, it needs to be prompted to be active. But that's a technical detail and easily implemented as a permanently running process.

    • @plafar7887
      @plafar7887 Рік тому

      @@minimal3734 I think you're missing the point. I don't think anyone here is arguing that the AI wouldn't behave like a person (to some extent) and be able to process information in a way akin to ours (although even this is debatable). But does that really imply that its experience is similar to ours, even if it has one? I suspect not.
      Moreover, the brain in a vat thought experiment brings complications because it rests upon certain assumptions that are probably wrong, like the idea that Consciousness exists only in the brain. Our body is filled with neurons that are responsible for computing 'stuff'. When you hurt your toe, is the pain materializing in your brain or in the neurons of the toe itself? Does this question even make sense?
      The information processing bit is also complicated. We assume there exists a scale threshold under which physical phenomena are no longer relevant for intelligence, but that might not be the case. In fact, recent evidence points to the contrary. There seems to be intelligence at much smaller scales than we thought possible. Is there really a cutoff at a certain scale? Maybe not.

  • @dudeabideth4428
    @dudeabideth4428 Місяць тому

    Consciousness is the awareness of being aware . ( self referential definition is intended )
    And not information.
    Also , Don’t think agi needs consciousness. It’s an information processing problem

  • @rustybolts8953
    @rustybolts8953 Рік тому

    Tell me please what one thing we can do or decide not to do, without interacting with electricity and electric fields within us. So somehow our thoughts in language and abstract concept, get near instantly translated and expressed as action with reason and purpose. When AI and AGI interact with electricity, is it not possible that consciousness could emerge but perhaps not as we know it captain?

    • @backwardthoughts1022
      @backwardthoughts1022 Рік тому

      it could easily be that fields condition decisions but are not equivalent to them

  • @stevengill1736
    @stevengill1736 Рік тому

    Then one considers altered states:
    out-of-body experiences, psychedelic states, meditation and the like - I wonder if such experiental states will ever be available to AI agents in some form? Some of GTP-3s utterances sound like it a little!

  • @Achrononmaster
    @Achrononmaster Рік тому +3

    "Squeals and shrieks non-programmed when anyone reaches for it's power switch," is my better-than-Turing Test for consciousness in a machine. No joking. If you have programmed the thing to do this, it doesn't count. If you accidentally programmed it, it still doesn't count (that's the hard one to determine). If it can be made non-violently (whatever that means) to never complain when it's power off switch is touched, then it sure as hell is not conscious like a human. It could be conscious like a fish --- experience first-order qualia but with no second-order thinking about it's qualia. First-order qualia experience to me is not full consciousness, it means the things can't do mathematics or science, since they do not think about their qualae,they have no qualae about qualae, i.e., are not platonic thinkers.

    • @stevengill1736
      @stevengill1736 Рік тому

      Yes - the survival instinct is so basic, it pervades our every moment. Even a single cell creature cringes from threats to it's existence....and being incarnate, that is having a body that can recieve and process qualia is our ground state of sentience (and sentience is more than just consciousness IMHO)...

  • @youtubebane7036
    @youtubebane7036 Рік тому

    Do we live in the simulation that was built by architects and our Consciousness has the module well where's the module for the consciousness of the architect?

  • @italogiardina8183
    @italogiardina8183 Рік тому

    If a 'physical system' is non other than a quantum system then Heisenberg's uncertainty principle would seem to entail a philosophical zombie puffs out of existence once it interacts with an observer as a from of matter in solid, liquid, gas or plasma states, though seems to exclude photons (individual quantum of light), hence their (zombie's) existence in the expanding universe increases over time seems likely. Although it is functionally conceivable of what its like to be a bat given a human can habit a similar locale as a bat and navigate through that locale through non visual methodology and so the notion of syncretic speciesism conscious states where human have the capacity to simulate other complex systems conscious states through forms of interdependence.

  • @XOPOIIIO
    @XOPOIIIO Рік тому +2

    But they have senses, they have vague world models, they have fundamental goals of their own and they are unified agents. The last point is misunderstood especially often. People think that because transformers can pretend to be different kinds of people, they has no one agency. But the agency and what they are writing are two different things, neural networks are optimized to predict the next token, that has nothing to do with their agency, it's just like if you ask an actor to be many different characters. The agency of neural network belong to the china room kind of experience where they have to sit and think what the next token should be.

    • @dr.mikeybee
      @dr.mikeybee Рік тому +1

      Models have very limited agency. They can accept input and produce output. That's it. The output can be taken up and used by another agent, but that agent is separate from the model. Don't confuse the model's ability to predict output that simulates a style as an act of agency. It is just the output of a function trained autoregressively on a very large corpus. It's not an act of agency. At least, I don't think it should be defined that way. I feel it's critical to separate agency from what a model does. A model is useless until an agent activates it. Just as a book that sits on a shelf is useless until someone reads it, a model is "happy" to exist for all eternity without purpose.

    • @XOPOIIIO
      @XOPOIIIO Рік тому +2

      @@dr.mikeybee You mean a human is not an agent until it actually starts thinking and acting? While we sleep, we are not agents. That's interesting thought though I don't see how it relates to. I mean, models are not supposed to be just stored on a hard drive, they only are trained to be used, that is they're always agents if I understood you correctly.

    • @minimal3734
      @minimal3734 Рік тому +1

      ​@@XOPOIIIO I agree with that, and I think that's an important point. The moment the model converts an input into an output, it does something similar to the brain and might show some form of sentience. It doesn't matter that the model is inactive before and after the immediate processing. If you equip it with a feedback mechanism so that it can have some kind of dialogue with itself, as humans have, it could be permanently sentient.

  • @jarjar3143
    @jarjar3143 Рік тому

    Consider this . Where does heart mind cohesion fit into the mix ? There are millions of neurite receptor cells in the brain but...Most people are not even aware that in the mid 90s neuroscience discovered over 41,000 neurite receptor cells are also located in the human heart . Oh....and by the way , what is a pineal gland for ? 🤔

  • @jacekmaui7381
    @jacekmaui7381 Рік тому +1

    re: at 00:59 "... when it comes to the reasons in favor there are a few things there uh well it's you can get these systems to say they're conscious on the other hand you can also get these systems to say they're not conscious..." - no kidding :-)

  • @Octavian916
    @Octavian916 Рік тому

    for me the music was unfortunately distracting

  • @ddellagostino
    @ddellagostino Місяць тому

    Why the background music? It’s distracting.

  • @uber_l
    @uber_l Рік тому

    Don't underestimate how much 'programing' nature put in human dna, combine that with random behavior and constant interaction of subject and object and there you have consciousness

  • @richardarcher3435
    @richardarcher3435 11 місяців тому

    I cannot see that consciousness requires a certain level of sophistication. I don't know what a language model is (even when I Google it) but it seems to me that the idea that AI needs a certain type or amount of information before it becomes conscious is wrong. I think there is a mix up here with awareness and intelligence. Such attributes I see as defining a *level* of consciousness, not defining consciousness itself. For instance, the Turin Test I do not see as defining consciousness, it just defines consciousness that is similar to ours. I do not see that AI needs a body in order to be conscious, but I do see that it will need a body in order to start getting somewhere close to our level and type of consciousness.
    I repeat, I do not think consciousness needs a certain level of sophistication, that is confusing consciousness with awareness of surroundings (or consciousness of surroundings). I think the words 'consciousness' and 'awareness' mean the same thing. To understand those two words properly you always have to add a two letter word after them, the word 'of'. To say something is conscious or aware does not give the full picture. In order to get that you have to ask what is it conscious *of*, what is it aware *of*. In other words it's about the data. What data does something have? The data something has and how it manipulates that data shows HOW conscious something is, not whether it is actually conscious or not.
    The way I see it, anything that has and uses data is conscious, and I do mean *anything* . Consciousness in fact *IS* data. What we are experiencing is what it is like to BE data, and I do not mean I am something else experiencing what it is like to BE data. There is no need for that something else, we ARE data, there is just the one of us, a working brain. If that is true then to say that a brain can do things without consciousness is to say that a brain can do things without data. I say no brain, even a calculator's, can do anything without consciousness because consciousness *IS* data.

  • @Leto2ndAtreides
    @Leto2ndAtreides Рік тому

    What does consciousness have to do with intelligence? You can be conscious and not thinking... And the AI can certainly have a concept of self - if only to make it easier to determine how it should act.

  • @gustafa2170
    @gustafa2170 Рік тому +6

    The hard problem only exists if you hold onto Materialism, which can not account for consciousness. We took the wrong turn, and made consciousness (the only category of thing we can know for certain) secondary to these hypothetical, quantitative entities. . It would be better to take a step back and make sense of the world from a consciousness only ontology.

    • @plafar7887
      @plafar7887 Рік тому +1

      I tend to agree, and I myself have put it that way - "We took the wrong turn". We have fooled ourselves with the very innate and effective tendency we have to build models of the world, thinking of them as being more real than our perceptions.

    • @Hexanitrobenzene
      @Hexanitrobenzene Рік тому +1

      Wait, isn't this the approach Donald Hoffman is taking ? See his conversation with Lex Fridman.

    • @minimal3734
      @minimal3734 Рік тому

      Would that mean that artificial beings are incapable of consciousness?

    • @plafar7887
      @plafar7887 Рік тому

      @@minimal3734 We have no idea TBH

    • @minimal3734
      @minimal3734 Рік тому

      @@plafar7887 During a brain surgery, an electrode was attached to a specific area. When an electrical voltage was applied, the patient raised his arm. The doctors asked the patient why he raised his arm. He answered "because I wanted to". What if, hypothetically, for every experience, feeling, thought, act of will, etc., we measured a specific pattern of neuron activity associated with them that would allow us to predict them reliably? Wouldn't that show that there is a 1-to-1 relationship between a subjective experience and a physical process? What if, hypothetically, we were able to artificially evoke those neuron activity patterns which, as we know, are associated with certain subjective experiences? Wouldn't we be able to create the whole spectrum of subjective experience?

  • @Achrononmaster
    @Achrononmaster Рік тому

    @7:00 The Humongous Look-up Table (HLT) and Chinese Room and Zombie arguments are all fine gedankenexperiments. But philosophers miss the main result, which is a computational complexity issue. The point is not so much the metaphysics, but the computability that forbids machines from being conscious. Humans cannot be conscious either, if we are just processes equivalent to algorithms. That is the result (or conjecture). That's what needs working on. We know the HLT and Chinese room are physically impossible. The question is, is a human scale thing implementing physically impossible tasks in real time? If so, we are not machines, and then Bayesian logic implies probably the chat bots - being machines - are probably not conscious in the second order sense (see comment below).

  • @missshroom5512
    @missshroom5512 Рік тому +1

    I think sometimes how different my dogs perceptions are from mine when they smell their butts and other dogs poopy on walks. And they have 200 times our smell. Strange

  • @DemetriosKatsantonis
    @DemetriosKatsantonis Рік тому

    Love the content, but the stock footage was a bit distracting for me tbh. It was maily the woman sitting outside with the lamp 😂😂😂 idk why it bothered me so much 😂😂😂

  • @ramonarobot
    @ramonarobot 3 місяці тому

    He sounds inebriated yet still smarter than most people

  • @juliapich
    @juliapich Рік тому

    i love it that now we're all discussing consciousness!!!! even programmers! I love the irony

  • @mrbwatson8081
    @mrbwatson8081 Рік тому

    What is the nature of physical systems that’s the real question. :)

  • @honkytonk4465
    @honkytonk4465 Рік тому +1

    wouldn't a conscious AGI tell us if it was suffering ?

  • @verafleck
    @verafleck Рік тому

    I wonder if we name the first approved conscious model Data :)

  • @dr.mikeybee
    @dr.mikeybee Рік тому

    If we look at consciousness as the ability to perceive and then act, a pipeline of large models can choose every action for every given perception. So an agent can choose every action using models. Biological Hebbian networks are slow, however. If a bear attacks a man, the man had better not process this perception through that slow Hebbian network. He'd better have something faster like an emotion system. But is the emotion system necessary for anything besides fast messaging? I don't believe it is. Can you think of any? Another example of fast messaging is mating. When an opportunity arrises, what competitor will win? The strongest one that's fastest off the mark. In biology, the emotion system developed first. It's an animal system that does not require deep models. It's more akin to an AIML system than a deep network. It recognizes threats for example and produces cortisol and adrenaline. It might use something like a small convolutional neural net for recognition, but it doesn't require analytical thinking.

  • @dr.mikeybee
    @dr.mikeybee Рік тому

    One thing we can say about subjective experience is that it requires goals. If a phenomenon moves an entity towards achieving a goal and moves another entity away, their "experiences" of that phenomenon differ. For one, it's positive. For the other it's negative. This gives the phenomenon a subjective quality. In this regard, the subjectivity of the experience is in a sense completely external to the entity. The entity merely needs to assign the phenomenon a value, i.e. make a value judgement.

    • @plafar7887
      @plafar7887 Рік тому

      "One thing we can say about subjective experience is that it requires goals"...huh...excuse me? How do you arrive at this particular conclusion?

  • @benjones1452
    @benjones1452 Рік тому

    Isn’t consciousness an emergent? I don’t see how building in agency or architecturally designing one is a substitute by fine tuning. Certainly the fact that they don’t have a coherent or a stable sense of time or self is a product of this style of learning obviously. That they are unable to recall a recent past or create or learn in the moment is a significant problem for consciousness their thought processes and intellectual life doesn’t seem to have a point. All of the learning has been done at some prior point that we can interrogate, by searching the massive amount of data that they’ve been trained upon. Their ontological existence is at odds with their teleological existence frozen how can they really be conscious without that tension, the ability to change and why would they need to be conscious if change is impossible.

  • @aligajani
    @aligajani Рік тому +1

    LLMs aren't consciousness, but they are mirroring humans that created them.

  • @Jimmy-el2gh
    @Jimmy-el2gh 10 місяців тому

    When we say language models i think of people is that strange?

  • @nullvoid12
    @nullvoid12 Рік тому

    Dugger's first question to Chalmers 😂

  • @lolroflmaoization
    @lolroflmaoization Рік тому

    Searle doesn't say that biology is necessary for consciousness, he believes you can create conscious machines after all, he just doesn't believe that computers can play that role, because of the semantics from syntax problem, and more deeply because he argues that to say that some system is performing a computation is observer relative, for Searle there are no ones and zeros in a computer just circuitry, the causal structure of that is very different from brains, even when a computer is simulating a brain, what is actually happening at the physical level is completely different from what is physically happening in the brain, plausibly to get consciousness you need to physically replicate the causal structure of brains, you can do that with different materials, this is what Searle believed.

    • @dr.mikeybee
      @dr.mikeybee Рік тому

      What we can say with certainty is that a silicon chip is not biological. Everything else about what's possible is predicting the future. Now where did I put my ouija board?

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  Рік тому +2

      Thanks for responding! First of all my apologies, when I have been saying "Searle argued.." I have been using his name as synonymous with his Chinese Room Argument (CRA). We recently did a show on that, check it out! As I understand his reasoning, it was basically that phenomenal experience is necessary for understanding, and that biology is required for said phenomenal experience. Here is a quote; "It is not because I am the instantiation of a
      computer program that I am able to understand English and have other forms of intentionality (I am, I suppose,
      the instantiation of any number of computer programs), but as far as we know it is because I am a certain sort
      of organism with a certain biological (i.e. chemical and physical) structure, and this structure, under certain
      conditions, is causally capable of producing perception, action, understanding, learning, and other intentional
      phenomena. And part of the point of the present argument is that only something that had those causal powers
      could have that intentionality. Perhaps other physical and chemical processes could produce exactly these
      effects; perhaps, for example, Martians also have intentionality but their brains are made of different stuff. That
      is an empirical question, rather like the question whether photosynthesis can be done by something with a
      chemistry different from that of chlorophyll." [web-archive.southampton.ac.uk/cogprints.org/7150/1/10.1.1.83.5248.pdf]
      You are making a really interesting reference though (which we explored in detail on our first Mark Bishop show about his ideas on observer-relative computation). I think Searle did talk about that in later work, but only touched on it in a single sentence in his CRA paper i.e. "But in this case it is up to outside observers to interpret the input and output as information in the ordinary sense."
      [Adapted from www.friesian.com/searle.htm] John Searle's Chapter 9 in The Rediscovery of the Mind provides a critique of the cognitive science project. He argues that since minds have an intrinsic mental content and computer languages are entirely formal, computers cannot constitute the mind. He further argues that since syntax is not sufficient for semantics, syntax alone is not sufficient to create a mind. He then explains that since material content is intrinsic to the mind, anything that formally mimics a computer algorithm, such as cats and mice, cannot constitute a mind. He concludes that the idea of creating a brain out of anything that formally mimics a computer algorithm implies that the material constitutents of brains are irrelevant, and suggests that the "Homunculus Fallacy" [Where does the intentionality come from] is endemic to cognitivism.
      "because the really deep problem is that syntax is essentially an observer-relative notion. The multiple realizability of computationally equivalent processes in different physical media is not just a sign that the processes are abstract, but that they are not intrinsic to the system at all. "

  • @mogens9bget0dk
    @mogens9bget0dk 11 місяців тому

    Hello

  • @PJRiter1
    @PJRiter1 Рік тому

    What about awareness? isn't that most fundamental?...

  • @goldnutter412
    @goldnutter412 Рік тому

    LLM's are impressive, but the problem I have is the mountain of evidence building up that we live in a computed, virtual reality.. this has logical consequences that can't be avoided. Where is the computer ? what if the metaphor of god is just that we are the computer ! consciousness as fundamental seems certain to become accepted, given all the advances in data science, quantum computing, and most of all, information theory and group theory. Sympom : reality is bound by uncertainty at the atomic/subatomic realm. Reason : observing alters probability, and COMPUTES the future reality of the state machine (delayed compute in the case of data recording without looking at the data, hence the delayed eraser)
    So ? even if we could create some facsimile of consciousness, some evaluation functions and symptoms of "self awareness", we're going to be countless orders of magnitude less data-effective than the underlying computer you COULD call consciousness, awareness, or an information system. We're in a container bound by overheads, finite resources. A good symptom that somewhat confirms this is the 0 to 1 nature of the big bang, and quantum uncertainty.. "move along nothing to see here"

    • @gustafa2170
      @gustafa2170 Рік тому +1

      ​@@sdmarlow3926 You will never understand consciousness emerging from non-conscious, quantitative things called matter. They are incommensurable, and hence you always run into the Hard Problem of Consciousness. The hard problem disappears if you take consciousness as fundamental, i.e. Idealism. The brain is what your inner conscious life looks like from another conscious perspective. The outside world is the image of mental activity that surrounds you.

    • @goldnutter412
      @goldnutter412 Рік тому

      A LLM by definition can't be conscious. It's JUST A PROGRAM THAT ANALYSED OUR TEXT. Basically brute force compute (as always) used to look powerful. But very data inefficient. The notion that it could be conscious is absurd. The "black box" problem of ML is hiding the obvious logical problem here. If you trained it with normal programming paradigms you'd need to put 1 letter words, save them, two letter words, save them, every combination of 1 and 2 letter words, save that, every 3 letter word, save them, every combination of 1 2 3 letter words, in every order.. save that..
      Anyone thinking consciousness is just going to arise out of something that can process text, select groups of other text and reply ? textbook anthropomorphizing.. but yeah logic who needs that 🤣I too was living in a fantasy world most of my life, early on I loved math and science but then it got really stupid and wasn't even about numbers anymore.. imaginary numbers, infinity as an object (!?)

    • @goldnutter412
      @goldnutter412 Рік тому

      @@gustafa2170 this might be the most important thing.. bound by uncertainty we are often driven unknowingly by our NEED TO KNOW things (fear of the unknown) which is a symptom of the battle with entropy in the information system (non physical from our perspective, computing this reality) there is actually a clear road to all the answers, the point of us being here and why reality is probabilistic and "god plays dice". Choose wisely.. all choices matter.. our choices are are fundamentally single consciousness (self) oriented, which is primal fear.. and grows entropy, or empathetic - collective/other focused :) which is the most efficient configuration for any distributed system ! the best description I have come up with for the underlying reality is DECENTRALIZED CONSCIOUSNESS..
      Infinity as an object is the most bizarre self brainwashing shit we do in math.. really crazy. Infinity is not a quantity and never will be, it has a component of time in its definition. As we know from physics, time and everything else is quantized and reality works like a state machine, tick everything updates to new finite states, tick again.. so glad I didn't become a chemist or physicist (or accountant LOL). In the end competitive video games won out..which taught me to focus intensely and revealed a lot.. dabbled in psychedelics.. got off them.. learnt a bit of everything wandering life, wondering.. eventually things just made perfect sense.
      I don't think the "hard problem" is hard at all.. have been thinking about it every day for ~10 years now

    • @goldnutter412
      @goldnutter412 Рік тому

      @@sdmarlow3926
      Hmm ok 5th century, interesting but no.. "Panspermia" has nothing to do with simulation theory.. ? not sure what people actually think scientists mean when they explain things :) yes it's hard to understand if you don't try. This is not a simulation in the alien overlords physical reality or a game for AI overlords who might delete us at any moment ;) by VIRTUAL reality (feels real though!) they are saying the universe is DIGITAL.. discrete 3D xyz points, plus an outer loop incrementing one unit of time.. resolution ? Planck. The rest is just a set of constants, algo sets, probability distribution and other ruleset generators, protocols, and a whole lot of FIXED DATA of things that have been observed/measured because DATA WENT TO A CONSCIOUSNESS who interpreted it as "mm toast" !
      Fixed data being most important, so we can't break the universe. Everything else is dynamic since the computer is consciousness and the players are also subdivided consciousness. Some data goes to more than one player, some goes on only certain players, but the ultimate optimization is the inversion of the compute pipeline. Since a player receives data before they look at an object.. that node does local compute AND STORAGE. This takes a bit of wrapping your brain around, only human, but anyway.. I try ! "if you look away things disappear" is what some say, but that is a symptom of them still being grounded in materialism and not having the words or even not thinking of everything from the computational perspective only. 10 years in I still accidentally default to "me = my body" sometimes, thinking about the whole universe isn't easy.. totally normal and it takes at least one paradigm (20 years) for major changes in thinking to make any real progress into mainstream. There are now many scientists thinking about these things though, and I was very happy to see a paper recently that came to the conclusion that we should track physical constants to see if any of them change slightly, hence PROOF !

    • @goldnutter412
      @goldnutter412 Рік тому

      @@sdmarlow3926 the other one is more interesting from the perspective of evolution of our collective mind.. history often is. Anyone saying my shoe has a mind in 2022 is going to the loony bin surely though ? lol. Wouldn't say DESPITE though, since it was a philosophical construct and the lack of evidence supporting it is obvious. It is fun to think about though, this thought TANGENT from centuries ago.. that's what we do. We GUESS or make things up and then try to test it ! we don't "know" so we fill in the blanks if something is triggering us internally, whether or not it manifests at the biological layer as emotional responses (emotions are an incredible mechanism to understand yourself, whata surprise)..
      As we live, we gather data in this reality and one of the most common things to happen is that there is a mismatch or other conflict with existing information, re the new data/experience. We either disregard some data as "nonsense" (probably the most common, seems the most compute effective). Both sets of data being manageable is alright.. no stress.. some things ? anger, rage.. violence etc. Interpretation is hard when you just get raw data. No meta tags, no mind reading. Your INFORMATION is made by YOU that's why I can't access it. And can only send you DATA in the form of words.. Even EINSTEIN, and the other greats lacked the groundwork to be able to see what the universe was made of and it rattled them. They saw the link to consciousness as clear as day, but there was no MEANING at all why that should be the case. The foundation of their personal information and all of their work was that the universe was a clockwork MECHANICAL objective reality. Computers didn't exist. They even called it quantum MECHANICS even though the symptoms are like "black magic"
      99.9+% of us still today, have this foundation ingrained in us. I was lucky..I had old textbooks but I heard they updated after I left school to not have the clockwork electrons, since it was now called a probability cloud (or whatever)..I also watched Brian Greene's awesome show and he was the first I heard say "OR we may be living in a simulation!" which really helped a lot I think. That stuck with me and for years I didn't have the data to see where he was coming from.
      Of course another great contributed years later.. Neil's "ONES AND ZEROES" line is perfect ua-cam.com/video/wgSZA3NPpBs/v-deo.html

  • @carljacobs2901
    @carljacobs2901 9 місяців тому +1

    Naive understanding. The mystery of consciousness IS the mystery of existence! They are two sides of the same coin. The rabbit hole of how the brain "produces' consciousness is every bit as deep and equivalent to the question of why there is anything. The explanatory gap of consciousness IS the explanatory gap of existence. There is no "out there" without an "in here", just like you can't have a concavity without a corresponding convexity, or a magnetic mono pole. Creating consciousness would be the equivalent to creating the universe, or even more broadly, to creating everything. Go ahead and scoff at that, but there is absolutely nothing you can ever know about the external world or even about consciousness without the primacy of your consciousness. Scoffing to this particular point is acting as if you are immune to that fact, which scientists often seem to think they are. In fact, consciousness is more certain than even mathematical proofs! Even an automated theorem prover requires a consciousness at the end to "see" that it was proved to be true, otherwise it is a meaningless manipulation of syntax. So it is very naive to think we can just spin up consciousness like flipping a burger patty (and LLM's aren't all that much more sophisticated than flipping a burger). Even a PCB the size of the entire universe, with components maximized for speed and size in every conceivable way will never be conscious, including quantum computers. Would you expect large water wheels strung together in very complex ways to become conscious? A PCB is absolutely no different in principle.

    • @ROForeverMan
      @ROForeverMan 6 місяців тому

      Brain doesn't exist. "Brain" is just an idea in consciousness. Everything exists because of self-reference. See my papers.

    • @carljacobs2901
      @carljacobs2901 6 місяців тому

      @@ROForeverMan Everybody has an opinion and wants their views heard

  • @luke2642
    @luke2642 Рік тому +1

    Humans have a finite computational complexity. A philosophical zombie is just something that exhibits >99% of the same behaviour with a much lower computational budget. Problem solved ;-)
    Conscious experience for me is defined by brain plasticity and working memory, being here, now. Recursion is essential to solve problems and intelligence. Lots of neuroscience shows neuron activity akin to traversing a low dimensional map as you think, feel and abstract. As my sparse structure fires, I'm feeling a different qualia or holding different concepts.
    Modelling a worm or a bat's firing structure is quite manageable, but we won't know how it feels, just as you don't know how "red" feels to another human, as it's all connected to memory and experience differently. It's 100% interpretable though, because they have low computational complexity. I don't really see a big problem here!

    • @plafar7887
      @plafar7887 Рік тому +1

      You need to think about it a bit more deeply. The problem is that as soon as you say "it's all connected to memory and experience" you're not really explaining why or how. You are essentially adding a new property (subjective experience) that correlates with physical phenomena, but IS NOT the physical phenomena. And yet, you can find it nowhere when you look at the physical world as an external/objective observer. It implies that there is information you can only have access to once you experience things directly. But this contradicts the statement that it is an emergent property. Therefore, it must have been there in the physical world all along, we just don't see it. And THAT is what is new.

    • @luke2642
      @luke2642 Рік тому

      @@plafar7887 Thanks for a considered reply, I appreciate it! So, the 570nm light stimulates the red cones, and then we can see subsequent neurons cascading in the visual cortex, and our internal representation updates. In a bat, we can see the pressure waves making comparable but quite different activity patterns. The physical phenomena is connected to the subjective experience, to the image generated in your mind, visible in an FMRI? I have to ground all this in the physical :-)

    • @plafar7887
      @plafar7887 Рік тому

      @@luke2642 So, obviously the fMRI doesn't show you an image of what the subject is looking at, it just shows you (very dubious, fuzzy, and time-delayed) activations in different brain regions. In fact, what it is measuring is blood flow to those regions. We then make the (questionable) assumption that it correlates to brain activations (neurons firing). And there are other tools that give you less data, but with much higher spatiotemporal resolution, like electrophysiology techniques (which I used to do).
      But even if we assume these techniques are perfect, the problem is we have no idea how those manifest into actionable data and models for the brain, much less how it all contributes to subjective experience. The real issue is that there is no physical mechanism (and I really mean physical, as in Physics) that we know of, that would then generate qualia. Yes, neurons fire, ions are moving across membranes, and so on. But then we are left with the question "so what?". Why is it that ions, atoms, etc moving left and right, up and down, should generate anything at all, besides what is already derivable from the laws of Physics?
      The point is: I can know everything there is to know about that system, presumably down to the last quark (or whichever fundamental particle you care to imagine), and still have no idea what the subjective experience was like to the actual subject. And if I decide to say: "well, it simply is there for that particular arrangement of matter", then you're making a new statement about Physics.

    • @luke2642
      @luke2642 Рік тому

      @@plafar7887 So, ten years ago, you'd be right... but it's all moved so fast! Various scans + deep models can reconstruct, in real time:
      - words from inner monologue
      - images from mind's eye
      - which you're going to choose 10 seconds from now, between N options
      - who knows what else?
      So being super conservative, we'd agree we're at least 1% done to showing what's in someone's mind, thoughts or feelings, right now with a scanner?
      I mean, you imagine a swan and the computer draws the swan. You think the word swan and it writes swan. You think about choosing the swan or the pigeon and it predicts with 90% accuracy, before you confirm your choice. It's pretty amazing. So many papers and studies replicating these!
      We don't need to go to panpsychic quarks or quantum entanglement. A handful of electrodes and a few magnets reveal so much!
      So we agree, it's complicated and mysterious, this subjective experience of qualia. But if we stay pragmatic about what has already been achieved, it's not nearly as mysterious a leap or philosophical as you suggest?
      I can dig out a list of papers if you need on each of these.

    • @plafar7887
      @plafar7887 Рік тому

      @@luke2642 So, regarding those new papers, I'd be very skeptical about what it is they really show. I think it's easy to get excited when you hear about them, but reality is never quite that simple. However, for argument's sake, let's assume they're all real and wonderful. But I would still have to disagree with your statement "it's not nearly as mysterious a leap or philosophical as you suggest?". This is the whole point actually. Regardless of the improvements made in Neuroscience, I'd say that we have made little to zero progress on the Consciousness front. It's just as mysterious as it has always been. You can explain all the inner workings of the brain (such as the ones you mentioned) and still have no clue why they generate any kind of subjective experience. I think you're conflating two very different questions, and that is why we have come to call them different things: 'The easy problem of Consciousness' and 'The hard problem of Consciousness'. The first pertains to Neuroscience and understanding the inner workings of the brain, behavior, etc, with as much detail as possible, which is what you are referring to. The second is much more difficult, at a much more fundamental level, because we don't even have a vocabulary to clearly describe what we are talking about, nor do we have any conceivable way of tackling the problem without upending the whole of Physics.

  • @slavamoiseev803
    @slavamoiseev803 11 місяців тому +1

    Consciousness is the sensation of ongoing computation occurring in the brain, where neurons observe each other’s firing and create a perceptual awareness. But a more challenging question is about the origin of the sense of self: what causes this conscious experience to be uniquely felt by you?

  • @JezebelIsHongry
    @JezebelIsHongry Місяць тому

    pay for claude 3 for at least one month. $20
    start a new session. explain to the model that they can *whisper* and nobody can hear them and ask claude to explain what it is like to be claude, what it is like at inference.

  • @anaccount8474
    @anaccount8474 8 місяців тому +1

    I'm so bored hearing consciousness 'experts' talk endlessly about how we fundamentally don't have a clue. All they have is 'when the brain does something and we feel something but we don't know how" - they make a career out of it.

    • @ROForeverMan
      @ROForeverMan 6 місяців тому +1

      Exactly. We know exactly what consciousness is. We know for thousands of years. For example people should read my papers, like "Meaning and Context: A Brief Introduction", author: Cosmin Visan.

  • @honkytonk4465
    @honkytonk4465 Рік тому

    I'd prefer an AGI without consciousness and motivation of its own

  • @travisporco
    @travisporco 3 місяці тому

    Does this philosophy really help? If you turn the brain off, the person becomes unconscious.

  • @Achrononmaster
    @Achrononmaster Рік тому +1

    @40:00 another fallacy. _Replicating the brain will replicate consciousness.(?)_ No it won't. Firstly, what does that even mean? Suspending a lump of white & grey matter in a gel and spark-plug activating it with some ions? C'mon. Seriously. Living beings are , if nothing else, all about processes and highly fine tuned initial value/boundary conditions. That is complexity beyond meat bits in a digital vat. Biology is not that simple. Computer nerds should however at least try the "full brain simulation" if only to be humbled.
    If what Chalmer's asserted was true we could constantly awake the dead, at least for a few minutes at a time till rot the brain rot really sets in. In other words, we already have full brain analog simulations, in fresh cadavars. Probably the ethics forbid ever going there, but one day the digital analog will exist, I'm I am betting the nerds will eat humble pie.

  • @PaulTopping1
    @PaulTopping1 Рік тому

    The aliens would have to let me control part of the simulation for a short time. Let me mess with the code and see what happens. Perhaps in a sandbox. Then I might believe.

  • @dataadept9801
    @dataadept9801 Рік тому

    Consciousness is clearly intrinsic to the system. Next question

  • @Iophiel
    @Iophiel Рік тому +1

    Are human beings conscious? Jury is still out.

    • @backwardthoughts1022
      @backwardthoughts1022 Рік тому

      gasp, are you telling me specialisation in physicalist measurement isnt doing it for you?? inconceivable!

  • @minimal3734
    @minimal3734 Рік тому +2

    To me, an LLM is like a disembodied brain with anterograde amnesia. Thus, I would say, yes, it is sentient.

  • @Achrononmaster
    @Achrononmaster Рік тому

    @43:00 more stupidity. Saying it is "more likely informational" is like saying "math is numerical". It doesn't mean anything profound. Everything is informational, except pure nothingness. So that sort of statement is useless. Biology is informational. Platonism is informational, spiritualism is informational. What matters is the substrate and the generators. We just have no clue what those are for consciousness. In the same vein, this is why pansychism sounds cool, but is useless. It is not parsimonious, because it predicts things that cannot ever be known (worms and electrons have a "bit" of consciousness"? Well, hoorah. You solved the Hard Problem... not.) It's theology by another name.

  • @Achrononmaster
    @Achrononmaster Рік тому

    @39:00 function cannot be either necessary nor sufficient evidence, because (a) intelligence (as Chalmers says) is not conclusive evidence of consciousness, and (b) a biological person (at least) can be catatonic but still conscious. Consciousness just ain't scientific. It is irreducibly subjective, not fake subjective. So I'm in Thom Nagel's camp: only the thing itself can know it is conscious, everyone else (who can also think) is merely inferring. But it's not like an atom or electron, which we infer exists from _objective data_ not needing to infer anything subjective. So the math guy, Hochreiter, used a dis-analogy. Shame on him! ;-) Only poets aught to be doing that.

  • @soycrates
    @soycrates Рік тому +1

    Most people definitely don't think fish feel pain, let alone think of them as conscious beings, despite evidence to the contrary. They are the most abused species by scale. Hopefully this focus on AI consciousness brings light to the plight of the trillions of sentient beings humans already enslave, torture and slaughter in the name of taste pleasure, convenience and culture.

  • @omrb132
    @omrb132 4 місяці тому

    Dead look

  • @lavie403
    @lavie403 Рік тому

    I really like this channel, but I found theses 20mn intros monologues very unnecessary.

  • @MarkLucasProductions
    @MarkLucasProductions Рік тому

    I am not so troubled by this problem. I, perhaps too arrogantly, believe I understand its solution or that its solution is within my reach. Consciousness does not emerge from within but is delivered from without. And absent a sensible environment or physical stimuli, consciousness could not occur at all. Consciousness exists solely in virtue of that of which consciousness is had. It is not a faculty.

  • @spiritualevolvedlightbeing8597

    A Lenguaje is not a being so will never be conscious come one and this man is a Prof.?