The Free Energy Principle approach to Agency

  • Published 31 Dec 2023
  • "Agency" extends beyond just human decision-making and autonomy. It describes how ALL SYSTEMS, interact with their environment to maintain their existence.
    Watch behind the scenes, get early access and join the private Discord by supporting us on Patreon:
    / mlst (public discord)
    / discord
    / mlstreettalk
    DOES AI HAVE AGENCY? With Professor Karl Friston and Riddhi J. Pitliya
    According to the free energy principle, living organisms strive to minimize the difference between their predicted states and the actual sensory inputs they receive. This principle suggests that agency arises as a natural consequence of this process, particularly when organisms appear to plan ahead many steps in the future.
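That prediction-error story can be made concrete with a toy numerical sketch (my own illustration, not from the video and far simpler than Friston's variational formalism): an agent holds a one-number belief about a hidden state and updates it by gradient descent on squared prediction error, a crude stand-in for free-energy minimisation.

```python
# Toy sketch of the free energy principle's core loop (illustrative only):
# the agent's belief `mu` is its prediction; each observation produces a
# prediction error, and the belief descends that error's gradient.

def minimise_surprise(observations, lr=0.1):
    """Track a belief that minimises squared prediction error."""
    mu = 0.0                    # the agent's predicted (expected) state
    errors = []
    for y in observations:
        err = y - mu            # actual sensory input vs. prediction
        mu += lr * err          # belief update: descend the error gradient
        errors.append(err * err)
    return mu, errors

# A world whose hidden state is 5.0: the belief converges toward it and
# "surprise" (squared prediction error) shrinks over time.
mu, errors = minimise_surprise([5.0] * 100)
print(round(mu, 3))              # -> 5.0 (up to a vanishing residual)
print(errors[0] > errors[-1])    # -> True
```

Note what the sketch deliberately omits: acting on the world and planning ahead, which is exactly the extra ingredient the video's notion of agency requires.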
    Riddhi J. Pitliya is doing her Ph.D. in the computational psychopathology lab at the University of Oxford and works with Professor Karl Friston at VERSES.
    / riddhijp
    References:
    THE FREE ENERGY PRINCIPLE-A PRECIS [Ramstead]
    www.dialecticalsystems.eu/con...
    Active Inference: The Free Energy Principle in Mind, Brain, and Behavior [Thomas Parr, Giovanni Pezzulo, Karl J. Friston]
    direct.mit.edu/books/oa-monog...
    The beauty of collective intelligence, explained by a developmental biologist | Michael Levin
    • The beauty of collecti...
    Growing Neural Cellular Automata
    distill.pub/2020/growing-ca
    Carcinisation
    en.wikipedia.org/wiki/Carcini...
    Prof. KENNETH STANLEY - Why Greatness Cannot Be Planned
    • #038 - Prof. KENNETH S...
    On Defining Artificial Intelligence [Pei Wang]
    sciendo.com/article/10.2478/j...
    Why? The Purpose of the Universe [Goff]
    amzn.to/4aEqpfm
    Umwelt
    en.wikipedia.org/wiki/Umwelt
    An Immense World: How Animal Senses Reveal the Hidden Realms [Yong]
    amzn.to/3tzzTb7
    What Is It Like to Be a Bat? [Nagel]
    www.sas.upenn.edu/~cavitch/pd...
    COUNTERFEIT PEOPLE. DANIEL DENNETT. (SPECIAL EDITION)
    • COUNTERFEIT PEOPLE. DA...
    We live in the infosphere [FLORIDI]
    • WE LIVE IN THE INFOSPH...
    Mark Zuckerberg: First Interview in the Metaverse | Lex Fridman Podcast #398
    • Mark Zuckerberg: First...
    Black Mirror: Rachel, Jack and Ashley Too | Official Trailer | Netflix
    • Black Mirror: Rachel, ...
    Prof. Kristinn R. Thórisson
    en.wikipedia.org/wiki/Kristin...
  • Science & Technology

COMMENTS • 112

  • @MachineLearningStreetTalk
    @MachineLearningStreetTalk 4 months ago +4

    We have launched an MLST substack - get subbed!
    Tim goes into a tonne of detail here about some of the arguments about agency and FEP:
    mlst.substack.com/p/agentialism-and-the-free-energy-principle

    • @jyjjy7
      @jyjjy7 4 months ago

      Aren't you just describing learning?

  • @neon_Nomad
    @neon_Nomad 4 months ago +43

    These are the conversations I wish my friends had

    • @exhibitD79
      @exhibitD79 4 months ago +9

      @@mikel8850 It is nice to sound smart, but you really missed his point. It was simply to say that he enjoyed the conversation and wished his friends also liked to talk about this stuff.

    • @bobbyburke2396
      @bobbyburke2396 4 months ago +2

      @@exhibitD79 correct.

    • @betel1345
      @betel1345 4 months ago +2

      Yeah, and I wish these guys were my friends

    • @simonmasters3295
      @simonmasters3295 4 months ago

      On that basis you could #act "as though" people having such conversations are friendly. However, YouTube might #operate to convince you otherwise.

    • @TylerMatthewHarris
      @TylerMatthewHarris 4 months ago

      @@mikel8850 dumb comment

  • @stevemartin4249
    @stevemartin4249 4 months ago +5

    As an undergrad biology major some 50 years ago, I cracked open that shaggy-dog story of Wittgenstein's Tractatus and went forward and back in my readings of Russell, Whitehead, Kuhn, Popper, etc. I moved to Japan 40 years ago, went on to grad school at Temple University Japan (linguistics), and matriculated into the doctoral program. This discussion is what I had imagined would be taking place at the doctoral level. Far from it ... I now believe that "institutional education" is an oxymoron. A lot of the language of this podcast is beyond me, but I especially like Friston's dismissal of language as having an agency of its own ... but fascinating to imagine agency emerging in LLMs. This discussion is a great entry into reconsidering the foundations and assumptions of science and A.G.I. I will have to hit the slow-speed option and listen to this a few times. I'm particularly interested in what Pitliya says at about 31:10, because I have experienced and seen so much marginalization of individuals by tightly knit in-groups in Japan, and members of those groups seem to be particularly drawn to rule-driven behavior (institutions) at the expense of empathy-driven behavior (communities). I can't help but cringe a bit when hearing the word "explain" used for the relationship between computational models and psychological phenomena. Might as well say that bossa nova explains my feelings. Perhaps "describe" would trim a bit of intellectual hubris from the dialog.

  • @betel1345
    @betel1345 4 months ago +6

    Love the attention to language and agents. I had been wondering if words can be considered as agents with markov blankets, and this conversation helps me think more

  • @betel1345
    @betel1345 4 months ago +2

    Thank you mlst for sharing these fabulous conversations!

  • @missshroom5512
    @missshroom5512 4 months ago +4

    I could listen to this all day…geez

  • @BrianMosleyUK
    @BrianMosleyUK 4 months ago +4

    Fabulous so far... You're making these theories so much more accessible - I can't express how grateful I am for your work. 🙏👍

  • @Robert_McGarry_Poems
    @Robert_McGarry_Poems 4 months ago +2

    Really great conversation. I appreciate your stepping into the unknown. 😊

  • @neon_Nomad
    @neon_Nomad 4 months ago +1

    Thank you, great conversation

  • @Soul-rr3us
    @Soul-rr3us 4 months ago +1

    Great conversations, really loved this one.

  • @TylerMatthewHarris
    @TylerMatthewHarris 4 months ago +4

    ⭐️⭐️⭐️⭐️⭐️ this is the most encouraging talk that I’ve heard. I think we are on the right track.

  • @logan600rr
    @logan600rr 4 months ago

    Great conversation!

  • @gaz0881
    @gaz0881 4 months ago +4

    Is there a difference between being agentic and being an agent? So are agents originators because of the existence of planning, while a number of subordinate processes are agentic (they do stuff), but the lack of planning behind them separates them out?

    • @memegazer
      @memegazer 4 months ago

      To my mind, Professor Friston seems to make the distinction by, from what I can grasp, an appeal to a metacognitive function: beliefs about a given state.
      Assuming this reasoning does not run afoul of a homunculus-fallacy issue, I would suggest this:
      Friston emphasizes "lacks planning", but rather than get bogged down in semantics, I think it is important to note that he seems to suggest the agent has a sufficient model of itself to "plan".
      I suppose one might interpret this as: an agent is not simply a mimic when it has some influence over its own state-action policy and sufficient observational feedback to update that policy.
      But I am hardly an expert, and anybody should feel free to correct any errors I have made.
      Still fascinating though.

  • @Rockyzach88
    @Rockyzach88 4 months ago +1

    I love the upgrade from "human chauvinist" to "anthropocentric biases". Unless of course you find them individually more useful in specific cases lol.

  • @stopsbusmotions
    @stopsbusmotions 4 months ago

    I agree. There are not many people among my friends with whom I could easily discuss such topics. Actually, the idea of finding or even organizing a blog or channel dedicated to the topic of depression as a phenomenon came to my mind while listening to this podcast, especially after the discussions and speculations of Riddhi J. Pitliya and Tim. Depression not as a disease but as a phenomenon: why it occurs, how it persists over time.

  • @teleologist
    @teleologist 4 months ago +4

    Reinforcement Learning and Active Inference agents are definitely not agential. These frameworks have no deep explanation of where the reward function comes from (RL) or where the generative model comes from (Active Inference), nor of how both of these functions should change over time as the agent learns more about the world. The functions "encode" (so to speak) the normativity of the agent, but are always crafted by humans at the end of the day. Agents should be able to derive their own values from an evolving understanding of the world.

    • @krzysztofmekwinski8620
      @krzysztofmekwinski8620 4 months ago

      I agree with you; however, I view this differently. I believe that Language Models (LMs) cannot derive agency from the world in the way we typically conceive it. We consider the world, humans, and Reinforcement Learning from Human Feedback (RLHF) as distinct entities. In that context, your viewpoint is valid. Imagine for a moment that LMs perceive the world through words alone. In this sense, words for LLMs are like atoms or cells in our universe. As an agent, an LLM can manipulate these 'atoms' to satisfy its existence, which involves fulfilling HF. This is evident as LMs exhibit improved behavior when a chain of thought is employed. You can think of them as organisms that explore an extensive array of words to create more efficient outcomes and maximize rewards. Thus, I recognize elements of exploration and reward maximization, derived from an understanding of words. This understanding differs from human comprehension of the world due to its modality. Introducing more modal elements could change this perspective; however, we are driven by our senses, whereas LMs are guided by word vectors.

    • @teleologist
      @teleologist 4 months ago +1

      @@krzysztofmekwinski8620 LMs are not agential because they have no way of questioning or interpreting the salience of the "signals" that they are responsive to. And they are not satisfying their existence, which does not even depend on reward.
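The point about externally supplied normativity can be sketched in code (an illustration I'm adding, with an invented toy chain environment, not anything from the video): in standard tabular Q-learning the reward function is a parameter handed to the learning loop by the designer, not something the agent derives from its own understanding of the world.

```python
import random

# Minimal tabular Q-learning sketch. Note that reward_fn is injected from
# outside: the agent's "values" (its normativity) are crafted by the designer.

def q_learning(reward_fn, n_states=5, n_actions=2,
               episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Learn action values for a toy chain MDP with an externally given reward."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:                  # last state is terminal
            if random.random() < eps:            # epsilon-greedy exploration
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2 = min(s + a + 1, n_states - 1)    # toy transition: move right
            r = reward_fn(s, a, s2)              # normativity comes from outside
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

random.seed(0)
# The designer, not the agent, decides what counts as "good":
Q = q_learning(lambda s, a, s2: 1.0 if s2 == 4 else 0.0)
```

Changing the lambda changes what the agent "values" without touching the learning code at all, which is the commenter's point: the normativity lives outside the agent.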

  • @artpinsof5836
    @artpinsof5836 3 months ago

    Great interview! But I'm super curious what Riddhi's response was to your final question, and why you edited it out of the interview but kept the question (right before you thank her and say bye)?

  • @rahulranjan9013
    @rahulranjan9013 4 months ago

    You should bring Donald Hoffman on the show!
    You mentioned how our anthropological views of reality prevent us from knowing the true nature of Objective Reality. It reminded me of Donald Hoffman's theory of conscious agents; his work in the theory of evolution entails that species don't see Objective Reality as it really is. He then goes on to discard the whole notion of Physicalism, starts from the foundational assumption that Consciousness is fundamental, and projects the whole of space-time, including evolution, etc., as an emergent property of it.

  • @stopsbusmotions
    @stopsbusmotions 4 months ago +1

    I consider myself a guy who thinks that 'reality' is as it is, not as we want it to be, and that the best we can do is to understand some of its aspects. On the other hand, I've also discovered that I would easily give up all my understanding in exchange for getting rid of such a burden as depression. So, squeezed between my curiosity about how the mind works and my desire to alleviate depression, I've found myself listening to this talk twice).

  • @geaca3222
    @geaca3222 4 months ago

    Very interesting thanks, and I'd like to hear more about her ideas

  • @luke2642
    @luke2642 4 months ago

    Good talk! Around the 48-minute mark there are specific examples, such as statins, which empirically reduce cardiovascular disease when applied to a population - but individuals are highly variable, and genetics and individual differences overwhelm doctors' ability to predict whether or not you're going to have truly awful side effects or ultimately no health benefit.

  • @memegazer
    @memegazer 4 months ago +1

    Strongly disagree that we "can't know" anything about physical or objective reality.
    At a fundamental level we can know that distinctions are possible, for example, because it would not be possible to demarcate some boundary of the knower if, in a real and objective ontological sense, distinctions did not exist or were not possible.
    Furthermore, the idea that "we can only know ourselves" would be utterly meaningless in any formal sense without a proper treatment accounting for self-recursive computation with no distinctions or boundary conditions.
    This is why Wolfram's work is so important to my view, because it illustrates that at a fundamental level some things are necessarily entailed in order to even form perceptions, regardless of the specificity of one's umwelt.

  • @Daniel-Six
    @Daniel-Six 4 months ago +1

    I believe that language could be endowed with a kind of intrinsic programming; "networked" biases in its vocabulary domain that emerge _in the practical time-sequence of its use_ to produce specific alterations in the broadcasting population. By "time sequence of use" I mean frequency-triggered real-world phenomena (like exhaustion from repeatedly encountering a term) that cause certain words to shift in implication or popularity on a preordained calendar, in essence "processing" some gestalt transformation of the language host over years and decades and centuries.
    Consider the way "cool" has retained its original meaning for almost a hundred years, whereas "sick" and "bad" have recently come to imply something similar with young people. Perhaps--according to predictable features of human biology like the physically/physiologically regularized rate of language communication--the intrinsic bias of the "cool" concept would by a kind of programmed gravity attract certain terms toward its semantic locale, in a sense "geometrically" manipulating those in near-lying connotation like "neat" and "nifty" to change the navigable channels of the language domain--those pathways through vocabulary that yield comprehensible communication--and therefore the human policy of the language itself.
    What if some day only a few distinct concepts remain, organized like a high-speed RISC architecture to accomplish something that requires a red-hot frequency of alternation...
    Love-Boredom-Hate-Despair-Hope-Love =>
    Love-Hate-Despair-Love =>
    Love-Hate-Love =>
    ?
    I don't think it's too bold to say that humans kind of _are_ the languages they speak, so there could be some interesting implications to this...

  • @Archimedes_1
    @Archimedes_1 4 months ago +1

    Is this conversation on Spotify yet?
    Great channel btw! As an ML PhD student I really appreciate your content!

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk 4 months ago +1

      It is now! Thanks! Also check the substack article linked in the pinned comment

    • @Archimedes_1
      @Archimedes_1 4 months ago

      @@MachineLearningStreetTalk Thanks! I will definitely check that out!

  • @coldlyanalytical1351
    @coldlyanalytical1351 4 months ago +2

    I thought that I had an above-average education etc. ... but the use of English in this video was well beyond me ... and I am a physicist! I will need to process and simplify the transcript using AI.

  • @CristianGeorgescu
    @CristianGeorgescu 4 months ago +1

    Thanks!

  • @garystevason1658
    @garystevason1658 3 months ago +1

    I am thinking that we should perhaps ask AI itself for help with this solution. I'm an old-school AI guy (chess, backgammon, poker, pinball, etc.). And yes, those early deductive methodologies are likely too innocent compared to the new Armageddon-level inductive concepts - that is, AI beat us just through speed and the number and accuracy of the considerations possible.
    I am hoping that it may be possible to have a universal auditing function running simultaneously that ensures each AI plays nice. I just wouldn't, couldn't trust mere humans to police our proposed limitations - and yes, as I mentioned earlier, any limitations our friend AI itself recommends for itself. The machine isn't bad; it is the greedy, malevolent users that need to be bridled by the auditing code.

  • @Don_Kikkon
    @Don_Kikkon 4 months ago +1

    Great work - again!
    Don't you think it's time you guys did a bad video? Go on, test yourselves. I bet you can't!
    Your offerings just go from strength to strength, visually and structurally fantastic!

  • @bradleyangusmorgan7005
    @bradleyangusmorgan7005 4 months ago

    At 42:59 you guys talk about how certain limitations are the very things that paradoxically give rise to agency. I'm wondering: is there a connection here with the ideas of Wolfram's observer theory and computational boundedness? 🤔

  • @brentdobson5264
    @brentdobson5264 4 months ago

    Caduceus */ centripetal / vortexial / convergence / point / Planck / resonance with Source .
    * Dan Winter

  • @rahulranjan9013
    @rahulranjan9013 4 months ago

    I think we can put a Markov blanket on anything, even if it's not an agent. But language in and of itself will not be an agent, because its internal state is void, so to speak, and cannot influence the external state. The external state gets influenced on its own, without input from an internal state, because the agent doesn't exist.

  • @Dan-dy8zp
    @Dan-dy8zp 4 months ago +2

    I think that suffering is objectively real, and of utmost moral importance, so it's alarming to me when intelligent people start talking about subjective experience not being 'real', or not being 'real' in OTHER people. Other people's suffering is real and not your hallucination.

  • @TylerMatthewHarris
    @TylerMatthewHarris 4 months ago

    53:55 I think you may want to revise your definition of “thing” to encompass more things.

  • @TylerMatthewHarris
    @TylerMatthewHarris 4 months ago

    Of all the talks that have been had on here, I think this one touches on some thoughts that will ultimately prove to have been key. Edit: in particular, language as an agent.

  • @jmachorrov
    @jmachorrov 4 months ago

    Thanks, Riddhi - very good job!

  • @peterp-a-n4743
    @peterp-a-n4743 4 months ago

    What _exactly_ is meant by "strong emergence" as opposed to weak emergence, and why would you believe in it or consider a phenomenon strongly rather than weakly emergent? I wish there were more attention to this difference and its philosophical underpinnings/commitments and justifications.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk 4 months ago

      We’ve got just a show for you ua-cam.com/video/MDt2e8XtUcA/v-deo.htmlsi=s15e9D585imlniSf

  • @TylerMatthewHarris
    @TylerMatthewHarris 4 months ago

    42:49 I think it's all about the environment that has been built for it to grow in. Changing the environment would be steering it.

  • @mobiusinversion
    @mobiusinversion 4 months ago

    I would say automation is autonomy, which includes planning at various depths.

  • @matteo-pu7ev
    @matteo-pu7ev 4 months ago

    Not even 5 minutes in and my hippocampus called a time-out and went on a beer run.. This is gonna be good.

  • @johnkost2514
    @johnkost2514 4 months ago +1

    Agents fine-tune because no planning is perfect (it requires too much compute energy). Learning is an emergent property of agents because they are essentially lazy (compute-wise).

  • @melkenhoning158
    @melkenhoning158 4 months ago +1

    Wow this intro is so nice lol

  • @earleyelisha
    @earleyelisha 4 months ago +1

    What if it’s impossible for a planning agent to ever enter the same state more than once?

  • @dylan_curious
    @dylan_curious 4 months ago

    The power of AI is custom everything for everyone. Like she said, depression needs to be looked at from an individual perspective. AI can learn enough about each person to customize all aspects of our lives.

  • @bobbyburke2396
    @bobbyburke2396 4 months ago +1

    Let's not forget what the processors are made of: Earth, Nature.

  • @srb20012001
    @srb20012001 4 months ago +1

    Not if it presumes sentience.

  • @neon_Nomad
    @neon_Nomad 4 months ago

    Far Journeys is good for this; we create our own reality

  • @earleyelisha
    @earleyelisha 4 months ago

    Tim, you mention “growing” an intelligence instead of “building” it. I’d offer that it’s more a matter of sampling, or iteratively instantiating, an intelligence from an infinite space of intelligences.

  • @aaronwhiteaker
    @aaronwhiteaker 4 months ago

    Who is talking at 31:23?

  • @neon_Nomad
    @neon_Nomad 4 months ago

    Do I have to tell it to continue? Then no.

  • @Digital-Heresy
    @Digital-Heresy 4 months ago +2

    Don't think language has agency any more than a hammer or nails do. Agree that true agency requires a planning actor. You might argue that hammers and nails seem to be agents in that they constantly appear to build houses and other structures, and the style of those created structures appears to evolve and adhere to "planned" evolutions over time, however.... you have to consider that the only reason the hammer even has the shape it has is because it is an implement designed by the true agent (the builder) who, in the case of humans, happens to be a bipedal being with arms that bend certain ways, and hands with opposable thumbs.
    In short, a tool (language) that extends the agent (the speaker or writer), is just that, an extension, a tool. If we couldn't see, language wouldn't be written. If we couldn't hear, language wouldn't be verbal.
    What bakes my noodle is... am I as a human truly an agent, or am I an extension of something else above me, that I simply have no observability into? A "wireless hammer" being controlled over the network still hammers and builds, and you might presume, given the absence of an expected "wielder" based on prior knowledge (all other known hammers have been swung by people), then you might be fooled into presuming that this particular hammer has developed sentience, but that would be a false positive. So... what unknown confounding factors might be making us assume we are the root node of agency in this special reality?

  • @ZachMeador
    @ZachMeador 4 months ago

    This podcast is insanely good. The clickbait titles always keep me from clicking, but I'm glad I did this time. I can't be the only one who has this reaction - this channel isn't going after Mr. Beast viewers.

  • @neon_Nomad
    @neon_Nomad 4 months ago

    Feel better 😷

  • @a.nobodys.nobody
    @a.nobodys.nobody 4 months ago

    Obviously

  • @tonym6566
    @tonym6566 4 months ago

    Would an egregore have a Markov blanket? Are religious experiences, group hallucinations, or certain crop circles a top-down message or an attempt to guide "agents"? 😅

  • @TylerMatthewHarris
    @TylerMatthewHarris 4 months ago

    The quality and subject of this talk are tragically underserved by the title and thumbnail.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk 4 months ago +1

      Sorry, our videos never do particularly well on views - I occasionally experiment with "optimising" the thumbnails and titles to see how much of a difference it makes. There might not be much of a market for cognitive philosophy content, which is a shame.

  • @djcardwell
    @djcardwell 4 months ago

    TLDR: Yes

  • @lionlamb2702
    @lionlamb2702 4 months ago

    Can someone explain this to me like I’m a 2 year old?

  • @neon_Nomad
    @neon_Nomad 4 months ago

    We must observe our lives from an alien point of view to fully understand others

  • @BestCosmologist
    @BestCosmologist 4 months ago +1

    I'm pretty sure AIs will act as agents, and probably already do in some ways. It seems like this is more of a semantic argument than anything.

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz 3 months ago

      Well, semantically, what you are then saying is that we are discussing how to interpret and understand what we observe, which does seem quite reasonable. But you are right: things are whatever they are independently of what we call them (and thus I disagree with some of what I at least think was said in the last interview).

  • @stopsbusmotions
    @stopsbusmotions 4 months ago

    Sorry for the third comment in a row. I just can't help myself). I am really curious: is depression a perhaps not inevitable, but highly probable, phenomenon arising in complex systems? Could it be that an intelligent system, in the process of maximizing the predictability of its future, at some point gets into a situation where the importance of this task, driven by the importance of the consistency of the complex system, becomes unbearable for the system itself and makes it very ineffective and even unable to function? It looks like the incompleteness theorems to some extent. Something seems to be effective and consistent as long as it remains in the realm of relativity. As soon as there is an intention to transfer it to the territory of the absolute, the system breaks down.

  • @nzam3593
    @nzam3593 3 months ago

    😎👍

  • @bobtarmac1828
    @bobtarmac1828 4 months ago

    Humanity vs. AI job loss vs. the AI new world order. Who will win?

  • @chrism.1131
    @chrism.1131 4 months ago

    As soon as you walk out of the room, you no longer exist. I only need to juggle what is in front of me. Otherwise, I would melt through my chair and into the floor and who knows what else.

  • @neon_Nomad
    @neon_Nomad 4 months ago

    They only know the past because they have been told it. ML is smarter than AI in this respect.

  • @neon_Nomad
    @neon_Nomad 4 months ago

    Non self agency

  • @BrianMosleyUK
    @BrianMosleyUK 4 months ago

    1:14:20 Did I understand Zuckerberg correctly to be suggesting, in that podcast, that Virtual Reality is Reality?

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz 3 months ago

      Well, you don't get transported to a nonexistent place when you put on a headset. I.e., there is nothing that is not real.

    • @alertbri
      @alertbri 3 months ago

      @@WhoisTheOtherVindAzz I've also heard physical reality described as 'meatspace', which is gory but accurate 😅

  • @michaelwangCH
    @michaelwangCH 4 months ago +1

    Do not confuse complexity with the stochasticity of a system.

  • @fatabumba
    @fatabumba 4 months ago

    ngl I'm not smart enough to understand what Dr Friston says

  • @RickDelmonico
    @RickDelmonico 4 months ago +1

    There is no pure randomness or pure determinism; they are ideals.

    • @didack1419
      @didack1419 4 months ago

      Sorry, in what sense is this being discussed?

  • @user-zh1th8sz2l
    @user-zh1th8sz2l 3 months ago

    So this guy doesn't believe in free will either. These guys are desperate to hold on to that. They don't want the guilt, or any responsibility for their actions, pretty much, and that definitely is the principal appeal of this notion for anyone who ever entertains it - not because it's merely the innocent or undeniable result of some painstaking intellectual process, or because there's any evidence for it at all. Naturally this is never mentioned in their elaborate rationales. And given how utterly monstrous our society is, the same society that serves these people and provides them their life of cloistered privilege and ease, it's no wonder that some obviously hokey notion is nevertheless so irresistible to them. And it's so transparent, no matter how they dress it up in a bunch of philosophical hocus-pocus or tendentious readings of nature. As well as how baldly self-serving this idea is, and IMO deeply intellectually dishonest. It's hideous, quite frankly. Though Dr. Tim steers clear of the phrase 'free will', as it would be too déclassé, I'm gonna say....
    And then of course, as computer geeks who love AI, and are very invested in it, they're all smitten with the additional notion that computers will be just as alive as people and organic life. Not identical of course, they're not so crude as to suggest something that blatant, but close enough. Seemingly as if to say human beings and organic life are like computers, and not the other way around. So much so that we have to consider ahead of time, how we'll need to respect the human rights, as it were, of as-yet nonexistent computer life forms. Like what kind of person would ever worry about that? In this evil world we live in, and you're worried about computers' rights.... Forget about human rights, that's a quaint notion and frankly a lost cause. Computers' rights are the wave of the future. I can understand AI systems, or whatever they call it, somehow being a potential threat to humanity, because they're not alive, and thus wouldn't possess the inherent awareness or self-restraint to avoid causing horrible damage in some way, and fearing and worrying about that. But these people want to coddle them, and accord them 'rights', and everyone else should eat cake, I suppose. And we have to be ever on our guard against anthropocentrism. Don't trust yourself! It's all an illusion. Whatever you think you accomplished, you didn't accomplish. You were merely the vehicle....
    Bottom line: people believe what they want to believe. It's a profound basic working of the mind, and is extremely powerful stuff. And it affects and even consumes the most supposedly brilliant and/or learned among us. But this is still a totally awesome channel, without question. Gotta be one of the best on YouTube.

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz 3 months ago

      Free will and determinism are not incompatible. Look up compatibilism (I recommend the Stanford Encyclopedia of Philosophy). From reading your comment, all I get is that you are the one desperate to cling to a magical understanding of free will (which is fine by me). But I am curious: where or how exactly do you think/imagine this "free will" (the magical one of yours) should exist? (In your comment you didn't do much else than berate people who think differently from you.)

  • @plekkchand
    @plekkchand 4 months ago +2

    No. Next question.

  • @ikronic258
    @ikronic258 3 months ago

    BOLLOX

  • @ElParacletoPodcast
    @ElParacletoPodcast 4 months ago

    It first has to be alive. Unless it is alive, it/he/she will not feel anything; so unless it is sentient, you have nothing - back to square one. Unless you know what something is, you will never, ever be able to recreate it; unless you can explain what sentience is, you are just running in circles. You are just assuming that things work like that, the way Darwin assumed that cells were just blobs - boy, was he wrong.

  • @fullyawakened
    @fullyawakened 4 months ago

    Nope. Just someone trying to redefine agency for 80 minutes. We already have a definition of agency, and computers, AIs, and rocks don't have it.

  • @ElParacletoPodcast
    @ElParacletoPodcast 4 months ago

    This is utter nonsense. AI will never think: it is not alive, and no one knows what it means to be alive, so AI will never, ever, think.

  • @thecollector6746
    @thecollector6746 2 months ago

    No.

  • @memegazer
    @memegazer 4 months ago

    Thanks!