Neurosurgeon Michael Egnor: Why Machines Will Never Think

  • Published 28 Dec 2024

COMMENTS •

  • @kayzee2675 6 years ago +54

    Fascinating.

  • @agungdewandaru 4 years ago +31

    Machines will not think or have consciousness, but I am afraid they may act and produce decisions that are hardly distinguishable from thinking or consciousness.

  • @CenturianCornelious 6 years ago +106

    Yes, okay, machines can never think. However, people have an unfortunately strong tendency to anthropomorphize things, so they will believe that machines think.

  • @kumarvishwajeet8419 4 years ago +7

    Right

  • @NomadOfOmelas 6 years ago +32

    So I was having a conversation about free will with someone, who shared this video with me. Thought I would share my response here, in case anyone else agrees with me or can offer counterpoints :)
    Yeah, I've looked a bit into the objections to Sam and have found them far from laudable, particularly the views of those like Mele, who wrote a short book about it making similar arguments about the shortcomings of the Libet experiments, etc. Mele at least summarized Libet's work pretty fairly, though. I can't believe this guy says "Libet showed that free will is real." That isn't at all Libet's conclusion, or that of the multitudes who have further discussed the work. It seems audacious at best and disingenuous at worst. Most think that conscious will, like consciousness itself, is an epiphenomenon without causal capacity. Just thinking about the arguments here, and some in Mele's book: to my understanding, our fMRIs don't have the capacity to distinguish the waves that plan to veto an action at the last moment from those that plan to follow through. And even if the two were exactly the same, that would only amount to saying that not following through isn't controlled by brain waves, which raises the metaphysical problem of you doing something without it being caused. Otherwise, it shouldn't have required the experimenter asking you to veto at the last second; you should just veto randomly.
    The mind being metaphysically simple is also a weird interpretation of that particular split-brain experiment he mentioned. Sam talks about this kind of thing in his book Waking Up, and others have described patients who believe in God if they speak their answer but don't believe in God if they write it (seriously, if God exists I wanna know if this guy gets to go to heaven haha). Just because your conscious feeling of self exists as one, at least to the extent that you can describe it, doesn't show that the mind is metaphysically simple; in fact these experiments, as discussed further in The Master and His Emissary, show quite the opposite. Two conjoined twins would each say that they have a single sense of self, and where they disagree, each would say that's not them. The same turns out to hold between the two halves of the brain for many beliefs, including intentions. As Sam says, it may be weird to say you can split the mind and get two separate minds, but the mind itself certainly is 'divisible'. And lastly, regarding agency, there are plenty of experiments in which subjects insist they had agency/free will in choices they made, only to have it revealed that they were actually being manipulated by subconscious variables, e.g. being asked for a preferred vacation spot, with their answer depending on the temperature of the drink they had been handed. Sam has talked about this specific example as well, so I'm just not sure why the feeling of agency, which clearly does exist, is indicative of whether or not legitimate free will could sensibly exist.
    Then, to the focus of this video, since I'm getting on a roll here! He implies that a computer displaying two differently typed arguments is computational and not invoking thought, which is fine, since no one is arguing that current processing systems are thinking at this point, least of all the likes of MS Word or a camera used to take photos. But given our lack of understanding of consciousness, it seems bold when he contends that computers will never think, not just because they aren't our minds but because they are the opposite of our minds, because they don't care about meaning. Well, I've got plenty of nut-job conservative and liberal friends who can think and still hold contradictory opinions and extreme levels of cognitive dissonance, and that isn't an example of them "not thinking"; it just shows the inflexibility of the hardware in dealing with inputs it's not capable of sorting out properly. Just because the complexity of our minds is currently that much greater than Microsoft Word's doesn't mean computers will never be able to think. He has to show what's 'so special' about the meat we are made of, the 'matter', and explain why such information processing couldn't be done on different hardware. And what's more, until we have an understanding of agency, not knowing if a computer is conscious via analogy to ourselves, and its actually being conscious, are the same thing. And that's the 'hard problem'.
    Anyway, those are my thoughts. I appreciate the intellectual exercise you've provided me with, and am curious, when you get the chance, whether I've mis-analyzed any of the points made in this video!