Episode 15: David Poeppel on Thought, Language, and How to Understand the Brain

  • Published May 14, 2024
  • Blog post with show notes: www.preposterousuniverse.com/...
    Patreon: / seanmcarroll
    Language comes naturally to us, but is also deeply mysterious. On the one hand, it manifests as a collection of sounds or marks on paper. On the other hand, it also conveys meaning - words and sentences refer to states of affairs in the outside world, or to much more abstract concepts. How do words and meaning come together in the brain? David Poeppel is a leading neuroscientist who works in many areas, with a focus on the relationship between language and thought. We talk about cutting-edge ideas in the science and philosophy of language, and how researchers have just recently climbed out from under a nineteenth-century paradigm for understanding how all this works.
    David Poeppel is a Professor of Psychology and Neural Science at NYU, as well as the Director of the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany. He received his Ph.D. in cognitive science from MIT. He is a Fellow of the American Association of Arts and Sciences, and was awarded the DaimlerChrysler Berlin Prize in 2004. He is the author, with Greg Hickok, of the dual-stream model of language processing.
  • Science & Technology

COMMENTS • 88

  • @drzecelectric4302 · 5 years ago +6

    Sean, this one is amazing!

  • @fs5775 · 2 years ago

    Great conversation, you guys have great chemistry and a complementary sense of humor!

  • @fahmidayeasmin4780 · 5 years ago

    Enjoyed this so much, especially the big data philosophy part and the overall history of the field. Thanks, Dr. Carroll and Dr. Poeppel.

  • @MattieCooper10000 · 5 years ago +6

    Greatness! Thank You for a wonderful podcast!

  • @armchairrockstar186 · 5 years ago +1

    Love you Sean! You're my favourite physicist.

  • @jennydeepable · 3 years ago

    SEAN CARROLL IS AN AMAZING HOST!

  • @JeffreyMW1 · 5 years ago +1

    Highly entertaining, thank you!

  • @user-kb1pj7iu6j · 5 years ago +1

    Sean, I appreciate your time and your commitment to a specific format, but I am almost always bummed out when the podcasts end abruptly.

  • @ExistentialistDasein · 5 years ago +7

    Thank you for the upload. Excellent as usual.

  • @RyanReece · 5 years ago +2

    Not Emerson; the quote should be credited to Whitehead:
    "The safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato."
    Alfred North Whitehead, Process and Reality, p. 39 (Free Press, 1979)

  • @moving_particles · 5 years ago

    Loved this one. Very engaging and funny, and your guest's voice sounds a bit like Jeff Goldblum's, which added to it. Thank you.

  • @ZappyOh · 5 years ago

    Thanks for an enlightening podcast.
    Who plays the closing music?

  • @txnygotw · 5 years ago +1

    OMG, this guy's German accent is amazingly good.
    (looks up bio)
    Oh, I see.
    Great talk, Sean, as usual.

  • @thomaspatel3423 · 5 years ago +4

    Hello Sean, thanks for the podcast. Would you consider having a conversation with Donald Hoffman about his ideas?

  • @CalendulaF · 5 years ago +1

    Really like David's views on big data. In particular, that there are different ways to understand things and that big data - at the moment - promotes only the least satisfying way of all. What science oftentimes really strives for is to invent a powerful but simple enough metaphor; of course all the data is needed in the first place, but at the fundamental level we make sense of it through some simple story or picture that makes up the mechanism which in turn brings forth the phenomena we wish to explain/understand. Take for instance the idea of our memory potentially being something like digital storage. Actually, I don't even believe this to be a good idea (let alone modifications of introns being a candidate for long-term memory), but this is just the type of hypothesis a big data approach could never help to come up with. And it is also just the type of metaphor that I believe is needed to *understand* memory - or any other type of phenomenon.

  • @Dominik-jk4yq · 5 years ago

    Thank you! Will there be a video stream too at some point?

  • @Petticca · 1 year ago

    So this is one episode of a podcast, covering a complex and fascinating subject matter, but in simple, conversational style. Good stuff.
    Obviously you knew that potentially this episode would be heard by someone who might be thought of as 'a bit slow', so a not-too-technical, lay-friendly format is great; I can be painfully slow, so this is less great.
    The process my brain employs to translate the language I hear into usable data is very context dependent, and will not work if the language is very imprecise and/or leaves me trying to make sense of too many unknown unknowns. Consequently, I could not follow what the Dr. was discussing, which is pretty funny (to me) considering the subject matter and the insight and expertise of the guest.
    It's frustrating that I cannot take new information and hold on to it (I will literally be incapable of remembering what I heard, or was 'taught') if it cannot be placed (even very temporarily) onto some existing knowledge 'brick' on one of the many 'walls of relationships' that my brain builds.
    I get information bricks, I place them into the correct areas of knowledge walls, which form part of the little houses of analysis and evaluation, that I place along roads of comprehension, to complete my little town of understanding.
    I got to place a couple of bricks that were questions Sean asked, I couldn't place the response bricks though :(
    This is a lot of a bummer, really. I'm fascinated by language, particularly the role it has in shaping our perceptions. I'm also very interested in how language intersects with our psychology. Meh. Because I can't retain information that I perceive to be ad hoc, I ended up being wholly occupied for the rest of the episode trying to parse the statement the Dr. made that he could see no reason "honesty" would be more difficult to understand than "red can".

  • @starshipenterprises4356 · 5 years ago +1

    I always find these podcasts fascinating and thought-provoking, although I have a question for you, Sean...
    Just wondering what it is that inspires you to come up with the topic each week? Is this stuff that you are interested in yourself? Do you find the guests first, or think of a random topic (that you are interested in? that you would like to know more about?) and then look for a guest speaker?
    I ask because most of the time these topics cover subject matter I have never considered looking into, or didn't even know existed at all! This of course just adds to my fascination; we all love to learn something new.
    So Sean... DO TELL... and also keep up the fantastic work inspiring me and so many others.
    P.S. I often walk into things while thinking about stuff

    • @clairehann2681 · 2 years ago

      He likely researches trending topics in different academic circles

  • @dimitrijmaslov1209 · 3 years ago

    Thanks!

  • @GnomiMoody · 5 years ago +1

    I love talking about the brain.

  • @bigsuz · 5 years ago +2

    Am I the only one who finds this episode hilarious, aka laugh-out-loud funny? I even had to stop listening to note where I have to pick up again, and unfortunately saw the top comment, which led me to believe the rest of the comments were going to be more of the same: complaining about what you would rather hear about, or bragging indirectly by flat-out dissing what I find to be a charming repartee on a topical subject. For example, this provides a conversational way to frame the mechanics of Trap music and its cultural and mechanical contexts, which I've been looking for.
    And it's funny.

  • @rJaune · 5 years ago +1

    How useful and ubiquitous is brain dysfunction in figuring out important things about how the brain works? Is someone like Tan (talked about in the episode), or someone else who had a "useful" brain problem, very rare? Or maybe not ultimately useful?

  • @chrisrecord5625 · 5 years ago

    I wonder if a brain imaging comparison would be of interest, where one person was reading sign language (visual stream) in contrast to a traditional audio-based image from the other.

  • @erichodge567 · 4 years ago

    Mindscape is now my "go to" podcast. I just learned a ton from this show.

  • @Valdagast · 5 years ago +1

    Don't we need a "how much?" stream as well? My cats are very good at figuring out whether a treat is worth moving six inches for or not.

  • @raresmircea · 5 years ago +11

    Invite Thomas Metzinger, Giulio Tononi or Terrence Deacon.

  • @RyanReece · 5 years ago +2

    Wittgenstein would be happy with your wrestling with the meaning of words.

    • @bigsuz · 5 years ago

      Ryan Reece agreed!

  • @test-ii5ts · 5 years ago

    ifni.co/bi/TOC.html
    When you use programming languages, you depend on the symbols of the language having a specific meaning; it is very rarely the case that a programming language allows fuzzy, context-based meaning, because it complicates the implementation. In natural languages, concepts/symbols are both discrete and fuzzy at the same time.
    To make programming languages more natural we have to embrace this dichotomy... one way to make the symbols behave this way is to represent symbols and/or context as vectors, to achieve fuzziness while still preserving discreteness.
    Something like having the whole cake, but eating it too.
    That is why Bi is built on top of the so-called VSA (Vector Symbolic Architecture).
    As the name implies, the symbols are not your run-of-the-mill symbols but rather vector based, i.e. a symbolic system based on vector-distributed symbols instead of the traditional approach of using discrete symbols (where symbols represent entities in an all-or-none fashion). The VSA approach allows symbols to be compared for similarity.
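
A minimal sketch of the vector-symbolic idea the comment above describes (a toy example of my own, assuming nothing about the linked Bi project beyond what the comment says): each symbol is a random high-dimensional vector, so symbols keep discrete identities yet can be compared for graded, "fuzzy" similarity, and a blend of symbols still resembles its parts.

```python
# Toy vector-symbolic sketch: random high-dimensional vectors as symbols.
import numpy as np

rng = np.random.default_rng(0)
DIM = 10_000  # high dimensionality makes unrelated random vectors nearly orthogonal

def make_symbol():
    """A new discrete symbol: a random bipolar (+1/-1) vector."""
    return rng.choice([-1, 1], size=DIM)

def similarity(a, b):
    """Cosine similarity: ~0 for unrelated symbols, 1 for identical ones."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def blend(a, b):
    """A fuzzy, context-coloured symbol: an element-wise superposition of two symbols."""
    return np.sign(a + b)

red, can, honesty = make_symbol(), make_symbol(), make_symbol()
red_can = blend(red, can)

print(round(similarity(red, honesty), 3))  # ~0.0: distinct symbols stay distinct
print(round(similarity(red_can, red), 3))  # ~0.7: the blend still "contains" red
print(round(similarity(red_can, can), 3))  # ~0.7: ...and can
```

The sketch only illustrates the dichotomy the comment names: the vectors behave like discrete tokens (each has a fixed identity), while similarity between them is graded rather than all-or-none.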

  • @asimo15 · 5 years ago

    Very interesting subject. Where could I get information on the following? "0:25:08 DP: Our conversation, if you measured it, the mean rate of speech, across languages by the way, it's independent of languages, it's between 4 and 5 Hertz. So, the amplitude modulation of the signal… The signal is a wave." Why was the focus on amplitude modulation and not frequency content?
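
For readers wondering what "the amplitude modulation of the signal" refers to in the quoted passage, here is a rough sketch (my own, not from the episode; the 4-5 Hz figure is simply the one quoted above) of how the dominant amplitude-modulation rate of a recording can be measured: extract the slow amplitude envelope of the waveform and find the frequency at which its power peaks.

```python
# Rough sketch: estimate the dominant amplitude-modulation rate of a waveform
# by taking its slow envelope and finding the peak of the envelope's spectrum.
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def envelope_peak_hz(waveform, fs):
    """Frequency (Hz) at which the low-frequency amplitude envelope has most power."""
    env = np.abs(hilbert(waveform))                        # instantaneous amplitude
    sos = butter(4, 20, btype="low", fs=fs, output="sos")  # keep only slow (<20 Hz) modulations
    env = sosfiltfilt(sos, env)
    env -= env.mean()                                      # drop the DC component
    power = np.abs(np.fft.rfft(env)) ** 2
    freqs = np.fft.rfftfreq(env.size, d=1.0 / fs)
    band = (freqs >= 0.5) & (freqs <= 20)                  # plausible modulation range
    return freqs[band][np.argmax(power[band])]

# Toy check: noise whose loudness rises and falls 5 times per second.
fs = 16_000
t = np.arange(0, 4.0, 1.0 / fs)
toy = (1.0 + np.sin(2 * np.pi * 5 * t)) * np.random.default_rng(0).standard_normal(t.size)
print(envelope_peak_hz(toy, fs))  # ~5.0
```

Applied to connected speech, this kind of envelope measure is what tends to peak in the 4-5 Hz range the guest mentions, roughly the syllable rate.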

  • @CorwynGC · 5 years ago +1

    4:20 The exact moment when 'podcast' became a transitive verb in my brain.

  • @peterp-a-n4743 · 5 years ago +5

    Invite Joscha Bach!

    • @earthian3658 · 5 years ago

      Yes! He would be a fantastic guest. I've wanted him on a number of podcasts, but this one would be amazing. A.I. or anything to do with consciousness.

  • @jimm1028 · 5 years ago

    I would recommend Mindscape to anyone in a zeptosecond.

  • @stirlingblackwood · 3 years ago

    26:30

  • @TheOriginalRaster · 4 years ago +2

    I've been listening to episodes in sequence, starting with Episode 1. By the end of this podcast I had formed the impression that David Poeppel and his colleagues really haven't been able to make much tangible progress. This is a great podcast, along with the others; my comment is about this field of science.
    In summary: the old model of language was crude to an extreme degree, and by the end of this podcast we see that David and associates' new model is amazingly crude, not really giving anything all that useful about how the brain processes language. Perhaps these guys should just say during an introduction to their work that they are coarsely stomping around in the dark hoping to learn something that is actually real. I'm left feeling sorry for these guys. (I appreciate the scientists' efforts; it's just that the brain is so complex and progress is difficult.)
    I wish they had made vastly more progress. Note David admits no one really even knows where information is stored. Umm, come on guys... perhaps just work on that problem. On a small scale, supply some information and verify where it is stored. Find the number 17 somewhere.
    Sean, thanks for these great pieces of work. Your podcasts are wonderfully great.
    Cheers!

    • @Petticca · 1 year ago

      @Raster
      While I would argue that your assessment is uncharitable, I can't help but feel it's not completely unfair.
      I do think that having scientists who are trying to get at the why, not just bouncing from hypothesis to hypothesis with every newest finding about the brain in a conjecture-filled fail fest, is a really good thing.
      Obviously this is a podcast with lay listeners, so I have to assume there is a reasonable basis for the Dr. saying that he couldn't see why the word "honesty", with its abstract concept, is more difficult for the brain than "red" / "can".
      Though, I have no idea how.

  • @myothersoul1953 · 5 years ago +1

    Great podcast, very interesting, thumbs up! But ... : ) Codes in the brain? Is the brain a computational device? Variables assigned values? Certainly the brain can be modeled with such, but that doesn't mean that is how the brain operates. We live in the computer age; computers are new and neat and they are changing the world, but maybe they aren't the best metaphor for everything. Why would the brain need a code or a representation? What decodes the data, and to whom is it being represented? The brain? How, by more encoding and decoding? It's a vicious cycle, not an explanation; a Cartesian theater, to put it in Daniel Dennett's words.

    • @CorwynGC · 5 years ago

      The brain needs a code merely because an entire thought, memory, or whatever cannot be represented by a single neuron.

    • @myothersoul1953 · 5 years ago

      CorwynGC Multiple neurons are required for memory and whatever, but that doesn't mean representation is required.

    • @CorwynGC · 5 years ago

      Sure it does. How could it not? Let's take '587' as an example, and let's assume that it is too big to fit in one neuron. How do two (or more) neurons contain that one piece of information?

    • @myothersoul1953 · 5 years ago

      Not all information is encoded (transferred into symbols). There are features on the Martian surface that indicate liquid water flowed there, but those are not symbols representing water. "H2O" is a set of symbols representing water. As far as I know, "H2O" hasn't been found on the surface of Mars, but that doesn't mean water wasn't once there.
      Committing something like "587" to memory will change some receptors or firing patterns in the brain, like flowing water changes a surface, but that's not a code. Those changes in the brain are not coded symbols representing the idea of "587" but in brain code instead of Arabic numerals or binary.
      Symbols or some representational code is not required for information to leave a trace.

    • @CorwynGC · 5 years ago

      "but in brain code" hence 'encoded'. Glad you have come around.

  • @robotaholic · 5 years ago

    Metzinger

  • @TheXitone · 5 years ago +2

    You need to get Chomsky on about this topic; he basically invented it.

    • @dreed7312 · 4 years ago

      Chomsky hasn't contributed anything in over 50 years. He's developed into a crazy, delusional psychotic with nothing interesting to say. A tiresome bore still revered for a contribution made half a century ago that still hasn't really explained anything.

  • @TheReferrer72 · 5 years ago

    Wow, looks like you should have invited Geoffrey Everest Hinton to listen in. "Thought vectors," anyone?
    Looks like the computer science/machine learning people are going to answer most of the important questions, and the other sciences will just have to stand aside.

  • @unclebirdman · 1 year ago

    The parts list of the "mind"... sounds like jobs for the boys and endless vague papers

    • @unclebirdman · 1 year ago

      "Encoding information"??? Is there really any "information" or is just all behaviour? I mean, the method of encoding (if there is any information) could vary throughout the brain and the brain could still function, nature isn't going to care about how simple it is to decode what we call information from what is actually living and behaving structure.

  • @leonenriquez5031 · 5 years ago

    I tuned in to see if we have a better idea of what ideas are in the physical brain... nope, we're still very far off... and these guys pooh-pooh the linguistic and semiotics models... they don't even seem to know the most recent work by some... try Fredrick Stejernfeld and I dare you to say his theory is naive...

  • @erichodge567 · 4 years ago

    1:10:00 ... Ants on stilts doing algebra.
    Gtfoh...