#1 Rasmus Hougaard: Human leadership in the age of AI

  • Published Feb 7, 2025

COMMENTS • 11

  • @vacazion2425
    @vacazion2425 26 days ago +1

    15:59 Yeah, that’s what we need, AI monitoring unconscious bias (a fake term if I ever heard one) and correcting us in the name of philosophical notions “awareness,” “wisdom,” and “compassion.” And who determines that?
    If the Stoics barely scratch the surface of these concepts, I dread folks like Martin training LLMs because they can’t help muddy the water with their conscious biases.
    To use Martin’s example of bias against fat people: body positivity for those carrying unhealthy weight in the name of compassion is neither wise nor compassionate to me. But I suspect Martin would disagree.

    • @MindfulAIPodcast
      @MindfulAIPodcast 25 days ago +2

      Thank you for your comment, @vacazion2425! I'll provide responses to the things you bring up below:
      Unconscious bias is a well-researched phenomenon within psychology, neuroscience, and sociology; it is often referred to as implicit bias or implicit association. A good way of discovering one's own unconscious biases is the Harvard Implicit Association Test (I had the link here, but it seems it made YouTube block my reply.)
      And I am certainly not advocating for AI "correcting" us. I was suggesting AI as a potential tool for growing in self-awareness and insight. Self-awareness, reflection, and examination are central to Stoicism, if you resonate with that philosophy. What I was suggesting was that AI could potentially become a powerful tool in that process of knowing oneself more and more deeply.
      I would define “awareness” as the capacity to remain fully attuned to everything that arises within our minds: thoughts, emotions, sensory perceptions, impulses, desires, intentions, and more. "Compassion" I would define as wishing another to be free from suffering, and "wisdom" as the ability to perceive reality with clarity, free from distortion, and to act with foresight, prioritizing long-term outcomes over short-term gains and seeking to benefit the many rather than just oneself or a select few. It is certainly my hope that AI can become a tool for us to develop these qualities within ourselves. I do think there is zero percent chance that today's LLMs will be able to do this, but perhaps when we move to concept-based (rather than token-based) models it will become more feasible.
      And when it comes to "body positivity," I would not say it is a particularly helpful idea, no. It is an attempt to help obese individuals deal with the unhelpful shame, guilt, and self-loathing they unfortunately so often feel. But body positivity is not a particularly wise and skilful way of achieving this, as it also has detrimental health effects (being obese has consequences such as shortening your life, limiting your ability to participate in many activities, and a host of other undesirable things).

    • @vacazion2425
      @vacazion2425 24 days ago +1

      @ I appreciate your thoughtful response. This particular subject is a pet peeve of mine, especially when you throw an AI lens on it. I have to take a couple of hours of unconscious bias training for my company every year, and 4 hours of it every 2 years for my professional license accreditation.
      Type “flaws of the implicit association test or unconscious bias theory” into ChatGPT. The biggest problem is that the test is not predictive of behavior: even if you have these alleged biases, they don’t show up as discriminatory behavior. Perhaps that’s because the test has very low retest reliability, calling its validity into question. Ultimately, unconscious bias trainings have not been shown to reduce discrimination and often cause resentment. My resentment stems from the wasted hours of training. I don’t even pay attention to them anymore; I just click through the trainings as fast as I can. They are such a waste of time.
      The podcast was otherwise very interesting.

    • @martinstrom
      @martinstrom 23 days ago +2

      @@vacazion2425 Then I certainly understand where you are coming from. My perspective is that of a thoughtful individual who wants to understand themselves better and cultivate more awareness, compassion, and wisdom as part of desired personal growth, and how AI, used skilfully, might contribute to that. Companies forcing things like "unconscious bias training" on their employees is something very different. I want to be aware of my own biases (unconscious or not) so that I can make an effort to break free of them. But that is very different from telling someone else "I believe you have these biases" and "you need to engage in this training I have devised to fix them."
      You just made me realize that coupling AI with those kinds of efforts is a potential nightmare. For example, using AI to detect attitudes in employees that are perceived as undesirable, and then using AI to try to "correct" them through various interventions (such as mandatory training programs), would be dystopian indeed.

    • @vacazion2425
      @vacazion2425 23 days ago +1

      @@martinstrom I agree with everything you just said. Sorry, I had to rant on that one issue. I just upgraded to the + version of ChatGPT and it’s both amazing and a bit scary how fast AI is evolving. Cheers.

    • @TheZGALa
      @TheZGALa 16 days ago

      From my perspective, it is in the tension between differing perspectives and opinions that the ever-fluid "truth" of right versus wrong lies, and free will, if there is such a thing, lives in that tension. There is definite danger in allowing AI to determine that for us, and to a large extent it already does, even unconsciously, through advertising and tailored feeds. Learning to stay conscious and think for ourselves is increasingly important and challenging.