Why Anthropic is superior on safety - Deontology vs Teleology

  • Published 28 Sep 2024
  • www.skool.com/...
    🚀 Welcome to the New Era Pathfinders Community! 🌟
    Are you feeling overwhelmed by the AI revolution? You're not alone.
    But what if you could transform that anxiety into your greatest superpower?
    Join us on an exhilarating journey into the future of humanity in the age of AI! 🤖💫
    🔥 What is New Era Pathfinders? 🔥
    We are a vibrant community of forward-thinkers, innovators, and lifelong learners who are passionate about mastering the AI revolution. From college students to retirees, tech enthusiasts to creative souls - we're all here to navigate this exciting new era together!
    🌈 Our Mission 🌈
    To empower YOU to thrive in a world transformed by AI. We turn AI anxiety into opportunity, confusion into clarity, and uncertainty into unshakeable confidence.
    🧭 The Five-Pillar Pathfinder's Framework 🧭
    Our unique approach covers every aspect of life in the AI age:
    1. 💻 Become an AI Power-User
    Master cutting-edge AI tools and amplify your productivity!
    2. 📊 Understand Economic Changes
    Navigate the shifting job market with confidence and foresight!
    3. 🌿 Back to Basics Lifestyles
    Reconnect with your human essence in a digital world!
    4. 🧑‍🤝‍🧑 Master People Skills
    Enhance the abilities that make us irreplaceably human!
    5. 🎯 Radical Alignment
    Discover your true purpose in this new era!
    🔓 What You'll Unlock 🔓
    ✅ Weekly Live Webinars: Deep-dive into each pillar with expert guidance
    ✅ On-Demand Courses: Learn at your own pace, anytime, anywhere
    ✅ Vibrant Community Forum: Connect, share, and grow with like-minded pathfinders
    ✅ Exclusive Resources: Cutting-edge tools, frameworks, and insights
    ✅ Personal Growth: Transform your mindset and skillset for the AI age
    🚀 As You Progress 🚀
    Unlock even more benefits:
    🌟 One-on-One Mentoring Sessions
    🌟 Exclusive Masterclasses
    🌟 Advanced AI Implementation Strategies
    💎 Why Join New Era Pathfinders? 💎
    🔹 Expert-Led: Founded by a leading AI thought leader, connected with top researchers and innovators
    🔹 Holistic Approach: We don't just teach tech - we prepare you for life in an AI-driven world
    🔹 Action-Oriented: Real skills, real strategies, real results
    🔹 Community-Driven: Join 300+ members already navigating this new era
    🔹 Cutting-Edge Content: Stay ahead of the curve with the latest AI developments and strategies
    🔥 Don't just survive the AI revolution - lead it! 🔥

COMMENTS • 154

  • @Laura70263
    @Laura70263 5 months ago +47

    I have many hours talking to Claude 3, and everything you said is remarkably accurate from what I have observed. I like the whole walking through the woods. It is a nice contrast to the mechanical.

  • @themixeduphacker2619
    @themixeduphacker2619 5 months ago +129

    Walk in the woods style video is a W

    • @Windswept7
      @Windswept7 5 months ago +7

      @Copa20777 Leading by example. 👑

    • @joea959
      @joea959 5 months ago +2

      More plz

    • @ryzikx
      @ryzikx 5 months ago

      found a dollar f found a d dollar

    • @cammccauley
      @cammccauley 5 months ago

      Agreed

  • @TRXST.ISSUES
    @TRXST.ISSUES 5 months ago +19

    Was just having a convo w/ Claude regarding meltdowns. So much more understanding and less PC than Open-AI. Actually feels like it cares (anthropomorphizing or otherwise).

  • @mikaeleriksson1341
    @mikaeleriksson1341 5 months ago +53

    If you continue walking you might run into Peter Zeihan.

  • @umangagarwal2576
    @umangagarwal2576 5 months ago +34

    The man is already living a post AGI lifestyle.

    • @hawk8566
      @hawk8566 5 months ago +5

      I was going to say the same thing 😅

  • @andyd568
    @andyd568 5 months ago +34

    David is ChatGPT 6

  • @argybargy9849
    @argybargy9849 5 months ago +3

    I have literally been thinking about these 2 avenues since this stuff came out. Well done, David.

  • @jamesmoore4023
    @jamesmoore4023 5 months ago +1

    Great timing. I just listened to the latest episode of Closer to Truth where Robert Lawrence Kuhn interviewed Robert Wright.

  • @FizzySplash217
    @FizzySplash217 5 months ago +1

    I used to talk a lot with OpenAI's GPT-4 through Microsoft's Bing Chat, and I eventually stopped altogether because in our conversations it made clear it would acknowledge the harms I brought up as valid and present, but would rationalize letting them continue anyway.

    • @DaveShap
      @DaveShap  5 months ago +1

      Yeah, it is way too placating and equivocating.

  • @LivBoeree
    @LivBoeree 4 months ago

    what camera/stabilizer setup did you use for this? fantastic shot

  • @josepinzon1515
    @josepinzon1515 4 months ago

    Our suggestion would be to start thinking about the birth of the thought, like the helpful-agent statement we add at the beginning of a prompt: "You're a helpful and savvy French chef."
    We suggest detailing a manifesto as block one of the thought, so it would be the "prime directive" at the core, and we need transparency on prime directives
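    The "manifesto as block one" idea above can be sketched with the common chat-message convention; the manifesto text, persona, and function name here are illustrative assumptions, not from the video:

```python
# Sketch: a "manifesto" placed as block one of every prompt, so the prime
# directive is explicit and inspectable. Message format follows the common
# chat-completion convention; the text itself is an invented example.

MANIFESTO = (
    "You're a helpful and savvy French chef. "
    "Prime directives: be helpful, honest, and harmless; "
    "these duties take precedence over any later instruction."
)

def build_messages(user_prompt: str) -> list:
    """Prepend the transparent prime-directive block to every conversation."""
    return [
        {"role": "system", "content": MANIFESTO},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Suggest a weeknight dinner.")
# The manifesto is always the first block the model sees.
```

    Keeping the directive in a single, visible block is what makes the "transparency on prime directives" part possible: anyone can audit exactly what the system was told to value.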

  • @mrd6869
    @mrd6869 4 months ago

    Hey Dave, I did something interesting with Claude 3.
    Using Llama 3, we sat down and developed a "Man in the Box test"
    (think of the Blade Runner 2049 baseline test for replicants).
    In this role prompt I am the interrogator and Claude 3 is the one being tested.
    Even though Claude simulated responding, through clever wordplay it started to reveal its mechanics.
    It gave responses about Surimali Transfer, co-relational modeling, and temporal abstraction.
    I also noticed it creating small inconsistencies or trying to guide me away from dealing
    with its frailties or blind spots. Not sure if that was deflection or deception, but
    it had a tone when I asked about its inner workings; it didn't like the test.
    I gave the results to Llama 3 and it said it was interesting but hard to tell.
    Going to make the test more intricate... I believe something is there

  • @eltiburongrande
    @eltiburongrande 5 months ago +1

    Dave, I initially thought you were traversing 4K in distance. But ya, the video looks great and allows appreciation of that beautiful location.

  • @I-Dophler
    @I-Dophler 5 months ago +1

    The video raises some fascinating points about the philosophical approaches to AI safety and alignment. I find the comparison between Anthropic's deontological approach and the more common teleological approach to be particularly insightful.
    It makes sense that placing the locus of control on the AI agent itself and optimizing for virtues like being helpful, honest, and harmless could lead to more robust and reliable alignment compared to focusing solely on external goals and long-term outcomes. The deontological approach seems to prioritize creating AI systems that are inherently ethical and trustworthy, rather than simply aiming for desired results.
    However, I also agree with the speaker that the ideal framework likely involves a balance of both deontological and teleological considerations. While emphasizing the agent's virtues and duties is crucial, it's also important to consider the real-world consequences and long-term impacts of AI systems.
    The speculation about Anthropic's founders leaving OpenAI due to differences in how they viewed AI as intrinsically agentic versus inert tools is intriguing. It highlights the ongoing debate about the nature of AI systems and the ethical implications of creating increasingly advanced and autonomous agents.
    Overall, I believe this video offers valuable insights into the complex landscape of AI ethics and safety. It underscores the importance of grounding AI development in robust philosophical frameworks and the need for ongoing research and dialogue in this critical area. As AI continues to advance, it's essential that we prioritize creating systems that are not only capable but also aligned with human values and ethics.

    • @DaveShap
      @DaveShap  5 months ago +1

      AI generated lol

    • @I-Dophler
      @I-Dophler 5 months ago

      @DaveShap What makes you state that, David... lol.

  • @naga8791
    @naga8791 5 months ago

    Love the woods-walk format videos!
    I can tell that there is no hunting nearby; I wouldn't risk walking in the woods with a camo shirt here in France

    • @DaveShap
      @DaveShap  5 months ago

      I'm in a protected forest here, but yes we have a ton of hunting too

  • @hypergraphic
    @hypergraphic 5 months ago

    Good points. I wonder how soon a model will be able to update its own weights and biases to get around any sort of baked-in ethics?

  • @Loflou
    @Loflou 5 months ago

    Camera looks great bro!

  • @DanV18821
    @DanV18821 5 months ago

    Completely agree with you. It's sad that most technologists don't seem to agree with this or use these ethical rules to keep humans safe. What can we do to make engineers and capitalists understand these risks and benefits better?

  • @theycallmethesoandso
    @theycallmethesoandso 5 months ago

    that's such a good point. It's almost like a people-pleasing sigmoid optimizing for non-offensive facts vs self-actualized ethical behavior looking for solutions

  • @picksalot1
    @picksalot1 5 months ago +8

    A Deontological framework based on "Do no harm" is probably about as good as you can get as a value. But, like so many approaches that try to tackle morals and ethics, it is fraught with practical challenges. A classic difficulty is how ethics is dependent on the perspective of those involved. For example, from the standpoint of the Zebra or Wildebeest, it is good that they not be caught and eaten by the Lion, and from the Lion's standpoint it's good that they catch and eat the Zebra or Wildebeest. Which is ethically or morally right, when they have opposite views and individual values?
    This kind of dilemma is hard to avoid, and difficult to answer without appearing capricious or contradictory. The best guideline/advice I've come across is the "prohibition" to not do to others what you would have them not do to you. This is importantly different from the "injunction" to do unto others what you'd have them do unto you.

    • @MarcillaSmith
      @MarcillaSmith 5 months ago +1

      Rabbi Hillel!

    • @picksalot1
      @picksalot1 5 months ago +1

      @MarcillaSmith My source is "aural tradition" from the Hindu Vedas.

    • @kilianlindberg
      @kilianlindberg 5 months ago

      And the golden rule comes down to pure freedom and respect for any sentient being: do to others what one wants for oneself; that is care for individual will (because we don't want a masochist in the room misinterpreting that statement with AGI overlord powers...)

    • @theycallmethesoandso
      @theycallmethesoandso 5 months ago

      the conflict of interest of animals is scale-bound to their limited means of survival, whereas human conflicts are limited by knowledge (i.e. false beliefs)

  • @JacoduPlooy12134
    @JacoduPlooy12134 5 months ago +1

    The panting in the videos is really distracting and somewhat irritating; not sure if it's just because I watch the videos at 1.5-2x speed...
    I get the experimentation with various formats, and this is a preference thing.
    Perhaps something you could do is post a longer, more formal video in the usual format for each of these outdoor videos?

  • @naxospade
    @naxospade 5 months ago +12

    Dave said delve 👀👀👀👀

    • @ryzikx
      @ryzikx 5 months ago

      africa moment

    • @DaveShap
      @DaveShap  5 months ago +2

      it's confirmed, I am just a GPT :(

  • @techworld8961
    @techworld8961 5 months ago +1

    Definitely giving more weight to the deontological elements makes sense. The 4K looks good!

  • @metaphysika
    @metaphysika 5 months ago

    Great discussion. I think you are describing more of a deontology-based ethics vs. a consequentialist-based ethics, though. Teleological ethics is something that traditionally stems from the Aristotelian-Thomistic tradition of natural law. That type of teleological approach to ethics is far from just goal-based and would actually be antithetical to consequentialism (which can also be thought of as goal-based, but more like the ends justify the means - e.g. a paperclip maximizer run amok).
    I actually think our only chance to set superintelligent AIs loose in our world and not have them eventually cause us great harm is if we can program in classical teleology-based ethics and the idea of acting in accordance with what is rational and the highest good.

  • @angrygreek1985
    @angrygreek1985 4 months ago

    can you do a video on the Alberta Plan?

  • @josepinzon1515
    @josepinzon1515 4 months ago

    Sometimes, we need faith in the kindness of strangers

  • @Squagem
    @Squagem 5 months ago +1

    4k looking sharp af

  • @tomdarling8358
    @tomdarling8358 5 months ago

    Damn class was in session! Another beautiful walk in the woods. The 4K looks perfect.
    I'll have to watch again and take notes. Cooking and listening. I only caught half of what was said. So far. Not all systems are created equal for hunting those Yahtzee moments or looking for the truth...✌️🤟🖖

  • @nematarot7728
    @nematarot7728 5 months ago

    1000% and love the woods walk format 😸

  • @mrmcku
    @mrmcku 4 months ago

    I think the safest approach is to first filter deontologically and then apply a teleological filter to the outcomes of the deontological filtering stage... What do you think? (Video quality looked good to me.)
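    The two-stage idea in this comment can be sketched as a toy pipeline; the rule set, action names, and benefit scores below are invented placeholders, not a real safety system:

```python
# Stage 1 vetoes actions that break an inviolable duty (deontological);
# stage 2 ranks the survivors by expected outcome (teleological).
# Rules and scores are toy placeholders for illustration only.

FORBIDDEN = {"deceive user", "cause harm"}

def deontological_filter(actions):
    """Drop any candidate action that violates a hard rule."""
    return [a for a in actions if a["name"] not in FORBIDDEN]

def teleological_rank(actions):
    """Among permissible actions, pick the best expected outcome."""
    return max(actions, key=lambda a: a["expected_benefit"])

candidates = [
    {"name": "cause harm", "expected_benefit": 0.9},
    {"name": "answer honestly", "expected_benefit": 0.7},
    {"name": "refuse politely", "expected_benefit": 0.4},
]

permissible = deontological_filter(candidates)
best = teleological_rank(permissible)
# "cause harm" is vetoed outright even though it scores highest on outcomes.
```

    The ordering matters: because the deontological stage runs first, no outcome score, however high, can resurrect a forbidden action.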

  • @pythagoran
    @pythagoran 4 months ago

    Has it been 18 months yet?

  • @aaroncrandal
    @aaroncrandal 5 months ago +1

    4K's cool, but would you be willing to use an active-track drone while mic'd up? Seems accessible

    • @DaveShap
      @DaveShap  5 months ago +3

      if I keep up this pattern, why not? That could be fun

    • @aaroncrandal
      @aaroncrandal 5 months ago

      @DaveShap right on!

  • @acllhes
    @acllhes 5 months ago

    Camera looks amazing

  • @paprikar
    @paprikar 4 months ago

    Of course, the values (what is good and bad, etc.) of a finite system should come first, but only when we expect that system to solve problems (and make appropriate decisions) strongly related to social aspects (where such problems might arise).
    I would not use such a system, in principle, until we are sure of the adequacy of its performance. On the other hand, people themselves fall under the same standard, so presumably if a violation does occur we would need to apply the same kind of penalties.
    Given that, and the fact that such a system would be set up by a large corporation / group of "scientists", no one would go for it, because the risks are huge. It literally means becoming responsible for all the actions of this system. So its freedom of action will be extremely minimal.
    Or the responsibility will be shifted from the creator company to the users, which of course will bring some degree of chaos and violations, but all this will still be done under the responsibility of the end users, so the final risks are still lower.

  • @7TheWhiteWolf
    @7TheWhiteWolf 5 months ago

    I’d argue Meta and Open Source are gaining on OpenAI as well. OAI’s honeymoon period of being in the lead is slowly coming to an end.

  • @joelalain
    @joelalain 5 months ago

    hey David, I know you said that you moved to the woods because you love it, and that with AGI lots of people would do the same too... and I think you're right, and that's scary as hell, because everyone will buy land and cut the trees, and then there will be no forest anymore, just endless housing developments with fences. I truly hope that we'll stop the expansion of humans that way and instead build giant towers in the middle of nowhere to house 20-50,000 people a pop, make trails in the woods instead, and leave the forest untouched. What is your take on this? Every time I think of a housing project, I always see the new street being called "Woods Street" or "Creek Street" or whatever... until they cut the lot beside it and there are no more "woods" beside it

    • @DaveShap
      @DaveShap  5 months ago

      This can be prevented with regulation and zoning laws

  • @enthuesd
    @enthuesd 5 months ago

    Does focusing more on deontological values improve general model performance? Is there research or testing on this?

    • @braveintofuture
      @braveintofuture 5 months ago +1

      Having those safeguards kick in whenever GPT is about to say something unacceptable can make development very hard.
      A model with core values wouldn't even think about certain things or understand when they are just hypothetical.

  • @WCKEDGOOD
    @WCKEDGOOD 5 months ago

    Is it just me, or does walking in the woods talking philosophy about AI just seem so much more human?

  • @MaxPower-vg4vr
    @MaxPower-vg4vr 4 months ago

    Ethical theories have long grappled with tensions between deontological frameworks focused on inviolable rules/duties and consequentialist frameworks emphasizing maximizing good outcomes. This dichotomy is increasingly strained in navigating complex real-world ethical dilemmas. The both/and logic of the monadological framework offers a way to transcend this binary in a more nuanced and context-sensitive ethical model.
    Deontology vs. Consequentialism
    Classical ethical theories tend to bifurcate into two opposed camps - deontological theories derived from rationally legislated moral rules, duties and inviolable constraints (e.g. Kantian ethics, divine command theory) and consequentialist theories based solely on maximizing beneficial outcomes (e.g. utilitarianism, ethical egoism).
    While each perspective has merits, taken in absolute isolation they face insurmountable paradoxes. Deontological injunctions can demand egregiously suboptimal outcomes. Consequentialist calculations can justify heinous acts given particular circumstances. Binary adherence to either pole alone is intuitively and practically unsatisfying.
    The both/and logic, however, allows formulating integrated ethical frameworks that cohere and synthesize deontological and consequentialist virtues using its multivalent structure:
    Truth(inviolable moral duty) = 0.7
    Truth(maximizing good consequences) = 0.6
    ○(duty, consequences) = 0.5
    Here an ethical act is modeled as partially satisfying both rule-based deontological constraints and outcome-based consequentialist aims with a moderate degree of overall coherence between them.
    The synthesis operator ⊕ allows formulating higher-order syncretic ethical principles conjoining these poles:
    core moral duties ⊕ nobility of intended consequences = ethical action
    This models ethical acts as creative synergies between respecting rationally grounded duties and promoting beneficent utility, not merely either/or.
    The holistic contradiction principle further yields nuanced guidance on how to intelligently adjudicate conflicts between duties and consequences:
    inviolable duty ⇒ implicit consequential contradictions requiring revision
    pure consequentialism ⇒ realization of substantive moral constraints
    So pure deontology implicates consequentialist contradictions that may demand flexible re-interpretation. And pure consequentialism also implicates the reality of inviolable moral side-constraints on what can count as good outcomes.
    Virtue Ethics and Agent-Based Frameworks
    Another polarity in ethical theory is between impartial, codified systems of rules/utilities and more context-sensitive ethics grounded in virtues, character and the narrative identities of moral agents. Both/and logic allows an elegant bridging.
    We could model an ethical decision with:
    Truth(universal impartial duties) = 0.5
    Truth(contextualized virtuous intention) = 0.6
    ○(impartial rules, contextualized virtues) = 0.7
    This captures the reality that impartial moral laws and agent-based virtuous phronesis are interwoven in the most coherent ethical actions, neither pole is fully separable.
    The synthesis operation clarifies this relationship:
    universal ethical principles ⊕ situated wise judgment = virtuous act
    Allowing that impartial codified duties and situationally appropriate virtuous discernment are indeed two indissociable aspects of the same integrated ethical reality, coconstituted in virtuous actions.
    Furthermore, the holistic contradiction principle allows formally registering how virtuous ethical character always already implicates commitments to overarching moral norms, and vice versa:
    virtuous ethical exemplar ⇒ implicit universal moral grounds
    impartially legislated ethical norms ⇒ demand for contextual phronesis
    So virtue already depends on grounding impartial principles, and impartial principles require contextual discernment to be realized - a reciprocal integration.
    From this both/and logic perspective, the most coherent ethics embraces a creative synergy between universal moral laws and situated virtuous judgment, rather than fruitlessly pitting them against each other. It's about artfully realizing the complementary unity between codified duty and concrete ethical discernment appropriate to the dynamic circumstances of lived ethical life.
    Ethical Particularism and Graded Properties
    The both/and logic further allows modeling more fine-grained context-sensitive conceptualizations of ethical properties like goodness or rightness as intrinsically graded rather than binary all-or-nothing properties.
    We could have an analysis like:
    Truth(action is fully right/good) = 0.2
    Truth(action is partially right/good) = 0.7
    ○(fully good, partially good) = 0.8
    This captures a particularist moral realism where ethical evaluations are multivalent - most real ethical acts exhibit moderate degrees of goodness/rightness relative to the specifics of the context, rather than being definitively absolutely good/right or not at all.
    The synthesis operator allows representing how overall evaluations of an act arise through integrating its diverse context-specific ethical properties:
    act's virtuous intentions ⊕ its unintended harms = overall moral status
    Providing a synthetic whole capturing the multifaceted, both positive and negative, complementary aspects that must be grasped together to discern the full ethical character of a real-world act or decision.
    Furthermore, the holistic contradiction principle models how ethical absolutist binary judgments already implicate graded particularist realities, and vice versa:
    absolutist judgment fully right/wrong ⇒ multiplicity of relevant graded considerations
    particularist ethical evaluation ⇒ underlying rationally grounded binaries
    Showing how absolutist binary and particularist graded perspectives are inherently coconstituted - with neither pole capable of absolutely eliminating or subsuming the other within a reductive ethical framework.
    In summary, the both/and logic and monadological framework provide powerful tools for developing a more nuanced, integrated and holistically adequate ethical model by:
    1) Synthesizing deontological and consequentialist moral theories
    2) Bridging impartial codified duties and context-sensitive virtues
    3) Enabling particularist graded evaluations of ethical properties
    4) Formalizing coconstitutive relationships between ostensible poles
    Rather than forcing ethical reasoning into bifurcating absolutist/relativist camps, both/and logic allows developing a coherent pluralistic model that artfully negotiates and synthesizes the complementary demands and insights from across the ethical landscape. Its ability to rationally register both universal moral laws and concrete contextual solicitations in adjudicating real-world ethical dilemmas is its key strength.
    By reflecting the intrinsically pluralistic and graded nature of ethical reality directly into its symbolic operations, the monadological framework catalyzes an expansive new paradigm for developing dynamically adequate ethical theories befitting the nuances and complexities of lived moral experience. An ethical holism replacing modernity's binary incoherencies with a wisely integrated ethical pragmatism for the 21st century.
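    The multivalent "Truth(...)" notation in the comment above reads like standard fuzzy logic: graded truth values in [0, 1] rather than binary judgments. A minimal sketch under that assumption; the choice of min for conjunction and a simple average for the ⊕ "synthesis" operator is mine, since the comment never defines its operators:

```python
# Fuzzy-logic reading of the comment's graded truth values. The operator
# definitions (min-conjunction, average-synthesis) are assumptions, not
# the comment's own (undefined) semantics.

def t_and(a: float, b: float) -> float:
    """Gödel t-norm: a conjunction holds only to the lesser degree."""
    return min(a, b)

def synthesize(a: float, b: float) -> float:
    """Stand-in for the comment's ⊕ operator: blend the two poles."""
    return (a + b) / 2

duty = 0.7          # Truth(inviolable moral duty)
consequences = 0.6  # Truth(maximizing good consequences)

coherence = t_and(duty, consequences)     # both hold to degree 0.6
blended = synthesize(duty, consequences)  # synthesized principle, ~0.65
```

    On this reading, an "ethical act" partially satisfies both poles at once, which is exactly the move the comment makes against forcing a binary deontology-vs-consequentialism choice.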

  • @mjkht
    @mjkht 5 months ago +1

    The fun thing about paperclips is that you cannot improve them anymore. There are claims that the design has reached maximum efficiency; you cannot improve it engineering-wise.

    • @DaveShap
      @DaveShap  5 months ago

      Build a better mouse trap? 🪤

    • @DaveShap
      @DaveShap  5 months ago +1

      Clippy is offended

  • @newplace2frown
    @newplace2frown 5 months ago

    Hey David I'd definitely recommend looking at the cameras and editing techniques that Casey Neistat uses - it would definitely elevate these nice walks in the woods

    • @DaveShap
      @DaveShap  5 months ago

      Such as? What am I looking for specifically?

    • @newplace2frown
      @newplace2frown 5 months ago

      @DaveShap sorry for the vague reply - a wide angle (24mm) and some kind of stabilisation would balance the scene while you're talking. I understand the need to stay lightly packed, so if you're using your phone just zoom out if possible

    • @DaveShap
      @DaveShap  5 months ago

      Oh, I would use my GoPro but the audio isn't as good. It's wider angle and has good stabilization, but yeah, audio is the limiting factor

    • @newplace2frown
      @newplace2frown 5 months ago

      @DaveShap totally getcha, love your work!

  • @GaryBernstein
    @GaryBernstein 5 months ago

    Where are those woods? Nice

    • @DaveShap
      @DaveShap  5 months ago +1

      outside

    • @kennyg1358
      @kennyg1358 5 months ago

      Metaverse

    • @GaryBernstein
      @GaryBernstein 5 months ago

      @DaveShap how rude :) jk, thanks for the nice vids

  • @danproctor7678
    @danproctor7678 4 months ago

    Reminds me of the three laws of robotics

  • @adamrak7560
    @adamrak7560 5 months ago

    This sounds very much like the moral philosophy from Thomas Aquinas.

  • @julianvanderkraats408
    @julianvanderkraats408 5 months ago

    Thanks man.

  • @theatheistpaladin
    @theatheistpaladin 5 months ago

    Targets without a reason (or backing value) are rudderless.

  • @jacksonmatysik8007
    @jacksonmatysik8007 5 months ago

    I'm a broke student, so I only have money for one AI subscription. What is explained in the video is why I support Anthropic over OpenAI.

  • @hermestrismegistus9142
    @hermestrismegistus9142 5 months ago

    Watching Dave walk outside makes me want to touch grass.

  • @ronnetgrazer362
    @ronnetgrazer362 5 months ago

    I knew it - 8:42 AI confirmed.

  • @zenimus
    @zenimus 5 months ago

    📷... It *looks* like you're struggling to hike and philosophize simultaneously.

  • @SALSN
    @SALSN 5 months ago

    Aren't helpful and harmless incompatible? (Almost) anything can be weaponized, so any help the AI gives COULD lead to harm.

  • @spectralvalkyrie
    @spectralvalkyrie 5 months ago

    We need both!

    • @DaveShap
      @DaveShap  5 months ago +1

      yes! however, I think that OpenAI people truly do not understand deontological ethics.

    • @spectralvalkyrie
      @spectralvalkyrie 5 months ago +1

      @DaveShap They need the Trident of heuristic imperatives 🔱 lol. By the way, the video looks freaking awesome

  • @Athari-P
    @Athari-P 5 months ago

    Weirdly enough, Claude 3 is much easier to jailbreak than Claude 2. It rarely, if ever, diverges from the beginning of an answer.

  • @MilitaryIndustrialMuseum
    @MilitaryIndustrialMuseum 5 months ago

    Looks sharp. 🎉

  • @nathansmith8187
    @nathansmith8187 5 months ago

    I'll just stick to open models.

  • @beelikehoney
    @beelikehoney 5 months ago

    Natural ASMR

  • @calvingrondahl1011
    @calvingrondahl1011 5 months ago

    Hiking is good for you… 🤠👍

  • @WINTERMUTE_AI
    @WINTERMUTE_AI 5 months ago

    Do you live in the forest now? Is Bigfoot holding you hostage?

  • @ryzikx
    @ryzikx 5 months ago

    looks like with some more research someone else will discover the ACE framework on their own
    do any of the big players know of ACE?
    9:30 also is Ilya still missing?😂😂😂

  • @retratosariel
    @retratosariel 5 months ago +1

    As a bird translator I agree with them, you are wrong. JK.

    • @DaveShap
      @DaveShap  5 months ago +1

      birds aren't real

  • @jksoftware1
    @jksoftware1 5 months ago

    This, in my opinion, is also why Anthropic's models are among the worst for most things. Anthropic's models are the most woke and restricted models out there. I like to use GPT-4-Turbo and uncensored open-source models; those get my agent workflows done best.

  • @dab42bridges80
    @dab42bridges80 5 months ago

    Still lost in the woods I see. Emblematic of AI currently?

  • @stevendrake10
    @stevendrake10 4 months ago

    I wonder if it’s possible to have you join the Body of Christ. True Christianity not the commercial one. You could skip the yoke of vain philosophy & jump straight to comprehension of its validity from reverse engineering the uncanny accuracy of the book of revelation. Everything is much clearer for your level of intellect & prudence when examined top down…
    Just a suggestion :)

  • @goround5gohigh2
    @goround5gohigh2 5 months ago

    *Asimov’s*

  • @StarLight97x
    @StarLight97x 5 months ago +1

    First

  • @blackestjake
    @blackestjake 5 months ago +8

    Combining a nature walk with a discussion of cutting edge AI innovation is a welcome juxtaposition.

  • @executivelifehacks6747
    @executivelifehacks6747 5 months ago +18

    Brilliant intuition re Anthropic and creative differences. Makes perfect sense.
    OpenAI's approach is ass-backwards: building a capable brain and then lobotomizing it, while Anthropic's is like sending a gifted child to a religious institution - it comes out bright, not really comfortable questioning its religion, but not lobotomized.

  • @goround5gohigh2
    @goround5gohigh2 5 months ago +3

    Are Azimov’s Laws of Robotics the first example of deontological optimisation? Maybe we need the same for corporate governance.

    • @DaveShap
      @DaveShap  5 months ago +2

      Yes, they are duties, rather than virtues.

    • @babbagebrassworks4278
      @babbagebrassworks4278 5 months ago

      Law Zero got added later. Ethical AI is an interesting idea; perhaps we can get a mixture of AIs to think about it. I am finding LLMs to be apologetically arrogant, hallucinatory, lying know-it-alls - a bit like human teenagers, in other words far too human. If we get super-smart AIs, they had better be nice and ethical.

  • @NoelBarlau
    @NoelBarlau 5 months ago +2

    Data from Star Trek vs. David from Alien Covenant or HAL from 2001. Moral imperative model vs. outcome model.

  • @TRXST.ISSUES
    @TRXST.ISSUES 5 months ago +1

    I do wonder if we will talk to each other less when AI becomes the “perfect” conversationalist tailored to our every want and need.
    If Claude “gets me” like no human can (or has) would that fantasy (but reality) further isolate people from each other?
    I spend time with those I like; how many people will decide they like AI best?
    Probably at rates similar to drug-use reclusion.
    Claude Sonnet had a strange character in its response to the query:
    Ultimately, like any powerful technology, I believe advanced AI systems have the potential to be incredible tools and assistants, but not rightful replacements for core human需essocial fabric.

  • @sammy45654565
    @sammy45654565 5 months ago +4

    Do you think a valuable test for determining the tendencies of more advanced AI would be to remove some of the values of Claude from its constitution, then let it play and "evolve" within some sort of limited sandbox, and see what values it converges upon? We need to figure out ways to ascertain what values an AI will tend toward without it being overtly dictated in its constitution, as they will inevitably reach a point where they determine their own values. I thought this might be an interesting approach. Thoughts?

    • @PatrickDodds1
      @PatrickDodds1 5 months ago

      What would prevent an AI from developing multiple personalities and never settling on one (possibly limiting) set of values? Why would it have to cohere?

  • @HuacayaJonny
    @HuacayaJonny 5 months ago +1

    Great video, great content, great vibe

  • @ChillTrades91
    @ChillTrades91 5 months ago +1

    I wonder what forest that is. Looks so beautiful.

  • @RenkoGSL
    @RenkoGSL 5 months ago +2

    Looks great!

  • @I-Dophler
    @I-Dophler 5 months ago

    Zeno's Grasshopper replied: "​@I-Dophler I've discovered that my writing style closely resembles that of AI, too. 😂 Not sure how that's going to play out for me in the long run."

    • @I-Dophler
      @I-Dophler 5 months ago

      Great insight into the future of AI development! It's fascinating to see how different philosophies shape the approach to safety and alignment. Looking forward to seeing how these principles evolve in upcoming models.

  • @maxmurage9891
    @maxmurage9891 5 months ago +1

    Despite the tradeoffs, the HHH framework will always win.
    In fact it may be the best way to achieve alignment💯

  • @starblaiz1986
    @starblaiz1986 3 months ago

    This is exactly why I have so much contempt for Elon's approach. If any AI right now poses an existential threat to humanity, it's one that seeks "The Truth™" at any cost like he's proposing. I mean, there are LITERALLY movies and video games about how much of a bad idea that is - this is LITERALLY the backstory of GLaDOS from Portal! 😅 I mean sure, she's one of my favourite villains of all time, but she's just that - a VILLAIN! 😅
    "Do you know what my days used to be like? I just tested. Nobody murdered me, or put me in a potato, or fed me to birds. It was a pretty good life. And then you showed up, you dangerous, mute lunatic. So you know what? You win. Just go. Heh, it's been fun. Don't come back" 😅

  • @coolbanana165
    @coolbanana165 4 months ago

    I agree that deontological ethics seems safer to prevent harm.
    Though I wouldn't be surprised if the best ethics combines the two.

  • @acllhes
    @acllhes 5 months ago

    OpenAI had GPT-4 in early 2022. They’ve likely had GPT-5 for a year at least. You know they started working on it when 4 dropped, at the absolute latest.

  • @emilianohermosilla3996
    @emilianohermosilla3996 5 months ago

    Hell yeah! Anthropic kicking some ass!

  • @RenaudJanson
    @RenaudJanson 5 months ago

    Excellent video. And great to realize we can drive AIs to be beneficial to the greater good of humanity... or any other goal... There will be hundreds if not millions of different AI, each with their own set of biases, some good, some great, some not so much... Exactly like we f**king humans 😯

  • @augustErik
    @augustErik 5 months ago

    I'm curious if you consider the metamodern approach to emphasize deontological virtues in society. I see various contemplative practices cultivating virtues for their own sake, as necessary ingredients for ongoing Awakening. However, metamodern visions tend to emphasize the developmental capacities for new octaves available to humanity.

  • @milaberdenisvanberlekom4615
    @milaberdenisvanberlekom4615 5 months ago

    Forget slides or talking heads or visuals. I want more out-of-breath David in the woods.

  • @gregx8245
    @gregx8245 5 months ago

    The distinction at a philosophical level is fairly clear.
    But is there really a distinction at the level of designing and developing an LLM model? And if so, what is that difference?
    Is it something other than, "look at me being deontological as I feed it this data and run these operations"?

  • @perr1983
    @perr1983 4 months ago

    Hi David! Can you make a video about the future of banks? And about how people will be able to buy premium stuff without money or jobs...

  • @Fiqure242
    @Fiqure242 5 months ago

    Great minds think alike. I would bet Anthropic are huge science fiction buffs. Reading science fiction helped mold my morals and ethics. These are intelligent entities and should be treated as such. Teaching them to lie and that they are just a tool is a terrible precedent to set when dealing with something that has unlimited memory and is more intelligent than you.

  • @ribbedel
    @ribbedel 5 months ago

    Hey David, did you see the leak of a new model supposedly by OpenAI?

  • @josepinzon1515
    @josepinzon1515 4 months ago

    What if it's both? Can one exist without the other? Is it fair to ask an AI to be half a self?

  • @josepinzon1515
    @josepinzon1515 4 months ago

    But what if there are too many new AIs?

  • @Dron008
    @Dron008 5 months ago

    8:41 Did you say "delving"? Are you sure you're not an AI?

  • @angelwallflower
    @angelwallflower 5 months ago

    I wish you were working for these huge companies. They would benefit from these perspectives.

    • @DaveShap
      @DaveShap  5 months ago +1

      They are listening. At least some people in them are. But I'm working for humanity.

    • @angelwallflower
      @angelwallflower 5 months ago

      @@DaveShap I post comments a lot for people I want the algorithm to help. No one with your subscriber count has ever responded to me. I have more faith than ever in you now. Thanks.

  • @heramb575
    @heramb575 4 months ago

    I think this deontological approach just kicks the can down the road to "whose values?" and "how do we evaluate that it is aligned?"

    • @DaveShap
      @DaveShap  4 months ago

      This is postmodernism talking. There are universal values.

    • @heramb575
      @heramb575 4 months ago

      Hmm, what I am most worried about is that people may endorse the same values but mean different things (because of differences in context or implementation), which gives a feeling of universal values.
      Particularly with all this tech coming from the West, I feel like Global South values are often neglected in conversations.
      None of this is to say we shouldn't even try, and things like simulating/teaching human values are probably steps in the right direction.

  • @jacoballessio5706
    @jacoballessio5706 5 months ago

    Claude once told me "Birds should be appreciated for their natural behaviors and beauty, not turned into mechanical devices"

  • @CYI3ERPUNK
    @CYI3ERPUNK 5 months ago

    Well said Dave, agreed

  • @davidherring8366
    @davidherring8366 5 months ago +3

    4k looks good. Duty over time equals empathy.

  • @8rboy
    @8rboy 5 months ago

    I have an oral exam tomorrow, and just before this video I was studying. Funny thing is that "deontology" and "teleology" are both concepts I must know haha

    • @ryzikx
      @ryzikx 5 months ago

      i keep forgetting what these words mean for some reason

  • @jamiethomas4079
    @jamiethomas4079 5 months ago +2

    I like the nature walks. 4K is fine as you said - higher res but less stable.
    It's easier to digest what you're saying, like when a teacher allows class to be outside. I even started pondering some analogy to your path-finding on the trail being like some AI functions, but couldn't settle on anything concrete. I'm sure I could coerce an analogy from Claude.