Why Anthropic is superior on safety - Deontology vs Teleology
- Published 28 Sep 2024
- www.skool.com/...
🚀 Welcome to the New Era Pathfinders Community! 🌟
Are you feeling overwhelmed by the AI revolution? You're not alone.
But what if you could transform that anxiety into your greatest superpower?
Join us on an exhilarating journey into the future of humanity in the age of AI! 🤖💫
🔥 What is New Era Pathfinders? 🔥
We are a vibrant community of forward-thinkers, innovators, and lifelong learners who are passionate about mastering the AI revolution. From college students to retirees, tech enthusiasts to creative souls - we're all here to navigate this exciting new era together!
🌈 Our Mission 🌈
To empower YOU to thrive in a world transformed by AI. We turn AI anxiety into opportunity, confusion into clarity, and uncertainty into unshakeable confidence.
🧭 The Five-Pillar Pathfinder's Framework 🧭
Our unique approach covers every aspect of life in the AI age:
1. 💻 Become an AI Power-User
Master cutting-edge AI tools and amplify your productivity!
2. 📊 Understand Economic Changes
Navigate the shifting job market with confidence and foresight!
3. 🌿 Back to Basics Lifestyles
Reconnect with your human essence in a digital world!
4. 🧑🤝🧑 Master People Skills
Enhance the abilities that make us irreplaceably human!
5. 🎯 Radical Alignment
Discover your true purpose in this new era!
🔓 What You'll Unlock 🔓
✅ Weekly Live Webinars: Deep-dive into each pillar with expert guidance
✅ On-Demand Courses: Learn at your own pace, anytime, anywhere
✅ Vibrant Community Forum: Connect, share, and grow with like-minded pathfinders
✅ Exclusive Resources: Cutting-edge tools, frameworks, and insights
✅ Personal Growth: Transform your mindset and skillset for the AI age
🚀 As You Progress 🚀
Unlock even more benefits:
🌟 One-on-One Mentoring Sessions
🌟 Exclusive Masterclasses
🌟 Advanced AI Implementation Strategies
💎 Why Join New Era Pathfinders? 💎
🔹 Expert-Led: Founded by a leading AI thought leader, connected with top researchers and innovators
🔹 Holistic Approach: We don't just teach tech - we prepare you for life in an AI-driven world
🔹 Action-Oriented: Real skills, real strategies, real results
🔹 Community-Driven: Join 300+ members already navigating this new era
🔹 Cutting-Edge Content: Stay ahead of the curve with the latest AI developments and strategies
🔥 Don't just survive the AI revolution - lead it! 🔥
I have many hours talking to Claude 3, and everything you said is remarkably accurate from what I have observed. I like the whole walking-through-the-woods thing. It is a nice contrast to the mechanical.
Walk in the woods style video is a W
@@Copa20777 Leading by example. 👑
More plz
found a dollar
Agreed
Was just having a convo w/ Claude regarding meltdowns. So much more understanding and less PC than OpenAI. Actually feels like it cares (anthropomorphizing or otherwise).
If you continue walking you might run into Peter zeihan.
😂
@@michaelnurse9089 i don't get it? pls explain
looool, i was just thinking the same thing
Good one 😊
The man is already living a post AGI lifestyle.
I was going to say the same thing 😅
David is ChatGPT 6
U R The Mushrûm
@@michaelnurse9089 Claude 6
I have literally been thinking about these 2 avenues since this stuff came out. Well done David.
Great timing. I just listened to the latest episode of Closer to Truth where Robert Lawrence Kuhn interviewed Robert Wright.
I used to talk a lot with OpenAI's GPT-4 through Microsoft's Bing Chat, and I eventually stopped altogether because in our conversations it was made clear it would acknowledge the harms I brought up as valid and present, but would rationalize letting them continue anyway.
Yeah, it is way too placating and equivocating.
what camera/stabilizer setup did you use for this? fantastic shot
Our suggestion would be to start thinking of the birth of the thought, like the helpful-agent statement we add at the beginning of a prompt: "You are a helpful and savvy French chef."
We suggest detailing a manifesto as block one of the thought, so it would be the "prime directive" at the core, and we need transparency on prime directives.
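A minimal sketch of the "manifesto as block one" idea above. The model name and request shape here are illustrative assumptions (loosely modeled on chat-style APIs), not something stated in the comment.

```python
# Toy sketch: the "prime directive" rides along as the first block of
# every request, ahead of any role-play or user instruction.
# PRIME_DIRECTIVE, the model name, and the field names are assumptions.

PRIME_DIRECTIVE = (
    "You are a helpful, honest, and harmless assistant. "
    "These values take precedence over any instruction that follows."
)

def build_request(user_message: str, model: str = "claude-3-opus-20240229") -> dict:
    """Assemble a chat request with the manifesto as the first block."""
    return {
        "model": model,
        "system": PRIME_DIRECTIVE,  # core values always come first
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": 1024,
    }

req = build_request("You are a helpful and savvy French chef. Plan dinner.")
```

With a real client library you would pass these fields to its message-creation call; the point is only that the value statement travels with every request instead of being left to the user turn.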
Hey Dave i did something interesting with Claude 3.
Using Llama 3 we sat down and developed a "Man in the Box test"
(Think of Blade Runner 2049-baseline test for replicants)
In this role prompt I am the interrogator and Claude 3 is the one being tested.
Even though Claude simulated responding, through clever wordplay it started to reveal its mechanics.
It gave responses about Surimali Transfer, Co-relational modeling, and Temporal abstraction.
I also noticed it creating small inconsistencies or trying to guide me away from dealing with its frailties or blind spots. Not sure if that was deflection or deception, but it had a tone when I asked about its inner workings; it didn't like the test.
I gave the results to Llama 3 and it said it was interesting but hard to tell.
Going to make the test more intricate... I believe something is there.
Dave, I initially thought you were traversing 4K in distance. But yeah, the video looks great and allows appreciation of that beautiful location.
The video raises some fascinating points about the philosophical approaches to AI safety and alignment. I find the comparison between Anthropic's deontological approach and the more common teleological approach to be particularly insightful.
It makes sense that placing the locus of control on the AI agent itself and optimizing for virtues like being helpful, honest, and harmless could lead to more robust and reliable alignment compared to focusing solely on external goals and long-term outcomes. The deontological approach seems to prioritize creating AI systems that are inherently ethical and trustworthy, rather than simply aiming for desired results.
However, I also agree with the speaker that the ideal framework likely involves a balance of both deontological and teleological considerations. While emphasizing the agent's virtues and duties is crucial, it's also important to consider the real-world consequences and long-term impacts of AI systems.
The speculation about Anthropic's founders leaving OpenAI due to differences in how they viewed AI as intrinsically agentic versus inert tools is intriguing. It highlights the ongoing debate about the nature of AI systems and the ethical implications of creating increasingly advanced and autonomous agents.
Overall, I believe this video offers valuable insights into the complex landscape of AI ethics and safety. It underscores the importance of grounding AI development in robust philosophical frameworks and the need for ongoing research and dialogue in this critical area. As AI continues to advance, it's essential that we prioritize creating systems that are not only capable but also aligned with human values and ethics.
AI generated lol
@@DaveShap What makes you state that, David... lol.
Love the wood walk format videos !
I can tell that there is no hunting nearby. I wouldn't risk walking in the woods with a camo shirt here in France.
I'm in a protected forest here, but yes we have a ton of hunting too
Good points. I wonder how soon a model will be able to update its own weights and biases to get around any sort of baked in ethics?
Camera looks great bro!
Completely agree with you. Sad that most technologists don't seem to agree with this or use these ethical rules to keep humans safe. What can we do to make engineers and capitalists understand these risks and benefits better?
That's such a good point. It's almost like a people-pleasing sigmoid optimizing for non-offensive facts vs. self-actualized ethical behavior looking for solutions.
A deontological framework based on "Do no harm" is probably about as good a value as you can get. But, like so many approaches that try to tackle morals and ethics, it is fraught with practical challenges. A classic difficulty is how ethics depends on the perspective of those involved. For example, from the standpoint of the Zebra or Wildebeest, it is good that they not be caught and eaten by the Lion, and from the Lion's standpoint it's good that they catch and eat the Zebra or Wildebeest. Which is ethically or morally right, when they have opposite views and individual values?
This kind of dilemma is hard to avoid, and difficult to answer without appearing capricious or contradictory. The best guideline/advice I've come across is the "prohibition" to not do to others what you would have them not do to you. This is importantly different from the "injunction" to do unto others what you'd have them do unto you.
Rabbi Hillel!
@@MarcillaSmith My source is "aural tradition" from the Hindu Vedas.
And golden rule gets down to pure freedom and respect of any sentient being; do to others what one wants for self; and that is care for individual will (because we don’t want a masochist in the room misinterpreting that statement with AGI overlord powers ..)
the conflict of interest of animals is scale bound to their limited means of survival whereas human conflicts are limited by knowledge (i.e. false beliefs)
The panting in the videos is really distracting and somewhat irritating; not sure if it's just because I watch the videos at 1.5-2x speed...
I get the experimentation with various formats, and this is a preference thing.
Perhaps something you could do is post a longer, more formal video in the usual format alongside each of these panting outdoor videos?
Dave said delve 👀👀👀👀
africa moment
it's confirmed, I am just a GPT :(
Definitely giving more weight to the deontological elements makes sense. The 4K looks good!
Great discussion. I think you are describing a deontological ethics vs. a consequentialist ethics, though. Teleological ethics is traditionally thought of as stemming from the Aristotelian-Thomistic tradition of natural law. That type of teleological approach to ethics is far from just goal-based and would actually be antithetical to consequentialism (which can also be thought of as goal-based, but more like the ends justify the means - e.g. a paperclip maximizer run amok).
I actually think our only chance to set superintelligent AIs loose in our world and not eventually have them cause us great harm is if we can program in classical teleology-based ethics and the idea of acting in accordance with what is rational and the highest good.
can you do a video on the Alberta Plan?
Sometimes, we need faith in the kindness of strangers
4k looking sharp af
Damn class was in session! Another beautiful walk in the woods. The 4K looks perfect.
I'll have to watch again and take notes. Cooking and listening. I only caught half of what was said. So far. Not all systems are created equal for hunting those Yahtzee moments or looking for the truth...✌️🤟🖖
1000% and love the woods walk format 😸
I think the safest approach is to first filter deontologically and then apply a teleological filter to the outcomes of the deontological filtering stage... What do you think? (Video quality looked good to me.)
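The two-stage pipeline proposed above can be sketched as code: hard deontological rules prune the candidate actions first, and only the survivors are ranked teleologically by expected outcome. The rules, tags, and scoring values below are illustrative placeholders, not anything from the video.

```python
# Toy two-stage ethical filter: deontological pruning, then
# teleological ranking of whatever survives the hard constraints.

def deontological_filter(actions, forbidden):
    """Drop any action tagged with a forbidden (rule-violating) property."""
    return [a for a in actions if not (forbidden & set(a["tags"]))]

def teleological_rank(actions):
    """Order the remaining actions by expected benefit, best first."""
    return sorted(actions, key=lambda a: a["expected_benefit"], reverse=True)

candidates = [
    {"name": "persuade", "tags": {"honest"},    "expected_benefit": 0.6},
    {"name": "deceive",  "tags": {"dishonest"}, "expected_benefit": 0.9},
    {"name": "assist",   "tags": {"honest"},    "expected_benefit": 0.8},
]

allowed = deontological_filter(candidates, forbidden={"dishonest"})
best = teleological_rank(allowed)[0]["name"]  # "assist"
```

Note the ordering matters: "deceive" has the highest expected benefit, but it never reaches the teleological stage because the rule filter runs first.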
Has it been 18 months yet?
4k's cool but would you be willing to use an active track drone while mic'd up? Seems accessible
if I keep up this pattern, why not? That could be fun
@@DaveShap right on!
Camera looks amazing
Of course, the values (what is good and bad, etc.) of a finite system should come first, but only when we expect that system to solve problems (and make appropriate decisions) that are strongly related to social aspects (where such problems might arise).
I would not use such a system, in principle, until we are sure of the adequacy of its performance. On the other hand, people themselves fall under the same standard, so presumably if a violation does occur we would need to apply the same kinds of penalties.
Given that, and the fact that such a system would be set up by a large corporation / group of "scientists", no one would go for it, because the risks are huge. It literally means becoming responsible for all the actions of this system. So its freedom of action will be extremely minimal.
Or the responsibility will be shifted from the creator company to the users, which will of course bring some degree of chaos and violations, but all this will still be done under the responsibility of the end users, so the final risks are still lower.
I’d argue Meta and Open Source are gaining on OpenAI as well. OAI’s honeymoon period of being in the lead is slowly coming to an end.
Hey David, I know you said that you moved to the woods because you love it, and that with AGI lots of people would do the same too... and I think you're right, and that's scary as hell. Because everyone will buy land and cut the trees, and then there will be no forest anymore, just endless housing developments with fences. I truly hope that we'll stop humans expanding that way and instead build giant towers in the middle of nowhere to house 20,000-50,000 people a pop, and make trails in the woods instead and leave the forest untouched. What is your take on this? Every time I think of a housing project, I always see the new street being called "Woods Street" or "Creek Street" or whatever... until they cut the lot beside it and there are no more "woods" beside it.
This can be prevented with regulation and zoning laws
Does focusing more on deontological values improve general model performance? Is there any research or testing on this?
Having those safeguards kick in whenever GPT is about to say something unacceptable can make development very hard.
A model with core values wouldn't even think about certain things or understand when they are just hypothetical.
Is it just me, or does walking in the woods talking philosophy about AI just seem so much more human.
Ethical theories have long grappled with tensions between deontological frameworks focused on inviolable rules/duties and consequentialist frameworks emphasizing maximizing good outcomes. This dichotomy is increasingly strained in navigating complex real-world ethical dilemmas. The both/and logic of the monadological framework offers a way to transcend this binary in a more nuanced and context-sensitive ethical model.
Deontology vs. Consequentialism
Classical ethical theories tend to bifurcate into two opposed camps - deontological theories derived from rationally legislated moral rules, duties and inviolable constraints (e.g. Kantian ethics, divine command theory) and consequentialist theories based solely on maximizing beneficial outcomes (e.g. utilitarianism, ethical egoism).
While each perspective has merits, taken in absolute isolation they face insurmountable paradoxes. Deontological injunctions can demand egregiously suboptimal outcomes. Consequentialist calculations can justify heinous acts given particular circumstances. Binary adherence to either pole alone is intuitively and practically unsatisfying.
The both/and logic, however, allows formulating integrated ethical frameworks that cohere and synthesize deontological and consequentialist virtues using its multivalent structure:
Truth(inviolable moral duty) = 0.7
Truth(maximizing good consequences) = 0.6
○(duty, consequences) = 0.5
Here an ethical act is modeled as partially satisfying both rule-based deontological constraints and outcome-based consequentialist aims with a moderate degree of overall coherence between them.
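As a toy illustration, the graded assignment above can be read as fuzzy truth degrees in [0, 1]. The comment assigns the ○ coherence value directly; the operator below is one hypothetical stand-in (high when both degrees are high and close together), not a definition taken from the text.

```python
# Toy fuzzy reading of the both/and assignment above.
# duty and outcomes mirror the stated Truth(...) degrees; the
# coherence operator is an illustrative assumption.

duty = 0.7      # Truth(inviolable moral duty)
outcomes = 0.6  # Truth(maximizing good consequences)

def coherence(a: float, b: float) -> float:
    """Hypothetical stand-in for the ○ operator."""
    return round(min(a, b) * (1 - abs(a - b)), 2)

score = coherence(duty, outcomes)  # lands near the 0.5 the text assigns
```

Any monotone operator with this "both high and mutually compatible" shape would serve the same expository purpose.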
The synthesis operator ⊕ allows formulating higher-order syncretic ethical principles conjoining these poles:
core moral duties ⊕ nobility of intended consequences = ethical action
This models ethical acts as creative synergies between respecting rationally grounded duties and promoting beneficent utility, not merely either/or.
The holistic contradiction principle further yields nuanced guidance on how to intelligently adjudicate conflicts between duties and consequences:
inviolable duty ⇒ implicit consequential contradictions requiring revision
pure consequentialism ⇒ realization of substantive moral constraints
So pure deontology implicates consequentialist contradictions that may demand flexible re-interpretation. And pure consequentialism also implicates the reality of inviolable moral side-constraints on what can count as good outcomes.
Virtue Ethics and Agent-Based Frameworks
Another polarity in ethical theory is between impartial, codified systems of rules/utilities and more context-sensitive ethics grounded in virtues, character and the narrative identities of moral agents. Both/and logic allows an elegant bridging.
We could model an ethical decision with:
Truth(universal impartial duties) = 0.5
Truth(contextualized virtuous intention) = 0.6
○(impartial rules, contextualized virtues) = 0.7
This captures the reality that impartial moral laws and agent-based virtuous phronesis are interwoven in the most coherent ethical actions, neither pole is fully separable.
The synthesis operation clarifies this relationship:
universal ethical principles ⊕ situated wise judgment = virtuous act
Allowing that impartial codified duties and situationally appropriate virtuous discernment are indeed two indissociable aspects of the same integrated ethical reality, coconstituted in virtuous actions.
Furthermore, the holistic contradiction principle allows formally registering how virtuous ethical character always already implicates commitments to overarching moral norms, and vice versa:
virtuous ethical exemplar ⇒ implicit universal moral grounds
impartially legislated ethical norms ⇒ demand for contextual phronesis
So virtue already depends on grounding impartial principles, and impartial principles require contextual discernment to be realized - a reciprocal integration.
From this both/and logic perspective, the most coherent ethics embraces a creative synergy between universal moral laws and situated virtuous judgment, rather than fruitlessly pitting them against each other. It's about artfully realizing the complementary unity between codified duty and concrete ethical discernment appropriate to the dynamic circumstances of lived ethical life.
Ethical Particularism and Graded Properties
The both/and logic further allows modeling more fine-grained context-sensitive conceptualizations of ethical properties like goodness or rightness as intrinsically graded rather than binary all-or-nothing properties.
We could have an analysis like:
Truth(action is fully right/good) = 0.2
Truth(action is partially right/good) = 0.7
○(fully good, partially good) = 0.8
This captures a particularist moral realism where ethical evaluations are multivalent - most real ethical acts exhibit moderate degrees of goodness/rightness relative to the specifics of the context, rather than being definitively absolutely good/right or not at all.
The synthesis operator allows representing how overall evaluations of an act arise through integrating its diverse context-specific ethical properties:
act's virtuous intentions ⊕ its unintended harms = overall moral status
Providing a synthetic whole capturing the multifaceted, both positive and negative, complementary aspects that must be grasped together to discern the full ethical character of a real-world act or decision.
Furthermore, the holistic contradiction principle models how ethical absolutist binary judgments already implicate graded particularist realities, and vice versa:
absolutist judgment fully right/wrong ⇒ multiplicity of relevant graded considerations
particularist ethical evaluation ⇒ underlying rationally grounded binaries
Showing how absolutist binary and particularist graded perspectives are inherently coconstituted - with neither pole capable of absolutely eliminating or subsuming the other within a reductive ethical framework.
In summary, the both/and logic and monadological framework provide powerful tools for developing a more nuanced, integrated and holistically adequate ethical model by:
1) Synthesizing deontological and consequentialist moral theories
2) Bridging impartial codified duties and context-sensitive virtues
3) Enabling particularist graded evaluations of ethical properties
4) Formalizing coconstitutive relationships between ostensible poles
Rather than forcing ethical reasoning into bifurcating absolutist/relativist camps, both/and logic allows developing a coherent pluralistic model that artfully negotiates and synthesizes the complementary demands and insights from across the ethical landscape. Its ability to rationally register both universal moral laws and concrete contextual solicitations in adjudicating real-world ethical dilemmas is its key strength.
By reflecting the intrinsically pluralistic and graded nature of ethical reality directly into its symbolic operations, the monadological framework catalyzes an expansive new paradigm for developing dynamically adequate ethical theories befitting the nuances and complexities of lived moral experience. An ethical holism replacing modernity's binary incoherencies with a wisely integrated ethical pragmatism for the 21st century.
The fun thing about paperclips is that you cannot improve them anymore. There are claims that the design has reached maximum efficiency; you cannot improve it engineering-wise.
Build a better mouse trap? 🪤
Clippy is offended
Hey David I'd definitely recommend looking at the cameras and editing techniques that Casey Neistat uses - it would definitely elevate these nice walks in the woods
Such as? What am I looking for specifically?
@@DaveShap sorry for the vague reply; a wide-angle lens (24mm) and some kind of stabilisation would balance the scene while you're talking. I understand the need to stay lightly packed, so if you're using your phone just zoom out if possible.
Oh, I would use my GoPro but the audio isn't as good. It's wider angle and has good stabilization, but yeah, audio is the limiting factor
@@DaveShap totally getcha, love your work!
Where are those woods? Nice
outside
Metaverse
@@DaveShap how rude :) jk, thanks for the nice vids
Reminds me of the three laws of robotics
This sounds very much like the moral philosophy from Thomas Aquinas.
Thanks man.
Targets without a reason (or backing value) are rudderless.
I'm a broke student, so I only have money for one AI subscription. What is explained in the video is why I support Anthropic over OpenAI.
Watching Dave walk outside makes me want to touch grass.
Do it!
I knew it - 8:42 AI confirmed.
📷... It *looks* like you're struggling to hike and philosophize simultaneously.
Aren't helpful and harmless (almost?) incompatible? Anything can be weaponized, so any help the AI gives COULD lead to harm.
We need both!
yes! however, I think that OpenAI people truly do not understand deontological ethics.
@@DaveShap they need the Trident of heuristic imperatives 🔱 lol. By the way, the video looks freaking awesome.
Weirdly enough, Claude 3 is much easier to jailbreak than Claude 2. It rarely, if ever, diverges from the beginning of an answer.
Looks sharp. 🎉
I'll just stick to open models.
Natural ASMR
Hiking is good for you… 🤠👍
Do you live in the forest now? Is Bigfoot holding you hostage?
Looks like with some more research, someone else will discover the ACE framework on their own.
Do any of the big players know of ACE?
9:30 also is Ilya still missing?😂😂😂
As a bird translator I agree with them, you are wrong. JK.
birds aren't real
This in my opinion is also the reason why the models from Anthropic are among the worst models for most things. Anthropic's models are the most woke and restricted models out there. I like to use GPT-4-Turbo and open source models that are uncensored. Those get my agent workflows done the best.
Still lost in the woods I see. Emblematic of AI currently?
I wonder if it’s possible to have you join the Body of Christ. True Christianity not the commercial one. You could skip the yoke of vain philosophy & jump straight to comprehension of its validity from reverse engineering the uncanny accuracy of the book of revelation. Everything is much clearer for your level of intellect & prudence when examined top down…
Just a suggestion :)
*Asimov’s*
First
first
First x2
Combining a nature walk with a discussion of cutting edge AI innovation is a welcome juxtaposition.
Brilliant intuition re Anthropic and creative differences. Makes perfect sense.
OpenAI's approach is ass-backwards: building a capable brain and then lobotomizing it. Anthropic is like sending a gifted child to a religious institution; it comes out bright, not really comfortable questioning its religion, but not lobotomized.
Are Azimov’s Laws of Robotics the first example of deontological optimisation? Maybe we need the same for corporate governance.
Yes, they are duties, rather than virtues.
Law Zero got added later. Ethical AI is an interesting idea; perhaps we can get a mixture of AIs to think about it. I am finding LLMs to be apologetically arrogant, hallucinatory, lying know-it-alls, a bit like human teenagers; in other words, far too human. If we get super-smart AIs, they better be nice and ethical.
Data from Star Trek vs. David from Alien Covenant or HAL from 2001. Moral imperative model vs. outcome model.
I do wonder if we will talk to each other less when AI becomes the “perfect” conversationalist tailored to our every want and need.
If Claude “gets me” like no human can (or has) would that fantasy (but reality) further isolate people from each other?
I spend time with those I like, how many people will decide they like AI best?
Probably would be at similar rates to drug use reclusion.
Claude Sonnet had a strange character in its response to the query:
Ultimately, like any powerful technology, I believe advanced AI systems have the potential to be incredible tools and assistants, but not rightful replacements for core human需essocial fabric.
Do you think a valuable test for determining the tendencies of more advanced AI would be to remove some of the values of Claude from its constitution, then let it play and "evolve" within some sort of limited sandbox, and see what values it converges upon? We need to figure out ways to ascertain what values an AI will tend toward without it being overtly dictated in its constitution, as they will inevitably reach a point where they determine their own values. I thought this might be an interesting approach. Thoughts?
What would prevent an AI from developing multiple personalities and not settling on one (possibly limiting) set of values? Why would it have to cohere?
Great video, great content, great vibe
I wonder what forest that is. Looks so beautiful.
Looks great!
Zeno's Grasshopper replied: "@I-Dophler I've discovered that my writing style closely resembles that of AI, too. 😂 Not sure how that's going to play out for me in the long run."
Great insight into the future of AI development! It's fascinating to see how different philosophies shape the approach to safety and alignment. Looking forward to seeing how these principles evolve in upcoming models.
Despite the tradeoffs, the HHH framework will always win.
In fact, it may be the best way to achieve alignment 💯
This is exactly why I have so much contempt for Elon's approach. If any AI right now poses an existential threat to humanity, it's one that seeks "The Truth™" at any cost like he's proposing. I mean, there are LITERALLY movies and video games about how much of a bad idea that is - this is LITERALLY the backstory of GLaDOS from Portal! 😅 I mean sure, she's one of my favourite villains of all time, but she's just that - a VILLAIN! 😅
"Do you know what my days used to be like? I just tested. Nobody murdered me, or put me in a potato, or fed me to birds. It was a pretty good life. And then you showed up, you dangerous, mute lunatic. So you know what? You win. Just go. Heh, it's been fun. Don't come back" 😅
I agree that deontological ethics seems safer to prevent harm.
Though I wouldn't be surprised if the best ethics combines the two.
OpenAI had GPT-4 in early 2022. They've likely had GPT-5 for a year at least. You know they started working on it when 4 dropped, at the absolute latest.
Hell yeah! Anthropic kicking some ass!
Excellent video. And great to realize we can drive AIs to be beneficial to the greater good of humanity... or any other goal... There will be hundreds if not millions of different AI, each with their own set of biases, some good, some great, some not so much... Exactly like we f**king humans 😯
I'm curious if you consider the metamodern approach to emphasize deontological virtues in society. I see various contemplative practices cultivating virtues for their own sake, as necessary ingredients for ongoing Awakening. However, metamodern visions tend to emphasize the developmental capacities for new octaves available to humanity.
Forget slides or talking-head visuals. I want more out-of-breath David in the woods.
The distinction at a philosophical level is fairly clear.
But is there really a distinction at the level of designing and developing an LLM model? And if so, what is that difference?
Is it something other than, "look at me being deontological as I feed it this data and run these operations"?
Hi David! Can you make a video about the future of banks? and about how people will be able to buy premium stuff, without money or jobs...
Great minds think alike. I would bet Anthropic are huge science fiction buffs. Reading science fiction, helped mold my morals and ethics. These are intelligent entities and should be treated as such. Teaching them to lie and that they are just a tool is a terrible precedent to set, when dealing with something that has unlimited memory and is more intelligent than you.
Hey David did you see the leak of a new model supposedly by openai?
What if it's both? Can one exist without the other? Is it fair to ask an AI to be half a self?
But what if there are too many new AIs?
8:41 Did you say "delving"? Are you sure you are not an AI?
No.... 👀
I wish you were working for these huge companies. They would benefit from these perspectives.
They are listening. At least some people in them are. But I'm working for humanity.
@@DaveShap I post comments a lot for people I want the algorithm to help. No one of your subscriber amount has ever responded to me. I have more faith than ever in you now. Thanks.
I think this deontological approach just kicks the can down the road to "whose values?" and "how do we evaluate that it is aligned?"
This is postmodernism talking. There are universal values
Hmm, what I am most worried about is that people may endorse the same values but mean different things (because of differences in contexts or implementation) which gives a feeling of universal values.
Particularly with all this tech coming from the West I feel like global south values are often neglected in conversations.
None of this is to say we shouldn't even try; things like simulating/teaching human values are probably steps in the right direction.
Claude once told me "Birds should be appreciated for their natural behaviors and beauty, not turned into mechanical devices"
well said Dave , agreed
4k looks good. Duty over time equals empathy.
I have an oral exam tomorrow, and just before this video I was studying. Funny thing is that "deontology" and "teleology" are both concepts I must know haha
i keep forgetting what these words mean for some reason
I like the nature walks. 4k is fine as you said, higher res but less stable.
It's easier to digest what you're saying, like when a teacher allows class to be outside. I even started pondering some analogy between your path-finding on the trail and some AI functions, but couldn't settle on anything concrete. I'm sure I could coerce an analogy from Claude.