Nathan Kinch
Joined 21 Mar 2024
Working at the intersection of philosophy, systems theory, cognitive science and organisational design, I help ambitious institutions ‘be’ trustworthy.
I do this working 'inside out', starting with a process that enhances an organisation's capacity to 'do' ethics really well. From this basis we design, test and operationalise 'features' of organisational trustworthiness (benevolence, integrity and normative competence). These features, when consistently expressed, enhance both trust and reputation. Enhanced trust and reputation create a far more favourable social license to operate, which is necessary for any organisation to thrive by doing good.
You can learn more at www.trustworthyby.design
Ought before can, the missing link for responsible AI?
According to the latest Australian Responsible AI Index, there remains a significant gap between the intentions and realities of Responsible AI programs. This gap has remained fairly consistent since the first version of this study was commissioned. What this demonstrates, which is echoed globally, is that there is reasonable agreement on the ‘ethical principles’ that ought to ‘guide’ Responsible AI development and use. What this also shows is that organisations, for many reasons, are struggling to effectively interpret, implement, and operationalise said principles across the entire Responsible AI development and use lifecycle (something that cannot of course be separated from the broader organising structure and paradigm that gives rise to it).
Many responses to this problem, often framed as the ethical intent to action gap or ‘values gap’, have been proposed. These include greater board awareness, AI literacy work, standards development, and framework implementations, to name a few. All of this (and more) seems to be both valuable and necessary if society is to most benefit from Responsible AI. Yet, what is often missing from proposed responses to the ethical intent to action gap is deeper consideration of actually ‘doing ethics’.
Merely referencing principles, or attempting to encode a principle into a model, is not ethics. Ethics is the deliberative process of reflecting on our first order moral beliefs (ethical principles) in an attempt to do what is most good and right in a given situation. Arguably it is a cognitive process, where cognition is embodied, embedded, enacted, extended, emotional and exapted (6E CogSci). In this way, ‘doing ethics’ requires teams, organisations and society at large to explore what truly matters, define explicit and implicit values, examine how those values relate to and come into tension with one another, engage in diverse and inclusive dialogue and experimentation to work productively through those tensions and tradeoffs, and then use that collective body of work to inform how a given initiative proceeds (or doesn’t).
My experience leads me to believe that such work is largely missing from many Responsible AI programs within organisations. In short, there is a very significant risk that the real, challenging and confronting work of ‘doing ethics’ is skipped in favour of simpler approaches that lead to avoidable unintended consequences and further systemic ecological overshoot.
TL;DR summary… A heck of a lot of ‘can’, and sweet FA ‘should’.
In light of this, and off the back of being asked to record a short video for a big festival later this year, I offer you this brief musing on the power of ought before can. I only had four minutes, so I’ve kept it light and pretty darn accessible.
I trust it encourages some good pondering.
4 views
Videos
The hard thing about (practical) ethics
17 views · 3 months ago
Last night, during the first ever Ethics in Action workshop, I worked with one of the participants to dive deeper into the unquestioned or implicit assumptions that seemed to be driving a specific initiative they were involved in. This helped us all experience how much of our ethical analysis is often at the level of events, and sometimes fails to get into other system properties or structures,...
On the dangers of Silicon Valley ideology (Triple R Radio Interview)
56 views · 3 months ago
Last night (17th July) I joined the hosts of Triple R’s popular Byte into IT to discuss my critique of the a16z Techno-Optimist Manifesto. This interview opportunity came off the back of an essay I wrote (trustworthy.substack.com/p/forget-a16z-this-is-why-we-should) last year breaking down some of the shaky claims made within the manifesto itself. In the episode we discuss, in the most accessib...
Irresponsible AI wins the AI arms race
22 views · 3 months ago
The recent AI hype cycle is just the latest victim of a paradigm that is biophysically incompatible and socially inequitable. In today’s video, I briefly describe why almost all AI is irresponsible AI, largely due to the context within which such systems are developed and used. If we cannot overcome our life eroding assumptions, if we cannot move beyond the verifiably ridiculous ideas about ‘fo...
Progress is dead. Long live progress
27 views · 3 months ago
For many different reasons, society at large has institutionalised and operationalised narrow ideas about progress. Although this has, in fact, led to many forms of progress, as we look back, sit with what is, and consider what’s ahead, it is clear that these ideas are largely failing us. In today’s video, I suggest that a real commitment to value sensitive reflection and the process of doing e...
Why business and government need ethics more than ever
18 views · 3 months ago
Ethics is the deliberative process of reflecting on our first order beliefs about what is good and right, in our attempt to best align our decisions, actions and their consequences to these beliefs. Unfortunately, today, for so many reasons, most organisations do a very poor job of taking ethics seriously. They do an even poorer job, in many cases, of translating ‘ethical principles’ or ‘value ...
Optimism can lead you astray. Here’s a better way
32 views · 3 months ago
Optimism is the belief that things will be better. Pessimism is the belief that things will be worse. Hope is the belief that things can be better. Active hope is living your life based on the belief that things can be better. Active hope accepts the responsibility of individual and collective agency, helps us embrace uncertainty with real courage, and is likely the most useful of orientati...
Here’s why corporations can’t continue
6 views · 3 months ago
In this video I briefly describe what is likely the most significant existential difficulty facing corporations. In simple terms, this is the result of how corporations create shareholder value by extracting, producing and maximising consumption. Taken together, collective extraction, production and consumption have led to biospheric instability, and this will only get worse. This means tha...
A better way to think about the value of ethics in business
8 views · 3 months ago
In this video from 2023, I briefly describe a basic framework I’ve come to use in my work with organisations. The framework describes the relationship between ethics, organisational trustworthiness, trust, reputation and social license to operate. By working with this approach, organisations can better direct their resources towards the activities most likely to positively influence their impac...
Can you meaningfully consent to AI? Probably not, but there’s still hope!!!
5 views · 3 months ago
As with my previous video, I recorded this in early 2023 in the panic of the Generative AI hype cycle. This draws on more than a decade working to support self-determination in ‘online spaces’. In the video, I explore the very basics of what it means to meaningfully consent, something I’ve written an entire guide on, as well as the limitations such an idea has in the context of an algorithmic w...
Can you trust AI? Maybe, but that’s not the right question
5 views · 3 months ago
I recorded this in early 2023, not long into the current Generative AI hype cycle. In this video, I explore the basic trust antecedents of benevolence, integrity and competence. I describe trust as a biopsychosocial phenomenon (something I now often consider to be a cognitive phenomenon using 4E cognition). I briefly express a working definition of trust, “the willingness to be vulnerable based ...
How can businesses actually do real good in the world? It might surprise you
15 views · 3 months ago
Organisations are failing to cultivate their character. They consistently act in misalignment to their stated values. This is resulting in net negative consequences that cannot continue (if humanity wants a liveable and largely enjoyable future). In this video, I clarify why organisations need to focus on what is socially preferable, rather than what is socially acceptable. Organisations that d...
Look forward to seeing your workshop in a month’s time mate. Great points ❤
@stuartmatheson4746 me too. Hope to see you at one of the upcoming ones. Link here: www.eventbrite.com.au/e/rsa-oceania-presents-ethics-in-action-hosted-by-colabs-tickets-946098795967