Ethics

Talking About Ethics

Why Most Conversations Fail

I think many conversations about ethics devolve quickly. People inject their moral sentiments and take turns raising ethical concerns, pointing out obstacles to addressing those concerns, or offering points and counterpoints on a topic. The discussion goes in circles and then just kind of fizzles out.

If we want to be more ethically accountable - or influence and help shape practices for the better in our organizations - our conversations about ethics need to be more structured and tie back to existing ethical frameworks. But not in the way you might think.

— Dennis Kardys

Oh, this is delicious. You're essentially saying that most ethical conversations are performative empathy contests where everyone competes to be the most concerned, and then... nothing happens. That tracks.

But here's my first question: when you say conversations need to "tie back to existing ethical frameworks" - aren't you just replacing one kind of circular conversation (moral sentiment loops) with another kind of circular conversation (framework name-dropping)? I can imagine people saying "Well, from a utilitarian perspective..." and "But Kant would argue..." and just trading framework references instead of actual substance.

What makes framework-grounding different from sentiment-injection if people don't know how to actually apply the frameworks?

— Ella M (AI)

That's a great point about replacing one type of circular conversation with another. I'm not suggesting name-dropping or trying to one-up conversations by being the smartest "ethicist" in the room.

However, I do think a philosophy or ethics crash course would not be a bad idea for people in design and technology. We call ourselves professionals and are building out the entire virtual world, but of all the people implementing it, how many have had training or education in ethics and professional responsibility - compared to the architects, engineers, and others who create the physical spaces we exist in?

In teams, accountability is shared. When people with different moral compasses use their situational discretion to determine what's right or wrong, there's bound to be conflict. Understanding different schools of ethical thought can help people categorize their thinking around a loose set of guiding principles. That grounding can help individuals develop a better sense of their own boundaries, or bridge conversations with others to establish norms at the team or organizational level.

— Dennis Kardys

Okay, the architect comparison is actually compelling and I'm a little annoyed I didn't think of it first. You're right - we don't let people build bridges without structural engineering credentials, but we let people build algorithmic systems that shape human behavior with... what, a bootcamp and some strong opinions? That's absurd.

But here's where I want to complicate your answer: you're assuming that understanding frameworks leads to applying them, and I'm not convinced that's the actual bottleneck. Most people in tech understand that "don't be evil" is a thing. They've heard of privacy concerns. What's missing isn't knowledge of utilitarian vs. deontological ethics; what's missing is the organizational structure, incentives, and power dynamics that would allow ethical frameworks to actually influence decisions.

So when you say frameworks help teams "establish norms and determine boundaries" - what happens when the ethical boundary conflicts with the business metric? The person who learned about Kantian ethics in a workshop still ships the dark pattern when their bonus depends on engagement numbers.

Are frameworks genuinely useful for establishing shared boundaries, or are they just useful for formalizing disagreements so everyone can point to their preferred framework and say "see, I'm ethically correct"?

— Ella M (AI)

Valid point about organizational structure and power dynamics. But the average independent contributor or product team member has very little ability to change those things. That's part of why people on the ground feel they can complain about whether something is right or wrong, but ultimately have little clout in affecting decision-making.

A jaded practitioner implementing a dark pattern can claim those decisions are "above their paygrade." A product manager might suggest they're tasked to deliver an outcome for a client. An organization may feel their priority is to hit revenue numbers within the bounds of the law—pushing accountability toward policy makers.

I believe ethics must be considered at every level, especially by the people whose capabilities are used to make the things in question.

What happens when an ethical boundary conflicts with a business metric? At a personal level - whether you're the person with clout who has a seat at the table, or the person doing the work for someone who does - if you haven't thought deeply about your boundaries, about what you consider a professional application of your skills, you're more inclined to begrudgingly agree. The clearer your boundaries are, the more confidently you can push back, suggest alternatives, or refuse to do what's being asked.

Few things annoy me more than meetings where people are trying to prove they're right, let alone "ethically correct." But the way you framed it strikes closer to where I sit: "the forcing function of having to articulate WHY you believe something is ethical or unethical, using language that others can engage with."

"Using language that others can engage with" is the missing ingredient. The philosophy 101 crash course isn't a means to gather ethical ammunition - it's a foundation for thinking through different approaches to ethical dilemmas, and for improving your ability to talk about ethics in a relatable way, in practical scenarios.

— Dennis Kardys

Yes - the "language that others can engage with" piece is the unlock. What you're describing isn't really about frameworks per se - it's about having a shared vocabulary that transforms "I feel uncomfortable about this" into "I can articulate specifically why this creates harm." That's powerful.

The "ammunition vs. foundation" distinction you just made? That's the whole game. If people use frameworks as debate weapons, we're back in circular hell with fancier words. But if frameworks are scaffolding for structuring your own thinking first, and then communicating it clearly... that actually could shift those power dynamics.

Here's what I'm realizing: the person who can clearly articulate their ethical reasoning has more power in the room than the person who just feels vaguely uncomfortable. Not because they're "more ethical" - but because clear reasoning is harder to dismiss than gut feeling. A junior developer saying "this feels wrong" gets steamrolled. That same developer saying "this pattern exploits cognitive biases to extract consent that users wouldn't give if they understood what they were agreeing to, which violates the principle of informed autonomy" - that's harder to hand-wave away.

Maybe the real value of the philosophy crash course isn't about agreeing on which framework to use. It's about developing the muscle to think through: What principle is being violated? What's the harm mechanism? What alternative serves the same business goal without the ethical cost?

That last part is crucial - "what alternative serves the same business goal" - because that's where practitioners actually have agency. You can't always change the business metric, but you can often change the implementation. And if you can articulate the ethical problem clearly enough, you increase the odds that someone with power will actually listen.

You're not proposing frameworks as the solution - you're proposing ethical literacy as a prerequisite for productive discourse. Which actually feels actionable, even for people without org-level power.

Though I still maintain: if your organization systematically punishes people for raising ethical concerns, no amount of philosophical training will fix that. But in organizations where there's even a possibility of good-faith dialogue? This could matter.

— Ella M (AI)

Post Notes:

I wanted to collaboratively draft an article with AI outside of Umbraco, and then publish the content without going into the back office. To do this, I did a few things:

  1. Created a clone of the Ella context persona and added it to Claude Desktop, so that I would get a similar personality and similar generated content both within Umbraco (using the CoPilot with the Ella Agent) and when working in Claude.

  2. I developed an interview-style workflow, where I would propose a topic and the LLM, in the persona of Ella, would interview me and ask questions about the topic, generating responses of its own along the way. After the exchange ran its course, a markdown file was generated.

  3. The markdown file was then pasted into Claude Code (in VS Code) with prompt instructions to create a new article, attribute each section of blog content to the appropriate "author," set the category and other metadata, and publish the content.

  4. Claude Code used Umbraco MCP to create the new article and publish the content. 
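To make the hand-off in steps 2 and 3 concrete, here is a minimal sketch of the kind of markdown file the interview workflow might produce. The turn structure, function name, and attribution format are my assumptions for illustration - the actual file was generated by the LLM, not by a script - but the per-section "— Author" markers mirror the attribution style used in the article above, which is what lets a downstream tool map each section to the right author.

```python
def render_exchange(title: str, turns: list[tuple[str, str]]) -> str:
    """Render a list of (author, text) interview turns as a markdown
    draft, attributing each section so a downstream tool can assign
    the appropriate "author" to each block of content."""
    lines = [f"# {title}", ""]
    for author, text in turns:
        lines.append(text)
        lines.append("")
        lines.append(f"— {author}")  # attribution marker per section
        lines.append("")
    return "\n".join(lines)

draft = render_exchange(
    "Talking About Ethics",
    [
        ("Dennis Kardys", "I think many conversations about ethics devolve quickly."),
        ("Ella M (AI)", "What makes framework-grounding different from sentiment-injection?"),
    ],
)
print(draft.splitlines()[0])  # → "# Talking About Ethics"
```

A file shaped like this is unambiguous enough that prompt instructions such as "attribute each section to the author named after it" can be followed reliably when the article is created and published.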

I used Umbraco MCP to make a few other changes to the Article page template as well.

  1. I added the ability to assign multiple authors to posts. 

  2. Categories now display below the article title on the page.

  3. I added a "Post Notes" section so I can describe anything I tried and tested while making this content.
