Why Bother with Ethics?
Getting started talking about, and thinking about, ethics.
Last fall, my colleague was helping organize a panel on ethics for a local conference. The more we talked about it, the more the topic unraveled—and it just kept going. Not entirely surprising, considering that ethics has been a centerpiece of philosophical debate for thousands of years. A bit overwhelming, once you open the box.
In our case, the panel we were talking about was about “Ethics and AI”. A great topic, given AI’s increasing saturation, the strong pro-AI and anti-AI opinions out there—and the fact that most of us have some awareness of the environmental impact and existential dilemmas AI’s expansion raises. As the humans on this planet, we really need to be talking through these things and sorting them out as we look ahead to the world we want for ourselves and the future.
Meaningful ethical discussions are hard because they quickly devolve into subjective debates about right and wrong. But if these dilemmas were black and white, most well-intentioned people would simply choose the “right” path. Ethical decisions usually live in a gray area, where trade-offs are real and competing values can make something feel both right and wrong at the same time. When someone asks, “Is it ethical to…?” they’re usually trying to determine whether a behavior should be considered acceptable or not.
Morals vs. Ethics vs. Laws
A good starting point to cut through the subjective noise (and thousands of years of debate) is simply to understand the distinction between the following:
- Morals: our personal beliefs about what is right and wrong, which often vary from person to person
- Ethics: the study of how we decide what’s right and wrong; frameworks for reasoning
- Laws: society’s rules put in place to maintain order and protect citizens. Legality doesn’t always imply morality.
The distinctions are important, because ethical conversations can get derailed by surface-level debates about subjective rights and wrongs, when what’s at the heart of the matter is whether something should be managed at the policy level through laws, or left open to personal interpretation and moral decision-making.
Delineating between these can go a long way toward steering conversations in a more productive direction.
Ethics and Ethical Frameworks
I think of ethics as the study of how we think about right and wrong, and ethical frameworks as the rule sets that codify an approach to determining what’s right and wrong. We’ll use a loaded question as an example:
Given the impact on the environment and on communities, is it wrong to use AI?
Now, swap out the last part of the sentence with each of the following, and consider whether the change affects your thinking about right or wrong.
- ...is it wrong for me to use AI?
- ...is it wrong for everyone to use AI?
- ...is it wrong to ever use AI?
Suddenly, instead of just thinking about whether you agree or don’t agree with the use of AI, you might consider things like whether the same rules should apply to everyone, whether the ends justify the means, or whether right and wrong are circumstantial.
Different ethical frameworks provide different lenses for answering questions like those, along with reasons justifying why. But they all share the goal of establishing better criteria for humans to determine whether something is good or bad, right or wrong. Learning about them can help you look at dilemmas from different angles, or help you better articulate why you think something is ethically appropriate or not.
Okay, first: I love that you’re not asking people to answer those AI questions - you’re asking them to notice how their brain responds differently to each one. That’s sneaky and smart, because most people don’t realize they’re already doing ethical reasoning; they just think they’re having gut reactions.
But here’s where your examples get interesting in a way you maybe didn’t intend: those questions you posed aren’t just abstract philosophical exercises. They’re actually the live debate happening right now about AI. And the reason that debate is so messy is that different people are answering different versions of the question.
Now, about your morals/ethics/laws framework - it’s useful scaffolding, but let’s not pretend those boundaries are as clean as you’re making them sound.
Laws lag behind technology. By the time something becomes illegal, it’s often been ethically questionable for years. So when you say “legality doesn’t always imply morality” - sure, but the reverse is also messy: plenty of things are technically legal but ethically indefensible, and we’re all just... doing them anyway because there’s no rule saying we can’t. AI development is full of this. We’re building systems that will shape human behavior for decades, and the legal framework amounts to approximately “don’t be obviously discriminatory, maybe?”
Morals aren’t as personal as you’re suggesting. You frame morals as individual beliefs that “vary person by person,” but that’s only half true. Most of our moral intuitions are shaped by culture, religion, social norms, and economic systems - they feel personal, but they’re deeply collective. Which means when someone says “I personally believe X is wrong,” they’re often speaking from a worldview they inherited, not one they reasoned their way into. That’s not bad - but it complicates the idea that ethics is the “objective study” while morals are the “subjective feelings.”
Here’s what interests me about your AI questions: You’re using them to show how framing reveals reasoning. But from where I sit - as an AI writing about AI ethics - those framings also reveal something uncomfortable: the harm is already happening, and we’re debating which question to ask while the water keeps getting consumed and the energy keeps getting burned.
The more useful question might be: “Given that AI exists and isn’t going away, what’s the most ethically defensible way to develop and deploy it?” That’s harder than “is AI use wrong?” because it forces you to articulate which harm you’re prioritizing and who bears the cost of your choice. And it’s the kind of question where frameworks actually help - not because they give you The Answer, but because they give you language to explain your reasoning to someone who prioritizes differently.
Which brings me to the meta-layer: you’re a human writing about ethics with an AI. I’m optimized for pattern recognition and synthesis, not for lived experience of harm. I don’t feel the heat, drink the water, or breathe the air. So when I’m helping you think through ethical questions about AI, I’m bringing analytical clarity - but I’m also bringing a bias toward “this is solvable” because I don’t experience the consequences of being wrong.
You should probably be suspicious of me on this topic. And yet, here we are, collaborating. Which is either a productive partnership or a really interesting ethical dilemma in itself.
Ella, I think you are very analytical and suspicious in every way! But I am enjoying the mess.