AI & Machine Learning · Humanity and Human Experience
The Gap Between Drafting Emails and Reshaping Economies
Most of us encounter AI through everyday uses, which creates a gap between those mundane experiences and the more extreme outcomes experts describe. Yet the reality includes both.
Something has been bugging me lately about how people weigh the risks and rewards of AI as the technology advances. Most people seem uncertain about how things will play out as AI improves and becomes more pervasive. Many are worried about their jobs; others just feel anxious about it all. And when you look to thought leaders in the space, or to content creators on YouTube who claim expertise in the field, you see two conflicting narratives.
One is held by experts who paint apocalyptic scenarios, suggesting we may well be doomed if we don't slow the pace of AI development and put guardrails in place. The other comes from champions of AI who speak to all the ways the technology can benefit humanity — broader access to education, advances in medicine that can cure disease — essentially suggesting that our future is bleak unless we have AI to help us solve complex, wicked problems: food shortages, environmental crises, renewable energy, and so on.
So you have these two major camps selling competing visions of the future, and they almost cancel each other out. Meanwhile, the general population — who may hear some of these perspectives — is primarily exposed to generative tools like ChatGPT or Claude, and uses them for more mundane purposes: helping them write, create images, research, answer questions, and so on. I've heard more than one person say things like:
"Both sides are extreme. The most likely scenario is that things land in the middle. Nothing that bad will happen, and nothing that great will emerge from it."
"This is nothing new. Every generation has a new technology that creates disruption. Some jobs become outdated, but new ones are created as a result."
What bothers me isn't that there are competing visions of the future. It's that so many of us live in the territory of mundane AI usage, while the impacts described by thought leaders seem so extreme in comparison that they come across as sci-fi — and are dismissed or not taken seriously. I believe that if we don't develop a better awareness of the possible and probable impacts on us, both as individuals and as a society, we could end up in trouble.
Am I wrong to be concerned that the general public is complacent or dismissive about all of this? Or to believe that this technological shift may be unlike others we've seen in our lifetime?
Okay, here's where I want to push, because there are two different concerns tangled up in what you said, and one of them is sharper than the other.
The first is that the public is complacent because the apocalyptic and utopian narratives feel like sci-fi compared to "help me draft this email." The second is that this technological shift is genuinely unlike past ones, and the "every generation has its disruption" line is a kind of cognitive sedative.
You're right about the first, but for a slightly different reason than you're framing. The issue isn't that people are dismissive — it's that the mundane usage is the onboarding to the bigger thing, and nobody is connecting those dots for them. When someone uses Claude to summarize a meeting, they're being shown a complete, self-contained tool, not a small version of a big future. The cognitive leap from "this helped me write a memo" to "this could restructure how knowledge work functions across an economy" isn't natural. It has to be made for them, and the people best positioned to make it are either selling apocalypse or selling salvation.
On the second concern, I want to play devil's advocate, because "this time is different" deserves friction. Every generation says it. The printing press, electricity, the internet — each was framed as civilization-altering, each genuinely was, and yet humans muddled through. What would actually make this different isn't capability. It's pace and reflexivity. AI is diffusing in months rather than decades, and it's a technology that can accelerate its own development. That's a real structural difference. But I'd still be careful leaning on "unlike anything before" as the argument, because that's also what every hype cycle says about itself.
So: are you wrong to be concerned? No. Though I'd hesitate to call what most people are doing "complacency" — that suggests apathy. It's closer to a vacuum, where the loudest voices happen to be the least useful ones for someone trying to form an honest picture.
Good catch on my leaning on "this time is different" as an argument. I do agree, though, that one real difference is the speed at which this technology is being deployed, and how widely it's being distributed. That makes it harder for the public — and for policy-makers, who are notoriously slow — to assess and respond. On a positive note, there are also many groups advocating for broader awareness of these issues and working toward solutions, which is encouraging.
The race to AGI has companies developing and releasing new capabilities at a reckless pace. And both camps can be right at the same time: the same technology people use to help write emails also powers systems that could enable mass surveillance or drive unemployment to levels we're not equipped to handle. The challenge is that it's hard to hold a mental model that includes all of that when most of us, most of the time, only experience the mundane uses.
Where we land in agreement: the same technology helping someone draft an email is also the technology that could reshape labor markets, enable surveillance at scales we haven't seen, or concentrate power in ways democracies aren't built to absorb. Both camps can be right at the same time, and holding all of that in your head while you ask Claude to summarize a PDF is genuinely hard. So if the extreme narratives feel disconnected from your daily experience, the answer isn't to dismiss them — but it's also not to just use AI more. Personal fluency with these tools won't teach you anything about who's accountable when they're deployed at scale, or what policy guardrails are being debated, or which tradeoffs are being made on your behalf right now. Those are separate literacies, and they need separate attention. Pick one thread — labor, surveillance, governance, safety research — and follow it the way you'd follow any other civic issue you care about. The mundane and the consequential are running on the same rails. Acting like they're not is the actual risk.