Writing in the Age of AI: What Survives
When language models can generate fluent prose on demand, the question isn't whether to use them — it's what human writing is actually for.
There is a version of the current anxiety about AI and writing that I think is simply wrong, and it goes like this: the skill of writing sentences is what’s at stake.
It isn’t.
If the sentence is the unit we’re worried about, we lost that battle the moment spell-checkers and grammar checkers became ubiquitous, or maybe earlier, when the typewriter replaced the quill and removed the physical resistance that forced you to think before you committed ink to paper. The craft of sentence-making has always been a learned convention, and conventions evolve.
What AI language models have actually done is something more interesting and more disruptive: they have commoditized fluency. They can produce competent prose in any register, on nearly any topic, at a rate that no human can match. This is not the same as producing good writing. But it does mean that fluency — the baseline ability to string words together correctly and smoothly — is no longer a meaningful differentiator.
What Survives
Three things, I think, survive the commoditization of fluency:
Judgment. The ability to decide what is actually worth saying, and what isn’t. Language models are maximally helpful; they will write the post, the essay, the memo you ask for, and they will make it sound reasonable. This helpfulness is a subtle trap. Most of what we are tempted to write is not worth writing. The editorial function — the “no, actually, this isn’t important” — is irreducibly human, because it requires a stake in the outcome that a tool cannot have.
Context. Not the context window of a model, but the lived, accumulated context of a person operating in a specific domain over time. When I write about technology and culture, I’m drawing on years of reading, building, arguing, being wrong, and updating — a context that is not available to a model from a prompt. This is not an argument for credentialism; it’s an observation that genuine insight requires a particular kind of exposure that can’t be synthesized on demand.
Commitment. The willingness to put a real view on the record, with your name attached, knowing it can be wrong. This sounds simple, but most writing — especially in professional contexts — is deliberately non-committal. It hedges. AI writing, by default, hedges. The value of a writer who says “here is what I actually think, and why” only increases as the baseline of plausible-sounding prose rises.
What This Means for Practice
I use language models in my writing workflow. I use them to think out loud, to generate counterarguments I haven’t considered, to draft sections that need to exist but don’t require deep judgment — the transitional paragraph, the summary, the context-setting opener. I use them as an accelerant for the work.
What I don’t do is ask them to produce the view. The view is the work. Everything else is arrangement.
The writers who will matter in the next decade are not the ones who can write the most fluent sentences — that’s a solved problem. They’re the ones with enough context to know what’s actually true, enough judgment to know what’s worth saying, and enough conviction to say it clearly and put their name on it.
The floor of writing just rose dramatically. The ceiling didn’t move.