I admire Ted Chiang’s writing more than almost any living author’s. Story of Your Life is one of those rare short stories that rearranges something inside you and doesn’t put it back. It feels like a hot iron ball in your throat, a koan made long. The film adaptation is visually interesting, but it doesn’t land the same way. The story’s power comes from how Chiang uses the structure of language itself to shift how you experience time on the page. You can’t translate that to a screen. The choices he makes at the sentence level are the thing, not decoration on top of the thing.
So when Chiang published an essay a while back arguing that AI isn’t going to make art, I read it the way you read something from a writer you trust who is also, you suspect, talking directly to you. Not to me specifically, of course. But to the version of me that uses AI in my own writing process, and who has been quietly unsure about what that means.
His argument is elegant and, I think, largely correct. Art is the result of making a lot of choices. A ten-thousand-word story requires something on the order of ten thousand choices. When you hand a prompt to a generative AI, you’re making far fewer choices and asking the machine to fill in the rest. The machine fills those gaps either by averaging what other writers have done (producing bland text) or by mimicking a specific style (producing derivative text). Neither of those is making art. The choices are where the artistry lives, and the small-scale choices matter just as much as the big-picture ones.
I don’t disagree with any of that. And that’s the problem, because AI has recently become part of how I write.
What My Process Actually Looks Like
I want to be specific here, because I think the details matter more than the category. Saying “I use AI in my writing” can mean a hundred different things, and most of the discourse around it treats all of those things as the same.
When I’m working on an article, I usually start by going back and forth with Claude on research I was already thinking about. I might have two or three sources in mind and want to see what else is out there, or I might want to pressure-test an idea against frameworks I haven’t considered yet. This is the part that feels most like having a well-read colleague to bounce ideas off of. I’m still making the choices about what to pursue and what to set aside, but the surface area of what I’m considering gets wider than it would be if I were working alone.
Then I read. I read the sources that came up, I read things those sources reference, and I sit with whatever is forming in my head. This part is entirely mine and always has been.
After that, I compose out loud. Sometimes for ten minutes, sometimes for closer to an hour. I talk through what I’m thinking, follow the threads, argue with myself, circle back. It’s messy and nonlinear and full of half-formed sentences. But the thinking is mine. The positions are mine. The connections between ideas are ones I’m drawing, not ones being suggested to me.
I feed that transcript into Claude, which has been trained on my voice through a style guide and accumulated context. It generates a draft. Then I go back through it, clean it up, rewrite sections, cut things that don’t sound like me, add things that are missing, and publish.
That’s the process. And here’s what I notice about it when I hold it up against Chiang’s framework: I am making fewer choices than I would if I wrote every sentence from scratch.
What I Give Up
The honest version of this is that I’m outsourcing a significant number of sentence-level decisions. The word order, the rhythm of a particular paragraph, the specific phrasing of a transition. These are choices that, in Chiang’s framework, constitute much of what makes writing a writer’s own. When I speak my thoughts aloud and then hand a transcript to a language model, I am providing the architecture: the arguments, the structure, the perspective, the examples. But I am not choosing every word. I am not sitting with a sentence and deciding that this verb is better than that one, or that this clause should come before rather than after. Many of those micro-decisions are being made for me.
The style guide and the editing pass are meant to close that gap. And they do, partially. I catch the phrasing that doesn’t sound like me. I restructure paragraphs that feel too neat or too generic. I add the specificity that a model can’t generate because it doesn’t have my particular experience to draw from. But I’d be lying if I said I catch everything. I’d be lying if I said the final product is identical to what I would have written if I’d sat down with a blank page and composed it myself, word by word.
There is something I am not doing in this process, and Chiang has named it precisely. I am not making all of the choices.
The Thing I Still Do Differently
I still write poetry. Much less frequently than I used to, but that has nothing to do with AI. It has to do with being a father, being in a doctoral program, and being in a different phase of life than the one where spoken word poetry was my primary form of expression. When I was teaching high school English, poetry was how I processed things. It was immediate and physical, especially spoken word, where the performance is part of the composition. You can’t hand that off. Every breath, every pause, every stressed syllable is a choice you’re making in real time with your body.
I bring this up because the contrast is instructive. When I write a poem, I am doing exactly what Chiang describes. I am making choices at every scale, from the overall structure down to whether a line ends on a hard consonant or an open vowel. There is no averaging happening. There is no gap between my intention and the text. The poem is the choices, and the choices are mine.
My article-writing process is not that. And I think it’s important to say so rather than constructing some argument about how my process is actually equivalent because I’m “directing” the output or “curating” the result. Directing and curating involve fewer choices than composing. That’s just true.
What I Don’t Trust
At the same time, I want to name the other side of my experience with AI-generated text, which is that I actively avoid it as a reader. I go out of my way to skip things that are clearly AI slop, the kind of content that was generated in one pass without meaningful human oversight or personality behind it. It’s not just that it’s boring, although it is. It’s that I don’t trust it.
I got back into birding recently, and one of the first things I noticed is how much AI-generated nature content has flooded the spaces where you’d go to learn about birds. Field guides, blog posts, identification tips. Some of it is subtly wrong in ways that are hard to catch if you don’t already know the subject. I don’t want to end up carrying around some fabricated detail about chickadee behavior because a language model averaged its way into something plausible but inaccurate. The stakes there are low (nobody gets hurt if I misidentify a bird call), but the principle matters. If I can’t tell whether the information is reliable, and the author didn’t care enough to verify it, then the whole exchange is broken.
This is where Chiang’s point about intention becomes personal for me. When I encounter writing that clearly has no human intention behind it, I feel something between frustration and sadness. Not because AI exists, but because someone decided that the thing they were putting into the world didn’t need their actual attention. I care about the things I read being the product of someone’s genuine effort to communicate. And I care about the things I write being that, too.
Which brings the tension right back to my own desk.
Sitting With It
I could resolve this tension by picking a side. I could decide that Chiang is right and my process is a compromise I should feel worse about. Or I could construct a defense of my workflow that distinguishes it sufficiently from the one-click slop I avoid as a reader. I could argue that the thinking is mine, the research is mine, the positions are mine, and therefore the output is meaningfully mine even if I didn’t choose every word.
But I don’t think either of those positions is fully honest.
The truth is that my process sits somewhere in the middle of a spectrum that Chiang’s essay clarifies but doesn’t resolve for someone like me. I am not generating text from a bare prompt and calling it my own. I am also not composing every sentence the way I compose a line of poetry. I am doing something in between, and “in between” is not a comfortable place when a writer you admire has drawn a fairly bright line.
What I keep coming back to is the question of what I would lose if I stopped using AI in this process and what I would gain. I think I would lose speed and, honestly, a certain kind of structural scaffolding that helps me pull scattered thoughts into a coherent sequence. What I would gain is harder to name, but it has something to do with the relationship between the writer and the sentence. That slow, sometimes frustrating process of sitting with a paragraph that isn’t working and figuring out why. Of choosing this word instead of that one, not because a model suggested it, but because your ear told you it was right.
I don’t know where I’ll land on this. I know that the articles I publish go through enough of my hands that they represent my thinking. I’m less certain they represent my writing in the way Chiang means it. And I know that when I sit down to write a poem, even a bad one, I am doing something qualitatively different from what I do when I draft an article. The poem is all choices. The article is my choices and someone else’s, layered together in a way that I can’t always untangle.
Chiang ends his essay by arguing that the meaning of communication comes from the fact that a person is behind it. That your unique life experience and the moment of encounter are what make expression new, even if the words themselves aren’t novel. I believe that. I also believe that my process puts more of me into my writing than a bare prompt ever could. Whether it puts enough of me in is a question I’m still working through.
