How 52 Professional Copyeditors Are Using LLMs in 2026
There’s no shortage of opinions about AI and editing right now. Scroll through any editorial forum or LinkedIn thread, and you’ll find strong feelings on both sides—ranging from “it’s going to replace us all” to “it’s just another tool.”
But what I’ve been craving isn’t more opinions. It’s data. What are actual working copyeditors doing with these tools? Where are they finding value? Where are they hitting walls? And how do they feel about it all?
To find out, I surveyed 52 professional copyeditors who have used or experimented with large language models (LLMs) like ChatGPT, Claude, Gemini, and NotebookLM for copyediting-related tasks in the past twelve months. This isn’t a massive sample, and I want to be upfront about that. This survey also doesn’t represent all copyeditors; it only represents the ones who’ve experimented with these tools. But that’s exactly the group I wanted to hear from.
A few things to know about the respondents: This is a highly experienced group. Sixty-seven percent have been working as professional copyeditors for more than ten years, and 87% have five or more years of experience. The majority (71%) are freelancers. These aren’t people casually dabbling in editing—they’re seasoned professionals with deep expertise and well-established workflows.
So what did they tell me? The short version: It’s complicated. That’s what makes the findings so interesting.
They’re Experimenting, Not Going All In
Every single respondent has tried using LLMs for copyediting-related work—that was a prerequisite for taking the survey. But “tried” doesn’t mean “adopted.” The picture that emerged is one of cautious, ongoing experimentation rather than wholesale commitment.
When asked how their LLM use had changed compared to six months ago, the most common answer (from 20 of 52 respondents) was that their usage had stayed about the same. Eleven said they were using LLMs somewhat more. But 15 said they were using them less or had stopped entirely—and only 3 said they were using them significantly more.
That last number is worth a quick pause. Out of 52 editors who’ve been actively experimenting with these tools, only 3 have significantly ramped up their use. Meanwhile, 11 respondents reported that they’d tried LLMs and stopped using them altogether.
The reasons for stepping back were strikingly consistent. One respondent put it bluntly: “It takes me more time to check their work than it does to do it all myself.” Another said: “I used ChatGPT to find Chicago Manual of Style rules, but it made them up!” And a third noted they’d abandoned LLMs after finding that “the systems are too prone to foolish changes for the sake of compliance.”
One editor captured a tension I heard from many respondents: “I learned to use AI because I wanted to. This makes me inclined to say that they have increased efficiency, accuracy, and the overall editing process. But have they really? Or is it just fun to use new tools?”
That kind of honest self-reflection ran through the entire survey.
Brainstorming, Yes. Copyediting, Not So Much
When I looked at what editors are actually using LLMs for, a clear pattern emerged: The most popular use cases aren’t core copyediting tasks. They’re support tasks that happen around the editing.
The most commonly reported use was brainstorming and rewording (42 of 52 respondents), followed by fixing grammar or mechanics (34), understanding complex or unclear text (30), and fact-checking (26). These numbers tell an interesting story on their own, but the picture gets even clearer when you look at where editors reported the best accuracy and the biggest time savings.
The tasks where editors found the most reliable output and the greatest efficiency gains were brainstorming alternative phrasings (24 respondents said this saved time, and 26 rated accuracy as good) and understanding complex text (17 reported time savings, 13 rated accuracy as good). In other words, the tasks where LLMs performed best were the ones where the editor is using AI as a thinking partner—a sounding board for ideas—rather than asking it to do the editorial work itself.
On the flip side, the tasks with the poorest accuracy were fact-checking (17 rated it poor), filling in citation details (14 rated it poor), and verifying publication details like publishers, dates, and ISBNs (11 rated it poor). These are exactly the tasks where precision matters most and where LLM hallucinations are most dangerous.
As one respondent put it: “[These tools are] awful for direct editing work, and not allowed by my clients anyway, but [they] can be good to bounce ideas off (not feeding it the actual text).”
Here’s the kicker: When I asked editors how much of their overall LLM use is for copyediting versus other tasks like admin, marketing, and research, 40% said most or almost all their LLM use is for non-copyediting tasks. Only 23% said they mostly use LLMs for copyediting itself.
One editor summed it up perfectly: “The best use that has saved me the most time is using LLMs as a thesaurus. I don’t consult any other thesauruses anymore. However, by selecting the word from the options it generates, I am still exercising my own expertise. There is no editing task I can reliably, wholesale give over to LLMs.”
The Trust Problem
If there’s one finding that should give the AI-enthusiasm crowd pause, it’s this: Trust in LLMs for copyediting is declining among the editors who use them most.
When asked how their trust in LLM accuracy had changed over time, 20 out of 52 respondents (38%) said it had decreased—with 11 saying it had decreased significantly. Only 12 said their trust had increased. The remaining 20 said it had stayed the same. In a group of editors who have actively chosen to experiment with these tools, the fact that trust is declining for more than a third of them is a telling signal.
This tracks with the error discovery rates. Sixty-three percent of respondents said they’ve found errors or inaccuracies in LLM output that they initially missed during verification. Ten respondents said this happens frequently.
The barriers and challenges editors reported paint a consistent picture. Here are the most commonly cited ones:
- Inaccurate or unreliable outputs (67%)
- Doesn’t understand context or nuance (63%)
- Inconsistent results (58%)
- Ethical concerns about AI use (58%)
- Privacy and confidentiality concerns (58%)
- Takes too much time to verify results (50%)
If you’ve been reading the earlier posts in my “Editors and AI” series, these findings will sound familiar. The overcorrection, the hallucinations, the false confidence—these are the same limitations I’ve been writing about.
One respondent who works extensively with ESL/EFL writers described a challenge I found particularly striking: “The worst part is the time LLMs add to my work when a writer has used it—the prose is slick but the meaning is often vague or shaded. Especially with ESL/EFL writers, with whom I have a lot of experience, it makes it harder for me to see what they originally meant and smooth that out accurately when an LLM has already done it, probably inaccurately.”
Another editor captured the verification trap perfectly: “I most often use it to double check something I might be second guessing. But I have to be careful and check sources just to be sure. So in that way, it takes me more time and extra steps if I employ an LLM.”
And one editor offered this sobering perspective: “Due to the increasing popularity of LLMs, the fact-checking portion of my job now takes 3 to 4 times longer than it did previously. Often, the focus of projects is no longer true copyediting, but chasing down AI-fabricated sources and ‘facts.’”
Can LLMs Actually Copyedit?
I asked respondents directly: As they exist today, do you think LLMs are capable of performing copyediting-level tasks accurately—that is, without extensive correction, oversight, verification, and review?
The answer was a resounding “not yet.” Sixty-nine percent said LLMs require too much oversight or can’t do it at all. Only 6% said LLMs work for most copyediting tasks.
The efficiency picture is similarly mixed. Half of respondents (50%) said they’re somewhat or significantly more efficient because of LLM use, but 21% said they’re actually less efficient, and 29% reported no difference. The “is the time investment worth it?” question was nearly a coin flip: 46% said yes, 33% said no, and the rest were neutral or unsure.
One editor nicely captured the upside for specific use cases: “It has been very helpful for optimizing my process—it’s good at what I struggle with, and Claude is an excellent co-thinking partner when I get stuck with rephrasing, tough-to-remember rules (CMOS, I’m looking at you), and headline options. It hasn’t made me a better editor necessarily, but it has made me faster/better at clearing jams.”
Another described the experience of working with an LLM as “working with a savant that is uberfocused on something (which may or may not serve the specific text). When we are on the same page, it is a timesaver, but I can’t say I am fully enamored.”
And one respondent offered a refreshingly practical take: “I only use it to create macros. When they work first time, it’s an efficient use of my time. When they don’t and I have to keep prompting, it’s a frustrating waste of time and I’ll often revert to doing things manually.”
The editors who reported the most satisfaction tended to be the ones who’d found specific, bounded use cases—using LLMs as a brainstorming partner, a thesaurus, a gut-check resource, or a macro writer—rather than trying to use them as an editorial assistant. As one put it: “It’s a good brainstorming partner when I’m trying to untangle a difficult sentence. Sometimes I’m completely baffled by what an author is trying to say, but then when I see the different options that AI offers, it becomes instantly clear the author’s likely intended meaning.”
The Paradox: Editors Expect Change They Don’t Think the Tools Are Ready For
Here’s what I found most fascinating about the survey results: There’s a striking disconnect between what editors think LLMs can do today and what they think will happen in the near future.
Sixty-two percent of respondents said they believe LLMs will fundamentally change the copyediting profession in the next three to five years. This is the same group that overwhelmingly said the tools aren’t ready for copyediting-level work right now.
When I asked whether LLMs will improve at copyediting tasks over time, 64% said yes, they’ll improve—but they’ll never match human copyeditors. Only 10% said LLMs would eventually match or exceed human performance. And when asked whether they’d recommend that other editors incorporate LLMs into their workflow, the most common answer was “it depends on the situation” (35%), followed by “yes, with reservations” (27%). Only 13% gave an unqualified yes.
The explanations for these answers reveal a profession grappling with uncertainty. Some respondents see the writing on the wall:
“Editors will need to focus on the human elements of copyediting and provide higher-level services. Just knowing grammar, spelling and punctuation is not going to help you maintain an editing career.”
“Many of us are going to be put out of business by LLMs because people can do this work, at a ‘good enough level’ for cheaper, even if that just means hiring people who will run their work through LLMs.”
“I think companies will see it as an easy way to cut costs, then realize that there is so much machine-generated content out there that the stuff that actually performs is content with a higher number of human touches.”
Others are more cautiously optimistic:
“I think LLMs will eventually improve to the point that they can be used as a tool in a copyeditor’s workflow, but I don’t consider that a significant change. We find new tools and resources to support our expertise all the time.”
“I think it’s all hype and no substance; I think the bubble will pop and these AI tools will be exorbitantly expensive, and it’ll be more worth it to simply know how to edit on your own.”
And some respondents voiced a concern that transcends the technology itself:
“I believe that in the US, money is king, and if publishers, editors, and authors think using LLMs and AI is cost-effective and will increase profits, that will become the most common way copyediting is done no matter the human and environmental costs.”
One respondent put the whole tension perfectly: “Like it or lump it, this conversation is not going away, and that alone is changing the profession.”
What This Means for Your Practice
What can we take away from all this? Here’s what I think the data is telling us.
First, the editors in this survey who report the most satisfaction with LLMs are the ones who’ve found specific, restricted use cases. They’re using AI to brainstorm, to gut-check a tricky grammar question, to untangle a confusing sentence, to write a macro, or to speed up administrative tasks. They’re not trying to get AI to do the copyediting for them. The value is in what happens around the editing, not in the editing itself.
Second, the trust issue is real, and it’s getting worse, not better, for many editors. If you’re using LLMs for copyediting tasks, you need to build in verification time—and be honest with yourself about whether the net result is actually faster or just feels that way because the tool is novel and fun to use.
Third, the ethical and environmental concerns aren’t going away. More than half of respondents flagged ethical concerns and privacy and confidentiality issues as barriers, and many also raised environmental impact. These aren’t fringe positions; they’re mainstream concerns among working editors. If you’re developing an AI policy for your business, you’ll need to grapple with these questions thoughtfully.
And finally, whether or not you personally use LLMs, this technology is already changing the editorial landscape. Several respondents noted that they’re now spending more time cleaning up AI-generated content from clients—a trend that’s likely to accelerate. Understanding how these tools work, where they fail, and what they do to prose isn’t optional anymore, even if you choose not to use them yourself.
As one respondent reflected: “It’s tempting to take polarized positions on these tools, but I’m old enough to remember when a computer could never out-compute a human with a slide rule. Then it did, and now slide rules are probably more rare than buggy whips. I understand the fear and alarm, but at this point eradicating AI is probably not realistic.”
Whether that comparison comforts you or unsettles you probably depends on where you sit with these tools right now. But either way, the 52 editors in this survey are doing what professional editors have always done: Evaluating new tools carefully, thinking critically about their limitations, and making thoughtful decisions about what belongs in their workflows.
This post was published on March 25, 2026.
Whether you’re experimenting with AI tools or avoiding them entirely, understanding how this technology works—and what it can and can’t do—helps you make better decisions for your business and communicate clearly with clients. Join my email list for practical updates on AI and editing.
Other Posts in My “Editors and AI” Series
- Editors and AI, Part I: What Is AI? A Primer for Editorial Professionals
- Editors and AI, Part II: AI in Editorial Software—Which Editing Tools Use AI and Which Don’t
- Editors and AI, Part III: How Generative AI Really Works—What Editors Need to Know
- Editors and AI, Part IV: Beyond “Just Say No”—A Nuanced Approach to Generative AI in Editing
- Editors and AI, Part V: Will AI Replace Human Editors?
- The Surprising History of Spell Checkers—and What It Means for AI-Anxious Editors
- Human Editors vs. AI: Why the Best Strategy Isn’t What You Think
- The AI Myth Every Editor Needs to Stop Believing
- Could AI Ever Replace Copyeditors? Here’s What the Tech Would Require
Are You Charging What You’re Worth?
New to editorial freelancing and feeling like you need to learn all the things? Overwhelmed with projects but not making enough money? Forgoing breaks and vacation time to meet deadlines? My free, 9-lesson course gives you actionable ways to find your ideal freelance rates, say goodbye to the hustle, and build a profitable business that energizes you.