Becoming an AI Editor: Editing a Mythomaniac

How do you edit a writer that has no sense of truth?

For editors throughout history, this hasn’t generally been a problem. Except for the unlucky editor working with a mythomaniac or someone truly desperate, there’s been a “trust but verify” relationship between editors and writers (as there has been between writers and their subjects). Generally, we’ve been able to trust that the person we’re dealing with is working in good faith, if for no other reason than that they’d probably like to get published again.

But now, for AI editors, no such social contract exists. Output from a large language model (LLM) is just token-by-token probabilistic, randomized prediction. The AI has no bigger picture it is aiming toward, much less a sense of truth (which, as a side note, IMO, would be part of any model that actually represented artificial general intelligence).

So how do you approach this as an AI editor?

Last week, I talked about the first step you need to take to put the “human in the loop” when editing generative AI: make sure you understand the output it has generated, so that if someone asks you questions about the content, you can answer them. That starts with reading the text carefully, checking for consistency, and doing a “smell test” for anything fishy and in need of further examination.

Your next step—fact checking—depends on how you did your research. There are 3 possibilities here:

  1. You did your own research

  2. You asked the LLM to get you started on research by finding sources and summarizing its takeaway

  3. You used some hybrid approach (like asking the LLM to summarize longer documents that you found)

If you did your own research, your fact-checking is a pretty light lift. You already read your sources, you remember what stood out to you, and you may have even created a prompt that told the LLM to highlight specific facts. You still need to read to make sure it did what you asked and to compare it against your memory, but you did 90% of the work up front.

If the LLM got you started on your research, or you used a hybrid model, you have some more work ahead of you.

“But, Deanna,” you say, “I did it this way to save time. Don’t I lose that if I then have to go do the research on my own?”

Well, yes and no. If you don’t check the work, you’ll save time getting to publication, but it’s time you could easily lose when someone asks about a specific fact, you can’t defend it, and you then need to go figure out where it came from and how to save face. If you do check the work, you’ll still save time because (and it actually hurts me to type this) you don’t have to fully read everything. You can treat this more surgically.

You’ve already read through the text to get an overall sense of it. Now, take a second pass. This time, as you read, interrogate the LLM. If it makes a claim you don’t think you could stand behind, ask it where it got the information.

Sometimes, it will respond with a precise quote and the source it came from. Go check that the quote really exists where it says it does. While you’re there, read a little of the context around it to make sure it’s not oddly cherry-picked.

Otherwise, it may tell you it has made a logical conclusion or deduction based on the information. Remember: it hasn’t. It’s doing token-by-token content creation, not reasoning as you and I understand it. So here, you need to read the information it says its “conclusion” is based on, and then see if you come to the same conclusion. If you don’t, now that you know what it’s referring to, you can ask it to rewrite the passage to a point you feel more comfortable with. Or just do that yourself.

And sometimes, it will just admit it made it up. Not great, but at least you caught it before you put it out in the world.

So how do you edit a writer that has no sense of truth? Methodically, interrogatively, and line by line.

Next time: judging. Is this even any good?

Deanna Oothoudt