
Why 'reviewed' should be a protected verb

Filed in: AI, editorial standards, content governance, policy

Yesterday Ars Technica published its newsroom AI policy.

The short version from editor Ken Fisher: "Ars Technica is written by humans. AI doesn't write our stories, generate our images, or put words in anyone's mouth."

It also names a balance I keep trying to strike in my own work, whether with the dev team on code review and documentation or in broader communications: finding where AI can responsibly ease the tedious parts of a task without corroding the creative, less hallucination-prone parts of the human role. Ars draws that line in a concrete place:

AI tools must not be used to generate, extract, or summarize material that is then attributed to a named source, whether as a direct quote, a paraphrase, or a characterization of someone’s views.

It reminds me of art professors who have students help with the tedious parts of a larger work: building canvases, cleaning paints, sometimes laying in background areas. The students get practice and exposure, and the professor keeps their attention on the parts only they can do. It works until the professor asks for too much and the student becomes the creative author. AI can drift the same way when no one draws the line explicitly.

It's the second public, carefully worded AI policy I've flagged recently; the first was Fedora's for open-source contributions, and the two rhyme more than I expected.

Both policies echo a similar refrain: authors using AI tools must disclose that use. The disclosure doesn't automatically make it into the published piece; it just makes it impossible to use AI invisibly.

These kinds of public policies still seem to be an exception, but it's nice to see them becoming more common.

Learn more at the source
