Ars Technica Unveils Comprehensive AI Editorial Standards: Human Authorship Remains Non-Negotiable
Editorial Leadership Releases Transparent Policy on Generative AI Use in Newsroom
Ars Technica has officially published its newsroom artificial intelligence policy, establishing clear boundaries on how generative AI can and cannot be deployed in editorial operations. Earlier this year, the publication committed to releasing a reader-facing explanation of its AI practices, with Editor-in-Chief Ken Fisher prioritizing accuracy and precision over speed in developing the comprehensive guidelines.
The policy reflects two foundational principles: that AI cannot supplant human creativity, insight, and ingenuity, and that properly implemented AI tools can enhance professional work. These convictions directly shaped the practices the organization chose to prohibit in its newsroom.
Core Editorial Standards
According to the policy, human journalists remain the exclusive authors of all reporting, analysis, and commentary at Ars Technica. Artificial intelligence will not serve as writer, illustrator, or videographer. The organization explicitly rejected using AI as a workaround to traditional editorial practices or as a mechanism for eventually replacing human professionals.
Fisher emphasized that these standards are not new innovations. Rather, they have governed editorial operations since AI tools became commercially available. The publication of this document makes previously internal guidelines visible to readers, who deserve transparency about the rules the organization enforces.
Permitted and Prohibited Uses
Ars Technica permits limited AI tool deployment in specific editorial workflows. Approved applications include:
- Grammar checking and style suggestions during editing processes
- Structural feedback on articles, provided humans retain final decision-making authority
- Research assistance for navigating large document volumes, summarizing background materials, and searching datasets
- Visual production support where creative direction and editorial judgment remain human-controlled
However, the policy establishes firm prohibitions. AI-generated material cannot serve as an authoritative source without independent verification. When reporters attribute statements, quotes, or positions to named sources, that material must originate from direct engagement: interviews, transcripts, published statements, or documents personally reviewed by the journalist. AI tools cannot generate, extract, or summarize content that is subsequently attributed to sources.
The organization forbids publishing claims derived solely from AI-generated summaries, and journalists cannot represent material as "reviewed" unless they have examined it directly.
Visual and Multimedia Content
Visual content, including photographs, illustrations, and video, originates from editorial teams, art departments, and established photography and wire services. While creative teams may utilize AI tools in production workflows, human creative direction and editorial judgment drive all decisions. Ars Technica will not publish AI-generated images, audio, or video presented as authentic documentation of actual events.
Documentary media cannot be altered in ways that change meaning, though standard production techniques like color correction, cropping, and contrast adjustments remain acceptable. When synthetic media appears in reporting about AI itself, it must be clearly labeled as AI-generated with disclosure positioned immediately adjacent to the material.
Accountability Framework
Any journalist employing AI tools in editorial workflows bears full responsibility for the accuracy and integrity of the resulting work. This responsibility cannot be delegated to colleagues, editors, or the tools themselves. The policy emphasizes that maintaining these standards is a shared obligation across the entire editorial operation.
Fisher noted that the organization takes action when violations occur. Publishing this reader-facing policy reflects the conviction that readers deserve visibility into the standards the newsroom enforces, rather than being asked to trust blindly that such standards exist.
The policy was last updated on April 22, 2026.