Journalism in the AI Transformation
Few industries are as immediately affected by generative AI as journalism. Text production is the core of any newsroom — and exactly what ChatGPT, Claude, and Gemini were built for. At the same time, a newsroom's credibility hangs on whether readers can trust that a piece was researched and written by a human.
This article shows how Swiss media houses handle this tension: which guidelines they developed, how they integrate AI into daily newsroom work, what role AI detectors play, and which open questions remain.
The Starting Point: Trust as Core Currency
Media live on trust. If readers can no longer be sure whether a reportage about a mountain farm in the Emmental rests on real conversations or on ChatGPT output, the business model of a quality outlet is struck at its core. Switzerland's major media houses know this, and they respond accordingly.
Three principles shape the approach of practically all serious Swiss newsrooms:
- Transparency toward the audience: when AI plays a role in production, it's disclosed.
- Human responsibility: every published piece has a responsible journalist who stands for content and accuracy — regardless of how much AI helped create it.
- Differentiation by genre: what holds for an automated stock market notice doesn't hold for a reportage. Guidelines distinguish accordingly.
NZZ: The Quality Model
Neue Zürcher Zeitung was among the first Swiss newsrooms to publicly comment on its approach to AI. The core statement: AI is used as a tool — in research, structuring, translation, and routine production. But no piece is published without human responsibility and editing.
In practice:
- Research support with AI (e.g., for processing large document troves) is allowed and actively used in major investigations.
- Raw translations from other languages are partially AI-assisted and then editorially revised.
- Op-eds, reportage, and analysis remain core human production.
- Automated short notices (weather, stock market tables) may be machine-generated — and are labeled accordingly.
Tamedia and TX Group: The Group Approach
TX Group (which includes Tamedia, 20 Minuten, and other titles) has developed group-wide guidelines that are adapted to individual titles. What stands out is the pragmatic handling of AI-assisted production in service and advice sections: in these areas, AI is used openly, while hard journalistic formats (political reporting, investigative work, reportage) remain clearly human.
The principle: not every text on a news website is "journalism" in the same sense. A service article on spring travel destinations is different from an investigative piece. The AI guidelines reflect that differentiation.
Ringier: Platform Logic and International Dimension
Ringier has a particular ambition: the company runs media in multiple countries and must find guidelines that work across language and market boundaries. The answer: a set of group-level principles interpreted specifically by individual titles.
These principles include:
- No fully autonomously created texts without human editing.
- Source and fact checking remains a human task.
- Transparency toward the audience when AI is involved.
- Protection of whistleblowers and sources — their statements may not be fed into AI systems that could store or transmit the information.
SRF and SRG: The Public Broadcasting Approach
SRF and the entire SRG group face a particular challenge: as a public-service media organization, SRG has a double responsibility — to license-fee payers on one hand, and to its mandate of information and opinion formation on the other.
SRG has adopted internal guidelines that apply in several tiers:
- Core journalism: news, political reporting, documentaries, and reportage remain human production with clear journalistic responsibility.
- Assistance deployment: transcription, translation, and research support use AI — where it doesn't diminish quality.
- Automated formats: certain formats (e.g., some sports notices) may be produced with AI assistance and are labeled.
- Protection of sensitive sources: for investigations requiring source protection, external AI services are restricted.
The Role of AI Detectors in Newsrooms
Interestingly, detectors play a different role in Swiss newsrooms than in schools: they serve less as a policing tool and more as a quality instrument. Three typical use cases:
1. Verifying External Contributions
Op-eds, guest pieces, and submissions from freelance journalists are increasingly checked with detectors, not to disqualify authors but to ensure that commissioned work is actually human-written. Transparency is established through clear commissioning agreements.
2. Internal Quality Assurance
Some newsrooms use detectors as part of the editorial workflow. A text with a high AI score goes into editorial revision — not as punishment but because such a score often signals style issues worth addressing regardless of AI use.
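The triage logic described above can be sketched in a few lines. This is a hypothetical illustration: the score scale, the 0.8 threshold, and the routing labels are assumptions for the example, not any newsroom's actual configuration or any real detector's API.

```python
def triage(ai_score: float, threshold: float = 0.8) -> str:
    """Route a draft based on a detector's AI-likelihood score (0.0-1.0).

    A high score does not prove AI authorship; it only flags the draft
    for human revision, mirroring the quality-assurance use described
    in the text. Threshold and labels are illustrative assumptions.
    """
    if ai_score >= threshold:
        return "editorial_revision"  # flagged: review style, verify sourcing
    return "standard_editing"        # normal editorial workflow


# A draft scoring 0.92 would be routed to revision; 0.35 proceeds normally.
print(triage(0.92))  # editorial_revision
print(triage(0.35))  # standard_editing
```

The point of the sketch is that the detector output feeds a human step rather than an automated verdict: the flag changes who looks at the text, not whether it is published.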
3. Verifying External Claims
In investigations into AI use in politics, business, or education, newsrooms sometimes check texts that are the subject of their reporting — for example to verify whether a minister's controversial statement was actually formulated by a human or came from an AI system.
Handling AI-Generated Sources
A particular challenge for newsrooms is not their own AI use but others'. What if a press release was written with AI? What if a study being reported on contains AI-generated text? What if a primary source website itself publishes AI content?
Serious newsrooms have developed procedures:
- Source criticism rethought: the classic "is this source reliable?" expands to "is this source human?"
- Prefer primary sources: wherever possible, talk directly to actors instead of relying on secondary texts.
- Detection as aid: for texts of unclear origin, a detector serves as an additional indicator.
- Disclosure in one's own text: if a source is AI-generated, that's made transparent in the journalistic piece.
The Ethics Framework: Press Council and Guidelines
The Swiss Press Council has repeatedly commented on AI and clarified that existing journalistic principles continue to apply in the AI era:
- Truthfulness: published texts must be factually accurate. That holds for AI-assisted production too.
- Transparency of sources: readers must be able to trace where information comes from.
- Separation of fact and opinion: this distinction is not softened by AI use.
- Copyright and quotation: AI-generated texts raise complex copyright questions. Adopting them directly without attribution remains problematic.
Practically that means: the ethical bar for journalistic work isn't lowered by AI — if anything, it's raised. Newsrooms must exercise more care, not less.
Open Questions Newsrooms Haven't Yet Solved
Personalization and AI
What if a newsroom starts personalizing articles individually for readers — with AI assistance? How far can personalization go before it changes the character of journalism? This question is barely discussed in Switzerland but will become relevant over the coming years.
Translation and Multilingualism
In multilingual Switzerland, AI-based translation is especially attractive. A German article can be automatically translated into French and Italian. But is an automated translation of full editorial value? Who is liable for translation errors? How is transparency established? These questions aren't fully resolved.
Archives and Training
Some media houses face the question of whether to release their archives for AI training — their own or others'. Decisions have long-term strategic consequences but are often made without broad public debate.
Handling AI Images and Deepfakes
The question of AI and text is complex enough; the question of AI and images is even more so. Swiss newsrooms have so far approached this topic inconsistently.
What Other Industries Can Learn From Journalism
Newsrooms aren't alone with their AI challenges. Other industries — from corporate communication through scientific publishing to education — can learn from the journalistic approach:
- Differentiate clearly by genre. Not every piece of writing is "one's own work" in the same sense. A weather report is different from an op-ed.
- Anchor responsibility personally. The institution isn't responsible — a concrete person is.
- Transparency as core value. Better to openly declare AI use than to hide it.
- AI as tool, not substitute. The human remains author and shaper; AI supports.
- Continuous adaptation. Technology moves fast — guidelines must grow with it.
What Readers Can Do
The audience also has a role. Anyone who values journalism should:
- Pay for quality media. Serious editorial work costs money. Ad-funded content is especially vulnerable to cheap AI substitution.
- Demand AI transparency. Readers may ask how a piece was made — and expect the outlet to answer.
- Assess sources critically. Not every text on the internet is journalism. Distinguishing editorial content from automated production is more important than ever.
- Give constructive feedback. Newsrooms respond to their audience. When you see good AI guidelines, say so.
Conclusion: Defending Trust
Swiss newsrooms have done remarkable work over the past three years to navigate the transition to the AI era. They've developed guidelines, adapted internal processes, and created transparency. Not everything is perfect, but the direction is right: AI is used as a tool, not as a substitute for journalistic responsibility.
Detectors play a supporting but not central role in this frame. The core tasks — researching, verifying, contextualizing, weighing — remain human work. For all other industries facing similar challenges, the journalistic approach is good inspiration: not every task can be automated. But where automation is possible, it should happen transparently — and be anchored in responsibility.
Sources
- Swiss Press Council, statements on AI in journalism, 2023–2025.
- Neue Zürcher Zeitung, editorial guidelines on the use of generative AI, 2024.
- TX Group, guidelines on AI use in newsrooms, 2024.
- Ringier, group principles on AI and journalism, 2024.
- SRG SSR, internal guidelines on AI in program production, 2024.
- Reuters Institute for the Study of Journalism, Annual Report 2024.