Why Manual Detection Still Matters in 2026
AI detectors like AIDetector.ch are the backbone of any serious integrity check today. And yet: the fastest, cheapest assessment of whether a text comes from a human or a machine still happens inside the reader's head. Anyone who knows the telltale patterns of ChatGPT, Claude, or Gemini can form an informed gut feeling within seconds — and knows when a detector is actually worth running.
This guide presents the ten most common signals AI text leaves behind in practice. It draws on thousands of submissions analyzed across Swiss schools, universities, and businesses — and on current research into the linguistic fingerprints of large language models.
What Research Tells Us: AI Writes Differently, But Increasingly Human-Like
Early studies on GPT-3 (Zellers et al. 2019, Gehrmann et al. 2019) showed AI text diverged statistically from human writing. With GPT-4 and Claude 3.5, that gap has shrunk — but not disappeared. A meta-analysis by Liang et al. (2023) found that even trained readers achieve only about 58% accuracy on GPT-4 text — barely better than chance. But readers who actively hunt for patterns can push that to 70–75%.
The ten signs below are the patterns that repeatedly show up as the most reliable tells.
Sign 1: Uniform Sentence Length (Lack of Burstiness)
Human writers are bursty: short sentences, then long, then short again. This variance — researchers call it "burstiness" — emerges because we breathe, emphasize, and vary our rhythm as we write. AI models, by contrast, optimize for probability: the most likely next sentence tends to be a medium-length, well-structured one.
What to look for: Paragraphs where every sentence runs 15–25 words with no outliers. Human text shows sentences of 4, 28, 11, and 19 words stacked close together.
AI example: "Digitalization is fundamentally changing the working world. Companies must adapt to remain competitive. New technologies open up diverse possibilities. At the same time, new challenges are emerging."
Human example: "Everything changes. Not just the software, but the whole way we work — and who sits at which table when decisions get made. That's uncomfortable."
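Burstiness is easy to approximate: compute the spread of sentence lengths. A minimal Python sketch, where the regex-based sentence splitter and the reading of "low spread = uniform rhythm" are simplifying assumptions, not a calibrated detector:

```python
import re
import statistics

def sentence_lengths(text: str) -> list[int]:
    """Naive split on .!? followed by whitespace; returns word count per sentence."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [len(s.split()) for s in sentences if s.split()]

def burstiness_score(text: str) -> float:
    """Standard deviation of sentence lengths; low values suggest uniform rhythm."""
    lengths = sentence_lengths(text)
    return statistics.stdev(lengths) if len(lengths) >= 2 else 0.0

ai_like = ("Digitalization is fundamentally changing the working world. "
           "Companies must adapt to remain competitive. "
           "New technologies open up diverse possibilities.")
human_like = ("Everything changes. Not just the software, but the whole way we "
              "work, and who sits at which table when decisions get made. "
              "That's uncomfortable.")

print(burstiness_score(ai_like))    # small spread: uniform sentence lengths
print(burstiness_score(human_like)) # large spread: bursty rhythm
```

On the two examples above, the AI-style paragraph scores close to 1 while the human one scores around 10 — exactly the gap the heuristic is after.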
Sign 2: Overuse of Transition Words and Signal Phrases
AI models are trained to produce coherent text. They lean heavily on explicit transition words: "moreover," "furthermore," "consequently," "however," "nonetheless," "in particular," "equally," "in this context."
In human writing, such links are often implicit. A human writes: "The problem is known. The solution isn't." An AI writes: "The problem is known. The solution, however, remains unclear."
Metric: Count explicit transitions in the first 300 words. More than six? Strong AI signal.
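That count is easy to automate. A rough sketch, seeded with the (deliberately incomplete) transition list from above — extend it with the phrases you actually encounter:

```python
import re

# Starter list taken from the examples above; not exhaustive.
TRANSITIONS = [
    "moreover", "furthermore", "consequently", "however", "nonetheless",
    "in particular", "equally", "in this context",
]

def transition_count(text: str, word_limit: int = 300) -> int:
    """Count explicit transition words/phrases in the first `word_limit` words."""
    snippet = " ".join(text.split()[:word_limit]).lower()
    return sum(len(re.findall(r"\b" + re.escape(t) + r"\b", snippet))
               for t in TRANSITIONS)

sample = ("The problem is known. The solution, however, remains unclear. "
          "Moreover, the data is thin. Furthermore, context matters. "
          "Consequently, caution applies.")
print(transition_count(sample))  # 4 explicit transitions in this snippet
```

Apply the rule of thumb from above: a result above six in the first 300 words is a strong AI signal.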
Sign 3: Generic, Surface-Level Examples
Ask an AI for an example and you usually get the prototype: a teacher explains something. A company implements something. A student writes a paper. The examples are plausible but interchangeable. They lack friction, proper names, specific circumstances.
What to look for: Examples without names, dates, or locations. "A company in the finance sector deployed AI." Which one? When? For what project? Human text gets concrete: "Zurich Cantonal Bank rolled out Tool X in 2024 to speed up onboarding for corporate clients."
Sign 4: No Personal Perspective or Lived Experience
Even when AI text is written in the first person, it typically lacks the markers of lived experience: contradictory feelings, moments of uncertainty, specific sensory memories. Instead you get generic reflections like "This experience taught me a lot."
Telltale pattern: Personal paragraphs that end in "It's important to..." or "One should...". Real personal reflection rarely resolves into general advice — it stays stuck in the specific.
AI example: "During my internship, I learned the importance of teamwork. It is important to communicate openly and respect everyone's perspectives."
Human example: "On day two, Sandra, the team lead, corrected me in front of the whole team. I was furious. Three weeks later she told me the same thing privately, and that's when I understood why the public moment might have been necessary."
Sign 5: Obsessive List-Making
ChatGPT loves bullet points. Where three would do, it delivers five. Where prose would fit, it makes a list anyway. The pattern emerges because lists are associated with "clear, good structure" in training data — and because RLHF annotators reward them.
What to look for: Texts where more than a third of the content sits in bullet points without structural necessity. Especially suspicious: lists of exactly five or seven items where the final item feels noticeably weaker than the rest.
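The one-third threshold can be checked mechanically. A minimal sketch; the set of bullet markers and the cutoff simply mirror the rule of thumb above:

```python
def bullet_fraction(text: str) -> float:
    """Fraction of non-empty lines that start with a bullet marker."""
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    if not lines:
        return 0.0
    bullets = sum(1 for line in lines if line.startswith(("-", "*", "•")))
    return bullets / len(lines)

doc = "Intro paragraph.\n- first point\n- second point\n- third point\n"
if bullet_fraction(doc) > 1 / 3:
    print("List-heavy: worth a closer look")
```

A fraction, not a raw count, is the right measure here: three bullets in a two-page essay mean nothing; three bullets in a four-line answer mean a lot.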
Sign 6: Spotless Grammar and Typography
Human text almost always contains small irregularities: a typo, a missing hyphen, a double space, unusual punctuation. AI text is conspicuously clean.
Watch for the inverse trap: Some users deliberately have AI insert typos to fool detectors. Clean grammar is a hint, not a proof.
Particularly strong signal: Consistent correct use of curly quotes ("…") and em dashes (—) in an otherwise informal text. People typing on phones rarely bother.
Sign 7: Stock Phrases That Repeat
Every language model has favorite phrases. For ChatGPT these include:
- "It's important to note that..."
- "In today's fast-paced world..."
- "In conclusion..."
- "A variety of factors..."
- "Not least..."
- "It's worth considering..."
- "In an ever-changing world..."
Claude tends toward hedged phrases: "on the one hand... on the other hand," "it should be noted, however," "from a nuanced perspective."
Gemini stands out with enthusiastic fillers: "Absolutely!", "A fantastic question!", "Here's a comprehensive overview."
Practical tip: Ctrl+F the text for these phrases. One hit means little; three or more is a clear signal.
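The Ctrl+F step can be scripted. A sketch with phrase lists taken from the examples above — treat them as starting points, not a calibrated lexicon:

```python
# Phrase lists compiled from the examples above; tune per model and language.
STOCK_PHRASES = {
    "chatgpt": ["it's important to note that", "in today's fast-paced world",
                "in conclusion", "a variety of factors", "not least",
                "it's worth considering", "in an ever-changing world"],
    "claude": ["on the one hand", "on the other hand",
               "it should be noted, however", "from a nuanced perspective"],
    "gemini": ["absolutely!", "a fantastic question!",
               "here's a comprehensive overview"],
}

def phrase_hits(text: str) -> dict[str, int]:
    """Count stock-phrase occurrences per model family (case-insensitive)."""
    lowered = text.lower()
    return {model: sum(lowered.count(p) for p in phrases)
            for model, phrases in STOCK_PHRASES.items()}

sample = ("In conclusion, it's important to note that "
          "a variety of factors play a role.")
hits = phrase_hits(sample)
if max(hits.values()) >= 3:
    print("Clear AI signal:", hits)
```

The threshold matches the rule of thumb above: one hit means little; three or more from the same model family is a clear signal.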
Sign 8: Hallucinated Facts and Invented Sources
Large language models are notoriously bad at reconstructing precise facts. They invent years, author names, study titles, page numbers, and URLs with disturbing plausibility. The text sounds authoritative — but the references are partly fabricated.
What to look for: Citations you cannot verify. If a text cites "Müller et al. 2022" in the "Journal of Educational Research" and you can't find the paper on Google Scholar or in the journal's database, that's a warning. AI texts often mix real author names with fake titles — the hardest version to catch.
Sign 9: The "Epilogue" Ending
AI text often ends with a summary of what you just read: "In conclusion, digitalization brings both opportunities and challenges that require nuanced engagement."
That's the opposite of most good human writing, which ends with an image, a question, a sharpening, or a concrete call to action.
Telltale closing phrases: "In summary," "To conclude," "Overall, one can say," "To get to the point." In formal academic writing such closings are legitimate — in blog posts, personal essays, or cover letters they are almost always a strong AI tell.
Sign 10: "Balanced Presentation" Where a Position Is Called For
ChatGPT and Claude are explicitly RLHF-trained to reply in balanced, polite ways. The result is a characteristic pattern: even on questions that demand a clear stance, the AI delivers a "both sides" structure.
What to look for: A comment, op-ed, or opinion piece that weighs pros and cons without drawing a clear conclusion. Humans with actual opinions usually write in a more partisan register.
AI example: "The question of whether ChatGPT should be allowed in schools is multifaceted. On the one hand, the tool offers opportunities for individualized learning. On the other hand, there are risks to academic integrity."
Human example: "ChatGPT belongs in the classroom — as early as possible. Anyone who ignores this is preparing their class for a world that won't exist in 2030."
The Limits of Manual Detection
These ten signs are heuristics, not proofs. They have four key weaknesses that any serious integrity check must account for:
- False positives for non-native speakers: People writing in a non-native language often produce the same patterns as AI — grammatically correct, stylistically cautious, rich in transitions. A real risk in multilingual Switzerland.
- False positives for formulaic genres: Legal text, scientific abstracts, and technical documentation follow conventions that look "AI-like" but are correct.
- False negatives on edited text: Text drafted with AI and then hand-edited evades most manual signals.
- Confirmation bias: If you already suspect AI, you'll find the patterns almost everywhere. Human intuition is biased.
Combining Manual Signals With AI Detectors
Best practice combines your trained eye with a reliable detector. The workflow:
- Quick screen: Read the text with the ten signs in mind. Form a gut feeling.
- Technical check: Run the text through AIDetector.ch. The detector uses linguistic models that pick up patterns you can't see — statistical perplexity and burstiness distributions, for example.
- Cross-check: If the detector agrees with your gut, you have a robust signal. If they disagree, be careful — detector output alone does not justify a plagiarism accusation.
- Context check: Does the text match the person's usual writing style? Is there prior work? Is there a traceable drafting process?
Practical Example: The Three-Question Method
When you're pressed for time, three questions will get you an initial read:
- Does sentence length vary noticeably? No → suspicious.
- Is there at least one concrete detail (name, place, date, number) you could Google? No → suspicious.
- Does the text close with a summary or "balanced conclusion"? Yes → suspicious.
Two or three "suspicious" answers justify running a detector. Fewer: move on, no alarm.
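For those who want to mechanize the quick screen, here is a rough Python sketch. The thresholds and the digit-based proxy for a "googleable detail" are illustrative assumptions; question 2 in particular resists full automation and still deserves a human glance:

```python
import re
import statistics

SUMMARY_CLOSERS = ("in conclusion", "in summary", "to conclude",
                   "overall, one can say")

def quick_screen(text: str) -> int:
    """Count 'suspicious' answers (0-3). Thresholds are illustrative guesses."""
    score = 0
    # Q1: does sentence length vary noticeably? Low spread = suspicious.
    lengths = [len(s.split())
               for s in re.split(r"(?<=[.!?])\s+", text) if s.split()]
    if len(lengths) >= 2 and statistics.stdev(lengths) < 4:
        score += 1
    # Q2: crude proxy for a googleable detail: any digit (year, figure, date).
    if not re.search(r"\d", text):
        score += 1
    # Q3: does the text close on a summary phrase?
    closing = text.strip()[-200:].lower()
    if any(p in closing for p in SUMMARY_CLOSERS):
        score += 1
    return score

ai_like = ("Digitalization changes everything. Companies must adapt to survive. "
           "New tools create new challenges. In conclusion, both opportunities "
           "and risks require nuanced engagement.")
human_like = ("On day two, Sandra corrected me in 2024. I was furious! "
              "Three weeks later she told me the same thing privately, "
              "and I finally understood why.")

print(quick_screen(ai_like))    # 2-3 suspicious answers: run a detector
print(quick_screen(human_like)) # 0-1: move on, no alarm
```

As with every heuristic in this guide, the score justifies running a detector — it never justifies an accusation on its own.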
Conclusion: Eyes and Algorithm Belong Together
Manual detection remains the first line of defense. It's free, fast, and sharpens your overall sense of good writing. But it's no longer sufficient on its own: the newest generation of AI models produces text that only reveals itself in subtleties to a trained eye. Anyone — teacher, editor, compliance officer — who needs truly robust verdicts combines both approaches, while keeping in mind that detection delivers probabilities, not certainties.
AIDetector.ch was built for exactly this use case: a second, technically grounded opinion that delivers in seconds what you can't reliably judge by eye — at Swiss data protection standards.
Sources
- Liang, W. et al., "GPT detectors are biased against non-native English writers," Patterns, 2023.
- Gehrmann, S. et al., "GLTR: Statistical Detection and Visualization of Generated Text," ACL, 2019.
- Tian, E. & Cui, A., "Towards Detection of AI-Generated Text," Princeton, 2023.
- Zellers, R. et al., "Defending Against Neural Fake News," NeurIPS, 2019.
- OpenAI, "GPT-4 System Card," 2023.
- Anthropic, "Claude 3.5 Model Card," 2024.