Swiss Higher Education Under AI Pressure
Ever since ChatGPT became publicly available in 2022, Swiss universities have faced a pressure no prior generation of vice-rectors or lecturers has seen. Students use generative AI — openly, secretly, expertly, or naively — and institutions must find answers that preserve academic integrity without losing touch with reality.
This article summarizes what's publicly known about the approaches of Switzerland's major universities and universities of applied sciences: which tools they deploy, which policies they've drafted, and which debates lie underneath. It does not replace official guidance from individual institutions but creates an overview for anyone who wants to know where Switzerland is heading.
ETH Zurich: Competence Over Control
ETH Zurich positioned itself early and clearly. As early as 2023, the rectorate communicated that it does not plan to deploy AI detectors for automated plagiarism checking: the risk of false positives is too high, and the technology is not yet robust enough.
Instead, ETH relies on three pillars:
- Faculty education: Workshops, guides, and handbooks on prompt design, AI literacy, and assessment design in the AI era.
- Transparency obligations: Students must disclose whether and how they used AI. Responsibility rests with the individual, not the detector.
- Assessment redesign: Oral exams, lab work, and supervised project components are expanded wherever sensible.
That doesn't mean detectors play no role at ETH. Individual lecturers use them informally as an "early warning system" — but no detector output alone can justify a sanction. ETH positions itself closer to Scandinavian higher education than to many US universities, which lean more heavily on automated control.
EPFL Lausanne: Pragmatism and Pedagogical Freedom
EPFL follows a similar path to ETH but gives more weight to institutional flexibility. Each faculty — and in some cases each individual course — may set its own AI rules, as long as they align with overarching integrity standards.
In practice that means:
- In programming courses, Copilot or ChatGPT use is often explicitly allowed, provided students document their prompting process.
- For written master's theses, rules are stricter: AI may be used for research and structuring, but not for the final phrasing.
- Detector deployment is up to individual supervisors. There's no central mandate.
EPFL has commissioned several internal studies on detector reliability, parts of which have been discussed publicly. The core takeaway: detectors are useful as suspicion triggers, but their error rates — especially on non-English texts — make them unsuitable as sole evidence.
University of Zurich (UZH): Working Group and Clear Taxonomy
UZH established a cross-faculty working group "AI in Teaching and Research" to develop recommendations for all faculties. Its central contribution: a clear five-level taxonomy of permissible AI use.
- Level 0 — No AI allowed: Exams, proctored assessments, work where unassisted writing is explicitly part of the learning outcome.
- Level 1 — AI for research: Allowed as a search assistant, not for text production.
- Level 2 — AI for revision: Allowed for spell checking, grammar correction, simple rephrasing.
- Level 3 — AI as writing partner: Allowed for structuring, brainstorming, feedback on drafts — with disclosure.
- Level 4 — Free AI use: Allowed for tasks where the outcome matters, not the process (e.g., programming exercises with a clear output).
Lecturers set the level at the start of each course. Students sign it as part of the learning contract. Violations are handled individually — AI detectors can inform suspicion, but are never the sole criterion.
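As an illustration only (the names and structure below are assumptions, not UZH's actual system), the five-level taxonomy lends itself to a small, machine-readable course-policy structure, which is one reason such taxonomies translate well to smaller institutions:

```python
from enum import IntEnum

class AILevel(IntEnum):
    """Hypothetical encoding of a UZH-style five-level AI-use taxonomy."""
    NO_AI = 0            # exams, proctored assessments
    RESEARCH_ONLY = 1    # AI as search assistant, not for text production
    REVISION = 2         # spelling, grammar, simple rephrasing
    WRITING_PARTNER = 3  # structuring, brainstorming, draft feedback (with disclosure)
    FREE_USE = 4         # the outcome matters, not the process

# A course declares its level once, at the start of the semester.
course_policy = {
    "course": "Intro to Programming",        # invented example course
    "ai_level": AILevel.FREE_USE,
    # Disclosure kicks in from "writing partner" upward in this sketch.
    "disclosure_required": AILevel.FREE_USE >= AILevel.WRITING_PARTNER,
}

def is_allowed(policy: dict, requested: AILevel) -> bool:
    """An AI activity is permitted if it does not exceed the course's level."""
    return requested <= policy["ai_level"]
```

Because the levels are ordered, a single comparison answers "is this use permitted here?", and the declared level can be printed verbatim into the learning contract students sign.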
University of Bern: Focus on Oral Examination Components
The University of Bern decided to put the emphasis less on detection and more on assessment design. In the humanities, oral defenses and presentations have been expanded, requiring students to explain their written work in person.
The reasoning: a text its author cannot explain is problematic even if no AI use is proven. The oral component creates a natural integrity check that is independent of detector accuracy rates and false positives.
At the same time, Bern does use AI detectors in individual cases when concrete suspicion exists. Deployment is handled by trained assessment officers, not by individual lecturers acting alone.
Université de Lausanne (UNIL): Charter and Participatory Approach
UNIL adopted a "Charter on the Responsible Use of Generative AI" in 2024, with students, faculty, and administration participating as equals. The charter lays out guiding principles rather than rigid rules:
- Transparency: disclose AI use where it's relevant.
- Responsibility: students and faculty remain accountable for their own output.
- Education: AI competence is recognized as a core academic skill.
- Fairness: AI must not amplify structural inequalities.
Detectors barely appear in the charter. UNIL thus takes the softest stance among Switzerland's large universities.
Università della Svizzera italiana (USI): Italian as a Challenge
USI in Lugano faces a particular challenge: most commercial AI detectors are trained primarily on English and — to a lesser extent — German. Italian plays a minor role in many tools, with correspondingly higher error rates.
USI therefore places strong emphasis on:
- Multilingual detection: Tools that explicitly support Italian, including detectors hosted in Switzerland.
- Process assessment: Draft documentation, intermediate versions, and revision histories are routinely required.
- Peer review components: Students read and comment on each other's work, creating a natural additional check.
Universities of Applied Sciences: HSLU, ZHAW, BFH
Swiss universities of applied sciences tend to follow a more pragmatic path than the classical universities. Lucerne University of Applied Sciences (HSLU) has published institution-wide guidelines that split AI use into three categories (not allowed, allowed with disclosure, freely allowed). Verification combines detectors with oral components.
ZHAW runs its own competence center for AI in higher education and regularly publishes practice-oriented guides. Detector use is permitted but always combined with qualitative judgment by lecturers.
Bern University of Applied Sciences (BFH) prioritizes faculty development and offers regular workshops on AI detection and assessment design.
Common Patterns — and Lessons for Smaller Institutions
Despite differing details, three common principles emerge across all major Swiss universities:
- No detector alone: No institution bases sanctions exclusively on AI detector output. The detector is always a hint, never proof.
- Process documentation: Drafts, outlines, source lists, and working journals are becoming the central integrity tool.
- Oral components: Conversation about one's own work is regaining importance, in formal exams and informal office hours alike.
For smaller institutions — gymnasiums, vocational schools, secondary schools — these principles are achievable even without large working groups. The key insight: a clear taxonomy (like UZH's) plus oral defense (as in Bern) plus a reliable detector as second opinion often goes further than any single technical solution.
Which Detectors Are Used in Switzerland?
A complete overview is nearly impossible because many institutions deploy detectors decentrally without central procurement. From conversations with faculty and public sources, a few tools stand out as frequently mentioned in Swiss higher education contexts:
- Turnitin AI Detection: Common at the large universities because it's often already licensed as a plagiarism tool. Critics cite error rates on German texts.
- GPTZero: Well-known, cheap for individuals, but controversial in Switzerland due to US data processing.
- AIDetector.ch: Increasingly used as the Swiss alternative — nDSG-compliant, with Swiss server infrastructure and support for German, French, Italian, and English.
- Scribbr AI Detector: Popular among students for self-checking, less so among faculty as an assessment tool.
The Open Discussion Point: Data Protection
One topic appears in almost every debate: what happens to student texts fed into commercial US detectors? From a Swiss data protection perspective (revFADP, Art. 16 f. on disclosure abroad), transferring personal data to third countries without adequate safeguards is problematic, and student work often contains personal reflections, case examples, or even health data.
That's why the debate at Swiss universities is always also a data protection question. Detectors with Swiss hosting and clear compliance documentation are increasingly preferred — not because they're technically superior, but because they're legally safer.
What Smaller Schools Can Take From These Experiences
The major Swiss universities have resources smaller schools don't: legal departments, pedagogical development units, budget for pilot projects. Even so, the core recommendations translate:
- Start with a clear taxonomy (e.g., UZH's five-level model), not with a tool.
- Invest in faculty education before investing in detectors.
- Combine manual and automated detection. Neither alone is enough.
- Choose data-protection-compliant tools. In Switzerland, that means Swiss servers, revFADP compliance, and ideally a legal opinion on record.
- Introduce oral components wherever organizationally feasible.
- Document your decisions in writing. A clear policy protects both teachers and students.
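The "combine manual and automated detection" recommendation can be made concrete with a triage sketch. Everything here is an assumption for illustration (the threshold, field names, and outcome labels are invented, not any institution's actual policy); the one property it encodes faithfully is the shared Swiss principle that a detector score alone never triggers a sanction, only a human follow-up:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Submission:
    student_id: str
    detector_score: float            # 0.0..1.0, from some AI detector
    has_draft_history: bool          # process documentation on file
    oral_defense_passed: Optional[bool] = None  # None = not yet held

def triage(sub: Submission, threshold: float = 0.8) -> str:
    """Map a detector score to a next step for humans, never to a verdict.

    The detector is a hint ('no detector alone'): below the threshold
    nothing happens; above it, the case escalates to process review or
    an oral conversation, in that order.
    """
    if sub.detector_score < threshold:
        return "no_action"
    if sub.has_draft_history:
        return "review_drafts"       # check drafts and revision history first
    return "schedule_oral_review"    # conversation before any formal step
```

The design choice worth copying is not the threshold but the return type: every branch yields a human action, so there is no code path from a score to a sanction.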
Conclusion
Swiss universities did not respond to the AI surge with panic but with a characteristically Swiss blend of pragmatism, federalism, and institutional patience. No central detector mandate, no blanket surveillance — but a growing clarity about what's allowed, what must be disclosed, and how integrity can still be secured in a world of generative AI.
For faculty and administrators at smaller educational institutions, the message is: you don't have to reinvent the wheel. Switzerland's big universities have done valuable groundwork — use it.
Sources
- ETH Zurich, Rectorate: Statements on AI in teaching, 2023–2025.
- EPFL, "Guidelines on the Use of Generative AI in Teaching and Learning," 2024.
- University of Zurich, Working Group "AI in Teaching and Research," Final Report 2024.
- University of Bern, Guidelines on AI Use in Written Work, 2024.
- UNIL, "Charter for the Responsible Use of Generative AI," 2024.
- swissuniversities, Recommendations on AI in Higher Education Teaching, 2024.