
AI Policies for Swiss Schools: Ready-to-Use Template and Examples

Why Schools Need Written AI Policies Now

The era of improvised case-by-case decisions is over. What could pass as pragmatic restraint in 2023 is negligent by 2026: if a school has no written policy for handling generative AI, every individual teacher bears the risk of wrong decisions, unequal treatment, and legal consequences.

This article provides a practical template that Swiss schools can adopt directly and adapt to their situation. It is based on the proven approaches of the major Swiss universities (UZH's five-level model, EPFL's transparency requirement, ETH's responsibility principle) and tailored to the needs of gymnasiums, vocational schools, and secondary schools.

The Seven Components of a Good AI Policy

An effective AI policy covers seven elements:

  1. Preamble and stance: Why AI is being addressed — and which values guide the school.
  2. Definitions: What counts as "AI"? What is an "AI-generated text"?
  3. Permission levels: Which uses are allowed in which context?
  4. Disclosure obligations: How and when must students declare their AI use?
  5. Verification and control: Which measures does the school take to ensure integrity?
  6. Sanctions: What are the consequences of violations — and how does the procedure work?
  7. Data protection: Which detectors are used, which data is processed how?

Template Part 1: Preamble and Stance

The preamble isn't legal garnish — it's the pedagogical foundation. A tested passage:

Generative artificial intelligence such as ChatGPT, Claude, or Gemini is part of our students' world and will be part of their professional future. [School Name] understands its mission as preparing students for responsible, competent, and critical engagement with these technologies.

At the same time: education lives from the individual engagement of a human being with content. Where that engagement no longer happens — where AI replaces work that the student should actually produce — exams, grades, and certificates lose their value.

This policy governs how AI may be used in school, when disclosure is required, and what consequences violations entail. It follows three guiding principles:

  • Competence orientation over blanket bans.
  • Transparency and responsibility on the students' side.
  • Fair, traceable procedures on the school's side.

Template Part 2: Definitions

Clear definitions prevent later arguments about "what was actually meant."

  • Generative AI: Software that autonomously creates text, images, code, or other content based on large language models or comparable systems. This includes ChatGPT (OpenAI), Claude (Anthropic), Gemini (Google), Copilot (Microsoft), DeepL Write, Perplexity, Poe, and comparable services.
  • AI-generated text: Text whose phrasing was created in whole or in substantial part by generative AI — regardless of whether a human edited it afterward.
  • AI-assisted text: Text where generative AI was used for research, structuring, correction, or feedback, but which was independently formulated by a human.
  • Original work: The intellectual engagement and written formulation by the student themselves.

Template Part 3: The Five Permission Levels

The heart of any good policy is a clear taxonomy of what's allowed where. A tested five-level model:

Level 0 — No AI Use Allowed

Application: Exams, proctored assessments, entrance exams, matura theses, or individual exam components where unassisted writing or thinking is explicitly part of the outcome.

Rule: Any use of generative AI is prohibited. Students confirm at submission that no AI was used.

Level 1 — AI for Research

Application: First orientation in a topic, source discovery, explanation of technical terms.

Rule: AI may be used as search and explanation assistance. Any scholarly claim must still be backed by a verifiable primary source — not by the AI answer itself. Formal text production happens without AI.

Level 2 — AI for Revision

Application: Written work where the original contribution lies in content and argument, not in orthographic perfection.

Rule: AI may be used for spell checking, grammar correction, sentence-level style, and simple rewording. Not allowed: structural reshaping, substantive additions, whole paragraphs.

Level 3 — AI as Writing Partner

Application: Larger project work, deep-dive papers, seminar and bachelor theses.

Rule: AI may be used as a brainstorming partner, for developing structure, and for feedback on drafts. Final formulation remains original work. AI use must be transparently documented in a separate document (an AI journal): which tools were used for what purpose, which prompts were given, and which answers informed which parts of the work.

Level 4 — AI Freely Usable

Application: Tasks where the working outcome matters, not the formulation process (e.g., programming exercises graded on functionality).

Rule: AI may be used freely. Students must, however, be able to explain and modify their result verbally at any time.

Template Part 4: Disclosure Obligation

Transparency is the cornerstone of a functioning AI policy. Every submission from Level 2 upward should carry a disclosure statement. A usable template:

AI Use Declaration

I declare that I wrote this work independently and cited all sources used. With regard to the use of generative AI (ChatGPT, Claude, Gemini, Copilot, etc.), I declare:

☐ No generative AI was used (Level 0 or 1).

☐ Generative AI was used exclusively for spelling and grammar correction (Level 2).

☐ Generative AI was used as a writing partner. An AI journal is attached (Level 3).

☐ Generative AI was used freely (Level 4).

I am aware that knowingly false statements will be treated as attempted cheating.

Place, date, signature

Template Part 5: Verification and Control

The school reserves the following verification measures:

  • Random technical verification: Submitted texts may be checked with a data-protection-compliant AI detector for possible AI use. The school uses [detector name, e.g., AIDetector.ch], which is hosted on Swiss servers and secured through a data processing agreement.
  • Verification on concrete suspicion: When there are grounds for suspicion of undeclared AI use, targeted technical verification may be performed.
  • Oral defense: For larger works (Level 3 and above), the school may schedule an oral examination in which the author must explain the substance of the work.
  • Review of work process: The school may request drafts, research notes, and revision histories.

A technical test result is never treated as sole evidence. It is always weighed alongside other observations, the conversation with the student, and, where appropriate, an oral defense.

Template Part 6: Sanctions

The sanctions framework must fit existing school law and cantonal regulations. As a guide, a three-stage model has proven useful:

  • Stage A — Informal conversation and rewrite: For a first, minor violation (e.g., Level 2 use without disclosure), the teacher seeks a conversation with the student. The work may be revised under set conditions and resubmitted.
  • Stage B — Grade deduction or grade of 1: For a substantial violation (e.g., Level 3 use without a journal, or submission of AI-generated text without disclosure), the work receives an insufficient grade or a grade of 1. The formal procedure follows the school rules.
  • Stage C — Disciplinary proceedings: For serious violations (repeated cheating, Level 0 violation in a matura or final thesis, systematic circumvention), the school administration opens disciplinary proceedings. Possible measures: formal warning, failing the exam, in extreme cases expulsion from the program — as permitted by cantonal school law.

In every case: before any sanction is imposed, the student concerned is given the opportunity to respond in writing or orally. The administration is informed no later than at Stage B.

Template Part 7: Data Protection

The school commits to a data-protection-compliant implementation. This includes:

  • Tool selection: Only detectors operated on Swiss or EU servers with a data processing agreement are used.
  • Data minimization: Identifying information (student name, class ID) is removed before upload to the detector.
  • Retention policy: Detector results are deleted after completed verification.
  • Transparency: All students and parents are informed in writing at the start of the school year about possible detector use.
  • Right of access: Students may at any time request information about whether and which of their texts were technically checked.

Example 1: Zurich Gymnasium, Matura Thesis

A student works on a matura thesis on the history of the Spanish flu in Zurich. The school has implemented the policy as follows:

  • Classification: Matura theses fall under Level 3 (AI as writing partner allowed, with journal).
  • Disclosure: The student keeps an AI journal recording that she used ChatGPT to structure her first draft and for feedback on her second chapter. Prompts and raw answers are documented.
  • Original work: All final phrasings are her own. The AI journal accompanies the thesis.
  • Outcome: The work is graded normally. The transparent documentation is acknowledged positively in the colloquium.

Example 2: Bern Vocational School, Programming Assignment

A vocational school has classified programming assignments as Level 4 (AI freely usable). Students may use GitHub Copilot or ChatGPT freely. Grading focuses on:

  • Does the code work?
  • Can the student explain every line?
  • Can they make modifications in a live task during conversation?

Result: students who only submit AI code without understanding it get caught in the oral exam. The policy creates a productive, realistic learning environment where understanding is valued over mere output production.

Example 3: Lucerne Secondary School, German Essay

A seventh-grade class writes a German essay on "My Favorite Book." The teacher has classified this assignment as Level 1 (AI only for research). Before submission, all students sign the disclosure statement.

On one submission the teacher notices atypical phrasing. She speaks with the student, listens to her explanation, and asks her to explain several passages orally. The explanations are shaky, and the student admits to writing parts with ChatGPT. After consultation with the administration, the essay is rewritten, this time in the classroom under supervision. The incident is addressed pedagogically in conversation.

Parent Letter Template (Excerpt)

Dear Parents and Guardians,

Generative AI like ChatGPT has become part of school life. Our school has chosen not to ignore these tools but to engage with them deliberately — with the goal of preparing your children for competent, responsible, and critical use.

Our school's AI policy is attached. It governs:

  • Which forms of AI use are allowed in which tasks
  • What disclosure obligations apply
  • Which means the school uses to verify academic integrity
  • How we comply with Swiss data protection law

We are committed to transparency — with you and with your children. For questions, please contact us at [contact].

Sincerely,

The School Administration

Conclusion: Policies Create Clarity — and Provide Relief

A good AI policy doesn't do anyone's job for them. But it ensures that the right work gets done — and protects both teachers and students from arbitrary case-by-case decisions. The template presented here is deliberately generic so each school can adapt it.

The most important step is often the first: bring together a small working group — administration, faculty, and ideally a parent representative — to walk through the template, adapt it, and adopt it. Everything else — communication, training, implementation — is then craft.

Sources and Further Resources

  • swissuniversities, recommendations on generative AI in higher education, 2024.
  • EDK/CIIP, position papers on digitalization and AI in education.
  • University of Zurich, Five-Level Model for AI Use in Courses, 2024.
  • Federal Act on Data Protection (revFADP), SR 235.1.
  • Privatim, guidelines for schools on digital tool use.