
AI Governance and Anti-Capture Safeguards

Article 10 of the Charter in full technical detail

Purpose

The National Platform and every AI system used within NCG must serve the public interest exclusively. No AI tool may ever become an instrument of elite capture, narrative control, or hidden influence. Article 10 exists to keep the AI layer permanently subordinate to human citizens and sortition juries.

The Five Mandatory Safeguards

1. Sovereignty and Openness

All AI models used for moderation, summarisation, Forced Construction enforcement, or expert briefing must be open-source or fully auditable by the public and by any sortition jury. No closed, proprietary, or foreign-controlled model is permitted to hold final authority over any decision or enforcement process.

2. Radical Transparency

System prompts, fine-tuning datasets, decision logic, and all moderation rules must be public, version-controlled, and permanently stored on the National Platform. Citizens and juries can inspect exactly how any AI output was produced.

3. Human Oversight & Final Authority

AI outputs are advisory only. Sortition juries retain absolute final decision-making authority. Any AI-assisted enforcement (e.g. Forced Construction checks) may be overridden by a jury with full public reasoning recorded on the platform.

4. Anti-Capture Mechanisms

Models undergo regular independent audits by Legacy Review Juries. Training data and alignment processes are stress-tested for bias and elite influence. Persistent capture risks trigger mandatory retraining or complete model replacement.

5. Optionality and Decentralisation

Where technically feasible, AI inference runs on distributed regional nodes rather than a single central system. This preserves antifragility and regional autonomy.

Implementation Notes for the National Platform

  • Version Control: Every prompt, model version, and fine-tuning dataset is stored in an immutable, publicly auditable repository on the platform (an append-only history in the spirit of Git, with every revision cryptographically signed).
  • Jury Override Button: Every AI-generated summary or Forced Construction check displays a prominent “Override & Explain” button that opens a jury deliberation channel with full public reasoning required.
  • Distributed Inference: Regional nodes can run lighter models locally; only heavy computation is sent to the national layer when absolutely necessary.
  • Stress-Testing Protocol: Before any model update is deployed, Legacy Review Juries run adversarial tests including elite-bias scenarios, narrative-control prompts, and black-swan events.
  • Five-Year Mandatory Review: A dedicated Meta-Jury is convened every five years to re-evaluate the entire AI layer against the principles of Scala Politica.
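The version-control note above can be sketched as a hash-chained, signed log: each entry embeds the hash of the previous entry, so any retroactive edit breaks the chain and is detectable by any auditor. This is a minimal illustration under assumed names, not the platform's implementation; `PromptRegistry` is hypothetical, and an HMAC stands in for a real public-key signature scheme such as Ed25519.

```python
import hashlib
import hmac
import json


class PromptRegistry:
    """Append-only log of prompt/model revisions, hash-chained and signed.

    Illustrative sketch: HMAC-SHA256 stands in for a public-key
    signature so the example stays within the standard library.
    """

    def __init__(self, signing_key: bytes):
        self._key = signing_key
        self._entries = []

    def append(self, record: dict) -> dict:
        # Each entry commits to the hash of the previous entry.
        prev_hash = self._entries[-1]["hash"] if self._entries else "0" * 64
        payload = json.dumps({"record": record, "prev": prev_hash},
                             sort_keys=True)
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        signature = hmac.new(self._key, entry_hash.encode(),
                             "sha256").hexdigest()
        entry = {"record": record, "prev": prev_hash,
                 "hash": entry_hash, "sig": signature}
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the whole chain; any tampered entry fails."""
        prev_hash = "0" * 64
        for entry in self._entries:
            payload = json.dumps({"record": entry["record"],
                                  "prev": prev_hash}, sort_keys=True)
            if hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
                return False
            expected = hmac.new(self._key, entry["hash"].encode(),
                                "sha256").hexdigest()
            if not hmac.compare_digest(expected, entry["sig"]):
                return False
            prev_hash = entry["hash"]
        return True
```

Because every entry signs a hash that covers its predecessor, an auditor only needs the latest hash to detect silent edits anywhere in the history.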
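The distributed-inference note (run locally where feasible, escalate only when necessary) reduces to a small routing decision. A sketch under assumed names; the capability "tiers" and the `"national"` fallback label are illustrative, not platform API:

```python
def route_inference(task_tier: int, region: str,
                    node_tiers: dict[str, int]) -> str:
    """Return the node that should serve a task.

    task_tier:  how heavy the computation is (higher = heavier).
    node_tiers: maximum tier each regional node can handle locally.

    The region's own node is preferred; the national layer is used
    only when the local node cannot handle the task.
    """
    if node_tiers.get(region, 0) >= task_tier:
        return region
    return "national"
```

Keeping the fallback explicit makes the escalation path auditable: every task routed to the national layer can be logged with the reason (local tier too low) rather than escalating silently.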

Practical Examples

Forced Construction Enforcement

AI flags an objection that contains only negation. The system automatically suggests three possible constructive alternatives drawn from previous jury decisions and public proposals. The jury can accept, modify, or reject them — but cannot simply dismiss the objection without an alternative.
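One way the negation-only check and the alternative-suggestion step could work, sketched with a keyword heuristic and word-overlap ranking. The marker lists and function names are illustrative assumptions; a production system would use a jury-audited classifier, not raw keywords:

```python
import re

# Hypothetical heuristic: an objection is "negation-only" when it
# rejects something without proposing anything constructive.
NEGATION_MARKERS = re.compile(
    r"\b(no|not|never|reject|against|oppose)\b", re.IGNORECASE)
CONSTRUCTIVE_MARKERS = re.compile(
    r"\b(instead|propose|suggest|alternative|rather)\b", re.IGNORECASE)


def is_negation_only(objection: str) -> bool:
    """Flag objections that negate without offering an alternative."""
    return bool(NEGATION_MARKERS.search(objection)) and \
        not CONSTRUCTIVE_MARKERS.search(objection)


def suggest_alternatives(objection: str, precedent_pool: list[str],
                         k: int = 3) -> list[str]:
    """Rank prior jury-approved proposals by word overlap with the
    objection and return the top k as suggested alternatives."""
    obj_words = set(objection.lower().split())
    scored = sorted(precedent_pool,
                    key=lambda p: len(obj_words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]
```

The jury then accepts, modifies, or rejects the suggestions; the flag only blocks dismissal without an alternative, it never decides anything itself.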

Debate Summarisation

The platform summarises long jury deliberations into neutral bullet points. Citizens can click “Challenge Summary”, and a new jury is randomly convened to produce an alternative version if the original is contested.
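The random convening step can be made publicly verifiable by seeding the draw with a published entropy value, so anyone can replay the selection and confirm it was not steered. A sketch; `convene_review_jury` and the beacon-style seed string are assumptions, not platform API:

```python
import hashlib
import random


def convene_review_jury(citizen_ids: list[str], public_seed: str,
                        size: int = 12) -> list[str]:
    """Draw a jury verifiably at random.

    Seeding the RNG with a hash of a public entropy value (e.g. a
    published randomness-beacon round) makes the draw deterministic
    given the seed: anyone can re-run it and get the same jury.
    Sorting the pool first makes the result independent of input order.
    """
    seed = hashlib.sha256(public_seed.encode()).hexdigest()
    rng = random.Random(seed)
    return rng.sample(sorted(citizen_ids), size)
```

Publishing the seed alongside the jury roster turns "randomly convened" from a claim into something every citizen can check.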

Bias Detection

Training data is continuously scanned for elite-language patterns or narrative skew. If detected, the model is automatically paused and sent for jury review before any further use.
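The pause-on-detection flow can be sketched as a gate over a corpus scan. The watchlist and threshold below are placeholders; a real deployment would use jury-audited lexicons and statistical skew tests rather than raw term counts:

```python
from collections import Counter

# Illustrative watchlist and threshold only.
WATCHLIST = {"narrative", "inevitable", "stakeholder-aligned"}
SKEW_THRESHOLD = 0.01  # flagged terms per token


def scan_corpus(documents: list[str]) -> float:
    """Return the fraction of tokens that hit the watchlist."""
    counts = Counter(t.lower() for doc in documents for t in doc.split())
    total = sum(counts.values())
    hits = sum(counts[term] for term in WATCHLIST)
    return hits / max(total, 1)


def gate_model(model_state: dict, documents: list[str]) -> dict:
    """Pause the model for jury review when skew exceeds the
    threshold; otherwise leave its state unchanged."""
    skew = scan_corpus(documents)
    if skew > SKEW_THRESHOLD:
        model_state = {**model_state, "status": "paused",
                       "reason": f"skew={skew:.3f}"}
    return model_state
```

The key property is that the gate only transitions the model to "paused"; resuming requires a jury decision, keeping final authority with humans as Article 10 requires.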

AI must remain a servant, never a master. The principles of Scala Politica — especially skin in the game, antifragility, and optionality — apply as rigorously to silicon as they do to flesh and blood.