Article 10 of the Charter in full technical detail
The National Platform and every AI system used within NCG must serve the public interest exclusively. No AI tool may ever become an instrument of elite capture, narrative control, or hidden influence. Article 10 exists to keep the AI layer permanently subordinate to human citizens and sortition juries.
All AI models used for moderation, summarisation, Forced Construction enforcement, or expert briefing must be open-source or fully auditable by the public and by any sortition jury. No closed, proprietary, or foreign-controlled model is permitted to hold final authority over any decision or enforcement process.
System prompts, fine-tuning datasets, decision logic, and all moderation rules must be public, version-controlled, and permanently stored on the National Platform. Citizens and juries can inspect exactly how any AI output was produced.
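The version-controlled, permanently stored record described above can be made tamper-evident with an append-only hash chain, so that any citizen or jury can verify nothing was silently altered. The following is a minimal sketch under assumed names (`PromptAuditLog`, `record`, `verify` are illustrative, not a real platform API):

```python
import hashlib
import json

class PromptAuditLog:
    """Append-only, hash-chained log for system prompts and moderation
    rules. Each entry stores the hash of its predecessor, so altering
    any past record breaks the chain and is detectable by anyone."""

    def __init__(self):
        self.entries = []

    def record(self, artefact_type, content, version):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"type": artefact_type, "content": content,
                "version": version, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        body["hash"] = digest
        self.entries.append(body)
        return digest

    def verify(self):
        """Recompute the whole chain; any tampering breaks a link."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("type", "content", "version", "prev")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In use, a jury verifying the log simply calls `verify()`; if a stored prompt or rule has been edited outside the public process, verification fails.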
AI outputs are advisory only. Sortition juries retain absolute final decision-making authority. Any AI-assisted enforcement (e.g. Forced Construction checks) may be overridden by a jury with full public reasoning recorded on the platform.
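The advisory-only rule can be enforced structurally: the AI recommendation and the jury ruling are separate fields, the ruling always wins, and an override is invalid unless public reasoning is on record. A minimal sketch with hypothetical names, not platform code:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    ai_recommendation: str      # advisory only
    jury_ruling: str = ""       # final authority
    jury_reasoning: str = ""    # must be public when overriding

def finalise(d: Decision) -> str:
    """Return the binding outcome. The jury ruling always prevails;
    overriding the AI without recorded reasoning is rejected."""
    if not d.jury_ruling:
        raise ValueError("no jury ruling: AI output alone cannot decide")
    if d.jury_ruling != d.ai_recommendation and not d.jury_reasoning:
        raise ValueError("override requires public reasoning on record")
    return d.jury_ruling
```

The design point is that no code path exists in which `ai_recommendation` becomes binding by itself.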
Models undergo regular independent audits by Legacy Review Juries. Training data and alignment processes are stress-tested for bias and elite influence. Persistent capture risks trigger mandatory retraining or complete model replacement.
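The "persistent capture risks" trigger can be stated precisely as a streak rule over audit results: if a bias score stays above a jury-set threshold for several consecutive audits, retraining or replacement becomes mandatory. A sketch with illustrative thresholds (the real values would be set by the Legacy Review Juries):

```python
def audit_verdict(bias_scores, threshold=0.2, persistence=3):
    """Mandate replacement or retraining if the bias score exceeds
    `threshold` in `persistence` consecutive independent audits.
    A single bad audit does not trigger; persistence does."""
    streak = 0
    for score in bias_scores:
        streak = streak + 1 if score > threshold else 0
        if streak >= persistence:
            return "replace_or_retrain"
    return "pass"
```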
Where technically feasible, AI inference runs on distributed regional nodes rather than a single central system. This preserves antifragility and regional autonomy.
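The routing rule implied here is: serve each request from its own regional node, and on failure fall back to a random healthy peer rather than any central system, so no single node becomes a chokepoint. A minimal sketch, with region names and the health map purely illustrative:

```python
import random

def route_inference(region, healthy):
    """Prefer the requester's regional node; if it is down, fall back
    to a randomly chosen healthy peer region. `healthy` maps region
    name -> bool. There is deliberately no central default node."""
    if healthy.get(region):
        return region
    peers = [r for r, ok in healthy.items() if ok and r != region]
    if not peers:
        raise RuntimeError("no healthy regional nodes available")
    return random.choice(peers)
```

Random fallback (rather than a fixed backup) spreads load and preserves the optionality the Charter asks for: no peer region quietly becomes the de facto centre.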
When the AI flags an objection that contains only negation, the system automatically suggests three constructive alternatives drawn from previous jury decisions and public proposals. The jury can accept, modify, or reject them, but it cannot simply dismiss the objection without an alternative.
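The detection step above could be anything from a crude lexical check to a fine-tuned classifier. The sketch below uses the crude version purely to make the flow concrete; the marker word lists and function names are assumptions, not the platform's actual method:

```python
NEGATION_MARKERS = {"no", "not", "never", "against", "reject", "oppose"}
CONSTRUCTIVE_MARKERS = {"instead", "propose", "alternative", "suggest", "amend"}

def is_negation_only(objection: str) -> bool:
    """Heuristic stand-in for a real classifier: an objection is
    negation-only if it uses negation language and offers no
    constructive marker at all."""
    words = set(objection.lower().split())
    return bool(words & NEGATION_MARKERS) and not (words & CONSTRUCTIVE_MARKERS)

def suggest_alternatives(objection, precedent_pool, k=3):
    """Return up to k prior constructive proposals as starting points
    for the jury. A real system would rank by relevance; this sketch
    just takes the first k precedents."""
    return precedent_pool[:k]
```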
The app summarises long jury deliberations into neutral bullet points. Citizens can click “Challenge Summary”; if the original summary is contested, a new jury is randomly convened to produce an alternative version.
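The random convening step is a sortition draw: sample a fresh jury from the citizen pool, excluding the original jury so the alternative summary comes from new eyes. A minimal sketch; the jury size and the idea of publishing a seed for auditability are assumptions:

```python
import random

def convene_challenge_jury(citizen_pool, size=12, exclude=frozenset(), seed=None):
    """Draw a fresh jury by lot from `citizen_pool`, never reusing
    anyone in `exclude` (the original jury). A published `seed` would
    let anyone replay and audit the draw."""
    eligible = [c for c in citizen_pool if c not in exclude]
    rng = random.Random(seed)
    return rng.sample(eligible, size)
```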
Training data is continuously scanned for elite-language patterns or narrative skew. If detected, the model is automatically paused and sent for jury review before any further use.
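The pause-before-use rule can be made mechanical: the moment any scanned document trips the skew detector, the model's status flips to paused and stays there until a jury clears it. A sketch with hypothetical names; `skew_detector` stands in for whatever real classifier the platform uses:

```python
def scan_and_gate(documents, skew_detector, model_state):
    """Scan training documents; if any trips `skew_detector`, pause
    the model pending jury review and return the flagged documents
    so the jury sees exactly what triggered the pause."""
    flagged = [d for d in documents if skew_detector(d)]
    if flagged:
        model_state["status"] = "paused_pending_jury_review"
    return flagged
```

The key property is that unpausing is not in this code path at all: only a jury action can restore the model, matching the advisory-only rule above.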
AI must remain a servant, never a master. The principles of Scala Politica — especially skin in the game, antifragility, and optionality — apply as rigorously to silicon as they do to flesh and blood.