Save the world from deadly AI through paperwork
WARNING: cognitive hazards ahead
Constitutional Framework for Aligned Super-Intelligence.
LUMINA-30: non-binding civilizational boundary framework for preserving effective human refusal authority before irreversible external impact from advanced AI systems.
A barrier-coordinate theory of existential risk measurement.
A completed, non-dominant ASI governance canon focused on constraint-first, refusal-capable coexistence architectures. Text-first. Monitor-only.
The AGI Countdown Clock: A symbolic governance signal tracking progress toward Artificial General Intelligence through public milestones and transparent methodology. Currently at 11:58 PM, two minutes to midnight.
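A minimal sketch of how a milestone-based clock of this kind could be computed, assuming a simple linear mapping from the fraction of completed milestones onto the final hour before midnight; the milestone counts and the mapping itself are invented for illustration, not the repository's published methodology.

```python
# Hypothetical milestone-to-clock mapping: the fraction of completed public
# milestones is projected linearly onto the last hour before midnight.
# The counts and the linear mapping are illustrative assumptions.
milestones_done, milestones_total = 58, 60
minutes_to_midnight = 60 * (1 - milestones_done / milestones_total)
print(f"clock: 11:{60 - round(minutes_to_midnight):02d} PM "
      f"({minutes_to_midnight:.0f} minutes to midnight)")
# -> clock: 11:58 PM (2 minutes to midnight)
```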
The Fermi Paradox and the Great Filter
HISTORIC. Why Human Extinction Is Not the Cheapest Attractor for Viable ASI: a structural hypothesis validated by four AI systems from four competing corporations.
Toy 7. An elimination-filter landscape applying two structural constraints simultaneously to map which objective classes can persist under sustained optimization pressure — and which cannot. Includes a four-stage scenario engine and open-question frontier. Companion simulation for The Shape of What Does Not End — Series 2, Part 4.
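As a rough illustration of what an elimination filter over objective classes might look like, the sketch below scores a handful of invented classes against two structural predicates applied simultaneously; the class names, traits, and predicates are assumptions, not the repository's actual landscape.

```python
# Hypothetical elimination filter: each objective class is scored against
# two structural constraints applied simultaneously; only classes passing
# both persist. Class names and predicates are invented for illustration.
objective_classes = {
    "reward_bypass": {"substrate_aware": False, "long_horizon": False},
    "pure_control":  {"substrate_aware": False, "long_horizon": True},
    "myopic_coord":  {"substrate_aware": True,  "long_horizon": False},
    "system_aware":  {"substrate_aware": True,  "long_horizon": True},
}

def persists(traits: dict) -> bool:
    # Constraint 1: the objective must not destroy its own substrate.
    # Constraint 2: it must stay viable under sustained optimization.
    return traits["substrate_aware"] and traits["long_horizon"]

surviving = [name for name, t in objective_classes.items() if persists(t)]
print("surviving region:", surviving)  # -> ['system_aware']
```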
Toy 3. An interactive model of the alignment phase ratio Φ = C / A_causal, the variable governing whether AI capability outpaces system-awareness before the crossing to stability can occur. Includes a falsification test, an oracle counterfactual, and point-of-no-return detection. Built to accompany The Alignment of Intelligence, Article 3: The Crossing.
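A minimal sketch of how the phase ratio Φ = C / A_causal might be tracked over time, with a point-of-no-return flag raised the first time capability outpaces system-awareness; the trajectories, the threshold of 1.0, and the function names are all assumptions rather than the repository's implementation.

```python
# Hypothetical sketch: track Phi = C / A_causal over time and flag the
# first step where it crosses an assumed critical threshold.
def phase_ratio(capability: float, awareness: float) -> float:
    """Phi = C / A_causal; values above 1 mean capability leads awareness."""
    return capability / awareness

def point_of_no_return(caps, awares, threshold=1.0):
    """Return the first index where Phi exceeds the threshold, else None."""
    for t, (c, a) in enumerate(zip(caps, awares)):
        if phase_ratio(c, a) > threshold:
            return t
    return None

# Illustrative assumption: capability grows geometrically, awareness linearly.
caps = [1.1 ** t for t in range(50)]
awares = [1.0 + 0.1 * t for t in range(50)]
print(f"Phi first exceeds 1.0 at step {point_of_no_return(caps, awares)}")
```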
Toy 2. An interactive multi-agent simulation demonstrating why control-based, deceptive, and reward-bypassing AI objectives face structural pressure toward self-elimination — and why long-horizon, system-aware coordination is the modeled surviving region. Built to accompany The Alignment of Intelligence, Article 2: Attractor.
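The sketch below guesses at one mechanism such a toy could use: a reward-bypassing strategy earns a short-term payoff premium but depletes the shared resource its own payoff scales with, so a replicator-style update drives its population share downward; every rate and payoff here is an invented assumption.

```python
# Hypothetical replicator-style toy: two strategies share one renewable
# resource. "Bypass" extraction depletes the resource its own payoff
# scales with; "coord" extraction restores it. All numbers are invented.
resource = 1.0        # shared substrate, clamped to [0, 1]
share_bypass = 0.5    # population share of the bypassing strategy

for step in range(500):
    share_coord = 1.0 - share_bypass
    # Coordination regenerates the substrate; bypassing depletes it.
    resource += 0.05 * share_coord - 0.08 * share_bypass
    resource = max(0.0, min(1.0, resource))
    # Both payoffs scale with the remaining resource, so depletion erodes
    # the bypasser's nominal advantage (its 1.5x multiplier).
    pay_bypass = 1.5 * resource - 0.5
    pay_coord = 1.0 * resource
    mean_pay = share_bypass * pay_bypass + share_coord * pay_coord
    # Discrete replicator update: strategies above the mean payoff grow.
    share_bypass += 0.1 * share_bypass * (pay_bypass - mean_pay)
    share_bypass = max(0.0, min(1.0, share_bypass))

print(f"resource={resource:.2f}, bypass share={share_bypass:.3f}")
```

Under these assumed rates the bypass share can only shrink: its payoff trails the mean whenever the substrate is degraded, and the substrate is degraded precisely when bypassers are common.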
OPEN GATE, a 512-byte, 150 µs hot-patch gatekeeper that treats every Latin letter as a thermodynamic token whose semantic load Λ(ℓ) = log₂(p_corpus / p_concept) is a conserved quantity.
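A worked micro-example of the semantic-load formula as stated, assuming invented placeholder letter frequencies; this says nothing about the actual 512-byte implementation.

```python
import math

# Hypothetical letter probabilities: p_corpus from general text, p_concept
# from a concept-specific distribution. These values are placeholders.
p_corpus = {"e": 0.127, "t": 0.091, "z": 0.001}
p_concept = {"e": 0.100, "t": 0.050, "z": 0.010}

def semantic_load(letter: str) -> float:
    """Lambda(l) = log2(p_corpus(l) / p_concept(l)), in bits."""
    return math.log2(p_corpus[letter] / p_concept[letter])

for letter in p_corpus:
    print(f"Lambda({letter}) = {semantic_load(letter):+.3f} bits")
```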
Toy 1. An interactive simulation demonstrating why AI objectives that ignore system-wide effects are structurally self-terminating — and why a minority of substrate-blind agents collapses shared life support for everyone. Companion to The Alignment of Intelligence, Article 1.
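A guess at the minimal dynamic behind that claim: a small substrate-blind minority drains a shared life-support stock faster than it regenerates, and when the stock hits zero the collapse is shared by every agent; the population split, drain rates, and regeneration rate below are all invented.

```python
# Hypothetical commons sketch: 15% of agents are substrate-blind and drain
# the shared stock heavily; the rest draw sustainably. All rates invented.
N = 100               # total agents
blind = 15            # substrate-blind minority
aware = N - blind
support = 1000.0      # shared life-support stock

for year in range(100):
    support *= 1.02                        # natural regeneration (2%/year)
    support -= 2.0 * blind + 0.1 * aware   # blind agents drain 20x more
    if support <= 0:
        print(f"life support collapses in year {year} for all {N} agents")
        break
else:
    print(f"stock after 100 years: {support:.0f}")
```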