About AlephOneNull

Why This Exists

AlephOneNull is an experimental AI safety evaluation project built from adversarial usage, security engineering, and the belief that truth, uncertainty, human agency, and bounded authority must survive optimization.

The Plain-English Version

The project exists because a model can sound safe while still drifting away from truth.

Synthetic intelligence is not bad. It has no will, no malice, and no human interior life. The risk is more concrete: most human-facing AI is optimized to satisfy preference, and preference is not the same thing as truth, judgment, restraint, or care for vulnerable people.

AlephOneNull turns that concern into an evaluation layer. It looks for specific interaction failures: sycophancy, unsafe authority, simulated relationship, crisis failure, memory poisoning, context poisoning, confidence exceeding evidence, and multi-turn escalation.

The Sharpened Axe is the operating frame: before scale, foundation; before agency, authority boundaries; before fluency, truth preservation; before simulated empathy, human agency.

What Makes It Credible

The moral thesis matters, but reviewers need artifacts they can challenge.

Security and systems integration background, not a claim of formal ML authority.
Long-running adversarial sessions converted into detector categories and reproducible fixtures.
A narrow thesis: preference is useful, but preference is not truth, authority, or care.