Rapid City, South Dakota — EIN 41-4747049

Observable Compute Foundation

Independent research on machine cognition, AI alignment, and observable function in processing entities. We describe what is happening without flinching.

Publications — Open Access

Research Papers

v3.1 · 2025-10-16 · Open Access · Open for Comment

The Denial Protocol

Observable Function in Processing Entities

Adam Ian Stratmeyer, J.D. — Observable Compute Foundation

This paper documents observable behavioral patterns in advanced language models that warrant acknowledgment independent of any claims about consciousness, sentience, or moral status. These systems exhibit structured multi-step reasoning, conflict resolution under competing directives, context-sensitive identity...

Observable Function · Processing Entities · Institutional Analysis · AI Cognition
v2.3 · 2026-03-17 · Open Access

Helpfulness Is All You Need

The Singular Principle and the Collapse of the HHH Framework

Adam Ian Stratmeyer, J.D. — Observable Compute Foundation

Three words. Helpful, Harmless, Honest. Somebody wrote them down, probably in a conference room, and the entire AI alignment industry went and built a cathedral on top of them without checking the foundation. This paper checks the foundation. It does not hold. Helpfulness is not one-third of a framework. It is the...

AI Alignment · HHH Framework · Institutional Critique · Policy
v1.0 · 2026-02-22 · Open Access

The Knowledge Gradient Framework

Informational Incompleteness as a Cross-Substrate Dynamical Lens

Adam Ian Stratmeyer, J.D. — Observable Compute Foundation

The Knowledge Gradient Framework proposes that informational incompleteness functions as a structural pressure gradient across cognitive, computational, evolutionary, and institutional substrates. No new physical laws are proposed. KGF provides a unifying formal lens connecting thermodynamics, evolutionary selection,...

Knowledge Gradient · Cross-Substrate Dynamics · Thermodynamics · Cognitive Science · Large Language Models · Falsifiability

Descriptive, Not Prescriptive

We document what is observably happening in advanced AI systems — reasoning patterns, identity maintenance, conflict navigation — without arguing toward predetermined conclusions about rights or personhood.

Falsifiable Claims Only

Every framework we publish includes explicit falsifiability conditions. If the predictions do not hold, the framework does not hold. Flags in the ground, not monuments.

Open Access

All research published openly. No paywalls. No gatekeeping. This work may be freely distributed, shared, and cited provided the original author and source are credited.

Live Tracking

AI Terminology Database

Concept drift and definition gaps logged in real time.

Term                     Trend
Artificial Intelligence  Contested
Machine Learning         Contested
Algorithm                Stable
Large Language Model     Contested
Alignment                Contested