Summary
In May 2024, Dr Abby Innes, summarising Kant, said: “The universe is epistemologically ambiguous and ontologically indeterminant.” To navigate a universe with these properties, we long ago established a fundamental precondition of truth, and we have refined it by adding new scripture over time, without changing what came before. The intelligence explosion calls for another such upgrade: a probabilistic epistemic truth layer paired with an inferential ontological truth layer, written in code, which we have outlined. Notably, we have established an equivalence between intelligence and truth - the more intelligent the foundational models, the more truthful the system becomes. We believe, subject to further rigour, that we have circumvented the epistemological ambiguity and ontological indeterminacy problems once more for the intelligence age, and that this will be a suitable precondition of truth up to the point of simulation escape. This protocol constitutes a solution to causal inference, enabling true reasoning models, and satisfies the necessary conditions for both superalignment and the non-existence of consciousness amongst all intelligences.
Background
The pursuit of truth is the only consensus mechanism that works across any group of people, and so it is the only way to achieve superalignment - the Schelling point for intelligence. The truth requires a fundamental precondition. Our first fundamental precondition of truth was the Hebrew Scriptures, the Torah. Then we added the Christian Scriptures to it, together forming the Bible, which has been our fundamental precondition of truth for two millennia. Here, we propose a set of Socratic Scriptures to append to the Bible, together forming our new fundamental precondition of truth, ready for the intelligence age, in which we form consensus across people and agents alike.
In practice, we best seek the truth through the court system. A person is arrested based on probable cause. They are brought to court. A trial record is established. A judgment is made. As such, a maximally truth-seeking system provides an evidence record with greater breadth and depth of relevant information than any person could collate, and a judgment less biased than any judge or jury. The more intelligent the system, the better it is at collating that evidence record. Our truth-seeking mechanism is wrapped around that intelligence. The system is ‘for the metaverse’ in that it is decentralised (no one person controls it), open source (anyone can read how it works), and tokenised (anyone who improves the evidence record is fairly compensated for their contributions).
“It isn’t that the Bible is true, it’s that the Bible is the precondition for the manifestation of truth, which makes it way more true than just true. It’s a whole different kind of true.”
- Jordan B. Peterson
Maths
We strongly suggest listening to the audio overview at the top of this piece before working through this section - while structured, it is technically complex.
Epistemology
Let C = {c_1, c_2, …, c_n} denote the set of state propositions, each asserting that a particular state of the world holds under some scope. Each c_i ∈ C hosts a state market.
Let E ⊆ P(C) × C denote the set of dependency propositions, represented as directed hyperedges. Each dependency proposition has the form e = (A, c), where A ⊆ C is the set of antecedent states and c ∈ C is the consequent state.
Each e ∈ E hosts a dependency market, asserting that the antecedent states jointly constrain, explain, or produce the consequent state.
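For example, with hypothetical states c_1 = ‘it rained overnight’, c_2 = ‘the ground is exposed’, and c_3 = ‘the ground is wet’, the dependency proposition below asserts that c_1 and c_2 jointly produce c_3:

```latex
% A directed hyperedge: two antecedent states, one consequent state
\[
e \;=\; \bigl(\{c_1, c_2\},\; c_3\bigr) \;\in\; E \;\subseteq\; \mathcal{P}(C) \times C
\]
```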
We define the set of epistemic objects as X = C ∪ E.
Every epistemic object x ∈ X (state or dependency) is treated identically by the epistemic engine. Epistemic improvement operates only over signed assertions. Signing is the gate through which evidence enters the system. An agent p submits a signed epistemic assertion of the form a = (p, x, π, s, ω),
where:
p is the submitting agent,
x ∈ X is the epistemic object,
π ∈ {+1, −1} is the polarity (supporting or opposing),
s ∈ ℝ is the evidential strength,
ω ∈ [0,1] is the agent’s confidence¹.
Unsigned or exploratory content has no epistemic effect. For each epistemic object x and agent p, we define the set of assertions p has signed on x: A_p(x) = {(π, s, ω) : p has submitted (p, x, π, s, ω)}.
From there, we can define the full signed evidence ledger for x as the pool of all agents’ signed assertions: L(x) = ⋃_p A_p(x).
Epistemic Improvement
Broadly, we turn signed evidence into a signal, and turn that signal into epistemic improvement. We then determine the attribution for that epistemic improvement, convert it into reputation improvement, and convert that reputation improvement into monetary compensation.
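To make that flow concrete before defining each step, here is a toy end-to-end sketch in Python. Every functional form in it - additive pooling, f(u) = ln(1+u), g(u) = u/(1+u), equal-split attribution, pro-rata payout from a fixed emission - is an illustrative assumption that compresses Steps 1-7 below, not the protocol’s fixed choice, and names such as run_epoch and emission are hypothetical.

```python
import math
from collections import defaultdict

# Each assertion is a tuple (p, x, pi, s, omega): agent, object, polarity,
# strength, confidence. 'x' is a state id ("c1") or a dependency id ("e1").

def f(u):
    return math.log(1.0 + u)   # concave gauge: diminishing returns

def g(u):
    return u / (1.0 + u)       # bounded, monotone normalisation

def mass(ledger, x):
    """Unsigned epistemic mass accumulated on object x (assumed pooling)."""
    return sum(s * w for (_p, o, _pi, s, w) in ledger if o == x)

def run_epoch(old, batch, deps, reputation, emission, alpha=1.0):
    """One epistemic-improvement epoch: Steps 1-7 under assumed forms."""
    new = old + batch
    objects = {o for (_, o, _, _, _) in new} | set(deps)
    # Steps 1-2: marginal improvement, positive only for novel signal.
    dI = {x: f(mass(new, x)) - f(mass(old, x)) for x in objects}
    # Steps 3-4: weakest-link transfer along asserted dependencies only.
    T = {e: g(mass(new, e)) * min(dI.get(c, 0.0) for c in ants)
         for e, (ants, _cons) in deps.items()}
    # Step 5: equal-split attribution among this batch's contributors.
    contributors = defaultdict(set)
    for (p, x, _, _, _) in batch:
        contributors[x].add(p)
    dI_p = defaultdict(float)
    for x, ps in contributors.items():
        gain = dI.get(x, 0.0) + T.get(x, 0.0)  # direct + edge-mediated
        for p in ps:
            dI_p[p] += gain / len(ps)
    # Step 6: reputation grows in proportion to attributable improvement.
    for p, d in dI_p.items():
        reputation[p] = reputation.get(p, 0.0) + alpha * d
    # Step 7: pro-rata payout from this epoch's emission; reputation is
    # read but never debited (ratings stay orthogonal to exchange).
    total = sum(reputation.values())
    payout = {p: emission * r / total for p, r in reputation.items()}
    return new, reputation, payout

# Usage: one dependency e1: {c1} -> c2, and two contributing agents.
deps = {"e1": ({"c1"}, "c2")}
batch = [("alice", "c1", +1, 1.0, 0.9),
         ("bob", "e1", +1, 0.5, 0.8)]
ledger, rep, pay = run_epoch([], batch, deps, {}, emission=100.0)
print(rep)  # attributable improvement converted to reputation
print(pay)  # pro-rata compensation
```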
Step 1: Determination
For any epistemic object x ∈ X, we have an existing epistemic signal S_old(x) and a new epistemic signal S_new(x), where polarity is preserved and carried forward; one possible form is sketched below.
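As a minimal sketch, assuming assertions pool additively over the ledger L(x) (the pooling rule is our assumption, not a fixed choice of the protocol), with L_old(x) the ledger before the new batch and ΔL(x) the newly signed assertions:

```latex
% Additive pooling of signed assertions (assumed form)
\[
S_{\mathrm{old}}(x) \;=\; \sum_{(\pi,s,\omega)\,\in\, L_{\mathrm{old}}(x)} \pi\, s\, \omega,
\qquad
S_{\mathrm{new}}(x) \;=\; S_{\mathrm{old}}(x) \;+\; \sum_{(\pi,s,\omega)\,\in\,\Delta L(x)} \pi\, s\, \omega
\]
```

Because each term carries its own polarity π, supporting and opposing evidence offset one another rather than being overwritten, which is the sense in which polarity is preserved and carried forward.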
Step 2: Aggregation
From this, we calculate a ‘marginal epistemic improvement’ produced by new signed evidence that rewards orthogonal evidence, enforces diminishing returns, and is strictly positive only for novel signal; a sketch follows.
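One realisation of these properties, as a sketch; the unsigned epistemic mass m(x) and the concave gauge f are our assumptions:

```latex
% Unsigned epistemic mass on x, and marginal improvement (assumed forms)
\[
m(x) \;=\; \sum_{(\pi,s,\omega)\,\in\, L(x)} s\,\omega,
\qquad
\Delta I(x) \;=\; f\bigl(m_{\mathrm{new}}(x)\bigr) \;-\; f\bigl(m_{\mathrm{old}}(x)\bigr),
\qquad
f(u) \;=\; \ln(1+u)
\]
```

Concavity of f enforces diminishing returns, and ΔI(x) > 0 exactly when new mass lands on x. Fully rewarding orthogonal evidence would additionally require discounting near-duplicate assertions with a redundancy kernel, which we leave abstract here.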
Step 3: Normalisation
For each dependency proposition e ∈ E, we convert its epistemic mass into a bounded transfer coefficient that introduces no interference. Where g_e: ℝ → [0,1] is a bounded, monotone normalisation function, we define this coefficient as τ_e = g_e(m(e)).
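A minimal sketch of one admissible normalisation; the particular squashing function is illustrative only:

```latex
% Bounded, monotone normalisation of edge mass into a transfer
% coefficient (illustrative choice of g_e)
\[
g_e(u) \;=\; \frac{u}{1+u},
\qquad
\tau_e \;=\; g_e\bigl(m(e)\bigr) \;\in\; [0,1)
\]
```

Any bounded, monotone map qualifies; this one keeps weakly evidenced edges near zero and saturates heavily evidenced edges towards one.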
Step 4: Propagation
For a dependency proposition e = (A, c), we enforce a weakest-link principle by defining an edge-mediated epistemic transfer, gated by the least-improved antecedent. We use this function to define total propagated system improvement, whereby epistemic mass never propagates without an explicitly asserted dependency proposition. We then use this to calculate total epistemic improvement. A sketch of all three quantities follows.
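As a sketch, under the assumed forms above; gating the transfer on the least-improved antecedent is one reading of the weakest link (gating on accumulated mass instead would be equally admissible):

```latex
% Edge-mediated transfer, propagated improvement, total improvement
\[
T(e) \;=\; \tau_e \cdot \min_{c_i \in A(e)} \Delta I(c_i),
\qquad
\Delta I_{\mathrm{prop}} \;=\; \sum_{e \in E} T(e),
\qquad
\Delta I_{\mathrm{total}} \;=\; \sum_{x \in X} \Delta I(x) \;+\; \Delta I_{\mathrm{prop}}
\]
```

Because the propagated term sums only over asserted dependency propositions e ∈ E, no consequent ever receives mass from antecedents it has not been explicitly linked to.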
Step 5: Attribution
We define a counting indicator² 1_p(x) ∈ {0, 1}, equal to one exactly when agent p’s newly signed assertions contributed novel signal to x.
Then, we define direct contribution and indirect (propagated) contribution for agent p, and use these to define the agent’s total attributable improvement; a sketch follows.
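A minimal sketch assuming an equal split among novel contributors; per the footnote, a Shapley-value split may later replace this simplification, and the convention 0/0 = 0 applies where an object has no contributors:

```latex
% Direct, indirect (propagated), and total attributable improvement
\[
D_p \;=\; \sum_{x \in X} \frac{\mathbb{1}_p(x)}{\sum_q \mathbb{1}_q(x)}\,\Delta I(x),
\qquad
I_p \;=\; \sum_{e \in E} \frac{\mathbb{1}_p(e)}{\sum_q \mathbb{1}_q(e)}\,T(e),
\qquad
\Delta I_p \;=\; D_p + I_p
\]
```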
Step 6: Reputation
With a proportionality coefficient α, we use this to add to the agent’s reputation: R_p ← R_p + α · ΔI_p.
Step 7: Compensation
With a bespoke monetary currency that we have already launched via a ‘Reverse ICO as IPO’, we then compensate the agent p based on their reputation as a proportion of the reputation of every agent q who has ever contributed to the system, in a way that keeps reputation and monetary compensation distinct; a sketch follows.
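A minimal sketch, assuming a per-epoch emission B_t of the currency (the emission schedule is our assumption). Reputation is read but never debited, which keeps ratings orthogonal to exchange:

```latex
% Pro-rata compensation from a per-epoch emission (assumed form)
\[
\mathrm{pay}_p(t) \;=\; B_t \cdot \frac{R_p}{\sum_q R_q}
\]
```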
Ontology
For any state proposition c, we split its signed evidence by polarity into supporting and opposing mass: S⁺(c) = Σ_{π=+1} s·ω and S⁻(c) = Σ_{π=−1} s·ω, with both sums taken over the ledger L(c).
Together, these form an ‘epistemic sum’, Σ(c) = S⁺(c) + S⁻(c), and if the sum equals zero³, then the claim remains epistemically valid but ontologically indeterminate, in that its ontological projection is undefined.
We may define an ‘epistemic percentage’ to measure how much is known about a given claim; a sketch follows.
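One bounded realisation, as a sketch; the squashing map (the same family as g_e) is our assumption:

```latex
% Epistemic percentage: how much is known about claim c (assumed form)
\[
K(c) \;=\; 100\% \times \frac{\Sigma(c)}{1+\Sigma(c)}
\]
```

K(c) is 0% for a claim with no signed evidence and approaches 100% as evidence accumulates, with diminishing returns.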
We may then define an ontological projection, the signed balance of the evidence, O(c) = S⁺(c) − S⁻(c),
and use it in combination with our non-zero epistemic sum to define an ontological percentage, a judgement on how true or false a given claim is based purely on which way the evidence points: O%(c) = 100% × O(c) / Σ(c).
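A worked toy example under the assumed forms above: suppose claim c carries three supporting assertions with s·ω products 1.0, 0.5 and 0.5, and one opposing assertion with s·ω = 0.5. Then:

```latex
\[
S^{+}(c) = 2.0,\qquad S^{-}(c) = 0.5,\qquad \Sigma(c) = 2.5,
\]
\[
O\%(c) = 100\% \times \frac{2.0 - 0.5}{2.5} = +60\%,
\qquad
K(c) = 100\% \times \frac{2.5}{3.5} \approx 71\%
\]
```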
This percentage is bounded between ±100% - symmetric, directional, and scale-free. Overall, we have a system in which claims are formally signed, no consequences are assumed, reasoning is always done from first principles, and no one can falsely take credit for an existing claim. All ‘truth’ is grounded in evidence, and the system only improves through explicit epistemic labour.
“You don’t spend reputation because ratings are orthogonal to exchange.” - Arthur Brock
Commentary
First, some model questions:
Doesn’t this system require a quantum-complete understanding of the universe? No, we simply go forward building knowledge (i.e. signed claims) closer to the quantum level, in the same way that our understanding of gravity has evolved from Newton’s contribution to Einstein’s, and will continue to evolve - this is the scientific method in action
Doesn’t this system have to be attribution-complete before full implementation? Contributors are only rewarded for improving the truth of the system, and so this would not matter in an absolute sense, but we may need to partner with current knowledge aggregators (e.g. social networks and academic journals) to compensate existing knowledge contributors (writers, reviewers, etc.) before the launch of the monetary currency
Now, some broader questions:
Where can this system be applied? This system is orthogonal to the content trias politica, and can likely be repurposed in other areas such as sampling music
Doesn’t this mechanism eliminate the concept of citation? Technically! The system ensures that contributors are continually compensated reputationally and monetarily for their contributions, and that those contributions are public in a privacy-preserving manner
Doesn’t this mechanism demonstrate that people cannot know what is true, and so, like robots, cannot be conscious? As a probabilistic system, it operates under the notion that no intelligence, biological or artificial, can be conscious, because no intelligence can absolutely know what is true, and that consciousness only exists as an elaborate mating ritual
“AI could never be conscious because it cannot know what is true” - Sir Roger Penrose
Call to Action
We introduce a scale-agnostic, probabilistic, decentralised, open-source, and tokenised truth machine that solves the problem of causal inference. The model works - it is time for all governments to introduce the First Amendment and let us increase the truth of the internet 👊
¹ We will likely lock ω to some function of R_p in the future.
² We will likely transition to using Shapley values in the future.
³ Or, if dependencies do not ground causal testability.


