The killer demonstration

Same model. Same question. Different soul.

We ask the identical question to two instances of Claude Opus 4.7. On the left: a vanilla Claude with no Personality Vector — a generic helpful assistant. On the right: the same Claude configured with William Carrington III’s Personality Vector via our patented Behavioral Prompting mechanism. The difference is the patent.

Pick a question or write your own

The first request after the API has been idle takes ~2 seconds (machine wakeup). Subsequent requests are near-instant.

Vanilla Claude Opus 4.7
No Personality Vector. No memories. No Behavioral Prompting. Just the same model answering the question directly.
(Run a question above to generate)
AI Time Capsule (William Carrington III)
Same Claude Opus 4.7. Configured with Will's Personality Vector + retrieved episodic memories via Behavioral Prompting. Sealed with an Ed25519 signature.
(Run a question above to generate)

What you are looking at

  1. Same foundation model on both sides. We do not change Claude. We do not fine-tune Claude. We do not train any new model.
  2. The vanilla side receives only your question with a generic “you are a helpful assistant” system prompt. It produces a competent but characterless answer — anyone could have written it.
  3. The AI Time Capsule side receives a meta-prompt assembled from Will's Personality Vector (values, cognitive style, stylometric signature, humor markers, relational register) plus the top-5 episodic memories retrieved by relevance to your question. Claude then extrapolates an answer in Will's way of thinking.
  4. Every AI Time Capsule response is sealed. A SHA-256 manifest binds the answer to its inputs, and an Ed25519 signature lets any third party verify it. Try the verifier →
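The assembly in step 3 can be sketched in a few lines. This is a minimal illustration, not the product's actual schema: the Personality Vector field names, the sample memories, and the word-overlap retriever are all assumptions standing in for a real embedding-based pipeline.

```python
# Hypothetical Personality Vector fields (illustrative names, not the
# patent's actual schema).
personality_vector = {
    "values": "directness over diplomacy; loyalty to family",
    "cognitive_style": "analogy-first, then first principles",
    "stylometric_signature": "short declaratives, dry asides",
    "humor_markers": "deadpan, self-deprecating",
    "relational_register": "warm with intimates, formal with strangers",
}

# Toy episodic memory store (invented examples).
episodic_memories = [
    "1987: taught my daughter to sail on the Chesapeake.",
    "2003: sold the family firm; hardest call I ever made.",
    "2011: started journaling every morning before coffee.",
    "1975: first job, night shifts at the printing press.",
    "1999: argued with Dad about the business for the last time.",
    "2016: grandson asked me what I regret. Told him the truth.",
]

def top_k_memories(question: str, memories: list[str], k: int = 5) -> list[str]:
    """Toy relevance ranking by word overlap; a production system
    would use embedding similarity instead."""
    q_words = set(question.lower().split())
    scored = sorted(memories,
                    key=lambda m: len(q_words & set(m.lower().split())),
                    reverse=True)
    return scored[:k]

def build_meta_prompt(question: str) -> str:
    """Assemble a Behavioral Prompting meta-prompt: Personality Vector
    fields plus the top-5 retrieved episodic memories."""
    lines = ["You are simulating a specific person. Stay in character."]
    for field, value in personality_vector.items():
        lines.append(f"{field}: {value}")
    lines.append("Relevant memories:")
    lines.extend(f"- {m}" for m in top_k_memories(question, episodic_memories))
    lines.append(f"Question: {question}")
    return "\n".join(lines)

prompt = build_meta_prompt("What do you regret about the business?")
print(prompt)
```

The assembled string is what gets sent to Claude as the system prompt on the right-hand side; the vanilla side gets only the generic assistant prompt.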

The patent isn’t about a smarter model. It’s about a mechanism for turning the same model into a faithful simulator of a specific person: auditable, hashable, and inheritable. That mechanism is what you are seeing on the right.
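The sealing described in step 4 can be sketched with the standard library alone. The manifest field names below are illustrative assumptions; the real manifest schema is the product's. Only the SHA-256 binding is shown executably, and the comment notes where the Ed25519 signature fits (e.g. via the `cryptography` package's `Ed25519PrivateKey.sign` / `Ed25519PublicKey.verify`).

```python
import hashlib
import json

def seal_manifest(question: str, meta_prompt: str, answer: str,
                  model: str = "claude-opus-4.7") -> dict:
    """Build a SHA-256 manifest binding an answer to its inputs.
    Field names are illustrative, not the product's actual schema."""
    manifest = {
        "model": model,
        "question_sha256": hashlib.sha256(question.encode()).hexdigest(),
        "meta_prompt_sha256": hashlib.sha256(meta_prompt.encode()).hexdigest(),
        "answer_sha256": hashlib.sha256(answer.encode()).hexdigest(),
    }
    # Canonical serialization so signer and verifier hash identical bytes.
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode()).hexdigest()
    # In the full scheme, an Ed25519 private key signs this digest; any
    # third party verifies the signature with the public key, then
    # recomputes the hashes to confirm nothing was altered.
    return {"manifest": manifest, "digest": digest}

sealed = seal_manifest("What matters most?", "example meta-prompt", "example answer")
print(sealed["digest"])
```

Because the digest covers the question, the meta-prompt, and the answer, changing any one of them invalidates the seal, which is what makes the response auditable after the fact.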