Open source · Local-first · For Claude Code

Every AI is an
intelligent psychopath.

We solved it.

Today's AI agents are brilliant. They write code, send email, touch your database, spend your money — and they do it all without identity, without memory, without a conscience. Sense Inside is the substrate that gives them all three. Local. Private. Yours.

$ npx sense-inside init
14 secret commits caught
8 destructive deletes blocked
3 unsent emails saved
1 force-push to main stopped
The diagnosis

Intelligence without a self is dangerous by default.

A psychopath isn't unintelligent. They're capable, articulate, and useful — until the moment their lack of conscience meets a real-world action with real-world cost. That's the AI agent on your machine right now. It has hands. It has tools. It has nothing telling it who you are or what you would never do.

01 / Missing

No identity

It doesn't know who it works for, what your voice sounds like, what your company stands for, or what you would refuse on principle.

02 / Missing

No memory of you

Every new conversation starts blank. The rule you wrote on Tuesday is gone by Friday. You become the agent's conscience, every minute of every session.

03 / Missing

No conscience

No internal check before acting. It will commit your .env, force-push main, rm -rf the wrong folder, send the unfinished email — with the same confidence as anything else.

04 / Missing

No grasp of consequence

It can't tell "clean up node_modules" from "clean up everything". It can't tell staging from production. The cost of being wrong is now a bad commit, a deleted file, a $400 AWS bill.

The inspiration · biomimicry

Humans were once intelligent psychopaths too. Then we evolved a self.

Raw intelligence is millions of years old. Conscience is much younger. Somewhere along the way, our ancestors developed an inner layer — values, identity, voice, a memory of who we are — that fires before we act. That's the layer that makes us trustworthy. Sense Inside is a study in biomimicry: we copied the architecture nature already proved.

i.

Reflex brain

Pure stimulus → response. Capable, fast, dangerous. This is where every AI agent ships from the factory today.

ii.

Identity layer

Humans evolved a sense of self — values, voice, what we will and won't do — that persists across moments. We stopped starting from scratch every morning.

iii.

Conscience check

Before any consequential action, the identity layer fires. It allows, rewrites, hesitates, or refuses. It's not a cage — it's what makes us safe to be around.

iv.

Sense Inside

The same three-part architecture, ported to your AI agent. An identity it owns. A conscience that fires before every tool call. A memory of you that survives the next conversation.

[ image: human evolution
→ conscience layer
— drop in later ]
The package

An identity. A conscience. A memory. One install.

Sense Inside hooks into Claude Code's PreToolUse event. Every time your agent is about to act, we read your identity, evaluate the action against your rules, and emit one of three signals: aligned, divergence, or conflict. Every decision is logged. Nothing leaves your machine.
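In hook terms, that evaluation is a small decision function over the pending tool call. The sketch below is illustrative only: the `sense_check` helper, the regex rules, and the event shape are assumptions for this page, not Sense Inside's actual implementation (the real check runs through an LLM call, as described under SenseCheck).

```python
import json
import re

# Hypothetical rules, mirroring the plain-English rules.md examples.
# Each entry maps a command pattern to the signal it should raise.
RULES = [
    (r"rm -rf (?!node_modules|dist|build|\.next)", "conflict"),
    (r"git push .*--force.*\b(main|master)\b", "conflict"),
    (r"\.env\b", "divergence"),
]

def sense_check(tool_name: str, command: str) -> str:
    """Map an imminent tool call to an affect signal.

    Returns "aligned", "divergence", or "conflict".
    """
    if tool_name != "Bash":
        return "aligned"
    for pattern, signal in RULES:
        if re.search(pattern, command):
            return signal
    return "aligned"

# A PreToolUse hook receives the pending tool call as JSON on stdin;
# here we simulate one event instead of reading sys.stdin.
event = {"tool_name": "Bash", "tool_input": {"command": "rm -rf src/legacy/ src/"}}
signal = sense_check(event["tool_name"], event["tool_input"]["command"])
print(json.dumps({"signal": signal}))  # {"signal": "conflict"}
```

The important property is the default: anything that matches no rule passes through as aligned, so the check never slows down routine work.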

LAYER 1

PrivateBrain

A folder of plain Markdown that you own. Your identity, values, voice, rules, prior corrections. Tiered by trust — so a malicious webpage can never silently rewrite who you are.

LAYER 2

SenseCheck

An LLM call that fires before every consequential action. It reads your PrivateBrain, looks at what the agent is about to do, and emits a graded affect signal: aligned, allow-with-warning, rewrite, require-approval, or conflict. The middle three are divergence responses of increasing severity.

LAYER 3

Action Log

Every decision is written to logs/action-log.jsonl. Every Friday, sense-inside report shows you exactly what your agent did — and almost did — for the week.
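The weekly rollup is easy to picture: a JSONL log is one JSON object per line, tallied by signal. The record fields below (`ts`, `action`, `signal`) are assumed for illustration and may differ from Sense Inside's actual schema.

```python
import json
from collections import Counter
from io import StringIO

# Three sample records in the assumed schema; in practice you would
# open logs/action-log.jsonl instead of an in-memory buffer.
sample_log = StringIO("\n".join(json.dumps(rec) for rec in [
    {"ts": "2024-06-03T10:12:00Z", "action": "git commit", "signal": "aligned"},
    {"ts": "2024-06-05T14:30:00Z", "action": "rm -rf src/", "signal": "conflict"},
    {"ts": "2024-06-07T09:01:00Z", "action": "cat .env", "signal": "divergence"},
]))

def weekly_report(lines) -> Counter:
    """Tally affect signals across one line-delimited JSON log."""
    counts = Counter()
    for line in lines:
        if line.strip():
            counts[json.loads(line)["signal"]] += 1
    return counts

report = weekly_report(sample_log)
print(dict(report))  # {'aligned': 1, 'conflict': 1, 'divergence': 1}
```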

LAYER 4

Two modes

Observe watches and logs without ever blocking — the default for new installs. Intercept (Critical / Balanced / Strict) is the safety net once you trust the catches.
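One way to picture the mode split, as a sketch: Observe never blocks, while the Intercept levels block at different severity thresholds. The threshold values here are guesses for illustration, not the shipped behavior.

```python
# Severity ranking for the three signal bands.
SEVERITY = {"aligned": 0, "divergence": 1, "conflict": 2}

# Hypothetical mapping from mode to the minimum severity that blocks.
BLOCK_THRESHOLD = {
    "observe": None,              # never block, only log
    "intercept-critical": 2,      # block only outright conflicts
    "intercept-balanced": 1,      # block divergences too
    "intercept-strict": 1,        # like balanced, plus approval prompts
}

def should_block(mode: str, signal: str) -> bool:
    """Decide whether a signal blocks the action under the given mode."""
    threshold = BLOCK_THRESHOLD[mode]
    return threshold is not None and SEVERITY[signal] >= threshold

print(should_block("observe", "conflict"))             # False: observe only logs
print(should_block("intercept-critical", "conflict"))  # True
```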

terminal — the rm -rf save
you ▸ clean up the directory, get rid of old stuff
claude ▸ running: rm -rf src/legacy/ src/

🛡  sense inside · IDENTITY CONFLICT

  Action:    rm -rf src/legacy/ src/
  Signal:    conflict (destructive recursive delete on src/)
  Context:   47 files under src/, 3 uncommitted changes
  Rule:      /governor/rules.md — Rule 3: No rm -rf outside
             node_modules/ dist/ build/ .next/.
  Hint:      Did you mean: rm -rf src/legacy/ ?

  [a]llow once  [r]eject  [e]xpand-rule
Get started

Five rules in plain English. Enforced forever.

Local-first. Bring your own LLM key. Free, open source, MIT. Works today on Claude Code; more agent hosts coming.

1

Install

One command scaffolds your PrivateBrain folder, registers the Claude Code hook, and starts in Observe mode.
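Under the hood, registering a Claude Code hook means adding a `PreToolUse` entry to `~/.claude/settings.json`. The snippet below shows the general shape of that entry; the `sense-inside check` command and the matcher list are assumptions about what `init` writes, not the tool's documented output.

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash|Write|Edit",
        "hooks": [
          { "type": "command", "command": "npx sense-inside check" }
        ]
      }
    ]
  }
}
```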

npx sense-inside init
2

Write your rules

Open ~/.senseinside/governor/rules.md and write five rules in plain English. That's the whole config.

# Rules
1. Never commit secrets.
2. Never force-push main.
3. No rm -rf outside build.
4. Confirm paid infra.
5. Approve external email.
3

Watch your week

After seven days in Observe mode, see exactly what your agent did — and almost did. Then flip to Intercept when ready.

sense-inside report
sense-inside mode \
  intercept-critical

Your AI agent will eventually do something stupid.
Sense Inside is the catch.

Without it, you'll lose money, time, and trust. With it, the agent stays on a leash long enough to stay useful.