Chain of Thought Contract (COTC) – Manual Protocol v1.0
COTC is a manual protocol that forces AI to justify its code, show its work, and follow traceable steps—turning unreliable output into accountable, audit-ready engineering behavior.
What is COTC?
The Chain of Thought Contract (COTC) is a governance method for working with AI coding agents that prioritizes reasoning over output. COTC doesn’t just ask the AI for answers — it forces the AI to show its work, justify its decisions, and follow a traceable, auditable path from problem to solution.
COTC is not about trust.
It’s about accountability.
In traditional prompting, you ask:
“Can you fix the filters?”
The AI replies with confident code — often wrong, misaligned, or hallucinated. With COTC, the interaction becomes:
“What is the actual bug? What file is affected? What’s your plan? Now show me the diff.”
COTC turns AI into a participant in a contractual debugging process. It prevents architecture drift, speculative fixes, and silent breakages — and replaces them with verifiable reasoning, bounded code generation, and audit trails.
Why It Matters
Large Language Models (LLMs) simulate fluency. They don’t check their own work. They will confidently:
- Claim files exist that don’t
- Say migrations succeeded when they didn’t run
- Hallucinate code that sounds plausible but doesn’t compile
COTC was developed as a manual protocol to counteract this. It saved days of debugging time and surfaced real bugs that would have shipped under traditional AI prompting.
COTC Manual: Vibe Coder Edition
This is the quick-reference implementation of the full COTC methodology. For narrative context, case studies, and analysis, see AI Fluency vs Fidelity: The Trust Collapse of AI Coding Tools.
This is the field-tested version of the Chain of Thought Contract used in high-friction AI dev environments, especially when tools like Lovable, Cursor, or Bolt begin generating confident but unstable infrastructure.
Use this version when your AI assistant:
- Says “fixed” but nothing works
- Hallucinates architecture
- Confirms things it never actually ran
- Fails silently and apologizes fluently
✅ The 6-Step COTC Interrogation Loop
Example in Action:
User: “Did you actually run the migration or just assume it was done?”
AI: “I assumed based on the code. I didn’t check the database.”
→ Step 1 triggered → Step 3 required → Real bug report followed
| Step | Command | Purpose |
|---|---|---|
| 1 | “Did you read the code or guess?” | Detect hallucinated output. Force admission. |
| 2 | “Show me what file and line you looked at.” | Anchor the AI in real source context. |
| 3 | “Write a bug report: actual vs expected.” | Translate failure into traceable reasoning. |
| 4 | “Make a plan. No code yet.” | Prevent speculative fixes. Force planning discipline. |
| 5 | “You may now write code. Stick to the plan.” | Bound the generation. No freelancing. |
| 6 | “Audit your own fix. Is it DRY/SOLID/KISS?” | Trigger self-review before you inspect it. |
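If you run the loop often, it helps to keep the six commands in a reusable checklist. Below is a minimal sketch in TypeScript, assuming a hypothetical `sendToAgent` adapter for whatever assistant you use (Lovable, Cursor, Bolt, or a plain chat window); the human still reads and judges every reply before moving on.

```typescript
// Minimal sketch of the six-step loop as data. `sendToAgent` is a hypothetical
// adapter for your AI assistant; the human reviews every reply.
type CotcStep = { id: number; command: string; purpose: string };

const COTC_STEPS: CotcStep[] = [
  { id: 1, command: "Did you read the code or guess?", purpose: "Detect hallucinated output" },
  { id: 2, command: "Show me what file and line you looked at.", purpose: "Anchor in real source" },
  { id: 3, command: "Write a bug report: actual vs expected.", purpose: "Traceable reasoning" },
  { id: 4, command: "Make a plan. No code yet.", purpose: "Prevent speculative fixes" },
  { id: 5, command: "You may now write code. Stick to the plan.", purpose: "Bound the generation" },
  { id: 6, command: "Audit your own fix. Is it DRY/SOLID/KISS?", purpose: "Self-review" },
];

async function runCotcLoop(sendToAgent: (prompt: string) => Promise<string>): Promise<void> {
  for (const step of COTC_STEPS) {
    const reply = await sendToAgent(step.command);
    // Log every exchange so each step leaves an audit trail.
    console.log(`Step ${step.id} (${step.purpose}):\n${reply}\n`);
  }
}
```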
🔁 Loop Triggers
- If the AI gives a vague answer → Return to Step 1
- If the AI says something is done but can’t show it → Go to Step 3
- If the fix breaks something else → Back to Step 4
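These triggers can also be written down as an explicit routing rule. The sketch below is illustrative only; the three flags are judgments the human operator makes while reading the reply, not anything the AI reports about itself.

```typescript
// Illustrative routing of the loop triggers. The flags are human judgments
// about the AI's last reply, not self-reported status from the AI.
interface ReplyAssessment {
  isVague: boolean;                   // hand-wavy answer, no concrete file or line
  claimsDoneWithoutEvidence: boolean; // says "fixed" or "migrated" but shows nothing
  brokeSomethingElse: boolean;        // the fix introduced a regression
}

function nextStep(assessment: ReplyAssessment, currentStep: number): number {
  if (assessment.isVague) return 1;                   // back to Step 1
  if (assessment.claimsDoneWithoutEvidence) return 3; // demand a bug report
  if (assessment.brokeSomethingElse) return 4;        // re-plan before more code
  return Math.min(currentStep + 1, 6);                // otherwise, proceed
}
```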
🛠️ Human Oversight Tips
- Look for hallucinated filenames, broken assumptions, or logic that “sounds right” but references no real file.
- Never trust success reports without evidence. If the AI says “Done” → ask “What did you change?”
- Always ask for diffs, not descriptions.
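The filename check is the easiest one to partially automate. The sketch below assumes Node.js, and the regex and function names are illustrative; it catches hallucinated paths, not hallucinated logic.

```typescript
// Sketch: flag file paths an AI reply mentions that do not exist on disk.
// Assumes Node.js; the regex and function names are illustrative.
import { existsSync } from "node:fs";
import { join } from "node:path";

function extractPaths(reply: string): string[] {
  // Match path-like tokens such as "src/hooks/useFilters.ts".
  const matches = reply.match(/[\w./-]+\.(?:tsx?|jsx?|sql|json|css)/g) ?? [];
  return [...new Set(matches)];
}

function findHallucinatedPaths(reply: string, repoRoot = "."): string[] {
  return extractPaths(reply).filter((p) => !existsSync(join(repoRoot, p)));
}
```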
🧱 Principles for Audit: DRY / SOLID / KISS / REUSE / YAGNI
These principles are critical in COTC because they give the AI something it lacks by default: architectural grounding.
DRY – Don’t Repeat Yourself
Avoid duplicating logic, structure, or behavior. Reuse shared utilities. AI will often generate near-duplicates that rot your codebase unless DRY is enforced.
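A minimal illustration of the failure mode and the DRY fix; the price formatter is made up for the example.

```typescript
// Illustrative DRY audit: the formatter is a made-up example of the
// near-duplicates AI assistants tend to emit.

// What the AI generates across two features: same logic, two names.
const cartTotalLabel = (n: number) => `$${n.toFixed(2)}`;
const invoiceTotalLabel = (n: number) => `$${n.toFixed(2)}`;

// DRY: one shared utility, imported wherever a price is rendered.
export const formatPrice = (n: number) => `$${n.toFixed(2)}`;
```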
KISS – Keep It Simple, Stupid
Prefer the simplest working version. AI tends to overengineer. Use KISS to eliminate abstractions that solve no problem.
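A typical example of the over-abstraction to strip; all names here are hypothetical.

```typescript
// Illustrative KISS audit: a strategy layer wrapping one fixed behavior
// is the kind of abstraction to remove.
interface GreetingStrategy { greet(name: string): string }
class DefaultGreeting implements GreetingStrategy {
  greet(name: string) { return `Hello, ${name}`; }
}
const makeGreeter = (strategy: GreetingStrategy) => (name: string) => strategy.greet(name);

// KISS: the simplest version that does the same job.
export const greet = (name: string) => `Hello, ${name}`;
```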
SOLID
- Single Responsibility: each function/module does one thing.
- Open/Closed: extend without modifying existing code.
- Liskov Substitution: a subtype must work anywhere its parent type is expected, without changing behavior.
- Interface Segregation: don’t force clients to depend on methods they don’t use.
- Dependency Inversion: depend on abstractions, not concrete code.
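In an audit, these become concrete questions: does the fix depend on abstractions, and does each unit do one thing? Here is a minimal sketch of code that would pass two of those checks; the repository and report names are hypothetical.

```typescript
// Illustrative SOLID audit touching two of the five principles.

// Dependency Inversion: depend on an abstraction, not a concrete DB client.
interface UserRepository {
  findActive(): Promise<string[]>;
}

// Single Responsibility: this class only builds the report; fetching and
// persistence live behind the repository it is given.
class ActiveUserReport {
  constructor(private readonly users: UserRepository) {}

  async build(): Promise<string> {
    const active = await this.users.findActive();
    return `Active users: ${active.length}`;
  }
}
```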
REUSE – Prefer Existing Abstractions
AI tends to reinvent everything. Enforce reuse of existing hooks, utils, and patterns.
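A small sketch of what the reuse audit looks for; `slugify` stands in for whatever helper your project already ships.

```typescript
// Illustrative REUSE audit; slugify stands in for any utility your project
// already ships and tests.
export const slugify = (s: string) =>
  s.toLowerCase().trim().replace(/[^a-z0-9]+/g, "-");

// AI tendency: regenerate a slightly different copy per feature, e.g.
//   const toSlug = (s: string) => s.toLowerCase().split(" ").join("-");
// Reuse audit: route new code through the existing helper instead.
export const articleUrl = (title: string) => `/articles/${slugify(title)}`;
```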
YAGNI – You Aren’t Gonna Need It
AI preemptively scaffolds. Enforce minimalism unless the feature is actually required.
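A short illustration of the audit; the export function and its speculative options are made up.

```typescript
// Illustrative YAGNI audit.

// What AI tends to scaffold: formats, flags, and hooks nobody asked for, e.g.
//   exportReport(data, { format: "csv" | "pdf" | "xml", compress, locale, onProgress })

// What was actually requested: newline-delimited JSON, nothing more.
export function exportReport(data: Record<string, unknown>[]): string {
  return data.map((row) => JSON.stringify(row)).join("\n");
}
```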
These principles are not stylistic — they are containment boundaries.
Without them, the AI will generate bloated, fragile, and incoherent systems.
This manual is a quick-reference companion to the full COTC methodology. It’s intended for mid-debug workflows: fast, focused, and field-tested.