The Cognitive Toolkit

Four Tools. Four Ways Your Thinking Goes Wrong.

Muddy thinking. Skipped steps. Missing perspectives. Unweighted trade-offs. Each tool catches a different failure mode before you ship it.

The Four Tools

Each tool is designed for a specific failure mode. Use them when your main AI workflow is stuck, when you need to challenge your assumptions, or when decisions need documented reasoning.

Not Prompts. Tools.

Most "reasoning" wrappers inject a system prompt the model can ignore. We inject structured tools that create auditable reasoning trails.

1. Isolated Sessions

Each reasoning session runs in isolation. Your main context stays clean. You get the insight—not polluted context influencing everything after.

2. Observable Tool Calls

The LLM invokes cognitive tools: score_option, surface_contradiction. You see every call.

3. Persistent Sessions

Sessions are saved automatically. Mark key moments. Return to past insights. Build a library of your reasoning—not just the conclusions.

# Decision Matrix session - the LLM gets these tools:
tools = [
    define_criteria,      # Lock in what matters before evaluating
    set_weights,          # Make priorities explicit
    score_option,         # Evaluate with reasoning attached
    run_sensitivity,      # Check if your decision is robust
]

# Sample tool invocation (observable in session):
> INVOKE: score_option(
    option="PostgreSQL",
    criterion="consistency_guarantees",
    score=0.95,
    reasoning="Full ACID compliance, strong transaction support"
)
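The mechanics behind those four tools can be sketched in a few lines of plain Python. This is an illustrative model, not the product's implementation: the criteria, weights, scores, and the run_sensitivity helper below are all placeholders chosen to mirror the session sample above.

```python
# Illustrative Decision Matrix: weights and scores are made-up placeholders.
criteria_weights = {
    "consistency_guarantees": 0.5,
    "operational_simplicity": 0.3,
    "horizontal_scaling": 0.2,
}

# score_option calls accumulate into a table like this:
scores = {
    "PostgreSQL": {"consistency_guarantees": 0.95,
                   "operational_simplicity": 0.80,
                   "horizontal_scaling": 0.55},
    "DynamoDB":   {"consistency_guarantees": 0.60,
                   "operational_simplicity": 0.70,
                   "horizontal_scaling": 0.95},
}

def weighted_total(option, weights):
    # Sum of (weight x score) across every criterion.
    return sum(weights[c] * scores[option][c] for c in weights)

def winner(weights):
    return max(scores, key=lambda o: weighted_total(o, weights))

def run_sensitivity(weights, delta=0.2):
    # Perturb each weight by +/-20% (renormalized) and record
    # any perturbation that changes the top-ranked option.
    base = winner(weights)
    flips = []
    for c in weights:
        for factor in (1 - delta, 1 + delta):
            perturbed = dict(weights)
            perturbed[c] = weights[c] * factor
            total = sum(perturbed.values())
            perturbed = {k: v / total for k, v in perturbed.items()}
            if winner(perturbed) != base:
                flips.append((c, factor))
    return base, flips

base, flips = run_sensitivity(criteria_weights)
# An empty flip list means the choice is robust to moderate
# disagreement about priorities; any flip pinpoints the fragile weight.
```

With these numbers, PostgreSQL wins on every perturbation, so the flip list comes back empty: the decision survives a 20% dispute over any single priority.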

When to Reach for These

Your Coding Agent Is Looping

Iteration 30 of the wrong approach. Each attempt builds on the last, sinking deeper into a flawed mental model. The agent's context is polluted.

Use: Structured Reflection to articulate the actual problem in fresh context. Return to the agent with clarity, not more iterations.

Decisions That Affect Multiple Teams

API design, architecture choices, pricing models. You've thought it through from your perspective. What about the perspectives you don't have?

Use: Context Switcher to surface blind spots before stakeholders do.

Everyone Agrees Too Quickly

The team is aligned. The AI validated the approach. Nobody's pushing back. Groupthink warning.

Use: Decision Matrix to force explicit trade-off analysis with sensitivity testing.

You Jumped Straight to Solutions

You asked for architecture recommendations before defining what you're actually solving. The AI gave you a plausible answer to a question you hadn't really asked.

Use: Sequential Thinking to enforce stage gates before committing to a direction.
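A stage gate is easy to picture as code. The sketch below is hypothetical: the stage names and SequentialSession class are illustrative, not the tool's actual API. The point it demonstrates is the enforcement: you cannot record a conclusion until every earlier stage is done.

```python
# Hypothetical stage-gate model: stages must complete in order.
STAGES = ["problem_definition", "research", "analysis", "synthesis", "conclusion"]

class StageGateError(Exception):
    pass

class SequentialSession:
    def __init__(self):
        self.completed = []  # (stage, notes) pairs, in order

    def complete(self, stage, notes):
        expected = STAGES[len(self.completed)]
        if stage != expected:
            raise StageGateError(
                f"cannot enter {stage!r}: {expected!r} is not done yet")
        self.completed.append((stage, notes))

session = SequentialSession()
session.complete("problem_definition", "What are we actually solving?")
# Skipping ahead raises StageGateError:
# session.complete("conclusion", "Use microservices")
```

Jumping straight to "conclusion" after only defining the problem raises StageGateError, which is exactly the failure mode this section describes: an answer committed before the question was finished.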

Your AI Won't Challenge You. These Will.

$20/month for reasoning sessions that challenge instead of validate. No usage metering, no overage charges.