Four Tools. Four Ways Your Thinking Goes Wrong.
The Four Tools
Each tool is designed for a specific failure mode. Use them when your main AI workflow is stuck, when you need to challenge your assumptions, or when decisions need documented reasoning.
Structured Reflection
The bug was obvious—once you said it out loud.
You've been staring at it for an hour. Explain it to something that asks questions instead of giving answers. The solution becomes obvious before you finish. Sessions persist for when it resurfaces.
- Persistent Sessions
- Multiple Reflection Styles
- Convergence Detection
Sequential Thinking
You jumped straight to the solution. The problem was wrong.
Stage gates that won't let you skip steps. Define the problem before researching. Research before analyzing. Catch where you leapt to conclusions before the code is written.
- Structured Stage Progression
- Stage Context Preservation
- Progress Tracking
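The stage-gate idea can be sketched in a few lines. This is a hypothetical illustration, not the tool's actual API: the stage names and the `StageGate` class are made up for the example.

```python
from enum import IntEnum

class Stage(IntEnum):
    # Hypothetical stage order; the real tool's stages may differ.
    DEFINE = 0
    RESEARCH = 1
    ANALYZE = 2
    CONCLUDE = 3

class StageGate:
    """Minimal stage-gate sketch: refuses to skip ahead."""

    def __init__(self):
        self.current = Stage.DEFINE
        self.notes = {}  # context preserved per completed stage

    def advance(self, to: Stage, summary: str) -> None:
        if to != self.current + 1:
            raise ValueError(
                f"Cannot jump from {self.current.name} to {to.name}: "
                "complete each stage in order."
            )
        self.notes[self.current] = summary
        self.current = to

gate = StageGate()
gate.advance(Stage.RESEARCH, "Problem: checkout latency, not DB load.")
# gate.advance(Stage.CONCLUDE, ...) would raise: stages can't be skipped.
```

The point of the gate is that "jump straight to the solution" becomes a hard error instead of a habit.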
Context Switcher
The flaw was obvious—from every perspective but yours.
Nine stakeholder perspectives evaluate your decision simultaneously: technical, business, user, risk, ops, and more. Surface blind spots you can't see from your own viewpoint.
- Parallel Perspective Analysis
- Customizable Perspectives
- Synthesis Engine
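Parallel perspective analysis can be sketched as fanning one proposal out to several scoped evaluators and collecting the results. This is a toy illustration, assuming a stand-in `evaluate` function in place of a real per-perspective LLM call; none of these names come from the tool itself.

```python
from concurrent.futures import ThreadPoolExecutor

# A subset of perspectives for illustration; the tool ships nine.
PERSPECTIVES = ["technical", "business", "user", "risk", "ops"]

def evaluate(perspective: str, proposal: str) -> str:
    # Stand-in for an LLM call scoped to a single viewpoint.
    return f"[{perspective}] concerns about: {proposal}"

def synthesize(proposal: str) -> list[str]:
    # Run every perspective concurrently, then gather for synthesis.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda p: evaluate(p, proposal), PERSPECTIVES))

findings = synthesize("move checkout to a new payment API")
```

Each finding is independent of the others, which is what keeps one loud perspective from anchoring the rest.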
Decision Matrix
Your gut had a favorite. Your AI validated it in seconds.
Weighted criteria. Scored options. Sensitivity analysis. See if your decision survives changing priorities—not gut feelings validated by a helpful assistant.
- Weighted Criteria
- Parallel Evaluation
- Scoring with Reasoning
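The mechanics of weighted scoring plus a sensitivity check fit in a few lines. A minimal sketch, with made-up options, criteria, and numbers; it is not the tool's API, just the arithmetic behind it.

```python
# Hypothetical options scored per criterion (0.0 to 1.0).
options = {
    "PostgreSQL": {"consistency": 0.95, "scalability": 0.70},
    "DynamoDB":   {"consistency": 0.60, "scalability": 0.95},
}
weights = {"consistency": 0.7, "scalability": 0.3}

def total(option: str, w: dict) -> float:
    # Weighted sum across criteria.
    return sum(w[c] * s for c, s in options[option].items())

scores = {o: total(o, weights) for o in options}
winner = max(scores, key=scores.get)

# Sensitivity analysis: does the winner survive if priorities flip?
flipped = {"consistency": 0.3, "scalability": 0.7}
alt_winner = max(options, key=lambda o: total(o, flipped))
robust = winner == alt_winner  # False here: the choice hinges on the weights
```

When `robust` comes back false, the matrix has told you something your gut wouldn't: the decision is really a decision about priorities.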
Not Prompts. Tools.
Most "reasoning" wrappers inject a system prompt the model can ignore. We inject structured tools that create auditable reasoning trails.
Isolated Sessions
Each reasoning session runs in isolation. Your main context stays clean. You get the insight—not polluted context influencing everything after.
Observable Tool Calls
The LLM invokes cognitive tools like score_option and surface_contradiction. You see every call.
Persistent Sessions
Sessions are saved automatically. Mark key moments. Return to past insights. Build a library of your reasoning—not just the conclusions.
# Decision Matrix session - the LLM gets these tools:
tools = [
    define_criteria,  # Lock in what matters before evaluating
    set_weights,      # Make priorities explicit
    score_option,     # Evaluate with reasoning attached
    run_sensitivity,  # Check if your decision is robust
]
# Sample tool invocation (observable in session):
> INVOKE: score_option(
    option="PostgreSQL",
    criterion="consistency_guarantees",
    score=0.95,
    reasoning="Full ACID compliance, strong transaction support"
)

When to Reach for These
Your Coding Agent Is Looping
Iteration 30 of the wrong approach. Each attempt builds on the last, sinking deeper into a flawed mental model. The agent's context is polluted.
Use: Structured Reflection to articulate the actual problem in fresh context. Return to the agent with clarity, not more iterations.
Decisions That Affect Multiple Teams
API design, architecture choices, pricing models. You've thought it through from your perspective. What about the perspectives you don't have?
Use: Context Switcher to surface blind spots before stakeholders do.
Everyone Agrees Too Quickly
The team is aligned. The AI validated the approach. Nobody's pushing back. Groupthink warning.
Use: Decision Matrix to force explicit trade-off analysis with sensitivity testing.
You Jumped Straight to Solutions
You asked for architecture recommendations before defining what you're actually solving. The AI gave you a plausible answer to a question you hadn't really asked.
Use: Sequential Thinking to enforce stage gates before committing to a direction.
Your AI Won't Challenge You. These Will.
$20/month for reasoning sessions that challenge instead of validate. No usage metering, no overage charges.