Why forking helps with technical troubleshooting
A second opinion matters most when the stakes are high. On a single AI thread, you’re trusting one model’s read of the situation. On NodePad’s canvas, you can fork any message and send a parallel thread to a different model with the same context. If both models agree, your confidence goes up. If they diverge, you learn something important before you touch production.
Diagnose in main thread
Work through the problem with one model. Get a clear hypothesis before you commit to any fix.
Fork for second opinion
Branch from the diagnosis message. Send the same context to a second model and ask whether it agrees — or what it sees differently.
Merge into a decision
Pull both findings together into a single node you write. Ship with confidence knowing two independent models reviewed the same problem.
Walkthrough: connection pool exhaustion
Here is how a production incident investigation plays out across a forked canvas.
Open a canvas and start your diagnosis thread
Create a canvas for the incident. In the first node, paste your error output and stack trace and ask for a diagnosis. In this example, your API is throwing 500s with the message
pq: sorry, too many clients already.
Using Claude Opus as your main thread model, ask: “What does this error mean and what’s the most likely cause?” Claude identifies connection pool exhaustion — the API is opening database connections faster than it’s closing them. It recommends pgbouncer as a connection pooler and suggests reviewing pool sizing configuration.
Fork from the diagnosis message before touching production
Before you act on Claude’s recommendation, fork from the diagnosis message. This creates a new thread that inherits full context — the error, the stack trace, and Claude’s initial finding — but runs independently.
Switch to GPT-5 on the forked thread and ask: “Do you agree with this diagnosis? What else do you see?” GPT-5 confirms the connection pool exhaustion reading and the pgbouncer recommendation. It also flags a separate issue: missing
defer conn.Close() calls in the middleware, which is causing a slow connection leak that would persist even after you add pooling.
The fork gives you a genuinely independent read. GPT-5 is not seeing Claude’s answer — it’s working from the same raw problem context and arriving at its own conclusions.
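The leak pattern GPT-5 describes can be illustrated with a minimal sketch. The pool type, handler names, and sizes below are hypothetical (not the lib/pq or database/sql API); they only show why a handler that drops its release function eventually exhausts a fixed-size pool, while a deferred release does not:

```go
package main

import (
	"errors"
	"fmt"
)

// pool is a toy fixed-size connection pool (illustration only).
type pool struct {
	free int
}

var errExhausted = errors.New("too many clients already")

// acquire takes a connection and returns a release function.
func (p *pool) acquire() (func(), error) {
	if p.free == 0 {
		return nil, errExhausted
	}
	p.free--
	return func() { p.free++ }, nil
}

// leakyHandler acquires a connection but drops the release function,
// the bug flagged in the forked thread: each request leaks one slot.
func leakyHandler(p *pool) error {
	_, err := p.acquire()
	if err != nil {
		return err
	}
	// ... handle request; release was discarded, so the slot never returns.
	return nil
}

// fixedHandler releases on every exit path via defer.
func fixedHandler(p *pool) error {
	release, err := p.acquire()
	if err != nil {
		return err
	}
	defer release() // runs even on early return or panic
	// ... handle request ...
	return nil
}

func main() {
	p := &pool{free: 3}
	for i := 0; i < 4; i++ {
		if err := leakyHandler(p); err != nil {
			fmt.Println("leaky:", err) // fails once the pool drains
		}
	}

	p = &pool{free: 3}
	for i := 0; i < 4; i++ {
		if err := fixedHandler(p); err != nil {
			fmt.Println("fixed:", err) // never reached
		}
	}
	fmt.Println("free after fixed handlers:", p.free)
}
```

Adding pooling alone would only have slowed this failure down; the deferred release is what stops the slow drain.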
Review both threads side by side
Arrange the two thread nodes as neighbors on the canvas. Compare what each model found:
- Claude Opus: pool exhaustion → pgbouncer + pool sizing
- GPT-5: agrees on pooling, additionally flags missing defer conn.Close() → connection leak in middleware
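If you adopt the pooling recommendation, a minimal pgbouncer configuration might look like the sketch below. The database name, auth file path, and sizing values are assumptions to be tuned for your workload, not values from the incident:

```ini
[databases]
; appdb is a placeholder; point this at your actual database
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; return server connections at transaction end
pool_mode = transaction
; cap on server-side connections per database/user pair
default_pool_size = 20
; cap on client connections pgbouncer will accept
max_client_conn = 200
```

The API then connects to port 6432 instead of 5432. Transaction pooling bounds concurrent Postgres connections even under bursty load, but it is incompatible with session-level features such as LISTEN/NOTIFY and session-scoped prepared statements, which is worth confirming before shipping.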
Merge your findings into a shipping decision
Create a merge node and write your decision in plain language, drawing from both threads: “Rolling both changes: pgbouncer for connection pooling with revised pool sizing, plus fixing the leaky middleware by adding the missing
defer conn.Close() calls. Shipping to staging first.”
This merge node becomes your record of what you decided and why. Follow-up questions — from you or a teammate — can continue from this node with full context.
Tips for engineering canvases
Scope threads with sticky notes to avoid scope creep
On a complex system, AI models will often offer advice beyond what you asked for. Sticky notes let you pin a constraint — “stay within the auth middleware, do not suggest schema changes” — so each thread stays focused on the layer you’re actually debugging.
Fork before any irreversible action
The highest-value moment to fork is when you’re about to do something you can’t undo: deploy to production, run a migration, or open a firewall rule. A five-minute fork for a second opinion is almost always worth it.
Use the merge node as your incident record
A merge node written in your own words — “here’s what we found, here’s what we’re doing” — becomes a lightweight post-mortem. Because it lives on the canvas alongside the source threads, anyone reading it can trace the reasoning by expanding the parent nodes.
Different models have different blind spots
No model catches everything. Running two models in parallel threads on the same problem is a practical way to reduce the chance that a second issue hides behind the first. This is especially useful for security-sensitive changes.