When something breaks in production, the pressure to ship a fix fast is real — but so is the cost of shipping the wrong one. NodePad lets you run your diagnosis in one thread, fork it to get a second opinion from a different model, and merge both perspectives into a single, grounded decision without losing any of the reasoning that led you there.

Why forking helps with technical troubleshooting

A second opinion matters most when the stakes are high. On a single AI thread, you’re trusting one model’s read of the situation. On NodePad’s canvas, you can fork any message and send a parallel thread to a different model with the same context. If both models agree, your confidence goes up. If they diverge, you learn something important before you touch production.

Diagnose in main thread

Work through the problem with one model. Get a clear hypothesis before you commit to any fix.

Fork for second opinion

Branch from the diagnosis message. Send the same context to a second model and ask whether it agrees — or what it sees differently.

Merge into a decision

Pull both findings together into a single node you write. Ship with confidence knowing two independent models reviewed the same problem.

Walkthrough: connection pool exhaustion

Here is how a production incident investigation plays out across a forked canvas.
Step 1: Open a canvas and start your diagnosis thread

Create a canvas for the incident. In the first node, paste your error output and stack trace and ask for a diagnosis. In this example, your API is throwing 500s with the message pq: sorry, too many clients already. Using Claude Opus as your main-thread model, ask: “What does this error mean, and what’s the most likely cause?” Claude identifies connection pool exhaustion: the API is opening database connections faster than it closes them. It recommends pgbouncer as a connection pooler and suggests reviewing your pool sizing configuration.
Attach a sticky note to this node scoped to the affected layer: “focus only on the database and connection management layer — do not suggest application-level rewrites.” This keeps the diagnosis from sprawling into unrelated recommendations.
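The failure mode Claude describes can be sketched without touching a real database. The toy Pool below is an illustrative stand-in, not a lib/pq or NodePad API: it caps open connections the way Postgres's max_connections does, and shows how a handler that acquires without releasing eventually hits the same error the API is logging.

```go
package main

import (
	"errors"
	"fmt"
)

// Pool is a toy connection pool used to illustrate exhaustion.
// max stands in for Postgres's max_connections limit (value is illustrative).
type Pool struct {
	max  int
	open int
}

var ErrTooManyClients = errors.New("sorry, too many clients already")

// Acquire opens a connection slot, failing once the cap is reached —
// the condition lib/pq surfaces as "pq: sorry, too many clients already".
func (p *Pool) Acquire() error {
	if p.open >= p.max {
		return ErrTooManyClients
	}
	p.open++
	return nil
}

// Release returns a slot to the pool.
func (p *Pool) Release() { p.open-- }

func main() {
	p := &Pool{max: 3}
	// Each request acquires but never releases, so slots leak until the cap.
	for i := 1; i <= 4; i++ {
		if err := p.Acquire(); err != nil {
			fmt.Printf("request %d: %v\n", i, err)
			return
		}
		fmt.Printf("request %d: ok (%d/%d open)\n", i, p.open, p.max)
	}
}
```

With a cap of 3, the fourth request fails: connections are opened faster than they are closed, which is exactly the signature in the stack trace.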
Step 2: Fork from the diagnosis message before touching production

Before you act on Claude’s recommendation, fork from the diagnosis message. This creates a new thread that inherits the full context (the error, the stack trace, and Claude’s initial finding) but runs independently. Switch to GPT-5 on the forked thread and ask: “Do you agree with this diagnosis? What else do you see?” GPT-5 confirms the connection pool exhaustion reading and the pgbouncer recommendation. It also flags a separate issue: missing defer conn.Close() calls in the middleware, which cause a slow connection leak that would persist even after you add pooling.
The fork gives you a genuinely independent read. GPT-5 is not seeing Claude’s answer — it’s working from the same raw problem context and arriving at its own conclusions.
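GPT-5's extra finding can be sketched the same way. Pool, leakyHandler, and fixedHandler below are illustrative names, not the actual middleware: the point is the contrast between a handler that drops the release and one that defers it on every return path, which is the pattern behind the missing defer conn.Close() calls.

```go
package main

import "fmt"

// Pool is the same toy stand-in for a capped connection pool.
type Pool struct{ max, open int }

// Acquire hands back a release func alongside the slot.
func (p *Pool) Acquire() (release func(), err error) {
	if p.open >= p.max {
		return nil, fmt.Errorf("sorry, too many clients already")
	}
	p.open++
	return func() { p.open-- }, nil
}

// leakyHandler forgets to release: each call leaks one slot.
func leakyHandler(p *Pool) error {
	_, err := p.Acquire() // release func is discarded
	return err
}

// fixedHandler defers the release, so the slot is returned on
// every exit path — the fix GPT-5 suggested for the middleware.
func fixedHandler(p *Pool) error {
	release, err := p.Acquire()
	if err != nil {
		return err
	}
	defer release()
	return nil
}

func main() {
	leaky, fixed := &Pool{max: 2}, &Pool{max: 2}
	for i := 0; i < 3; i++ {
		if err := leakyHandler(leaky); err != nil {
			fmt.Println("leaky:", err)
		}
		if err := fixedHandler(fixed); err != nil {
			fmt.Println("fixed:", err)
		}
	}
	fmt.Printf("leaky open=%d, fixed open=%d\n", leaky.open, fixed.open)
	// prints: leaky open=2, fixed open=0
}
```

Note that pooling alone would not cure the leaky version; the pool would just exhaust more slowly, which is why the leak is a separate fix from pgbouncer.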
Step 3: Review both threads side by side

Arrange the two thread nodes as neighbors on the canvas. Compare what each model found:
  • Claude Opus: pool exhaustion → pgbouncer + pool sizing
  • GPT-5: agrees on pooling, additionally flags missing defer conn.Close() → connection leak in middleware
The overlap gives you confidence in the pooling fix. The divergence — GPT-5’s extra finding — gives you something Claude missed. Both are now on your canvas, visible together, before you’ve changed a single line of code.
Step 4: Merge your findings into a shipping decision

Create a merge node and write your decision in plain language, drawing from both threads: “Rolling out both changes: pgbouncer for connection pooling with revised pool sizing, plus a fix for the leaky middleware by adding the missing defer conn.Close() calls. Shipping to staging first.” This merge node becomes your record of what you decided and why. Follow-up questions, from you or a teammate, can continue from this node with full context.
Merging in NodePad means you author the synthesis. The models don’t automatically reconcile their outputs. Read both threads yourself and write the decision — that’s where you catch contradictions before they reach production.
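For the pooling half of the decision, a minimal pgbouncer configuration might look like the sketch below. The setting names (pool_mode, default_pool_size, max_client_conn) are real pgbouncer options, but the host, database name, and sizes here are placeholder assumptions to tune for your workload, not values from the incident.

```ini
; pgbouncer.ini — illustrative sketch, not a drop-in config
[databases]
; appdb and the local host/port are placeholder assumptions
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
; transaction pooling returns server connections between transactions
pool_mode = transaction
; sizes below are starting points to revise against your measured load
default_pool_size = 20
max_client_conn = 500
```

Point the API at port 6432 instead of 5432 and pgbouncer multiplexes many client connections onto a small, fixed set of server connections, which keeps you under Postgres's max_connections even during spikes.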

Tips for engineering canvases

On a complex system, AI models will often offer advice beyond what you asked for. Sticky notes let you pin a constraint — “stay within the auth middleware, do not suggest schema changes” — so each thread stays focused on the layer you’re actually debugging.
The highest-value moment to fork is when you’re about to do something you can’t undo: deploy to production, run a migration, or open a firewall rule. A five-minute fork for a second opinion is almost always worth it.
A merge node written in your own words — “here’s what we found, here’s what we’re doing” — becomes a lightweight post-mortem. Because it lives on the canvas alongside the source threads, anyone reading it can trace the reasoning by expanding the parent nodes.
No model catches everything. Running two models in parallel threads on the same problem is a practical way to reduce the chance that a second issue hides behind the first. This is especially useful for security-sensitive changes.