NodePad gives you access to a wide range of AI models and lets you switch between them at any point in a conversation — not just at the start of a thread. You can also run different models simultaneously across your canvas, making it easy to compare outputs, gather second opinions, or delegate different tasks to the model best suited for each one.

Supported models

NodePad supports the following models out of the box:

Claude

Anthropic’s Claude family, including Claude Opus for complex reasoning tasks and lighter variants for faster responses.

GPT-5

OpenAI’s latest model, suited for a broad range of writing, analysis, and coding tasks.

Gemini

Google’s Gemini family, including Gemini 2.5 Pro for long-context and multimodal work.

Open-source models

A curated set of open-weight models. The list grows as new versions are released.

Enterprise customers can also connect custom inference endpoints, including Ollama, vLLM, Amazon Bedrock, and Google Vertex AI.
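NodePad's exact connection flow isn't shown here, but Ollama and vLLM both expose an OpenAI-compatible chat-completions API, so a custom endpoint typically accepts requests shaped like the sketch below. The URL, port, and model name are placeholders for your own deployment:

```python
import json
from urllib.request import Request

# Hypothetical self-hosted endpoint; Ollama and vLLM both serve an
# OpenAI-compatible API at a path like this. Adjust host/port to your setup.
ENDPOINT = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> Request:
    """Build an OpenAI-compatible chat-completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("llama3", "Summarize this thread in one sentence.")
# urllib.request.urlopen(req) would send it; omitted here since it needs a live server.
```

Any endpoint that speaks this request shape can slot in behind a tool that expects the OpenAI wire format.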

What makes NodePad different

Most AI tools lock you into a single model for an entire conversation. NodePad removes that constraint in two ways.

Per-message model selection: you can change the model on any individual message, mid-thread. If a question calls for deeper reasoning, switch to Claude Opus for that message and switch back afterward. The rest of the thread is unaffected.

Parallel model execution on the canvas: each thread on your canvas is independent. You can open one thread with Claude and another with GPT-5, ask them the same question, and read the answers side by side. No copy-pasting, no tab-switching.
Running the same prompt across two or three models at once is one of the fastest ways to pressure-test an answer or spot where models disagree.
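The side-by-side pattern is easy to picture as code. This is a sketch, not NodePad's implementation: query_model is a stub standing in for whatever backend serves each model, and the fan-out simply sends one prompt to several models concurrently and collects the answers:

```python
from concurrent.futures import ThreadPoolExecutor

def query_model(model: str, prompt: str) -> str:
    # Stub standing in for a real API call to the given model's backend.
    return f"[{model}] answer to: {prompt}"

def compare_models(models: list[str], prompt: str) -> dict[str, str]:
    """Send one prompt to several models in parallel; return answers keyed by model."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m: pool.submit(query_model, m, prompt) for m in models}
        return {m: f.result() for m, f in futures.items()}

answers = compare_models(["claude", "gpt-5"], "Is this proof sound?")
for model, answer in answers.items():
    print(f"{model}: {answer}")
```

Reading the collected answers next to each other is exactly the disagreement-spotting workflow the canvas gives you without writing any code.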

Next steps

Per-message model selection

Learn how to choose a different model for any individual message in a thread.

Bring your own keys

Connect your own API keys or point NodePad at your own inference endpoint.