Why bring your own keys
Keep data in your infrastructure
Requests route through your own API accounts. NodePad never stores prompts or responses on its servers when you use your own keys.
Use your negotiated pricing
If your organization has volume pricing with Anthropic, OpenAI, or Google, your NodePad usage bills against those negotiated rates rather than NodePad's retail pricing.
Air-gapped environments
Connect a local Ollama instance or an internal vLLM deployment. NodePad can operate without any outbound internet traffic to third-party model providers.
Vendor flexibility
Switch providers or endpoints without changing how you use NodePad. Your canvas, threads, and workflows stay intact.
Supported inference endpoints
In addition to standard provider API keys, NodePad supports the following self-hosted and cloud inference backends:
- Ollama
- vLLM
- Amazon Bedrock
- Google Vertex AI
Ollama lets you run open-weight models locally on your own machine or an internal server. Point NodePad at your Ollama instance's address and it appears as a selectable model provider across your canvas. This makes it suitable for fully air-gapped or on-premises deployments.
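If you want to confirm that your Ollama instance is reachable before adding it in NodePad, a quick check against Ollama's REST API works. The sketch below assumes Ollama's default address and port (localhost:11434); swap in your internal server's host if it runs elsewhere.

```python
import json
import urllib.request

# Default Ollama address; replace with your internal server's host if different.
OLLAMA_BASE_URL = "http://localhost:11434"

def list_ollama_models(base_url: str = OLLAMA_BASE_URL) -> list[str]:
    """Query Ollama's /api/tags endpoint and return the locally available model names."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        payload = json.load(resp)
    return [model["name"] for model in payload.get("models", [])]

if __name__ == "__main__":
    models = list_ollama_models()
    print(f"Ollama is reachable; {len(models)} model(s) available: {models}")
```

If this call succeeds, the same base URL is what you would enter in NodePad's model configuration.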
How to configure your keys or endpoint
Open workspace settings
From any canvas, open the workspace settings panel using the menu in the top-right corner of the NodePad interface.
Go to the Models tab
Select the Models tab in the settings panel. You’ll see the current model configuration for your workspace.
Add your credentials
Enter your API key for the provider you want to connect, or supply the base URL and any required authentication details for your self-hosted endpoint.
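If you are connecting a self-hosted endpoint rather than a provider key, it can help to confirm the base URL and token work before entering them in NodePad. The sketch below assumes an OpenAI-compatible server (as exposed by vLLM) at a hypothetical internal address; the URL and token are placeholders for your own deployment.

```python
import json
import urllib.request

# Hypothetical internal endpoint and token; substitute your own values.
BASE_URL = "https://llm.internal.example.com/v1"
API_TOKEN = "replace-with-your-token"

def check_openai_compatible_endpoint(base_url: str, token: str) -> list[str]:
    """List models exposed by an OpenAI-compatible server (e.g. vLLM's /v1/models route)."""
    req = urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        payload = json.load(resp)
    return [model["id"] for model in payload.get("data", [])]

if __name__ == "__main__":
    print("Models available:", check_openai_compatible_endpoint(BASE_URL, API_TOKEN))
```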
Test the connection
Use the Test connection button to confirm NodePad can reach your endpoint and authenticate successfully before saving.
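The Test connection button performs this check for you, but if it fails you can verify a provider key manually outside NodePad to narrow down whether the problem is the key or the network. The sketch below checks an OpenAI key by listing the models it can see; equivalent lightweight calls exist for Anthropic and Google.

```python
import json
import os
import urllib.error
import urllib.request

def verify_openai_key(api_key: str) -> bool:
    """Return True if the key authenticates against OpenAI's /v1/models endpoint."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            data = json.load(resp)
        print(f"Key is valid; {len(data.get('data', []))} models visible.")
        return True
    except urllib.error.HTTPError as err:
        # 401 means the key was rejected; other codes point to network or account issues.
        print(f"Request failed with HTTP {err.code}.")
        return False

if __name__ == "__main__":
    verify_openai_key(os.environ["OPENAI_API_KEY"])
```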
Your API keys are encrypted at rest and never exposed to other workspace members. Workspace admins can see which providers are connected but cannot view the key values themselves.