Vault
The Vault is where you securely store API keys, tokens, and model credentials in SmythOS.
Instead of hardcoding secrets, you add them once to Vault. When your agent runs, SmythOS injects the right key into the right component automatically.
Add your first key
You’ll use Vault most often to manage API keys. Here’s a simple example:
- Go to Vault → API Keys → Add New
- Paste your API token
- Give it a descriptive name (e.g. Zapier Prod)
- Choose a scope (where the key can be used)
- Save
Now your agent can reference that key without you pasting it into every component.
| Field | What it means |
|---|---|
| Key | The actual API token |
| Name | A label to help you identify the key |
| Scope | Limits where the key is applied |
Example: Zapier key
```json
{
  "name": "Zapier Prod",
  "key": "zp-abc123...",
  "scope": "Zapier Action"
}
```
This key will automatically attach when you run Zapier integrations.
Built-in AI providers
SmythOS includes managed credentials for several AI providers so you can test quickly without setup.
| Provider | Availability |
|---|---|
| OpenAI | Built-in |
| Anthropic | Built-in |
| Google AI | Built-in |
| Perplexity | Built-in |
Bring your own provider keys
For production use, you’ll usually connect your own provider credentials. This ensures full control over billing, quotas, and model options.
| Provider | Setup Type | Notes |
|---|---|---|
| OpenAI | Manual Setup | Use your GPT key |
| Anthropic | Manual Setup | Connect Claude |
| Google AI | Manual Setup | Use Gemini |
| Together AI | Manual Setup | Open-source models |
| xAI | Manual Setup | Grok |
To add a provider, open Vault → Add Provider and paste your credentials.
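For illustration, a provider entry stored in Vault might look like the sketch below. The field names mirror the API key example above; the exact form fields and the `scope` value shown here are assumptions, not guaranteed labels:

```json
{
  "name": "OpenAI Prod",
  "key": "sk-abc123...",
  "scope": "All Components"
}
```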
Custom Models
You can now add custom models to the SmythOS platform directly through the Vault.
This feature lets you connect your own model servers or third-party hosted APIs, supporting both Ollama and OpenAI-compatible API endpoints.
How it works
- Go to Vault → Custom Models
- Click Add Custom Model
- Provide the required details:
- Name: The display name shown in the model list
- Model ID: Copy the identifier from your supported model (e.g., `grok-1`, `ollama-mistral-7b`)
- Base URL: The API base endpoint (for SaaS: must be public, not localhost or a private IP)
- Provider: Select Ollama or OpenAI
- API Key: Add the provider key if required
- Context Window: Maximum tokens for input + output
- Max Output Tokens: Maximum tokens for model response
- Fallback Model: Used when the custom model is unreachable
Feature Toggles
- Text Completion (enabled by default) — Makes model available in all LLM components
- Function Calling / Tool Use (disabled by default) — Enables model for Agents with skill calling
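Putting the required details and feature toggles together, a custom model entry might be sketched as follows. The nested `features` field names are illustrative, not the exact form labels:

```json
{
  "name": "Mistral 7B Local",
  "model_id": "mistral-7b",
  "provider": "Ollama",
  "features": {
    "text_completion": true,
    "function_calling": false
  }
}
```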
Once saved, reload your Builder page to see the newly added custom model in your model list.
Provider Specifications
Ollama
- Base URL: your hosted Ollama endpoint (e.g., `http://your-hosted-model`)
- Supported Models: Mistral, Llama 2, Neural Chat, Wizard Coder, and others available in Ollama's model library
- Authentication: Usually not required for local instances; include API key if your hosted instance requires it
OpenAI-Compatible
- Base URL: `https://api.openai.com/v1` (OpenAI) or a provider-specific endpoint (Grok, Together AI, etc.)
- Supported Providers: OpenAI, Azure OpenAI, Together AI, Replicate, Fireworks, xAI (Grok)
- Authentication: API key if required for OpenAI-compatible providers
Example: Adding an Ollama Model
```json
{
  "name": "Mistral 7B Local",
  "model_id": "mistral-7b",
  "base_url": "http://your-hosted-model",
  "provider": "Ollama",
  "api_key": "",
  "context_window": 8192,
  "max_output_tokens": 4096,
  "fallback_model": "gpt-4o-mini"
}
```
Example: Adding a Grok Model (OpenAI-Compatible)
```json
{
  "name": "Model for Grok",
  "model_id": "grok-1",
  "base_url": "https://api.grok.ai/v1",
  "provider": "OpenAI",
  "api_key": "grk-1234abcd...",
  "context_window": 32000,
  "max_output_tokens": 8000,
  "fallback_model": "gpt-4o-mini"
}
```
After saving, reload the Builder. Your custom model will appear at the top of the model list. You can now select it for testing or deployment just like built-in models.
Using Custom Models
Custom models appear in all LLM selection dropdowns across SmythOS:
- Agent Settings → Default LLM
- Skill components with LLM options
- Any component that accepts LLM selection
Managing Custom Models
- Edit: Go to Vault → Custom Models, click the Edit icon
- Delete: Click the Delete icon
Managing Keys
Custom model API keys are stored securely in Vault. You can reveal, edit, or remove them at any time from the model entry.
Enterprise platforms
If your team uses managed AI infrastructure, Vault supports enterprise providers too.
| Platform | Typical use |
|---|---|
| Amazon Bedrock | Secure AWS-hosted LLMs |
| Google Vertex AI | Managed ML + LLMs |
How to add
- Go to Vault → Add Enterprise Model
- Choose Bedrock or Vertex AI
- Name it, enter credentials, save
Required credentials
| Platform | Credentials needed | Optional settings |
|---|---|---|
| Google Vertex AI | Service Account Key, Project ID, Region | Context window, Max tokens, Temperature, Top-p, Response formatting |
| Amazon Bedrock | AWS Access Key ID, Secret, Region | Context window, Max tokens, Temperature, Top-p, Response formatting |
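As a sketch, a Vertex AI entry might combine the required credentials and optional settings like this. The field names are illustrative (the actual form uses separate inputs), and the values shown are placeholders:

```json
{
  "platform": "Google Vertex AI",
  "name": "Vertex Gemini Prod",
  "project_id": "my-gcp-project",
  "region": "us-central1",
  "service_account_key": "{ ...service account JSON... }",
  "context_window": 32000,
  "max_output_tokens": 8192
}
```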
Use keys inside agents
Once a key is in Vault and scoped, SmythOS injects it automatically into your components.
API Output
You can pull keys into headers, body, or OAuth flows.
```json
{
  "url": "https://api.example.com/v1/items",
  "method": "GET",
  "headers": {
    "Authorization": "Bearer {{key}}"
  }
}
```
See the API Output component guide for details.
Hugging Face
Enter your Vault-stored token in the Access Token field of the Hugging Face integration.
Zapier
Any key scoped to Zapier Action is automatically attached when your Zapier workflows run.
Best practices
- Give each key the smallest scope that works
- Rotate keys regularly and after any suspected exposure
- Keep separate keys for development, staging, and production
- Audit your usage logs for anomalies
Next steps
- Learn more about Authenticated Workflows
- See how to deploy agents with Vault access
- Explore the API Output component