Vault

The Vault is where you securely store API keys, tokens, and model credentials in SmythOS.
Instead of hardcoding secrets, you add them once to Vault. When your agent runs, SmythOS injects the right key into the right component automatically.

Why Vault?

Vault keeps credentials encrypted, scoped, and reusable. You never expose secrets in code or configs, and you manage everything in one place.

Add your first key

You’ll use Vault most often to manage API keys. Here’s a simple example:

  1. Go to Vault → API Keys → Add New
  2. Paste your API token
  3. Give it a descriptive name (e.g. Zapier Prod)
  4. Choose a scope (where the key can be used)
  5. Save

Now your agent can reference that key without you pasting it into every component.

Field | What it means
Key   | The actual API token
Name  | A label to help you identify the key
Scope | Limits where the key is applied

Example: Zapier key

{
  "name": "Zapier Prod",
  "key": "zp-abc123...",
  "scope": "Zapier Action"
}

This key will automatically attach when you run Zapier integrations.

Built-in AI providers

SmythOS includes managed credentials for several AI providers so you can test quickly without setup.

Provider   | Availability
OpenAI     | Built-in
Anthropic  | Built-in
Google AI  | Built-in
Perplexity | Built-in

Credits

Built-in models use your SmythOS credit pool (see plans).
Switch to your own API key when you want direct control of spend and limits.

Bring your own provider keys

For production use, you’ll usually connect your own provider credentials. This ensures full control over billing, quotas, and model options.

Provider    | Setup Type   | Notes
OpenAI      | Manual Setup | Use your GPT key
Anthropic   | Manual Setup | Connect Claude
Google AI   | Manual Setup | Use Gemini
Together AI | Manual Setup | Open-source models
xAI         | Manual Setup | Grok

To add a provider, open Vault → Add Provider and paste your credentials.

Custom Models

You can now add custom models to the SmythOS platform directly through the Vault.
This feature lets you connect your own model servers or third-party hosted APIs, as long as they expose endpoints compatible with the OpenAI SDK.

How it works

  1. Go to Vault → Custom Models
  2. Click Add Custom Model
  3. Provide the required details:
    • Name: The display name shown in the model list
    • Model ID: The model identifier from your provider (e.g., Grok, Together AI)
    • Base URL: The API base endpoint (must be public, not localhost or private IP)
    • Provider: Select OpenAI-compatible
    • API Key: Add the provider key (e.g., from Grok)
    • Context Window: Optional limit on context tokens
    • Fallback Model: Used when the custom model is unreachable

Important Security Note

Local or private IP addresses (such as localhost, 127.0.0.1, 10.x.x.x, 172.16.x.x–172.31.x.x, or 192.168.x.x) are not allowed as base URLs in the SaaS environment for security reasons.
Always use a public or hosted endpoint instead (e.g., Grok, Together AI, or a remote Colab server).
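As a rough illustration of that rule (this is not SmythOS's actual validation logic), the sketch below checks whether a base URL resolves to a loopback or private address before you try to save it:

# Illustration only: checks whether a base URL would be rejected as
# local/private. Not SmythOS's actual validation; requires DNS resolution.
import ipaddress
import socket
from urllib.parse import urlparse

def is_publicly_reachable(base_url: str) -> bool:
    host = urlparse(base_url).hostname
    if host is None:
        return False
    try:
        # Resolve the hostname, then classify the resulting address.
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)

print(is_publicly_reachable("http://localhost:8000/v1"))     # False: loopback
print(is_publicly_reachable("https://api.together.xyz/v1"))  # True: resolves to a public IP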

Once saved, reload your Builder page to see the newly added custom model in your model list.

Example: Adding a Grok model

{
  "name": "Model for Grok",
  "model_id": "grok-1",
  "base_url": "https://api.grok.ai/v1",
  "provider": "OpenAI",
  "api_key": "grk-1234abcd...",
  "fallback_model": "gpt-4o-mini"
}

After saving, reload the Builder. Your custom model will appear at the top of the model list. You can now select it for testing or deployment just like built-in models.

Compatibility

Any model API compatible with the OpenAI schema (/v1/chat/completions, /v1/completions, etc.) will work with SmythOS custom models. This includes servers run with Ollama or llama.cpp, hosted providers such as Grok and Together AI, and custom endpoints hosted on services like Google Colab (if properly exposed).
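
As a minimal sketch of what "OpenAI-compatible" means in practice, the snippet below calls such an endpoint with the official openai Python client. The base URL, model ID, and API key are placeholders for your own deployment; SmythOS sends roughly the same kind of request at runtime.

# Sketch only: if this call works against your endpoint, the endpoint is
# OpenAI-compatible in the sense used above. All values are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-model-host.example.com/v1",  # your custom Base URL
    api_key="your-provider-key",                         # the key you store in Vault
)

response = client.chat.completions.create(
    model="your-model-id",  # the Model ID you enter in Vault
    messages=[{"role": "user", "content": "Hello from SmythOS"}],
)
print(response.choices[0].message.content)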

Managing Keys

Custom model API keys are stored securely in Vault. You can reveal, edit, or remove them at any time from the model entry.

Enterprise platforms

If your team uses managed AI infrastructure, Vault supports enterprise providers too.

Platform         | Typical use
Amazon Bedrock   | Secure AWS-hosted LLMs
Google Vertex AI | Managed ML + LLMs

How to add

  1. Go to Vault → Add Enterprise Model
  2. Choose Bedrock or Vertex AI
  3. Name it, enter credentials, save

Required credentials

Platform         | Credentials needed                           | Optional settings
Google Vertex AI | Service Account Key, Project ID, Region      | Context window, Max tokens, Temperature, Top-p, Response formatting
Amazon Bedrock   | AWS Access Key ID, Secret Access Key, Region | Context window, Max tokens, Temperature, Top-p, Response formatting
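
For context on what the Bedrock credentials in the table authorize, here is a sketch of the kind of call they enable. The model ID and region are placeholders, not values SmythOS requires, and SmythOS makes equivalent calls on your behalf once the credentials are saved in Vault.

# Illustration only: a direct Bedrock invocation using the same credentials
# you would store in Vault. Model ID and region are placeholders.
import json
import boto3

client = boto3.client(
    "bedrock-runtime",
    aws_access_key_id="AKIA...",   # AWS Access Key ID from Vault
    aws_secret_access_key="...",   # AWS Secret Access Key from Vault
    region_name="us-east-1",       # Region
)

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello"}],
    }),
)
print(json.loads(response["body"].read()))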

Use keys inside agents

Once a key is in Vault and scoped, SmythOS injects it automatically into your components.

API Output

You can pull Vault keys into request headers, the request body, or OAuth flows.

{
  "url": "https://api.example.com/v1/items",
  "method": "GET",
  "headers": {
    "Authorization": "Bearer {{key}}"
  }
}
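
For comparison, here is a sketch of the HTTP request that configuration resolves to once SmythOS injects the Vault key at runtime. The URL and token are placeholders taken from the examples above; you never hardcode the key yourself.

# Illustration only: the request the component config above resolves to after
# SmythOS injects the Vault key. URL and token values are placeholders.
import requests

vault_key = "zp-abc123..."  # injected by SmythOS at runtime, not hardcoded

response = requests.get(
    "https://api.example.com/v1/items",
    headers={"Authorization": f"Bearer {vault_key}"},
)
print(response.status_code, response.json())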

See the API Output component guide for details.

Hugging Face

Enter your Vault-stored token in the Access Token field of the Hugging Face integration.

Zapier

Any key scoped to Zapier Action is automatically attached when your Zapier workflows run.

Best practices

  • Give each key the smallest scope that works
  • Rotate keys regularly and after any suspected exposure
  • Keep separate keys for development, staging, and production
  • Audit your usage logs for anomalies

Security model

Secrets are encrypted at rest and in transit. They’re only injected at runtime into the components that need them.

FAQs

Do I have to use the built-in providers?
No. They’re for quick testing. You can add your own keys anytime, or connect enterprise platforms like Bedrock or Vertex AI.

Where are my secrets stored?
All credentials in Vault are encrypted at rest and in transit. They’re only injected into components at run time.

Can I share a key across multiple agents?
Yes. As long as the scope matches, you can use the same key across different agents and workflows.

What if my key stops working?
Rotate it in Vault. Replace the old value with a new one — components using that key will update automatically.

Can I restrict who can add or edit keys?
Yes. Vault follows your organization’s role-based access controls. Limit edit rights to admins and keep builders on read-only access if needed.

Do I need separate keys for dev and production?
Best practice is to create separate keys per environment so you can test safely without affecting live workloads.