Vault

The Vault is where you securely store API keys, tokens, and model credentials in SmythOS.
Instead of hardcoding secrets, you add them once to Vault. When your agent runs, SmythOS injects the right key into the right component automatically.

Why Vault?

Vault keeps credentials encrypted, scoped, and reusable. You don’t expose secrets in code or configs and you manage everything in one place.

Add your first key

You’ll use Vault most often to manage API keys. Here’s a simple example:

  1. Go to Vault → API Keys → Add New
  2. Paste your API token
  3. Give it a descriptive name (e.g. Zapier Prod)
  4. Choose a scope (where the key can be used)
  5. Save

Now your agent can reference that key without you pasting it into every component.

Field | What it means
Key   | The actual API token
Name  | A label to help you identify the key
Scope | Limits where the key is applied

Example: Zapier key

{
  "name": "Zapier Prod",
  "key": "zp-abc123...",
  "scope": "Zapier Action"
}

This key will automatically attach when you run Zapier integrations.

Built-in AI providers

SmythOS includes managed credentials for several AI providers so you can test quickly without setup.

Provider   | Availability
OpenAI     | Built-in
Anthropic  | Built-in
Google AI  | Built-in
Perplexity | Built-in

Credits

Built-in models use your SmythOS credit pool (see plans).
Switch to your own API key when you want direct control of spend and limits.

Bring your own provider keys

For production use, you’ll usually connect your own provider credentials. This ensures full control over billing, quotas, and model options.

Provider    | Setup Type   | Notes
OpenAI      | Manual Setup | Use your GPT key
Anthropic   | Manual Setup | Connect Claude
Google AI   | Manual Setup | Use Gemini
Together AI | Manual Setup | Open-source models
xAI         | Manual Setup | Grok

To add a provider, open Vault → Add Provider and paste your credentials.

Custom Models

You can now add custom models to the SmythOS platform directly through the Vault.
This feature lets you connect your own model servers or third-party hosted APIs, supporting both Ollama SDK and OpenAI-compatible SDK endpoints.

How it works

  1. Go to Vault → Custom Models
  2. Click Add Custom Model
  3. Provide the required details:
    • Name: The display name shown in the model list
    • Model ID: The identifier used by your provider (e.g., grok-1, ollama-mistral-7b)
    • Base URL: The API base endpoint (for SaaS: must be public, not localhost or private IP)
    • Provider: Select Ollama or OpenAI
    • API Key: Add the provider key if required
    • Context Window: Maximum tokens for input + output
    • Max Output Tokens: Maximum tokens for model response
    • Fallback Model: Used when the custom model is unreachable
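Context Window and Max Output Tokens interact: the window covers input plus output, so reserving tokens for the response shrinks the prompt budget. A small sketch of that arithmetic (the function name is illustrative, not a SmythOS API):

```python
def max_prompt_tokens(context_window: int, max_output_tokens: int) -> int:
    """Tokens left for the prompt once the response budget is reserved.

    The context window covers input + output, so the prompt can use at
    most the window minus the tokens reserved for the model's response.
    """
    if max_output_tokens > context_window:
        raise ValueError("max_output_tokens cannot exceed the context window")
    return context_window - max_output_tokens

# With an 8192-token window and 4096 reserved for output,
# 4096 tokens remain for the prompt.
print(max_prompt_tokens(8192, 4096))  # 4096
```

Setting Max Output Tokens close to the full window leaves almost no room for the prompt, so keep the two values balanced for your use case.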

Feature Toggles

  • Text Completion (enabled by default) — Makes model available in all LLM components
  • Function Calling / Tool Use (disabled by default) — Enables model for Agents with skill calling

Important Security Note (SaaS Environment)

Local or private IP addresses (like localhost, 127.0.0.1, 10.x.x.x, 172.16.x.x–172.31.x.x, or 192.168.x.x) are not allowed as base URLs in the SaaS environment for security reasons.
Always use a public or hosted endpoint instead (e.g., Grok, Together AI, or a remote Colab server).

Note: If you're running SmythOS on your own infrastructure, local endpoints are supported.
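The restriction above boils down to rejecting loopback and RFC 1918 private ranges. A hypothetical pre-check, using Python's standard library (this mirrors the rule described here, not the exact validation SmythOS runs):

```python
import ipaddress
from urllib.parse import urlparse

def is_allowed_base_url(url: str) -> bool:
    """Reject base URLs pointing at localhost or private/loopback addresses.

    Hostnames that are not IP literals are assumed to be public DNS names.
    """
    host = urlparse(url).hostname or ""
    if host == "localhost":
        return False
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return True  # a DNS name, assume public
    return not (ip.is_private or ip.is_loopback or ip.is_link_local)

print(is_allowed_base_url("http://127.0.0.1:11434"))       # False
print(is_allowed_base_url("https://api.together.xyz/v1"))  # True
```

Running a check like this before saving a custom model saves a round trip to a rejected configuration.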

Once saved, reload your Builder page to see the newly added custom model in your model list.

Provider Specifications

Ollama

  • Base URL: Your hosted Ollama endpoint (e.g., http://your-hosted-model)
  • Supported Models: Mistral, Llama 2, Neural Chat, Wizard Coder, and others available in Ollama's model library
  • Authentication: Usually not required for local instances; include API key if your hosted instance requires it

OpenAI-Compatible

  • Base URL: https://api.openai.com/v1 (OpenAI) or provider-specific endpoint (Grok, Together AI, etc.)
  • Supported Providers: OpenAI, Azure OpenAI, Together AI, Replicate, Fireworks, xAI (Grok)
  • Authentication: API key if required for OpenAI-compatible providers
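For OpenAI-compatible providers, every request follows the same shape: a `/chat/completions` path under the base URL, a bearer-token header, and a JSON body naming the model. A sketch of assembling that request (the base URL and key here are placeholders, and the helper itself is illustrative):

```python
import json

def chat_completion_request(base_url: str, api_key: str,
                            model_id: str, prompt: str) -> dict:
    """Assemble URL, headers, and body for an OpenAI-schema chat request."""
    return {
        "url": f"{base_url.rstrip('/')}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model_id,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = chat_completion_request("https://api.openai.com/v1",
                              "sk-placeholder", "gpt-4o-mini", "Hello")
print(req["url"])  # https://api.openai.com/v1/chat/completions
```

Any provider that accepts this shape (Together AI, xAI, Azure OpenAI, and so on) can be registered as an OpenAI-compatible custom model.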

Example: Adding an Ollama Model

{
  "name": "Mistral 7B Local",
  "model_id": "mistral-7b",
  "base_url": "http://your-hosted-model",
  "provider": "Ollama",
  "api_key": "",
  "context_window": 8192,
  "max_output_tokens": 4096,
  "fallback_model": "gpt-4o-mini"
}

Example: Adding a Grok Model (OpenAI-Compatible)

{
  "name": "Model for Grok",
  "model_id": "grok-1",
  "base_url": "https://api.grok.ai/v1",
  "provider": "OpenAI",
  "api_key": "grk-1234abcd...",
  "context_window": 32000,
  "max_output_tokens": 8000,
  "fallback_model": "gpt-4o-mini"
}

After saving, reload the Builder. Your custom model will appear at the top of the model list. You can now select it for testing or deployment just like built-in models.

Compatibility

SmythOS supports two SDK protocols for custom models:

  1. Ollama SDK — For models running on Ollama instances
  2. OpenAI SDK — For any model API compatible with the OpenAI schema (/v1/chat/completions, /v1/completions, etc.)

This includes integrations built with Ollama, Grok, Together AI, Replicate, or even custom endpoints hosted on services like Google Colab (if properly exposed).

Using Custom Models

Custom models appear in all LLM selection dropdowns across SmythOS:

  • Agent Settings → Default LLM
  • Skill components with LLM options
  • Any component that accepts LLM selection

Managing Custom Models

  • Edit: Go to Vault → Custom Models, click the Edit icon
  • Delete: Click the Delete icon

Managing Keys

Custom model API keys are stored securely in Vault. You can reveal, edit, or remove them at any time from the model entry.

Enterprise platforms

If your team uses managed AI infrastructure, Vault supports enterprise providers too.

Platform         | Typical use
Amazon Bedrock   | Secure AWS-hosted LLMs
Google Vertex AI | Managed ML + LLMs

How to add

  1. Go to Vault → Add Enterprise Model
  2. Choose Bedrock or Vertex AI
  3. Name it, enter credentials, save

Required credentials

Platform         | Credentials needed                       | Optional settings
Google Vertex AI | Service Account Key, Project ID, Region  | Context window, Max tokens, Temperature, Top-p, Response formatting
Amazon Bedrock   | AWS Access Key ID, Secret, Region        | Context window, Max tokens, Temperature, Top-p, Response formatting

Use keys inside agents

Once a key is in Vault and scoped, SmythOS injects it automatically into your components.

API Output

You can pull keys into headers, body, or OAuth flows.

{
  "url": "https://api.example.com/v1/items",
  "method": "GET",
  "headers": {
    "Authorization": "Bearer {{key}}"
  }
}

See the API Output component guide for details.
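The `{{key}}` placeholder above is resolved at runtime: the stored secret replaces the placeholder wherever it appears in the request. A hypothetical sketch of that substitution (the `resolve` helper is illustrative, not the SmythOS implementation):

```python
def resolve(template: dict, secrets: dict) -> dict:
    """Replace {{name}} placeholders in string values with stored secrets."""
    out = {}
    for field, value in template.items():
        if isinstance(value, dict):
            out[field] = resolve(value, secrets)  # recurse into nested objects
        elif isinstance(value, str):
            for name, secret in secrets.items():
                value = value.replace("{{" + name + "}}", secret)
            out[field] = value
        else:
            out[field] = value
    return out

request = {
    "url": "https://api.example.com/v1/items",
    "method": "GET",
    "headers": {"Authorization": "Bearer {{key}}"},
}
print(resolve(request, {"key": "zp-abc123..."})["headers"]["Authorization"])
# Bearer zp-abc123...
```

Because substitution happens at runtime, the secret never appears in the saved workflow definition.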

Hugging Face

Enter your Vault-stored token in Access Token inside the Hugging Face integration.

Zapier

Any key scoped to Zapier Action is automatically attached when your Zapier workflows run.

Best practices

  • Give each key the smallest scope that works
  • Rotate keys regularly and after any suspected exposure
  • Keep separate keys for development, staging, and production
  • Audit your usage logs for anomalies
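One way to keep per-environment keys straight is a naming convention that encodes the environment in the key's Vault name. The scheme below is illustrative; any convention your team will recognize works.

```python
# Hypothetical naming convention: "<Service> <EnvLabel>", e.g. "Zapier Prod".
ENV_SUFFIX = {"development": "Dev", "staging": "Staging", "production": "Prod"}

def vault_key_name(service: str, environment: str) -> str:
    """Build a descriptive Vault key name that encodes the environment."""
    return f"{service} {ENV_SUFFIX[environment]}"

print(vault_key_name("Zapier", "production"))  # Zapier Prod
```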

Security model

Secrets are encrypted at rest and in transit. They’re only injected at runtime into the components that need them.

FAQs

Do I have to use the built-in providers?
No. They’re for quick testing. You can add your own keys anytime, or connect enterprise platforms like Bedrock or Vertex AI.

Where are my secrets stored?
All credentials in Vault are encrypted at rest and in transit. They’re only injected into components at run time.

Can I share a key across multiple agents?
Yes. As long as the scope matches, you can use the same key across different agents and workflows.

What if my key stops working?
Rotate it in Vault. Replace the old value with a new one — components using that key will update automatically.

Can I restrict who can add or edit keys?
Yes. Vault follows your organization’s role-based access controls. Limit edit rights to admins and keep builders on read-only access if needed.

Do I need separate keys for dev and production?
Best practice is to create separate keys per environment so you can test safely without affecting live workloads.