Agent Recipes
The SmythOS Runtime Environment (SRE) gives you reusable components and connectors.
This page shows common recipes for combining them into working agents, using real examples from the SRE repo.
How to use these
Each recipe is a self-contained TypeScript snippet. Install @smythos/sdk, export any API keys the snippet reads from the environment, and adapt the names (indexes, namespaces, URLs) to your own setup.
Recipe 1: RAG (Retrieval-Augmented Generation)
Index documents, then answer queries with context.
```typescript
import { Agent, Model } from '@smythos/sdk';

// Create an agent backed by GPT-4o
const agent = new Agent({ name: 'RAG Demo', model: 'gpt-4o' });

// Pinecone connector with OpenAI embeddings
const pinecone = agent.vectorDB.Pinecone('my-namespace', {
    indexName: 'my-index',
    apiKey: process.env.PINECONE_API_KEY,
    embeddings: Model.OpenAI('text-embedding-3-large'),
});

// Insert a document
await pinecone.insertDoc('doc-1', 'The Eiffel Tower is in Paris, France.');

// Query with retrieval
const results = await pinecone.search('Where is the Eiffel Tower located?');
console.log(results);
```
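The search call retrieves context but stops short of generation; to complete the RAG loop, the retrieved text still has to be stitched into the model prompt. A minimal sketch of that stitching step (the helper name and prompt layout are illustrative, not part of the SDK):

```typescript
// Illustrative helper (not part of @smythos/sdk): fold retrieved
// chunks into a grounded prompt for the LLM.
function buildRagPrompt(question: string, chunks: string[]): string {
  // Number the chunks so the model can cite which one it used.
  const context = chunks.map((c, i) => `[${i + 1}] ${c}`).join('\n');
  return [
    'Answer the question using only the context below.',
    '',
    `Context:\n${context}`,
    '',
    `Question: ${question}`,
  ].join('\n');
}

const prompt = buildRagPrompt('Where is the Eiffel Tower located?', [
  'The Eiffel Tower is in Paris, France.',
]);
console.log(prompt);
```

From there, passing the assembled string to `agent.prompt(...)` (as in Recipe 5) produces the grounded answer.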
When to use
Reach for RAG when answers must be grounded in your own documents rather than in the model's training data: internal knowledge bases, support content, or anything that changes faster than the model.
Recipe 2: API Call → Classify → Summarize
Fetch external data, route it, and produce a concise answer.
```typescript
import { Agent } from '@smythos/sdk';

const agent = new Agent({ name: 'News Summarizer', model: 'gpt-4o' });

// Fetch data from an API
const api = agent.component.APICall({ url: 'https://newsapi.org/v2/top-headlines' });

// Classify articles by topic
const classifier = agent.component.Classifier({ classes: ['tech', 'sports', 'politics'] });

// Wire the pipeline: API response -> classifier -> LLM summary
// (port names such as Response/Input/Topic depend on the component)
api.out.Response.connect(classifier.in.Input);
classifier.out.Topic.connect(agent.llm.OpenAI('gpt-4o').in.Input);

// Example run
const response = await agent.run();
console.log(response);
```
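Under the component graph, the routing idea is just a dispatch on the predicted class. A plain-TypeScript sketch of that routing (the class names match the snippet above; the handler bodies are placeholders standing in for the LLM prompts):

```typescript
type Topic = 'tech' | 'sports' | 'politics';

// Placeholder handlers: in the recipe these would be LLM summarization prompts.
const summarizers: Record<Topic, (article: string) => string> = {
  tech: (a) => `tech summary of: ${a}`,
  sports: (a) => `sports summary of: ${a}`,
  politics: (a) => `politics summary of: ${a}`,
};

// Route an article to the handler for its predicted topic.
function route(topic: Topic, article: string): string {
  return summarizers[topic](article);
}

console.log(route('tech', 'New GPU released'));
```

Keeping the dispatch table explicit makes it easy to add a class: extend the `Topic` union and the table, and the compiler flags any missing handler.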
Recipe 3: Parallel Workflows (Fan-Out / Fan-In)
Run multiple LLM prompts in parallel, then join results.
```typescript
import { Agent } from '@smythos/sdk';

const agent = new Agent({ name: 'Parallel Demo', model: 'gpt-4o' });

// Branch into async tasks
const asyncBlock = agent.component.Async();

// Two prompts run in parallel
asyncBlock.add(agent.llm.OpenAI('gpt-4o').prompt('Summarize AI news'));
asyncBlock.add(agent.llm.OpenAI('gpt-4o').prompt('Summarize sports news'));

// Join the branches before the agent finishes
const awaitBlock = agent.component.Await(asyncBlock);

const output = await agent.run();
console.log(output);
```
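The Async/Await components express the same pattern plain promises do: start every branch, then join. A dependency-free sketch of fan-out/fan-in (the `summarize` stub stands in for the LLM calls):

```typescript
// Stand-in for an LLM call; resolves after a simulated delay.
function summarize(topic: string): Promise<string> {
  return new Promise((resolve) =>
    setTimeout(() => resolve(`summary of ${topic}`), 10),
  );
}

async function fanOutFanIn(topics: string[]): Promise<string[]> {
  // Fan out: start every branch before awaiting any of them...
  const branches = topics.map((t) => summarize(t));
  // ...then fan in: join all results, preserved in input order.
  return Promise.all(branches);
}

const results = await fanOutFanIn(['AI news', 'sports news']);
console.log(results);
```

Because all branches start before any is awaited, total latency is roughly that of the slowest branch, not the sum of all of them.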
Why it matters
Independent prompts don't need to run one after another: fanning out and joining cuts end-to-end latency to roughly the slowest branch instead of the sum of all branches.
Recipe 4: Stream Responses for a Chat UI
Stream tokens for a live user interface instead of waiting for full completion.
```typescript
import { Agent, TLLMEvent } from '@smythos/sdk';

const agent = new Agent({ name: 'ChatBot', model: 'gpt-4o' });

// Stream tokens as they arrive instead of waiting for the full reply
const stream = await agent.llm.OpenAI('gpt-4o').prompt('Tell me a joke.').stream();

stream.on(TLLMEvent.Content, (chunk) => process.stdout.write(chunk));
stream.on(TLLMEvent.End, () => console.log('\n-- done --'));
```
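On the UI side, the Content events usually feed an accumulator that the view re-renders from. A minimal sketch of such an accumulator (the class is illustrative, not an SDK type):

```typescript
// Illustrative accumulator: collects streamed chunks and notifies
// a render callback with the full text so far.
class StreamBuffer {
  private text = '';

  constructor(private onUpdate: (soFar: string) => void) {}

  push(chunk: string): void {
    this.text += chunk;
    this.onUpdate(this.text);
  }

  value(): string {
    return this.text;
  }
}

const frames: string[] = [];
const buf = new StreamBuffer((s) => frames.push(s));
['Why did ', 'the dev ', 'cross the road?'].forEach((c) => buf.push(c));
console.log(buf.value());
```

Wiring it up is one line: `stream.on(TLLMEvent.Content, (chunk) => buf.push(chunk))`, with the render callback updating whatever view framework you use.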
Recipe 5: Import and Extend a Studio Workflow
Mix no-code (Studio) with code (SDK).
```typescript
import path from 'node:path';
import { Agent, Model } from '@smythos/sdk';

async function main() {
    const agentPath = path.resolve(__dirname, 'my-agent.smyth');

    // Import the Studio workflow
    const agent = Agent.import(agentPath, {
        model: Model.OpenAI('gpt-4o'),
    });

    // Extend it with a custom code skill
    agent.addSkill({
        name: 'reverse',
        description: 'Reverse text',
        process: async ({ input }) => input.split('').reverse().join(''),
    });

    // Run
    const result = await agent.prompt('Reverse "SmythOS".');
    console.log(result);
}

main().catch(console.error);
```
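Because a skill's `process` is an ordinary function, it can be unit-tested without running the agent at all. The handler from the snippet above, pulled out as a plain (here synchronous) function:

```typescript
// The reverse-skill handler from the recipe, extracted so it can be
// tested in isolation before being passed to agent.addSkill.
const reverseText = ({ input }: { input: string }): string =>
  input.split('').reverse().join('');

console.log(reverseText({ input: 'SmythOS' })); // 'SOhtymS'
```

The same function (wrapped back in `async` if you prefer) is what you hand to `addSkill` as `process`.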
Why it’s powerful
Teams can prototype a workflow visually in Studio, then drop into code only where they need custom logic, without rebuilding either side.
What’s Next?
- Explore more in the SRE repo examples
- Learn about Building Agents
- Dive into Hybrid Workflows for mixing approaches