GenAI LLM Component
The GenAI LLM component adds language skills to your agents. Write a precise prompt, select a model, connect inputs and outputs, then test and refine. Studio manages execution, security, observability, and file parsing so you can focus on the outcome you want to build.
When To Use GenAI LLM
What You Can Build Quickly
- Summarise: Turn long documents into action-oriented briefs
- Generate: Draft emails, replies, or outlines automatically
- Extract: Pull values from text (names, dates, amounts) into JSON
- Classify: Route tickets by priority, category, or sentiment (see the example prompt after this list)
- Process Files: Parse PDF or DOCX into searchable text
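For instance, the ticket-routing case above can be handled with a short classification prompt along these lines (an illustrative sketch; the category and priority labels are placeholders):

```
Classify the support ticket in {{Input}}.
Choose exactly one category: "billing", "technical", "account", or "other".
Rate priority as "low", "medium", or "high".
Return only JSON: { "category": "...", "priority": "..." }
```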
Step 1: Select a Model
Pick a built-in model or connect your own.
| Field | Required? | Description | Notes |
|---|---|---|---|
| Model | Yes | The LLM that executes your prompt | Includes GPT-5 family, Claude, Gemini, Groq, and others |
| Custom Model | No | Your own hosted endpoint or API | Best for specialised domains or large context sizes |
Context Windows
Step 2: Write a Precise Prompt
The prompt tells the model what to do, the format to use, and any constraints.
Example Prompt
You are an assistant that extracts structured insights.
From {{Attachment.text}}, create a JSON object with:
- "title": one-sentence summary
- "key_points": 3 bullet points
- "action": a next step recommendation
Return only valid JSON.
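A well-behaved reply to this prompt would look roughly like the following (illustrative values only):

```json
{
  "title": "Acme Corp requests a 24-month renewal with revised pricing.",
  "key_points": [
    "Contract term extends from 12 to 24 months",
    "Pricing increases 4% starting in January",
    "An auto-renewal clause has been added"
  ],
  "action": "Send the revised terms to legal for review."
}
```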
Prompt Guidelines
Step 3: Connect Inputs
Inputs are values you pass into the model.
| Input | Required? | Description | Notes |
|---|---|---|---|
| Input | Yes | Main string or variable used in the prompt | Inserted as {{Input}} |
| Attachment | No | Files like PDF, DOCX, PNG, or JPG | Auto-converted to text and available as Attachment.text |
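Both inputs can be referenced in the same prompt; a minimal sketch:

```
Answer the question in {{Input}} using only the document in {{Attachment.text}}.
If the document does not contain the answer, reply "Not found in the attachment."
```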
Reusable Inputs
Step 4: Configure Model Settings
Start with defaults. Adjust only when you need to guide behaviour.
Maximum Output Tokens caps reply length; set it high enough that replies are not cut off mid-sentence.
- Short replies: 128 to 256
- Long form: 1024 to 4096
- Typical default: 2048 or 8192, depending on the model
Sizing tip
Example
Summarise {{Input}} in 120 to 150 words. Output markdown.
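As a rough sizing check for that example, using the common approximation of about 1.3 tokens per English word:

```
150 words x 1.3 tokens/word ≈ 195 tokens
256-token cap  -> comfortable headroom for the summary plus markdown syntax
128-token cap  -> likely to cut the reply off mid-sentence
```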
Quick Reference
| Setting | What It Controls | OpenAI | Anthropic | Other Providers |
|---|---|---|---|---|
| Maximum Output Tokens | Caps how many tokens the model can generate in one reply | | | |
| Verbosity | Detail level of the model's replies | GPT-5 only | | |
| Reasoning Effort | Trade-off between deeper reasoning and speed | GPT-5 only | | |
| Passthrough | Returns raw, unformatted output | | | |
| Use Agent System Prompt | Applies global system instructions consistently | | | |
| Use Context Window | Includes conversation history in requests | | | |
| Use Web Search | Lets the model fetch real-time facts | | | |
| Top P | Probability mass (nucleus) sampling for variety | | | |
| Top K | Restricts sampling to the K most likely tokens | | | |
| Temperature | Controls randomness and creativity | | | |
| Stop Sequences | Defines strings where generation should end | | | |
Quick Presets
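As a rule of thumb, these are reasonable starting points for the sampling settings above (illustrative values, not product defaults):

```
Extraction / classification : Temperature 0 to 0.2,   Top P 1.0
Balanced summaries          : Temperature 0.3 to 0.5, Top P 0.9
Creative drafting           : Temperature 0.7 to 0.9, Top P 0.9 to 0.95
```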
Step 5: Define Outputs
Expose the model’s reply and map fields for downstream use.
| Output | Required? | Description | Example |
|---|---|---|---|
| Reply | Yes | Main model output | Paragraph, list, or JSON text |
| Custom Output | No | Extracted fields from the reply | Reply.summary, Reply.json.customer_id |
Custom Output Mapping
[
{ "Name": "title", "Expression": "Reply.title" },
{ "Name": "summary", "Expression": "Reply.summary", "Format": "markdown" }
]
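These expressions assume the model's reply is JSON with matching keys, for example (illustrative values):

```json
{
  "title": "Quarterly results summary",
  "summary": "Revenue grew 8% quarter over quarter, driven by new enterprise accounts."
}
```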
Formatting Options
Before You Go Live
You are close. Run a quick loop to make sure this component behaves the way you expect, then deploy it and keep an eye on it.
Quick Play Test
Review These Things
Test Your Component
Using {{Input}} or {{Attachment.text}}, return only this JSON:
{
"title": "<one sentence>",
"key_points": ["<point 1>", "<point 2>", "<point 3>"],
"action": "<one next step>"
}
If a field is missing, output an empty string instead of guessing.
Ship And Watch It
- Connect to the Code Component for validation or post-processing (see the sketch after this list)
- Add to your workflow and follow the Deploying Agents guide
- Keep an eye on logs and usage in Observability. Look for spikes in token count, latency jumps, or empty fields
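As an illustration of the first item above, here is a minimal validation sketch, assuming the Code Component receives the model's reply as a string named reply and runs Python (the variable name and required keys are placeholders; adapt them to your workflow):

```python
import json

# Fields the test prompt above asks the model to return.
REQUIRED_KEYS = {"title", "key_points", "action"}

def validate_reply(reply: str) -> dict:
    """Parse the LLM reply and confirm the expected JSON fields exist."""
    try:
        data = json.loads(reply)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Reply is not valid JSON: {exc}") from exc

    if not isinstance(data, dict):
        raise ValueError("Reply JSON must be an object")

    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"Reply is missing fields: {sorted(missing)}")
    return data
```

Failing loudly here makes malformed replies visible in Observability instead of silently passing bad data downstream.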
Green Flags To Publish
When To Revisit Settings
What's Next
- Review the Prompt Guide for proven techniques
- Use the Debugging Guide for deeper troubleshooting
- Explore Data Spaces if you need persistent knowledge