# GenAI LLM Component
Use the GenAI LLM component to generate dynamic responses from a prompt using an LLM such as OpenAI's GPT models or Anthropic's Claude. It's well suited to summaries, content generation, data extraction, and more.
## What You’ll Configure
- Model Selection
- Prompt Setup
- Input Binding
- Advanced Parameters
- Output Mapping
- Testing & Debug
- Best Practices
- Troubleshooting Tips
- What to Try Next
## Step 1: Choose a Model
Field | Required? | Description | Tips |
---|---|---|---|
Model | Yes | The LLM used to generate the response | By default you will see GPT 4o Mini (SmythOS 128K). You can also choose GPT 4.1 Nano, GPT 4.1 Mini, GPT o1, Claude Sonnet 3.7, Gemini 2.5 Pro, Sonar, or other supported engines. |
Custom Model | No | Use your own hosted model or external API | You can connect to OpenAI, Claude, Google AI, Grok, or any endpoint compatible with your workflow. |
Note on token limits: the "128K" in GPT 4o Mini (SmythOS 128K) is the model's context window. Your prompt, inputs, and the generated reply must all fit within that limit.
## Step 2: Define the Prompt

### Basic Prompt
```
Summarize {{Input}} into one paragraph.
```
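At runtime, the component substitutes each `{{Name}}` placeholder with the bound input's value before calling the model. A minimal Python sketch of that substitution (`render_prompt` is a hypothetical illustration, not the SmythOS implementation):

```python
import re

def render_prompt(template: str, inputs: dict) -> str:
    """Replace each {{Name}} placeholder with the matching input value.

    Hypothetical helper for illustration; SmythOS performs this
    substitution internally when the component runs.
    """
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in inputs:
            raise KeyError(f"No input bound for placeholder {{{{{name}}}}}")
        return str(inputs[name])
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

prompt = render_prompt(
    "Summarize {{Input}} into one paragraph.",
    {"Input": "The sakura tree blooms in spring."},
)
```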
Placeholders are resolved using the inputs you define in the next step.
## Step 3: Add Inputs
Input Name | Required? | Description | Notes |
---|---|---|---|
Input | Yes | Main prompt content | Injected as `{{Input}}` |
Attachment | No | Files or images (PDF, DOCX, JPG, etc.) | Converted to text for model input |
## Step 4: Configure Advanced Settings
Temperature controls the randomness of the model's output: lower values produce focused, consistent text, while higher values produce more varied, creative text.
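Under the hood, temperature rescales the model's next-token probabilities before sampling. This toy Python sketch shows the effect on a softmax distribution (the logits are made-up values for illustration only):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Dividing logits by the temperature before normalizing sharpens the
    # distribution when temperature < 1 and flattens it when > 1.
    scaled = [l / temperature for l in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                      # made-up next-token scores
cold = softmax_with_temperature(logits, 0.2)  # low: nearly deterministic
hot = softmax_with_temperature(logits, 2.0)   # high: closer to uniform
```

With temperature 0.2 nearly all probability mass lands on the top token; with 2.0 the choices are much closer to uniform, which is why high temperatures feel "creative".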
## Step 5: Define Outputs
Field | Required? | Description | Notes |
---|---|---|---|
Reply | Yes | The model's main output | Default output |
Custom Output | No | Extract fields using expressions | `Reply.title`, `Reply.summary`, etc. |
```json
{
  "Name": "summary",
  "Expression": "Reply.summary",
  "Format": "markdown",
  "Description": "The generated summary"
}
```
Supported `Format` values include `text`, `markdown`, `html`, and `json`.
## Step 6: Debug and Preview
Example Prompt Input:

```
Write a short article about the Sakura tree.
```
Example Custom Output Mapping:

```json
[
  { "Name": "title", "Expression": "Reply.title" },
  { "Name": "content", "Expression": "Reply.body" },
  { "Name": "keywords", "Expression": "Reply.keywords", "Format": "json" }
]
```
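Conceptually, each mapping resolves a dotted expression against the model's structured reply and stores the result under the output name. A Python sketch of that resolution (`apply_mappings` and the sample reply are hypothetical illustrations, not the platform's actual code):

```python
import json
from functools import reduce

def apply_mappings(reply: dict, mappings: list) -> dict:
    """Resolve dotted expressions like "Reply.title" against a parsed
    model reply. Hypothetical illustration only."""
    outputs = {}
    for mapping in mappings:
        # Drop the leading "Reply" segment, then walk the remaining keys.
        path = mapping["Expression"].split(".")[1:]
        value = reduce(lambda obj, key: obj[key], path, reply)
        if mapping.get("Format") == "json":
            value = json.dumps(value)  # serialize lists/objects as JSON text
        outputs[mapping["Name"]] = value
    return outputs

reply = {
    "title": "Sakura",
    "body": "A short article about the sakura tree.",
    "keywords": ["sakura", "spring"],
}
outs = apply_mappings(reply, [
    {"Name": "title", "Expression": "Reply.title"},
    {"Name": "content", "Expression": "Reply.body"},
    {"Name": "keywords", "Expression": "Reply.keywords", "Format": "json"},
])
```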
## Best Practices
- Use specific prompts: "List 3 key points from..." is better than "Summarize"
- Format custom outputs if needed (`text`, `html`, `json`)
- Use mock inputs in Debug to test multiple prompt paths
- Avoid putting complex logic inside a prompt; let the model generate clean data
- Use Passthrough Mode for total control over rendering or streaming
- Use Retry + Condition blocks to handle failed outputs or empty results
## Troubleshooting Tips
If your prompt isn't working, check that every `{{...}}` placeholder matches a defined input name, test with mock inputs in Debug mode, and make sure custom output expressions (such as `Reply.summary`) match the structure the model actually returns.
## What to Try Next
- Combine GenAI LLM with the Agent Skill component to let users submit prompts naturally
- Pipe GenAI output into the RAG Remember component to store facts for reuse
- Use the Code component downstream to transform, filter, or validate replies