
GenAI LLM Component

Use the GenAI LLM component to generate dynamic responses from a prompt and an LLM from a provider such as OpenAI or Anthropic. It's perfect for summaries, content generation, extraction, and more.

Why this matters

With GenAI LLM, you can transform inputs like text or files into structured or styled output.
No custom code needed. You only need a prompt, a model, and the right inputs.

What You’ll Configure

Model setup

SmythOS includes default models (e.g. OpenAI GPT).
You can also connect models from other providers, such as Claude or Gemini.
See Model Rates → for cost and usage.

Step 1: Choose a Model

| Field | Required? | Description | Tips |
|---|---|---|---|
| Model | Yes | The LLM used to generate the response | By default you will see GPT 4o Mini (SmythOS 128K). You can also choose GPT 4.1 Nano, GPT 4.1 Mini, GPT o1, Claude Sonnet 3.7, Gemini 2.5 Pro, Sonar, or other supported engines. |
| Custom Model | No | Use your own hosted model or external API | You can connect to OpenAI, Claude, Google AI, Grok, or any endpoint compatible with your workflow. |
Token limits

Default models have token windows from 128K to 1M tokens (shown next to each model name). If you need to process very large inputs or include extra context, use a custom model with extended limits.

Step 2: Define the Prompt

Basic Prompt
Summarize {{Input}} into one paragraph.
Prompt inputs

Use {{Input}}, {{input.email}}, or even {{Attachment.text}} in your prompt.
Files like PDFs or DOCX will be auto-processed into text where possible.
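
For example, a prompt that combines a structured input with extracted file text might look like the sketch below. The input names come from the examples above; rename them to match the inputs you actually define.

Draft a short, polite reply to the customer email in {{input.email}}.
Use {{Attachment.text}} as supporting context and summarize any relevant details in one paragraph.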

Step 3: Add Inputs

| Input Name | Required? | Description | Notes |
|---|---|---|---|
| Input | Yes | Main prompt content | Injected as {{Input}} |
| Attachment | No | Files or images (PDF, DOCX, JPG, etc.) | Converted to text for model input |
TIP

You can rename inputs, assign types (string, file, etc.), and set defaults or make them optional.

Step 4: Configure Advanced Settings

Temperature controls randomness in model output.

Lower = focused. Higher = creative.
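
For example, a Temperature around 0.2 is a sensible starting point for extraction or summarization where consistency matters, while values around 0.7–0.9 give the model more freedom for brainstorming or creative copy.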

Step 5: Define Outputs

| Field | Required? | Description | Notes |
|---|---|---|---|
| Reply | Yes | The model's main output | Default output |
| Custom Output | No | Extract fields using expressions | Reply.title, Reply.summary, etc. |
{
  "Name": "summary",
  "Expression": "Reply.summary",
  "Format": "markdown",
  "Description": "The generated summary"
}
Format values

Set Format to text, markdown, html, or json for downstream handling.
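
For instance, set Format to json on a keywords output so a downstream Code Component can parse it as an array, and markdown on long-form text you plan to render directly.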

Validation

SmythOS won’t validate your Custom Output expressions.
Make sure fields like Reply.title actually exist in the model's reply.
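
One reliable way to make those fields exist is to request them explicitly in the prompt. For example (an illustrative sketch, not a required phrasing):

Summarize {{Input}} and respond only with a JSON object containing "title" and "summary" fields.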

Step 6: Debug and Preview

Live preview

Use Debug to run your prompt with real or sample input.
You’ll see live results and can tweak output mappings right away.

Example Prompt Input:
“Write a short article about the Sakura tree.”

Example Custom Output Mapping:

[
  { "Name": "title", "Expression": "Reply.title" },
  { "Name": "content", "Expression": "Reply.body" },
  { "Name": "keywords", "Expression": "Reply.keywords", "Format": "json" }
]
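
This mapping assumes the model's reply is a JSON object shaped roughly like the following. The values are illustrative only; the actual structure depends on what your prompt asks for.

{
  "title": "The Sakura Tree",
  "body": "Each spring, Japan's sakura trees bloom for only a few weeks...",
  "keywords": ["sakura", "cherry blossom", "hanami"]
}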

Best Practices

  • Use specific prompts: "List 3 key points from..." is better than "Summarize" (see the example after this list)
  • Format custom outputs if needed (text, html, json)
  • Use mock inputs in Debug to test multiple prompt paths
  • Avoid putting complex logic inside a prompt — let the model generate clean data
  • Use Passthrough Mode for total control over rendering or streaming
  • Use Retry + Condition blocks to handle failed outputs or empty results
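
To illustrate the first practice, compare a vague prompt with a more specific one (both are only sketches; adapt them to your use case):

Vague:    Summarize {{Input}}.
Specific: List 3 key points from {{Input}} as a bulleted list, each under 20 words.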

Troubleshooting Tips

If your prompt isn't working...
  • Confirm that your prompt references actual inputs (like {{Input}})
  • Check for typos in custom output expressions (e.g., Reply.summary vs Reply.summray)
  • Make sure your model isn’t hitting token limits — reduce long attachments or prompts
  • If file inputs aren’t being read, verify they are processed into Attachment.text
  • For empty outputs, try lowering Temperature and simplifying your prompt
  • Run in Debug Mode and inspect full raw output to fine-tune your extraction fields

What to Try Next

  • Combine GenAI LLM with Agent Skill to let users submit prompts naturally
  • Pipe GenAI output into RAG Remember to store facts for reuse
  • Use Code Component downstream to transform, filter, or validate replies
TIP

Looking for good prompt patterns? Visit our Prompt Guide →.