LLM Assistant Component

Use the LLM Assistant component to create stateful, multi-turn chat experiences. It automatically tracks conversation history, allowing your agent to provide coherent and context-aware responses over multiple interactions.

Why this matters

Unlike a standard LLM call, the LLM Assistant remembers previous messages in the same conversation. This is crucial for building chatbots and agents that can understand follow-up questions and maintain a natural conversational flow.
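As a rough illustration (plain Python, no SmythOS APIs), the difference comes down to what the model is shown on each turn: a stateless call sees only the latest message, while a stateful assistant replays the full history so follow-up questions resolve correctly.

```python
# Minimal sketch of conversation memory. The helper names here are
# hypothetical; the LLM Assistant manages this bookkeeping for you.

history = []  # messages the assistant has seen so far in this conversation

def remember(role, content):
    history.append({"role": role, "content": content})

# Turn 1: the user asks about an order.
remember("user", "Where is my order #123?")
remember("assistant", "Order #123 ships tomorrow.")

# Turn 2: a follow-up that only makes sense with history.
remember("user", "Can you expedite it?")

# The model receives the full history, so "it" resolves to order #123.
prompt_messages = list(history)
```

Without the first two messages, "Can you expedite it?" would be unanswerable; with them, the model has everything it needs.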

What You’ll Configure

Step 1: Select a Model

Choose the language model that will power your assistant. You can use built-in models or connect to your own.

| Field | Required? | Description | Tips |
|---|---|---|---|
| Model | Yes | The LLM used for generating responses. | Defaults to SmythOS-provided models (e.g., OpenAI). You can also select other shared models such as Claude or Together AI. For more, see Model Rates. |
| Custom Model | No | Connect to your own LLM provider, such as Amazon Bedrock or Google Vertex AI. | This is an enterprise feature. You will need to provide your own credentials and select a foundation model. Contact us to enable it. |

Step 2: Define the Behavior

The Behavior field acts as the system prompt, giving the assistant its core instructions, personality, and constraints.

| Setting | Required? | Description | Default Value |
|---|---|---|---|
| Behavior | No | The system prompt that guides the assistant's tone and actions. | "You are a helpful assistant that helps people with their questions" |

Crafting a Good Persona

Be specific in your behavior prompt. For example: "You are a friendly and witty support agent for an e-commerce store specializing in sneakers. Always answer in a fun and slightly informal tone."
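Conceptually, the Behavior text is prepended as a system message on every turn, so it frames the whole conversation regardless of what the user types. A minimal sketch in plain Python (the `build_messages` helper is hypothetical, not a SmythOS API):

```python
behavior = (
    "You are a friendly and witty support agent for an e-commerce store "
    "specializing in sneakers. Always answer in a fun and slightly informal tone."
)

# The Behavior field becomes the system message, sent before the
# conversation history and the user's latest input on every turn.
def build_messages(history, user_input):
    return (
        [{"role": "system", "content": behavior}]
        + history
        + [{"role": "user", "content": user_input}]
    )

messages = build_messages([], "Do you have Air Max in size 10?")
```

Because the system message is re-sent each turn, the persona stays stable even in long conversations.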

Step 3: Configure Inputs

These inputs are essential for tracking the conversation and capturing the user's message.

| Input | Required? | Description | Notes |
|---|---|---|---|
| UserId | Yes | A unique identifier for the end user. | Used to group all conversations for a specific user. |
| ConversationId | Yes | A unique identifier for a single conversation thread. | Allows a single user to have multiple, separate conversations. |
| UserInput | Yes | The message or prompt submitted by the user. | This is what the assistant will respond to. |
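The pair of IDs effectively acts as a key into the conversation store: the same user can keep several independent threads, and two users never share history. A sketch of that scoping in plain Python (the `store` dictionary and `get_history` helper are illustrative assumptions, not SmythOS internals):

```python
# History is looked up by the (UserId, ConversationId) pair.
store = {}

def get_history(user_id, conversation_id):
    return store.setdefault((user_id, conversation_id), [])

# Same user, two separate threads: their histories never mix.
get_history("user-1", "thread-a").append({"role": "user", "content": "Hi"})
get_history("user-1", "thread-b").append({"role": "user", "content": "New topic"})
```

This is why stable IDs matter: if either value changes between turns, the lookup lands in a fresh, empty history.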

Step 4: Configure Advanced Settings

Fine-tune the assistant's streaming behavior.

| Setting | Description |
|---|---|
| Passthrough Mode | Controls how responses are streamed. When disabled (default), the response streams automatically. When enabled, you get manual control over the output, which is useful for post-processing or custom streaming logic. |
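The two modes can be contrasted with a small Python sketch (the token generator and post-processing step are hypothetical stand-ins, not the platform's actual streaming mechanism):

```python
# A stand-in for tokens arriving from the model.
def token_stream():
    for tok in ["Hello", ", ", "world", "!"]:
        yield tok

# Passthrough disabled (default): tokens are forwarded to the user
# as they arrive, with no opportunity to intercept them.
auto_output = "".join(token_stream())

# Passthrough enabled: you collect the full reply first, then apply
# your own logic (validation, logging, rewriting) before display.
raw = "".join(token_stream())
final_output = raw.upper()  # example post-processing step
```

Enable Passthrough Mode only when you actually need that intercept step; otherwise the default streaming gives users a faster first token.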

Step 5: Handle the Output

The component produces a single output containing the assistant's reply.

| Output | Description | Data Structure |
|---|---|---|
| Response | The complete message generated by the LLM Assistant. | String |

Best Practices

  • Use Stable IDs: Ensure that UserId and ConversationId are consistent across interactions to maintain conversation history correctly.
  • Set a Clear Behavior: A well-defined system prompt in the Behavior field is the key to a reliable and predictable assistant.
  • Manage Context: For very long conversations, be aware of the model's context window. The assistant will automatically handle history, but extremely long threads may eventually lose early context.
  • Use Passthrough for Complex Logic: If you need to validate, modify, or log the assistant's reply before showing it to the user, enable Passthrough Mode.
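If you do hit context-window limits in very long threads, one common mitigation is to keep only the most recent turns under a size budget. A sketch of that idea (assumptions: you manage history yourself upstream of the assistant, and character count is used as a crude proxy for tokens):

```python
# Keep the newest messages whose combined length fits the budget.
def trim_history(history, max_chars=2000):
    kept, total = [], 0
    for msg in reversed(history):       # walk newest-first
        total += len(msg["content"])
        if total > max_chars:
            break
        kept.append(msg)
    return list(reversed(kept))         # restore chronological order

long_history = [{"role": "user", "content": "x" * 500} for _ in range(10)]
trimmed = trim_history(long_history)
```

More sophisticated variants summarize the dropped turns instead of discarding them, for example by storing key facts with the RAG Remember Component.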

Troubleshooting Tips

If your assistant is not working as expected...
  • Assistant is "forgetful": The most common cause is that the UserId or ConversationId is changing between turns. Verify you are passing the same IDs for every message in a conversation.
  • Incorrect tone or behavior: Your Behavior prompt may be too vague, or the user's input may be overriding it. Make your instructions more specific and directive.
  • No response is generated: Check that the UserInput field is correctly mapped and not empty. Also, ensure your model provider (if custom) is configured correctly with valid credentials.
  • Response isn't streaming: Check if Passthrough Mode has been accidentally enabled.

What to Try Next

  • Use an Agent Skill to provide a user-friendly interface for your LLM Assistant.
  • Pass the Response output into a RAG Remember Component to store key facts from the conversation.
  • Use a Classifier Component on the UserInput before it reaches the assistant to detect user intent and route the conversation.