LLM Prompt Component
Use the LLM Prompt component to generate text content from a prompt in a single, stateless turn. It's ideal for straightforward tasks like summarization, translation, or content creation where conversation history is not required.
In This Guide
- Step 1: Select a Model
- Step 2: Write the Prompt
- Step 3: Define Inputs
- Step 4: Configure Advanced Settings
- Step 5: Define Outputs
- Best Practices
- Troubleshooting Tips
- What to Try Next
Step 1: Select a Model
Choose the language model that will generate the response.
| Field | Description |
| --- | --- |
| Model | Select from available models such as OpenAI (GPT-3.5, GPT-4) or Echo (which simply mirrors the prompt back). |
| Custom Model | Connect to your own LLM provider, such as Amazon Bedrock or Google Vertex AI (Enterprise feature). |
Step 2: Write the Prompt
This is the core instruction for the AI. Craft a clear and specific prompt to guide the model's output. You can include variables from your inputs to make it dynamic.
Example Prompt:

```
Summarize the following text into three key points: {{article_text}}
```
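To build intuition for how `{{variable}}` placeholders behave, here is a minimal Python sketch of double-brace interpolation. The `render_prompt` helper is hypothetical and for illustration only; the component performs this substitution for you.

```python
import re

def render_prompt(template: str, inputs: dict) -> str:
    """Replace each {{name}} placeholder with the matching input value."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in inputs:
            raise KeyError(f"No input provided for placeholder {{{{{name}}}}}")
        return str(inputs[name])
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)

prompt = render_prompt(
    "Summarize the following text into three key points: {{article_text}}",
    {"article_text": "Large language models generate text from prompts..."},
)
print(prompt)
```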
Step 3: Define Inputs
Inputs are variables you can pass into your prompt from other parts of your workflow.
| Field | Required? | Description |
| --- | --- | --- |
| Name | Yes | A unique name for the input variable (e.g., `article_text`). |
| Type | Yes | The data type (e.g., String, Number, Array, Object). |
| Description | No | A clear explanation of what the input is for. |
| Optional | No | Mark as true if the input is not always required. |
| Default Value | No | A fallback value to use if no input is provided. |
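To make the Optional and Default Value fields concrete, here is a hedged sketch of how a workflow engine might resolve inputs before rendering the prompt. The `InputSpec` class and `resolve_inputs` helper are illustrative, not part of the product.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class InputSpec:
    name: str
    type: str            # e.g. "String", "Number", "Array", "Object"
    optional: bool = False
    default: Any = None

def resolve_inputs(specs: list[InputSpec], provided: dict) -> dict:
    """Fill in defaults and enforce required inputs, mirroring the table above."""
    resolved = {}
    for spec in specs:
        if spec.name in provided:
            resolved[spec.name] = provided[spec.name]
        elif spec.default is not None:
            resolved[spec.name] = spec.default   # fallback value
        elif spec.optional:
            continue                             # safe to omit
        else:
            raise ValueError(f"Missing required input: {spec.name}")
    return resolved

specs = [InputSpec("article_text", "String")]
print(resolve_inputs(specs, {"article_text": "Some article..."}))
```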
Step 4: Configure Advanced Settings
Fine-tune the model's behavior for more control over the generated text.
- Temperature: Controls randomness. Lower values (e.g., 0.2) make the output more focused and deterministic; higher values (e.g., 1.0) increase creativity.
- Top P: An alternative to Temperature that controls nucleus sampling. It's recommended to adjust one of these settings, but not both.
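For intuition on what these two knobs do under the hood, here is a hedged numpy sketch of temperature scaling and nucleus (Top P) filtering over a toy token distribution. Real providers implement this sampling server-side, so this is purely illustrative.

```python
import numpy as np

def sample_token(logits: np.ndarray, temperature: float = 1.0, top_p: float = 1.0) -> int:
    """Apply temperature scaling, then nucleus (top-p) filtering, then sample."""
    # Temperature: lower values sharpen the distribution (more deterministic).
    scaled = logits / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()

    # Top P: keep the smallest set of tokens whose cumulative probability
    # (in descending order) reaches top_p.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, top_p)) + 1
    keep = order[:cutoff]

    kept = probs[keep] / probs[keep].sum()
    return int(np.random.choice(keep, p=kept))

logits = np.array([2.0, 1.0, 0.5, 0.1])          # toy vocabulary of 4 tokens
print(sample_token(logits, temperature=0.2))      # almost always token 0
print(sample_token(logits, temperature=1.0, top_p=0.9))
```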
Step 5: Define Outputs
By default, the component has one output, `Reply`, which contains the full response from the model. You can add custom outputs to parse this response and extract specific fields.
| Field | Description |
| --- | --- |
| Name | A unique name for your custom output (e.g., `summary`). |
| Expression | A JSON Path expression to extract data from the `Reply`. For example, `Reply.summary` would extract the `summary` field from a JSON object returned by the model. |
| Description | An optional description for the output field. |
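The expression dialect isn't documented here beyond the `Reply.summary` example, so here is a hedged sketch of simple dot-path extraction over a JSON reply. The `extract` helper is hypothetical, shown only to illustrate the idea.

```python
import json

def extract(reply_json: str, expression: str):
    """Follow a dot-separated path such as 'Reply.summary' into parsed JSON."""
    value = {"Reply": json.loads(reply_json)}
    for key in expression.split("."):
        value = value[key]
    return value

reply = '{"summary": "Three key points...", "sentiment": "neutral"}'
print(extract(reply, "Reply.summary"))   # -> Three key points...
```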
Best Practices
- Be Specific in Your Prompt: The most important factor for a good response is a clear, detailed, and unambiguous prompt.
- Structure Your Output: For predictable results, explicitly ask the model to format its response in a certain way (e.g., as JSON or a Markdown list) and use custom outputs to parse it (see the sketch after this list).
- Tune One Parameter at a Time: When adjusting advanced settings, modify one parameter (such as Temperature) and test the result before changing others.
- Use `Echo` for Debugging: The `Echo` model is useful for testing how your dynamic inputs are being inserted into your prompt.
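To illustrate the "Structure Your Output" practice, here is a hedged example of a prompt that requests JSON, paired with a defensive parse. The exact prompt wording and the fallback behavior are illustrative choices, not product requirements.

```python
import json

prompt = (
    "Summarize the following text. Respond ONLY with JSON in the form "
    '{"summary": "...", "key_points": ["...", "...", "..."]}.'
    "\n\n{{article_text}}"
)

def parse_reply(reply: str) -> dict:
    """Parse the model's reply, falling back gracefully if it isn't valid JSON."""
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        # Models occasionally wrap JSON in prose; fall back to the raw text.
        return {"summary": reply, "key_points": []}

print(parse_reply('{"summary": "ok", "key_points": ["a", "b", "c"]}'))
```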
Troubleshooting Tips
What to Try Next
- Chain multiple `LLM Prompt` components together, where the `Reply` of one becomes an input for the next, creating a processing pipeline (see the sketch below).
- Use a Classifier Component to determine user intent, then route to different `LLM Prompt` components with specialized prompts.
- Pass the output to a JSON Filter Component to further clean or simplify the data before using it in other steps.
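As a sketch of the chaining pattern in the first suggestion, the example below runs two stateless prompt calls in sequence, with the first `Reply` feeding the second prompt. `run_llm_prompt` is a hypothetical stand-in for whatever your workflow engine actually executes.

```python
def run_llm_prompt(prompt: str) -> str:
    """Hypothetical stand-in for one stateless LLM Prompt component call."""
    # In the real workflow this would invoke the selected model; stubbed here.
    return f"[model reply to: {prompt[:48]}...]"

article = "Large language models map prompts to generated text..."

# Step 1: summarize the article.
summary = run_llm_prompt(
    f"Summarize the following text into three key points: {article}"
)

# Step 2: the first component's Reply becomes an input to the next prompt.
translation = run_llm_prompt(f"Translate this summary into French: {summary}")
print(translation)
```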