# RAG Search Component
Use the RAG Search component to intelligently retrieve information from your agent’s memory. It fetches relevant content previously stored with RAG Remember using vector similarity, allowing your agent to generate grounded, context-aware responses.
## What You’ll Configure
- Step 1: Define the Search Scope
- Step 2: Filter and Format the Output
- Best Practices
- Troubleshooting Tips
- What to Try Next
## Step 1: Define the Search Scope
Configure which memory bucket to search and how many results to retrieve.
| Setting | Required? | Description |
|---|---|---|
| Namespace | Yes | Select the memory index to search. This must match the namespace used during storage with RAG Remember. |
| Results Count | Yes | Set how many top results to return. Usually between 3 and 5 provides a good balance between recall and precision. |
Tip: Think of namespaces as memory folders. RAG Search can only find content that RAG Remember stored in the same namespace.
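Under the hood, the search scope works like a top-N vector similarity lookup within one namespace. The sketch below is illustrative only: the in-memory store, the toy vectors, and the `rag_search` helper are stand-ins for the component's real embedding model and index, not its actual implementation.

```python
import math

# Toy in-memory store: one list of (vector, content) pairs per namespace.
# In the real component, vectors come from the embedding step in RAG Remember;
# this dictionary is a hypothetical stand-in for illustration.
MEMORY = {
    "support_docs": [
        ([0.9, 0.1, 0.0], "Refunds must be submitted within 7 days."),
        ([0.1, 0.9, 0.0], "Shipping takes 3-5 business days."),
        ([0.8, 0.2, 0.1], "Refund requests go through the billing portal."),
    ]
}

def cosine(a, b):
    """Cosine similarity between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def rag_search(namespace, query_vector, results_count=3):
    """Return the top-N stored items most similar to the query vector."""
    items = MEMORY.get(namespace, [])
    scored = [(cosine(query_vector, vec), content) for vec, content in items]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:results_count]

# A query vector close to the "refund" entries retrieves those first.
top = rag_search("support_docs", [1.0, 0.0, 0.0], results_count=2)
```

Note how Results Count simply truncates the ranked list: a smaller count keeps only the strongest matches, which is why 3 to 5 is usually enough.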
## Step 2: Filter and Format the Output
Use the score threshold to control result quality, and optionally include metadata or similarity scores in the output.
| Setting | Required? | Description |
|---|---|---|
| Score Threshold | Optional | Filters out results below the specified similarity score. Range is 0 to 1. A value like 0.7 ensures high-confidence matches. Default is 0 (no filtering). |
| Include Score | Optional | Adds the similarity score to each result in the output. Helpful for ranking or diagnostics. |
| Include Metadata | Optional | Includes document-level info like file name or section if it was added during indexing. Useful for traceability. |
Note: Score Threshold and Include Score are independent. The threshold removes low-similarity results from the output entirely, while Include Score only displays each remaining result's score.
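The interaction of these three settings can be sketched as a small post-processing step. The `format_results` helper below is hypothetical, not the component's API; it only models the behavior the table describes.

```python
def format_results(raw_results, score_threshold=0.0,
                   include_score=False, include_metadata=False):
    """Model the component's output options: drop low-similarity matches,
    then attach the score and/or metadata only when requested."""
    output = []
    for item in raw_results:
        if item["similarity"] < score_threshold:
            continue  # below threshold: excluded from the output entirely
        entry = {"content": item["content"]}
        if include_score:
            entry["similarity"] = item["similarity"]
        if include_metadata and "metadata" in item:
            entry["metadata"] = item["metadata"]
        output.append(entry)
    return output

raw = [
    {"content": "Refund policy...", "similarity": 0.88,
     "metadata": {"document": "RefundPolicy.pdf"}},
    {"content": "Unrelated note", "similarity": 0.42},
]
# With threshold 0.7, the 0.42 match is filtered out; scores are shown,
# but metadata is omitted because Include Metadata is off.
filtered = format_results(raw, score_threshold=0.7, include_score=True)
```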
## Query Input
| Field | Required? | Description |
|---|---|---|
| Query | Yes | The question or search phrase. Can be natural language or keyword-based. The better your phrasing, the better the match. |
## Handle the Output
Each result contains the retrieved content and may include metadata and similarity, depending on configuration.
| Output | Description |
|---|---|
| Results | A list of matched content items. Each item contains the retrieved content and, depending on configuration, a similarity score and metadata. |
### Example Output

```json
[
  {
    "content": "Refunds must be submitted within 7 days of the original transaction through the billing portal.",
    "similarity": 0.88,
    "metadata": {
      "document": "RefundPolicy.pdf",
      "section": "2.1"
    }
  }
]
```
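A common next step is turning this output into grounding context for a downstream prompt. The snippet below parses the example output above and assembles a context block; the prompt wording and the `(source: ...)` convention are illustrative choices, not a fixed format.

```python
import json

# The example output from RAG Search, as a JSON string.
results_json = """
[
  {
    "content": "Refunds must be submitted within 7 days of the original transaction through the billing portal.",
    "similarity": 0.88,
    "metadata": {"document": "RefundPolicy.pdf", "section": "2.1"}
  }
]
"""

results = json.loads(results_json)

# Join the retrieved content into a grounding block for an LLM prompt,
# citing the source document when metadata is present (traceability).
context_lines = []
for item in results:
    source = item.get("metadata", {}).get("document", "unknown source")
    context_lines.append(f"- {item['content']} (source: {source})")

prompt_context = "Answer using only this context:\n" + "\n".join(context_lines)
```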
## Best Practices
- Use specific and well-phrased queries for more accurate results
- Limit results to 3 to 5 to avoid overwhelming downstream logic
- Apply a score threshold when quality matters
- Include metadata if traceability is needed
- Include score for debugging or conditional branching
## Troubleshooting Tips
If your results seem off:
- Confirm the Namespace matches the one used with RAG Remember; a mismatch returns nothing relevant
- Lower or remove the Score Threshold to check whether matches are being filtered out
- Rephrase the query more specifically; vague phrasing weakens similarity matching
## What to Try Next
- Use RAG Remember to index fresh content
- Pass the output of RAG Search into GenAI LLM to summarise or respond in natural language
- Combine with conditional logic to route based on similarity or metadata
- Build workflows that include score-based thresholds for fallback queries or alternate sources
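The last two ideas, routing on similarity and score-based fallbacks, can be sketched as a single decision function. Everything here (`route_by_score`, the 0.7 threshold, the fallback message) is a hypothetical example of conditional logic, not part of the component itself.

```python
FALLBACK_MESSAGE = "No confident match found; escalating to a broader search."

def route_by_score(results, threshold=0.7):
    """Route on the best similarity score: return the match when it clears
    the threshold, otherwise signal a fallback (e.g. re-query another
    namespace or an alternate source)."""
    if not results:
        return ("fallback", FALLBACK_MESSAGE)
    best = max(results, key=lambda r: r["similarity"])
    if best["similarity"] >= threshold:
        return ("answer", best["content"])
    return ("fallback", FALLBACK_MESSAGE)

# A strong match (0.88 >= 0.7) routes to the answer branch.
decision, payload = route_by_score(
    [{"content": "Refund policy text", "similarity": 0.88}]
)
```

This only works when Include Score is enabled, since the routing logic reads each result's `similarity` field.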