The `rag_search` tool is the cornerstone of information retrieval within the UBIK platform. It allows agents to perform Retrieval-Augmented Generation (RAG) searches across your uploaded documents.
Unlike a standard keyword search, this tool uses semantic understanding to find the most relevant “chunks” of text from your knowledge base and uses a Large Language Model (LLM) to synthesize a precise answer grounded in those facts.
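The retrieve-then-generate flow described above can be sketched in miniature. This is a conceptual illustration only: the toy vectors stand in for real embeddings, and UBIK's actual pipeline is internal to the platform.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy knowledge base: (chunk_text, embedding) pairs. Real systems
# compute embeddings with a trained model; these are hand-made.
chunks = [
    ("Employees accrue 20 vacation days per year.", [0.9, 0.1, 0.0]),
    ("The server restarts nightly at 02:00 UTC.",  [0.0, 0.2, 0.9]),
]

def retrieve(query_embedding, k=1):
    # Rank chunks by semantic similarity, not keyword overlap.
    ranked = sorted(chunks, key=lambda c: cosine(query_embedding, c[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# A query like "How much paid time off do I get?" shares no keywords
# with the vacation chunk, but a nearby embedding still retrieves it.
context = retrieve([0.8, 0.2, 0.1])
# The retrieved context is then passed to an LLM to ground its answer.
```

The key point is the last step: the LLM answers from the retrieved chunks rather than from its own memory, which is what makes the answer grounded.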
When to Use This Tool
Use `rag_search` when you need to:
- Answer specific questions based on your private data (e.g., “What is the vacation policy?”).
- Find specific facts buried in large documents.
- Verify information against a trusted source.
- Retrieve context to support a conversation.
This tool is optimized for retrieval accuracy and grounded generation. It is not intended for processing entire documents or generating long-form summaries (use `information_analysis` for that).
Input Parameters
The tool accepts the following parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| `query` | string | Yes | The natural language question or search query. Be as specific as possible for best results. |
| `document_ids` | array&lt;uuid&gt; | No | A list of specific Document UUIDs to search within. If omitted, the search runs across all documents accessible to the user/session. |
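The two common calling patterns are a broad search (only `query`) and a scoped search (`query` plus `document_ids`). The payloads below are illustrative sketches built from the parameter table above; the query text and UUID are placeholders, not real data.

```python
# Broad search: only `query` is set, so retrieval spans every
# document accessible to the user/session.
broad_input = {
    "query": "What is the company vacation policy?",
}

# Scoped search: `document_ids` restricts retrieval to the listed
# documents. The UUID below is a placeholder.
scoped_input = {
    "query": "What is the maximum operating temperature?",
    "document_ids": ["00000000-0000-0000-0000-000000000000"],
}
```

Omitting `document_ids` entirely (rather than passing an empty list) is the way to request the broad behavior.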
Scoping & Permissions
The `rag_search` tool automatically respects the security context of the execution:
- User Access: Searches documents owned by the user or shared with them via workspaces.
- Session Context: If running within a chat session, it includes documents attached to that specific session.
- External ID: For multi-tenant applications, it strictly enforces `external_user_id` boundaries, ensuring users never see data from other tenants.
Output Structure
The tool returns a structured object containing the answer, the evidence used to generate it, and metadata about the execution.

| Field | Description |
|---|---|
| `response` | The natural language answer. Can include a “Reflection” block (thinking process), Markdown formatting, and inline citations pointing to specific chunks. |
| `contexts` | A list of the retrieved text chunks passed to the LLM. Each entry includes `chunk_id`, `document_id`, and `text_preview`. |
| `sources_used` | A list of indices (ranks) corresponding to the contexts that were explicitly used to form the answer. |
| `model` | The specific LLM used for generation. |
| `execution_id` | The unique identifier for this tool execution. |
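As a sketch of how a client might consume this structure, the snippet below resolves the cited evidence by joining `sources_used` back onto `contexts`. The `result` dict is a hand-written stand-in shaped like the table above (not real tool output), and it assumes `sources_used` holds 0-based indices.

```python
# Hypothetical result shaped like the output table; values are made up.
result = {
    "response": "Employees accrue 20 vacation days per year [1].",
    "contexts": [
        {"chunk_id": "c-1", "document_id": "d-1",
         "text_preview": "Employees accrue 20 vacation days..."},
        {"chunk_id": "c-2", "document_id": "d-2",
         "text_preview": "The office closes on public holidays..."},
    ],
    "sources_used": [0],  # assumed 0-based indices into `contexts`
    "model": "example-model",
    "execution_id": "exec-123",
}

def cited_chunks(result):
    """Return only the retrieved chunks the answer actually relied on."""
    return [result["contexts"][i] for i in result["sources_used"]]

evidence = cited_chunks(result)
```

Filtering on `sources_used` rather than showing all of `contexts` lets a UI display only the evidence behind the answer, while the full list remains available for debugging retrieval quality.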
Example Usage
1. Broad Search: search across all available knowledge by passing only `query`.
2. Scoped Search: search only within a specific technical manual by also passing its UUID in `document_ids`.

Multimodal Capabilities
The `rag_search` pipeline is fully multimodal. If you have indexed documents containing images (like PDFs with charts or slides), the search can retrieve relevant visual context.
- Text-to-Image Retrieval: Your text query can match descriptions of images.
- Image Understanding: The generation model can “see” the retrieved images to answer questions about charts, diagrams, or photos.
Activation Required: Multimodal RAG is not enabled by default. To activate this feature for your workspace, please contact the UBIK team at contact@ubik-agent.com.

