Polling for results is effective, but for a more responsive, real-time experience, you can stream events directly from UBIK. This guide covers the two main types of streaming available in the API:
  1. Tool Execution Streaming: For monitoring long-running tool tasks.
  2. Agent Session Streaming: For receiving real-time chat responses from agents.

Tool Execution Streaming

This section shows you how to connect to a tool’s event stream. This assumes you have already started a tool execution as shown in the Executing Your First Tool guide.
Step 1: Start a Tool Execution

First, execute a tool as you normally would. The initial 202 Accepted response will contain a stream_url. This is the endpoint we’ll use to receive live events.
Response
{
  "execution_id": "exec_123456789",
  "status": "pending",
  "tool_id": "d1e2f3a4-b5c6-7890-1234-567890abcdef",
  "details_url": "https://app.ubik-agent.com/api/v1/tool-executions/exec_123456789",
  "stream_url": "https://app.ubik-agent.com/api/v1/tool-executions/exec_123456789/stream"
}
Copy the stream_url for the next step.
Step 2: Connect to the Event Stream

Now, you can connect to the stream_url using any SSE-compatible client. The connection will remain open, and the server will push events as they happen.
# Use the -N flag to disable buffering
curl -X GET "https://app.ubik-agent.com/api/v1/tool-executions/exec_123456789/stream" \
     -H "X-API-KEY: YOUR_API_KEY" \
     -N
If you connect from a browser using EventSource, pass the API key as a query parameter instead: the server accepts the key from either the X-API-KEY header or a query parameter named api_key on SSE endpoints, since EventSource cannot send custom headers.
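For browser clients, a small helper can append the key before opening the connection. A minimal sketch, assuming the api_key query parameter described above (buildStreamUrl is an illustrative name, not part of the API):

```javascript
// Append the API key as a query parameter, since the browser
// EventSource API cannot send custom headers such as X-API-KEY.
function buildStreamUrl(streamUrl, apiKey) {
    const url = new URL(streamUrl);
    url.searchParams.set('api_key', apiKey);
    return url.toString();
}

// In the browser:
// const eventSource = new EventSource(
//     buildStreamUrl('https://app.ubik-agent.com/api/v1/tool-executions/exec_123456789/stream', 'YOUR_API_KEY')
// );
```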
Step 3: Tool Execution Events

As the tool executes, you will receive a series of JSON objects. Each object contains an event_type and a data payload. Here are the detailed event types and their structures:

tool_update

Used for general progress updates or structured data changes.
{
  "phase": "processing",        // Optional: Current execution phase
  "message": "Processing...",   // Optional: Human-readable status
  "output_key": "result",       // Optional: Key for structured data
  "data": { ... }               // Optional: Arbitrary structured data
}

tool_partial_update

Used for streaming content (e.g., text generation, partial image downloads).
{
  "content": "partial text...", // The chunk of content
  "output_key": "response",     // The output field being streamed
  "data": { ... }               // Optional: Additional metadata
}

tool_input_required

Sent when an interactive tool pauses execution to wait for user input.
{
  "prompt": "Please confirm...", // Question for the user
  "input_types": ["text"],       // Allowed input types
  "timeout": 300,                // Timeout in seconds
  "state": { ... }               // Internal state (opaque)
}

tool_end

The final event of a successful execution. The data contains the direct outputs of the tool.
For compatibility reasons or depending on the execution method, this event may sometimes appear as final_result. It should be treated the same way as tool_end.
{
  "execution_id": "exec_123...", // Execution ID
  "response": "Full text...",    // Example output (tool dependent)
  "images": [...]                // Example output
}

error

Indicates that an error occurred during execution.
{
  "message": "Error description",
  "code": "ERROR_CODE"
}

Handling Large Events (Chunking)

For events that exceed the size limit (e.g., large base64-encoded images or long text), the API uses a “chunking” mechanism. These events are split into multiple parts, with the _delta_sse suffix added to the original event name. For example, if a tool_update event is too large, you will receive a series of tool_update_delta_sse events.

Structure of a chunked event:
{
  "chunk_id": "unique-uuid-for-this-group",
  "chunk_index": 0,           // Index of the chunk (starts at 0)
  "total_chunks": 5,          // Total number of chunks
  "original_event_type": "tool_update",
  "chunk_data": "...",        // Fragment of the original JSON data (string)
  "is_last_chunk": false      // true for the last chunk
}
Reassembly Logic (Client-Side):
  1. Detect if the event name ends with _delta_sse.
  2. Store the chunk_data in a buffer, ordered by chunk_index.
  3. When is_last_chunk is true, concatenate all chunk_data fragments.
  4. Parse the concatenated string as JSON.
  5. Process the resulting object as if it were the original_event_type.
JavaScript Example for Chunk Handling:
const eventBuffers = {}; // Buffered chunks, keyed by chunk_id

// EventSource only delivers a named SSE event to listeners registered
// for that exact event name, so register one listener per expected
// type, including the chunked (_delta_sse) variants.
const EVENT_TYPES = ['tool_update', 'tool_partial_update', 'tool_input_required',
                     'tool_end', 'final_result', 'error'];
for (const type of [...EVENT_TYPES, ...EVENT_TYPES.map((t) => `${t}_delta_sse`)]) {
    eventSource.addEventListener(type, (event) => {
        handleIncomingEvent(type, JSON.parse(event.data));
    });
}

// Utility function to handle all incoming events
function handleIncomingEvent(eventType, eventData) {
    if (eventType.endsWith('_delta_sse')) {
        const { chunk_id, chunk_index, chunk_data, is_last_chunk, original_event_type } = eventData;
        
        if (!eventBuffers[chunk_id]) {
            eventBuffers[chunk_id] = [];
        }
        eventBuffers[chunk_id][chunk_index] = chunk_data;

        if (is_last_chunk) {
            const fullDataString = eventBuffers[chunk_id].join('');
            delete eventBuffers[chunk_id]; // Cleanup
            
            try {
                const fullData = JSON.parse(fullDataString);
                // Recursively process the reassembled event
                handleIncomingEvent(original_event_type, fullData);
            } catch (e) {
                console.error("Error parsing reassembled chunk", e);
            }
        }
        return; // Wait for other chunks
    }

    // Normal event processing
    console.log(`Received event: ${eventType}`, eventData);
    if (eventType === 'tool_end' || eventType === 'final_result') {
        console.log("Final result:", eventData);
    }
}

Example Event Stream

event: tool_update
data: {"phase": "retrieval", "message": "Fetching relevant documents..."}

event: tool_update
data: {"phase": "generation", "message": "Generating response..."}

event: tool_partial_update
data: {"content": "The financial results show..."}

event: tool_partial_update
data: {"content": " a significant increase in revenue."}

event: tool_end
data: {"execution_id": "...", "status": "completed", "outputs": {"response": "The financial results show a significant increase in revenue."}}
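When you read the raw stream yourself (for example from curl -N or a fetch response body) instead of through EventSource, you need to split the byte stream into frames like those above. A minimal sketch for complete frames whose data: line holds a full JSON object, as in the example stream (parseSseFrames is an illustrative helper, not part of the API):

```javascript
// Split a raw SSE payload into { type, data } events. Assumes each
// frame is complete and each data: line is a full JSON object, as in
// the example stream above. EventSource does this parsing for you.
function parseSseFrames(raw) {
    const events = [];
    for (const frame of raw.split('\n\n')) {
        let type = 'message';
        let data = null;
        for (const line of frame.split('\n')) {
            if (line.startsWith('event:')) {
                type = line.slice('event:'.length).trim();
            } else if (line.startsWith('data:')) {
                data = JSON.parse(line.slice('data:'.length).trim());
            }
        }
        if (data !== null) events.push({ type, data });
    }
    return events;
}
```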

Integrating Tool Events in Chat UI

When building a custom chat interface, you often need to combine the permanent conversation history (text) with transient real-time updates (events). This ensures your UI is both responsive during execution and accurate when reloading history.

The Dual-Stream Strategy

  1. Text Stream (response_chunk): Contains the “source of truth” for the conversation. It includes delimiters that mark where a tool was called and what the final result was.
  2. Event Stream (tool_update, tool_partial_update): Contains real-time status updates, logs, and progress indicators that should be displayed while the tool is running.

Implementation Logic

To build a rich UI like the one in Ubik, follow this pattern:
  1. Detect Start: When you parse <<TOOL_STEP_START/tool_name:exec_id>> in the text stream, create a Tool Container in your UI. Mark it as “Loading” or “Running”.
  2. Live Updates: Listen for SSE events. If you receive a tool_update or tool_partial_update where tool_execution_id matches exec_id, update the content of your Tool Container (e.g., show “Searching web…”, update a progress bar, or stream logs).
  3. Detect End: When you parse the block between <<TOOL_STEP_RESULT_START>> and <<TOOL_STEP_RESULT_END>> in the text stream, you have the final, permanent output. You can then replace the “Loading” state in your Tool Container with the final static result.
This approach ensures that even if the user refreshes the page (losing the transient events), the text stream still contains the full execution history (inputs and results) needed to render the completed tool state.
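The delimiter detection in step 1 above can be sketched with a regular expression over the accumulated text stream. A minimal sketch, assuming the <<TOOL_STEP_START/tool_name:exec_id>> format shown above and that tool names and execution IDs use word characters and hyphens (parseToolStart is an illustrative helper, not part of the API):

```javascript
// Match the <<TOOL_STEP_START/tool_name:exec_id>> delimiter in the
// accumulated text stream and extract the tool name and execution ID,
// so the matching Tool Container can be created in the UI.
const TOOL_STEP_START = /<<TOOL_STEP_START\/([\w-]+):([\w-]+)>>/;

function parseToolStart(textBuffer) {
    const match = textBuffer.match(TOOL_STEP_START);
    if (!match) return null;
    return { toolName: match[1], execId: match[2] };
}
```

Live tool_update and tool_partial_update events whose tool_execution_id equals the extracted execId can then be routed to that container until the result delimiters arrive.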

Tool-Specific Event Streams

Each tool may emit different data structures in its tool_update and tool_partial_update events depending on its function (e.g., a web search tool might stream “Searching URL…” updates, while a code execution tool streams console logs). To build a robust UI, you will need to analyze the specific event stream for the tools you intend to support.
Note: Detailed guides documenting the specific event structures for each native tool will be available in the future. For now, we recommend inspecting the events returned by the tools during development.
For a list of available tools and their capabilities, please refer to the Native Tools guide.

Agent Session Streaming

Agent Sessions provide a different set of events tailored for conversational interfaces. When you send a message with "stream": true, the server will stream the agent’s thought process and response chunks.

Agent Session Events

Unlike tool executions, agent session events are often used to render a chat interface in real-time. For a complete list of event types and their structure, please refer to the Agent Session Events Guide.

Dedicated Stream Endpoint

While POST /agent-sessions/{id}/messages supports direct streaming, UBIK also provides a dedicated endpoint for subscribing to session events. This is useful for:
  1. Reconnection: Resuming a stream if the connection drops (past events are replayed).
  2. Browser Compatibility: Using EventSource which requires a GET request and URL-based auth.
Endpoint: GET /agent-sessions/{session_id}/stream

Authentication with JWT

Since the standard EventSource API in browsers cannot send custom headers (like X-API-KEY), this endpoint accepts a token query parameter. You should generate a short-lived JWT using /auth/token and pass it here. For details on generating tokens, see the Authentication & Security Guide.
// 1. Get a short-lived token from your backend
const token = "eyJhbGciOiJIUz..."; 

// 2. Connect using the token param
const streamUrl = `https://app.ubik-agent.com/api/v1/agent-sessions/${sessionId}/stream?token=${token}`;
const eventSource = new EventSource(streamUrl);