Streaming delivers the agent's response token by token as it is generated, which is ideal for chat interfaces, live demos, or anywhere a human is watching. Instead of waiting for the full response, you display output progressively for a more responsive experience. For background jobs or backend integrations, see Runs for async patterns with polling and webhooks.

Basic Usage

import { Subconscious } from "subconscious";

const client = new Subconscious({
  apiKey: process.env.SUBCONSCIOUS_API_KEY!,
});

const stream = client.stream({
  engine: "tim-gpt",
  input: {
    instructions: "Write a short essay about space exploration",
    tools: [{ type: "platform", id: "web_search" }],
  },
});

for await (const event of stream) {
  if (event.type === "delta") {
    process.stdout.write(event.content);
  } else if (event.type === "done") {
    console.log("\n\nRun completed:", event.runId);
  }
}

Event Types

The stream emits different event types:
Event Type   Description
delta        A chunk of generated content
done         The run has completed
error        An error occurred
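Modeled in TypeScript, the events form a discriminated union you can switch over exhaustively. This is a sketch inferred from the examples on this page; the SDK's exported types may differ:

```typescript
// Event shapes assumed from the examples in this page.
type DeltaEvent = { type: "delta"; content: string };
type DoneEvent = { type: "done"; runId: string };
type ErrorEvent = { type: "error"; error: string };
type StreamEvent = DeltaEvent | DoneEvent | ErrorEvent;

// Switching on `type` narrows the event, so each branch sees the right fields.
function describe(event: StreamEvent): string {
  switch (event.type) {
    case "delta":
      return `delta: ${event.content.length} chars`;
    case "done":
      return `done: ${event.runId}`;
    case "error":
      return `error: ${event.error}`;
  }
}
```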

Delta Events

Delta events contain incremental content:
{
  type: "delta",
  content: "The history of space exploration..."
}
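Since each delta carries only a fragment, a common pattern is to concatenate the content fields to rebuild the full generated text. A minimal sketch, with the event shapes assumed from the examples on this page:

```typescript
type StreamEvent =
  | { type: "delta"; content: string }
  | { type: "done"; runId: string }
  | { type: "error"; error: string };

// Concatenate delta chunks in arrival order; ignore other event types.
function accumulate(events: StreamEvent[]): string {
  let text = "";
  for (const event of events) {
    if (event.type === "delta") text += event.content;
  }
  return text;
}
```

In a real UI you would append each chunk as it arrives rather than buffering the whole array, but the ordering logic is the same.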

Done Events

Done events signal completion and include the run ID:
{
  type: "done",
  runId: "run_abc123..."
}

Server-Sent Events (SSE)

The streaming endpoint uses Server-Sent Events (SSE) format, compatible with the OpenAI streaming format. Each event is sent as:
data: {"type": "delta", "content": "chunk of text"}

data: {"type": "done", "runId": "run_abc123"}

data: [DONE]
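If you consume the raw SSE endpoint without the SDK, each `data:` line can be parsed individually. A sketch, following the wire format shown above (the `[DONE]` sentinel is skipped, matching the OpenAI-style convention):

```typescript
type StreamEvent =
  | { type: "delta"; content: string }
  | { type: "done"; runId: string }
  | { type: "error"; error: string };

// Parse one SSE line; returns null for non-data lines and the [DONE] sentinel.
function parseSSELine(line: string): StreamEvent | null {
  if (!line.startsWith("data: ")) return null;
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return null;
  return JSON.parse(payload) as StreamEvent;
}
```

Note that SSE events are separated by blank lines, so split the response body on newlines and feed each line through the parser.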

Error Handling

Handle errors gracefully in your stream processing:
try {
  for await (const event of stream) {
    if (event.type === "delta") {
      process.stdout.write(event.content);
    } else if (event.type === "error") {
      console.error("Stream error:", event.error);
      break;
    }
  }
} catch (error) {
  console.error("Connection error:", error);
}
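Transient connection errors can also be retried by wrapping the stream consumption. A generic retry helper, as a sketch (attempt count and backoff policy are up to you; no Subconscious-specific API is assumed here):

```typescript
// Retry an async operation up to maxAttempts times, rethrowing the last error.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // A real implementation would back off here before the next attempt.
    }
  }
  throw lastError;
}
```

Keep in mind that retrying a stream restarts generation from the beginning, so discard or deduplicate any partial output you have already shown to the user.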