# Cancel a run Source: https://docs.subconscious.dev/api-reference/cancel-a-run api-reference/openapi.json post /runs/{runId}/cancel Cancel a running or queued run. Returns the updated run with status 'canceled'. No-op if run is already terminal. # Create a new agent run Source: https://docs.subconscious.dev/api-reference/create-a-new-agent-run api-reference/openapi.json post /runs Creates a new agent run. The run is processed asynchronously. Use the returned runId to poll for status or set up webhooks to receive notifications. # Create and stream a run Source: https://docs.subconscious.dev/api-reference/create-and-stream-a-run api-reference/openapi.json post /runs/stream Creates a run and streams execution events via Server-Sent Events (SSE). Uses OpenAI-compatible SSE format. # Get a run Source: https://docs.subconscious.dev/api-reference/get-a-run api-reference/openapi.json get /runs/{runId} Retrieve the status and results of a specific run # Introduction Source: https://docs.subconscious.dev/api-reference/introduction Welcome to the Subconscious AI API documentation ## Welcome The Subconscious API enables you to build powerful AI agents that can reason, use tools, and complete complex tasks. Our API is designed around **Runs** - discrete agent executions that process your instructions and return structured results. ## Core Concepts ### Runs A **Run** represents a single agent execution. You provide instructions and optionally tools, and the agent works to complete the task. Runs can be: * **Async** (`POST /v1/runs`) - Queue a run and poll for results or receive webhooks * **Streaming** (`POST /v1/runs/stream`) - Stream execution events in real-time via SSE ### Engines Choose from multiple inference engines optimized for different use cases: | Engine | Type | Description | | --------------- | -------- | --------------------------------------------------------------- | | `tim-edge` | Unified | Highly efficient engine tuned for performance with search tools | | `tim-gpt` | Compound | Complex reasoning engine backed by OpenAI GPT-4.1 | | `tim-gpt-heavy` | Compound | Complex reasoning engine backed by OpenAI GPT-5.2 | ### Tools Agents can use tools to interact with external systems: * **Platform tools** - Built-in capabilities like web search * **Function tools** - Your custom HTTP endpoints * **MCP tools** - Model Context Protocol servers ### Structured Output Request structured responses using JSON Schema via `answerFormat` and `reasoningFormat` fields. ## Authentication All API endpoints require Bearer token authentication: ```bash theme={null} curl https://api.subconscious.dev/v1/runs \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{"engine": "tim-gpt", "input": {"instructions": "..."}}' ``` Get your API key from the [Subconscious platform](https://www.subconscious.dev/platform/api-keys). 
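If you prefer not to use an official SDK, any HTTP client can call the API directly. Below is a minimal sketch using Node's built-in `fetch` (Node 18+) that mirrors the curl request above; the endpoint, headers, payload, and the `runId`/`status` response fields are the ones documented on this page.

```typescript theme={null}
// Minimal sketch: create a run with plain fetch, mirroring the curl example above.
const response = await fetch("https://api.subconscious.dev/v1/runs", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.SUBCONSCIOUS_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    engine: "tim-gpt",
    input: { instructions: "Summarize the latest AI news" },
  }),
});

const { runId, status } = await response.json();
console.log(runId, status); // e.g. "run_abc123...", "queued"
```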
## Quick Example ### Create an async run ```bash theme={null} curl -X POST https://api.subconscious.dev/v1/runs \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "engine": "tim-gpt", "input": { "instructions": "Research the latest developments in quantum computing" } }' ``` Response: ```json theme={null} { "runId": "run_abc123...", "status": "queued" } ``` ### Poll for results ```bash theme={null} curl https://api.subconscious.dev/v1/runs/run_abc123 \ -H "Authorization: Bearer YOUR_API_KEY" ``` Response (when complete): ```json theme={null} { "runId": "run_abc123...", "status": "succeeded", "result": { "answer": "Recent developments in quantum computing include...", "reasoning": null }, "usage": { "inputTokens": 1234, "outputTokens": 567, "durationMs": 45000 } } ``` ## Webhooks Instead of polling, configure webhooks to receive notifications when runs complete: ```bash theme={null} curl -X POST https://api.subconscious.dev/v1/webhooks/subscriptions \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "callbackUrl": "https://your-app.com/webhooks", "eventTypes": ["job.succeeded", "job.failed"] }' ``` Or attach a one-off callback URL to a specific run: ```json theme={null} { "engine": "tim-gpt", "input": { "instructions": "..." }, "output": { "callbackUrl": "https://your-app.com/webhooks" } } ``` # List runs Source: https://docs.subconscious.dev/api-reference/list-runs api-reference/openapi.json get /runs List all runs for the authenticated organization # Webhooks Source: https://docs.subconscious.dev/core-concepts/async-webhooks Get notified when runs complete Webhooks push results to your server when a run finishes—no polling required. You provide a URL, we POST the result when it's ready. For async basics (fire-and-forget, polling with `client.wait()`), see [Runs](/core-concepts/runs). ## 1. Add a Callback URL Pass `callbackUrl` in the `output` field when creating a run: ```bash theme={null} curl -X POST https://api.subconscious.dev/v1/runs \ -H "Authorization: Bearer $SUBCONSCIOUS_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "engine": "tim-gpt", "input": { "instructions": "Generate a detailed report" }, "output": { "callbackUrl": "https://your-server.com/webhooks/subconscious" } }' ``` This returns immediately with a `runId`. When the run completes, we'll POST the result to your callback URL. ## 2. Build a Webhook Handler Create an endpoint to receive the webhook POST: ```python Python (FastAPI) theme={null} from fastapi import FastAPI, Request import json app = FastAPI() @app.post("/webhooks/subconscious") async def handle_webhook(request: Request): payload = await request.json() run_id = payload.get("runId") status = payload.get("status") if status == "succeeded": # Extract the answer from the result content = payload["result"]["choices"][0]["message"]["content"] parsed = json.loads(content) answer = parsed.get("answer", "") print(f"Run {run_id} completed: {answer[:100]}...") # Save to database, trigger next step, etc. 
elif status == "failed": print(f"Run {run_id} failed: {payload.get('error')}") return {"received": True} # Run with: uvicorn server:app --host 0.0.0.0 --port 8000 ``` ```typescript Node.js (Express) theme={null} import express from "express"; const app = express(); app.use(express.json()); app.post("/webhooks/subconscious", (req, res) => { const { runId, status, result, error } = req.body; if (status === "succeeded") { // Extract the answer from the result const content = result.choices[0].message.content; const parsed = JSON.parse(content); console.log(`Run ${runId} completed:`, parsed.answer?.slice(0, 100)); // Save to database, trigger next step, etc. } else if (status === "failed") { console.error(`Run ${runId} failed:`, error); } // Always respond quickly with 2xx res.status(200).json({ received: true }); }); app.listen(8000, () => console.log("Webhook server running on :8000")); ``` Your endpoint must be publicly accessible. For local development, use [ngrok](https://ngrok.com) or similar. ## 3. Webhook Payload When a run finishes, we POST this JSON to your URL: ```json theme={null} { "jobId": "9bb85845-9b20-4bb0-96dc-686a0aa3dcfe", "runId": "1a45adf1-ad50-4452-a2c7-150b5bcd215c", "orgId": "6d3c89bf-2665-45f8-87aa-c925979151c9", "status": "succeeded", "model": "tim-gpt", "engine": "tim-gpt", "result": { "choices": [ { "message": { "role": "assistant", "content": "{\"reasoning\": [...], \"answer\": \"2 + 2 = 4.\"}" } } ], "usage": { "prompt_tokens": 1678, "completion_tokens": 83 } }, "error": null, "tokens": { "inputTokens": 1678, "outputTokens": 83, "costCents": 0 }, "createdAt": "2026-01-16T20:54:09.090Z", "startedAt": "2026-01-16T20:54:09.190Z", "completedAt": "2026-01-16T20:54:11.779Z" } ``` | Field | Description | | ------------- | ------------------------------------------------------------- | | `runId` | The run's unique ID | | `jobId` | Internal job ID | | `status` | Final status (`succeeded`, `failed`, `timed_out`, `canceled`) | | `result` | The completion result with `choices` array | | `error` | Error details (when failed, otherwise `null`) | | `tokens` | Token counts (`inputTokens`, `outputTokens`, `costCents`) | | `createdAt` | When the run was created | | `startedAt` | When processing began | | `completedAt` | When the run finished | ## Delivery Guarantees Webhooks are delivered with: * **Retries** — Failed deliveries retry with exponential backoff * **Timeouts** — We wait up to 30 seconds for your 2xx response * **Dead-letter queue** — Exhausted retries are stored for inspection * **Idempotency** — Include a unique ID in your handler to prevent duplicates ## Best Practices * **Respond quickly** — Return 2xx within 30 seconds, process async if needed * **Be idempotent** — You may receive the same webhook twice * **Log payloads** — Store raw payloads for debugging * **Validate origin** — Check request headers (signature verification coming soon) ## Related Async patterns and polling Handle failed runs # Runs Source: https://docs.subconscious.dev/core-concepts/runs Understanding the core unit of agent execution A **Run** represents a single agent execution. You provide instructions and optionally tools, and the agent works to complete the task. ## What is a Run? When you call `client.run()`, Subconscious creates a Run that: 1. Receives your instructions and tools 2. Processes the request through our inference engine 3. Executes tool calls as needed 4. 
Returns structured results ## Run Lifecycle Every Run progresses through a series of states: ```mermaid theme={null} graph LR A[queued] --> B[running] B --> C[succeeded] B --> D[failed] B --> E[canceled] B --> F[timed_out] ``` | Status | Description | | ----------- | ------------------------------ | | `queued` | Run is waiting to be processed | | `running` | Run is actively being executed | | `succeeded` | Run completed successfully | | `failed` | Run encountered an error | | `canceled` | Run was canceled by the user | | `timed_out` | Run exceeded the timeout limit | ## Creating a Run ### Synchronous (Wait for Completion) The simplest approach—wait for the run to complete: ```python Python theme={null} from subconscious import Subconscious client = Subconscious(api_key="your-api-key") run = client.run( engine="tim-gpt", input={ "instructions": "Summarize the latest AI news", "tools": [{"type": "platform", "id": "web_search"}], }, options={"await_completion": True}, ) print(run.result.answer) ``` ```typescript Node.js theme={null} import { Subconscious } from "subconscious"; const client = new Subconscious({ apiKey: process.env.SUBCONSCIOUS_API_KEY!, }); const run = await client.run({ engine: "tim-gpt", input: { instructions: "Summarize the latest AI news", tools: [{ type: "platform", id: "web_search" }], }, options: { awaitCompletion: true }, }); console.log(run.result?.answer); ``` ### Asynchronous (Fire and Forget) Some workloads are better suited for async execution: * **Long-running tasks** — Many tool calls, large searches, multi-step plans * **Durability requirements** — You care that they finish, not that you watch every token * **Fan-out to other systems** — Pipelines, CRMs, warehouses Start a run without waiting, then check status later: ```python Python theme={null} from subconscious import Subconscious client = Subconscious(api_key="your-api-key") # Start without waiting run = client.run( engine="tim-gpt", input={"instructions": "Generate a report"}, ) print(f"Run started: {run.run_id}") # Check status later status = client.get(run.run_id) print(status.status) # 'queued' | 'running' | 'succeeded' | 'failed' ``` ```typescript Node.js theme={null} import { Subconscious } from "subconscious"; const client = new Subconscious({ apiKey: process.env.SUBCONSCIOUS_API_KEY!, }); // Start without waiting const run = await client.run({ engine: "tim-gpt", input: { instructions: "Generate a report" }, }); console.log(`Run started: ${run.runId}`); // Check status later const status = await client.get(run.runId); console.log(status.status); ``` ### Polling with `client.wait()` For convenience, use `client.wait()` to automatically poll until the run completes: ```python Python theme={null} from subconscious import Subconscious client = Subconscious(api_key="your-api-key") # Start a run run = client.run( engine="tim-gpt", input={"instructions": "Generate a detailed report"}, ) # Poll until complete result = client.wait( run.run_id, options={ "interval_ms": 2000, # Poll every 2 seconds "max_attempts": 60, # Give up after 60 attempts }, ) print(result.result.answer) ``` ```typescript Node.js theme={null} import { Subconscious } from "subconscious"; const client = new Subconscious({ apiKey: process.env.SUBCONSCIOUS_API_KEY!, }); // Start a run const run = await client.run({ engine: "tim-gpt", input: { instructions: "Generate a detailed report" }, }); // Poll until complete const result = await client.wait(run.runId, { intervalMs: 2000, // Poll every 2 seconds maxAttempts: 60, // Give up after 60 
attempts }); console.log(result.result?.answer); ``` ### When to Use What | Pattern | Best For | | ----------------------------------- | ----------------------------------------------------------------------- | | **Sync** (`await_completion: true`) | Simple tasks, quick responses | | **Streaming** | Human watching, chat UIs | | **Async + Polling** | Background jobs, dashboards | | **Async + Webhooks** | Integrations, pipelines ([see Webhooks](/core-concepts/async-webhooks)) | ## Run Response Structure When a run completes, you receive a response with these fields: ```typescript theme={null} interface RunResponse { runId: string; status: | "queued" | "running" | "succeeded" | "failed" | "canceled" | "timed_out"; result?: { answer: string; reasoning?: Task[]; }; usage?: { inputTokens: number; outputTokens: number; durationMs: number; }; error?: { code: string; message: string; }; } ``` ### Key Fields | Field | Description | | ------------------ | ----------------------------------------------- | | `runId` | Unique identifier for this run | | `status` | Current state of the run | | `result.answer` | The agent's final answer | | `result.reasoning` | Step-by-step reasoning process (when available) | | `usage` | Token usage and timing information | | `error` | Error details if the run failed | ## Canceling a Run You can cancel a run that's still in progress: ```python Python theme={null} from subconscious import Subconscious client = Subconscious(api_key="your-api-key") # Cancel a run in progress client.cancel(run.run_id) ``` ```typescript Node.js theme={null} import { Subconscious } from "subconscious"; const client = new Subconscious({ apiKey: process.env.SUBCONSCIOUS_API_KEY!, }); // Cancel a run in progress await client.cancel(run.runId); ``` ## Related Stream responses in real-time Get notified when runs complete Deep dive into parsing run results Configure tools for your runs # Streaming Source: https://docs.subconscious.dev/core-concepts/streaming Real-time responses for chat UIs and live demos Streaming delivers the agent's response token-by-token as it's generated—ideal for chat interfaces, live demos, or anywhere a human is watching. Instead of waiting for the full response, you display output progressively for a more responsive experience. For background jobs or backend integrations, see [Runs](/core-concepts/runs) for async patterns with polling and webhooks. 
## Basic Usage ```typescript Node.js theme={null} import { Subconscious } from "subconscious"; const client = new Subconscious({ apiKey: process.env.SUBCONSCIOUS_API_KEY!, }); const stream = client.stream({ engine: "tim-gpt", input: { instructions: "Write a short essay about space exploration", tools: [{ type: "platform", id: "web_search" }], }, }); for await (const event of stream) { if (event.type === "delta") { process.stdout.write(event.content); } else if (event.type === "done") { console.log("\n\nRun completed:", event.runId); } } ``` ```python Python theme={null} from subconscious import Subconscious client = Subconscious(api_key="your-api-key") for event in client.stream( engine="tim-gpt", input={ "instructions": "Write a short essay about space exploration", "tools": [{"type": "platform", "id": "web_search"}], }, ): if event.type == "delta": print(event.content, end="", flush=True) elif event.type == "done": print(f"\n\nRun completed: {event.run_id}") ``` ```bash cURL theme={null} curl https://api.subconscious.dev/v1/runs/stream \ -H "Content-Type: application/json" \ -H "Authorization: Bearer YOUR_API_KEY" \ -N \ -d '{ "engine": "tim-gpt", "input": { "instructions": "Write a short essay about space exploration", "tools": [{"type": "platform", "id": "web_search"}] } }' ``` ## Event Types The stream emits different event types: | Event Type | Description | | ---------- | ---------------------------- | | `delta` | A chunk of generated content | | `done` | The run has completed | | `error` | An error occurred | ### Delta Events Delta events contain incremental content: ```typescript theme={null} { type: "delta", content: "The history of space exploration..." } ``` ### Done Events Done events signal completion and include the run ID: ```typescript theme={null} { type: "done", runId: "run_abc123..." } ``` ## Server-Sent Events (SSE) The streaming endpoint uses Server-Sent Events (SSE) format, compatible with the OpenAI streaming format. Each event is sent as: ``` data: {"type": "delta", "content": "chunk of text"} data: {"type": "done", "runId": "run_abc123"} data: [DONE] ``` ## Error Handling Handle errors gracefully in your stream processing: ```typescript Node.js theme={null} try { for await (const event of stream) { if (event.type === "delta") { process.stdout.write(event.content); } else if (event.type === "error") { console.error("Stream error:", event.error); break; } } } catch (error) { console.error("Connection error:", error); } ``` ```python Python theme={null} try: for event in client.stream(...): if event.type == "delta": print(event.content, end="", flush=True) elif event.type == "error": print(f"Stream error: {event.error}") break except Exception as e: print(f"Connection error: {e}") ``` ## Related Sync, async, and polling patterns Get notified when runs complete # Structured Output Source: https://docs.subconscious.dev/core-concepts/structured-output Structured output allows you to define the exact shape of the agent's response using JSON Schema. This ensures you receive data in a predictable, parseable format. 
## When to Use Structured Output Use structured output when you need: * Responses that integrate with other systems * Consistent data formats for downstream processing * Type-safe responses in your application ## Using answerFormat The `answerFormat` field accepts a JSON Schema that defines the structure of the agent's answer: ```python Python theme={null} from subconscious import Subconscious client = Subconscious(api_key="your-api-key") run = client.run( engine="tim-gpt", input={ "instructions": "Analyze the sentiment of this review: 'Great product, fast shipping!'", "tools": [], "answerFormat": { "type": "object", "title": "SentimentAnalysis", "properties": { "sentiment": { "type": "string", "enum": ["positive", "negative", "neutral"], "description": "The overall sentiment" }, "confidence": { "type": "number", "description": "Confidence score from 0 to 1" }, "keywords": { "type": "array", "items": {"type": "string"}, "description": "Key phrases that influenced the sentiment" } }, "required": ["sentiment", "confidence", "keywords"] } }, options={"await_completion": True}, ) # Response is already a dict matching your schema result = run.result.answer print(result["sentiment"]) # "positive" print(result["confidence"]) # 0.95 print(result["keywords"]) # ["Great product", "fast shipping"] ``` ```typescript Node.js theme={null} import { Subconscious } from "subconscious"; const client = new Subconscious({ apiKey: process.env.SUBCONSCIOUS_API_KEY!, }); const run = await client.run({ engine: "tim-gpt", input: { instructions: "Analyze the sentiment of this review: 'Great product, fast shipping!'", tools: [], answerFormat: { type: "object", title: "SentimentAnalysis", properties: { sentiment: { type: "string", enum: ["positive", "negative", "neutral"], description: "The overall sentiment", }, confidence: { type: "number", description: "Confidence score from 0 to 1", }, keywords: { type: "array", items: { type: "string" }, description: "Key phrases that influenced the sentiment", }, }, required: ["sentiment", "confidence", "keywords"], }, }, options: { awaitCompletion: true }, }); // Response is already an object matching your schema const result = run.result?.answer; console.log(result.sentiment); // "positive" console.log(result.confidence); // 0.95 console.log(result.keywords); // ["Great product", "fast shipping"] ``` When using `answerFormat`, `run.result.answer` returns a **parsed object** (dict in Python, object in JavaScript), not a JSON string. You can access fields directly without parsing. 
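The platform shapes the answer to your schema, but if you want an additional runtime check before handing the object to downstream code, here is a minimal sketch using the third-party `ajv` validator (an assumption, not part of the Subconscious SDK), continuing from the Node.js example above and reusing its schema:

```typescript theme={null}
import Ajv from "ajv"; // assumed extra dependency: npm install ajv

const ajv = new Ajv();
// Mirrors the answerFormat from the example above (title and descriptions omitted).
const validateAnswer = ajv.compile({
  type: "object",
  properties: {
    sentiment: { type: "string", enum: ["positive", "negative", "neutral"] },
    confidence: { type: "number" },
    keywords: { type: "array", items: { type: "string" } },
  },
  required: ["sentiment", "confidence", "keywords"],
});

const answer = run.result?.answer; // `run` comes from the example above
if (!validateAnswer(answer)) {
  // Handle the mismatch instead of passing unexpected data downstream.
  console.error("Answer did not match the schema:", validateAnswer.errors);
}
```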
## Using Pydantic Models (Python) The Python SDK automatically converts Pydantic models to JSON Schema: ```python theme={null} from subconscious import Subconscious from pydantic import BaseModel import os class SentimentAnalysis(BaseModel): sentiment: str confidence: float keywords: list[str] client = Subconscious(api_key=os.environ.get("SUBCONSCIOUS_API_KEY")) run = client.run( engine="tim-gpt", input={ "instructions": "Analyze the sentiment of: 'Great product!'", "answerFormat": SentimentAnalysis, # Pass the class directly }, options={"await_completion": True}, ) print(run.result.answer["sentiment"]) ``` ## Using Zod Schemas (TypeScript) The TypeScript SDK provides a `zodToJsonSchema` helper to convert Zod schemas: ```typescript theme={null} import { Subconscious, zodToJsonSchema } from "subconscious"; import { z } from "zod"; const SentimentAnalysis = z.object({ sentiment: z.string().describe("The overall sentiment"), confidence: z.number().describe("Confidence score from 0 to 1"), keywords: z.array(z.string()).describe("Key phrases that influenced the sentiment"), }); type SentimentAnalysis = z.infer<typeof SentimentAnalysis>; const client = new Subconscious({ apiKey: process.env.SUBCONSCIOUS_API_KEY!, }); const run = await client.run({ engine: "tim-gpt", input: { instructions: "Analyze the sentiment of: 'Great product!'", answerFormat: zodToJsonSchema(SentimentAnalysis, "SentimentAnalysis"), }, options: { awaitCompletion: true }, }); const result = run.result?.answer as unknown as SentimentAnalysis; console.log(result.sentiment); ``` ### Supported Zod Types The `zodToJsonSchema` function supports the following Zod types: | Zod Type | Converts To | | ------------------- | --------------------------------------- | | `z.string()` | `{ type: "string" }` | | `z.number()` | `{ type: "number" }` | | `z.boolean()` | `{ type: "boolean" }` | | `z.array(z.T())` | `{ type: "array", items: {...} }` | | `z.object({...})` | `{ type: "object", properties: {...} }` | | `z.enum([...])` | `{ type: "string", enum: [...] }` | | `z.optional(z.T())` | Omits from `required` array | Use `.describe()` on Zod fields to add descriptions that help the agent understand what each field should contain.
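To make the conversion table concrete, here is a small sketch with hypothetical field names that exercises the enum and optional rows; the commented output is the rough shape the table implies, not a guaranteed exact result:

```typescript theme={null}
import { zodToJsonSchema } from "subconscious";
import { z } from "zod";

// Hypothetical schema exercising the table above.
const Review = z.object({
  rating: z.enum(["positive", "negative", "neutral"]).describe("Overall rating"),
  notes: z.optional(z.string()), // optional -> omitted from `required`
});

// Per the table, this should yield roughly:
// {
//   "type": "object",
//   "properties": {
//     "rating": { "type": "string", "enum": ["positive", "negative", "neutral"] },
//     "notes": { "type": "string" }
//   },
//   "required": ["rating"]
// }
console.log(JSON.stringify(zodToJsonSchema(Review, "Review"), null, 2));
```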
## Schema Requirements Your JSON Schema must include: | Field | Required | Description | | ---------------------- | -------- | -------------------------------------- | | `type` | Yes | Must be `"object"` | | `title` | Yes | A name for the schema | | `properties` | Yes | The fields in your response | | `required` | Yes | Array of required field names | | `additionalProperties` | No | Optional (must be `false` if provided) | ## Supported Types The following JSON Schema types are supported: | Type | Example | | --------- | --------------------------------------------- | | `string` | `{"type": "string"}` | | `number` | `{"type": "number"}` | | `integer` | `{"type": "integer"}` | | `boolean` | `{"type": "boolean"}` | | `array` | `{"type": "array", "items": {...}}` | | `object` | `{"type": "object", "properties": {...}}` | | `enum` | `{"type": "string", "enum": ["a", "b", "c"]}` | ## Using reasoningFormat You can also structure the reasoning output with `reasoningFormat`: ```python theme={null} run = client.run( engine="tim-gpt", input={ "instructions": "Research and compare two products", "tools": [{"type": "platform", "id": "web_search"}], "answerFormat": {...}, "reasoningFormat": { "type": "object", "title": "ResearchSteps", "properties": { "steps": { "type": "array", "items": { "type": "object", "properties": { "action": {"type": "string"}, "result": {"type": "string"} } } } }, "required": ["steps"] } }, options={"await_completion": True}, ) ``` ## Related Understanding the run response structure Parsing and using agent responses # Tools Source: https://docs.subconscious.dev/core-concepts/tools Give agents access to APIs, search, and external services Tools let agents take actions—search the web, call APIs, or query databases. You control which tools are available; the agent decides when to use them. ## Quick Start Add tools to any run: ```python Python theme={null} from subconscious import Subconscious import os client = Subconscious(api_key=os.environ.get("SUBCONSCIOUS_API_KEY")) run = client.run( engine="tim-gpt", input={ "instructions": "Find the latest news about SpaceX", "tools": [ {"type": "platform", "id": "web_search"} ] }, options={"await_completion": True}, ) ``` ```typescript Node.js theme={null} import { Subconscious } from "subconscious"; const client = new Subconscious({ apiKey: process.env.SUBCONSCIOUS_API_KEY!, }); const run = await client.run({ engine: "tim-gpt", input: { instructions: "Find the latest news about SpaceX", tools: [{ type: "platform", id: "web_search" }], }, options: { awaitCompletion: true }, }); ``` *** ## Platform Tools Built-in tools hosted by Subconscious. No setup required. | ID | Name | Description | | ----------------------- | ---------------- | ----------------------------------------------------- | | `web_search` | Google Search | Search the web for information | | `webpage_understanding` | Jina Reader | Extract and summarize webpage content | | `parallel_search` | Parallel Search | Precision search for facts from authoritative sources | | `parallel_extract` | Parallel Extract | Extract specific content from a webpage | | `exa_search` | Exa Search | Semantic search for high-quality content | | `exa_crawl` | Exa Crawl | Retrieve full webpage content | | `exa_find_similar` | Exa Similar | Find pages similar to a given URL | ## Custom Function Tools Call your own HTTP endpoints. You host the tool; Subconscious calls it during agent execution. 
### Tool Schema ```typescript theme={null} type FunctionTool = { type: "function"; name: string; description: string; url: string; method: "POST" | "GET"; timeout?: number; // seconds, default 30 parameters: { type: "object"; properties: Record<string, any>; required?: string[]; }; headers?: Record<string, string>; // HTTP headers for this tool defaults?: Record<string, any>; // Hidden params injected at call time }; ``` ### Building a Tool Server Your tool endpoint receives POST requests with parameters as JSON and returns JSON results. ```python Python (FastAPI) theme={null} # server.py from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() class WeatherRequest(BaseModel): city: str units: str = "celsius" @app.post("/weather") async def get_weather(req: WeatherRequest): # Your logic here - call weather API, database, etc. return { "city": req.city, "temperature": 22, "units": req.units, "condition": "sunny" } # Run with: uvicorn server:app --host 0.0.0.0 --port 8000 ``` ```typescript Node.js (Express) theme={null} // server.ts import express from "express"; const app = express(); app.use(express.json()); app.post("/weather", (req, res) => { const { city, units = "celsius" } = req.body; // Your logic here - call weather API, database, etc. res.json({ city, temperature: 22, units, condition: "sunny", }); }); app.listen(8000, () => console.log("Tool server running on :8000")); ``` ### Registering with Subconscious Once your server is running, register the tool in your run: ```javascript theme={null} tools: [ { type: "function", name: "get_weather", description: "Get current weather for a city", url: "https://your-server.com/weather", method: "POST", timeout: 10, parameters: { type: "object", properties: { city: { type: "string", description: "City name" }, units: { type: "string", enum: ["celsius", "fahrenheit"] }, }, required: ["city"], }, }, ]; ``` Your endpoint must be publicly accessible. For local development, use a tunnel like [ngrok](https://ngrok.com) or [Cloudflare Tunnel](https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/). ### Tool Headers Add custom HTTP headers that are sent when calling your tool's endpoint. Useful for authentication: ```javascript theme={null} { type: "function", name: "my_api", description: "Call my authenticated API", url: "https://api.example.com/endpoint", method: "POST", parameters: { type: "object", properties: { query: { type: "string" } }, required: ["query"] }, // Headers sent only when THIS tool is called headers: { "x-api-key": "your-secret-key", "x-custom-header": "custom-value" } } ``` Each tool can have its own headers—they're only sent when that specific tool's endpoint is called. ### Default Arguments Inject parameter values that are **hidden from the model** and automatically merged at call time.
Perfect for session IDs, public API keys, or other values you don't want the model to see or generate: ```javascript theme={null} { type: "function", name: "search_database", description: "Search the database", url: "https://api.example.com/search", method: "POST", parameters: { type: "object", properties: { query: { type: "string", description: "Search query" }, // Define these for validation, but they'll be hidden from the model sessionId: { type: "string" }, apiKey: { type: "string" } }, required: ["query"] // Only query is required from the model }, // Hidden from model, injected at call time defaults: { sessionId: "user-session-abc123", apiKey: "secret-api-key" } } ``` *** ## MCP Tools Connect to [Model Context Protocol](https://modelcontextprotocol.io/) servers to use their tools. ```typescript theme={null} type McpTool = { type: "mcp"; server: string; // MCP server URL name?: string; // Specific tool (omit to use all tools from server) auth?: { type: "bearer" | "api_key"; token?: string; header?: string; // Header name for api_key auth }; }; ``` **Example:** ```javascript theme={null} tools: [ // Use all tools from an MCP server { type: "mcp", server: "https://mcp.example.com" }, // Use a specific tool with auth { type: "mcp", server: "https://mcp.example.com", name: "query_database", auth: { type: "bearer", token: "your-token" }, }, ]; ``` *** ## Saved Tools Don't want to pass the full tool schema every time? Save tool configurations in the dashboard and reference them by name. Go to [subconscious.dev/platform/tools](https://www.subconscious.dev/platform/tools) and create a new tool with your endpoint URL, parameters, and description. Use your saved tool's API name in requests: ```javascript theme={null} tools: [ { type: "platform", id: "my_custom_tool" } ] ``` Saved tools are useful when: * You use the same tool across multiple agents * You want to update configuration without changing code * You're sharing tools with your team *** ## Combining Tools Agents can use multiple tools together. Give them what they need: ```javascript theme={null} tools: [ // Platform tools for web research { type: "platform", id: "web_search" }, { type: "platform", id: "webpage_understanding" }, // Your custom API { type: "function", name: "save_to_database", description: "Save research findings to our database", url: "https://api.example.com/save", method: "POST", parameters: { type: "object", properties: { title: { type: "string" }, content: { type: "string" }, tags: { type: "array", items: { type: "string" } }, }, required: ["title", "content"], }, }, ]; ``` ## Related See how tools are executed in runs Build your first agent with tools # Engines Source: https://docs.subconscious.dev/engines Learn about our agent engines ## Beyond Models: Agent Engines Our agent engines combine a language model with a custom inference runtime. We offer two engine types with one identical developer experience. ### Engine Types **Unified Engines** - Co-designed model and runtime for peak performance on efficient models. **Compound Engines** - Frontier models (OpenAI, Google) chained together with smart context management. 
## Available Engines | Name | API Name | Type | Description | | ------------- | --------------- | -------- | -------------------------------------------------------------------------------------------- | | TIM-Edge | `tim-edge` | Unified | Highly efficient engine tuned for performance with tools | | TIM-GPT | `tim-gpt` | Compound | Complex reasoning engine for long-context and tool use backed by the power of OpenAI GPT-4.1 | | TIM-GPT-Heavy | `tim-gpt-heavy` | Compound | Complex reasoning engine for long-context and tool use backed by the power of OpenAI GPT-5.2 | ### TIM-Edge TIM-Edge is our highly efficient unified engine, tuned for performance with search tools. It combines a co-designed model and runtime for peak performance, making it ideal for use cases where speed and efficiency are critical. ### TIM-GPT TIM-GPT is a compound engine that provides complex reasoning capabilities for long-context and tool use, backed by the power of OpenAI GPT-4.1. This is our recommended engine for most use cases, offering a great balance of cost, performance, and reasoning ability. ### TIM-GPT-Heavy TIM-GPT-Heavy is our most powerful compound engine, built for complex reasoning tasks with long-context and tool use, backed by the power of OpenAI GPT-5.2. Use this when you need maximum capability for the most challenging problems. All engines can be used with either the sync or async APIs. In async mode, we enqueue work for a background worker to process and expose a simple polling + webhook story for long‑running jobs. # Dedicated Endpoints Source: https://docs.subconscious.dev/enterprise/dedicated Reserve a dedicated endpoint Dedicated endpoints for more throughput, higher security, and hosting of models post-trained on your tools. Want a dedicated endpoint? [Contact us](https://calendly.com/jack-subconscious/dedicated). # On Prem Source: https://docs.subconscious.dev/enterprise/on-prem Deploy in your infra. Run our system on your own infrastructure. Need on prem? [Contact us](https://calendly.com/jack-subconscious/dedicated). # Privacy and Security Source: https://docs.subconscious.dev/enterprise/privacy How information is logged. We take security with the utmost seriousness because we understand that we're handling your core data and workflows. Your business processes, sensitive information, and operational data are entrusted to us, and we treat this responsibility with the highest level of care and attention. We deeply value our customers' data because we recognize that it represents more than just information. It represents your business processes, intellectual property, customer relationships, and competitive advantages. We are currently in the process of obtaining SOC2 compliance to provide independent verification of our security controls and data protection practices. 
**Contact us if you have questions about our privacy and security practices.** # Error Handling Source: https://docs.subconscious.dev/guides/error-handling Handle API errors and implement retries ## Error Response Format ```json theme={null} { "error": { "code": "invalid_request", "message": "The 'engine' field is required" } } ``` ## HTTP Status Codes | Status | Meaning | What to do | | ------ | ------------------- | ------------------------ | | `400` | Bad Request | Fix request parameters | | `401` | Unauthorized | Check API key | | `402` | Payment Required | Add credits | | `403` | Forbidden | Check permissions | | `404` | Not Found | Verify run ID | | `409` | Conflict | Idempotency collision | | `500` | Server Error | Retry with backoff | | `502` | Bad Gateway | Upstream failed, retry | | `503` | Service Unavailable | Engine down, retry later | ## SDK Exceptions Both SDKs throw typed exceptions you can catch: ```python Python theme={null} from subconscious import Subconscious from subconscious.errors import ( SubconsciousError, # Base class AuthenticationError, # 401 ValidationError, # 400 NotFoundError, # 404 ) import os client = Subconscious(api_key=os.environ.get("SUBCONSCIOUS_API_KEY")) try: run = client.run( engine="tim-gpt", input={"instructions": "Hello"}, options={"await_completion": True}, ) except AuthenticationError: print("Invalid API key") except ValidationError as e: print(f"Bad request: {e.message}") except SubconsciousError as e: print(f"Error {e.status}: {e.code} - {e.message}") ``` ```typescript Node.js theme={null} import { Subconscious, SubconsciousError, // Base class AuthenticationError, // 401 ValidationError, // 400 NotFoundError, // 404 } from "subconscious"; const client = new Subconscious({ apiKey: process.env.SUBCONSCIOUS_API_KEY!, }); try { const run = await client.run({ engine: "tim-gpt", input: { instructions: "Hello" }, options: { awaitCompletion: true }, }); } catch (e) { if (e instanceof AuthenticationError) { console.error("Invalid API key"); } else if (e instanceof ValidationError) { console.error(`Bad request: ${e.message}`); } else if (e instanceof SubconsciousError) { console.error(`Error ${e.status}: ${e.code} - ${e.message}`); } } ``` Exception properties: * `code` — Error type (`"invalid_request"`, `"authentication_failed"`, etc.) 
* `status` — HTTP status code * `message` — Human-readable message * `details` — Additional context (validation errors) ## Retry Logic Retry on 5xx errors with exponential backoff: ```python Python theme={null} import time import os from subconscious import Subconscious from subconscious.errors import SubconsciousError client = Subconscious(api_key=os.environ.get("SUBCONSCIOUS_API_KEY")) def run_with_retry(instructions, max_retries=3): for attempt in range(max_retries): try: return client.run( engine="tim-gpt", input={"instructions": instructions}, options={"await_completion": True}, ) except SubconsciousError as e: if e.status < 500 or attempt == max_retries - 1: raise time.sleep(2 ** attempt) run = run_with_retry("Summarize the news") ``` ```typescript Node.js theme={null} import { Subconscious, SubconsciousError } from "subconscious"; const client = new Subconscious({ apiKey: process.env.SUBCONSCIOUS_API_KEY!, }); async function runWithRetry(instructions: string, maxRetries = 3) { for (let attempt = 0; attempt < maxRetries; attempt++) { try { return await client.run({ engine: "tim-gpt", input: { instructions }, options: { awaitCompletion: true }, }); } catch (e) { if (!(e instanceof SubconsciousError) || e.status < 500) throw e; if (attempt === maxRetries - 1) throw e; await new Promise((r) => setTimeout(r, Math.pow(2, attempt) * 1000)); } } } const run = await runWithRetry("Summarize the news"); ``` ## Run-Level Errors Runs can fail after being accepted. Check the `status` field: ```python Python theme={null} run = client.run( engine="tim-gpt", input={"instructions": "Your task"}, options={"await_completion": True}, ) match run.status: case "succeeded": print(run.result.answer) case "failed": print(f"Failed: {run.error.message}") case "timed_out": print("Timed out") case "canceled": print("Canceled") ``` ```typescript Node.js theme={null} const run = await client.run({ engine: "tim-gpt", input: { instructions: "Your task" }, options: { awaitCompletion: true }, }); switch (run.status) { case "succeeded": console.log(run.result?.answer); break; case "failed": console.error("Failed:", run.error); break; case "timed_out": console.error("Timed out"); break; case "canceled": console.log("Canceled"); break; } ``` ## Quick Reference | Error | Code | Fix | | --------------- | ----- | ----------------------------------------- | | Missing API key | `401` | Add `Authorization: Bearer sk-...` header | | Invalid API key | `401` | Check key is active in dashboard | | No credits | `402` | Add credits at subconscious.dev | | Run not found | `404` | Verify run ID exists | | Bad request | `400` | Check required fields and types | | Engine down | `503` | Retry in a few minutes | # Response Handling Source: https://docs.subconscious.dev/guides/response-handling Parse and work with agent responses and reasoning traces When a run completes, the agent returns structured results that include both the final answer and the reasoning process. This guide explains how to interpret and use these responses. ## Response Structure The `run.result` object contains: ```typescript theme={null} interface RunResult { answer: string | object; // String without answerFormat, object with answerFormat reasoning?: Task[]; // Step-by-step reasoning (when available) } ``` ## The Answer Field The `answer` field contains the agent's final response. 
Depending on your use case: * **Without structured output**: A natural language string * **With structured output**: A parsed object matching your schema (no JSON parsing needed) ### Accessing Structured Answers When using `answerFormat`, the answer is already a parsed object. See [Structured Output](/core-concepts/structured-output) for schema examples. ```python Python theme={null} from subconscious import Subconscious client = Subconscious(api_key="your-api-key") run = client.run( engine="tim-gpt", input={ "instructions": "Analyze sentiment", "answerFormat": {...} # Your schema }, options={"await_completion": True}, ) # answer is already a dict - no parsing needed answer = run.result.answer print(answer["sentiment"]) ``` ```typescript Node.js theme={null} import { Subconscious } from "subconscious"; const client = new Subconscious({ apiKey: process.env.SUBCONSCIOUS_API_KEY!, }); const run = await client.run({ engine: "tim-gpt", input: { instructions: "Analyze sentiment", answerFormat: {...} // Your schema }, options: { awaitCompletion: true }, }); // answer is already an object - no parsing needed const answer = run.result?.answer; console.log(answer.sentiment); ``` ## Understanding the Reasoning Field The `reasoning` field provides visibility into how the agent arrived at its answer. Each task in the array represents a step in the reasoning process. ### Task Structure ```typescript theme={null} interface Task { thought?: string; // Agent's internal reasoning title?: string; // Step title tooluse?: ToolCall; // Tool call details subtasks?: Task[]; // Nested reasoning steps conclusion?: string; // Step conclusion } interface ToolCall { tool_name: string; // Name of the tool called parameters: any; // Input parameters tool_result: any; // Result from the tool } ``` ### Example Reasoning Trace ```json theme={null} { "reasoning": [ { "title": "Analyzing the request", "thought": "I need to search for information about Tesla's stock performance", "tooluse": { "tool_name": "web_search", "parameters": { "query": "Tesla stock performance this week" }, "tool_result": { "results": [ { "title": "Tesla Stock Analysis", "snippet": "Tesla's stock has shown..." } ] } }, "conclusion": "Found relevant information about Tesla's stock" } ], "answer": "Based on my search, Tesla's stock performance this week shows..." 
} ``` ## Extracting Tool Call Information Loop through the reasoning to extract tool usage: ```python Python theme={null} def extract_tool_calls(reasoning): tool_calls = [] for task in reasoning or []: if task.get("tooluse"): tool_calls.append({ "tool": task["tooluse"]["tool_name"], "params": task["tooluse"]["parameters"], "result": task["tooluse"]["tool_result"] }) # Check subtasks recursively if task.get("subtasks"): tool_calls.extend(extract_tool_calls(task["subtasks"])) return tool_calls # Usage calls = extract_tool_calls(run.result.reasoning) for call in calls: print(f"Tool: {call['tool']}") ``` ```typescript Node.js theme={null} function extractToolCalls(reasoning: Task[]): any[] { const toolCalls: any[] = []; for (const task of reasoning || []) { if (task.tooluse) { toolCalls.push({ tool: task.tooluse.tool_name, params: task.tooluse.parameters, result: task.tooluse.tool_result, }); } if (task.subtasks) { toolCalls.push(...extractToolCalls(task.subtasks)); } } return toolCalls; } // Usage const calls = extractToolCalls(run.result?.reasoning || []); calls.forEach((call) => console.log(`Tool: ${call.tool}`)); ``` ## Usage Information The `usage` field provides metrics about the run: ```typescript theme={null} interface RunUsage { inputTokens: number; // Tokens in the prompt outputTokens: number; // Tokens generated durationMs: number; // Total execution time toolCalls?: { // Tool usage breakdown [toolName: string]: number; }; } ``` ### Tracking Costs Use the usage information to track costs: ```python theme={null} usage = run.usage input_cost = usage.input_tokens / 1_000_000 * 2.00 # tim-gpt pricing output_cost = usage.output_tokens / 1_000_000 * 8.00 total_cost = input_cost + output_cost print(f"Run cost: ${total_cost:.4f}") ``` ## Related Understanding run lifecycle Define response schemas Handle errors gracefully # Templates & Examples Source: https://docs.subconscious.dev/guides/templates Pre-built examples to jumpstart your Subconscious agent projects Use `npx create-subconscious-app` to quickly scaffold a new project from one of our templates. Each template includes working code, documentation, and best practices. **Build with AI coding assistants:** Install our skill to get expert Subconscious guidance while coding: ```bash theme={null} npx skills add https://github.com/subconscious-systems/skills --skill subconscious-dev ``` This gives Claude Code, Cursor, and other AI assistants deep knowledge of Subconscious patterns—perfect for vibecoding your templates! ## Quick Start ```bash theme={null} npx create-subconscious-app ``` The CLI will: 1. Show you available templates 2. Let you choose one interactively 3. Download and set up the project 4. Provide next steps You can also specify a template directly: ```bash theme={null} npx create-subconscious-app my-agent --example e2b_cli ``` Or list all available templates: ```bash theme={null} npx create-subconscious-app --list ``` ## Available Templates

### E2B CLI Agent

TypeScript • Autonomous agent with secure code execution

* Long-horizon reasoning
* Secure sandbox execution
* File I/O
* Multi-language support
* Data science ready

### Search Agent CLI

Python • Web research and information gathering

* Multi-source research
* Live thinking feedback
* Real-time streaming
* Multiple search providers
* Automatic citations

### Structured Output (TypeScript)

TypeScript • Type-safe agent responses

* Zod validation
* TypeScript inference
* Automatic parsing
* Sentiment analysis example

### Structured Output (Python)

Python • Type-safe agent responses

* Pydantic validation
* Automatic type conversion
* Sentiment analysis example
* FastAPI/Django ready

### Convex Real-time App

TypeScript (React + Convex) • Full-stack real-time applications

* Real-time WebSocket updates
* React frontend
* Convex backend
* Tool calling
* Database integration
## Customizing Templates All templates are fully customizable. After creating a project: 1. **Modify the code** - Each template includes well-documented source code 2. **Add tools** - Extend functionality by adding custom tools 3. **Change the engine** - Switch between different Subconscious engines 4. **Customize the UI** - For frontend templates, modify React components ## Contributing Templates Have a great example? We'd love to include it! Templates should: * Be production-ready or close to it * Include clear documentation * Follow best practices * Be easy to understand and modify [Submit a template →](https://github.com/subconscious-systems/subconscious) ## Related Get started in 5 minutes Learn the fundamentals Add custom tools to your agents Get type-safe responses # Libraries Source: https://docs.subconscious.dev/libraries Official client libraries for Node.js and Python We provide official SDKs for seamless integration with your preferred language. ## Install an official SDK Install the official TypeScript/JavaScript SDK from [npm](https://www.npmjs.com/package/subconscious). ```bash theme={null} npm install subconscious ``` A simple API request would look like this: ```typescript theme={null} import { Subconscious } from "subconscious"; const client = new Subconscious({ apiKey: process.env.SUBCONSCIOUS_API_KEY!, }); const run = await client.run({ engine: "tim-gpt", input: { instructions: "Your task here", tools: [], }, options: { awaitCompletion: true }, }); console.log(run.result?.answer); ``` Install the official Python SDK from [PyPI](https://pypi.org/project/subconscious-sdk/). ```bash theme={null} pip install subconscious-sdk ``` A simple API request would look like this: ```python theme={null} from subconscious import Subconscious client = Subconscious(api_key="your-api-key") run = client.run( engine="tim-gpt", input={ "instructions": "Your task here", "tools": [], }, options={"await_completion": True}, ) print(run.result.answer) ``` ## SDK Methods Both SDKs provide the same core methods: | Method | Description | | ---------------------- | ------------------------------------ | | `client.run()` | Create and optionally wait for a run | | `client.stream()` | Create a run and stream events | | `client.get(runId)` | Get the status of a run | | `client.wait(runId)` | Poll until a run completes | | `client.cancel(runId)` | Cancel a running or queued run | ## Next Steps See the SDKs in action with a complete example Learn about Runs, Tools, and Streaming # Subconscious Platform Source: https://docs.subconscious.dev/overview Subconscious is a developer-first platform for building production-ready AI agents. We provide a complete agent system that handles context management, tool orchestration, and long-horizon reasoning, so you can focus on defining what your agent should do. Our platform works with any service or tool, including MCPs, giving you everything you need to deploy sophisticated agents at scale. ## Quickstart Build your first agent in under 5 minutes Start from pre-built examples ### For AI tools * [subconscious.dev/llms.txt](https://docs.subconscious.dev/llms.txt) - Documentation index with links * [subconscious.dev/llms-full.txt](https://docs.subconscious.dev/llms-full.txt) - Complete documentation in one file **Build with AI coding assistants:** Let your coding agents understand the Subconscious API. 
```bash theme={null} npx skills add https://github.com/subconscious-systems/skills --skill subconscious-dev ``` This gives Claude Code, Cursor, and other AI assistants deep knowledge of Subconscious, so you can build with the API faster. ### Try it now ```python Python theme={null} from subconscious import Subconscious client = Subconscious(api_key="your-api-key") run = client.run( engine="tim-gpt", input={"instructions": "Summarize the latest AI news"}, options={"await_completion": True}, ) print(run.result.answer) ``` ```typescript Node.js theme={null} import { Subconscious } from "subconscious"; const client = new Subconscious({ apiKey: process.env.SUBCONSCIOUS_API_KEY!, }); const run = await client.run({ engine: "tim-gpt", input: { instructions: "Summarize the latest AI news" }, options: { awaitCompletion: true }, }); console.log(run.result?.answer); ``` ## Engines Our agent engines combine a language model with a custom inference runtime. We offer **Unified Engines** (co-designed model and runtime) and **Compound Engines** (frontier models with smart context management). Unified engine tuned for performance with search tools Compound engine backed by OpenAI GPT-4.1 Compound engine backed by OpenAI GPT-5.2 [View all engines →](/engines) ## Start building Get started in 5 minutes Extend agents with your APIs Real-time output for chat UIs Durable jobs for pipelines Get JSON responses from agents Full endpoint documentation # Pricing Source: https://docs.subconscious.dev/pricing How pricing works. ## How Pricing Works We offer three agent engines with different pricing tiers based on your needs: ### Available Engines | Engine | Type | Input Tokens | Output Tokens | | ----------------- | -------- | -------------------- | --------------------- | | **TIM-Edge** | Unified | \$0.50 per 1M tokens | \$2.00 per 1M tokens | | **TIM-GPT** | Compound | \$2.00 per 1M tokens | \$8.00 per 1M tokens | | **TIM-GPT-Heavy** | Compound | \$2.00 per 1M tokens | \$15.00 per 1M tokens | ### Input vs Output Tokens **Input Tokens** include: * Your initial prompt * Tool use response values injected into the context window during an agent run. **Output Tokens** include: * Only what the model generates. This includes reasoning tokens, parameter generation, and the final result. # Quickstart Source: https://docs.subconscious.dev/quickstart Get started with Subconscious in under 5 minutes ## 1. Create an Account [Sign up for the platform](https://subconscious.dev/platform) and create your account. Then generate an API key from your dashboard. ## 2. Try the Playground Before writing code, try the [Playground](https://www.subconscious.dev/playground) to test prompts and see agents in action. Subconscious Playground **Build with AI coding assistants:** Install our skill to get expert Subconscious guidance while coding: ```bash theme={null} npx skills add https://github.com/subconscious-systems/skills --skill subconscious-dev ``` This gives Claude Code, Cursor, and other AI assistants deep knowledge of Subconscious patterns—perfect for vibecoding your next agent app! ## 3.
Choose Your Path ### Option A: Start with a Template (Recommended) The fastest way to get started is using our template generator: ```bash theme={null} npx create-subconscious-app ``` This interactive CLI lets you choose from pre-built examples: * **E2B CLI Agent** - Autonomous agent with secure code execution * **Search Agent CLI** - Web research agent with streaming * **Structured Output (TypeScript)** - Type-safe responses with Zod * **Structured Output (Python)** - Type-safe responses with Pydantic * **Convex Real-time App** - Full-stack app with real-time updates See [Templates](/guides/templates) for details on each example. ### Option B: Install the SDK **Step 1: Install the SDK** ```bash Node.js theme={null} npm install subconscious ``` ```bash Python theme={null} pip install subconscious-sdk ``` **Step 2: Run Your First Agent** ```typescript Node.js theme={null} import { Subconscious } from "subconscious"; const client = new Subconscious({ apiKey: process.env.SUBCONSCIOUS_API_KEY!, }); const run = await client.run({ engine: "tim-gpt", input: { instructions: "Search for the latest AI news and summarize the top 3 stories", tools: [{ type: "platform", id: "web_search" }], }, options: { awaitCompletion: true }, }); console.log(run.result?.answer); ``` ```python Python theme={null} from subconscious import Subconscious client = Subconscious(api_key="your-api-key") run = client.run( engine="tim-gpt", input={ "instructions": "Search for the latest AI news and summarize the top 3 stories", "tools": [{"type": "platform", "id": "web_search"}], }, options={"await_completion": True}, ) print(run.result.answer) ``` ```bash cURL theme={null} curl https://api.subconscious.dev/v1/runs \ -H "Content-Type: application/json" \ -H "Authorization: Bearer YOUR_API_KEY" \ -d '{ "engine": "tim-gpt", "input": { "instructions": "Search for the latest AI news and summarize the top 3 stories", "tools": [{"type": "platform", "id": "web_search"}] } }' ``` That's it! You've just run your first agent. ## Next Steps Explore pre-built examples and templates Learn about Runs, Tools, Streaming, and more Explore available models and their capabilities Stream responses in real-time Full API documentation # Learn More Source: https://docs.subconscious.dev/resources/learn-more Explore more about our work and our team * [Join our team](https://www.subconscious.dev/careers?ref=docs): We're hiring! Join our founding team. * [Research](https://www.subconscious.dev/research?ref=docs): Read the technical report for our release and experiment with TIM. * [About our team and our mission](https://www.subconscious.dev/about?ref=docs): Learn about what we think the next 5 years will look like * **Alpha**: Alpha is Hongyin's dog. She's the best of us. Alpha, Hongyin's dog # Logs Source: https://docs.subconscious.dev/resources/logs How information is logged. ## Overview Logs serve as a comprehensive history of your interactions with our system and provide valuable insights for debugging your agents and understanding their thinking and reasoning processes. 
## What We Log ### Requests and Responses We automatically log every request you send through Subconscious, including: * **Your input requests** - The complete request data you submit * **Model outputs** - The full response from our AI models * **HTTP status codes** - Success (200), client errors (400), and server errors (500) ### Usage Tracking We track your usage across all platforms: * **Playground usage** - All interactions in our web interface * **API usage** - Every API call made to our endpoints ## Accessing Your Logs You can view all your logged data directly in our platform dashboard, giving you complete visibility into your system interactions and performance. ## Privacy and Sensitive Data If you're working with sensitive information that you prefer not to be recorded in our logs, please [contact our team](mailto:privacy@subconscious.dev) to discuss custom logging arrangements. ## Benefits Our comprehensive logging system helps you: * **Debug issues** - Quickly identify and resolve problems * **Monitor performance** - Track response times and success rates * **Understand agent behavior** - See the complete reasoning process * **Audit usage** - Maintain detailed records of all interactions ## Async jobs and webhook logs For async workloads and webhooks there are a few additional places to look: * **Worker logs** * Async worker: processes jobs from the queue and calls the engine. * Webhook worker: delivers webhook POSTs and handles retries/backoff. * Logs include: * `requestId`, `jobId`, `deliveryId`, `orgId`, * `durationMs`, `queueAgeMs`, `status`, `errorCode`. * **Queue health and scaling** * We emit metrics and alarms around: * Async queue age, * Webhook queue age, * Async and webhook DLQ depth. * This makes it easy to spot stuck backlogs or failing deliveries. * **Local test logs** * When you run our helper scripts (for example `run-all-tests.sh`), we write: * `*-jest.log`, `*-tsc.log`, `*-env.log`, and queue snapshots under `apps/api/test-logs/` in the monorepo. * These logs help you verify that your environment is wired correctly and make regressions easy to detect.