We’re currently in research preview! We’re excited to share our system with you, and we’d love to hear your feedback.
Getting started with Subconscious is straightforward: set your goals, make your tools, and call the Subconscious API. The platform handles the complex reasoning and tool orchestration for you.

Basic Workflow

  1. Define your prompt - Describe what you want the agent to accomplish
  2. Create your tools - Define the functions your agent can use (optional)
  3. Call the API - Send your request and get intelligent results

Code Examples

Note: TIM models currently require streaming (stream=True) for sync chat completions when using the OpenAI-compatible API.

Example 1: Calling an Agent with No Tools

The simplest use case: send a prompt and get a response back.
from openai import OpenAI

# Initialize the client with Subconscious endpoint
client = OpenAI(
    base_url="https://api.subconscious.dev/v1",
    api_key="YOUR_API_KEY"
)

# Make a simple request without tools
response = client.chat.completions.create(
    model="tim-large",
    messages=[
        {
            "role": "user",
            "content": "Find the derivative of f(x) = x^3 * sin(x)"
        }
    ],
    stream=True
)

# Stream and print the response
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")
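Because the response streams in as a series of deltas, you often want the full content string before doing anything else with it (for example, parsing the JSON described under Interpreting Results). A small helper like the following (hypothetical, not part of the OpenAI SDK) accumulates the deltas into one string:

```python
def collect_stream(chunks):
    """Join the delta content from a stream of chat-completion chunks."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # deltas may be None on some chunks
            parts.append(delta)
    return "".join(parts)
```

You would call `collect_stream(response)` in place of the print loop whenever you need the complete string rather than incremental output.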

Example 2: Calling an Agent with One Tool

This example illustrates how tools are defined and passed to the API.
from openai import OpenAI

# Initialize the client with Subconscious endpoint
client = OpenAI(
    base_url="https://api.subconscious.dev/v1",
    api_key="YOUR_API_KEY"
)

# Define a tool for web search
tools = [
    {
        "type": "function",
        "name": "web_search",
        "description": "Search the web for current information",
        "url": "TOOL_SERVER_URL",
        "method": "POST",  # or "GET"
        "timeout": 30,  # seconds before the tool call times out
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The search query"
                }
            },
            "required": ["query"],
            "additionalProperties": False
        }
    }
]

# Make request with one tool
response = client.chat.completions.create(
    model="tim-large",
    messages=[
        {
            "role": "user",
            "content": "Find the latest news about Tesla's stock performance this week."
        }
    ],
    tools=tools,
    stream=True
)

# Stream and print the response
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")
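The web_search tool above points at a TOOL_SERVER_URL that you host. As a rough sketch of what such a server might look like, here is a stdlib-only HTTP handler. Two things are assumptions to verify against the platform's tool documentation: that Subconscious sends the tool's parameters as a JSON request body, and that it expects a JSON body back. `handle_web_search` is a hypothetical stand-in for a real search backend.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_web_search(params: dict) -> dict:
    """Hypothetical search backend; replace with a real search call."""
    query = params.get("query", "")
    return {"results": [{"title": f"Result for {query}", "snippet": "..."}]}

class ToolHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON parameters the platform sends for the tool call
        length = int(self.headers.get("Content-Length", 0))
        params = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(handle_web_search(params)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("0.0.0.0", 8080), ToolHandler).serve_forever()
```

The server must be reachable from the Subconscious platform at the URL you pass in the tool definition.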

Example 3: Calling an Agent with Multiple Tools

This example shows how multiple tools can be composed for more advanced agents.
from openai import OpenAI

# Initialize the client with Subconscious endpoint
client = OpenAI(
    base_url="https://api.subconscious.dev/v1",
    api_key="YOUR_API_KEY"
)

# Define multiple tools
tools = [
    {
        "type": "function",
        "name": "web_search",
        "description": "Search the web for current information",
        "url": "TOOL_SERVER_URL",
        "method": "POST",  # or "GET"
        "timeout": 30,  # seconds before the tool call times out
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "The search query"}
            },
            "required": ["query"],
            "additionalProperties": False
        }
    },
    {
        "type": "function",
        "name": "calculate",
        "description": "Perform mathematical calculations",
        "url": "TOOL_SERVER_URL",
        "method": "POST",  # or "GET"
        "timeout": 30,  # seconds before the tool call times out
        "parameters": {
            "type": "object",
            "properties": {
                "expression": {"type": "string", "description": "Mathematical expression to evaluate"}
            },
            "required": ["expression"],
            "additionalProperties": False
        }
    },
    {
        "type": "function",
        "name": "send_email",
        "description": "Send an email notification",
        "url": "TOOL_SERVER_URL",
        "method": "POST",  # or "GET"
        "timeout": 30,  # seconds before the tool call times out
        "parameters": {
            "type": "object",
            "properties": {
                "recipient": {"type": "string", "description": "Email recipient"},
                "subject": {"type": "string", "description": "Email subject"},
                "body": {"type": "string", "description": "Email body"}
            },
            "required": ["recipient", "subject", "body"],
            "additionalProperties": False
        }
    }
]

# Make request with multiple tools
response = client.chat.completions.create(
    model="tim-large",
    messages=[
        {
            "role": "user",
            "content": "Research the current Bitcoin price, calculate the percentage change from last week, and email me a summary report."
        }
    ],
    tools=tools,
    stream=True
)

# Stream and print the response
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")

Async workflows

All of the examples above use the sync (blocking) chat completion API. When you have long‑running tasks or want stronger durability guarantees, you can switch to the async flow:
  • Enqueue work with POST /v1/chat/completions/async.
  • Poll job status with GET /v1/requests/{requestId}/status.
  • Optionally receive callbacks via webhooks when jobs complete.
See the “Async jobs and webhooks” guide in the Platform section for a deeper walkthrough of the async pipeline and how to plug it into your own systems.
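The enqueue-and-poll loop above can be sketched with the standard library. The endpoint paths come from the list above, but the request and status payload field names (`requestId`, `status`, and the terminal state values) are assumptions; check the “Async jobs and webhooks” guide for the exact schemas.

```python
import json
import time
import urllib.request

BASE = "https://api.subconscious.dev/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY",
           "Content-Type": "application/json"}

def post_json(url, payload):
    """POST a JSON payload and return the decoded JSON response."""
    req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                                 headers=HEADERS, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def get_json(url):
    """GET a URL and return the decoded JSON response."""
    req = urllib.request.Request(url, headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def wait_for_result(get_status, interval=2.0, max_attempts=60):
    """Poll get_status() until the job reports a terminal state."""
    for _ in range(max_attempts):
        job = get_status()
        if job.get("status") in ("completed", "failed"):
            return job
        time.sleep(interval)
    raise TimeoutError("async job did not finish in time")

# Usage (not executed here; field names are assumptions):
# job = post_json(f"{BASE}/chat/completions/async",
#                 {"model": "tim-large",
#                  "messages": [{"role": "user", "content": "..."}]})
# final = wait_for_result(
#     lambda: get_json(f"{BASE}/requests/{job['requestId']}/status"))
```

Webhooks remove the need for polling entirely; prefer them for long-running jobs when you can expose a callback endpoint.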

Interpreting Results

Because Subconscious uses an OpenAI-compatible API, the response structure follows the standard OpenAI format. Once the streamed chunks are collected into a complete response, choices[0].message.content holds a JSON-encoded string describing the agent’s reasoning process and final answer.

Response Structure

The object follows this TypeScript interface:
export interface ModelResponse {
  reasoning: Task[];
  answer: string;
}

export interface Task {
  thought?: string;
  title?: string;
  tooluse?: ToolCall;
  subtasks?: Task[];
  conclusion?: string;
}

export interface ToolCall {
  parameters: any;
  tool_name: string;
  tool_result: any;
}

Response Fields Explained

  • reasoning: An array of Task objects that show the agent’s step-by-step reasoning process
  • answer: The final, human-readable answer to your original prompt

Task Structure

Each task in the reasoning array can contain:
  • thought: The agent’s internal reasoning about what to do next
  • title: A descriptive title for this reasoning step
  • tooluse: Details about a tool call, including:
    • parameters: The input parameters sent to the tool
    • tool_name: The name of the tool that was called
    • tool_result: The result returned by the tool
  • subtasks: Nested tasks showing more detailed reasoning steps
  • conclusion: The agent’s conclusion after completing this reasoning step

Example Response

A simplified choices[0].message.content response will contain JSON like this:
{
  "reasoning": [
    {
      "title": "Analyzing the request",
      "thought": "I need to search for information about Tesla's stock performance",
      "tooluse": {
        "tool_name": "web_search",
        "parameters": {
          "query": "Tesla stock performance this week"
        },
        "tool_result": {
          "results": [
            {
              "title": "Tesla Stock Analysis",
              "snippet": "Tesla's stock has shown..."
            }
          ]
        }
      },
      "conclusion": "Found relevant information about Tesla's stock performance"
    }
  ],
  "answer": "Based on my search, Tesla's stock performance this week shows..."
}
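To work with this structure programmatically, you can parse the content string and walk the (possibly nested) reasoning tasks. The sketch below follows the ModelResponse/Task interfaces above; the helper names are mine, not part of any SDK.

```python
import json

def iter_tasks(tasks):
    """Depth-first walk over Task objects, including nested subtasks."""
    for task in tasks:
        yield task
        yield from iter_tasks(task.get("subtasks", []))

def summarize(content: str) -> dict:
    """Parse choices[0].message.content and collect the tools used."""
    parsed = json.loads(content)
    tools = [t["tooluse"]["tool_name"]
             for t in iter_tasks(parsed["reasoning"])
             if t.get("tooluse")]
    return {"answer": parsed["answer"], "tools_used": tools}
```

Walking subtasks recursively matters because the agent may nest detailed reasoning steps several levels deep.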

Other Response Fields

The complete API response also includes:
  • usage - Token usage information for the request
  • model - The model that was used for the completion
This structured response format allows you to:
  • Understand how your agent arrives at its final answer
  • Debug your agent’s reasoning process
  • Monitor tool usage and results
  • Optimize your tool configurations