When a run completes, the agent returns structured results that include both the final answer and the reasoning process. This guide explains how to interpret and use these responses.

Response Structure

The run.result object contains:
interface RunResult {
  answer: string; // Natural language, or JSON-encoded string when using answerFormat
  parsedAnswer?: unknown; // Client-side JSON.parse of `answer` (parsed_answer in Python)
  reasoning?: Task[]; // Step-by-step reasoning (when available)
}

The Answer Field

The answer field contains the agent’s final response as a string:
  • Without structured output: A natural language string
  • With structured output: A JSON-encoded string matching your schema
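In both modes the field is a plain string, so it is always safe to log or persist as-is. A minimal check, assuming run is a completed run object like the one created in the next section:
# answer is always a str, whether or not answerFormat was supplied
print(run.result.answer)
assert isinstance(run.result.answer, str)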

Accessing Structured Answers

When using answerFormat, the API returns answer as a JSON-encoded string. Both SDKs decode it for you and expose the native value on run.result.parsed_answer (Python) / run.result.parsedAnswer (TypeScript); answer keeps the raw string if you need it. The SDKs do not validate the decoded value against your schema, so validate it yourself with Pydantic or Zod if you want a typed instance.
from subconscious import Subconscious

client = Subconscious(api_key="your-api-key")

run = client.run(
    engine="tim",
    input={
        "instructions": "Analyze sentiment",
        "answerFormat": {...}  # Your schema
    },
    options={"await_completion": True},
)

# run.result.answer is the raw JSON string; parsed_answer is the decoded dict
print(run.result.parsed_answer["sentiment"])

# Hydrate into a typed Pydantic model if you need validation:
# typed = SentimentAnalysis.model_validate(run.result.parsed_answer)
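
If you want validation on top of the decoded dict, hydrate it into a Pydantic model. A minimal sketch; the SentimentAnalysis model and its fields are illustrative and assume your answerFormat schema defines them:
from pydantic import BaseModel

class SentimentAnalysis(BaseModel):
    sentiment: str     # hypothetical field -- mirror your actual schema
    confidence: float  # hypothetical field

# Raises pydantic.ValidationError if the decoded answer doesn't match
typed = SentimentAnalysis.model_validate(run.result.parsed_answer)
print(typed.sentiment, typed.confidence)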

Understanding the Reasoning Field

The reasoning field provides visibility into how the agent arrived at its answer. Each task in the array represents a step in the reasoning process.

Task Structure

interface Task {
  thought?: string; // Agent's internal reasoning
  title?: string; // Step title
  tooluse?: ToolCall; // Tool call details
  subtasks?: Task[]; // Nested reasoning steps
  conclusion?: string; // Step conclusion
}

interface ToolCall {
  tool_name: string; // Name of the tool called
  tool_call_id?: string; // Server-assigned id for the tool call
  parameters: any; // Input parameters
  tool_result: any; // Result from the tool
}

Example Reasoning Trace

{
  "reasoning": [
    {
      "title": "Analyzing the request",
      "thought": "I need to search for information about Tesla's stock performance",
      "tooluse": {
        "tool_name": "web_search",
        "parameters": {
          "query": "Tesla stock performance this week"
        },
        "tool_result": {
          "results": [
            {
              "title": "Tesla Stock Analysis",
              "snippet": "Tesla's stock has shown..."
            }
          ]
        }
      },
      "conclusion": "Found relevant information about Tesla's stock"
    }
  ],
  "answer": "Based on my search, Tesla's stock performance this week shows..."
}
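
Because tasks nest through subtasks, a depth-first walk is the natural way to render a trace. A minimal sketch, assuming the Python SDK's attribute-style Task models described above:
def print_trace(tasks, depth=0):
    # Depth-first walk over the reasoning tree, indenting nested subtasks
    for task in tasks or []:
        indent = "  " * depth
        if task.title:
            print(f"{indent}{task.title}")
        if task.tooluse:
            print(f"{indent}  tool: {task.tooluse.tool_name}")
        if task.conclusion:
            print(f"{indent}  conclusion: {task.conclusion}")
        if task.subtasks:
            print_trace(task.subtasks, depth + 1)

print_trace(run.result.reasoning)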

Extracting Tool Call Information

Walk the reasoning tree recursively to extract tool usage:
def extract_tool_calls(reasoning):
    # reasoning is a list of ReasoningTask Pydantic models, so use attribute access
    tool_calls = []
    for task in reasoning or []:
        if task.tooluse:
            tool_calls.append({
                "tool": task.tooluse.tool_name,
                "params": task.tooluse.parameters,
                "result": task.tooluse.tool_result,
            })
        # Check subtasks recursively
        if task.subtasks:
            tool_calls.extend(extract_tool_calls(task.subtasks))
    return tool_calls

# Usage
calls = extract_tool_calls(run.result.reasoning)
for call in calls:
    print(f"Tool: {call['tool']}")

Usage Information

The usage field provides metrics about the run:
interface RunUsage {
  inputTokens: number; // Tokens in the prompt
  outputTokens: number; // Tokens generated
  durationMs?: number; // Total execution time (when reported by the server)
}
In Python, fields are exposed as snake_case attributes (input_tokens, output_tokens, duration_ms) on the Usage Pydantic model.

Tracking Costs

Use the usage information to track costs:
usage = run.usage
input_cost = usage.input_tokens / 1_000_000 * 2.00  # tim pricing
output_cost = usage.output_tokens / 1_000_000 * 8.00
total_cost = input_cost + output_cost
print(f"Run cost: ${total_cost:.4f}")
