Errors

The current backend does not use one perfectly normalized error envelope across every route, but the behavior is consistent enough to document. Failures typically return an HTTP status plus a JSON body containing an error field, a message field, or both.

Error shape

In practice, client code should expect a response body similar to one of these patterns:

Typical error shapes

{ "error": "organizationId is required" }

Alternative shape

{ "error": "failed", "message": "Prompt execution failed upstream" }

Validation-specific shape

{
  "error": "Graph validation failed",
  "code": "AGENT_GRAPH_INVALID",
  "details": [
    { "code": "missing_entry_step", "message": "Entry step is required" }
  ]
}
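Because the body can take any of the shapes above, client code benefits from normalizing them before display. The helper below is a minimal sketch (the function name is ours, not part of the API) that prefers the usually more descriptive message field over error:

```javascript
// Extract the most specific human-readable message from an error body.
// Prefers "message" over "error", and falls back to a generic string
// when neither field is present.
function errorMessage(body) {
  if (body && typeof body.message === 'string') return body.message
  if (body && typeof body.error === 'string') return body.error
  return 'Unknown error'
}
```

For example, given the alternative shape above, errorMessage returns "Prompt execution failed upstream" rather than the less specific "failed".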

Status codes

  • 400: Invalid input, missing required fields, or malformed request payloads.
  • 401: Missing or invalid authentication.
  • 402: Billing or entitlement enforcement, such as seat limits.
  • 404: The requested resource does not exist in the authorized organization.
  • 409: Conflict, such as inviting a user who is already an active member.
  • 422: Structured validation failure, currently used for invalid agent graphs.
  • 500: Internal server error, provider failure, or an uncaught backend exception.
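One practical way to consume this list is to map each status to a coarse handling strategy. The sketch below does exactly that; the category names are our own convention, not part of the API:

```javascript
// Map an HTTP status to a coarse handling strategy, mirroring the
// status code list documented above.
function classifyStatus(status) {
  switch (status) {
    case 400: return 'fix-request'      // correct the payload before retrying
    case 401: return 'reauthenticate'   // refresh or replace the bearer token
    case 402: return 'billing'          // surface an upgrade or usage prompt
    case 404: return 'check-resource'   // verify the id and organization scope
    case 409: return 'conflict'         // resolve the existing state first
    case 422: return 'fix-validation'   // inspect structured validation details
    default:  return status >= 500 ? 'retry-or-report' : 'unknown'
  }
}
```

A design note: only the 5xx bucket is a candidate for blind retries; every 4xx status above indicates something the client must change first.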

Common platform failures

Most integration failures fall into a few buckets:

  • missing organizationId
  • invalid or missing bearer token
  • lacking the required permission for the organization
  • lacking the required billing entitlement for AI or voice features
  • invalid prompt, tool, agent, or version identifiers
  • invalid agent graph configuration at publish time
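The first two buckets can be caught before a request is ever sent. The following pre-flight check is a hypothetical helper (not part of the SDK) that fails fast with the same wording the platform uses:

```javascript
// Pre-flight check covering the two most common integration failures:
// a missing organizationId and a missing bearer token. Throws before
// any network request is made.
function assertRequestBasics({ organizationId, token }) {
  if (!organizationId) throw new Error('organizationId is required')
  if (!token) throw new Error('missing bearer token')
}
```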

Example entitlement failure handling

try {
  await client.runs.list({ organizationId: 'org_123' })
} catch (error) {
  // The thrown error carries the HTTP status of the failed request
  if (error.status === 402) {
    // Billing or entitlement failure: prompt the user to upgrade or reduce usage
  }
}

Debugging workflow

When an AI execution fails, debug in this order:

  1. Check the HTTP status and response body.
  2. Confirm the credential type and organization scope are correct.
  3. Confirm the referenced prompt, version, tool, or agent exists in that organization.
  4. Inspect the corresponding Run, Agent Run, or Realtime Session record in the app.
  5. If the issue is graph-related, look for structured validation details or missing entry/transition data.
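For step 5, the structured validation body shown earlier can be flattened into printable lines for a debug log. This sketch assumes the 422 shape documented above; the helper name is ours:

```javascript
// Flatten a structured validation body (e.g. code AGENT_GRAPH_INVALID)
// into one printable line per detail entry. Returns an empty array for
// bodies without a details array.
function formatValidationDetails(body) {
  if (!body || !Array.isArray(body.details)) return []
  return body.details.map((d) => `${d.code}: ${d.message}`)
}
```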

For runtime issues, the observability pages are part of the debugging story, not an optional extra. The fastest way to understand platform behavior is usually to inspect the recorded run rather than to reason only from the request payload.
