Quickstart

This guide gets you from a fresh account to your first inspectable AI execution. The current product flow is workspace-first: create an organization, open the AI platform, define a prompt, optionally attach a tool, run it, and inspect the result.

Before you begin

You need:

  • a signed-in user session
  • permission to access the AI platform in an organization
  • an organization with AI enabled in its entitlements

If you are starting from a brand-new account, the product sends you through onboarding first.

Create a workspace

Current onboarding is a three-step flow:

  1. Create your workspace by submitting a full name and organization name.
  2. Review the default Free plan summary.
  3. Choose where to begin, such as Playground, Models, Users, or Billing.

The backend provisions the organization server-side through POST /api/onboarding/complete, then the app refreshes tenant context and routes you into the product.

Complete onboarding

curl -X POST "$API_BASE/api/onboarding/complete" \
  -H "Authorization: Bearer $USER_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "fullName": "Jane Doe",
    "organizationName": "Acme AI"
  }'

Create a prompt

Open the Prompts page in the AI platform and create a prompt with:

  • a name
  • an optional description
  • an optional default model

Prompt versions are the executable unit. A prompt can have multiple versions, and each version can define:

  • system prompt content
  • optional developer prompt content
  • variables schema
  • response format
  • temperature and output token limits
  • attached layers
  • attached tools
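As a sketch, creating a version for that prompt could look like the following. The `/api/prompts/:id/versions` path and the field names (`systemPrompt`, `variablesSchema`, `responseFormat`, `maxOutputTokens`) are assumptions for illustration, not the confirmed API shape; check the API reference for the exact contract.

```shell
# Hypothetical version endpoint and field names -- illustrative only.
curl -X POST "$API_BASE/api/prompts/prompt_123/versions" \
  -H "Authorization: Bearer $USER_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "organizationId": "org_123",
    "systemPrompt": "Decide return eligibility from the order details provided.",
    "variablesSchema": {
      "type": "object",
      "properties": {
        "orderId": { "type": "string" },
        "reason": { "type": "string" }
      },
      "required": ["orderId"]
    },
    "responseFormat": "json",
    "temperature": 0.2,
    "maxOutputTokens": 512
  }'
```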

Create a prompt

curl -X POST "$API_BASE/api/prompts" \
  -H "Authorization: Bearer $USER_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "organizationId": "org_123",
    "name": "Returns eligibility",
    "description": "Decide whether a purchase qualifies for return"
  }'

Attach a tool

Tools are optional, but they are what makes the platform operational rather than purely prompt-driven. In the current app, tools can be created as:

  • manual_stub for deterministic fixture-like outputs
  • http for outbound calls to allowlisted hosts
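For the `http` type, the tool definition would carry its outbound target in `config`. The config keys shown below (`url`, `method`) are assumptions for illustration; the host must be on the organization's allowlist.

```shell
# Hypothetical http-tool config -- the real config keys may differ.
curl -X POST "$API_BASE/api/tools" \
  -H "Authorization: Bearer $USER_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "organizationId": "org_123",
    "toolKey": "fetch_order",
    "name": "Fetch Order",
    "type": "http",
    "config": {
      "url": "https://orders.example.com/api/orders/{orderId}",
      "method": "GET"
    }
  }'
```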

After you create a tool, attach it to a prompt version. That same prompt version can then be reused in direct runs, multi-step agents, and voice agents.

Create a tool

curl -X POST "$API_BASE/api/tools" \
  -H "Authorization: Bearer $USER_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "organizationId": "org_123",
    "toolKey": "lookup_policy",
    "name": "Lookup Policy",
    "type": "manual_stub",
    "config": {
      "result": { "ok": true, "policy": "returnable" }
    },
    "inputSchema": {
      "type": "object",
      "properties": {
        "orderId": { "type": "string" }
      },
      "required": ["orderId"]
    }
  }'
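Attaching the tool to a prompt version might then look like the following; the endpoint path and body are assumptions for illustration, not the confirmed API shape.

```shell
# Hypothetical attach endpoint -- confirm the real path in the API reference.
curl -X POST "$API_BASE/api/prompts/prompt_123/versions/version_456/tools" \
  -H "Authorization: Bearer $USER_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "organizationId": "org_123",
    "toolKey": "lookup_policy"
  }'
```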

Run and inspect

You can execute a prompt from the app or programmatically. The platform stores each execution as a run with runtime metadata such as:

  • prompt and version
  • provider and model
  • input and output tokens
  • estimated cost
  • latency
  • tool call counts

The Runs and Run Detail views are where you inspect what actually executed.
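Programmatically, the same inspection could be sketched as a run lookup. The `/api/runs/:id` path and the response field names in the comment are assumptions for illustration; the Run Detail view in the app is the authoritative surface.

```shell
# Hypothetical run-detail endpoint -- field names are illustrative.
curl -X GET "$API_BASE/api/runs/run_456?organizationId=org_123" \
  -H "Authorization: Bearer $USER_TOKEN"
# A response might include fields along these lines:
# { "promptId": "prompt_123", "versionId": "version_456",
#   "provider": "...", "model": "...",
#   "inputTokens": 812, "outputTokens": 64,
#   "estimatedCostUsd": 0.0021, "latencyMs": 1430, "toolCallCount": 1 }
```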

Run a prompt

curl -X POST "$API_BASE/api/prompts/prompt_123/run" \
  -H "Authorization: Bearer $USER_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "organizationId": "org_123",
    "variables": {
      "orderId": "order_987",
      "reason": "damaged"
    }
  }'

Next steps

  • Read Prompts to understand versions, layers, and execution.
  • Read Tools to understand runtime capability design.
  • Read Runs to understand observability and debugging.
  • Read Voice Agents and Realtime Sessions for live conversations.
  • Read Service Keys if you want to automate these flows from backend systems.
