Monitoring

Agents do work. Monitoring is how you know what they did, whether it went well, and when to fix something.

Three views matter most: Agent Interactions, Async Tasks, and Agent Team Runs.

Agent Interactions

Open Work → Agent Interactions to see every conversation an agent has had — public chats, triggered runs, API calls, agent-team calls.

For each interaction you can see:

  • Which agent ran
  • Who or what triggered it (user, webhook, schedule, parent agent)
  • The full conversation transcript
  • Every tool call, with input and output
  • Token usage and time per turn
  • Final outcome (Completed, In progress, Failed)

Filter by agent, time range, status, and source.

[Screenshot: the Agent Interactions list, with search, agent/source/status filters, and per-row status badges]

If an agent is misbehaving — refusing to answer, calling the wrong tool, looping — this is where you start. Click any row to open the conversation detail panel, which shows the model used, duration, source, and the full system prompt + transcript on the right.

[Screenshot: the interaction detail panel, showing model, duration, source, and the full system prompt (Purpose / Guardrails / Inputs)]
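
If you'd rather pull the same data programmatically, here is a minimal sketch. The endpoint name and query parameters are assumptions modelled on the filters above, not a documented API; check your deployment's API reference for the real names.

```python
import requests

BASE_URL = "https://your-instance.example.com/api"  # hypothetical base URL

def list_interactions(api_key, agent=None, source=None, status=None, since=None):
    """Fetch agent interactions with the same filters the UI offers.
    Endpoint and parameter names are assumptions, not a documented API."""
    params = {k: v for k, v in {
        "agent": agent,    # which agent ran
        "source": source,  # user | webhook | schedule | parent_agent
        "status": status,  # completed | in_progress | failed
        "since": since,    # ISO-8601 start of the time range
    }.items() if v is not None}
    resp = requests.get(
        f"{BASE_URL}/agent-interactions",
        params=params,
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Example: failed runs for one agent since a given date
failed = list_interactions("sk-...", agent="support-triage",
                           status="failed", since="2025-06-01T00:00:00Z")
```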

Async Tasks

Some agents are queue-driven. Tasks land in a backlog; an agent picks them up and works through them.

Open Work → Async Tasks to see:

  • Tasks waiting for an agent
  • Tasks currently being worked on
  • Tasks finished today

You can manually pause the queue, re-prioritise tasks, or trigger an agent to drain the backlog.

This is the right pattern for ticket triage, document review, batch summarisation, and anything else where work arrives faster than you can babysit each item.
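
The shape of that pattern is a pull loop: the agent claims tasks as fast as it can instead of you dispatching each one. A self-contained sketch, using Python's standard-library queue as a stand-in for the hosted backlog:

```python
import queue

backlog: "queue.Queue[dict]" = queue.Queue()  # stand-in for the hosted task queue

def handle(task: dict) -> str:
    """Stand-in for the agent doing the actual work."""
    return f"summarised {task['doc']}"

def drain_backlog() -> None:
    """Pull-based worker loop: claim the oldest waiting task, work it, repeat."""
    while True:
        try:
            task = backlog.get(timeout=5)  # claim the next waiting task
        except queue.Empty:
            break                          # backlog drained; stop (or sleep and poll)
        try:
            result = handle(task)
            print("done:", result)         # would show up as a finished task
        except Exception as exc:
            print("failed:", exc)          # would surface as a failed task

# Seed a few tasks, then drain
for doc in ("q3-report.pdf", "contract.docx"):
    backlog.put({"doc": doc})
drain_backlog()
```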

Agent Team Runs

When users hit an agent team, every run is logged. Work → Agent Team Runs shows:

  • The user input
  • Which agents were called and in what order
  • Each agent's interaction log nested inside
  • The final output back to the user
  • Total time and status

Click any run for the full breakdown. The detail view shows the agent topology with timing — useful for spotting which specialist agent is slow.
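
If you export a run record, finding the slow specialist is a small aggregation. A sketch, assuming each nested interaction carries `agent` and `duration_ms` fields (illustrative names, not a documented schema):

```python
def slowest_agents(run: dict) -> list[tuple[str, int]]:
    """Rank the agents in one team run by total time spent.
    Assumes run['interactions'] is a list of dicts with
    'agent' and 'duration_ms' keys (illustrative field names)."""
    totals: dict[str, int] = {}
    for step in run["interactions"]:
        totals[step["agent"]] = totals.get(step["agent"], 0) + step["duration_ms"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

run = {"interactions": [
    {"agent": "router",         "duration_ms": 350},
    {"agent": "research-agent", "duration_ms": 41200},
    {"agent": "writer-agent",   "duration_ms": 9800},
]}
print(slowest_agents(run))
# [('research-agent', 41200), ('writer-agent', 9800), ('router', 350)]
```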

All three views support:

  • Time range filters (last hour, last day, custom range)
  • Status filters (success, error, in-progress)
  • Agent or app filters
  • Free-text search on user inputs and outputs

Save the filters you use often.

Errors and retries

Failed runs surface a red badge with the error message. Common ones:

  • Tool timeout — an external API didn't respond in time
  • Auth error — credentials for an integration expired
  • Model error — the LLM provider returned an error
  • Validation error — the agent's output didn't match the expected schema

Errors are not silent. They show up in the relevant view and can fire alerts via Triggers if you've set up an event trigger on failure.
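
If that failure trigger posts to a webhook, the receiver can be tiny. A sketch using only the Python standard library; the event payload fields here are assumptions, not a documented schema:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class FailureAlertHandler(BaseHTTPRequestHandler):
    """Minimal receiver for a failure-event trigger.
    The payload fields read below are assumptions about the event shape."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        event = json.loads(body)
        # From here, forward to chat, page on-call, or open a ticket.
        print(f"[ALERT] agent={event.get('agent')} "
              f"error={event.get('error_type')}: {event.get('message')}")
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), FailureAlertHandler).serve_forever()
```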

Cost tracking

Every interaction tracks tokens used and estimated cost (per the model's published pricing). Aggregate views per agent, per app, per day let you see where your spend is going.
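
The estimate itself is simple arithmetic over token counts. A sketch with placeholder per-million-token rates; substitute the actual published pricing for the models you run:

```python
# Placeholder prices in USD per million tokens -- use your provider's real rates.
PRICING = {
    "gpt-4o":        {"input": 2.50, "output": 10.00},
    "claude-sonnet": {"input": 3.00, "output": 15.00},
}

def estimated_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate one interaction's cost from its token counts."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# e.g. 12,000 prompt tokens + 1,500 completion tokens
print(f"${estimated_cost('gpt-4o', 12_000, 1_500):.4f}")  # -> $0.0450
```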