The Dashboard gives you a high-level view of how your CX automation is performing. Track key metrics over any date range to understand the impact of your Multi-Agent Systems and team operations.

Accessing the dashboard

Go to Dashboard in the sidebar. Select a date range using the picker. It defaults to the current month.

Key metrics

AI automation rate

The percentage of conversations handled entirely by AI, without human intervention.
AI Automation Rate = (Total Conversations - Live Chats) / Total Conversations x 100
This is your north star metric. A higher rate means your MAS is handling more conversations autonomously. The dashboard shows the current period compared to the previous period so you can track trends.
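The formula above is straightforward to compute yourself, e.g. when pulling numbers into your own reporting. A minimal sketch (the function name and sample counts are illustrative, not part of the product):

```python
def automation_rate(total_conversations: int, live_chats: int) -> float:
    """Percentage of conversations handled entirely by AI, per the
    dashboard formula: (total - live chats) / total x 100."""
    if total_conversations == 0:
        return 0.0
    return (total_conversations - live_chats) / total_conversations * 100

# e.g. 200 of 1,000 conversations escalated to a human:
rate = automation_rate(1000, 200)  # -> 80.0
```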

Average AI response time

How long it takes your MAS to generate a response, averaged across all AI-handled conversations. Measured in seconds. Several factors affect response time:
  • Model choice: Reasoning models (like GPT-5) are significantly slower than non-reasoning models. If an agent doesn’t need deep reasoning, switching to a faster model can cut response time substantially.
  • Workflow complexity: More agents in the chain means more LLM calls. A Triage > Orders > Refund flow takes longer than a single agent handling everything directly.
  • Tool calls: Each tool call adds latency. Some tools (API tools calling slow external services, web search, MCP tools) can take several seconds each. Unnecessary or redundant tool calls compound the problem.
  • Parallel tool calls: Enabling parallel tool calls in Model Settings can reduce time when an agent needs multiple independent pieces of data.
To understand exactly where time is being spent, use the MAS Stats page. It breaks down average execution time per agent, per tool, and per LLM call. See MAS Stats for details.
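To see why parallel tool calls help, consider how per-step latencies combine. A rough back-of-the-envelope model (the latency figures below are made-up examples, not measured values):

```python
# Hypothetical per-step latencies (seconds) for a single AI turn.
llm_call = 1.2
tool_calls = [0.8, 2.5, 0.6]  # e.g. an API lookup, a web search, an MCP tool

# Sequential tool calls: latencies add up.
sequential = llm_call + sum(tool_calls)  # 1.2 + 3.9 = 5.1 s

# Parallel tool calls: independent calls overlap, so only the
# slowest one contributes to the total.
parallel = llm_call + max(tool_calls)    # 1.2 + 2.5 = 3.7 s
```

The gap widens as you add more independent tool calls, which is why enabling parallel calls pays off most for data-gathering agents.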

Conversations started

A daily chart showing how many new conversations were started across all channels. Helps you identify volume patterns and plan staffing.

Total responses

A daily chart of total AI responses generated. Compare with conversations started to understand how many turns the average conversation requires.
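The comparison the chart supports is a simple ratio. A quick sketch (the sample numbers are illustrative):

```python
def avg_turns(total_responses: int, conversations_started: int) -> float:
    """Average number of AI responses per conversation."""
    if conversations_started == 0:
        return 0.0
    return total_responses / conversations_started

# 3,200 AI responses across 1,000 new conversations:
turns = avg_turns(3200, 1000)  # -> 3.2 turns per conversation
```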

Live chat volume

A daily chart showing conversations that required human intervention. When this number goes down over time, your MAS is doing its job.

Label distribution

A breakdown of conversation labels showing what your customers are contacting you about. Uses the labels you’ve defined in Settings > Inbox > Labels. This is especially useful when combined with AI labeling workflows. The AI automatically categorizes every conversation, and the dashboard shows you the distribution. A spike in “Shipping Issue” labels might indicate a fulfillment problem. A rise in “Returns” might signal a product quality issue.
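If you export labeled conversations, the same distribution is easy to reproduce. A minimal sketch using Python's standard library (the label names are examples, not a fixed taxonomy):

```python
from collections import Counter

# One label per conversation, as assigned by your AI labeling workflow.
labels = ["Shipping Issue", "Returns", "Shipping Issue",
          "Billing", "Shipping Issue"]

counts = Counter(labels)
total = sum(counts.values())
shares = {label: count / total * 100 for label, count in counts.items()}
# Shipping Issue: 60%, Returns: 20%, Billing: 20%
```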

Using dashboard data

  • Identify automation gaps: If your automation rate is lower than expected, check which conversations required human intervention and look for patterns. Can you add a new agent or tool to handle those cases?
  • Optimize response times: Use MAS Stats to identify which agents and tools are the slowest, then drill into individual Traces to understand why.
  • Track label trends: Use label distribution data to catch operational problems early. Sudden changes in label distribution often point to real-world issues (shipping delays, product defects, website bugs) that you can address proactively.
  • Measure impact of changes: After modifying agent instructions, switching models, or adding new tools, compare dashboard metrics before and after to measure the impact.
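Before/after comparisons come down to a relative change between two dashboard readings. A small helper sketch (the function and sample readings are illustrative):

```python
def pct_change(before: float, after: float) -> float:
    """Relative change between two readings of the same metric, in percent.
    Positive means the metric went up after the change."""
    if before == 0:
        return float("inf")
    return (after - before) / before * 100

# Automation rate moved from 72% to 80% after a model switch:
impact = pct_change(72.0, 80.0)  # roughly +11.1%
```

Use the same date-range length for both readings so seasonality and volume swings don't skew the comparison.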