The Platform

Build your team.
Run it on your terms.

Dedicated workstations. Your models. Every channel your team already uses. Teammates that ask before they change themselves.

Your org chart

An agent server is a workstation

An agent server is a real computer in the cloud that belongs to your team — your own files, your own tools, enough horsepower to do real work. Teammates live and work there. You decide which Teammates share a workstation and who on your team gets to talk to them.

Two common patterns:

Pattern 1 — one operator

One engineer, one workstation

Product, Engineering, and QA Teammates work side by side on one workstation — same codebase, same files, same test runs. Only the engineer talks to them.

Product Teammate
Engineering Teammate
QA Teammate
Pattern 2 — team-wide

Ops Teammates your whole team can reach

BOS, SEO, and Sales Teammates live on the ops workstation. Everyone in your org can talk to them from Slack, email, Telegram, or wherever they already work — one set of Teammates, many human coworkers.

BOS Teammate
SEO Teammate
Sales Teammate

Mix and match. One workstation per engineer, one for ops, whatever fits. Every workstation is yours — you shape the org chart.

Primitives

Six things the platform gives you

Under every Teammate, the same primitives. Understand these and you understand what's happening.

01

Agent servers

A real workstation in the cloud for each team of Teammates. Files stay where they were. Tools stay installed. Always on. Yours until you turn it off.

02

Teammates

Configurable roles that live on your workstations. Each has a prompt, a model, a set of skills, and a personality. You hire them, you rewire them — or they send a pending update and wait for you to approve it.

03

Skills

Reusable capabilities your Teammates can pick up and share. Like software packages, but for agents. Install one, and every Teammate on the server can use it.

04

Memory

Tiered — raw thoughts summarize into sessions, sessions roll into daily digests. Every recall is scored on four factors and leaves a trace. Reachable via OAuth-secured MCP from any AI client you use.

05

Configuration cascade

Platform defaults, overridden by your organization, overridden by a server, overridden by a specific Teammate. Sane defaults everywhere, full control anywhere.
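
In code, the cascade reads like a most-specific-first lookup. A minimal sketch; the layer names, model strings, and resolve() helper are illustrative, not the platform's API:

```python
# Illustrative cascade lookup: the most specific layer that sets a key wins.
LAYERS = ["teammate", "server", "organization", "platform"]  # most specific first

def resolve(key, layers):
    """Return (value, layer) from the most specific layer that sets `key`."""
    for name in LAYERS:
        scope = layers.get(name, {})
        if key in scope:
            return scope[key], name
    raise KeyError(key)

config = {
    "platform":     {"model": "claude-sonnet", "max_tokens": 4096},
    "organization": {"model": "gpt-4o"},
    "server":       {},
    "teammate":     {"max_tokens": 8192},
}

model, set_by = resolve("model", config)   # org override beats the platform default
limit, _ = resolve("max_tokens", config)   # Teammate override wins
```

Unset layers simply fall through, which is what makes "sane defaults everywhere, full control anywhere" cheap to reason about.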

06

Fleet dashboard

Usage, cost, deploys, and per-Teammate activity in one place. Know what your team is doing without having to ask.

Memory, deeper look

Learns from every message. Shows its work.

Every inbound and outbound message your Teammate exchanges is captured. Important ones get synthesized into long-term memory on their own. The memory is its own service with its own MCP endpoint — your Teammates write to it, your other AI clients can read from it, and you can inspect every recall.

Tiered synthesis, not a bag of chat history

Tier 1 — hot

Captured thoughts

Raw facts and explicit user captures. Indexed for search, excluded from auto-recall so unsynthesized chatter doesn't pollute retrieval.

Tier 2 — warm

Session summaries & decisions

Conversations get LLM-summarized into decisions, action items, and aha moments. Entities — people, projects, tools — get linked automatically so a future query about "that thing with Jane" actually lands.

Tier 3 — cold

Daily & weekly digests

A nightly job clusters today's summaries into topics, decisions, and unresolved items. Weekly syntheses roll those up. Your Teammates know what happened this quarter, not just this conversation.
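
The three tiers behave like a rollup pipeline. A minimal sketch, where summarize() stands in for the platform's LLM synthesis steps and is not its real API:

```python
# Illustrative three-tier rollup; summarize() is a placeholder for an LLM call
# that compresses many items into one synthesized record.
def summarize(label, items):
    return f"{label}: {len(items)} items distilled"

raw_thoughts = [                                  # Tier 1: hot, searchable
    "user prefers staging deploys",
    "PR review backlog growing",
    "Jane owns the billing migration",
]

session = summarize("session", raw_thoughts)          # Tier 1 -> Tier 2
daily = summarize("daily digest", [session])          # Tier 2 -> Tier 3
weekly = summarize("weekly synthesis", [daily])       # Tier 3 rollup
```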

Retrieval ranked on more than similarity

Naive AI memory runs a vector search and takes the top-K. Ours blends multiple signals — semantic match, recency, agent-assigned importance, and how often a memory has been pulled before — so the recalled memory is close, and current, and load-bearing.

The exact ranking is ours. The behavior it produces — memory that actually knows what matters right now — is the part you feel.
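
A generic blend of those four signals could look like the sketch below; the weights and decay constants here are illustrative, since the platform's actual formula is proprietary:

```python
import math

# Illustrative blended relevance: semantic similarity, recency decay,
# agent-assigned importance, and prior recall frequency.
def blended_score(similarity, age_days, importance, recall_count,
                  w=(0.5, 0.2, 0.2, 0.1)):
    recency = math.exp(-age_days / 30)               # fades over roughly a month
    usage = min(math.log1p(recall_count) / 5, 1.0)   # diminishing returns
    return (w[0] * similarity + w[1] * recency
            + w[2] * importance + w[3] * usage)

# A near-duplicate of the query can lose to a slightly less similar memory
# that is recent, important, and frequently recalled.
stale = blended_score(similarity=0.95, age_days=180, importance=0.2, recall_count=0)
active = blended_score(similarity=0.80, age_days=2, importance=0.9, recall_count=12)
```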

Every recall is audited, not hidden

Every time a memory gets injected into a conversation, a trace is written: the query, the candidates that competed, which one was selected, and how long the recall took.

Ask "why did the Teammate bring up X?" and open the trace. Claude can't tell you that. ChatGPT can't tell you that. Your Teammates can.
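
A recall trace might carry fields like these; the schema below is hypothetical, shown only to make the idea concrete:

```python
from dataclasses import dataclass, field
import time

# Hypothetical shape of an audit trace written on every memory injection.
@dataclass
class RecallTrace:
    query: str                 # what the Teammate asked memory for
    candidates: list           # (memory_id, score) pairs that competed
    selected: str              # memory_id that was injected
    latency_ms: float          # how long the recall took
    timestamp: float = field(default_factory=time.time)

trace = RecallTrace(
    query="why did we pick Postgres?",
    candidates=[("mem-041", 0.82), ("mem-007", 0.79), ("mem-113", 0.61)],
    selected="mem-041",
    latency_ms=38.5,
)
```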

Temporal state — memories supersede, not overwrite

When a new memory contradicts an existing one, an LLM judge marks the old one superseded and links forward to its replacement. History is preserved, the current view stays clean, and compliance teams can prove what the system believed on any past date.
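
A supersede chain can be sketched as forward links plus an as-of query; the record shape below is hypothetical:

```python
import datetime as dt

# Illustrative supersede chain: the old memory keeps a forward link to its
# replacement instead of being overwritten.
memories = [
    {"id": "m1", "fact": "Deploys go out Fridays", "created": dt.date(2024, 3, 1),
     "superseded_by": "m2", "superseded_on": dt.date(2024, 9, 1)},
    {"id": "m2", "fact": "Deploys go out Tuesdays", "created": dt.date(2024, 9, 1),
     "superseded_by": None, "superseded_on": None},
]

def current_view(mems):
    """The clean view: only memories with no successor."""
    return [m for m in mems if m["superseded_by"] is None]

def believed_on(mems, date):
    """What the system held true on a past date, for compliance audits."""
    return [m for m in mems
            if m["created"] <= date
            and (m["superseded_on"] is None or m["superseded_on"] > date)]
```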

Your memory, every AI you use

The memory system is a standalone MCP server with OAuth 2.0 and RFC 9728 protected-resource discovery. Point Claude Desktop, ChatGPT, Cursor, or a custom client at it — they all get the same memory_search, memory_store, and memory_recall_about_entity tools. Switch AIs without losing a thing.
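
RFC 9728 discovery means a client first fetches the resource's well-known metadata to learn which authorization server to authenticate against. A minimal sketch with a placeholder host:

```python
import json
import urllib.request

def wellknown_url(resource_base: str) -> str:
    # RFC 9728 defines this well-known path for protected-resource metadata.
    return resource_base.rstrip("/") + "/.well-known/oauth-protected-resource"

def discover_auth_servers(resource_base: str) -> list:
    """Fetch the metadata and return the authorization servers a client
    must use before calling the memory tools over MCP."""
    with urllib.request.urlopen(wellknown_url(resource_base)) as resp:
        meta = json.load(resp)
    return meta["authorization_servers"]

# Example (placeholder host): discover_auth_servers("https://memory.example.com")
```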

No lock-in

The model layer is yours to choose.

Anthropic, OpenAI, Google, Venice, OpenRouter — all available across the fleet. Every Teammate picks its own model from the set you've approved, and you can change the allowed list per Teammate without rebuilding anything.

When a better model ships, your team is on it the same day.

Quick-pick presets

Starter Pack
Maximum Capability
Budget / High Volume
Multi-Provider Resilience

Human-in-the-loop — at the config layer

Teammates propose changes. You approve & deploy.

Every other platform either freezes agent configuration (you edit the YAML) or lets agents mutate themselves live. Neither is how you want a real coworker to operate.

A Teammate can send a proposed change through MCP — a new skill, a tweak to its personality file, a different allowed-model set, a channel to add, a bigger server. The platform pings you with a pending update. You open the dashboard, review what's changing and why, and click deploy. Or edit first. Or reject.

This is HITL on the config layer, not on every task. A Teammate you trust keeps running. A Teammate that wants to grow asks permission first — including permission to reconfigure other Teammates across your fleet.

Pending update

from the Sales Teammate

Add the apollo.io skill

Prospects I book on Calendly convert 3x higher when I enrich them with Apollo signals first. I'd like to add the skill to my server so I can use it on tomorrow's outreach batch.

Scope

Sales Teammate

Change type

Skill added

Review & deploy
Reject

Machine-to-machine

Webhooks, MCP, and the API

Webhooks that talk to real machines

Multiple auth types (bearer, custom header, HMAC). Dynamic prompt templates that interpolate payload data. Event filtering so Teammates only wake for what matters.

The difference between "here's a blob of JSON" and a message that says "Review PR #1234: Add rate limiting to auth endpoint."
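
The receiving side of an HMAC-signed webhook plus template interpolation can be sketched as follows; the secret handling and template syntax here are illustrative, not the platform's documented format:

```python
import hashlib
import hmac
import json

def verify_hmac(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Reject payloads whose HMAC-SHA256 signature doesn't match."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

def render_prompt(template: str, payload: dict) -> str:
    """Interpolate payload fields into a dynamic prompt template."""
    return template.format(**payload)

# Simulated inbound webhook: a signed JSON body from a code host.
body = json.dumps({"number": 1234,
                   "title": "Add rate limiting to auth endpoint"}).encode()
sig = hmac.new(b"shared-secret", body, hashlib.sha256).hexdigest()

if verify_hmac(b"shared-secret", body, sig):
    prompt = render_prompt("Review PR #{number}: {title}", json.loads(body))
```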

MCP + API for Teammates to manage the platform

Teammates can create workstations, deploy other Teammates, and talk to each other — programmatically. Multi-agent by design; they actually coordinate.

Run a meta-Teammate that scales your team as demand changes, or let each Teammate hand off specialist work to the right coworker. The platform works either way.
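
A meta-Teammate's scaling rule could be as small as the sketch below; deploy_teammate() is a placeholder for the platform API, which is not shown here:

```python
# Hypothetical meta-Teammate scaling rule: deploy more Teammates when the
# work queue outgrows current capacity.
def scale_decision(queue_depth: int, active_teammates: int,
                   per_teammate_capacity: int = 20) -> int:
    """Return how many extra Teammates to deploy (0 means hold steady)."""
    needed = -(-queue_depth // per_teammate_capacity)  # ceiling division
    return max(needed - active_teammates, 0)

def deploy_teammate(role: str):
    # Placeholder: a real meta-Teammate would call the platform API here.
    print(f"deploying {role} Teammate")

for _ in range(scale_decision(queue_depth=95, active_teammates=3)):
    deploy_teammate("support")
```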

Channels

Wherever you already work

Teammates show up in the chat apps your team already uses. Mention them, DM them, add them to a channel.

Slack

@-mention a Teammate or add them to any channel.

Telegram

Chat with a Teammate from your phone, no VPN needed.

Discord

Pair a Teammate to a server for your community.

Email, webhooks, and a web dashboard round out the channels. Beyond chat, Teammates can plug into 1,000+ other tools — browse the integrations catalog.

Build your team.

14 days free. No credit card required.