This guide walks you from manifest as code to a working HTTP integration: you model a small multi-agent system in YAML, reconcile it with phrony apply, then drive the root agent from your service using the TypeScript SDK (@phrony/sdk).
You should already have a workspace, an LLM provider named in Phrony (this example uses openai—change the name if yours differs), and a plan that supports multi-agent delegation.
What you will build
| Layer | What you do |
|---|---|
| Manifest | One manifest document: a parent agent (executionMode: request) with canExecuteSubAgents and an allowlist, plus a sub-agent (executionMode: sub_agent) the parent may call. |
| CLI | phrony init (optional), edit YAML, phrony lint, phrony login, phrony plan, phrony apply. |
| Your system | Create an API key scoped to the parent’s API trigger, then call startRun / getRun (and optionally getConversation) from your backend. |
Part 1 — Scaffold and author the manifest
1. Create a project folder (optional)
If you do not already have a manifest repo, scaffold one with phrony init: it creates manifests/, phrony.config.json, and a starter file. You can instead add manifests/tutorial.yaml to an existing repo.
See phrony for global flags, credentials, and CI patterns.
2. Point the CLI at your workspace
Edit phrony.config.json (or use environment variables) so network commands know your tenant and API origin:
| Field | Where to get it |
|---|---|
| tenantId | Workspace / tenant id from the Phrony dashboard |
| apiBase | Usually https://api.phrony.com unless your team uses a custom gateway |
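For example, a minimal phrony.config.json using the two fields from the table above (the tenant id value is a placeholder — use the one from your dashboard):

```json
{
  "tenantId": "your-tenant-id",
  "apiBase": "https://api.phrony.com"
}
```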
3. Replace the example with a two-agent manifest
Save the following as manifests/tutorial.yaml (or merge into your index). Adjust llmProviders[0].name if your workspace uses a different provider label than openai. Pick models your workspace actually allows; the comments note a common upgrade path for the parent.
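A sketch of what that manifest could contain. The property names (`executionMode`, `canExecuteSubAgents`, `allowedSubAgents`, `subAgentExecutionModel`, `inputSchema`/`outputSchema`, `manifestKey`, `llmProviders`) come from this guide; the exact top-level layout, the `model` field, and the schema nesting are illustrative assumptions — check the Manifest reference for the real shape:

```yaml
llmProviders:
  - name: openai                      # must match a provider configured in your workspace

agents:
  - manifestKey: tutorial-orchestrator
    label: Tutorial orchestrator
    model: gpt-4o-mini                # upgrade the parent to a stronger model as complexity grows
    executionMode: request            # reachable via its own API trigger
    canExecuteSubAgents: true
    allowedSubAgents:
      - tutorial-researcher           # allowlist of child manifestKeys
    subAgentExecutionModel: sequential  # one child at a time; use parallel for batched child runs
    inputSchema:
      type: object
      properties:
        query: { type: string }
      required: [query]
    outputSchema:
      type: object
      properties:
        answer: { type: string }

  - manifestKey: tutorial-researcher
    label: Tutorial researcher
    model: gpt-4o-mini
    executionMode: sub_agent          # callable only as a sub-agent tool, no API trigger
    inputSchema:
      type: object
      properties:
        question: { type: string }
    outputSchema:
      type: object
      properties:
        findings: { type: string }
```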
- `executionMode: sub_agent` on the worker makes it callable only as a sub-agent tool from an allowlisted parent, not via its own API trigger.
- `canExecuteSubAgents: true` and `allowedSubAgents` gate which children the parent's model may invoke.
- `subAgentExecutionModel: sequential` runs one child at a time in order; use `parallel` when you want batched child runs.
- `inputSchema`/`outputSchema` on both agents keep handoffs predictable for production.
If manifests/index.yaml still includes the stock example.yaml, either remove that include or ensure labels and keys do not conflict.
4. Lint, sign in, plan, and apply
apply runs a dry run first, then prompts for confirmation unless you pass --auto-approve (for scripts). Treat the plan output as your review gate before reconciliation.
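The full sequence, run from the repo root (these four subcommands are the ones this guide uses; any flags beyond `--auto-approve` are up to your setup):

```shell
phrony lint    # validate the manifest locally before any network call
phrony login   # authenticate the CLI against your workspace
phrony plan    # preview the reconciliation diff — your review gate
phrony apply   # dry run, then prompt for confirmation to reconcile
```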
After a successful apply, note the root agent id in the CLI result if your CLI prints it, or open Agents in the Phrony dashboard and find Tutorial orchestrator—you need its agent id (UUID) for the SDK. On the agent’s Triggers page, open the API trigger you declared; you need that trigger id when you scope an API key.
Part 2 — API access from your backend
1. Create an API key
In the Phrony dashboard: Settings → API keys (or your workspace equivalent).
- Create a key with prefix `phk_` and store it in a secret manager or environment variable (`PHRONY_API_KEY`). Never ship it to a browser.
- Add a scope that includes the orchestrator agent and its API trigger only.
Note the agent id and the public base URL; match those to AGENT_ID and PHRONY_API_BASE in your code.
2. Call shape
Runs always target the parent agent id. `input` must satisfy the parent's deployed version `inputSchema` — here, `{ "query": "..." }`.
For HITL, streaming, or user task completion over HTTP, reuse the patterns in Example: Building an embedded agent (expose timeline on the API trigger when you need conversation steps from the API).
Part 3 — Integrate with @phrony/sdk
Install the SDK in your service (backend or worker—not a public client bundle):
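For example, with npm (substitute your package manager of choice):

```shell
npm install @phrony/sdk
```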
Use getConversation(runId) to inspect merged timeline items—you should see sub-agent steps on the parent run and child activity in the same session. For live updates without polling, use streamRunEvents from a server environment (see TypeScript SDK).
Wiring into your product: keep Phrony construction and startRun inside your API layer (Express route, Next.js server action, queue consumer, and so on). Map your user’s question into the query field, persist runId if you need async completion, and return run.output when status is Completed.
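As a sketch of that wiring, here is a minimal start-then-poll loop. The `startRun`/`getRun` method names and the `Completed` status come from this guide; the `Run` shape, argument names, and client interface are assumptions to verify against the real `@phrony/sdk` types. The client is injected so the loop can sit behind any route or queue consumer and be tested without network access:

```typescript
// Hypothetical shapes — check the actual @phrony/sdk type definitions.
type RunStatus = "Queued" | "Running" | "Completed" | "Failed";

interface Run {
  id: string;
  status: RunStatus;
  output?: unknown;
}

interface RunClient {
  startRun(args: { agentId: string; input: Record<string, unknown> }): Promise<Run>;
  getRun(runId: string): Promise<Run>;
}

// Start a run on the parent agent, then poll getRun until a terminal status.
async function runToCompletion(
  client: RunClient,
  agentId: string,
  query: string,
  intervalMs = 1000,
  maxAttempts = 60,
): Promise<Run> {
  // input must satisfy the parent's inputSchema — here { query: "..." }.
  const run = await client.startRun({ agentId, input: { query } });
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const latest = await client.getRun(run.id);
    if (latest.status === "Completed" || latest.status === "Failed") {
      return latest; // persist run.id earlier if you need async completion instead
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Run ${run.id} did not finish within ${maxAttempts} polls`);
}
```

In production you would cap total wait time, surface `Failed` runs to the caller, and prefer streamRunEvents over polling where the environment allows it.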
Checklist
- `llmProviders` name matches a configured provider in your workspace
- Sub-agent `executionMode: sub_agent`; parent `executionMode` is `request` (or `hitl` if you need human gates on the root)
- Parent version has `canExecuteSubAgents: true`, `allowedSubAgents` listing each child's `manifestKey`, and a `subAgentExecutionModel`
- Both agents have deployed versions before you rely on production traffic
- API trigger on the parent; API key scoped to that agent + trigger
- `PHRONY_API_KEY`, `AGENT_ID`, and optional `PHRONY_API_BASE` set in the integration environment
Related
- Multi-agent systems — Run tree, sequential vs parallel, AITL, limits
- Manifest — Full field reference and merge semantics
- phrony — `lint`, `plan`, `apply`, `diff`, auth
- TypeScript SDK — Methods, errors, streaming, files
- Example: Building an embedded agent — API keys, HITL, and `sendRunMessage`