Guard is the Phrony dashboard area for tool-call policy and anomaly handling: open Guard in the main sidebar (labeled Beta while the product evolves) to triage incidents, see unresolved versus solved history, and, when your workspace supports it, work with rule proposals. This section explains what Guard is for, the principles behind it, and where incidents and human review show up. For rule JSON, evaluation order, and execution-mode constraints, read L1 rules and evaluation.

Documentation Index
Fetch the complete documentation index at: https://docs.phrony.com/llms.txt
Use this file to discover all available pages before exploring further.
Why agents need stops, blocks, and review
Language models do not behave like traditional software. The same agent configuration can produce different actions from one run to the next: the model reasons over context that changes every turn, sampling and provider behavior add variation, and small differences in prompts or inputs can steer the chain of thought toward a different tool choice. That is useful for flexible work, but it means you cannot prove from the prompt alone that a run will never attempt a dangerous call. In production, “dangerous” is often ordinary capability used at the wrong time: initiating a payment, sending email or messages at scale, changing permissions, deleting or overwriting data, or calling an integration with arguments that violate policy. Without a layer that can refuse execution, pause for a person, or end the session before damage is done, a single bad attempt can become a real incident.

Operators therefore need explicit blocking, termination, and human-in-the-loop paths, not as an optional extra but as part of how you ship agents safely. That need exists because of how general-purpose LLMs work, not because Phrony is unreliable. Phrony does not replace the model; it wraps attempts to use tools in policy, history, and approvals so you can enforce what may run and when a human must be in the path. The underlying limitation (stochastic behavior, incomplete specification of intent in natural language, and no built-in guarantee that the model will always respect business rules) is a property of today’s LLM-based agents as a class. Framing it honestly helps teams invest in the right controls instead of treating occasional misbehavior as a bug in the platform.

Guard and L1 rules are where that enforcement becomes concrete in Phrony: declarative checks and incident workflows so you can stop or review attempts before or after they matter for operations. That is not a sign the product failed; governed execution is how you run LLM agents next to real systems and real money.

Principles
These ideas apply whether you use the rule builder in Limits & safety or export rules in a manifest.

- Least privilege first — The model can only attempt tools that appear on the agent version’s operations allowlist. L1 rules further constrain how those allowed tools may be used (arguments, repetition, and so on).
- Decide before execution — For each tool attempt, the runtime runs an inline check against your effective rule list before the integration call proceeds. Outcomes such as block and terminate session apply immediately when they win precedence.
- Same rules, recorded events — Phrony can replay the same rule definitions against what already happened on a run so counters, audit, and Guard incidents stay aligned. That follow-up path does not reverse a call that already completed; it powers operations like incident lists and async signals.
- One winner, full audit — If several rules match one attempt, every match is stored for auditing, but the single outcome follows strict precedence: `terminate_session` beats `block` beats `pause` beats `allow`. See L1 rules — precedence.
- Pause means a person — A `pause` rule stops before the tool runs and creates an `anomaly_review` user task. That path is not available for agents in Request execution mode; use HITL or configure Sub-agent behavior appropriately. See L1 rules — execution mode.
- Guard is the incidents hub — When automated handling needs visibility or follow-up, incidents accumulate under Guard (often with a badge when something unresolved needs attention).
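The one-winner, full-audit behavior can be sketched as a small resolver. The outcome names come from the precedence order stated above; the function name and rule shape are illustrative assumptions, not Phrony's actual API.

```python
# Illustrative sketch of "one winner, full audit": every matched rule is
# kept for the audit trail, but a single outcome wins by strict precedence.
# Only the outcome names are from the docs; the rest is hypothetical.

PRECEDENCE = ["terminate_session", "block", "pause", "allow"]

def resolve_outcome(matched_rules):
    """Return (winning_outcome, audit_trail) for one tool attempt."""
    audit_trail = list(matched_rules)      # every match is stored
    if not audit_trail:
        return "allow", audit_trail        # nothing matched: proceed
    # The earliest entry in PRECEDENCE wins (terminate_session is strongest).
    winner = min(
        (rule["outcome"] for rule in audit_trail),
        key=PRECEDENCE.index,
    )
    return winner, audit_trail

matches = [
    {"rule": "max-refund-amount", "outcome": "pause"},
    {"rule": "forbidden-recipient", "outcome": "block"},
]
outcome, audit = resolve_outcome(matches)
# block wins over pause, but both matches remain in the audit trail
```

The point of keeping `audit_trail` separate from the winner is exactly the behavior described above: precedence decides what happens, but auditing still sees every rule that fired.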
What L1 is
L1 (“layer one”) anomaly control is the version-level feature that holds those declarative rules. You enable it and edit rules under Limits & safety on an agent version. L1 is not the same as operation-level Require approval (HITL): approvals are a fixed gate per operation; L1 expresses patterns across allowed tools: arguments, how often a tool fires in a run or session, or frequency in a time window (window predicates behave differently inline versus async; see L1 rules).

Incidents and proposals
- Incidents — Records tied to policy matches or detection outcomes on runs. Use Guard to filter unresolved versus solved and open detail when you need context for remediation or sign-off.
- Rule proposals — When available, workflows may suggest or stage rule changes; treat them like any other governance change: review impact on HITL vs Request agents and on `pause` rules before applying.
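The decide-before-execution flow (an inline check over the effective rule list, covering argument patterns and repetition counts) can be sketched as follows. The rule fields, predicate style, and function names here are assumptions for illustration; the real L1 rule JSON and predicates are documented in L1 rules and evaluation.

```python
# Illustrative inline pre-execution check, assuming a hypothetical rule
# shape with an argument predicate and a per-session repetition cap.
# Outcome names follow the documented precedence; everything else is a sketch.

def evaluate_attempt(attempt, rules, session_counts):
    """Return the outcome for one tool attempt before the call proceeds."""
    matched = []
    for rule in rules:
        if rule["tool"] != attempt["tool"]:
            continue
        if rule["kind"] == "argument" and rule["predicate"](attempt["args"]):
            matched.append(rule)
        elif rule["kind"] == "repetition":
            if session_counts.get(attempt["tool"], 0) >= rule["max_per_session"]:
                matched.append(rule)
    # Record the attempt so later repetition checks see it.
    session_counts[attempt["tool"]] = session_counts.get(attempt["tool"], 0) + 1
    # Strongest matched outcome wins; default is allow.
    order = ["terminate_session", "block", "pause", "allow"]
    outcomes = [r["outcome"] for r in matched] or ["allow"]
    return min(outcomes, key=order.index)

rules = [
    {"tool": "send_payment", "kind": "argument", "outcome": "pause",
     "predicate": lambda args: args.get("amount", 0) > 500},
    {"tool": "send_email", "kind": "repetition", "outcome": "block",
     "max_per_session": 3},
]
counts = {}
evaluate_attempt({"tool": "send_payment", "args": {"amount": 900}}, rules, counts)
# -> "pause": in Phrony this is where the anomaly_review user task would be created
```

A `pause` result here corresponds to the point where the runtime stops before the integration call and hands the attempt to a person, as described in the Principles above.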
Where work appears in the product
| Surface | Role |
|---|---|
| Guard (sidebar) | Incidents hub; badge when unresolved items may need attention. |
| Review → Approvals | `anomaly_review` (approve or reject a paused tool attempt) and `anomaly_alert` (informational acknowledge) alongside other human steps; filter by Anomaly. |
| Stopping new runs | There is no single Block agent control that halts every future run for an agent in one action. To prevent new work, set triggers to inactive, adjust schedules, or narrow API key scopes until you are ready to accept traffic again. |
Related reading
- L1 rules and evaluation — Rule shape, predicates, multi-agent merge, manifest fields.
- Agent — Limits & safety — Where you configure L1 on a version.
- Human in the loop — L1 and HITL — How `pause` fits next to operation approvals.
- User task — kinds — `anomaly_review` and `anomaly_alert`.