When we started talking to studios about deploying Unreal MCP Server v2 in shared editor environments, the same conversation kept happening:
"Cool. What stops the AI from running delete_asset on the entire Game/Maps/ folder?"
The honest answer for v2 was: nothing. v2 had bDestructiveHint = true annotations, but they were hints — informational metadata for clients, not enforced gates. Any caller could call any tool. CORS was wide open (Access-Control-Allow-Origin: *). There was no authentication. The plugin was a magnificent toy for individual developers and a non-starter for shared infrastructure.
v3's Phase B + C work is the answer to that conversation. None of it is glamorous. None of it adds tool count. But it's the difference between a plugin you run alone on localhost and a plugin a studio's IT team will actually let you run.
This is the design process and the trade-offs we made.
The threat model we picked
We talked to a half-dozen studios about what they actually feared. The list was shorter than expected:
- Accidental destruction. Agent picks the wrong tool, deletes the wrong asset, can't undo.
- Adversarial network access. A browser tab on a developer's laptop reaches localhost:13579 and starts editing assets, because there's nothing stopping it.
- Untraceable changes. Asset shows up modified in the depot, nobody knows which agent run did it.
- Privilege creep. A "read-only inspector" agent that someone wired up six months ago accidentally gets used by a more permissive client and now it can delete things.
We deliberately did not target:
- Insider threats. If a malicious developer wants to wreck your project, they don't need MCP — they have the editor.
- Supply-chain attacks on the plugin itself. Out of scope; that's a build-system concern.
- Side-channel attacks on the JSON-RPC transport. Out of scope; we're not a TLS endpoint.
That narrowed scope made the design tractable.
Decision 1: Three scopes, not three roles
The obvious first instinct was role-based access control — define viewer, editor, admin, and gate tools per role. We tried it. It collapsed under the question "is set_actor_transform editor or admin?" (depends on whether moving an actor breaks gameplay tests, which depends on the actor) and the question "who decides which role each tool is in?" (us, forever, with every new tool).
We replaced it with three scopes that are properties of the call, not the caller:
- Read. Inspecting state. Tools annotated readOnlyHint=true.
- Scene. Mutating non-destructive scene state. Spawn, edit, wire.
- Destructive. Things you can't easily undo with Ctrl+Z. Delete asset, revert SC file, fracture geometry, submit changelist.
Tools self-classify via their existing annotations: bDestructiveHint = true lifts a tool to Destructive, readOnlyHint = true marks it Read, and everything else is Scene. The caller's max-allowed scope is set by the request context. The annotation is the single source of truth. Authors who add a tool decide its classification once, at the registration site, in the same file as the tool's body. There's nowhere else to forget to update.
The master kill switch is bAllowDestructiveScope. Set it to false and no caller can ever reach Destructive — even with a valid token, even with the right Origin. This is the recommended setting for CI / shared-editor environments. The agent gets a structured EMCPError::scope_denied and has to ask a human.
Decision 2: Bearer tokens, not OIDC
The studios that asked for auth wanted "real auth" — OIDC, group claims, expiry, the whole thing. We didn't ship that. We shipped opaque shared-secret bearer tokens.
Reasoning:
- Real OIDC needs a working IdP integration, JWKS, expiry handling, and refresh logic. That's a multi-week project that ships poorly without a partner studio to test against.
- The threat model above doesn't actually need OIDC — it needs something that stops a random browser tab from reaching :13579. A long random shared secret in AuthTokens does that.
- Studios that do need real OIDC can put a reverse proxy in front of the plugin and have the proxy add Authorization: Bearer <opaque-token> after validating the user's OIDC session. The plugin doesn't have to know.
The plugin returns a clean 401 Unauthorized with WWW-Authenticate: Bearer realm="unreal-mcp" when the token is missing or wrong, which is the standard handshake every HTTP client knows how to handle.
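A minimal sketch of that check and handshake, in Python for readability (the actual handler is C++; authorize and its signature are illustrative):

```python
import secrets

def authorize(headers: dict[str, str], auth_tokens: list[str]):
    """Return (status, extra_headers) for the auth stage of a request."""
    scheme, _, token = headers.get("Authorization", "").partition(" ")
    ok = scheme == "Bearer" and any(
        secrets.compare_digest(token, t)  # constant-time comparison
        for t in auth_tokens
    )
    if not ok:
        # The standard RFC 6750 handshake: 401 plus a WWW-Authenticate
        # challenge every HTTP client knows how to interpret.
        return 401, {"WWW-Authenticate": 'Bearer realm="unreal-mcp"'}
    return 200, {}
```

The constant-time comparison matters even for opaque shared secrets: a naive == comparison leaks token prefixes through timing.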
Decision 3: Origin allow-list, not CORS off
The CORS change is the one v3 breaking change and we agonized over it.
v2 sent Access-Control-Allow-Origin: *. That's fine for a plugin you run on your own machine; it's a problem if any web page in any browser tab can reach localhost:13579 and get a successful CORS handshake. The wildcard doesn't leak cookies (browsers refuse to attach credentials to a * response), but that's beside the point: * means any site you happen to be visiting can script requests against the local server and read the responses, so the agent has no defense against drive-by editor manipulation from a malicious website.
v3 echoes the request's Origin header back only if it's in AllowedOrigins (default: [http://localhost, http://127.0.0.1]). Non-allowed origins get 403 Forbidden. Server-side AI clients (Claude Code, Python scripts) don't have an Origin header at all, so they're unaffected.
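The whole gate fits in a few lines. A sketch of the decision table (cors_check is an illustrative name, not the plugin's API):

```python
def cors_check(origin, allowed):
    """Return (status, extra_headers) for the Origin gate."""
    if origin is None:
        # Server-side clients (Claude Code, Python scripts) send no
        # Origin header at all and pass through untouched.
        return 200, {}
    if origin in allowed:
        # Echo the specific origin back; never a wildcard.
        return 200, {"Access-Control-Allow-Origin": origin}
    return 403, {}

# The documented defaults.
DEFAULT_ALLOWED = ["http://localhost", "http://127.0.0.1"]
```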
This breaks browser-based clients that assumed wildcard CORS. We chose to break them deliberately because the alternative — leaving the wildcard and shipping a "production-safe" feature set on top of it — would have been false advertising.
Decision 4: Dry-run via cancelled transactions, not via simulation
The "preview an effect before committing" pattern in most MCP servers means the tool simulates the effect and returns what would happen. That's a duplicate code path: every tool needs both a real implementation and a simulation. The simulations rot.
We did the opposite. Real call, then revert. Specifically: when arguments.dry_run = true, the registry wraps the tool's normal execution in an FScopedTransaction, runs the tool against the real editor world, then cancels the transaction at the end. The structured response (computed actor IDs, intended bounds, etc.) flows back to the agent; the world state is unchanged.
This has two virtues:
- There's no second code path for tool authors to maintain. Dry-run is free for any tool that opts in via bSupportsDryRun = true.
- The dry-run response is exactly what the real run would produce. There's no "oh, the simulation was wrong" failure mode.
Limitation: tools with non-undoable side effects (writes outside the editor world, filesystem writes, sc_submit) can't opt in. Those tools refuse with EMCPError::Unsupported if you ask for dry_run, and the agent has to ask a human to confirm before calling the real thing.
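The pattern is easier to see in miniature. A Python analogue of the run-then-cancel wrap, using a list snapshot as a stand-in for FScopedTransaction (EditorWorld, run_tool, and demo_spawn are all illustrative):

```python
class EditorWorld:
    """Stand-in for mutable editor state."""
    def __init__(self):
        self.actors: list[str] = []

def run_tool(world, tool_fn, args, supports_dry_run=True):
    if args.get("dry_run") and not supports_dry_run:
        # Tools with non-undoable side effects refuse dry-run outright.
        return {"error": "Unsupported"}
    snapshot = list(world.actors)   # stand-in for opening the transaction
    result = tool_fn(world, args)   # the real call, against the real world
    if args.get("dry_run"):
        world.actors = snapshot     # stand-in for cancelling the transaction
    return result

def demo_spawn(world, args):
    # Illustrative tool body: mutates real state, returns structured data.
    world.actors.append("SM_Cube_1")
    return {"actor_id": "SM_Cube_1"}
```

A dry run returns the identical structured response (here, the same actor_id) while leaving the world untouched — which is exactly the "no second code path" property.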
Decision 5: Multi-call transactions, keyed by Mcp-Session-Id
A single tool call gets one undo step. v2 left it at that. The result was awful: an agent that spawned 40 actors required 40 Ctrl+Z presses to revert a planning mistake. Studios would not tolerate it.
v3 adds three new JSON-RPC methods — transactions/begin, transactions/commit, transactions/rollback — that wrap a sequence of tools/call invocations into a single editor transaction. They're keyed by Mcp-Session-Id, so two concurrent agents in the same editor don't trample each other's transactions.
The agent is responsible for matching begin with commit or rollback. We considered automatic timeouts and decided against them — the right answer when an agent crashes mid-transaction is for the next call (or a watchdog) to issue transactions/rollback explicitly. Surprise auto-commits are worse than orphaned transactions.
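On the wire, an agent run looks like three ordinary JSON-RPC requests sharing one Mcp-Session-Id. A sketch of the request shapes (the spawn_actor arguments, such as class_name, are hypothetical):

```python
import json

def rpc(method, params, session_id, request_id):
    """Build one JSON-RPC 2.0 request body plus its headers."""
    headers = {"Content-Type": "application/json",
               "Mcp-Session-Id": session_id}
    body = json.dumps({"jsonrpc": "2.0", "id": request_id,
                       "method": method, "params": params})
    return headers, body

# One agent run as a single undo step: begin, N tool calls, then commit
# (or transactions/rollback if the plan went wrong).
run = [
    rpc("transactions/begin", {}, "agent-a", 1),
    rpc("tools/call",
        {"name": "spawn_actor",
         "arguments": {"class_name": "StaticMeshActor"}},  # hypothetical args
        "agent-a", 2),
    rpc("transactions/commit", {}, "agent-a", 3),
]
```

A second agent using session id "agent-b" gets its own transaction stack; the session key is what keeps concurrent runs from trampling each other.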
What we deliberately didn't ship
- Audit log. v3 logs every tool call to the editor's Output Log via LogUnrealMCP. That's not a structured audit trail. Studios that need one can scrape the log; the proper structured-event audit log is on the v3.1 roadmap.
- Per-token rate limits. v3 has per-client (IP) rate limiting. Per-token is a v3.1 item.
- Per-tool scope overrides. A tool's destructiveness is a property of the tool, not a config knob. We don't want studios to whitelist a single destructive tool while running with bAllowDestructiveScope=False — that recreates the v2 problem with extra steps.
- Network TLS. Out of scope — terminate TLS at a reverse proxy.
The pattern across these "didn't ship" items is consistent: when there's a clean way to do the thing outside the plugin (a proxy, a log scraper), we'd rather ship the small thing well than the big thing poorly.
What this enables
The full studio-config recipe:
[/Script/UnrealMCPServer.MCPSettings]
bRequireAuthToken=True
AuthTokens=("studio-prod-token","ci-readonly-token")
AllowedOrigins=("http://localhost","http://127.0.0.1","https://internal-tools.example.com")
bAllowDestructiveScope=False
Pair with the new sc_submit requiring a description, multi-call transactions wrapping each agent run, and structured EMCPError responses the agent can plan against — and you have an editor agent surface that's actually traceable, scope-gated, and auditable.
Three years ago that was science fiction. v3 ships it.
→ Read the Security docs → Read the Agent Ergonomics docs → Get Unreal MCP Server v3 — launch pricing for 5 days