For as long as Unreal has existed, the standard advice for "how do I build a camera system" has been some variant of: attach a SpringArmComponent, tune its lag values, override PlayerCameraManager for anything complex, and accept that you will be hand-rolling blends between gameplay, cutscene, and conversation cameras for the life of the project.
Unity solved this problem cleanly with Cinemachine in 2017. Unreal developers have watched from the sidelines for eight years.
UE5.7 ships the Gameplay Cameras plugin as production-ready, and it is — finally — Unreal's real answer. Camera rigs as first-class data assets. Context-driven selection. Typed transitions. Director-friendly iteration. We have shipped one small project and two prototypes on it in the last six months. This guide is what we wish the docs told us.
What the plugin actually gives you
The mental model, if you know Cinemachine: Virtual Cameras become Camera Rigs. A Camera Director (Cinemachine Brain) picks which rig is active based on context. Transitions between rigs are first-class, with typed blending. The gameplay code never sets a camera transform directly — it pushes context, and the director resolves it.
If you do not know Cinemachine: the plugin lets you author cameras as data assets (Camera Rig Assets) with modular behaviors stacked on them. Following a target, framing the player and an enemy together, adding a recoil shake, handling camera collision — each is a Camera Node you drag into the rig's node graph. The rig runs as a behavior tree each frame, producing a final camera pose.
The core types you will interact with:
- `UCameraRigAsset` — the data asset. Contains a node graph. Authored in the editor.
- `UCameraNode` — base class for rig behaviors. Ships with ~30 built-ins: FollowTarget, FramingSubjects, LookAt, OrbitInput, BoomArm, CollisionAvoidance, CameraShake, Offset, and so on.
- `UCameraDirector` — decides which rig is active for a given player. You subclass this and implement `EvaluateCameraDirector`.
- `UCameraRigTransition` — declarative transition between two rigs. Ease curves, duration, blend-from-previous-pose behavior.
- `UGameplayCameraComponent` — the actor-facing component that replaces your CameraActor + SpringArm stack. Owns the director, runs the active rig, outputs the final POV.
The key architectural shift: cameras are not actors anymore. They are data that a component evaluates. This sounds pedantic. In practice it is the entire reason the system scales where the old one did not.
Why the old system failed
The classic Unreal camera pattern looks roughly like:
APlayerCharacter
  USpringArmComponent
    UCameraComponent
This is fine for one-camera games. It falls apart as soon as you need:
- A different camera for aim-down-sights
- A different camera for dialogue
- A cinematic override during a scripted sequence
- A different camera when the player is climbing, driving, swimming
- Smooth blending between all of the above
The "solutions" the Unreal community has historically used:
- One big camera with a lot of state. SpringArm properties change based on character state. Works until the blend logic becomes a 400-line function.
- Multiple camera actors the PlayerController switches between. Blending is manual. Every new camera is a new actor class. Variables shared across cameras live in the controller. Nightmare at scale.
- Override `APlayerCameraManager`. Full control, but you are writing a camera system from scratch in C++, and every designer tweak is a code change.
The Gameplay Cameras plugin collapses this to: one UGameplayCameraComponent, one director, N rig assets, M transitions. Designers add and modify rigs without engineer involvement. Code pushes context, not transforms.
Building your first rig
Start with the Third Person Follow preset. File → New Asset → Camera Rig Asset, pick the preset. You get a node graph with:
- `FollowTarget` — anchors the camera to an actor/component
- `BoomArm` — SpringArm-equivalent with collision avoidance
- `OrbitInput` — binds yaw/pitch inputs
- `LookAt` — aims the camera at the follow target
Each node exposes ~5-15 properties. The BoomArm default for a third-person action game we like:
Arm Length: 350.0
Collision Radius: 12.0
Probe Channel: Camera
Lag Speed: 10.0
Rotation Lag: 20.0
Min Pitch: -70.0
Max Pitch: 70.0
Attach a UGameplayCameraComponent to your character. Point its CameraRig at the asset. Play. You have a working third-person camera in about 90 seconds.
Now add variation. Duplicate the rig, call it CR_ThirdPerson_ADS. Shrink the arm length to 120. Add a FramingSubjects node that keeps the player at screen-right third (x = 0.67). Save.
In your director (more on this shortly), push CR_ThirdPerson_ADS when the player is aiming. Transition is 0.25s ease-in-out. That is the entire ADS camera implementation. No spring arm tween code. No "is aiming" branch inside the camera class.
Directors: the contextual brain
The director is the piece that separates "a camera system" from "a bunch of cameras." Subclass UCameraDirector and implement EvaluateCameraDirector. The base pattern:
void UMyDirector::EvaluateCameraDirector(
    const FCameraDirectorEvaluationParams& Params,
    FCameraDirectorEvaluationResult& OutResult)
{
    AMyPlayer* Player = Cast<AMyPlayer>(Params.OwnerActor);
    if (!Player) { return; }

    if (Player->IsInDialog())      { OutResult.ActiveCameraRig = DialogRig; }
    else if (Player->IsAiming())   { OutResult.ActiveCameraRig = AimRig; }
    else if (Player->IsClimbing()) { OutResult.ActiveCameraRig = ClimbRig; }
    else                           { OutResult.ActiveCameraRig = DefaultRig; }
}
This is a dozen lines. It also scales cleanly: every new gameplay state is one more branch and one more rig asset. Designers author the rig. You author the branch. Nobody touches camera math.
A more sophisticated director tracks priority stacks — dialogue pushes a rig onto a stack, and when dialogue ends it pops. We use a priority-sorted array of "camera requests" where gameplay systems can push/pop requests with a priority and an optional lifetime. The director picks the highest-priority active request. This pattern generalizes to cutscenes, scripted moments, tutorials, and systemic events without any of them needing to know about each other.
Transitions: where the polish lives
A transition is a first-class UCameraRigTransition asset with properties:
- `DurationSeconds` — 0.0 for cuts, 0.15-0.6 for most gameplay blends
- `BlendType` — Linear, EaseIn, EaseOut, EaseInOut, or a custom curve asset
- `BlendPivot` — whether to blend from the previous rig's live pose or from its "resolved" pose (matters for cases where you want the old camera to keep tracking during the blend)
- `InterruptBehavior` — what happens if a new transition fires mid-blend (Replace, Queue, Reject)
A transition lives between a From rig (or wildcard) and a To rig. Declaring transitions as assets, instead of hardcoded per-switch logic, is what makes this system designer-friendly. Your cinematic designer can tune the ADS-in transition to 0.18s with a cubic ease-out and not touch a single line of code.
Default transition values we have shipped with:
| From → To | Duration | Blend |
|---|---|---|
| Default → ADS | 0.20s | EaseOut |
| ADS → Default | 0.28s | EaseInOut |
| Gameplay → Dialog | 0.45s | EaseInOut |
| Dialog → Gameplay | 0.55s | EaseInOut |
| Gameplay → Cinematic | 0.00s (cut) | - |
| Cinematic → Gameplay | 0.80s | Custom S-curve |
The cinematic-in cut is intentional. You almost always want a hard cut into a cinematic so the director clearly hands off control. The cinematic-out blend is longer to gently restore the player's spatial awareness.
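A From/To transition table with wildcard fallback can be resolved with a simple ordered lookup. This is our own plain-C++ stand-in; the types, names, and lookup order are assumptions for illustration, not the plugin's actual resolution rules:

```cpp
#include <map>
#include <string>
#include <utility>

// Stand-in for a transition asset: just the two properties we care about here.
struct Transition {
    float DurationSeconds = 0.f;
    std::string Blend = "EaseInOut";
};

class TransitionTable {
public:
    void Add(std::string From, std::string To, Transition T) {
        Table[{std::move(From), std::move(To)}] = T;
    }

    // Exact match first, then To-wildcard, then From-wildcard, else a hard cut.
    Transition Find(const std::string& From, const std::string& To) const {
        const std::pair<std::string, std::string> Keys[] = {
            {From, To}, {From, "*"}, {"*", To}};
        for (const auto& Key : Keys) {
            if (auto It = Table.find(Key); It != Table.end()) return It->second;
        }
        return {0.f, "Cut"};  // no entry authored: cut rather than guess a blend
    }

private:
    std::map<std::pair<std::string, std::string>, Transition> Table;
};
```

Defaulting the miss case to a cut is a deliberate choice in this sketch: a missing blend is obvious in playtests, while a silently-guessed blend hides the authoring gap.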
The Cinemachine comparison, honestly
If you know Cinemachine, here is the translation table:
| Cinemachine | Gameplay Cameras |
|---|---|
| Virtual Camera | Camera Rig Asset |
| Brain | Camera Director |
| CinemachineBody (Transposer, Framing Transposer) | BoomArm, FollowTarget, FramingSubjects |
| CinemachineAim (Composer, POV, Hard Look At) | LookAt, OrbitInput |
| CinemachineNoise (Basic Multi-Channel Perlin) | CameraShake, PerlinNoise |
| Priority-based vcam selection | Director's explicit evaluation |
| Blend Lists | Rig Transition assets |
| StateDrivenCamera | Director with state-machine logic |
What Gameplay Cameras does better:
- Node graph authoring is richer than Cinemachine's fixed Body/Aim/Noise slots. You can stack six modifiers in any order.
- Typed transitions with interrupt behavior are a first-class feature, not a cobbled-together extension.
- Integration with Unreal's input system via OrbitInput is cleaner than Cinemachine's input axis hookup.
What Cinemachine still does better:
- Documentation maturity. 8 years vs. 6 months. The Unity side wins.
- Preview in editor. Cinemachine's live preview of inactive vcams is excellent. Gameplay Cameras has preview, but it is less polished as of 5.7.
- Community assets. Cinemachine has dozens of extension packs. Gameplay Cameras has Epic's built-ins and a handful of early marketplace entries.
Net: parity on architecture, Unity ahead on ecosystem. In 12-18 months the ecosystem gap closes.
Hybrid workflow: gameplay cameras plus spline-driven sequences
Gameplay Cameras is the right tool for systemic cameras — the ones that follow rules based on game state. It is the wrong tool for scripted cameras — the ones that follow a specific path a designer hand-authored for a specific beat.
For scripted moments, we combine the plugin with our Cinematic Spline Tool. The pattern:
- Designer authors a spline path in the level for a scripted camera move (a boss intro, a dramatic reveal, a hero shot).
- At trigger time, the gameplay system pushes a high-priority camera request into the director with a special rig: `CR_SplineFollow`.
- `CR_SplineFollow` is a rig with one node: `SplineFollow`, which reads a spline reference from its context and outputs the camera pose at the current spline-time.
- When the spline finishes, the gameplay system pops the request, and the director falls back to whatever gameplay rig was active before.
The transition from gameplay into the spline is a hard cut. The transition out is a 0.8s ease back to gameplay, which lands the player's camera smoothly in the new position.
This hybrid pattern is the single biggest productivity multiplier we have found for cinematic-heavy projects. Designers own the spline paths. Engineers own the director logic. The two systems meet at a tiny, well-defined interface: the rig asset that reads a spline.
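To make the SplineFollow node's per-tick job concrete, here is a toy pose sampler in plain C++: piecewise-linear interpolation over a list of path points, given a normalized spline-time. The real node would read a USplineComponent with proper curve evaluation; every name here is illustrative:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float X = 0, Y = 0, Z = 0; };

// Sample a piecewise-linear path at normalized time T in [0, 1].
// A real spline tool evaluates curve segments; linear segments keep the
// sketch short while showing the same time-to-pose mapping.
Vec3 SamplePath(const std::vector<Vec3>& Points, float T) {
    if (Points.empty()) return {};
    if (Points.size() == 1 || T <= 0.f) return Points.front();
    if (T >= 1.f) return Points.back();

    const float Scaled = T * (Points.size() - 1);  // which segment, and how far in
    const std::size_t I = static_cast<std::size_t>(Scaled);
    const float Alpha = Scaled - static_cast<float>(I);

    const Vec3& A = Points[I];
    const Vec3& B = Points[I + 1];
    return {A.X + (B.X - A.X) * Alpha,
            A.Y + (B.Y - A.Y) * Alpha,
            A.Z + (B.Z - A.Z) * Alpha};
}
```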
Designer iteration: the real benefit
The under-sold feature of the plugin is how fast designers iterate. Consider a common request: "the aiming camera feels too tight, pull it out a bit."
Old system: engineer opens the character class, finds the ADS logic, changes SpringArm->TargetArmLength, recompiles, hot-reloads, tests. 5-10 minutes per iteration.
New system: designer opens CR_ThirdPerson_ADS, changes Arm Length from 120 to 140, saves. Play-in-editor picks it up instantly. 10 seconds per iteration.
Multiply across a project. A full-scope game has 40-80 camera states. Each goes through 20-50 tuning passes over the project lifetime. Moving those iterations from "engineer + 5 minutes" to "designer + 10 seconds" saves hundreds of engineer-hours and produces a better-feeling game because designers can tune to their actual taste instead of their ability to describe their taste to an engineer.
Performance
A fully populated rig (10 nodes) costs ~0.03-0.08ms per tick on a modern mid-range CPU. The evaluation is cache-friendly because rig data is laid out in compact structs. We have run four local co-op cameras simultaneously with no measurable frame impact.
The director evaluation itself is free (a few branches). The transitions are near-free. The only place we have seen cost is custom Camera Nodes that call into expensive systems — for instance, a node that does a screen-space raycast every tick. If you write custom nodes, profile them individually; they tick every frame on every active camera.
Gotchas we hit
Editor preview is not always accurate. The PIE preview of a rig sometimes diverges from the actual in-game evaluation because the director's context is not fully populated in PIE. Always validate final tuning in actual gameplay, not in the rig preview.
Input routing surprises. OrbitInput consumes input by default. If you have other systems reading the same input axes, you need to configure the node to forward rather than consume. Took us two hours to find this the first time.
Director lifetime. The director is owned by the UGameplayCameraComponent. If you destroy and recreate the component, director state (like priority stacks) is lost. Persist director state on the controller if you need cross-respawn continuity.
Blueprint access. The plugin exposes most of its surface to Blueprint, but some advanced director patterns (priority stacks with typed payloads) still want C++. Plan for a thin C++ base director even on mostly-Blueprint projects.
Cinematic sequencer integration. Sequencer does not natively push into the director. You either (a) add a Sequencer track that pushes a camera request on section start and pops on section end, or (b) give Sequencer authoritative control during a section and let the director detect and yield. Pick one and document it; mixing both is confusing.
Production recommendations
For a new project in mid-2026, our defaults:
- Ship the plugin from day one on all new Unreal projects.
- Author one "base" rig per gameplay mode (exploration, combat, aim, climb, drive, dialogue). Six rigs covers 80% of most games.
- Use a priority-stack director. It generalizes.
- Define a short list of transitions with meaningful names (`Tr_ToCombat`, `Tr_ToDialog`, `Tr_CinematicCut`, `Tr_CinematicRestore`). Designers pick from the list. Do not let one-off transitions proliferate.
- For scripted moments, always go through the director with a high-priority request. Never let a cutscene directly mutate the camera component.
- Integrate with a spline tool early. Designer-authored paths are the bridge between systemic and cinematic cameras. Our Cinematic Spline Tool plugs directly into the director request pattern above.
- If you lean on AI-assisted workflows, the plugin's data-asset-driven design works well with the Unreal MCP Server — rig iteration via structured prompts is surprisingly effective for rough first passes that a designer then polishes.
A full example: a cover-shooter camera stack
Let's concretely sketch what a shipped camera system looks like on this plugin. Assume a third-person cover shooter with the following camera states: Exploration (default follow), Combat (tighter framing, faster orbit), ADS (over-shoulder, narrow FOV), Cover-Low (peek framing when crouched behind cover), Cover-High (peek framing when standing behind cover), Dialog (two-shot framing with NPC), Death (pulled-out slow-rotate), Cinematic (director-relinquished to Sequencer).
That is 8 rigs. Plus 4 variants per weapon type for ADS (14 weapons × 4 = 56, but practically we collapse to 4 ADS-variant rigs parameterized by weapon data). Call it 12 rigs total, plus ~14 named transitions.
The director is a priority-stack with categories:
- Cinematic (priority 1000) — Sequencer pushes here
- Death (priority 900)
- Dialog (priority 700)
- Cover-High / Cover-Low (priority 500)
- ADS (priority 400)
- Combat (priority 200)
- Exploration (priority 100, the floor)
Gameplay systems push/pop requests with typed payloads. The director picks the highest-priority active request and maps it to a rig. Mapping logic is a small switch; the heavy lifting is in the rig assets and the transition table.
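The "small switch" that maps categories to priorities and rigs might look like this in plain C++. The enum values and rig names are our stand-ins for this example project, not plugin types:

```cpp
#include <string>

// Each camera category carries a fixed priority and maps to one rig.
enum class CameraCategory {
    Cinematic, Death, Dialog, CoverHigh, CoverLow, ADS, Combat, Exploration
};

int PriorityOf(CameraCategory C) {
    switch (C) {
        case CameraCategory::Cinematic:   return 1000;  // Sequencer pushes here
        case CameraCategory::Death:       return 900;
        case CameraCategory::Dialog:      return 700;
        case CameraCategory::CoverHigh:
        case CameraCategory::CoverLow:    return 500;
        case CameraCategory::ADS:         return 400;
        case CameraCategory::Combat:      return 200;
        case CameraCategory::Exploration: return 100;   // the floor
    }
    return 0;
}

std::string RigFor(CameraCategory C) {
    switch (C) {
        case CameraCategory::Cinematic:   return "CR_Cinematic";
        case CameraCategory::Death:       return "CR_Death";
        case CameraCategory::Dialog:      return "CR_Dialog";
        case CameraCategory::CoverHigh:   return "CR_CoverHigh";
        case CameraCategory::CoverLow:    return "CR_CoverLow";
        case CameraCategory::ADS:         return "CR_ADS";
        case CameraCategory::Combat:      return "CR_Combat";
        case CameraCategory::Exploration: return "CR_Exploration";
    }
    return "CR_Exploration";
}
```

Keeping priorities in one function, rather than scattered across the systems that push requests, is what keeps the ordering auditable.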
Engineering cost for the director + request-plumbing: ~2 days in C++. Designer cost for authoring 12 rigs with polish tuning: ~3 weeks across a few iteration cycles. The designer spend is the project spend. That is correct — camera feel is fundamentally a design problem, and the plugin correctly puts the time where the value is.
Lyra integration
If you are starting from the Lyra sample on 5.7, it ships a Gameplay Cameras integration layer (ULyraCameraMode and friends) that wraps the plugin's director in Lyra's existing camera-mode stack. You can use this as-is for fast bootstrap, or strip it and go direct — we recommend going direct for anything you plan to ship, because the Lyra wrapper exists for backwards-compat with the old Lyra camera system and adds a layer you do not need.
Custom camera nodes
You will eventually want a behavior the built-in nodes do not cover. Writing a custom node is ~100 lines:
UCLASS()
class UMyCustomNode : public UCameraNode
{
    GENERATED_BODY()

public:
    virtual void OnRun(const FCameraNodeEvaluationParams& Params,
                       FCameraNodeEvaluationResult& OutResult) override;

    UPROPERTY(EditAnywhere)
    float MyTunableValue = 1.0f;
};
Read the input pose from OutResult.CameraPose (set by upstream nodes), modify it, leave it for downstream nodes. Nodes compose by order in the rig graph. Keep each node single-purpose; do not build a monolith.
Examples of worthwhile custom nodes we have shipped:
- `SpeedBasedFOV` — widens FOV by gameplay speed, for motorcycles and running parkour.
- `SocialDistanceOffset` — pushes the camera out when NPCs are near, preventing the camera from kissing NPC faces in crowded markets.
- `AimAssistPivot` — shifts the camera slightly toward an aimed-at target, for controller-assist players.
- `TerrainAwareHeight` — raises the camera when the player is in a deep pit so the player is not framed against nothing but a wall.
Each was 40-150 lines. Each was the right abstraction for its feature.
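As an example of the shape such a node's logic takes, here is the SpeedBasedFOV idea reduced to plain C++ with stand-in types. In the real node this math would live in `OnRun` and write into the evaluation result's camera pose; the struct names and default values here are ours:

```cpp
#include <algorithm>

// Stand-in for the slice of the camera pose this node touches.
struct CameraPose { float FOVDegrees = 80.f; };

// Widen FOV with gameplay speed, saturating at a tunable top speed.
struct SpeedBasedFOVNode {
    float BaseFOV = 80.f;
    float MaxExtraFOV = 15.f;       // widen by at most this many degrees
    float SpeedForMaxFOV = 1200.f;  // cm/s at which the widening saturates

    void Run(float SpeedCmPerSec, CameraPose& Pose) const {
        const float T = std::clamp(SpeedCmPerSec / SpeedForMaxFOV, 0.f, 1.f);
        Pose.FOVDegrees = BaseFOV + MaxExtraFOV * T;
    }
};
```

Note the node reads one gameplay input and writes one pose field. That single-purpose shape is what lets nodes stack in any order in the rig graph.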
Working with Sequencer
Sequencer is Unreal's cinematic editor, and it has its own Camera Cut track that mutates the camera directly. This conflicts with the director model. Three patterns we have seen used:
- Sequencer wins mode. At the start of a Sequencer section, push a Cinematic request at priority 1000. Inside the request's rig, do nothing — the Cinematic camera cut track provides the pose via a separate path. When Sequencer finishes, pop.
- Director-owns mode. Sequencer exposes a custom "push rig" track instead of a Camera Cut track. Designers add rigs to the Sequencer timeline; the director handles transitions. This keeps blending consistent but loses the Sequencer editor's camera-specific features.
- Hybrid mode. Content-authored cinematics use pattern 1; gameplay-owned cinematics use pattern 2. Document which is which.
Pattern 3 is what we ship with. The dividing line: if the moment was authored by a cinematic designer in Sequencer, use pattern 1; if it was triggered systemically by gameplay, use pattern 2.
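Pattern 1's push-on-section-start, pop-on-section-end contract is easy to get wrong by hand (early exits, skipped cutscenes). A C++ scope guard makes it unforgettable. The callback types below stand in for "push a request into the director" and "pop it"; the class name and priority are our illustration:

```cpp
#include <functional>
#include <string>
#include <utility>

// RAII guard: pushing happens on construction, popping on destruction,
// so the request is released on every exit path, including early returns.
class ScopedCinematicRequest {
public:
    ScopedCinematicRequest(std::function<int(std::string, int)> Push,
                           std::function<void(int)> Pop)
        : PopFn(std::move(Pop)),
          Handle(Push("CR_Cinematic", 1000)) {}  // priority 1000: Sequencer wins

    ~ScopedCinematicRequest() { PopFn(Handle); }

    // Non-copyable: exactly one pop per push.
    ScopedCinematicRequest(const ScopedCinematicRequest&) = delete;
    ScopedCinematicRequest& operator=(const ScopedCinematicRequest&) = delete;

private:
    std::function<void(int)> PopFn;
    int Handle;
};
```

In Unreal terms, the construction and destruction would hang off the Sequencer section's start and end events rather than C++ scope, but the invariant is the same: no push without a guaranteed matching pop.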
Bottom line
The Gameplay Cameras plugin is the camera system Unreal has needed for a decade. It is production-ready as of 5.7, the architecture is right, and the designer-iteration speedup is real and measurable. The remaining gap versus Cinemachine is ecosystem and docs, not fundamentals, and both are closing fast.
For any new project, use it. For an in-flight project, migration cost depends on how tangled your existing camera code is. A clean SpringArm-based project ports in a few days. A project with a custom PlayerCameraManager subclass and seven camera modes hand-blended in code will take a few weeks. In both cases, the payoff is faster iteration for the remaining lifetime of the project.
Author the rigs. Push the context. Let the director do the rest.