Everyone making productivity tools claims they save time. We wanted to put real numbers behind that claim, so we ran an experiment: build the same level twice using two different approaches, and track everything.
This isn't a marketing exercise dressed up as a test. We're going to show you where the Unreal MCP Server genuinely saves time, where manual work is still faster, and why the best approach is almost always a hybrid of both.
The Test Setup
We defined a moderately complex interior/exterior level: a medieval tavern with an outdoor courtyard. The scope was chosen to be large enough to reveal meaningful time differences, but small enough to complete in a single session with each approach.
The Level Spec
- Exterior courtyard — 40m x 40m area with stone walls, a gate, scattered barrels and crates, two trees, and a well
- Interior tavern — main hall with tables, chairs, a bar counter, fireplace, hanging lanterns, and a staircase to a balcony
- Lighting — exterior: late afternoon directional light with warm tones. Interior: mixed firelight and lantern illumination
- Material assignments — appropriate materials on all surfaces (wood, stone, metal, fabric)
- No gameplay logic — this test focused purely on level construction, not scripting
The Rules
Both approaches started from an empty level with the same asset library pre-imported. The asset library included roughly 60 static meshes (architectural pieces, props, foliage), 25 materials, and standard lighting actors.
- Manual approach: all work done through the standard Unreal Editor interface — viewport placement, Details panel, content browser, and property editing. No custom scripts or editor utilities.
- MCP approach: all work done through Claude Code connected to the Unreal MCP Server. The developer could only interact with the editor through natural language prompts. After completion, a manual polish pass was allowed (and tracked separately).
The same developer did both approaches on the same hardware, with a day between them to avoid fatigue crossover. The manual approach went first to prevent the MCP version from being faster simply because the developer already knew the layout.
What We Tracked
For each task, we recorded:
- Time — wall clock time from start to "done" (including iteration and corrections)
- Iteration count — how many times work was undone and redone
- Final quality — subjective assessment of the visual result on a consistent scale
Time was measured with a stopwatch. No rounding, no generous estimates. The numbers you see below are real.
Task 1: Level Blockout
The first task was creating the basic level geometry — walls, floors, ceilings, and major structural shapes. No detail props, no materials, no lighting. Just the spatial layout.
Manual Approach
Started by placing BSP brushes for the courtyard walls, then the tavern interior. Used the ruler tool for measurements. Placed doorways by subtracting BSP volumes. Built the staircase using individual step meshes from the asset library.
The courtyard walls went quickly — four wall segments, a gate opening, done. The interior was slower. Getting the bar counter at the right height, the fireplace alcove at the right depth, and the staircase at the right angle required a lot of viewport manipulation and Details panel number entry.
Time: 38 minutes
Iterations: 6 (mostly staircase angle and fireplace proportions)
MCP Approach
Started with broad descriptions and refined incrementally:
"Create a rectangular room 15m x 10m x 4m with stone walls, a wooden floor, and an open ceiling for now"
"Add a courtyard outside the main entrance, 40m x 40m, enclosed by 3m stone walls with a gate opening on the south side"
"Add a bar counter along the west wall, 6m long, 1.1m tall. Place a fireplace alcove in the north wall, 2m wide, 1.5m deep"
"Build a staircase in the northeast corner going up to a balcony that runs along the east wall at 3m height"
The courtyard and main room went fast. The staircase took three attempts — the first placement had steps facing the wrong direction, the second had the balcony at the wrong height. After correction, the result was solid.
Time: 14 minutes
Iterations: 4 (staircase direction, balcony height, gate width, fireplace depth)
Analysis
The MCP approach was 2.7x faster for blockout work. The biggest time savings came from not having to manually position each element through the viewport transform gizmo. Describing spatial relationships in natural language ("along the west wall," "in the northeast corner") was faster than translating those relationships into XYZ coordinates by hand.
The MCP version required fewer iterations, but the iterations it did need were caused by spatial ambiguity in natural language. "Northeast corner" left room for interpretation about exact placement. With manual work, you place things exactly where you want them the first time — but the placement itself takes longer.
Task 2: Asset Placement
With the blockout done, the next task was populating the scene with props — tables, chairs, barrels, crates, lanterns, the well, trees, and decorative elements.
Manual Approach
Started with the interior. Placed tables and chairs in clusters — three table groups in the main hall, each with 3-4 chairs. Used duplicate and move for efficiency, then adjusted rotations so nothing looked uniform.
The bar area got shelves, mugs, and bottles. The fireplace got a grate and logs. Hanging lanterns were placed along the ceiling beams using the transform snapping to get consistent heights.
The courtyard received barrels and crates clustered near the walls, two trees (positioned manually and scaled slightly differently), and the well at center.
This was the most tedious task. Not because any single placement was hard, but because there were roughly 85 individual actors to place, position, rotate, and scale. Each one required several clicks at minimum: drag from content browser, position in viewport, adjust rotation, maybe tweak scale.
Time: 67 minutes
Iterations: 11 (mostly spacing adjustments, a few height corrections for hanging items)
MCP Approach
This is where MCP's batch capability made the biggest difference:
"Place 3 rectangular tables in the main hall, spaced evenly, each with 4 chairs around them. Vary the chair rotations slightly so they look natural"
"Along the bar counter, place 8 bar stools spaced evenly. Add shelving behind the bar with mugs and bottles"
"Add 6 hanging lanterns along the ceiling beams, evenly spaced, at a height of 3.8m"
"In the courtyard, cluster 4 barrels and 3 crates near the east wall. Place a well at the center of the courtyard. Add two trees, one near each courtyard corner on the north side"
Each prompt placed multiple actors with spatial relationships already defined. The AI understood "evenly spaced," "clustered near," and "along the ceiling beams" and translated them into reasonable coordinates.
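The spatial math behind "evenly spaced" is simple interpolation. A minimal sketch of what that reduces to (function and parameter names are illustrative, not the MCP server's actual API):

```python
def evenly_spaced(start, end, count):
    """Return `count` positions linearly interpolated between two 3D points.

    Divides the span into count + 1 gaps so the first and last items
    don't sit flush against either end of the beam.
    """
    sx, sy, sz = start
    ex, ey, ez = end
    return [
        (sx + (ex - sx) * i / (count + 1),
         sy + (ey - sy) * i / (count + 1),
         sz + (ez - sz) * i / (count + 1))
        for i in range(1, count + 1)
    ]

# Six lanterns along a 12 m ceiling beam at 3.8 m height (units in meters)
positions = evenly_spaced((0.0, 0.0, 3.8), (12.0, 0.0, 3.8), 6)
```

The point isn't that you'd write this yourself; it's that a one-line description replaces doing this arithmetic by hand or eyeballing it with the snapping grid.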
Where the MCP approach stumbled: fine positioning. The lanterns were at the right height but not centered on the beams. Two of the chairs were slightly clipping through table legs. The barrel cluster looked a bit too regular.
After the MCP placement pass, we did a manual polish pass to fix these issues.
Time: 22 minutes (MCP placement) + 12 minutes (manual polish) = 34 minutes total
Iterations: 7 (MCP corrections) + 8 (manual adjustments during polish)
Analysis
MCP cut asset placement time roughly in half. The time savings came from batch placement — describing groups of objects with spatial relationships is dramatically faster than placing them one by one. Three tables and twelve chairs in one prompt versus fifteen separate click-drag-position operations.
But the manual polish pass was essential. AI placement gets things to roughly the right spot, but the last 5% of precision — exact alignment with geometry, avoiding interpenetration, matching a specific visual rhythm — still needs human eyes and hands.
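The "too regular" barrel cluster from the placement pass illustrates why natural-looking grouping needs deliberate randomness. A sketch of the kind of jittered scatter that avoids it (all names and value ranges are illustrative, not the server's actual behavior):

```python
import random

def cluster(center, count, radius, seed=None):
    """Scatter `count` props around a 2D center with random offset,
    random facing, and slight scale variation so the group reads as
    placed by hand rather than stamped on a grid."""
    rng = random.Random(seed)
    cx, cy = center
    props = []
    for _ in range(count):
        props.append({
            "x": cx + rng.uniform(-radius, radius),
            "y": cy + rng.uniform(-radius, radius),
            "yaw": rng.uniform(0.0, 360.0),       # random facing
            "scale": rng.uniform(0.95, 1.05),     # subtle size variation
        })
    return props

# Four barrels near the east wall, within 1.5 m of the cluster center
barrels = cluster(center=(18.0, 2.0), count=4, radius=1.5, seed=42)
```

Even with jitter, the final "does this composition read well?" check stayed a human job, which is exactly what the polish pass was for.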
Task 3: Lighting Setup
The level needed two distinct lighting moods: warm late-afternoon sun for the courtyard, and dim firelight-and-lantern ambiance for the interior.
Manual Approach
Started with the directional light for the exterior. Rotated it to roughly 30 degrees above the horizon, angled from the west. Adjusted intensity and color temperature to warm golden tones. Added a sky light for ambient fill.
Interior lighting was more involved. Placed a point light inside the fireplace with orange-red color and medium intensity. Then placed point lights at each of the 6 lantern positions with warm yellow color and lower intensity. Adjusted attenuation radii so the light pools overlapped naturally without washing out.
Added a few small fill lights in dark corners to keep the interior readable without destroying the mood. Adjusted the post-processing volume for the interior to add slight bloom and warm color grading.
Each light required opening the Details panel, setting color, intensity, attenuation radius, and shadow settings individually. Six lantern lights with identical settings meant repeating the same property changes six times (or duplicating and repositioning, which still required 6 position adjustments).
Time: 29 minutes
Iterations: 9 (mostly intensity balancing and attenuation radius tweaking)
MCP Approach
"Add a directional light angled 30 degrees above the horizon from the west. Set it to a warm golden color temperature around 4500K with intensity suitable for late afternoon"
"Add a sky light for ambient exterior fill, moderate intensity"
"Place a point light inside the fireplace with orange-red color, intensity 8, attenuation radius 600, casting soft shadows"
"At each of the 6 lantern positions, place a point light with warm yellow color, intensity 3, attenuation radius 400, shadow casting enabled"
"Add a post-processing volume covering the interior. Set slight bloom, warm color grading, and auto-exposure with a narrow range to prevent the interior from adjusting to the bright exterior"
The directional light and sky light were placed correctly on the first attempt. The fireplace light needed one intensity adjustment (initial placement was too bright). The lantern lights were placed accurately because the AI already knew where the lantern meshes were from the previous task.
The post-processing volume was the most impressive result — describing the desired behavior ("prevent the interior from adjusting to the bright exterior") resulted in correct auto-exposure min/max settings without having to look up the specific property names.
Time: 11 minutes
Iterations: 3 (fireplace intensity, one lantern radius adjustment, post-processing bloom threshold)
Analysis
Lighting was 2.6x faster with MCP. The big wins were batch light placement (all 6 lantern lights from a single prompt) and not having to manually navigate property panels for each light's settings.
More importantly, describing lighting in terms of intent ("warm golden," "late afternoon," "prevent the interior from adjusting to the bright exterior") mapped naturally to the right technical settings. The developer didn't need to remember that auto-exposure range is controlled by Min EV100 and Max EV100 — describing the desired behavior was enough.
That said, final lighting balance still required visual judgment. The AI got the settings into a good range, but the difference between "good" and "exactly right" lighting is subjective and iterative. The three MCP iterations were faster than the nine manual iterations, but both approaches required visual iteration.
Task 4: Property Editing
The final task was material assignment and property fine-tuning. Every surface needed an appropriate material, and various actors needed property adjustments (collision settings, mobility flags, light channel assignments).
Manual Approach
Material assignment was the core of this task. Each structural element needed the right material: stone for walls, wood planks for floors, dark wood for the bar and furniture, metal for lantern frames, fabric for chair cushions.
This meant selecting an actor, finding the material slot in the Details panel, browsing the content browser for the right material, dragging it onto the slot. For meshes with multiple material slots, this had to be done per slot.
With 85+ actors and some having 2-3 material slots each, this was roughly 120 material assignments. Each one took 15-30 seconds of clicking and browsing.
Property editing beyond materials — setting all props to Static mobility, disabling collision on decorative items, assigning light channels — added more Details panel work.
Time: 52 minutes
Iterations: 5 (wrong material on a few surfaces, one light channel mistake)
MCP Approach
"Assign the stone wall material to all wall meshes in the level. Use the wood plank material for all floor meshes"
"Set all furniture meshes (tables, chairs, bar counter, shelves, stools) to use the dark wood material"
"Assign the metal material to all lantern frame meshes. Use the fabric material for chair cushion slots"
"Set all prop actors to Static mobility. Disable collision on all decorative items (mugs, bottles, small props)"
"Set the fireplace and lantern lights to Light Channel 1. Set the directional and sky light to Light Channel 0"
Batch property editing is where MCP's advantage is most dramatic. What took 52 minutes manually took 8 minutes through MCP. A single prompt to assign materials to all wall meshes replaced 15+ individual material drag-and-drop operations.
The AI correctly identified which meshes were walls, which were furniture, and which were decorative props based on their names and scene context. Two material assignments were wrong (the staircase railing got stone instead of dark wood, and one barrel got the wrong wood material), but fixing those was a quick follow-up prompt.
Time: 8 minutes (MCP) + 4 minutes (manual correction) = 12 minutes total
Iterations: 3 (two material corrections, one mobility fix)
Analysis
Property editing showed the largest time differential: 4.3x faster with MCP. This makes sense — batch property operations are exactly what AI assistance is designed for. The work is repetitive, rule-based, and involves a lot of UI navigation. Describing rules ("all wall meshes get stone material") is dramatically faster than applying those rules one actor at a time.
The Honest Results
Here are the complete numbers:
| Task | Manual | MCP (+ Polish) | Speedup |
|---|---|---|---|
| Level Blockout | 38 min | 14 min | 2.7x |
| Asset Placement | 67 min | 34 min | 2.0x |
| Lighting Setup | 29 min | 11 min | 2.6x |
| Property Editing | 52 min | 12 min | 4.3x |
| Total | 186 min | 71 min | 2.6x |
The MCP approach completed the same level in 38% of the time. Three hours and six minutes versus one hour and eleven minutes.
But raw time isn't the whole story. Let's talk about what these numbers don't capture.
Quality Comparison
Both versions of the level looked good. Neither was embarrassing. But they weren't identical.
The manual version had slightly better fine-grained placement. Props felt a touch more intentional in their positioning — the developer placed each item with a specific visual composition in mind. Small details like a barrel rotated to face its most interesting side toward the player, or a chair pulled out at just the right angle, were present in the manual version and absent in the initial MCP version.
The MCP version, after the manual polish pass, reached equivalent visual quality. The polish pass added the intentionality that pure AI placement lacks. Without the polish pass, the MCP version would have scored lower on visual quality despite being technically correct.
Effort and Fatigue
This is the metric nobody talks about, but it matters. After 186 minutes of manual editor work, the developer was noticeably fatigued. The last 30 minutes of property editing involved a lot of repetitive clicking, and attention to detail dropped.
After 71 minutes of MCP work, the developer was still alert and engaged. Typing natural language descriptions and evaluating results is less draining than repetitive viewport and panel interactions. This has real implications for quality over a full workday.
Where MCP Wins
Based on our test and broader usage, MCP provides the largest time savings in these scenarios:
Batch Operations
Any task that requires applying the same operation to many actors. Material assignment, property changes, mobility settings, collision configuration. The time savings scale linearly with actor count — the more actors involved, the bigger the MCP advantage.
Spatial Reasoning from Description
Placing things "evenly spaced along the wall" or "clustered near the corner" is faster in natural language than calculating positions manually. The AI does the spatial math that you'd otherwise have to do in your head or with the ruler tool.
Initial Placement and Rough Layout
Getting actors into approximately the right position with approximately the right properties happens much faster through MCP. The first 90% of the work is where AI excels.
Exploratory Iteration
"Try making the lights warmer." "Move all the tables 50 units toward the center." "What happens if we double the fog density?" These exploratory what-if questions are cheap to ask through MCP and expensive to execute manually.
Tasks Requiring Technical Knowledge
Setting up post-processing volumes, configuring auto-exposure ranges, or adjusting material instance parameters by name — tasks where you'd normally need to remember exact property names and valid value ranges — are faster to describe by intent than to execute through the editor UI.
Where Manual Still Wins
MCP is not universally faster. Here's where manual work remains the better choice.
Precision Placement
When an actor needs to be at exactly the right position — not approximately right, but pixel-perfect — direct viewport manipulation is faster than describing the exact position in words. Dragging an actor and watching it snap into place is more efficient than saying "move it 3.7 units to the left, 1.2 units forward, rotate 7 degrees clockwise."
This is why the manual polish pass exists. AI gets you close. Your eyes and hands get you there.
Visual Composition
Arranging objects to create a specific visual composition — leading lines, depth layering, framing elements — requires aesthetic judgment that the AI doesn't have. You can describe what you want, but the description would need to be so detailed that it's faster to just place the actors yourself.
Complex Geometry Manipulation
BSP operations, mesh editing, and complex boolean geometry are easier to do visually than to describe. When you need to subtract one shape from another to create an alcove with specific dimensions, viewport manipulation is more intuitive than verbal description.
One-Off Tasks
If you're placing a single actor exactly where you want it, the overhead of typing a prompt, waiting for the AI to process it, and then verifying the result is actually slower than just dragging the actor from the content browser. MCP's advantage comes from scale and repetition — for single operations, direct manipulation wins.
Artistic Judgment Calls
"Does this look right?" is a question only you can answer. The AI can place 50 trees, but deciding whether the tree line creates the right silhouette against the sky requires looking at it and making a judgment call. Some creative decisions can't be delegated.
The Hybrid Approach
The most productive workflow isn't purely manual or purely MCP. It's a deliberate combination:
Phase 1: AI Blockout (MCP)
Use MCP for the initial spatial layout. Describe rooms, corridors, terrain features, and major structural elements. Don't worry about perfection — get the broad strokes right.
Why MCP: spatial reasoning from description is fast, and blockout is inherently rough. Precision isn't important yet.
Phase 2: AI Population (MCP)
Use MCP for batch asset placement. Describe groups of objects with spatial relationships. Let the AI handle quantity and even spacing.
Why MCP: batch placement is where the biggest time savings occur. Placing 12 chairs individually takes 12 operations. Placing 12 chairs through one prompt takes one.
Phase 3: AI Properties (MCP)
Use MCP for material assignment, property editing, and configuration. Describe rules rather than individual assignments.
Why MCP: property editing is the most tedious manual task and the most dramatic MCP speedup (4.3x in our test).
Phase 4: Manual Polish (Manual)
Switch to direct viewport manipulation for final adjustments. Fix clipping, adjust rotations for visual interest, tweak exact positions, refine lighting balance by eye.
Why Manual: the last 5-10% of quality requires aesthetic judgment and pixel-level control that AI can't provide.
Phase 5: AI Audit (MCP)
After manual polish, use MCP to verify consistency. Ask the AI to check for overlapping actors, missing materials, incorrect property values, or actors outside expected bounds.
Why MCP: scanning an entire level for issues is tedious by hand and instant through MCP. The AI can check every actor against a set of rules in seconds.
This five-phase approach gave us the best results in our testing: AI speed for bulk work, human judgment for quality, and AI thoroughness for verification.
Beyond This Test
Our test used a single, moderately complex level. The results would shift for different project types:
- Large open worlds: MCP advantage increases. More actors, more batch operations, more repetitive placement. The Procedural Placement Tool is even faster for environment scatter, but MCP handles the structural and prop placement that procedural tools don't cover.
- Small, handcrafted scenes: MCP advantage decreases. When every actor is a deliberate artistic choice, the manual approach's precision matters more.
- Technical setup (lighting, post-processing): MCP advantage is consistent regardless of level size. Describing intent is always faster than navigating property panels.
- Iterative design: MCP advantage increases significantly. "Make all the lights 20% brighter" is a one-second prompt and a multi-minute manual task. Over dozens of iteration cycles, this compounds.
What About the Blender Side?
Everything we've discussed applies equally to the Blender MCP Server. The same patterns hold: batch operations and rough layout are faster through AI, precision work and artistic judgment are faster by hand, and the hybrid approach beats either method alone.
The Blender MCP Server provides 212 tools across 22 categories with 14 context resources, so the breadth of AI-assisted operations is comparable to the Unreal side. If your pipeline spans both Unreal and Blender, both MCP servers work with the same AI clients — Claude Code, Cursor, or Windsurf — so the workflow is consistent across tools.
Conclusion
The 2.6x overall speedup we measured is real, but it comes with important context. MCP doesn't produce finished, polished work. It produces good first drafts very fast. The manual polish pass is not optional if you care about quality.
The honest assessment:
- MCP saves the most time on the most tedious work — batch operations, property editing, repetitive placement
- MCP saves moderate time on spatial layout — describing rooms and structure is faster than building them click by click
- MCP doesn't save time on precision work — fine positioning, visual composition, and artistic judgment still require direct human manipulation
- The hybrid approach is better than either method alone — AI speed for bulk work, human precision for polish, AI thoroughness for verification
We're not claiming MCP replaces manual editor work. We're claiming it handles the mechanical parts so you can spend more time on the creative parts. Our test supports that claim with real numbers.
If you want to try this yourself, grab the Unreal MCP Server and run through a similar test with your own project. Your specific speedup will depend on your level complexity, asset count, and personal editor proficiency. But if your work involves any significant amount of repetitive editor operations, the time savings are likely to be meaningful.
For a quick setup walkthrough, check out our 15-minute MCP getting-started guide. For the full deep dive on AI-assisted workflows, see the complete MCP guide.