Every game environment starts the same way: someone draws a picture of a place that doesn't exist yet. Then someone else has to build it. That translation from 2D concept to 3D playable space is one of the hardest parts of environment design — not because any single step is difficult, but because the pipeline has so many steps.
This post walks through building a complete environment from concept art using three tools together: the Unreal MCP Server for AI-assisted scene building, the Procedural Placement Tool for vegetation and prop scatter, and the Cinematic Spline Tool for a presentation flythrough. We'll compare the AI-assisted approach against the traditional manual pipeline at every step.
The Traditional Pipeline
Before we get into the AI-assisted workflow, let's be honest about what the traditional concept-to-level pipeline looks like:
- Concept analysis (1-2 hours) — studying the art, identifying key landmarks, estimating scale, listing required assets
- Blockout (4-8 hours) — placing BSP or proxy geometry to establish layout, scale, and spatial relationships
- Asset replacement (2-4 hours) — swapping blockout geometry for production meshes
- Environment scatter (4-8 hours) — hand-placing vegetation, rocks, props, and ground cover
- Lighting (2-4 hours) — establishing mood, time of day, key light placement
- Material and texture work (2-4 hours) — assigning and tweaking materials for the production meshes
- Polish (4-8 hours) — decals, particles, sound, post-processing, detail props
- Cinematic presentation (2-4 hours) — building a camera flythrough to showcase the environment
Total for a mid-complexity environment: 20-40 hours across multiple days.
That range is wide because it depends heavily on the artist's speed, the complexity of the environment, and how many iteration cycles happen. But even the low end — 20 hours — is a significant investment. And much of that time is spent on mechanical operations, not creative decisions.
The AI-Assisted Pipeline
Our target: the same quality result in significantly less time, by using AI for the mechanical parts and reserving manual effort for the creative parts.
Here's what we're building: a mountain temple ruin. The concept art shows an ancient stone temple complex built into a mountainside, partially reclaimed by nature. Dense forest surrounds the lower elevations. The temple structures are arranged on terraced platforms connected by stone stairways. A waterfall cascades down one side of the mountain, with a river flowing through the valley below.
It's a common fantasy RPG environment, complex enough to be interesting but grounded enough that real-world reference helps.
Step 1: Analyzing the Concept (30 minutes)
This step is the same whether you're using AI or not. You can't automate creative interpretation.
What We Identified
Scale reference points:
- The temple entrance is roughly 6m tall based on the implied human-sized doorway
- The main temple structure appears to be 15-20m wide and 10-12m tall
- The terraced platforms are at three elevation levels, roughly 5m apart
- The total scene spans approximately 200m x 150m
Key landmarks:
- Main temple building with columned entrance
- Three terraced platforms connected by stone stairs
- Smaller shrine structures on the lower terraces
- A waterfall on the east side, roughly 15m tall
- Dense forest below the temple complex
- Rocky cliff face behind the main temple
- Stone path leading from the valley up to the first terrace
Asset requirements:
- Temple architecture: walls, columns, roof elements, stone blocks
- Natural: trees, undergrowth, moss, grass, rocks, cliff meshes
- Water: waterfall, river, mist particles
- Decorative: stone carvings, lanterns, fallen debris, vines
Mood notes:
- Late afternoon lighting, golden hour
- Warm stone tones against deep green vegetation
- Sense of ancient grandeur slowly being absorbed by nature
- Atmospheric haze in the valley, clearer at elevation
The Difference
In a traditional pipeline, this analysis feeds into a mental plan that the artist executes step by step. In the AI-assisted pipeline, this analysis becomes the set of instructions we give the AI. The more specific our analysis, the better the AI output.
Step 2: Blockout with MCP (1.5 hours)
This is where the AI-assisted pipeline diverges sharply from the traditional one. Instead of manually placing and sizing BSP geometry, we describe the scene to the AI through the Unreal MCP Server.
Terrain Foundation
"Create a landscape 200m x 150m. Sculpt a mountain slope that rises from 0m on the south side to 25m on the north side. The slope should be gradual in the southern third, then steepen in the middle third, then form a near-vertical cliff face in the northern third. Add a valley channel running east-west through the southern quarter, 3m deep and 10m wide."
The AI creates the landscape and applies heightmap modifications. The result isn't precisely what the concept art shows — terrain sculpting through text has inherent imprecision — but it establishes the correct spatial relationships. The cliff is in the right place. The valley is at the right elevation. The terraces have room to exist.
We spent about 10 minutes manually refining the landscape sculpt: smoothing transition areas, adding rock outcroppings on the cliff face, and shaping the valley for the river path.
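The slope profile from the prompt can be sketched as a piecewise height function. This is an illustrative model, not the tool's actual heightmap code — it assumes the south edge sits at y = 0 and the north edge at y = 150, with the intermediate height fractions (20% and 60% of peak) chosen to make the three zones read as "gradual, steeper, cliff":

```python
def terrain_height(y, length=150.0, peak=25.0):
    """Piecewise slope profile from the terrain prompt: gradual in the
    southern third, steeper in the middle third, near-vertical cliff in
    the northern third.  The 20%/60% breakpoints are illustrative."""
    t = max(0.0, min(1.0, y / length))
    if t < 1 / 3:                    # gradual: 0 -> 20% of peak
        return peak * 0.2 * (t / (1 / 3))
    elif t < 2 / 3:                  # steeper: 20% -> 60% of peak
        return peak * (0.2 + 0.4 * ((t - 1 / 3) / (1 / 3)))
    else:                            # cliff: 60% -> 100% of peak
        return peak * (0.6 + 0.4 * ((t - 2 / 3) / (1 / 3)))
```

Making the breakpoints explicit like this is also a useful way to sanity-check a prompt before sending it — if you can't write the profile down, the AI probably can't infer it either.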
Temple Platforms
"Create three terraced platforms on the mountainside. Platform 1: 30m x 20m at elevation 5m, flat surface. Platform 2: 25m x 18m at elevation 10m, 15m north of Platform 1. Platform 3: 20m x 15m at elevation 15m, 10m north of Platform 2. Each platform should have retaining walls on the south and east sides, 2m thick stone walls extending down to the terrain."
Three terraces with retaining walls in one prompt. The AI calculated the positions relative to the terrain slope and created the geometry. The retaining walls needed manual adjustment — the AI placed them at exact platform edges, but they looked better slightly recessed with visible stone texture on the faces.
Main Temple Structure
"On Platform 3, create the main temple structure. Rectangular footprint, 18m x 12m, 10m tall to the roofline. The south face has a columned entrance — 6 columns, evenly spaced, 5m tall, 0.6m diameter. Add a 2m deep portico extending from the column line. Create a triangular pediment above the columns, 3m tall at peak. The walls are 1m thick stone."
The temple appeared in the viewport: a recognizable classical temple form with columns and pediment. The proportions were close to the concept art. We manually adjusted the column spacing (the AI had them too evenly distributed — staggering the middle two slightly made the entrance feel more natural) and added subtle imperfections to the wall geometry to suggest age.
Secondary Structures
"On Platform 2, create two smaller shrine structures, 5m x 5m each, 4m tall. Position them 8m apart near the north edge of the platform. Each should have an open doorway on the south face, 2m wide, 3m tall. Add a stone altar block inside each shrine, 1.5m x 0.8m x 1m."
"On Platform 1, create a gateway arch at the south edge. 4m wide, 5m tall, 1m thick. Add low stone walls extending 5m from each side of the arch, 1m tall."
Connecting Stairways
"Create stone stairways connecting the platforms. From Platform 1 to Platform 2: a 3m wide stairway on the west side, following the terrain slope. From Platform 2 to Platform 3: a 2.5m wide stairway on the east side. Each step should be 0.3m tall and 0.4m deep. Add low stone railings 0.5m tall on both sides of each stairway."
Stairs are one of the most tedious things to build manually in a blockout. Each step is an individual piece of geometry that needs to be sized, positioned, and aligned. A stairway spanning 5m of elevation at 0.3m per step is roughly 17 steps — 17 individual BSP boxes to create, size, and position. The AI generated both stairways in under 30 seconds.
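The arithmetic the AI is doing for each stairway is simple but tedious by hand. A minimal sketch of the step generation, assuming a straight run (the function and its signature are illustrative, not the tool's API):

```python
import math

def generate_steps(total_rise, rise=0.3, run=0.4):
    """Return (horizontal_offset, height) for each step in a straight
    stairway climbing `total_rise` metres at `rise` per step and `run`
    of depth per step.  The last step is clamped to the platform height."""
    count = math.ceil(total_rise / rise)   # e.g. 5.0 m / 0.3 m -> 17 steps
    return [(i * run, min(i * rise, total_rise)) for i in range(1, count + 1)]

steps = generate_steps(5.0)   # 17 steps, topping out at exactly 5.0 m
```

Seventeen tuples instead of seventeen hand-placed BSP boxes — which is exactly the kind of mechanical work worth delegating.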
The Stone Path
"Create a winding stone path from the valley floor to Platform 1's gateway. The path should be 2.5m wide, following the terrain contour. Start at position (0, -60, 0) in the valley and end at the gateway arch. Add three switchbacks to handle the elevation change."
The AI created the path geometry following the terrain. We refined the curves manually — AI-generated paths tend to have very regular switchbacks, and organic-looking paths need some asymmetry.
Blockout Assessment
Total blockout time: about 1.5 hours, including the manual refinement passes. The AI handled approximately 70% of the geometry placement. The remaining 30% was manual adjustment for organic feel and intentional imperfection.
In a traditional pipeline, this blockout would take 4-6 hours. The AI saved us roughly 3 hours — not by being faster at individual operations, but by handling dozens of operations per prompt while we focused on creative direction.
Step 3: Environment Scatter (2 hours)
With the architecture in place, the scene needed to feel alive. The concept art shows dense forest, overgrown ruins, and natural rock formations. This is where the Procedural Placement Tool takes over.
Forest Cover
The lower elevations (below the first platform) needed dense forest. We configured four scatter layers:
Large trees:
- Mixed broadleaf species (oak and beech-type meshes)
- Spacing: 6-10m
- Scale variation: 0.85-1.25
- Slope constraint: below 35 degrees
- Altitude range: 0-8m (below the temple complex)
- Exclusion zone: 3m from the stone path, 5m from building geometry
Medium trees and saplings:
- Smaller tree meshes, 3-5m tall
- Spacing: 3-5m
- Density increased near forest edges for a natural tree line
- Altitude range: 0-12m (extending slightly up the mountainside)
Undergrowth:
- Ferns, bushes, tall grass
- High density with clustering
- Stronger clustering near water sources and in sheltered areas
- Exclusion zones around paths and structures
Ground cover:
- Grass, moss, fallen leaves
- Very high density using HISM (hierarchical instanced static meshes)
- Covering all terrain below 45 degrees slope
The tool scattered approximately 18,000 instances in under 2 seconds. The forest immediately transformed the scene from "blockout with terrain" to "environment." The density was right on the first pass for the forest floor, but we needed to reduce the tree line sharpness where forest met temple platforms. We adjusted the altitude falloff curve and regenerated — 3 seconds.
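The large-tree layer's rules can be expressed as a plain accept/reject test over candidate points. The dictionary keys and the `accept` helper below are illustrative — the Procedural Placement Tool's actual schema will differ — but the logic is the same: a candidate survives only if it passes every constraint.

```python
import math

LARGE_TREES = {
    "scale": (0.85, 1.25),
    "max_slope_deg": 35.0,
    "altitude": (0.0, 8.0),
    "exclusion": {"path": 3.0, "building": 5.0},   # metres
}

def accept(point, layer, exclusion_points):
    """Reject a candidate that violates altitude, slope, or exclusion-zone
    rules.  `point` is (x, y, z, slope_deg); `exclusion_points` maps a
    zone name to a list of (x, y) positions in that zone."""
    x, y, z, slope = point
    lo, hi = layer["altitude"]
    if not (lo <= z <= hi) or slope > layer["max_slope_deg"]:
        return False
    for zone, radius in layer["exclusion"].items():
        for ex, ey in exclusion_points.get(zone, []):
            if math.hypot(x - ex, y - ey) < radius:
                return False
    return True
```

Because every layer is just a rule set like this, "regenerate" means re-running the test over fresh candidates — which is why iteration takes seconds rather than hours.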
Temple Overgrowth
The concept art shows nature reclaiming the temple — vines on walls, moss on stone, grass growing through cracks in the platforms. This required a different scatter approach than the forest.
Vine and ivy placement:
- Configured to place on vertical surfaces (walls and columns)
- Density controlled by a painted mask — heavier on the north-facing walls, lighter on the entrance
- Randomized hanging lengths
Moss and lichen:
- Surface scatter on horizontal stone surfaces
- Higher density in shaded areas (north sides, under overhangs)
- Patches, not uniform coverage
Grass through cracks:
- Small grass tufts placed along platform edges and at wall-floor junctions
- Low density to suggest neglect without suggesting complete abandonment
- The temple should look ancient, not destroyed
Fallen stone debris:
- Broken column segments and stone blocks
- Scattered near the secondary structures (suggesting partial collapse)
- Clustering near the base of walls
- Exclusion from walkable paths
This scatter layer added about 5,000 instances. The balance was important — too much overgrowth and the temple looks destroyed rather than ancient. Too little and it looks maintained. We aimed for "abandoned for centuries but structurally intact," matching the concept art's mood.
Rocky Terrain
The cliff face and mountainside needed rock scatter:
- Large boulders on the cliff base and scattered across slopes
- Medium rocks concentrated on steeper terrain
- Pebbles and scree at slope transitions
- Slope-based placement: more rocks on steeper surfaces
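"More rocks on steeper surfaces" is a density curve over slope. A minimal linear version (the tool likely exposes this as an editable curve asset rather than a function; the 60-degree ceiling is an assumption for illustration):

```python
def rock_density(slope_deg, base=0.2, max_density=1.0):
    """Scale rock scatter density with terrain slope: flat ground gets
    the base density, 60-degree terrain gets the maximum."""
    t = max(0.0, min(1.0, slope_deg / 60.0))
    return base + (max_density - base) * t
```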
Waterfall Area
The area around the waterfall needed special attention:
- Dense moss and fern placement near the water source
- Wet-looking rocks (separate mesh set with darker textures)
- Mist-zone scatter for particles (placed using the MCP Server separately)
- Reduced tree density near the falls for visibility
Scatter Results
Total placed instances: approximately 28,000 across all layers. The full pass took about 2 hours: 1.5 hours of initial configuration plus 30 minutes of iteration adjusting density curves and exclusion zones.
A comparable hand-placed environment would take 6-8 hours minimum, and would likely have less consistent density and coverage. The procedural approach also makes iteration trivial — when we decided the temple platforms needed more grass growth, we adjusted one density slider and regenerated in seconds.
Step 4: Lighting and Materials (2 hours)
The concept art specifies late afternoon, golden hour lighting. This is a great lighting scenario because the warm directional light creates strong shadows and golden highlights on the stone, while the cooler ambient light fills the shadows with blue.
Directional Light
We used the MCP Server for the initial lighting setup:
"Set the directional light angle to simulate late afternoon — about 25 degrees above the horizon, coming from the west-southwest. Set the color temperature to 4200K. Intensity 8 lux. Enable cascaded shadow maps with 4 cascades."
The low sun angle was critical. It rakes across the temple columns and creates long shadows on the platforms. The warm color temperature bathes the stone in golden light while the sky provides cool ambient fill.
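Under the hood, "25 degrees above the horizon from the west-southwest" is just a direction vector. A sketch of the conversion, assuming x = east, y = north, z = up and compass azimuth (west-southwest is 247.5 degrees) — in Unreal this corresponds to a directional light pitch of roughly -25 degrees:

```python
import math

def sun_direction(elevation_deg, azimuth_deg):
    """Unit vector pointing from the sun toward the scene.
    Azimuth is compass degrees (0 = north, 90 = east); WSW is 247.5."""
    el = math.radians(elevation_deg)
    az = math.radians(azimuth_deg)
    # Direction TO the sun, then negate to get the light's travel direction.
    to_sun = (math.cos(el) * math.sin(az),
              math.cos(el) * math.cos(az),
              math.sin(el))
    return tuple(-c for c in to_sun)

light_dir = sun_direction(25.0, 247.5)
# Light travels east-northeast and slightly downward -- long raking shadows.
```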
Atmospheric Effects
"Add exponential height fog. Density 0.008, start distance 0, fog max opacity 0.9. Set the inscattering color to a warm amber. Add height falloff at the valley floor elevation. Enable volumetric fog at density 0.002."
The height fog in the valley creates the atmospheric haze visible in the concept art — the forest below the temple is slightly hazed, creating depth separation between the foreground architecture and the background vegetation.
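The depth separation comes from the "exponential" part of exponential height fog: density decays with height above a base elevation, so the valley floor sits in the thickest haze while the upper terraces stay clear. A simplified model of the falloff (Unreal's actual shading is more involved; the falloff constant here is illustrative):

```python
import math

def fog_density(height, base_density=0.008, falloff=0.2, base_height=0.0):
    """Exponential height fog: density decays with height above the
    base elevation (the valley floor), hazing the lower forest while
    leaving the temple terraces comparatively clear."""
    return base_density * math.exp(-falloff * max(0.0, height - base_height))
```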
"Set the sky atmosphere to clear with light cirrus clouds. Set the ground albedo to warm green for vegetation bounce light."
Temple Lighting
The temple interior (visible through the entrance) needed supplemental lighting:
"Inside the main temple, add 3 point lights along the center line. Warm color temperature 3500K. Low intensity — 200 lumens each. Radius 8m. This simulates light bouncing inside from the entrance."
The interior shouldn't be fully lit — just enough to see shapes inside, creating mystery about what's in the temple.
Material Assignment
We used the MCP Server for batch material operations:
"Assign the stone temple material to all actors with 'Temple' or 'Platform' or 'Stair' or 'Wall' in their names. Assign the rough stone material to actors with 'Column' in the name. Assign the moss-covered stone material to the shrine structures."
"For the gateway arch and the stone path, use the worn stone material with UV scale 2.0."
Batch material assignment by name pattern saved significant time. We had already named our blockout geometry descriptively (a habit worth developing regardless of workflow), so the pattern matching was accurate.
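The matching itself is a case-insensitive substring test over actor labels. A sketch of that core logic — in the editor you would feed it labels from something like `unreal.EditorLevelLibrary.get_all_level_actors()` and call `set_material()` on each match's mesh components, though the exact calls the MCP Server issues are not documented here:

```python
def match_actors(labels, patterns):
    """Case-insensitive substring match, mirroring the name-pattern
    assignment in the prompt above."""
    return [lbl for lbl in labels
            if any(p.lower() in lbl.lower() for p in patterns)]

labels = ["Temple_MainBuilding", "Platform_02", "Stairway_East",
          "Column_01", "Shrine_A", "Waterfall"]
stone = match_actors(labels, ["Temple", "Platform", "Stair", "Wall"])
# -> ["Temple_MainBuilding", "Platform_02", "Stairway_East"]
```

Note the near-miss in the sample: "Waterfall" does not match "Wall" because the substring test needs consecutive characters — but sloppy patterns can still catch actors you didn't intend, which is another argument for disciplined naming.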
Manual Material Refinement
After the batch assignment, we manually adjusted:
- UV tiling on individual surfaces that looked stretched
- Material instance parameters for variation (some walls slightly darker, some columns slightly more weathered)
- Moss blend material on surfaces where the procedural scatter placed moss meshes
- Water material on the river and waterfall surfaces
Post-Processing
"Add a post-process volume covering the scene. Set color grading: slight warm tint in highlights, cool tint in shadows, saturation 0.9, contrast 1.1. Enable bloom with intensity 0.3 and threshold 1.0. Add lens flare at low intensity for the sun. Auto exposure: min 1.0, max 3.0."
The post-processing tied the scene together. The warm/cool split toning is what makes golden hour lighting feel cinematic rather than just "yellow light."
Step 5: Cinematic Flythrough (1.5 hours)
The environment is built. Now we need to present it. The Cinematic Spline Tool handles the camera work.
Planning the Camera Path
A good flythrough tells a story. It doesn't just show the environment — it reveals it in a deliberate sequence. Our plan:
- Opening shot — wide establishing shot from the valley, looking up at the temple complex on the mountainside
- Approach — camera follows the stone path upward through the forest
- Gateway reveal — camera passes through the gateway arch onto Platform 1
- Terrace ascent — camera rises along the stairways, showing the temple getting closer
- Temple entrance — camera approaches the columned entrance at eye level
- Interior peek — camera pushes slightly into the temple doorway
- Pullback and crane — camera pulls back and cranes up for the final wide shot, revealing the full scope
Setting Up the Spline
The Cinematic Spline Tool uses spline paths to define camera movement. We placed a spline with 12 control points following the path described above.
The key settings:
- Filmback preset: Super 35mm (matching the cinematic feel of the concept art)
- Focal length: 24mm for the wide establishing shot, tightening to 35mm for the approach, then 85mm for the entrance detail
- Movement speed: Variable — slow for the establishing shot, moderate for the approach, slow again for the reveals
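The focal-length transitions amount to keyframes along the normalized spline distance. A sketch of the interpolation — the key positions below are rough estimates from the shot timings, not values exported from the tool, and a real rig would ease between keys rather than lerp:

```python
FOCAL_KEYS = [   # (normalized spline distance, focal length in mm)
    (0.00, 24.0),   # establishing
    (0.30, 35.0),   # forest approach
    (0.43, 28.0),   # gateway reveal
    (0.75, 85.0),   # temple entrance
    (1.00, 20.0),   # crane pullback
]

def focal_length(t):
    """Linear interpolation between focal-length keys along the spline."""
    for (t0, f0), (t1, f1) in zip(FOCAL_KEYS, FOCAL_KEYS[1:]):
        if t0 <= t <= t1:
            return f0 + (f1 - f0) * (t - t0) / (t1 - t0)
    return FOCAL_KEYS[-1][1]
```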
Camera Techniques
Shot 1 - Establishing (0:00-0:08): The camera starts low in the valley, looking up through the trees at the temple complex above. We used a slow dolly forward with a 24mm wide lens to emphasize the scale of the mountain and the height of the temple.
The Cinematic Spline Tool's crane simulation handled the slight vertical drift as the camera moved forward — a real crane would bob slightly, and that motion adds organic life to the shot.
Shot 2 - Forest approach (0:08-0:18): The camera follows the stone path through the forest. Trees pass close on both sides. The focal length tightens to 35mm, creating a more intimate framing.
We added Perlin noise camera shake at very low intensity (amplitude 0.3, frequency 0.5) to simulate a Steadicam operator walking the path. Too much shake looks handheld. This amount just adds life.
Shot 3 - Gateway reveal (0:18-0:24): As the camera passes through the gateway arch, it emerges onto Platform 1 with a clear view up to the main temple. This is the first time the viewer sees the full temple complex without trees blocking the view.
We pulled the focal length back to 28mm, widening the framing for the reveal, and slowed the camera movement to let the viewer absorb the scale.
Shot 4 - Terrace ascent (0:24-0:34): The camera rises alongside the stairway from Platform 1 to Platform 2, then continues toward Platform 3. The shrine structures on Platform 2 pass at eye level, giving a sense of the detail in the secondary buildings.
The spline path here was slightly offset from the stairs — the camera doesn't follow the stairs exactly but moves in a smooth arc that happens to parallel them. This looks more natural than a camera locked to the stair geometry.
Shot 5 - Temple entrance (0:34-0:42): The camera approaches the main temple at eye level, focal length tightening to 85mm. The columns fill the frame. The narrower lens compresses the depth, making the temple feel massive and imposing.
The Cinematic Spline Tool's focus tracking locked onto the temple entrance, maintaining sharp focus on the doorway while the columns in the foreground went slightly soft.
Shot 6 - Interior peek (0:42-0:46): A brief push into the doorway. Just enough to see the interior space — dark, mysterious, with those subtle warm point lights hinting at depth. Then a pause.
Shot 7 - Crane pullback (0:46-0:56): The camera pulls back from the entrance and cranes upward in one smooth motion, rising above the temple to reveal the full complex from above. The final frame shows the temple complex, the terraced platforms, the surrounding forest, the waterfall, and the valley below.
We used the crane and jib arm simulation for this shot. The camera follows the crane arm's arc, which creates a naturally accelerating vertical movement — slow at first, faster as it rises. This feels more cinematic than a linear vertical move.
The focal length widens to 20mm for the final frame, maximizing the scope of the reveal.
Rendering the Flythrough
Total flythrough duration: 56 seconds at 30fps. We rendered at 1920x1080 using the Movie Render Queue with motion blur enabled and anti-aliasing set to temporal.
The Cinematic Spline Tool exported the camera as a Sequencer track, so the Movie Render Queue handled it natively. No special export steps.
Comparing Workflows
Time Comparison
| Phase | Traditional | AI-Assisted | Savings |
|---|---|---|---|
| Concept Analysis | 1.5 hours | 0.5 hours | 1 hour |
| Blockout | 5 hours | 1.5 hours | 3.5 hours |
| Environment Scatter | 6 hours | 2 hours | 4 hours |
| Lighting & Materials | 3 hours | 2 hours | 1 hour |
| Polish | 4 hours | (folded into the phases above) | 4 hours |
| Cinematic | 3 hours | 1.5 hours | 1.5 hours |
| Total | 22.5 hours | 7.5 hours | 15 hours |
The AI-assisted pipeline completed in roughly one-third the time. But the savings weren't uniform:
- Blockout saw the largest percentage reduction (70%) because it's the most mechanical phase
- Scatter was dramatically faster (67% reduction) thanks to procedural rules replacing manual placement
- Lighting saw the smallest reduction (33%) because lighting is fundamentally a creative-judgment task
- Cinematic was moderately faster (50%) — the spline tool simplified camera work but shot planning and timing remain manual
Quality Comparison
Honest assessment: the AI-assisted environment is not identical in quality to what a senior environment artist would produce in 22 hours of manual work. Specific differences:
Where AI-assisted was comparable:
- Overall composition and spatial relationships
- Vegetation density and variety
- Lighting mood and atmosphere
- Cinematic camera work
Where AI-assisted was slightly lower quality:
- Fine detail in architecture (manual modeling produces cleaner geometry)
- Material variation (hand-tweaked materials have more subtle variation)
- Edge cases in scatter (a few instances of grass growing through solid stone)
- Storytelling props (we had less time for hand-placed narrative details)
Where AI-assisted was arguably better:
- Consistency of scatter density across the entire scene
- Lighting balance (asking the AI to audit light intensities caught issues we might have missed)
- Speed of iteration (we tested more lighting scenarios because each attempt was faster)
The quality gap is real but small, and it narrows with iteration. The AI-assisted approach gets you to 85% quality in 30% of the time. The question is whether the remaining 15% justifies the remaining 70% of the time.
For final shipping environments: probably yes, spend the extra time. For prototypes, vertical slices, pitch demos, and portfolio pieces: the AI-assisted result is more than sufficient.
Tips for Best Results
After running through this pipeline, here are the practical lessons we'd share:
Be Specific in Your Descriptions
"Create a temple" gives you a generic box. "Create a rectangular structure, 18m x 12m x 10m, with a columned entrance on the south face featuring 6 columns at 2.5m spacing" gives you something close to your concept art. The AI executes what you describe — invest time in describing precisely.
Work in Passes, Not All at Once
Don't try to describe the entire scene in one prompt. Work in the same passes a manual artist would: terrain first, major structures, secondary structures, environmental scatter, lighting, polish. Each pass builds on the context of the previous one.
Name Everything Descriptively
When the AI creates actors, give them meaningful names. "Temple_MainBuilding," "Platform_02," "Stairway_East." This pays off when you need to reference them later — for material assignment, lighting, or property changes. The MCP Server's context resources use actor names for identification.
Use Procedural Scatter for Volume, Manual for Story
The Procedural Placement Tool excels at filling space with appropriate density and variation. But storytelling details — the vine that's grown through a cracked column, the birds nesting in a temple alcove, the offering left at a shrine — need intentional human placement. Use both approaches.
Start Wide, Refine Narrow
Begin with broad strokes: overall layout, major geometry, full-scene lighting. Then narrow down: adjust individual room proportions, tweak specific scatter densities, refine individual light positions. The AI is most efficient at the broad strokes. Manual refinement handles the narrow focus.
Combine All Three Tools
This walkthrough used the Unreal MCP Server for scene building, the Procedural Placement Tool for environment scatter, and the Cinematic Spline Tool for camera work. Each tool handles a different phase of the pipeline. Together, they cover the full concept-to-presentation workflow.
All three are available individually or as part of the Complete Toolkit bundle, which also includes the Blueprint Template Library for gameplay systems and the Blender MCP Server for AI-assisted 3D modeling.
Learn the Traditional Way First
This might seem counterintuitive in a post about AI-assisted workflows, but understanding the manual process makes you better at directing the AI. Knowing what a good blockout looks like helps you write better prompts. Understanding lighting principles helps you describe the mood you want. Familiarity with scatter patterns helps you configure better placement rules.
AI tools amplify your existing skills. They don't replace the need to have skills in the first place.
Getting Started
If you want to try this pipeline yourself:
- Install the Unreal MCP Server and connect your AI client
- Start with a simple blockout — describe a single room or building and iterate from there
- Add the Procedural Placement Tool for environment scatter once your blockout is solid
- Use the Cinematic Spline Tool when you're ready to present your work
Each tool's documentation covers setup, configuration, and detailed walkthroughs.
The concept-to-level pipeline has always been about translating creative vision into technical execution. AI-assisted tools don't change the vision part — that's still entirely yours. They compress the execution part so you can iterate faster, explore more options, and spend your time on the decisions that actually shape the player's experience.