Procedural level design in Unreal Engine 5 is one of those capabilities that sits in a frustrating middle ground. The PCG (Procedural Content Generation) framework is extraordinarily powerful — but the learning curve is steep, the node graphs get complex fast, and iteration cycles are slow. You tweak a parameter, wait for generation, evaluate, tweak again. For solo developers and small studios, the time investment to set up PCG properly often exceeds the time it would save.
That's where AI-assisted procedural level design changes the equation. By combining UE5's PCG framework with MCP (Model Context Protocol) automation through the Unreal MCP Server, we can drive procedural generation through natural language, iterate on results conversationally, and dramatically reduce the setup time that makes PCG inaccessible for many teams.
This isn't about replacing level designers or generating entire games from a single prompt. It's about making procedural workflows more approachable and faster to iterate on. In this tutorial, we'll walk through how the PCG framework works, how MCP tools can drive it, and practical examples that you can adapt to your own projects.
Understanding UE5's PCG Framework
Before we get into AI-assisted workflows, let's make sure we're on the same page about what the PCG framework actually does and how it works under the hood.
What PCG Actually Is
The Procedural Content Generation framework, introduced in Unreal Engine 5.2 and significantly expanded in 5.3 and 5.4, is a node-based system for generating and placing content in your levels. Think of it as a visual scripting system specifically for world building. You define rules — "place trees here, rocks there, grass everywhere else" — and the framework executes those rules to populate your world.
The core components are:
PCG Graphs: These are the rule sets. A graph takes inputs (surface data, spline paths, point clouds) and produces outputs (spawned actors, instanced static meshes, modified landscape). Each graph is a directed acyclic graph of nodes that transform and filter data.
PCG Volumes: These define where generation happens. You place a volume in your level, assign it a graph, and it generates content within its bounds. Volumes can overlap, and their graphs can interact.
PCG Components: These attach PCG behavior to individual actors. A single building actor might have a PCG component that generates furniture inside it, debris around it, and vegetation growing on it.
Point Data: The fundamental data type in PCG. Everything flows as collections of points with attributes — position, rotation, scale, density, custom metadata. Nodes create, transform, filter, and consume these points.
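To make the point-data mental model concrete, here's a minimal standalone Python sketch of the idea. The `PCGPoint` class and the `filter_by_density` function are illustrative stand-ins, not the engine's actual types; the point is simply that every PCG node consumes and produces collections of attributed points like these.

```python
from dataclasses import dataclass, field

# Illustrative stand-in for a PCG point -- not the engine's actual type.
@dataclass
class PCGPoint:
    position: tuple = (0.0, 0.0, 0.0)   # world-space location
    rotation: tuple = (0.0, 0.0, 0.0)   # pitch, yaw, roll (degrees)
    scale: tuple = (1.0, 1.0, 1.0)      # per-axis scale
    density: float = 1.0                # 0..1, consumed by filters and spawners
    metadata: dict = field(default_factory=dict)  # custom attributes

def filter_by_density(points, threshold):
    """A minimal 'density filter' node: keep points at or above threshold."""
    return [p for p in points if p.density >= threshold]

points = [PCGPoint(position=(x, 0.0, 0.0), density=x / 10) for x in range(10)]
kept = filter_by_density(points, 0.5)   # half the points survive the filter
```

A real graph chains dozens of these transforms; the data flowing between nodes is always some variation of this point collection.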
Why PCG Is Powerful but Underused
The PCG framework can generate entire forests, populate cities with props, distribute enemies across a dungeon, and place thousands of objects with consistent rules. Studios with dedicated technical artists use it extensively.
But for the average developer, several barriers exist:
- Node graph complexity. A realistic biome distribution graph can easily hit 50-100 nodes. Understanding data flow through that graph requires significant experience.
- Parameter tuning. PCG graphs have dozens of parameters. Finding the right values is trial-and-error, and each regeneration takes time.
- Blueprint integration. Connecting PCG output to gameplay systems (AI navigation, collision, streaming) requires additional setup that isn't always documented well.
- Debugging. When generation produces unexpected results, figuring out which node is responsible means inspecting data at each stage — a tedious process.
This is exactly the kind of problem where AI assistance provides genuine value. Not generating the content itself, but helping you build, configure, and iterate on the systems that generate content.
How MCP Tools Drive PCG
The Unreal MCP Server provides 207 tools across 34 categories that expose Unreal Editor functionality to AI assistants through the Model Context Protocol. Several of these categories are directly relevant to PCG workflows.
The MCP-PCG Connection
Here's how the pieces fit together. When you're working with an AI assistant (Claude, for example) connected to the Unreal MCP Server, the AI can:
Read the current state of your level. It can query what actors exist, what their properties are, where volumes are placed, and what PCG graphs are assigned. This context is critical — the AI needs to understand what's already in your scene before suggesting changes.
Create and configure PCG volumes. The AI can spawn PCG volume actors, set their bounds, assign graphs, and configure generation parameters. This means you can say "create a PCG volume covering the northeast quadrant of my landscape" and the AI handles the actor creation and positioning.
Modify PCG graph parameters. Through property access tools, the AI can adjust the parameters exposed by your PCG graphs — density values, spawn distances, random seeds, filtering thresholds. This is where iterative refinement becomes conversational.
Trigger regeneration. After parameter changes, the AI can trigger PCG regeneration and report back on results — actor counts, distribution statistics, performance metrics.
Inspect generation results. Post-generation, the AI can analyze what was produced. How many trees were spawned? What's the density distribution? Are there obvious gaps or clustering issues?
The Natural Language Iteration Loop
The real power here isn't any single operation. It's the iteration loop. Traditional PCG iteration looks like this:
- Open PCG graph editor
- Find the parameter you want to change
- Modify the value
- Close graph editor (or switch to viewport)
- Regenerate
- Evaluate result visually
- Repeat
With MCP-assisted iteration:
- Tell the AI what you want changed: "The forest is too dense near the river. Reduce tree density within 30 meters of any water body by 40%."
- The AI adjusts the relevant parameters
- Regeneration triggers
- You evaluate visually
- Continue the conversation: "Better, but the transition between dense and sparse areas is too sharp. Add a 20-meter gradient."
The conversation maintains context. The AI remembers what you've already tried, what worked, and what didn't. This is dramatically faster than manual parameter hunting, especially when you're working with unfamiliar PCG graphs or complex parameter interactions.
Practical Example 1: Generating a Forest Biome
Let's walk through a concrete example. We'll create a forest biome using PCG, driven by natural language through MCP.
Setting Up the Foundation
Start with a landscape. For this example, assume a 4km x 4km landscape with painted biome layers — a common setup for open-world projects. The landscape has slope data, height data, and landscape layers for different terrain types (grass, dirt, rock, snow).
The first step is establishing the PCG infrastructure. Here's how the conversation with your AI assistant might go:
You: "I need to set up procedural forest generation on my landscape. Create a PCG volume that covers the entire landscape. The forest should respect the terrain layers — trees on grass and dirt layers, no trees on rock or snow. Use my TreePack data asset for the mesh list."
The AI, through MCP, would:
- Query the landscape bounds to determine volume size
- Spawn a PCG volume actor sized to match
- Create or assign a PCG graph that samples landscape layers
- Configure the graph to filter points based on layer weights
- Set up the spawn node to reference your TreePack asset
This setup, which might take 30-45 minutes manually (finding the right nodes, connecting them correctly, setting initial parameters), happens in under a minute.
Defining Distribution Rules
Forests aren't uniform. Real forests have density variation, species distribution patterns, and edge behaviors. Here's where PCG gets interesting and where AI assistance becomes particularly valuable.
You: "The forest should have three density zones. Core forest: 0.8 trees per square meter, using mostly large oaks and pines. Forest edge: 0.3 trees per square meter, using smaller birch and shrub meshes. Scattered: 0.05 trees per square meter, individual trees with wider spacing. Use a noise function to create organic boundaries between zones."
The AI sets up the layered generation:
- A Perlin noise generator creates the zone boundaries
- Three separate spawn chains handle each density zone
- Mesh selection nodes distribute species based on zone type
- Scale variation adds natural randomness within each zone
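To show what the zone logic amounts to, here's a standalone Python sketch. It uses a simple hash-based lattice noise in place of the engine's Perlin node; the zone names and densities (trees per square meter) come from the example above, while the noise function, scale, and thresholds are illustrative choices.

```python
import math

def lattice_noise(ix, iy, seed=42):
    """Deterministic integer-hash noise in [0, 1] -- a Perlin stand-in."""
    h = (ix * 374761393 + iy * 668265263 + seed * 144665) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return ((h ^ (h >> 16)) & 0xFFFFFFFF) / 0xFFFFFFFF

def smooth_noise(x, y, scale=50.0, seed=42):
    """Bilinear interpolation of lattice noise -> organic zone boundaries."""
    gx, gy = x / scale, y / scale
    x0, y0 = math.floor(gx), math.floor(gy)
    tx, ty = gx - x0, gy - y0
    n00 = lattice_noise(x0, y0, seed)
    n10 = lattice_noise(x0 + 1, y0, seed)
    n01 = lattice_noise(x0, y0 + 1, seed)
    n11 = lattice_noise(x0 + 1, y0 + 1, seed)
    top = n00 + (n10 - n00) * tx
    bot = n01 + (n11 - n01) * tx
    return top + (bot - top) * ty

# Densities from the example: trees per square meter per zone.
ZONES = [("scattered", 0.05), ("edge", 0.3), ("core", 0.8)]

def classify(x, y):
    n = smooth_noise(x, y)
    if n < 0.33:
        return ZONES[0]
    elif n < 0.66:
        return ZONES[1]
    return ZONES[2]

zone, density = classify(120.0, 340.0)
```

Because the noise is seeded and deterministic, the same coordinates always land in the same zone, which is exactly the reproducibility property you want from the engine-side graph as well.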
Slope and Altitude Filtering
You: "Trees shouldn't spawn on slopes greater than 35 degrees. Above 800 meters elevation, switch to alpine species only — shorter pines and no deciduous trees. Below 200 meters near the river valley, increase broadleaf density."
The AI adds slope filtering nodes and altitude-based species switching. Each of these is a node chain in the PCG graph — straightforward individually, but the compound setup is where time savings accumulate.
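The slope and altitude rules above reduce to a simple per-point decision, sketched here in standalone Python. The thresholds come from the example prompt; the species lists are illustrative placeholders.

```python
# Species lists are illustrative placeholders, not asset names.
ALPINE_SPECIES = ["pine_short"]
STANDARD_SPECIES = ["oak_large", "pine", "birch"]
VALLEY_SPECIES = ["oak_large", "pine", "birch", "broadleaf"]

def species_for_point(slope_degrees, elevation_m):
    """Return the allowed species list for a candidate point, or None to cull."""
    if slope_degrees > 35:      # too steep: no trees at all
        return None
    if elevation_m > 800:       # alpine band: short pines only, no deciduous
        return ALPINE_SPECIES
    if elevation_m < 200:       # river valley: broadleaf-heavy mix
        return VALLEY_SPECIES
    return STANDARD_SPECIES
```

In the actual PCG graph this becomes a slope filter node followed by an attribute-based mesh selection switch, but the decision table is the same.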
Ground Cover Integration
Trees alone don't make a forest. You need understory.
You: "Under the tree canopy, spawn ferns and mushroom meshes at 2 per square meter. In forest clearings, use wildflower meshes instead. Add fallen log meshes along natural drainage lines."
This adds secondary generation passes that reference the primary tree output. The AI configures PCG to use the tree positions as input for understory generation — spawning ground cover in the shadow of the canopy while switching to different meshes in gaps.
Iterative Refinement
Here's where the workflow really shines. You regenerate and evaluate:
You: "The oaks are spawning too close to each other. Minimum spacing should be 4 meters for large trees. Also, the forest edge transition is too regular — it looks like a grid pattern. Add more randomness to the edge noise."
You: "Better. But there's a clearing in the southwest that looks unnatural. Can you add a few standalone large trees in that area to break up the empty space?"
You: "The fallen logs are all oriented the same way. Randomize their rotation and add slight scale variation — 0.8 to 1.2 range."
Each of these refinements is a specific parameter change or node addition. Through MCP, the AI makes the change and triggers regeneration. What would be 10-15 minutes of graph editing per iteration becomes a 30-second conversation exchange.
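Two of those refinements are easy to sketch outside the engine: the minimum-spacing rule is a greedy self-pruning pass, and the log variation is seeded randomization. Both functions below are illustrative stand-ins for the corresponding PCG nodes.

```python
import math
import random

def enforce_min_spacing(points, min_dist):
    """Greedy prune: keep a point only if it is at least min_dist from every
    previously kept point -- a stand-in for a self-pruning node."""
    kept = []
    for p in points:
        if all(math.dist(p, q) >= min_dist for q in kept):
            kept.append(p)
    return kept

def randomize_logs(count, seed=7):
    """Per-instance rotation and the 0.8-1.2 scale range from the example."""
    rng = random.Random(seed)
    return [{"yaw": rng.uniform(0.0, 360.0), "scale": rng.uniform(0.8, 1.2)}
            for _ in range(count)]

oaks = [(0, 0), (1, 0), (5, 0), (5, 3), (10, 10)]
spaced = enforce_min_spacing(oaks, 4.0)  # 4 m minimum for large trees
logs = randomize_logs(10)
```

The greedy prune is order-dependent (earlier points win), which matches how you'd typically prioritize hero trees over fill.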
Combining with the Procedural Placement Tool
For even more control over specific areas, you can layer the Procedural Placement Tool on top of PCG-generated content. The Procedural Placement Tool offers rule-based scatter with more granular artist control — density painting, exclusion zones, and per-instance overrides.
The workflow becomes:
- PCG handles the broad strokes — overall forest distribution across the landscape
- The Procedural Placement Tool handles hero areas — hand-tuned placement around points of interest, paths, and gameplay spaces
- MCP automation coordinates between the two systems
You: "In the area around the village (coordinates 1200, 800 to 1600, 1200), disable PCG generation and switch to Procedural Placement Tool rules. I want more control over individual tree positions there."
This layered approach gives you the efficiency of procedural generation for the 90% of your world that players pass through quickly, and the precision of hand-tuned placement for the 10% they spend the most time in.
Practical Example 2: Urban Environment Generation
Forests are relatively forgiving — randomness looks natural. Urban environments are harder. Buildings need to align to streets, props need to respect sidewalks and intersections, and the overall layout needs to feel designed rather than random.
Street Grid Setup
Urban PCG typically starts with a road network. You can define this through splines.
You: "Create a grid of road splines for a small town. Main road running north-south, 12 meters wide. Three cross streets running east-west, 8 meters wide, spaced 80 meters apart. Add slight curvature to the cross streets so they're not perfectly straight."
The AI creates the spline actors through MCP. Each spline defines a road center line with width metadata.
Building Placement Along Streets
You: "Along both sides of each road, place building footprints. Use a PCG graph that: samples points along the road splines at 15-20 meter intervals (randomized), offsets them 10 meters from the road center, orients them to face the road, and assigns a random building mesh from my TownBuildings data table. Buildings should not overlap — use a minimum spacing check."
This is a classic PCG pattern: spline sampling plus offset plus orientation. The AI sets up the graph with:
- Spline sampler nodes for each road
- Point offset and rotation nodes
- Data table lookup for mesh assignment
- Self-pruning to eliminate overlaps
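The sample-offset-orient pattern can be sketched in standalone Python for a straight road segment (a real spline would be curved, but the per-point math is the same). All names and the default seed are illustrative; spacing is randomized in the 15-20 m range from the prompt, which also guarantees the non-overlap spacing check.

```python
import math
import random

def sample_road(start, end, min_step, max_step, offset, seed=7):
    """Building anchors along a straight road segment: randomized spacing,
    perpendicular offset to both sides, each anchor facing the road."""
    rng = random.Random(seed)
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)
    ux, uy = dx / length, dy / length        # road direction (unit vector)
    nx, ny = -uy, ux                         # perpendicular direction
    anchors = []
    t = rng.uniform(min_step, max_step)
    while t < length:
        cx, cy = start[0] + ux * t, start[1] + uy * t
        for side in (1, -1):                 # both sides of the road
            px = cx + nx * offset * side
            py = cy + ny * offset * side
            # Yaw pointing back toward the road center line
            yaw = math.degrees(math.atan2(cy - py, cx - px))
            anchors.append({"pos": (px, py), "yaw": yaw})
        t += rng.uniform(min_step, max_step)
    return anchors

lots = sample_road((0, 0), (400, 0), min_step=15, max_step=20, offset=10)
```

In the engine, the same three steps are the spline sampler, a point transform node, and a "look at" rotation; the data table lookup then assigns a mesh per anchor.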
Props and Street Furniture
You: "Add street lights every 25 meters along the main road, alternating sides. Place trash cans near building entrances. Add park benches at the two largest gaps between buildings on each street. Scatter newspaper and leaf debris meshes on sidewalks at low density."
Each of these is a separate PCG pass, some referencing the road splines directly (street lights), some referencing the building output (trash cans near entrances), and some using area sampling (debris).
The Challenge of Urban PCG
Let's be honest about limitations here. PCG-generated urban environments almost always need manual cleanup. Buildings spawn in awkward configurations. Streets don't quite connect properly at intersections. The "feel" of a designed space is hard to achieve purely procedurally.
What PCG gives you is a starting point that's 60-70% there. You then spend your time on the interesting design work — adjusting building types for variety, adding landmark structures by hand, tuning sight lines — rather than the tedious work of placing 200 generic buildings one by one.
AI assistance through MCP makes this even more practical because the cleanup phase is also conversational:
You: "The building at position (1350, 920) is clipping into the adjacent building. Move it 3 meters east and rotate it 5 degrees clockwise."
You: "The intersection at Main Street and Second Avenue looks empty. Add a small park with benches and a fountain. This should be hand-placed, not procedural."
You: "All the buildings on the east side of Main Street are the same mesh. Replace every other one with a different variant from the TownBuildings table."
Practical Example 3: Dungeon Layout Generation
Dungeons are an interesting PCG challenge because they need to be both procedural and gameplay-functional. A dungeon that looks random isn't fun. It needs pacing, progression, and flow.
Room-and-Corridor Approach
The most proven dungeon generation method uses a room-and-corridor model. You generate rooms first, then connect them.
You: "Generate a dungeon layout with 12-15 rooms. Room sizes should vary: 3 small rooms (8x8 to 12x12 meters), 6-8 medium rooms (15x15 to 25x20 meters), 2-3 large rooms (30x25 to 40x30 meters), and 1 boss room (50x40 meters). Place rooms in a loose grid with 10-15 meter corridors connecting adjacent rooms. No room should have more than 4 connections. The boss room should be the furthest from the entrance."
This kind of generation can be implemented through a custom PCG graph that uses point generation for room centers, sizing attributes for room bounds, and connection logic for corridors. The AI sets up the scaffolding through MCP.
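A stripped-down version of that room pass can be sketched in plain Python: jittered grid placement, size-class assignment, and the "boss room furthest from the entrance" rule. Grid spacing, jitter, and the class counts (3 small, 7 medium, 2 large, 1 boss, for 13 rooms total) are illustrative picks within the ranges above; corridor carving is omitted for brevity.

```python
import math
import random

ROOM_CLASSES = [("small", 3), ("medium", 7), ("large", 2), ("boss", 1)]

def generate_layout(seed=42, spacing=60, jitter=10, cols=4):
    rng = random.Random(seed)
    kinds = [k for k, count in ROOM_CLASSES for _ in range(count)]
    rng.shuffle(kinds)
    rooms = []
    for i, kind in enumerate(kinds):
        gx, gy = i % cols, i // cols
        cx = gx * spacing + rng.uniform(-jitter, jitter)
        cy = gy * spacing + rng.uniform(-jitter, jitter)
        rooms.append({"id": i, "kind": kind, "center": (cx, cy)})
    # Enforce the rule: the boss room is the one furthest from the entrance.
    entrance = rooms[0]
    far = max(rooms[1:], key=lambda r: math.dist(r["center"], entrance["center"]))
    for r in rooms:
        if r["kind"] == "boss":
            r["kind"], far["kind"] = far["kind"], "boss"
            break
    return rooms

rooms = generate_layout()
```

Because everything flows from one seeded RNG, the same seed reproduces the same dungeon, which matters once you start hand-tuning on top of a generated layout.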
Populating Rooms
Once the layout exists, each room needs content.
You: "For small rooms, spawn 1-2 enemy spawn points and 0-1 loot containers. Medium rooms get 3-5 enemies and 1-2 loot containers plus environmental props — crates, barrels, rubble. Large rooms get 6-8 enemies, 2-3 loot containers, and a puzzle element placeholder. The boss room gets a single boss spawn point, 4 pillar actors for cover, and a loot chest near the far wall."
The AI configures per-room-type generation rules. Each room type has its own PCG subgraph with appropriate density and placement rules.
Critical Path and Pacing
Here's where pure procedural generation struggles and where the hybrid approach matters.
You: "Mark the critical path from entrance to boss room. Along this path, rooms should escalate in difficulty — fewer enemies near the start, more near the end. Place key item spawns in rooms that branch off the critical path, so players are rewarded for exploration. Add locked door actors at two points along the critical path that require keys from the branch rooms."
This requires the AI to understand the graph connectivity of the generated layout, identify the critical path, and distribute gameplay elements along it. It's the kind of higher-level design logic where AI assistance provides genuine value — not just placing objects, but reasoning about placement in the context of player experience.
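The connectivity reasoning is ordinary graph traversal. Here's a standalone sketch: BFS finds the entrance-to-boss path, everything off that path is a branch (a candidate for key items), and difficulty escalates along the path. The adjacency table is a hand-made example layout, not generated output.

```python
from collections import deque

# Hand-made example layout: room id -> connected room ids.
edges = {
    0: [1], 1: [0, 2, 5], 2: [1, 3], 3: [2, 4],
    4: [3], 5: [1, 6], 6: [5],
}

def critical_path(graph, entrance, boss):
    """Shortest path entrance -> boss via BFS over room connectivity."""
    prev = {entrance: None}
    queue = deque([entrance])
    while queue:
        node = queue.popleft()
        if node == boss:
            break
        for nxt in graph[node]:
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    path, node = [], boss
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

path = critical_path(edges, entrance=0, boss=4)
branches = [r for r in edges if r not in path]       # key-item candidates
difficulty = {room: 1 + i for i, room in enumerate(path)}  # escalates to boss
```

Locked-door placement then reduces to picking two path edges whose keys live in `branches`, which keeps the gating solvable by construction.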
Dungeon Art Pass
You: "Apply themed mesh sets to the dungeon. The first third (rooms closest to entrance) uses 'Crypt' themed meshes — stone walls, torches, cobwebs. The middle third uses 'Cavern' themed meshes — rough rock walls, glowing crystals, water pools. The final third and boss room use 'Temple' themed meshes — ornate pillars, inscribed walls, magical lighting."
The AI modifies the PCG graph to select mesh variants based on room position along the critical path. This is a cosmetic pass but it makes a huge difference in how the dungeon feels — procedural layout with themed progression creates a sense of journey.
Performance Considerations for Procedural Content
Procedural generation is powerful, but it can also destroy your frame rate if you're not careful. Here are the performance considerations we've learned from experience, and how MCP tools can help manage them.
Instance Count Management
The most common PCG performance issue is spawning too many instances. A naive forest generation pass might produce 500,000 tree instances across a large landscape. Even with Nanite and ISM (Instanced Static Meshes), that's a lot of draw calls and memory.
You: "What's the current instance count from the forest PCG volume?"
The AI queries the generation results through MCP and reports: "Forest PCG generated 487,312 instances across 12 mesh types. Breakdown: Large Oak — 23,400, Pine — 45,200, Birch — 18,900, Shrub variants — 89,000, Ferns — 156,000, Mushrooms — 48,000, Ground cover — 106,812."
You: "That's too many. Reduce fern density by 50% and ground cover by 40%. For shrubs, increase minimum spacing from 0.5m to 1m. Keep tree counts as they are."
This kind of informed optimization is much faster when the AI can report exact counts and you can make targeted reductions rather than guessing.
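A targeted reduction like the one above boils down to deterministic thinning: drop a fixed fraction of an instance category, reproducibly under a seed. This standalone sketch uses the instance counts from the report; the thinning function itself is an illustrative stand-in for a density parameter change plus regeneration.

```python
import random

def thin(instances, keep_fraction, seed=42):
    """Seeded random thinning: keep roughly keep_fraction of the instances,
    identically on every run with the same seed."""
    rng = random.Random(seed)
    return [inst for inst in instances if rng.random() < keep_fraction]

ferns = list(range(156_000))
ground_cover = list(range(106_812))
ferns_reduced = thin(ferns, 0.5)          # "reduce fern density by 50%"
cover_reduced = thin(ground_cover, 0.6)   # "reduce ground cover by 40%"
```

The seed matters: an unseeded reduction would reshuffle the surviving instances on every regeneration, making before/after comparisons meaningless.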
LOD and Distance Culling
PCG-generated content needs appropriate LOD (Level of Detail) settings, especially for vegetation.
You: "Set up distance culling for the forest PCG output. Trees should render at full LOD within 200 meters, switch to LOD1 at 400 meters, and cull entirely beyond 800 meters. Small props (mushrooms, ferns) should cull at 100 meters. Ground cover culls at 50 meters."
The AI configures HISM (Hierarchical Instanced Static Mesh) component settings on the PCG output. This is tedious to set up manually for each mesh type but straightforward for the AI to batch-configure.
HLOD Integration
For large worlds, Hierarchical Level of Detail (HLOD) is essential. PCG content needs to be included in HLOD generation.
You: "Ensure all PCG-generated trees are included in the HLOD build. Cluster size should be 64 meters. Use simplified proxy meshes for the HLOD representations."
World Partition Considerations
If you're using World Partition (and you should be for large worlds), PCG generation needs to respect partition grid boundaries. The AI can help configure PCG volumes to align with your partition grid and ensure generation doesn't create dependencies across streaming cells.
You: "My world partition grid is 128 meters. Make sure the forest PCG volume generates independently per cell so that loading one cell doesn't require adjacent cells to be loaded."
Memory Profiling
After generation, it's worth checking memory impact.
You: "What's the estimated memory footprint of the current PCG output? Break it down by mesh type."
The AI can query mesh asset sizes and multiply by instance counts to give you rough estimates. This isn't a precise profiling tool, but it identifies obvious problems — like a single mushroom mesh that's 50MB and has 48,000 instances.
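The arithmetic behind that estimate is simple enough to sketch. The asset sizes and the 64-byte per-instance transform cost below are illustrative assumptions, not profiled values; the point is that instance count multiplies a per-instance cost on top of the shared asset cost, and one oversized asset dominates everything.

```python
PER_INSTANCE_BYTES = 64  # rough cost of one instance transform (assumption)

def estimate_footprint(meshes):
    """meshes: {name: (asset_bytes, instance_count)} -> per-mesh MB estimates."""
    report = {}
    for name, (asset_bytes, count) in meshes.items():
        total = asset_bytes + count * PER_INSTANCE_BYTES
        report[name] = round(total / (1024 * 1024), 2)
    return report

report = estimate_footprint({
    "mushroom": (50 * 1024 * 1024, 48_000),  # the pathological 50 MB mesh
    "fern":     (2 * 1024 * 1024, 156_000),
})
worst = max(report, key=report.get)
```

Note that the 156,000 ferns cost far less than the 48,000 mushrooms: with instanced rendering, the asset size is usually the problem, not the instance count.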
Advanced Techniques: Combining PCG with Gameplay Systems
The most interesting procedural level design workflows go beyond visual placement and integrate with gameplay systems.
Navigation Mesh Integration
PCG content affects AI navigation. Large rocks and dense vegetation create natural barriers that the NavMesh needs to account for.
You: "After forest generation, identify areas where tree density exceeds 1.5 per square meter. Mark those as NavMesh obstacles — AI shouldn't pathfind through dense forest. Create clearings of at least 5 meter radius every 40-50 meters in dense areas so AI can path around them."
This creates gameplay-meaningful procedural content. Dense forest isn't just visual — it's a navigation barrier that funnels AI movement along clearings and paths.
Gameplay Spawn Integration
You can use PCG output to inform gameplay placement.
You: "In the forest, identify the 10 largest natural clearings. Place enemy camp spawn points in the 5 clearings closest to roads. Place resource node spawn points in the 5 clearings furthest from roads."
The AI analyzes the PCG generation output, identifies gap regions, ranks them, and places gameplay actors accordingly. This creates a natural relationship between environment and gameplay — camps appear in accessible clearings, while valuable resources are hidden in remote areas.
Dynamic PCG at Runtime
Some PCG systems generate at runtime for infinite procedural worlds. This is an advanced use case, but MCP can help with the setup.
You: "Configure the dungeon PCG graph for runtime generation. It should generate a new layout each time the player enters. Use the player's progression level as a seed modifier so difficulty scales. Cache generated layouts so re-entering the same dungeon gives the same result within a session."
Runtime PCG has significant performance implications and requires careful optimization. The AI can set up the framework, but you'll need to profile and optimize for your target hardware. This is one of those areas where AI gets you 80% of the way there and the remaining 20% requires deep technical expertise.
Weather and Time-of-Day Response
PCG content can respond to dynamic conditions.
You: "Set up a PCG layer for snow accumulation. When the weather system triggers snowfall, gradually increase the density of snow mesh instances on horizontal surfaces. Rate: 10% density increase per minute of snowfall, capping at 80%. When snowfall stops, decrease at 5% per minute."
This kind of dynamic PCG creates living environments where the procedural content isn't just set dressing but an active part of the world simulation.
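The accumulation controller described in that prompt is a small clamped rate equation. Here's a standalone sketch: the rates and the 80% cap come from the example, while the per-minute stepping function is an illustrative simplification of whatever tick the weather system actually uses.

```python
CAP = 0.80            # maximum snow density
RAISE_PER_MIN = 0.10  # accumulation rate while snowing
DECAY_PER_MIN = 0.05  # melt rate after snowfall stops

def step_density(density, snowing, minutes=1.0):
    """Advance snow density by one time step, clamped to [0, CAP]."""
    if snowing:
        density += RAISE_PER_MIN * minutes
    else:
        density -= DECAY_PER_MIN * minutes
    return min(max(density, 0.0), CAP)

d = 0.0
for _ in range(10):              # 10 minutes of snowfall
    d = step_density(d, snowing=True)
after_snow = d                   # hits the 0.80 cap at minute 8
for _ in range(4):               # 4 minutes after it stops
    d = step_density(d, snowing=False)
```

On the engine side, `d` would drive the density parameter of the snow-mesh PCG layer, with regeneration (or runtime updates) applying the change visually.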
The Iterative Design Philosophy
The most important thing about AI-assisted procedural level design isn't any specific technique. It's the shift in design philosophy.
From "Build Then Test" to "Describe and Refine"
Traditional level design is construction-oriented. You build something, test it, then modify what you built. Each modification requires navigating the editor, finding the right tools, and executing changes manually.
AI-assisted PCG is conversation-oriented. You describe what you want, evaluate the result, and refine through further description. The AI handles the translation from intent to implementation.
This doesn't mean you never touch the editor. Visual evaluation still requires your eyes. Playtesting still requires you to move through the space. Artistic judgment is still entirely yours. But the gap between "I want this to change" and "this has changed" shrinks from minutes to seconds.
When to Use Manual Placement Instead
Not everything should be procedural. Here's our practical heuristic:
Use PCG for:
- Large areas where individual placement would take hours
- Background and fill content that players see but don't closely examine
- Repeated patterns (forests, grasslands, rubble, debris fields)
- Content that needs to change or regenerate (seasonal variations, destructible environments)
Use manual placement for:
- Hero moments — the one tree on the hilltop that frames the vista
- Gameplay-critical placement — cover positions, jump distances, sight lines
- Narrative elements — environmental storytelling props, breadcrumbs
- Areas where "designed" quality is visible and expected
Use hybrid (PCG base + manual override) for:
- Towns and villages — procedural building distribution with hand-placed landmarks
- Combat arenas — procedural cover with manually tuned critical positions
- Transition areas — procedural fill with hand-crafted focal points
The Procedural Placement Tool is designed for this hybrid workflow. It lets you scatter content with rules while maintaining per-instance artistic override. Combined with PCG for broad coverage and MCP for conversational iteration, you have a complete procedural level design pipeline.
Version Control and Reproducibility
One practical concern with procedural content: version control. If your level is procedurally generated, what do you check into source control?
Our approach:
- PCG graphs and parameter sets are versioned. These are your "source code" for the level.
- Random seeds are stored explicitly. Same graph + same seed = same output. This ensures reproducibility.
- Manual overrides are tracked separately. After PCG generation, any hand-placed modifications are stored as override layers.
- Generation is deterministic. Given the same inputs, you get the same outputs. This is critical for multiplayer and testing.
MCP tools can help enforce this by logging all parameter changes with timestamps and providing before/after snapshots of generation results.
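The determinism contract is worth stating as code. This standalone sketch shows the property the versioning scheme depends on: the same parameters plus the same seed reproduce the output exactly, and a different seed gives a different layout. The function name and extent are illustrative.

```python
import random

def generate_scatter(seed, count=100, extent=1000.0):
    """Seeded scatter: same seed + same parameters -> identical point set."""
    rng = random.Random(seed)
    return [(rng.uniform(0.0, extent), rng.uniform(0.0, extent))
            for _ in range(count)]

run_a = generate_scatter(seed=42)
run_b = generate_scatter(seed=42)   # reproducible: identical to run_a
run_c = generate_scatter(seed=43)   # different seed -> different layout
```

This is why checking in graphs, parameters, and seeds is sufficient: the generated actors themselves are derivable artifacts, like build output from source code.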
Setting Up Your PCG-MCP Pipeline
If you're ready to start using AI-assisted procedural level design, here's the practical setup path.
Prerequisites
- Unreal Engine 5.3 or later — for full PCG framework support
- Unreal MCP Server — installed and configured for your project
- An MCP-compatible AI assistant — Claude with MCP support is our recommendation
- Basic PCG knowledge — understand what graphs, volumes, and point data are (Epic's documentation covers the fundamentals)
Step 1: Start Simple
Don't try to generate an entire world on day one. Start with a single PCG volume that scatters rocks across a small landscape section. Get comfortable with the iteration loop — describe what you want, evaluate the result, refine through conversation.
Step 2: Build Reusable PCG Graph Templates
As you develop PCG graphs that work well, save them as reusable templates. The AI can help you parameterize graphs so they're configurable across different environments. A "forest scatter" graph with exposed parameters for species, density, and slope tolerance is useful across many projects.
Step 3: Layer Complexity Gradually
Once basic scatter is working, add layers: ground cover under trees, props along paths, gameplay elements in clearings. Each layer can be a conversation with the AI, building on what's already in place.
Step 4: Integrate with Gameplay
Connect PCG output to your gameplay systems — navigation, spawning, quests. This is where procedural level design becomes procedural game design, and it's the most rewarding (and most challenging) phase.
Step 5: Optimize and Ship
Profile performance, optimize instance counts, set up LODs and culling, verify NavMesh integrity, test on target hardware. The AI can help with the systematic parts of optimization, but final performance tuning requires hands-on profiling.
Common Mistakes and How to Avoid Them
After working with PCG-MCP workflows extensively, here are the mistakes we see most often and how to avoid them.
Mistake 1: Over-Proceduralizing Everything
The excitement of procedural generation leads some developers to try making everything procedural. Resist this urge. Not every placement decision benefits from procedural rules. Sometimes you just need to put a chair in a specific spot because that's where the chair goes.
A good rule of thumb: if describing the placement rule takes longer than just placing the object, place it manually. PCG shines for bulk operations, not individual precision.
Mistake 2: Ignoring Seed Management
Random seeds are your lifeline for reproducibility. Every PCG graph should have an explicitly set seed — never rely on random seeds in production. When you find a generation result you like, note the seed. When you need to regenerate (after a graph change, for example), you can return to known-good seeds as a starting point.
You: "Set the random seed on all PCG volumes to 42. After I approve the current generation, save this seed value to the level's metadata so we can reproduce it."
Mistake 3: Not Profiling Early Enough
It's tempting to build out the full PCG system before thinking about performance. Don't. Profile after every major addition. A forest that runs at 60fps doesn't necessarily still run at 60fps after you add the understory layer, the ground cover layer, and the debris layer.
You: "After adding the ground cover layer, what's the total instance count? And what's the frame time impact in the PCG volume area compared to outside it?"
Catching performance problems early is much cheaper than discovering them when your world is fully dressed and removing content would require re-evaluating the entire visual composition.
Mistake 4: Forgetting About Streaming
In World Partition projects, every actor has a streaming cell assignment. PCG-generated actors need to be assigned to appropriate cells, and the generation itself should respect cell boundaries to avoid cross-cell dependencies that cause streaming issues.
Test your PCG content with streaming enabled early. Walk between streaming cells and watch for pop-in, loading hitches, and missing content at cell boundaries.
Mistake 5: Not Documenting PCG Rules
Six months from now, when you need to modify the forest generation, you won't remember why the density curve uses those specific values. Document your PCG decisions — not just the parameter values, but the reasoning behind them.
MCP conversations are actually useful here because the natural language descriptions you used to create the PCG setup serve as implicit documentation. Save your MCP conversation logs as part of your project documentation.
What This Looks Like in Practice
We use this workflow internally for our own projects. A recent open-world prototype used PCG-MCP for approximately 70% of its environmental content — forests, grasslands, rocky outcrops, road-side scatter. The remaining 30% was hand-placed: towns, quest locations, boss arenas, and narrative spaces.
Total environment art time for the prototype: roughly 3 weeks for one artist. Our estimate for the same scope without procedural generation: 8-10 weeks. That's not a marketing claim — it's an honest accounting of a specific project with specific requirements. Your mileage will vary based on art quality targets, world size, and team experience with PCG.
The AI-assisted iteration was the biggest time saver. Not the initial generation — that's fast but imprecise. The refinement. Being able to say "the hillside east of the lake needs more variety" and have the AI adjust density curves, add species variation, and regenerate in under a minute. That refinement loop, done hundreds of times over the course of a project, is where weeks of time savings accumulate.
Conclusion
AI-assisted procedural level design in Unreal Engine 5 isn't magic. It's a practical workflow that combines the PCG framework's generation capabilities with MCP's natural language interface. The PCG framework handles the heavy lifting of content placement. MCP tools provide the bridge between your design intent and the technical implementation. And the AI assistant gives you a conversational iteration loop that makes procedural workflows accessible even if you're not a technical artist with years of PCG experience.
The technology is real and it's useful today. Start with the Unreal MCP Server, set up a simple PCG volume, and have a conversation about what you want your world to look like. You might be surprised how quickly it takes shape.
For rule-based scatter that gives you per-instance control alongside procedural generation, check out the Procedural Placement Tool. And if you're building gameplay systems to populate those procedurally generated levels, the Blueprint Template Library provides production-ready systems for inventory, dialogue, quests, and more — so you can focus on the design work that actually requires a human.