The Blender-to-Unreal pipeline has always been functional but friction-filled. Every major version of either tool shifts the details: export settings change, material translation breaks in new ways, and hard-won workflow knowledge becomes outdated. Blender 5.x and Unreal Engine 5.7 are no exception — the fundamentals are the same, but the specifics have shifted enough that your 2024 pipeline documentation is likely causing you subtle problems.
This post is a comprehensive, updated guide to moving assets from Blender 5.x to UE 5.7 in 2026. We cover the full pipeline: Geometry Nodes export, LOD generation, Nanite mesh preparation, material conversion, batch import automation, and the common pitfalls specific to Blender 5.x. We use this pipeline internally for our own projects and for the asset development behind tools like DetailForge, so the advice comes from production experience rather than theory.
What Has Changed in Blender 5.x for Export
Geometry Nodes Overhaul
Blender 5.x has continued the steady evolution of Geometry Nodes that began in Blender 3.x, and the changes affect export workflows significantly.
Realized vs. instanced geometry. Geometry Nodes can produce geometry in two forms: instances (references to existing meshes placed at multiple transforms) and realized geometry (unique mesh data). For export to Unreal, this distinction matters enormously.
Instances export as individual mesh copies unless you explicitly realize them before export. If you have a Geometry Nodes setup that scatters 500 rocks using instances, exporting to FBX will either produce 500 separate meshes (one per instance) or a single massive mesh (if realized). Neither default outcome is usually what you want in Unreal.
The correct approach for most cases is to realize the geometry in Blender, but do it strategically:
- Unique meshes: Export as separate FBX files. In Unreal, these become individual Static Meshes that can be placed as instances using the Instanced Static Mesh system or the Procedural Placement Tool.
- Scattered instances of the same mesh: Export the base mesh as one FBX. Export the instance transforms as a CSV or JSON file. In Unreal, use a data-driven placement system to recreate the scatter.
- Merged geometry (building facades, terrain details): Realize and merge in Blender, then export as a single mesh. This works well for Nanite, which handles dense single meshes efficiently.
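For the scattered-instances case, the transform data can be serialized to a simple JSON file that a data-driven placement script on the Unreal side reads back. A minimal sketch of that export, assuming a hypothetical record layout (the `mesh` key, field names, and file name are illustrative, not a fixed format):

```python
import json

def export_scatter_transforms(instances, path):
    """Write instance transforms (location, rotation, scale) to a JSON
    file. `instances` is a list of (location, rotation_euler, scale)
    triples, each a 3-tuple of floats, in Blender's coordinate space."""
    records = [
        {"location": list(loc), "rotation": list(rot), "scale": list(scl)}
        for loc, rot, scl in instances
    ]
    with open(path, "w") as f:
        json.dump({"mesh": "SM_Rock", "instances": records}, f, indent=2)
    return len(records)

# Example: two scattered rock instances
count = export_scatter_transforms(
    [((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)),
     ((2.5, 1.0, 0.0), (0.0, 0.0, 1.57), (0.8, 0.8, 0.8))],
    "rock_scatter.json",
)
```

The same structure works for CSV if your Unreal-side tooling prefers flat rows; JSON just keeps the per-instance grouping explicit.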
Attribute transfer. Blender 5.x Geometry Nodes use an attribute system for storing per-vertex, per-face, and per-instance data. Vertex colors, custom normals, UVs, and arbitrary named attributes can all be generated by Geometry Nodes. However, FBX export only supports a subset of these attributes — primarily vertex colors (up to 8 sets) and UVs (up to 8 sets).
If your Geometry Nodes pipeline generates custom attributes for material blending, weathering, or other purposes, you need to convert those attributes to vertex color channels before export. Blender 5.x provides nodes for this conversion, but it is an extra step that is easy to forget.
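Independent of the specific node setup, the conversion amounts to packing a scalar per-vertex attribute into one channel of an RGBA vertex color layer, clamped to the 0-1 range FBX expects. A sketch of that packing (the attribute values and channel choice are illustrative):

```python
def pack_attribute_to_vertex_color(values, channel=0):
    """Pack a per-vertex scalar attribute into one channel of an RGBA
    vertex color layer. Values are clamped to [0, 1]; unused channels
    default to 0.0 and alpha to 1.0."""
    colors = []
    for v in values:
        clamped = max(0.0, min(1.0, v))
        rgba = [0.0, 0.0, 0.0, 1.0]
        rgba[channel] = clamped
        colors.append(tuple(rgba))
    return colors

# A "weathering" attribute packed into the red channel; out-of-range
# values are clamped rather than wrapped.
packed = pack_attribute_to_vertex_color([0.2, 1.5, -0.1], channel=0)
```

On the Unreal side, the material then reads the value back with a Vertex Color node, so the channel assignment has to be agreed on between the Blender setup and the master material.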
Updated FBX Exporter
Blender 5.x includes an updated FBX exporter with several changes relevant to Unreal workflows:
Improved armature export. Bone orientations and scales translate more reliably to Unreal's skeleton system. The long-standing issues with bone roll and axis alignment have been significantly reduced, though not completely eliminated for complex rigs.
Better material slot handling. Material assignments are more consistently preserved during export. The old issue where material slots would shuffle or duplicate on export has been addressed in most cases.
Mesh optimization options. The exporter now includes options for removing degenerate triangles, merging by distance, and applying modifiers on export. These previously required manual steps before exporting.
glTF as an alternative. Blender 5.x's glTF exporter has improved to the point where it is a viable alternative to FBX for many workflows. Unreal 5.7 supports glTF import natively. glTF handles PBR materials more cleanly than FBX, and the format is less ambiguous. Consider testing glTF for your pipeline, especially if FBX material translation is causing issues.
Preparing Meshes for Nanite
When to Use Nanite
Nanite is not appropriate for every mesh. Understanding when to use it saves time and avoids subtle rendering issues.
Use Nanite for:
- Static environment geometry (buildings, rocks, terrain features, props)
- High-poly meshes where traditional LODs would be labor-intensive
- Meshes that will be instanced many times (foliage, debris, modular pieces)
- Film-quality meshes that you want to use without manual LOD creation
Do not use Nanite for:
- Skinned meshes (characters, creatures, animated objects) — Nanite does not support skeletal mesh deformation
- Meshes with complex material effects that rely on world position offset or vertex animation
- Translucent materials (not supported by Nanite; masked materials are supported, but with caveats)
- Meshes that need precise collision (Nanite's simplified collision may not match the visual mesh closely enough for gameplay collision)
Mesh Preparation Checklist
Before exporting a mesh intended for Nanite:
1. Clean topology is less critical than you think. One of Nanite's advantages is that it handles messy topology gracefully. You do not need perfectly clean quad-based topology for Nanite meshes. Nanite will cluster and simplify the mesh regardless of the input topology. Spend your cleanup time on issues that affect rendering (overlapping faces, zero-area triangles) rather than topology flow.
2. UVs still matter. Nanite handles geometry LODs automatically, but it does not generate UVs. Your UVs need to be complete and non-overlapping for correct texturing. This is the same requirement as traditional meshes, and it is the step that benefits most from careful attention.
3. Normals should be intentional. Nanite preserves custom normals, so your smoothing groups and custom normal edits will carry through. Auto-smooth in Blender 5.x uses the angle-based method by default — verify that the auto-smooth angle produces the results you expect before export.
4. Scale matters. Export at Unreal scale (1 Blender unit = 1 cm in Unreal, or adjust in the export settings). Nanite's screen-space error calculations assume correct world scale. Meshes exported at the wrong scale will have incorrect LOD transitions — appearing overly simplified when they should be detailed, or retaining too much detail at distance.
5. Pivot point placement. Set the origin (pivot point) to a logical position before export. For modular pieces, this is typically the snap point. For props, it is typically the base. In Blender, use Object > Set Origin to position it correctly. An incorrect pivot point means repositioning every mesh after import.
Triangle Count Guidelines
Nanite handles high triangle counts well, but there are practical limits:
- Props (furniture, tools, small objects): 10K-100K triangles is typical. Nanite handles this easily.
- Modular building pieces: 50K-500K triangles depending on detail level. The upper end is for hero pieces with fine detail.
- Large environment pieces (cliff faces, terrain sections): 100K-2M triangles. Nanite is designed for this scale.
- Extremely dense meshes (photogrammetry, ZBrush sculpts): 2M-10M+ triangles. Nanite can handle this, but import times increase significantly. Consider decimating in Blender first to reduce the import bottleneck without noticeably affecting visual quality.
Going above 10M triangles per mesh is possible but offers diminishing returns. Nanite's virtualized geometry makes the same mesh look nearly identical at 2M and 20M triangles under normal viewing conditions.
LOD Generation for Non-Nanite Meshes
Not every mesh should use Nanite. For skeletal meshes, animated objects, and meshes with material effects that Nanite does not support, you still need traditional LODs.
Blender-Side LOD Generation
Blender 5.x provides several approaches for LOD generation:
Decimate modifier. The simplest approach. Apply the Decimate modifier with different ratios for each LOD level. Typical ratios:
- LOD0: Original mesh (100%)
- LOD1: 50% of original
- LOD2: 25% of original
- LOD3: 10% of original
The Decimate modifier with the Collapse method generally produces better results than the Planar or Un-Subdivide methods for organic meshes. For hard-surface meshes, the Planar method can be effective at removing coplanar triangles.
Remesh modifier. For a more uniform simplification, the Remesh modifier in Voxel mode can produce clean LODs. The trade-off is that it destroys UVs, so you need to re-unwrap or use a projection-based UV method for lower LODs.
Manual LODs. For hero assets (main character, key props), manual LOD creation produces the best results. Start from the high-poly mesh and manually retopologize each LOD level. This is time-intensive but gives you full control over which details are preserved at each level.
Export Convention
Export each LOD as a separate FBX file with a consistent naming convention:
SM_Rock_LOD0.fbx
SM_Rock_LOD1.fbx
SM_Rock_LOD2.fbx
SM_Rock_LOD3.fbx
Or export all LODs in a single FBX file using the LOD naming convention that Unreal's import pipeline recognizes (append _LOD0, _LOD1, etc. to the object names). The importer can then detect the suffixes and set up the LOD chain automatically.
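Whichever route you take, it pays to validate the naming before import. A small sketch that groups exported files into LOD chains and flags gaps, assuming the `_LODn` suffix convention shown above:

```python
import re
from collections import defaultdict

LOD_PATTERN = re.compile(r"^(?P<base>.+)_LOD(?P<level>\d+)\.fbx$")

def group_lod_chains(filenames):
    """Group FBX filenames into {base_name: [levels]} and report any
    chain with missing levels (e.g. LOD0 and LOD2 but no LOD1)."""
    chains = defaultdict(list)
    for name in filenames:
        m = LOD_PATTERN.match(name)
        if m:
            chains[m.group("base")].append(int(m.group("level")))
    gaps = {base: sorted(levels) for base, levels in chains.items()
            if sorted(levels) != list(range(len(levels)))}
    return dict(chains), gaps

chains, gaps = group_lod_chains(
    ["SM_Rock_LOD0.fbx", "SM_Rock_LOD1.fbx", "SM_Rock_LOD2.fbx",
     "SM_Tree_LOD0.fbx", "SM_Tree_LOD2.fbx"]  # SM_Tree is missing LOD1
)
```

Running this over the export directory before import catches renaming mistakes while they are still cheap to fix.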
Automating LOD Generation with MCP
The Blender MCP Server can automate LOD generation for batches of meshes. A typical workflow:
- Select all meshes that need LODs
- Instruct the AI agent: "Generate 3 LOD levels for all selected meshes using the Decimate modifier, with ratios 0.5, 0.25, and 0.1. Export each LOD as a separate FBX to the export directory."
- The agent applies the modifier at each ratio, renames the objects with LOD suffixes, and exports them
For a batch of 20 meshes, this process takes minutes instead of the hour or more it would take manually. The time savings scale linearly with batch size.
Material Conversion: Blender to Unreal
The Fundamental Mismatch
Blender's material system (Shader Editor with node-based materials) and Unreal's material system (Material Editor with a different node-based system) are conceptually similar but technically incompatible. There is no automatic 1:1 conversion, and there never will be — the rendering engines are fundamentally different.
What does transfer cleanly:
- Texture references (the image files themselves)
- Basic PBR parameters (base color, roughness, metallic, normal)
- UV channel assignments
What does not transfer:
- Procedural textures (Noise, Voronoi, Wave, etc.)
- Complex node setups (Mix nodes, color ramps, math operations)
- Shader-specific features (Principled BSDF settings that have no Unreal equivalent)
The Practical Approach
Step 1: Bake procedural materials to textures. If your Blender material uses any procedural nodes, bake them to texture maps before export. This converts node-based complexity into image data that any engine can use.
Bake at a resolution appropriate for the asset's screen size. Props that will never fill more than a quarter of the screen do not need 4K textures. Environment pieces viewed at distance can use 1K or 2K.
Step 2: Export PBR texture sets. For each material, export:
- Base Color (albedo) map
- Normal map (note that Unreal's renderer uses the DirectX convention — see Normal Map Orientation below for how to handle Blender's OpenGL-style bakes)
- Roughness map
- Metallic map
- Ambient Occlusion map (optional, but useful for Unreal's material setup)
- Emissive map (if applicable)
Step 3: Set up Unreal materials from textures. In Unreal, create Material Instances from a master material that accepts PBR texture inputs. Assign the exported textures to the appropriate slots.
This step is where automation saves significant time. Manually creating material instances and assigning textures for 50 assets is tedious and error-prone. The Unreal MCP Server can automate this: "Create material instances for all imported meshes using the MasterMaterial parent, assigning textures based on the naming convention _BaseColor, _Normal, _Roughness, _Metallic."
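The texture-to-slot matching in that prompt reduces to simple suffix parsing. A sketch of the matching logic (the suffix set is a subset of the convention above; the mesh and texture names are illustrative):

```python
# Map file-name suffixes to material parameter slots
SUFFIX_TO_SLOT = {
    "_BaseColor": "BaseColor",
    "_Normal": "Normal",
    "_Roughness": "Roughness",
    "_Metallic": "Metallic",
}

def match_textures(mesh_name, texture_names):
    """Map material slots to texture names for one mesh, based on the
    <MeshName><Suffix> naming convention. Textures for other meshes
    are ignored."""
    slots = {}
    for tex in texture_names:
        for suffix, slot in SUFFIX_TO_SLOT.items():
            if tex == mesh_name + suffix:
                slots[slot] = tex
    return slots

slots = match_textures(
    "SM_Crate",
    ["SM_Crate_BaseColor", "SM_Crate_Normal", "SM_Crate_Roughness",
     "SM_Barrel_BaseColor"],  # belongs to a different mesh, ignored
)
```

A missing entry in the returned dict (here, no Metallic map) is exactly the kind of gap the validation pass later in this post should flag.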
Normal Map Orientation
This catches people every time. Blender uses OpenGL-style normal maps (Y+ pointing up). Unreal uses DirectX-style normal maps (Y pointing down — equivalently, the green channel is inverted).
You have two options:
- Flip the green channel on import in Unreal. Check the "Flip Green Channel" option in the texture import settings.
- Bake DirectX-format normals from Blender. In Blender's bake settings, there is no native option for this, but you can invert the green channel of the baked normal map using the compositor or a simple node setup.
Pick one approach and use it consistently. Mixing approaches across assets creates inconsistent lighting that is difficult to diagnose.
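The flip itself is a one-line pixel operation: for 8-bit data, G becomes 255 − G. A sketch over raw RGBA tuples (in practice you would do this in Blender's compositor or with an image library, but the math is the same):

```python
def flip_green_channel(pixels):
    """Convert between OpenGL- and DirectX-style normal maps by
    inverting the green channel of 8-bit RGBA pixels. The operation
    is its own inverse, so applying it twice is a no-op."""
    return [(r, 255 - g, b, a) for r, g, b, a in pixels]

pixels = [(128, 128, 255, 255), (100, 200, 240, 255)]
flipped = flip_green_channel(pixels)
```

Because the flip is its own inverse, accidentally applying it twice silently restores the original orientation, which is one reason mixed conventions across assets are so hard to diagnose by eye.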
Batch Import Automation
The Manual Approach (and Why It Does Not Scale)
Importing one asset into Unreal is straightforward: drag the FBX into the Content Browser, adjust import settings, click Import. Importing 100 assets this way takes hours and introduces inconsistency — different settings on different imports, missed textures, incorrect material assignments.
Automated Import with MCP
The Unreal MCP Server enables batch import workflows through the AI agent:
Batch FBX import. "Import all FBX files from /Export/Buildings/ into /Game/Environment/Buildings/. Enable Nanite for all meshes. Set collision to Use Complex as Simple."
Automated material assignment. "For each imported mesh, create a material instance from MI_Master_Environment. Assign textures from /Game/Textures/Buildings/ matching the mesh name prefix."
LOD chain setup. "For meshes with LOD suffixes, configure the LOD chain with screen sizes 1.0, 0.5, 0.25, 0.1."
Validation pass. "Audit all imported meshes in /Game/Environment/Buildings/. Report any meshes with missing materials, triangle count above 5M, or missing collision."
Each of these operations would take minutes to hours manually. Through MCP automation, the entire batch import pipeline runs in minutes with consistent settings across all assets.
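The validation pass can also be expressed directly as a script over asset metadata. A sketch of the audit logic, where the record fields and the 5M-triangle threshold are assumptions matching the prompt above:

```python
def audit_meshes(meshes, max_triangles=5_000_000):
    """Return a list of (mesh_name, issue) pairs for meshes with
    missing materials, excessive triangle counts, or no collision."""
    issues = []
    for mesh in meshes:
        if not mesh.get("materials"):
            issues.append((mesh["name"], "missing materials"))
        if mesh.get("triangles", 0) > max_triangles:
            issues.append((mesh["name"], "triangle count above limit"))
        if not mesh.get("has_collision", False):
            issues.append((mesh["name"], "missing collision"))
    return issues

report = audit_meshes([
    {"name": "SM_Wall", "materials": ["MI_Brick"], "triangles": 80_000,
     "has_collision": True},
    {"name": "SM_Statue", "materials": [], "triangles": 7_200_000,
     "has_collision": False},
])
```

An empty report is the pass condition; anything else goes back to the export step rather than getting patched by hand in the editor.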
The Blender-Side Batch Export
The Blender MCP Server handles the export side:
"Export all objects in the Buildings collection as individual FBX files. Apply all modifiers. Set scale to 1.0 (Unreal scale). Include custom normals. Bake all procedural materials to 2K textures. Save exports to /Export/Buildings/."
Combining Blender-side batch export with Unreal-side batch import creates an end-to-end pipeline that moves large asset batches between tools with minimal manual intervention.
Common Pitfalls with Blender 5.x
Pitfall 1: Geometry Nodes Evaluation on Export
Blender 5.x evaluates Geometry Nodes setups at export time. If your Geometry Nodes tree is computationally expensive (high-iteration simulations, dense scatters), the export can take significantly longer than expected — or even crash Blender if memory is exhausted.
Fix: Simplify or disable Geometry Nodes before export. If you need the Geometry Nodes output, realize the geometry first (Apply modifier), then export the realized mesh. This separates the computation from the export process.
Pitfall 2: Attribute Naming Conflicts
Blender 5.x allows arbitrary attribute names on mesh data. Some of these names conflict with Unreal's expected attribute names. For example, an attribute named "color" might conflict with Unreal's vertex color interpretation. Attributes named "position" or "normal" can cause import errors.
Fix: Use prefixed attribute names that are unlikely to conflict: "custom_blend_weight" instead of "weight," "scatter_density" instead of "density."
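A defensive pre-export check can apply that renaming automatically. A sketch (the reserved-name list reflects only the examples above and is not exhaustive):

```python
# Names known to collide with Unreal's expected attributes (per the
# examples above); extend this set as you hit new conflicts.
RESERVED_NAMES = {"color", "position", "normal"}

def sanitize_attribute_names(names, prefix="custom_"):
    """Prefix any attribute name that collides (case-insensitively)
    with a reserved name, leaving all other names untouched."""
    return [prefix + n if n.lower() in RESERVED_NAMES else n
            for n in names]

renamed = sanitize_attribute_names(["Color", "blend_weight", "position"])
```

Running this over every mesh's attribute list just before export turns an intermittent import error into a deterministic rename.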
Pitfall 3: Collection Instancing vs. Object Instancing
Blender 5.x has two instancing systems: object-level instancing (duplication) and collection instancing (linking a collection as an instance). These behave differently on export. Collection instances may export as empty transforms without geometry, or may not export at all depending on the exporter settings.
Fix: Before export, make all instances real (Object > Make Instances Real, or the equivalent in Blender 5.x's menu structure). This converts instances to actual geometry that exports reliably.
Pitfall 4: Hair and Curves
Blender 5.x's hair system uses the Curves object type. These do not export to FBX in a format that Unreal can use directly. If your asset has Blender hair, you need to convert it to mesh geometry before export, or use Unreal's own Groom system and export via Alembic.
Fix: For game-ready hair cards, convert curves to mesh in Blender and texture them as billboard strips. For Groom-based hair (higher quality, higher cost), export as Alembic (.abc) and import into Unreal's Groom system.
Pitfall 5: Scale and Unit Discrepancies
Blender 5.x defaults to metric units with a scale of 1.0, where 1 unit = 1 meter. Unreal uses centimeters as its base unit. The FBX exporter's scale settings interact with both Blender's scene scale and Unreal's import scale in ways that can produce meshes at 100x or 0.01x the intended size.
Fix: Set Blender's unit scale to 0.01 (so 1 Blender unit = 1 cm), or set the FBX export scale to 100, or set Unreal's import scale to 1.0. Pick one approach, document it, and apply it consistently. Test with a reference cube (1m x 1m x 1m in intended scale) to verify the pipeline before processing batches.
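The reference-cube test reduces to quick arithmetic: a cube intended to be 1 m should measure 100 units (cm) once it reaches Unreal. A sketch of that check, assuming Unreal's import scale is left at 1.0 (the function name and parameters are illustrative):

```python
def blender_to_unreal_size(size_m, unit_scale=0.01, fbx_export_scale=1.0):
    """Predict the size, in Unreal units (cm), of a mesh intended to be
    `size_m` meters, given Blender's scene unit scale and the FBX
    exporter's scale factor. With unit_scale=0.01, one Blender unit is
    1 cm, so a 1 m object is authored as 100 units."""
    blender_units = size_m / unit_scale   # units the mesh is modeled at
    return blender_units * fbx_export_scale

# Both approaches from the fix above should land on 100 cm for a 1 m cube:
via_unit_scale = blender_to_unreal_size(1.0)  # unit scale 0.01
via_export_scale = blender_to_unreal_size(1.0, unit_scale=1.0,
                                          fbx_export_scale=100.0)
```

If either path produces anything other than 100 for the reference cube, the pipeline has a hidden scale factor somewhere, and it is far cheaper to find it now than after a batch import.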
End-to-End Pipeline Summary
Here is the complete pipeline for a typical asset batch:
- Model in Blender. Create meshes with clean UVs, intentional normals, and correct pivot points.
- Generate LODs (non-Nanite meshes). Use the Decimate modifier or manual retopology. Automate with the Blender MCP Server for batches.
- Bake procedural materials. Convert any Geometry Nodes or procedural shader outputs to texture maps.
- Export FBX/glTF. Use consistent export settings. Verify scale with a reference object.
- Import to Unreal. Use batch import through the Unreal MCP Server for consistent settings.
- Enable Nanite (applicable meshes). Configure Nanite settings per asset type.
- Set up materials. Create material instances and assign textures, automating through MCP where possible.
- Validate. Run an audit pass to catch missing materials, incorrect scale, missing collision, or other issues.
- Place in scenes. Use the Procedural Placement Tool for environment scatter, or place hero assets manually.
This pipeline, once established and documented, produces consistent results across asset batches and team members. The automation steps through MCP servers reduce the per-asset overhead from minutes to seconds, which makes the difference between a 100-asset batch being a day of work or an hour.
Document your pipeline. Automate the tedious parts. Verify with reference assets before processing batches. The Blender-to-Unreal pipeline will never be perfectly seamless — they are different tools with different assumptions — but with the right setup, it can be efficient, reliable, and largely invisible to the creative process.