
tutorial
StraySpark · May 2, 2026 · 5 min read
AI-Powered 3D Modeling in Blender: Complete Guide to the Blender MCP Server
Tags: Blender, MCP, AI, Automation, 3D Modeling

Blender is one of the most powerful 3D tools available, and it's free. But power comes with complexity. The interface has deep menus, dense modifier stacks, and a learning curve that humbles even experienced artists. What if you could skip the menu-diving and just describe what you want?

That's the idea behind the Blender MCP Server. It connects AI assistants like Claude, Cursor, and Windsurf directly to Blender, giving them access to 212 tools across 22 categories. You describe what you want in plain language. The AI translates that into Blender operations and executes them.

This guide covers everything: what MCP is, how to set it up, what the tools can do, and five real workflows that show what AI-assisted 3D work actually looks like in practice.

Why Blender + AI

Blender's strength is also its challenge. It can do modeling, sculpting, texturing, rigging, animation, simulation, compositing, and video editing. Each of those domains has its own toolset, its own keyboard shortcuts, and its own way of thinking.

For experienced Blender users, AI assistance removes the friction of repetitive operations. You know what you want — you just don't want to click through 15 menus to get there. "Add a subdivision surface modifier at level 2 with optimal display" is faster than navigating to the modifier panel, clicking Add Modifier, selecting Subdivision Surface, then adjusting the settings.

For newer users, AI assistance lowers the barrier. Instead of watching a 20-minute tutorial to figure out how to UV unwrap a mesh, you describe what you need and the AI handles the technical execution. You still need to understand the concepts, but the interface stops being the bottleneck.

For production pipelines, AI assistance enables batch operations that would otherwise require custom Python scripts. Renaming 50 objects to match a naming convention, applying the same modifier stack to every mesh in a collection, exporting each object as a separate FBX — these are tasks that take minutes to describe but hours to do manually.

What Is MCP?

MCP stands for Model Context Protocol. It's an open standard created by Anthropic that lets AI models communicate with external tools. Think of it as a universal adapter between AI assistants and the software you use.

Without MCP, an AI assistant can only generate text — code snippets, instructions, descriptions. You have to manually copy that output and apply it yourself. With MCP, the AI assistant can directly execute operations inside your tools. No copy-pasting. No context switching.

The protocol works through a server-client architecture:

  • MCP Server — runs inside your tool (in this case, Blender) and exposes a set of operations the AI can call
  • MCP Client — your AI assistant (Claude Code, Cursor, Windsurf, Claude Desktop) that sends requests to the server
  • Tools — specific operations the server exposes, like "create a mesh," "add a material," or "set a keyframe"
  • Context Resources — read-only information the AI can query, like "what objects are in the scene?" or "what materials exist?"

The Blender MCP Server exposes 212 tools and 14 context resources. Everything runs locally — no cloud dependencies, no API keys for the Blender connection, no data leaving your machine.
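Under the hood, MCP messages are JSON-RPC 2.0: the client sends a `tools/call` request naming a tool and its arguments, and the server executes it and replies with a result. A sketch of what a tool call looks like on the wire (the tool name `create_mesh` and its arguments are illustrative, not the server's actual schema):

```python
import json

# Hypothetical request an MCP client sends to invoke a server tool.
# "tools/call" is the standard MCP method for tool invocation; the
# tool name and argument names below are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_mesh",          # illustrative tool name
        "arguments": {
            "primitive": "cylinder",
            "vertices": 24,
            "depth": 2.0,
            "radius": 0.5,
            "name": "Barrel",
        },
    },
}

wire = json.dumps(request)              # what actually goes over the wire
decoded = json.loads(wire)
print(decoded["method"])                # tools/call
```

The AI assistant builds these requests for you; the point is that every "tool" is just a named, typed operation the server has registered.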

Setting Up the Blender MCP Server

Requirements

  • Blender 5.0 or later
  • An MCP-compatible AI client (Claude Code, Cursor, Windsurf, or Claude Desktop)
  • The Blender MCP Server plugin

Installation

The server installs as a standard Blender add-on. Download the plugin, open Blender, go to Edit > Preferences > Add-ons, click Install, and select the downloaded file. Enable the add-on and restart Blender.

Once enabled, the MCP server starts automatically when Blender launches. You'll see a small indicator in the status bar confirming the server is running.

Connecting Your AI Client

Point your AI client to the server address shown in the add-on preferences. The exact configuration depends on your client:

  • Claude Code — add the server to your MCP configuration file
  • Cursor — add it through the MCP settings panel
  • Windsurf — configure in the MCP connections settings
  • Claude Desktop — add to the MCP server configuration
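For Claude Desktop, for example, MCP servers are declared in its `claude_desktop_config.json`. A hypothetical entry might look like the following; the server name, command, and port are placeholders, so use whatever the add-on preferences actually show:

```json
{
  "mcpServers": {
    "blender": {
      "command": "blender-mcp-proxy",
      "args": ["--port", "9876"]
    }
  }
}
```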

Once connected, your AI assistant has access to all 212 tools and 14 context resources. You can verify the connection by asking the AI something like "What objects are currently in the Blender scene?"

The 212 Tools: What They Actually Do

The tools are organized into 22 categories. Here's an overview of what's available:

Object Operations

Creating, deleting, duplicating, and transforming objects. Moving, rotating, scaling. Setting origins. Parenting and unparenting. These are the fundamental operations you'd do manually with G, R, S keys and the properties panel.

Mesh Modeling

Vertex, edge, and face operations. Extrusion, inset, bevel, loop cuts, merge, dissolve. Bridge edge loops, fill faces, separate meshes, join meshes. The core modeling toolkit translated into AI-callable operations.

Modifiers

Adding, configuring, applying, and removing modifiers. Every standard Blender modifier is supported — Subdivision Surface, Array, Mirror, Boolean, Bevel, Solidify, Shrinkwrap, Lattice, and more. The AI can set any modifier parameter directly.

Materials and Shading

Creating materials, assigning them to objects, configuring shader nodes. Setting base color, metallic, roughness, emission, transparency. Creating node setups for PBR workflows. Assigning textures.

UV Mapping

Smart UV project, unwrap, lightmap pack, cube projection, cylinder projection, sphere projection. Mark and clear seams. Pack UV islands. Scale and rotate UV coordinates.

Rigging and Armatures

Creating armatures, adding bones, setting bone properties. Parenting meshes to armatures with automatic weights. Creating bone constraints — IK, Copy Rotation, Track To, Limit Location, and others. Setting up basic rigs.

Animation and Keyframing

Inserting keyframes, setting interpolation modes, adjusting timing. Creating shape keys and drivers. Setting up basic animation curves. Managing NLA strips.

Scene and Rendering

Camera and light creation and configuration. Setting render resolution, samples, output format. Configuring EEVEE and Cycles settings. Setting world properties, HDRI environment maps.

Collections and Organization

Creating, renaming, and managing collections. Moving objects between collections. Toggling visibility and renderability per collection.

Import and Export

FBX, OBJ, GLTF/GLB, STL, PLY, Alembic import and export. Setting scale, axes, and format-specific options.

And More

There are also categories for curves and splines, sculpting, particle systems, constraints, drivers, grease pencil, and compositing. The full list covers essentially every operation you'd do through Blender's interface.

The 5 Tool Presets

Just like our Unreal MCP Server, the Blender MCP Server ships with tool presets that load only the tools relevant to your current task:

  • Modeling — mesh operations, modifiers, transforms, UV mapping
  • Shading — materials, textures, shader nodes, UV tools
  • Rigging & Animation — armatures, bones, constraints, keyframing
  • Scene Setup — cameras, lights, rendering, world settings
  • Full Access — all 212 tools enabled

Using presets helps the AI make better tool choices. When only 40 modeling tools are loaded instead of all 212, the AI spends less time deciding which tool to use and more time using the right one.
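Conceptually, a preset is just a filter over the tool registry: only tools in the preset's categories get exposed to the client. A minimal sketch of that mechanism (the registry entries, category names, and preset contents here are illustrative, not the server's real data):

```python
# Hypothetical tool registry: tool name -> category.
TOOL_REGISTRY = {
    "create_mesh": "mesh",
    "extrude_faces": "mesh",
    "add_modifier": "modifiers",
    "create_material": "materials",
    "assign_material": "materials",
    "add_bone": "armatures",
    "insert_keyframe": "animation",
    "set_render_resolution": "rendering",
}

# Illustrative presets mapping to the categories they load.
PRESETS = {
    "Modeling": {"mesh", "modifiers"},
    "Shading": {"materials"},
    "Rigging & Animation": {"armatures", "animation"},
    "Scene Setup": {"rendering"},
}

def tools_for_preset(preset: str) -> list[str]:
    """Return the tool names a preset would expose to the AI client."""
    categories = PRESETS[preset]
    return sorted(name for name, cat in TOOL_REGISTRY.items()
                  if cat in categories)

print(tools_for_preset("Modeling"))
# ['add_modifier', 'create_mesh', 'extrude_faces']
```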

14 Context Resources

Tools let the AI do things. Context resources let the AI understand things. The Blender MCP Server provides 14 context resources:

  • Scene overview — all objects, their types, locations, visibility states
  • Object details — mesh data, modifier stacks, material assignments
  • Material library — all materials in the file, their node setups, parameters
  • Collection hierarchy — organizational structure of the scene
  • Armature data — bone hierarchies, constraints, weight groups
  • Render settings — current resolution, samples, engine configuration
  • Active selection — what's currently selected in the viewport
  • Timeline state — current frame, frame range, playback settings
  • Modifier stacks — per-object modifier lists with parameter values
  • UV map data — UV layer information per mesh
  • Shape key data — blend shape configurations
  • Constraint data — object and bone constraint setups
  • World settings — environment, background, ambient lighting configuration
  • Available add-ons — installed and enabled add-on list

These resources let the AI answer questions about your scene before making changes. "What material is the character using?" or "How many bones does the armature have?" — the AI can look this up directly instead of guessing.
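A resource read returns structured data the AI can reason over before acting. Assuming the scene overview resource returns JSON shaped roughly like the payload below (the actual schema is the server's, not documented here), answering "how many mesh objects are in the scene?" is a simple lookup:

```python
import json

# Hypothetical payload from the scene-overview context resource.
scene_overview = json.loads("""
{
  "objects": [
    {"name": "Barrel",    "type": "MESH",   "visible": true},
    {"name": "Hero_Body", "type": "MESH",   "visible": true},
    {"name": "Key_Light", "type": "LIGHT",  "visible": true},
    {"name": "Camera_01", "type": "CAMERA", "visible": true}
  ]
}
""")

mesh_count = sum(1 for obj in scene_overview["objects"]
                 if obj["type"] == "MESH")
print(mesh_count)  # 2
```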

Workflow 1: Modeling a Prop

Let's start with something practical — modeling a simple game-ready barrel from scratch using AI prompts.

Starting the Model

You tell the AI:

"Create a cylinder with 24 segments, 2m tall, 0.5m radius. Name it Barrel."

The AI calls the mesh creation tool with those parameters. A cylinder appears in your scene. So far, this is faster than navigating Add > Mesh > Cylinder and then adjusting the parameters in the operator panel, but not dramatically so.

The real value shows up in the next steps.

Adding Detail

"Add edge loops at 0.3m, 0.7m, 1.3m, and 1.7m from the bottom. Then select the top and bottom faces and inset them by 0.05m. Scale the inset faces down slightly on Z to create a concave top and bottom."

Manually, this is: Tab into edit mode, Ctrl+R for loop cut, click to place, repeat three more times, switch to face select, select the top face, I to inset, type 0.05, repeat for the bottom, S then Z to scale. It's not hard, but it's a sequence of 15+ individual operations.

The AI handles the entire sequence in one request. You describe the end result, not the button presses.

Adding the Metal Bands

"Select the edge loops at 0.3m and 0.7m from the bottom. Extrude them outward by 0.02m to create raised bands. Do the same for the loops at 1.3m and 1.7m."

This creates the characteristic metal bands around the barrel. The AI handles the selection and extrusion in edit mode.

Applying a Modifier Stack

"Add a Subdivision Surface modifier at level 2 render, 1 viewport. Then add a Bevel modifier with 0.005m width and 3 segments, set to weight mode. Apply smooth shading."

Two modifiers configured and smooth shading applied in one prompt. The barrel now looks like a proper game asset instead of a faceted cylinder.

UV Unwrapping

"Mark seams along one vertical edge and around the top and bottom rims. Smart UV project the mesh with an island margin of 0.02."

UV unwrapping is one of those tasks that's straightforward in concept but fiddly in practice. Having the AI handle seam marking and projection saves time and avoids the common mistake of forgetting seams.

The Result

From empty scene to UV-unwrapped, subdivision-ready barrel in about five prompts. Each prompt took 5-10 seconds to type. The entire process took under three minutes.

Could you model this barrel faster by hand if you know all the shortcuts? Maybe, if you're very experienced. But the AI approach has two advantages: you don't need to remember any shortcuts, and you can describe what you want in terms of the result rather than the process.

When to Take Over Manually

The AI-modeled barrel is a solid starting point, but for hero assets, you'll want to refine manually. Sculpting fine details, adjusting edge flow for deformation, and making subjective decisions about proportions are all better done by hand. Use AI for the structural work, then switch to manual sculpting and tweaking for the final 20%.

Workflow 2: Material Setup

Materials are where AI assistance really pays off, because material setup involves a lot of parameter configuration — exactly the kind of repetitive work AI handles well.

Creating a PBR Material

"Create a new material called M_WornMetal. Set it up as a PBR metallic material with base color RGB (0.6, 0.55, 0.5), metallic 0.9, roughness 0.4."

The AI creates the material, adds a Principled BSDF node (or configures the existing one), and sets the parameters. One prompt replaces creating a material, opening the shader editor, and clicking through node properties.

Adding Texture Maps

"Add an image texture node for the base color. Load the file 'worn_metal_basecolor.png' from the textures folder. Connect it to the Base Color input. Add another image texture for the roughness map — 'worn_metal_roughness.png' — and set it to Non-Color. Connect it to the Roughness input."

Texture hookup is mechanical work. You know which maps go where. The AI handles the node creation, file loading, color space setting, and connection.

Normal Maps

"Add a normal map setup. Load 'worn_metal_normal.png' as Non-Color, run it through a Normal Map node, and connect to the Normal input. Set the normal map strength to 0.8."

Normal map setup always requires that intermediate Normal Map node with the color space set correctly. It's the kind of thing you set up correctly 95% of the time and spend 20 minutes debugging the other 5%. Having the AI handle it eliminates that failure mode.

Creating Material Variations

This is where AI assistance becomes genuinely faster than manual work.

"Duplicate the M_WornMetal material. Create three variations: M_WornMetal_Rusty with base color shifted toward orange (0.7, 0.4, 0.2) and roughness 0.7. M_WornMetal_Polished with roughness 0.15 and metallic 1.0. M_WornMetal_Painted with base color (0.2, 0.3, 0.5), metallic 0.0, roughness 0.6."

Three material variations in one prompt. Manually, each variation requires: duplicate material, rename, open shader editor, find the right node, change values, repeat. For three variations with three parameter changes each, that's 18 individual clicks minimum.

Assigning Materials to Objects

"Assign M_WornMetal to the barrel body. Assign M_WornMetal_Rusty to the barrel bands. Assign M_WornMetal_Painted to the barrel lid."

Material assignment across multiple objects in one prompt. The AI handles selection, material slot creation, and assignment.

Batch Material Operations

When you have dozens of objects that all need similar material setups, AI shines:

"For every object in the 'Props' collection, create a material with the same name as the object prefixed with 'M_'. Set all materials to metallic 0.0, roughness 0.5, base color (0.8, 0.8, 0.8)."

This kind of batch operation would normally require a Python script. The AI effectively writes and executes that script for you, but you never have to touch Python.

Workflow 3: Rigging Basics

Rigging is one of Blender's most complex areas. The AI won't replace a technical artist for production rigs, but it handles the mechanical parts — bone creation, hierarchy setup, basic constraints — so you can focus on weight painting and deformation quality.

Creating an Armature

"Create an armature for the character mesh 'Hero_Body'. Start with a root bone at the origin, 0.2m long, pointing up. Name it 'Root'."

The AI creates the armature object and the first bone. Simple, but it saves navigating to Add > Armature > Single Bone and then renaming in the properties panel.

Building the Bone Chain

"In edit mode, extrude from Root to create a spine chain: Spine_01 (0.3m), Spine_02 (0.3m), Spine_03 (0.25m), Neck (0.15m), Head (0.2m). All pointing up along Z."

Building a bone chain manually means: select the tip, extrude, position, rename, repeat. For five bones, that's about 15 operations. The AI does it in one prompt.
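Behind that prompt, each extrusion simply places the next bone's head at the previous bone's tail. With every bone pointing straight up Z, the chain geometry reduces to a cumulative sum of lengths. A sketch of the math (not the server's implementation):

```python
# Bone lengths from the prompt, in order, all along +Z.
chain = [
    ("Root", 0.2),
    ("Spine_01", 0.3),
    ("Spine_02", 0.3),
    ("Spine_03", 0.25),
    ("Neck", 0.15),
    ("Head", 0.2),
]

def bone_positions(chain):
    """Return (name, head_z, tail_z) for each bone in a straight +Z chain."""
    z = 0.0
    positions = []
    for name, length in chain:
        positions.append((name, z, z + length))  # head at previous tail
        z += length
    return positions

for name, head, tail in bone_positions(chain):
    print(f"{name}: head z={head:.2f}, tail z={tail:.2f}")
# The Head bone's tail ends up at z = 1.40
```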

Adding Limbs

"From Spine_03, create a left shoulder chain: Clavicle_L (0.15m, pointing left along X), UpperArm_L (0.3m, pointing left-down at 10 degrees), LowerArm_L (0.28m, same direction), Hand_L (0.1m). Mirror all these bones to create the right side."

This is where the time savings become significant. Building symmetrical limb chains manually requires careful positioning on one side, then either mirroring or rebuilding on the other. The mirror step alone saves several minutes.

Leg Chains

"From Root, create left leg chain: UpperLeg_L (0.4m, pointing down along -Z), LowerLeg_L (0.38m, pointing down and slightly forward), Foot_L (0.15m, pointing forward along Y), Toe_L (0.08m, same direction). Mirror to create right side."

Same pattern as the arms. The AI handles bone creation, naming, positioning, and mirroring.

Basic IK Setup

"Add an IK constraint to LowerArm_L targeting a new empty called IK_Hand_L. Chain length 2. Do the same for LowerArm_R. Add IK constraints to LowerLeg_L and LowerLeg_R with chain length 2, targeting new empties IK_Foot_L and IK_Foot_R."

IK setup is a common source of confusion for newer riggers. Which bone gets the constraint? What chain length? Where do the targets go? The AI handles the mechanical setup. You can then adjust pole targets and chain lengths based on how the deformation looks.

Parenting the Mesh

"Parent Hero_Body to the armature with automatic weights."

One prompt for the operation that connects your mesh to the rig. The AI handles the selection order (mesh first, then armature) and the parenting mode.

The Limitation

The AI builds the skeleton and basic constraints. What it can't do well is weight painting — the process of defining how each bone influences each vertex. Weight painting requires visual judgment (does the elbow deform cleanly?), interactive testing (does the shoulder collapse at extreme poses?), and artistic decisions (how much should the chest stretch when the arm raises?).

Build the rig structure with AI. Weight paint manually. This split plays to each approach's strengths.

Workflow 4: Scene Lighting and Rendering

Lighting setup involves creating lights, positioning them, adjusting intensity and color, configuring shadows, and iterating until the mood is right. The creation and configuration parts are perfect for AI. The artistic iteration is still yours.

Three-Point Lighting Setup

"Set up three-point lighting for the character at the origin. Key light: area light, 1000W, warm white (4500K), positioned 3m away at 45 degrees left and 30 degrees above. Fill light: area light, 300W, cool white (6500K), 3m away at 30 degrees right and 15 degrees above. Rim light: area light, 500W, neutral white (5500K), 3m behind and 45 degrees above."

A classic three-point setup in one prompt. Manually, this requires creating three lights, positioning each one (which means typing coordinates or using the 3D cursor), setting intensity and color for each, and adjusting size. It's straightforward work, but there are a lot of parameters to set.
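The "distance plus two angles" placement in that prompt converts to Cartesian coordinates with basic spherical trigonometry. A sketch, assuming the subject at the origin with Y toward the camera-facing side and Z up (the server's axis convention may differ):

```python
import math

def light_position(distance, azimuth_deg, elevation_deg):
    """Place a light `distance` meters from the origin.

    azimuth_deg: horizontal angle (0 = straight ahead, positive = to the left)
    elevation_deg: vertical angle above the horizon
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = -distance * math.cos(el) * math.sin(az)  # positive azimuth -> -X (left)
    y = -distance * math.cos(el) * math.cos(az)  # in front of the subject
    z = distance * math.sin(el)
    return (x, y, z)

key = light_position(3.0, 45, 30)    # key light: 45 deg left, 30 deg up
fill = light_position(3.0, -30, 15)  # fill light: 30 deg right, 15 deg up
print(key)
```

Whatever the convention, the invariant holds: the light stays exactly `distance` meters from the subject, and its height is `distance * sin(elevation)`.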

HDRI Environment

"Set the world background to an HDRI environment map. Load 'studio_small_08_4k.exr' from the HDRIs folder. Set the strength to 0.8. Rotate the environment 45 degrees on Z."

HDRI setup in Blender requires opening the shader editor in world mode, adding an Environment Texture node, loading the image, connecting it through a mapping node for rotation, and adjusting strength. The AI compresses all of that into one prompt.

Render Configuration

"Set up Cycles rendering at 1920x1080. Set samples to 256 with denoising enabled. Use GPU compute. Set the output format to PNG with 16-bit color depth. Enable transparent background."

Render settings are scattered across multiple panels in Blender's properties editor. Having the AI configure them saves time and ensures you don't forget a setting (like forgetting to enable GPU compute and waiting 10x longer than necessary).

Camera Setup

"Create a camera 5m from the origin, pointing at the character. Set focal length to 85mm for portrait framing. Set the aperture to f/2.8 with a focus distance on the character's face. Enable depth of field."

Camera configuration with depth of field is another multi-step operation that the AI handles cleanly. The focal length, aperture, and focus distance all interact, and having them set in one prompt ensures consistency.
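Those numbers interact predictably. Using the standard thin-lens hyperfocal-distance formulas, with a full-frame circle of confusion of roughly 0.03 mm assumed, you can estimate how much of the scene stays sharp at 85 mm, f/2.8, focused at 5 m:

```python
def dof_limits(focal_mm, f_stop, focus_m, coc_mm=0.03):
    """Near/far limits of acceptable sharpness (thin-lens approximation)."""
    f = focal_mm / 1000.0   # focal length in meters
    coc = coc_mm / 1000.0   # circle of confusion in meters
    hyperfocal = f * f / (f_stop * coc) + f
    near = hyperfocal * focus_m / (hyperfocal + (focus_m - f))
    far = hyperfocal * focus_m / (hyperfocal - (focus_m - f))
    return near, far

near, far = dof_limits(85, 2.8, 5.0)
print(f"sharp from {near:.2f} m to {far:.2f} m")
```

At these settings only a bit over half a meter of depth is sharp, which is why getting the focus distance on the face matters so much.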

Lighting Iteration

After the initial setup, you'll want to iterate. This is where context resources help — the AI can query the current lighting state before making changes:

"The key light is too harsh. Reduce it to 700W and increase the size to 2m for softer shadows. Also, add a subtle blue fill from below — area light, 100W, color temperature 8000K, positioned 2m below the character pointing up."

Because the AI has context awareness through the 14 resources, it knows which light is the "key light" from the previous setup. It doesn't need you to specify the object name.

Workflow 5: Batch Operations

Batch operations are arguably where the Blender MCP Server delivers the most raw time savings. Any operation that needs to be repeated across many objects is a candidate.

Batch Renaming

"Rename all objects in the 'Environment' collection to follow the convention 'ENV_[original name]_[type]' where type is MESH, LIGHT, CAMERA, or EMPTY based on the object type. Convert spaces to underscores."

Blender has a built-in batch rename tool, but it's limited in logic. This kind of conditional renaming based on object type would normally require a Python script. The AI handles the logic.
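The logic the AI effectively generates is a few lines of string handling. A pure-Python sketch of the convention (in Blender the names and types would come from the scene's objects; here they are a plain list for illustration):

```python
def env_name(original: str, obj_type: str) -> str:
    """Apply the 'ENV_[name]_[TYPE]' convention, converting spaces to underscores."""
    return f"ENV_{original.replace(' ', '_')}_{obj_type}"

# Illustrative (name, type) pairs standing in for scene objects.
objects = [
    ("old crate", "MESH"),
    ("sun", "LIGHT"),
    ("shot cam", "CAMERA"),
]

renamed = [env_name(name, obj_type) for name, obj_type in objects]
print(renamed)
# ['ENV_old_crate_MESH', 'ENV_sun_LIGHT', 'ENV_shot_cam_CAMERA']
```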

Batch Modifier Application

"For every mesh in the 'HighPoly' collection: apply all modifiers. Then add a Decimate modifier set to ratio 0.25. Apply it. Export each mesh as a separate FBX to the 'exports/lowpoly/' folder, named after the object."

This chain — apply modifiers, decimate, export — performed across an entire collection would take significant time manually. For a collection of 30 objects, you're looking at 90+ individual operations done by hand.

Batch Material Assignment

"Every object in the scene whose name starts with 'Wall_' should use the material 'M_BrickWall'. Every object starting with 'Floor_' should use 'M_ConcreteFloor'. Every object starting with 'Trim_' should use 'M_WoodTrim'."

Pattern-based material assignment. The AI iterates through scene objects, matches names, and assigns materials. For a large architectural scene with hundreds of named objects, this saves substantial time.
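The matching itself reduces to a prefix lookup. A sketch of the rule table (object names are illustrative; in Blender the actual assignment would go through the material tools):

```python
PREFIX_TO_MATERIAL = {
    "Wall_": "M_BrickWall",
    "Floor_": "M_ConcreteFloor",
    "Trim_": "M_WoodTrim",
}

def material_for(name: str):
    """Return the material an object should get, or None if no rule matches."""
    for prefix, material in PREFIX_TO_MATERIAL.items():
        if name.startswith(prefix):
            return material
    return None

print(material_for("Wall_North"))  # M_BrickWall
print(material_for("Door_Main"))   # None
```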

Batch Export for Game Engines

"Export every collection as a separate FBX file. Use the collection name as the filename. Set scale to 0.01 for Unreal Engine compatibility. Include only mesh objects. Apply modifiers on export. Save to the 'game_export' folder."

Game-ready export with engine-specific settings. The collection-based export structure means each game asset group is its own file, ready for import into Unreal Engine or Unity.

Batch Transform Operations

"For all objects in the 'Scatter_Rocks' collection: randomize rotation on all axes between 0 and 360 degrees. Randomize scale uniformly between 0.8 and 1.3. Apply transforms."

Quick scatter randomization. Useful for breaking up repetitive placement of props and environment objects.
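The randomization is plain uniform sampling per object. A sketch with a fixed seed for reproducibility (in Blender, each pair would be written to an object's rotation and scale before applying transforms):

```python
import random

def scatter_transforms(count: int, seed: int = 42):
    """Generate (rotation_degrees_xyz, uniform_scale) pairs for scattering."""
    rng = random.Random(seed)  # seeded so results are reproducible
    transforms = []
    for _ in range(count):
        rotation = tuple(rng.uniform(0, 360) for _ in range(3))
        scale = rng.uniform(0.8, 1.3)  # same factor on all axes
        transforms.append((rotation, scale))
    return transforms

for rotation, scale in scatter_transforms(3):
    print(rotation, scale)
```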

When AI Helps vs. When to Work Manually

After months of using AI-assisted Blender workflows, we've developed a clear sense of where AI adds value and where it doesn't.

AI Excels At

Structured creation — anything where you know the parameters and just need them applied. Creating objects with specific dimensions, setting up materials with known values, configuring render settings.

Repetitive operations — batch rename, batch export, applying the same operation to many objects. Anything you'd write a Python script for, but don't want to write a Python script for.

Setup and scaffolding — creating the skeleton of a rig, setting up a lighting arrangement, building a modifier stack. The initial structural work that follows established patterns.

Configuration — render settings, world properties, physics parameters. Any operation that involves setting values in property panels.

Learning acceleration — if you're new to Blender, AI can execute operations you understand conceptually but don't know the interface path for. "How do I do X?" becomes "Do X."

Manual Work Is Better For

Sculpting and organic modeling — the push-and-pull of sculpting requires real-time visual feedback. You need to see how the surface changes as you drag. AI can't provide this interactive loop.

Weight painting — how a mesh deforms depends on visual judgment. No text description can capture "the elbow skin should stretch a bit more on the outside."

Texture painting — hand-painting textures directly on a 3D model is inherently visual and interactive.

Animation polish — blocking out keyframes with AI works. Polishing animation curves, adjusting timing, adding secondary motion — these require playing the animation back and making subtle adjustments by feel.

Composition and framing — "does this camera angle look good?" is a judgment call. The AI can position a camera precisely, but you decide if the composition works.

Topology cleanup — fixing n-gons, redirecting edge flow for deformation, and retopology require understanding the 3D form and predicting how it will deform. AI can run automated cleanup operations, but manual retopology produces better results for organic models.

The Best Approach: Hybrid

The most productive workflow combines both. Use AI for the structural, mechanical, and repetitive parts. Switch to manual for the artistic, visual, and judgment-dependent parts.

A typical session might look like:

  1. AI creates the base mesh with specified dimensions
  2. Manual sculpting to refine the shape
  3. AI applies modifiers and UV unwraps
  4. Manual texture painting
  5. AI sets up the rig skeleton and constraints
  6. Manual weight painting and testing
  7. AI configures materials, lighting, and render settings
  8. Manual composition and final adjustments

Each step plays to the strengths of the approach being used.

Getting Started

Install the Blender MCP Server, connect your AI client, and start with a simple task. Material setup (Workflow 2) is the easiest to verify — you can see immediately if the material parameters are correct.

The documentation covers installation, client configuration, tool preset selection, and troubleshooting.

If you're also working in Unreal Engine, the Blender MCP Server pairs well with the Unreal MCP Server. Model and texture in Blender with AI assistance, then export and build your scene in Unreal with AI assistance. The Complete Toolkit bundle includes both.

AI-assisted 3D work isn't about replacing artists. It's about removing the interface overhead so you can spend more time on creative decisions and less time clicking through menus.

