May 28, 2025
3D generative models have advanced rapidly, yet they still lack the resolution and compositional flexibility of production-grade assets. Most generators output a single, monolithic mesh, making the result difficult to animate or integrate into a production pipeline. The challenge is even greater for image-to-3D methods, because concept art often features unusual poses, ambiguities, occlusions, and hidden parts that must be inferred. Humans solve this problem intuitively: we identify individual parts first and then assemble them into a coherent whole. Until now, most 3D generative systems have taken the opposite approach. With this release, we introduce a more human-like workflow: automatically creating a part kit from a single image and then generating the assembled mesh. Users can then take it even further by interactively editing the parts with our Chat-to-3D feature.
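To make the part-kit workflow concrete, here is a minimal sketch of its shape: segment the image into named parts, generate geometry per part, and assemble the results into one editable asset. All names and data structures below (`Part`, `segment_parts`, `generate_part_mesh`, `assemble`) are illustrative stand-ins, not our actual API.

```python
# Hypothetical sketch of a part-kit pipeline; names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    mesh: list = field(default_factory=list)   # placeholder for geometry
    transform: tuple = (0.0, 0.0, 0.0)         # placement in the assembly

def segment_parts(image_path):
    """Stub: identify candidate parts in the input image,
    including hidden parts inferred from prior knowledge."""
    return ["body", "wheel_left", "wheel_right"]

def generate_part_mesh(part_name):
    """Stub: generate geometry for a single part."""
    return Part(name=part_name)

def assemble(parts):
    """Combine per-part meshes into one coherent asset, keeping
    each part individually addressable for later editing."""
    return {p.name: p for p in parts}

kit = [generate_part_mesh(n) for n in segment_parts("concept_art.png")]
asset = assemble(kit)
print(sorted(asset))  # each part remains editable after assembly
```

Because the asset is a kit of named parts rather than one fused mesh, downstream tools (rigging, interactive editing) can target a single part without disturbing the rest.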
Leveraging Inference-Time Computation for Better Mesh Resolution
One of the biggest advances in LLMs has been leveraging inference-time computation to solve harder problems. Our key insight is to apply the same idea to 3D. When a person looks at an image, they run mental simulations: segmenting, reasoning about occlusions, and inferring hidden elements from prior knowledge. Our method mirrors this agentic process: we unroll multiple trajectories to identify parts, handle occlusions, verify spatial consistency, and ultimately output the set of parts that best matches the input image. This recursive reasoning yields unprecedented mesh quality, because the system can zoom in and out to capture even the tiniest details. In practice, we truncate the internal trace length; in principle this cap can scale with the inference-time compute budget, but for now we keep it fixed, since generating highly detailed 3D assets or worlds can already require several minutes of GPU computation.
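The trajectory search described above can be pictured as a best-of-N loop with a truncated trace length: unroll several candidate trajectories, score each against the input image, and keep the best. Everything below (`rollout`, `refine_step`, the scoring) is a hypothetical toy illustrating the idea, not our actual implementation.

```python
# Toy best-of-N illustration of inference-time compute; not the real system.
import random

def refine_step(score, rng):
    """One reasoning step in a trajectory (toy stand-in for
    segmenting, occlusion handling, or consistency checks)."""
    return score + rng.uniform(-0.05, 0.1)

def rollout(seed, max_trace_len):
    """Run one trajectory, truncated at max_trace_len steps,
    and return its final image-consistency score (toy version)."""
    rng = random.Random(seed)
    score = 0.0
    for _ in range(max_trace_len):
        score = refine_step(score, rng)
    return score

def best_of_n(n, max_trace_len=8):
    """Spend more inference-time compute (more trajectories n,
    longer traces) to find a higher-scoring part decomposition."""
    return max(rollout(seed, max_trace_len) for seed in range(n))

# A larger budget can only improve the best score found:
assert best_of_n(16, 8) >= best_of_n(4, 8)
```

The key property is monotonicity: because the candidates under a larger `n` are a superset of those under a smaller `n`, extra compute can only improve (never worsen) the best trajectory selected, which is why a fixed trace length amounts to a fixed compute budget.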
Over the coming days, we will release an API as well as more advanced features to speed up this workflow. This release is available now to all our Maker+ level users at https://3d.csm.ai/.