In a paper published in the Proceedings of the SIGGRAPH Asia 2025 Conference Papers, a research team at UNIST reports a new AI method that animates 3D characters to mimic the movement shown in a single 2D image, all while preserving natural proportions and avoiding distortion. The development could lower the barriers to creating 3D content for the metaverse, animation, and gaming industries.
Led by Professor Kyungdon Joo from the Graduate School of Artificial Intelligence at UNIST, the team created DeformSplat, an AI framework that reposes 3D characters reconstructed with Gaussian-based modeling while maintaining their shape and realism from any viewing angle.
3D Gaussian Splatting is a technique that reconstructs 3D objects from 2D images and renders them realistically from new viewpoints. However, animating these models, as in cartoons or video games, has traditionally required multiple images or video footage captured from different angles. Without such data, models often deform unnaturally, with limbs bending oddly or stretching in unrealistic ways.
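For context, 3D Gaussian Splatting represents an object as thousands of soft 3D "blobs," each with a position, shape, color, and opacity, which are blended to form an image. The toy sketch below (illustrative only; the function name and the single-primitive evaluation are assumptions, not the paper's renderer) shows the basic primitive involved:

```python
import numpy as np

def gaussian_weight(x, mean, cov):
    """Unnormalized density of one 3D Gaussian primitive at point x.
    A splatting renderer blends many such primitives (each also
    carrying a color and an opacity) to produce the final image."""
    d = x - mean
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d))

mean = np.zeros(3)
cov = np.eye(3)
center_w = gaussian_weight(mean, mean, cov)               # 1.0 at the center
offset_w = gaussian_weight(np.array([1.0, 0.0, 0.0]), mean, cov)  # fades with distance
```

Because the model is just a cloud of such primitives with no skeleton attached, naively moving them is what produces the unnatural bending and stretching described above.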
DeformSplat enables the animation of a 3D character from just a single photograph. In tests, the animated characters maintained their proportions and shape from various perspectives, whether viewed from the side, back, or front, without noticeable distortion. For example, given an input pose in which the character raises an arm, the AI reproduces that movement accurately regardless of the viewing angle.
This is made possible by two main technical innovations. First, Gaussian-to-Pixel Matching links the 3D Gaussian points of the model with 2D pixels in the photo, allowing the system to transfer pose information directly onto the 3D model. Second, Rigid Part Segmentation automatically identifies and groups rigid regions, such as limbs or the torso, ensuring that these parts move as units without bending or stretching unnaturally during animation.
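The two ideas can be sketched in simplified form. In the NumPy sketch below, the function names, the pinhole camera model, and the hard-coded example values are all assumptions for illustration; the paper's actual matching and segmentation are far more sophisticated:

```python
import numpy as np

def project_gaussians(centers, K):
    """Gaussian-to-pixel matching, simplified: project each 3D Gaussian
    center onto the image plane with pinhole intrinsics K, so 2D pose
    cues in the photo can be associated with specific 3D points."""
    cam = centers / centers[:, 2:3]   # perspective divide (assumes z > 0)
    return (K @ cam.T).T[:, :2]       # pixel coordinates (u, v)

def move_rigid_part(points, R, t):
    """Rigid part motion, simplified: every Gaussian in one segmented
    part shares a single rotation R and translation t, so the part can
    pivot at a joint but never stretch or shear internally."""
    return points @ R.T + t

# Example: two Gaussians on an "arm", raised by a 90-degree rotation.
arm = np.array([[1.0, 0.0, 2.0], [2.0, 0.0, 2.0]])
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0, 0.0, 1.0]])
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])

pixels = project_gaussians(arm, K)            # where each Gaussian lands in 2D
raised = move_rigid_part(arm, Rz, np.zeros(3))
```

The key property is visible in the last step: distances between Gaussians within the same rigid part are preserved under the shared transform, which is exactly the behavior that prevents limbs from stretching.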
Together, these techniques enable realistic and distortion-free movement of 3D characters based on just a single photograph.
Professor Joo explained, "Previous methods struggled to animate 3D objects from a single image without distortions. Our approach considers the structural properties of objects, distinguishing rigid regions and generating realistic movements."
He added, "This work represents a significant step toward making 3D content creation easier, faster, and more affordable, especially for industries like gaming and animation."
More information: Jinhyeok Kim et al, Rigidity-Aware 3D Gaussian Deformation from a Single Image, Proceedings of the SIGGRAPH Asia 2025 Conference Papers (2025). DOI: 10.1145/3757377.3763937
Citation: An AI approach for single-image-based 3D character animation with preserved proportions (2026, January 8) retrieved 9 January 2026 from https://techxplore.com/news/2026-01-ai-approach-image-based-3d.html