Recent advances in 3D Gaussian Splatting (3DGS) have improved the visual fidelity of dynamic avatar reconstruction. However, existing methods often overlook the inherent chromatic similarity of human skin tones and therefore fail to capture intricate facial geometry under subtle appearance changes. This stems from the affine approximation of Gaussian projection, which is not perspective-aware and cannot represent depth-induced shear effects. To address this, we propose True-to-Geometry Avatar Dynamic Reconstruction (TGA), a perspective-aware 4D Gaussian avatar framework that sensitively captures fine-grained facial variations for accurate 3D geometry reconstruction. Specifically, to enable color-sensitive and geometry-consistent Gaussian representations under dynamic conditions, we introduce a Perspective-Aware Gaussian Transformation that jointly models temporal deformation and spatial projection by integrating Jacobian-guided adaptive deformation into the homogeneous formulation. Furthermore, we develop Incremental BVH Tree Pivoting to enable fast frame-by-frame mesh extraction for 4D Gaussian representations. A dynamic Gaussian Bounding Volume Hierarchy (BVH) tree models the topological relationships among points; active points are isolated by BVH pivoting and subsequently re-triangulated for surface reconstruction. Extensive experiments demonstrate that TGA achieves superior geometric accuracy.
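For background on the limitation mentioned above: standard 3DGS (following EWA splatting) computes the camera-space covariance of each Gaussian as $\Sigma' = J W \Sigma W^{\top} J^{\top}$, where $W$ is the world-to-camera (viewing) transformation and $J$ is the Jacobian of the affine approximation of the projective transformation evaluated at the Gaussian mean. Because $J$ is a local linearization at a single depth, it cannot express how perspective shears a Gaussian across its depth extent, which is the gap the perspective-aware homogeneous formulation targets.

To give a concrete flavor of frame-to-frame filtering with a BVH, the sketch below builds an axis-aligned BVH over Gaussian centers, prunes subtrees whose bounds are unchanged, collects "active" points that moved beyond a tolerance, and re-triangulates only those. This is a minimal illustration under our own assumptions, not the TGA implementation: the node layout, the activity test, the tolerance value, and the use of Delaunay as a stand-in re-triangulator are all hypothetical choices made for this example.

```python
# Minimal sketch of BVH-based filtering of "active" Gaussians between frames.
# Illustrative only, NOT the TGA implementation: node layout, activity test,
# and the Delaunay re-triangulation placeholder are assumptions of this sketch.
import numpy as np
from scipy.spatial import Delaunay  # stand-in for local surface re-triangulation


class BVHNode:
    def __init__(self, indices, lo, hi, left=None, right=None):
        self.indices = indices      # point indices covered by this node
        self.lo, self.hi = lo, hi   # axis-aligned bounding box (AABB)
        self.left, self.right = left, right


def build_bvh(points, indices=None, leaf_size=32):
    """Recursively split points at the median along the longest AABB axis."""
    if indices is None:
        indices = np.arange(len(points))
    lo = points[indices].min(axis=0)
    hi = points[indices].max(axis=0)
    if len(indices) <= leaf_size:
        return BVHNode(indices, lo, hi)
    axis = int(np.argmax(hi - lo))
    order = indices[np.argsort(points[indices, axis])]
    mid = len(order) // 2
    return BVHNode(indices, lo, hi,
                   build_bvh(points, order[:mid], leaf_size),
                   build_bvh(points, order[mid:], leaf_size))


def active_indices(node, prev_pts, curr_pts, tol=1e-3):
    """Collect indices of points that moved more than `tol` this frame.

    Subtrees whose AABBs are unchanged are pruned, so static regions of the
    avatar are skipped without visiting their individual Gaussians.
    """
    lo_new = curr_pts[node.indices].min(axis=0)
    hi_new = curr_pts[node.indices].max(axis=0)
    if np.all(np.abs(lo_new - node.lo) < tol) and np.all(np.abs(hi_new - node.hi) < tol):
        return np.empty(0, dtype=int)           # subtree is static -> prune
    if node.left is None:                        # leaf: test points directly
        moved = np.linalg.norm(curr_pts[node.indices] - prev_pts[node.indices],
                               axis=1) > tol
        return node.indices[moved]
    return np.concatenate([active_indices(node.left, prev_pts, curr_pts, tol),
                           active_indices(node.right, prev_pts, curr_pts, tol)])


# Usage: keep the BVH built on the previous frame, isolate the moved points,
# and re-triangulate only that subset (Delaunay is a placeholder here).
prev_centers = np.random.rand(5000, 3)                          # Gaussian means, frame t-1
curr_centers = prev_centers + 0.01 * np.random.randn(5000, 3)   # Gaussian means, frame t
tree = build_bvh(prev_centers)
active = active_indices(tree, prev_centers, curr_centers, tol=5e-3)
if len(active) >= 4:                              # 3D Delaunay needs at least 4 points
    local_mesh = Delaunay(curr_centers[active])
    print(f"{len(active)} active Gaussians -> {len(local_mesh.simplices)} tetrahedra")
```

The point of the pruning step is that an unchanged bounding volume lets an entire static subtree be skipped in one test, which is what makes an incremental, per-frame extraction cheaper than re-meshing all Gaussians from scratch.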
A lot of excellent work was introduced around the same time as ours.
GaussianAvatars pioneered rigging Gaussian point clouds to a parametric morphable face model.
Topo4D builds temporally consistent 4D topology and high-fidelity textures for dynamic scene reconstruction across frames.
SurFhead reconstructs geometrically accurate avatars using 2D Gaussian surfels.
We also recommend checking out Gaussian Opacity Fields, which leverages ray-tracing-based volume rendering of 3D Gaussians to directly extract geometry via level-set identification and adaptive Marching Tetrahedra.