
Dance In the Wild: Monocular Human Animation with Neural Dynamic Appearance Synthesis

2023-03-14 22:37:23

Synthesizing the dynamic appearance of humans in motion plays a central role in applications such as AR/VR and video editing. Although many recent methods have been proposed for this problem, handling loose garments with complex textures and highly dynamic motion remains challenging. In this paper, we propose a video-based appearance synthesis method that addresses these challenges and demonstrates high-quality results on in-the-wild videos of a kind not shown before. Specifically, we adapt a StyleGAN-based architecture to the task of person-specific, video-based motion retargeting. We introduce a novel motion feature that is used to modulate the generator weights to capture dynamic appearance changes, and we regularize the single-frame pose estimates to improve temporal consistency. We evaluate our method on a set of challenging videos and show that it achieves state-of-the-art performance both qualitatively and quantitatively.
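
To make the weight-modulation idea above concrete, the following is a minimal PyTorch sketch of a StyleGAN2-style modulated convolution in which the per-layer style vector is replaced by a motion feature. This is an illustrative sketch under assumed shapes, not the authors' implementation; names such as MotionModulatedConv2d and motion_feat_dim are hypothetical.

```python
# Illustrative sketch only: StyleGAN2-style modulation/demodulation where the
# conditioning vector is a (hypothetical) motion feature instead of a latent style.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionModulatedConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, motion_feat_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size))
        # Map the motion feature to one scale per input channel.
        self.to_scale = nn.Linear(motion_feat_dim, in_ch)
        self.padding = kernel_size // 2

    def forward(self, x, motion_feat):
        b, in_ch, h, w = x.shape
        scale = self.to_scale(motion_feat).reshape(b, 1, in_ch, 1, 1)
        w_mod = self.weight.unsqueeze(0) * scale                     # modulate per sample
        demod = torch.rsqrt(w_mod.pow(2).sum(dim=(2, 3, 4)) + 1e-8)  # demodulate
        w_mod = w_mod * demod.reshape(b, -1, 1, 1, 1)
        # Grouped-convolution trick: fold the batch into the channel dimension.
        x = x.reshape(1, b * in_ch, h, w)
        w_mod = w_mod.reshape(-1, in_ch, *self.weight.shape[2:])
        out = F.conv2d(x, w_mod, padding=self.padding, groups=b)
        return out.reshape(b, -1, h, w)

# Example: a 3x3 layer on 64-channel features driven by a 16-D motion feature.
# layer = MotionModulatedConv2d(64, 64, 3, motion_feat_dim=16)
# y = layer(torch.randn(2, 64, 32, 32), torch.randn(2, 16))
```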

Original title: Dance In the Wild: Monocular Human Animation with Neural Dynamic Appearance Synthesis

Original abstract: Synthesizing dynamic appearances of humans in motion plays a central role in applications such as AR/VR and video editing. While many recent methods have been proposed to tackle this problem, handling loose garments with complex textures and high dynamic motion still remains challenging. In this paper, we propose a video based appearance synthesis method that tackles such challenges and demonstrates high quality results for in-the-wild videos that have not been shown before. Specifically, we adopt a StyleGAN based architecture to the task of person specific video based motion retargeting. We introduce a novel motion signature that is used to modulate the generator weights to capture dynamic appearance changes as well as regularizing the single frame based pose estimates to improve temporal coherency. We evaluate our method on a set of challenging videos and show that our approach achieves state-of-the-art performance both qualitatively and quantitatively.
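
As one way to picture the regularization of single-frame pose estimates mentioned in the abstract, the sketch below smooths a sequence of per-frame joint positions by penalizing frame-to-frame acceleration while keeping each refined pose close to the raw estimate. The formulation, the function refine_pose_sequence, and the (T, J, 2) pose layout are assumptions for illustration, not the paper's actual objective.

```python
# Illustrative sketch only: temporal smoothing of per-frame pose estimates with a
# data term plus a second-difference (acceleration) penalty.
import torch

def refine_pose_sequence(raw_poses, n_iters=200, lr=0.05, w_smooth=1.0):
    """raw_poses: float tensor of shape (T, J, 2) with per-frame 2D joint estimates."""
    refined = raw_poses.clone().requires_grad_(True)
    opt = torch.optim.Adam([refined], lr=lr)
    for _ in range(n_iters):
        data_term = (refined - raw_poses).pow(2).mean()
        accel = refined[2:] - 2 * refined[1:-1] + refined[:-2]  # second finite difference
        smooth_term = accel.pow(2).mean()
        loss = data_term + w_smooth * smooth_term
        opt.zero_grad()
        loss.backward()
        opt.step()
    return refined.detach()

# Example: smooth 120 frames of 17 joints.
# smoothed = refine_pose_sequence(torch.randn(120, 17, 2))
```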