FlexiAct: Towards Flexible Action Control
in Heterogeneous Scenarios

1Tsinghua University, 2ARC Lab, Tencent PCG

๐ŸŽฅ Demonstration Video ๐ŸŽฅ

๐Ÿ’ก TL;DR ๐Ÿ’ก

We achieve action control in heterogeneous scenarios
with varying spatial structures or cross-domain subjects.

๐Ÿ’ญ Abstract ๐Ÿ’ญ

Action customization involves generating videos in which the subject performs actions dictated by input control signals. Current methods use pose-guided or global motion customization but are constrained by strict requirements on spatial structure, such as layout, skeleton, and viewpoint consistency, which reduces adaptability across diverse subjects and scenarios. To overcome these limitations, we propose FlexiAct, which transfers actions from a reference video to an arbitrary target image. Unlike existing methods, FlexiAct allows for variations in layout, viewpoint, and skeletal structure between the subject of the reference video and the target image, while maintaining identity consistency. Achieving this requires precise action control, spatial structure adaptation, and consistency preservation. To this end, we introduce RefAdapter, a lightweight image-conditioned adapter that excels in spatial adaptation and consistency preservation, surpassing existing methods in balancing appearance consistency and structural flexibility. Additionally, we observe that the denoising process attends to motion (low frequency) and appearance details (high frequency) to varying degrees at different timesteps. Based on this observation, we propose FAE (Frequency-aware Action Extraction), which, unlike existing methods that rely on separate spatial-temporal architectures, achieves action extraction directly during the denoising process. Experiments demonstrate that our method effectively transfers actions to subjects with diverse layouts, skeletons, and viewpoints.

โš™๏ธ Pipeline Overview โš™๏ธ

Overview of FlexiAct. (1) The upper part illustrates RefAdapter's training, which conditions on arbitrary frames to enable adaptation across diverse spatial structures. (2) The lower part outlines FAE's training and inference, where the attention weights of video tokens to the frequency-aware embedding are dynamically adjusted according to the timestep, facilitating action extraction.
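The timestep-dependent attention adjustment in FAE can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the sigmoid schedule `frequency_aware_weight`, its parameters `tau` and `sharpness`, and the single-embedding attention in `attend` are hypothetical stand-ins, not the paper's actual implementation; the only idea taken from the text is that video tokens attend to a frequency-aware embedding with a weight that favors motion (low frequency) at noisy early timesteps and appearance details (high frequency) at late ones.

```python
import numpy as np

def frequency_aware_weight(t, T=1000, tau=0.5, sharpness=10.0):
    """Hypothetical schedule: attention weight to the frequency-aware
    embedding as a function of denoising timestep t (t = T is noisiest).
    Early (large t) steps emphasize motion (low frequency) -> weight near 1;
    late (small t) steps emphasize appearance (high frequency) -> weight near 0.
    """
    s = t / T
    return 1.0 / (1.0 + np.exp(-sharpness * (s - tau)))  # sigmoid in (0, 1)

def attend(video_tokens, freq_embedding, t, T=1000):
    """Scale the video tokens' attention to the frequency-aware embedding
    by the timestep-dependent weight (toy single-key dot-product attention).
    video_tokens: (N, d), freq_embedding: (d,).
    """
    w = frequency_aware_weight(t, T)
    scores = video_tokens @ freq_embedding      # (N,) similarities
    attn = np.exp(scores - scores.max())        # stable softmax over tokens
    attn /= attn.sum()
    # Residual update, modulated by the timestep-dependent weight.
    return video_tokens + w * np.outer(attn, freq_embedding)
```

At a noisy timestep (t close to T) the update is near full strength, steering tokens toward the action signal; near t = 0 the weight vanishes and appearance details are left untouched.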

Method Overview


๐ŸŽž๏ธ Results ๐ŸŽž๏ธ

๐Ÿ“‹ Quantitative Comparison ๐Ÿ“‹

Quantitative Comparison

BibTeX

If you find our work useful, please cite our paper:

@article{zhang2025flexiact,
  title={FlexiAct: Towards Flexible Action Control in Heterogeneous Scenarios},
  author={Zhang, Shiyi and Zhuang, Junhao and Zhang, Zhaoyang and Shan, Ying and Tang, Yansong},
  journal={arXiv preprint arXiv:2505.03730},
  year={2025}
}