Zhiyuan Li1,2,3 · Chi-Man Pun1 📪 · Chen Fang2 · Jue Wang2 · Xiaodong Cun3 📪
1 University of Macau 2 Dzine.ai 3 GVC Lab, Great Bay University
- [2025.12.15] 🔥 Release paper!
- [2025.12.12] 🔥 Release inference code, config and pretrained weights!
We present PersonaLive, a real-time and streamable diffusion framework capable of generating infinite-length portrait animations on a single 12GB GPU.
# clone this repo
git clone https://github.com/GVCLab/PersonaLive
cd PersonaLive
# Create conda environment
conda create -n personalive python=3.10
conda activate personalive
# Install packages with pip
pip install -r requirements.txt
- Download the pre-trained weights of the base models and other components (sd-image-variations-diffusers and sd-vae-ft-mse). You can run the following command to download them automatically:

  python tools/download_weights.py

- Download the pre-trained weights into the `./pretrained_weights` folder.
Finally, these weights should be organized as follows:
pretrained_weights
├── onnx
│ ├── unet_opt
│ │ ├── unet_opt.onnx
│ │ └── unet_opt.onnx.data
│ └── unet
├── personalive
│ ├── denoising_unet.pth
│ ├── motion_encoder.pth
│ ├── motion_extractor.pth
│ ├── pose_guider.pth
│ ├── reference_unet.pth
│ └── temporal_module.pth
├── sd-vae-ft-mse
│ ├── diffusion_pytorch_model.bin
│ └── config.json
├── sd-image-variations-diffusers
│   ├── image_encoder
│   │   ├── pytorch_model.bin
│   │   └── config.json
│   ├── unet
│   │   ├── diffusion_pytorch_model.bin
│   │   └── config.json
│   └── model_index.json
└── tensorrt
    └── unet_work.engine
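As a quick sanity check before running inference, a small script like the following can report which expected checkpoints are missing. This is our own sketch, not part of the repo; the file list below mirrors the core entries of the tree above (the ONNX/TensorRT artifacts are optional and omitted):

```python
from pathlib import Path

# Core checkpoint files expected under the weights root (relative paths).
EXPECTED = [
    "personalive/denoising_unet.pth",
    "personalive/motion_encoder.pth",
    "personalive/motion_extractor.pth",
    "personalive/pose_guider.pth",
    "personalive/reference_unet.pth",
    "personalive/temporal_module.pth",
    "sd-vae-ft-mse/diffusion_pytorch_model.bin",
    "sd-vae-ft-mse/config.json",
    "sd-image-variations-diffusers/model_index.json",
]

def missing_weights(root="pretrained_weights"):
    """Return the expected files that are absent under `root`."""
    root = Path(root)
    return [p for p in EXPECTED if not (root / p).exists()]

if __name__ == "__main__":
    missing = missing_weights()
    if missing:
        print("Missing weight files:")
        for p in missing:
            print("  -", p)
    else:
        print("All expected weights found.")
```

If anything is listed as missing, re-run the download command or check that the folder names match the tree above exactly.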
python inference_offline.py
# install Node.js 18+
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
nvm install 18
cd webcam
source start.sh
Converting the model to TensorRT can significantly speed up inference (~2x ⚡️). Building the engine may take about 20 minutes depending on your device. Note that TensorRT optimization may introduce slight variations or a small drop in output quality.
python torch2trt.py
python inference_online.py
Then open http://0.0.0.0:7860 in your browser. (If http://0.0.0.0:7860 does not work, try http://localhost:7860 instead.)
If you find PersonaLive useful for your research, please 🌟 this repo and cite our work using the following BibTeX:
@article{li2025personalive,
title={PersonaLive! Expressive Portrait Image Animation for Live Streaming},
author={Li, Zhiyuan and Pun, Chi-Man and Fang, Chen and Wang, Jue and Cun, Xiaodong},
journal={arXiv preprint arXiv:2512.11253},
year={2025}
}

This code is mainly built upon Moore-AnimateAnyone, X-NeMo, StreamDiffusion, RAIN and LivePortrait. Thanks for their invaluable contributions.


