PersonaLive

Expressive Portrait Image Animation for Live Streaming

¹ University of Macau    ² Dzine.ai    ³ GVC Lab, Great Bay University


📣 Updates

  • [2025.12.15] 🔥 Release paper!
  • [2025.12.12] 🔥 Release inference code, config and pretrained weights!

⚙️ Framework


We present PersonaLive, a real-time and streamable diffusion framework capable of generating infinite-length portrait animations on a single 12GB GPU.
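
The streaming mechanism is not detailed here, but generating "infinite-length" video on a fixed memory budget generally means denoising short chunks autoregressively and carrying temporal state from one chunk to the next. Below is a purely illustrative sketch of that pattern; denoise_chunk, stream_animation, and every other name are hypothetical stand-ins, not the repo's API:

from itertools import count, islice

# Illustrative pattern only -- NOT the PersonaLive API.
def denoise_chunk(reference, motion_chunk, state):
    # Stand-in for the real diffusion step: one output frame per motion input.
    frames = [(reference, m) for m in motion_chunk]
    return frames, motion_chunk[-1]  # temporal state carried to the next chunk

def stream_animation(reference, motions, chunk_size=8):
    state, chunk = None, []
    for m in motions:
        chunk.append(m)
        if len(chunk) == chunk_size:
            frames, state = denoise_chunk(reference, chunk, state)
            yield from frames  # frames stream out as soon as they are ready
            chunk = []

# Consume the first 24 frames of a conceptually endless stream:
print(len(list(islice(stream_animation("ref.png", count()), 24))))

Only one chunk and a small carried-over state are alive at any moment, which is the usual reason such streaming fits on a single GPU.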

🚀 Getting Started

🛠 Installation

# clone this repo
git clone https://github.com/GVCLab/PersonaLive
cd PersonaLive

# Create conda environment
conda create -n personalive python=3.10
conda activate personalive

# Install packages with pip
pip install -r requirements.txt
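
A quick sanity check before moving on; this assumes requirements.txt pins a CUDA-enabled PyTorch build (the pipeline targets a single 12GB GPU):

# verify PyTorch sees the GPU (assumes torch is in requirements.txt)
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"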

⏬ Download weights

  1. Download the pre-trained weights of the base models and other components (sd-image-variations-diffusers and sd-vae-ft-mse). You can run the following command to download them automatically:

    python tools/download_weights.py
    
  2. Download the PersonaLive pre-trained weights into the ./pretrained_weights folder.

Finally, the weights should be organized as follows:

pretrained_weights
├── onnx
│   ├── unet_opt
│   │   ├── unet_opt.onnx
│   │   └── unet_opt.onnx.data
│   └── unet
├── personalive
│   ├── denoising_unet.pth
│   ├── motion_encoder.pth
│   ├── motion_extractor.pth
│   ├── pose_guider.pth
│   ├── reference_unet.pth
│   └── temporal_module.pth
├── sd-vae-ft-mse
│   ├── diffusion_pytorch_model.bin
│   └── config.json
├── sd-image-variations-diffusers
│   ├── image_encoder
│   │   ├── pytorch_model.bin
│   │   └── config.json
│   ├── unet
│   │   ├── diffusion_pytorch_model.bin
│   │   └── config.json
│   └── model_index.json
└── tensorrt
    └── unet_work.engine
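
To confirm the layout matches the tree above, here is a minimal check of the core checkpoints. The script is a hypothetical helper, not shipped with the repo; the onnx/ and tensorrt/ entries are left out because the TensorRT engine is built locally in the optional acceleration step below:

from pathlib import Path

# Hypothetical helper: verify the downloaded weights match the expected layout.
EXPECTED = [
    "personalive/denoising_unet.pth",
    "personalive/motion_encoder.pth",
    "personalive/motion_extractor.pth",
    "personalive/pose_guider.pth",
    "personalive/reference_unet.pth",
    "personalive/temporal_module.pth",
    "sd-vae-ft-mse/config.json",
    "sd-vae-ft-mse/diffusion_pytorch_model.bin",
    "sd-image-variations-diffusers/model_index.json",
    "sd-image-variations-diffusers/image_encoder/pytorch_model.bin",
    "sd-image-variations-diffusers/unet/diffusion_pytorch_model.bin",
]
missing = [p for p in EXPECTED if not Path("pretrained_weights", p).exists()]
print("all core weights found" if not missing else f"missing: {missing}")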

🎞️ Offline Inference

python inference_offline.py

📸 Online Inference

📦 Setup Web UI

# install Node.js 18+
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
nvm install 18

cd webcam
source start.sh

🏎️ Acceleration (Optional)

Converting the model to TensorRT can significantly speed up inference (~2x ⚡️). Building the engine may take about 20 minutes, depending on your device. Note that TensorRT optimizations may introduce slight variations or a small drop in output quality.

python torch2trt.py
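
If the build succeeds, the engine should deserialize cleanly. A minimal sanity check, assuming the engine lands at the path shown in the weights tree above:

import tensorrt as trt

# Deserialize the engine produced by torch2trt.py to confirm it is valid.
logger = trt.Logger(trt.Logger.WARNING)
with open("pretrained_weights/tensorrt/unet_work.engine", "rb") as f:
    with trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
print("engine ok:", engine is not None)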

▶️ Start Streaming

python inference_online.py

Then open http://0.0.0.0:7860 in your browser. (If http://0.0.0.0:7860 does not work, try http://localhost:7860 instead.)

📋 Citation

If you find PersonaLive useful for your research, please 🌟 this repo and cite our work using the following BibTeX:

@article{li2025personalive,
  title={PersonaLive! Expressive Portrait Image Animation for Live Streaming},
  author={Li, Zhiyuan and Pun, Chi-Man and Fang, Chen and Wang, Jue and Cun, Xiaodong},
  journal={arXiv preprint arXiv:2512.11253},
  year={2025}
}

❤️ Acknowledgement

This code is mainly built upon Moore-AnimateAnyone, X-NeMo, StreamDiffusion, RAIN, and LivePortrait. Thanks to the authors for their invaluable contributions.
