AI Heap
VCT: Training Consistency Models with Variational Noise Coupling

arXiv:2502.18197 (https://arxiv.org/abs/2502.18197)
Authors
  • Gianluigi Silvestri
  • Luca Ambrogioni
  • Chieh-Hsin Lai
  • Yuhta Takida
  • Yuki Mitsufuji
Consistency Training (CT) has recently emerged as a strong alternative to diffusion models for image generation. However, non-distillation CT often suffers from high variance and instability, motivating ongoing research into its training dynamics. We propose Variational Consistency Training (VCT), a flexible and effective framework compatible with various forward kernels, including those in flow matching. Its key innovation is a learned noise-data coupling scheme inspired by Variational Autoencoders, where a data-dependent encoder models noise emission. This enables VCT to adaptively learn noise-to-data pairings, reducing training variance relative to the fixed, unsorted pairings in classical CT. Experiments on multiple image datasets demonstrate significant improvements: our method surpasses baselines, achieves state-of-the-art FID among non-distillation CT approaches on CIFAR-10, and matches state-of-the-art performance on ImageNet 64×64 with only two sampling steps. Code is available at https://github.com/sony/vct.
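
To make the coupling idea concrete, here is a minimal PyTorch sketch of the mechanism the abstract describes: a VAE-style encoder proposes data-dependent noise, a KL term regularizes it toward the standard Gaussian prior, and a consistency loss ties the model's outputs at two adjacent times along a flow-matching interpolation. The network architectures, time discretization, step size, and loss weighting below are illustrative assumptions, not the authors' implementation; see the linked repository for the real one.

```python
# Sketch of the VCT training idea (assumptions: toy MLPs, linear
# flow-matching path x_t = (1 - t) * x + t * z, fixed step 0.01,
# hand-picked KL weight). Not the paper's exact algorithm.
import torch
import torch.nn as nn

class NoiseEncoder(nn.Module):
    """Amortized q(z | x): predicts mean/log-variance of the coupled noise."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.SiLU(),
                                 nn.Linear(128, 2 * dim))

    def forward(self, x):
        mu, logvar = self.net(x).chunk(2, dim=-1)
        return mu, logvar

class ConsistencyModel(nn.Module):
    """f_theta(x_t, t): maps a noisy point at time t back toward the data."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(),
                                 nn.Linear(128, dim))

    def forward(self, x_t, t):
        return self.net(torch.cat([x_t, t], dim=-1))

def vct_step(encoder, student, teacher, x, kl_weight=1e-3):
    # Sample coupled noise z ~ q(z | x) via the reparameterization trick.
    mu, logvar = encoder(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    # KL(q(z | x) || N(0, I)) keeps the marginal noise close to the prior,
    # so generation can still start from a standard Gaussian draw.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1).mean()

    # Two adjacent times on the flow-matching interpolation path.
    t = torch.rand(x.shape[0], 1)
    s = (t - 0.01).clamp(min=0.0)
    x_t = (1 - t) * x + t * z
    x_s = (1 - s) * x + s * z

    # Consistency loss: the student at time t should match a frozen
    # teacher (e.g. an EMA copy of the student) at the earlier time s.
    with torch.no_grad():
        target = teacher(x_s, s)
    consistency = (student(x_t, t) - target).pow(2).mean()
    return consistency + kl_weight * kl
```

In a full training loop the teacher would typically be an EMA or stop-gradient copy of the student, and sampling would start from z ~ N(0, I), which the KL term keeps compatible with the learned coupling; the coupling itself is what replaces the fixed, unsorted noise-data pairings of classical CT.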