ZSVC: Zero-shot Style Voice Conversion with Latent Diffusion Models

0. Contents

  1. Abstract
  2. Demos -- zero-shot style voice conversion for arbitrary speakers


1. Abstract

Style voice conversion aims to transform the speaking style of source speech into a desired style while keeping the original speaker's identity. However, previous style voice conversion approaches primarily focus on well-defined domains such as emotion, limiting their practical applications. In this study, we present ZSVC, a novel Zero-shot Style Voice Conversion approach that utilizes a speech codec and a latent diffusion model with a speech prompting mechanism to facilitate in-context learning for speaking style conversion. To disentangle speaking style and speaker timbre, we introduce an information bottleneck to filter the speaking style in the source speech and employ Uncertainty Modeling Adaptive Instance Normalization (UMAdaIN) to perturb the speaker timbre in the style prompt. Moreover, we propose a novel adversarial training strategy to enhance in-context learning and improve style similarity. Experiments conducted on 44,000 hours of speech data demonstrate the superior performance of ZSVC in generating speech with diverse speaking styles in zero-shot scenarios.
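To illustrate the timbre-perturbation idea behind UMAdaIN, below is a minimal numpy sketch: each channel of the style features is instance-normalized, then re-scaled and re-shifted with statistics perturbed by Gaussian noise whose scale is estimated from the batch-level variance of those statistics. This is an assumed formulation for illustration only; the exact UMAdaIN definition in the paper may differ.

```python
import numpy as np

def umadain(x, eps=1e-5, rng=None):
    """Hypothetical sketch of Uncertainty Modeling AdaIN.

    x: style features of shape (batch, channels, time).
    Returns features with perturbed per-channel statistics, which
    weakens speaker-timbre cues while preserving style content.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Per-instance, per-channel statistics over the time axis.
    mu = x.mean(axis=2, keepdims=True)
    sigma = x.std(axis=2, keepdims=True) + eps
    # "Uncertainty" of the statistics: their variation across the batch.
    mu_uncert = mu.std(axis=0, keepdims=True)
    sigma_uncert = sigma.std(axis=0, keepdims=True)
    # Perturb the statistics with noise scaled by that uncertainty.
    mu_hat = mu + rng.standard_normal(mu.shape) * mu_uncert
    sigma_hat = sigma + rng.standard_normal(sigma.shape) * sigma_uncert
    # Normalize, then re-stylize with the perturbed statistics.
    return sigma_hat * (x - mu) / sigma + mu_hat
```

With a batch of one, the estimated uncertainty is zero and the transform reduces to the identity; with larger batches, each utterance's channel statistics are randomly shifted, so the model cannot rely on them as a stable timbre signature.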


Figure 1: Overall framework of the proposed ZSVC. 'IC' denotes in-context learning through the speech prompting mechanism.


Figure 2: The detailed architecture of the proposed ZSVC. Dashed lines denote components used only during training.



2. Demos -- zero-shot style voice conversion for arbitrary speakers

Each part below compares the source speech and a style prompt with converted outputs from LGVC, StyleVC, and the proposed ZSVC (audio samples).

Part 1: Source speech | Style prompt | LGVC | StyleVC | ZSVC

Part 2: Source speech | Style prompt | LGVC | StyleVC | ZSVC

Part 3: Source speech | Style prompt | LGVC | StyleVC | ZSVC

Part 4: Source speech | Style prompt | LGVC | StyleVC | ZSVC

Part 5: Source speech | Style prompt | LGVC | StyleVC | ZSVC