Deformation-Recovery diffusion model (DRDM): Instance deformation for image manipulation and synthesis.

Zheng J-Q., Mo Y., Sun Y., Li J., Wu F., Wang Z., Vincent T., Papież BW.

In medical imaging, diffusion models have shown great potential for synthetic image generation. However, these approaches often lack interpretable correspondence between generated and real images and can create anatomically implausible structures or illusions. To address these limitations, we propose the Deformation-Recovery Diffusion Model (DRDM), a novel diffusion-based generative model that emphasizes morphological transformation through deformation fields rather than direct image synthesis. DRDM introduces a topology-preserving deformation field generation strategy, which randomly samples and integrates multi-scale Deformation Velocity Fields (DVFs). DRDM is trained to recover unrealistic deformation components, thus restoring randomly deformed images to a realistic distribution. This formulation enables the generation of diverse yet anatomically plausible deformations that preserve structural integrity, thereby improving data augmentation and synthesis for downstream tasks such as few-shot learning and image registration. Experiments on cardiac Magnetic Resonance Imaging and pulmonary Computed Tomography show that DRDM is capable of creating diverse, large-scale deformations while maintaining anatomical plausibility of the deformation fields. Additional evaluations on 2D image segmentation and 3D image registration tasks indicate notable performance gains, underscoring DRDM's potential to enhance both image manipulation and generative modeling in medical imaging applications. Project page: https://jianqingzheng.github.io/def_diff_rec/.
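The abstract's core mechanism, sampling random multi-scale velocity fields and integrating them into a topology-preserving deformation, can be sketched in a few lines. The snippet below is a minimal 2D illustration, not the authors' implementation: all function names, scales, and smoothing parameters are hypothetical, and the integration uses the standard scaling-and-squaring approximation of the exponential map, which keeps the resulting deformation diffeomorphic for small velocities.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates, zoom


def sample_multiscale_dvf(shape, scales=(4, 8, 16), magnitude=2.0, seed=None):
    """Sample a random smooth velocity field as a sum of fields at several scales.

    Hypothetical sketch: each scale draws coarse Gaussian noise, upsamples it
    to full resolution, and smooths it, giving a multi-scale DVF.
    """
    rng = np.random.default_rng(seed)
    h, w = shape
    v = np.zeros((2, h, w))
    for s in scales:
        hc, wc = max(h // s, 2), max(w // s, 2)
        coarse = rng.standard_normal((2, hc, wc))
        for c in range(2):
            up = zoom(coarse[c], (h / hc, w / wc), order=3)
            v[c] += gaussian_filter(up, sigma=2.0) * (magnitude / len(scales))
    return v


def integrate_dvf(v, steps=6):
    """Scaling-and-squaring: exp(v) via repeated self-composition of v / 2**steps."""
    h, w = v.shape[1:]
    disp = v / (2 ** steps)
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
    for _ in range(steps):
        # Compose the displacement with itself: d(x) + d(x + d(x)).
        coords = grid + disp
        warped = np.stack([
            map_coordinates(disp[c], coords, order=1, mode="nearest")
            for c in range(2)
        ])
        disp = disp + warped
    return disp


def warp_image(img, disp):
    """Resample an image through a displacement field (linear interpolation)."""
    h, w = img.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
    return map_coordinates(img, grid + disp, order=1, mode="nearest")
```

In the paper's framing, a field like this would randomly deform a real image, and the diffusion model would then be trained to recover the unrealistic deformation components; the sketch covers only the forward sampling and integration step.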

DOI

10.1016/j.media.2026.103987

Type

Journal article

Publication Date

2026-02-11

Volume

110

Keywords

Data augmentation, Generative model, Image registration, Image synthesis, Segmentation
