Diffusion models [1,2,3] are a family of latent-variable generative
models that achieve state-of-the-art results in probabilistic generative modeling. They have shown great performance in generating realistic images (Figure 1) and text (Figure 2). Among others, the DALL-E neural network has become popular for the quality of the images it generates from text.

Inverse problems form a very active field of research that aims at recovering an unknown object from its observation, for instance by an imaging device. This includes denoising (removing noise from an image), deblurring (removing blur), inpainting (recovering the missing parts of an image), super-resolution (increasing the resolution), etc. In this project, we will be particularly interested in practical applications in computer vision and in microscopy.
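To make the ill-posedness concrete, here is a minimal sketch of inpainting as a linear inverse problem. The mask, sizes, and variable names are illustrative choices, not taken from any of the cited works:

```python
import numpy as np

# Toy linear inverse problem: inpainting. The forward model keeps only a
# subset of pixels (mask M); the task is to recover the full image x from
# y = M * x. Without a prior the problem is ill-posed: the masked pixels
# carry no information at all.
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))   # unknown "image"
M = rng.random((16, 16)) > 0.5      # observation mask, keeps ~50% of pixels
y = M * x                           # observed data

# A naive reconstruction can only copy the observed pixels; the missing
# ones stay at zero -- this is exactly where a generative prior helps.
x_hat = np.where(M, y, 0.0)
```

Replacing the trivial fill `0.0` with a sample from a generative model conditioned on `y` is the theme of this project.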

Generative models such as DALL-E 2 can be used to solve ill-posed inverse problems. Historically, the first approaches used generative adversarial networks (GANs) [4,5]. However, GANs can be difficult to train and to use as priors for solving inverse problems. Normalizing flows [6,7] alleviate some of these issues, but the quality of the generated samples and the cost of the computations have prevented them from effectively solving imaging problems.
In this project, we explore the capacity of recent diffusion-based generative models to solve ill-posed inverse problems.


Figure 1: Two images generated using DALL-E 2. Left: input is "Teddy bears working on a new AI research
on the moon in the 1980s". Right: input is "A bowl of soup that looks like a monster spray-painted on a


Figure 2: Three outputs produced by OpenAI's text completion algorithm.

Project description

In the first part of the project, we will implement and explore diffusion methods for simple tasks.
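As a warm-up for this first part, the forward (noising) process of a denoising diffusion probabilistic model [1] can be implemented in a few lines. The linear beta schedule and the variable names (`T`, `beta`, `alpha_bar`) follow common conventions from the paper; this is a sketch, not a full training pipeline:

```python
import numpy as np

# DDPM forward process with a linear beta schedule. The closed form in
# q_sample lets us jump directly to any noise level t without simulating
# the chain step by step.
T = 1000
beta = np.linspace(1e-4, 0.02, T)   # noise schedule beta_t
alpha = 1.0 - beta
alpha_bar = np.cumprod(alpha)       # \bar{alpha}_t = prod_{s<=t} alpha_s

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x0, (1 - abar_t) I)."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))    # toy "image"
x_T = q_sample(x0, T - 1, rng)      # near-pure noise at the last step
```

Training then amounts to regressing the added noise from `x_t` with a neural network; generation runs the process in reverse.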

In the second part, depending on the remaining time and the interest of the student, we will explore one of the following applications.

Generative model and microscopy

Generative models have been successfully applied to microscopy. Recently, they have been used to address the cryogenic electron microscopy (cryo-EM) reconstruction problem [8]. However, that work uses a Wasserstein GAN, which has arguably been superseded by diffusion models.
The main disadvantage of diffusion models is that generating new samples rests on a fine discretization of a stochastic differential equation involving a signal of the size of the images. Recently, it was proposed to run the diffusion in a low-dimensional latent space rather than in image space, significantly reducing the computational time needed for inference [9].
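To see where the cost comes from, here is a sketch of ancestral DDPM sampling: the reverse chain evaluates the denoising network once per step, about T times, on a signal of full image size. The `denoise` function below is a placeholder for a trained network eps_theta(x_t, t); the schedule and names are illustrative:

```python
import numpy as np

# Ancestral DDPM sampling loop (sketch). A real sampler would call a
# trained U-Net in place of `denoise`; here we only count the structure.
T = 1000
beta = np.linspace(1e-4, 0.02, T)
alpha = 1.0 - beta
alpha_bar = np.cumprod(alpha)

def denoise(x, t):
    return np.zeros_like(x)         # stand-in for eps_theta(x_t, t)

def sample(shape, rng):
    x = rng.standard_normal(shape)  # start from pure noise
    for t in range(T - 1, 0, -1):   # ~T network evaluations per sample
        eps = denoise(x, t)
        mean = (x - beta[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alpha[t])
        x = mean + np.sqrt(beta[t]) * rng.standard_normal(shape)
    return x

x = sample((8, 8), np.random.default_rng(0))
```

Latent diffusion [9] keeps the same loop but runs it on a vector much smaller than the image, which is why it is so much cheaper.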

Continuous representation

Implicit neural representation aims at representing a function $x\mapsto f(x)$ by a neural network that takes the position $x$ as input.
It has been successfully applied to solve several challenging imaging problems such as in microscopy [10].
There exist several implicit representations: pixel-based (corresponding to linear interpolation with learnable amplitudes at fixed positions), NURBS, and neural networks with various positional encodings (sine, random features). Each representation has a different implicit bias, which might explain the differing performance across applications.
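A toy version of the random-features variant can be fitted in closed form: encode the position with random sine/cosine features, then fit a linear readout by least squares. This linear stand-in for the neural variants is illustrative; the frequency scale and all names are our choices:

```python
import numpy as np

# Implicit representation x -> f(x) via random Fourier features plus a
# linear readout, fitted to a 1-D signal sampled on [0, 1].
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)[:, None]       # sample positions
y = np.sin(2 * np.pi * x[:, 0])               # signal to represent

B = rng.normal(scale=10.0, size=(1, 64))      # random frequencies
phi = np.concatenate([np.sin(x @ B), np.cos(x @ B)], axis=1)

w, *_ = np.linalg.lstsq(phi, y, rcond=None)   # fit the linear readout
mse = np.mean((phi @ w - y) ** 2)
```

The scale of the random frequencies controls the implicit bias: larger scales make the representation favor higher-frequency content.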

In this project, we explore coupling diffusion models with implicit representations. In particular, we will be interested in solving inverse problems with applications in microscopy (cryo-EM, cryo-ET).
We will build on several recent works conducted in our group [10,11,12].
This project requires strong coding skills and some mathematical skills.

Generative model for non-Euclidean spaces

Diffusion models are widely studied for signals defined in Euclidean space. In that case, the stochastic differential equation and its reverse-time discretization are well defined. However, not all signals are naturally defined in Euclidean spaces. For instance, cryo-EM projections are naturally defined as signals on the rotation group SO(3), and graphs can be embedded into hyperbolic spaces [13].
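The non-Euclidean analogue of "adding Gaussian noise" is a geodesic random walk on the manifold. The sketch below takes small random steps on SO(3) via the exponential map; it is purely illustrative (a principled Riemannian diffusion model needs more care), and the step size is an arbitrary choice:

```python
import numpy as np

# Geodesic random walk on SO(3): each step multiplies the current rotation
# by exp(hat(eps)) with small Gaussian tangent-space noise eps.

def hat(v):
    """Map a vector in R^3 to a skew-symmetric matrix in so(3)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def expm_so3(w):
    """Rodrigues' formula: the matrix exponential of hat(w)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

rng = np.random.default_rng(0)
R = np.eye(3)                       # start at the identity rotation
for _ in range(100):
    R = R @ expm_so3(0.1 * rng.standard_normal(3))
```

Unlike naive additive noise on the matrix entries, every iterate of this walk remains a valid rotation (orthogonal, determinant +1).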

In this project, we will be interested in extending diffusion models to non-Euclidean spaces. This project requires strong mathematical interest and only basic coding skills.


Any of the above projects can be adapted into either a Bachelor or a Master thesis.

Depending on the chosen project, you will be supervised by at least two different people. Please reach out to one of the following for a first inquiry:

  • Valentin Debarnot, valentin.debarnot[@]unibas.ch
  • Ivan Dokmanić, ivan.dokmanic[@]unibas.ch


[1] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances
in Neural Information Processing Systems, 33:6840–6851, 2020.

[2] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv
preprint arXiv:2010.02502, 2020.

[3] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic
models. In International Conference on Machine Learning, pages 8162–8171. PMLR, 2021.

[4] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,
Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the
ACM, 63(11):139–144, 2020.

[5] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial
networks. In International Conference on Machine Learning, pages 214–223. PMLR, 2017.

[6] Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions.
Advances in neural information processing systems, 31, 2018.

[7] Konik Kothari, AmirEhsan Khorashadizadeh, Maarten de Hoop, and Ivan Dokmanić. Trumpets:
Injective flows for inference and inverse problems. In Uncertainty in Artificial Intelligence,
pages 1269–1278. PMLR, 2021.

[8] Harshit Gupta, Michael T McCann, Laurene Donati, and Michael Unser. CryoGAN: A new
reconstruction paradigm for single-particle cryo-EM via deep adversarial learning. IEEE
Transactions on Computational Imaging, 7:759–774, 2021.

[9] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-
resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Con-
ference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.

[10] Valentin Debarnot, Sidharth Gupta, Konik Kothari, and Ivan Dokmanić. Joint cryo-ET
alignment and reconstruction with neural deformation fields. arXiv preprint arXiv:2211.14534, 2022.

[11] Sidharth Gupta, Konik Kothari, Valentin Debarnot, and Ivan Dokmanić. Differentiable
uncalibrated imaging. arXiv preprint arXiv:2211.10525, 2022.

[12] Amir Khorashadizadeh, Anadi Chaman, Valentin Debarnot, and Ivan Dokmanić. FunkNN:
Neural interpolation for functional generation. 2022.

[13] Anastasis Kratsios, Valentin Debarnot, and Ivan Dokmanić. Small transformers compute
universal metric embeddings. arXiv preprint arXiv:2209.06788, 2022.