Inspired by the extraordinary success of deep learning across a variety of computer vision problems, signal, image, and video processing is undergoing a paradigm shift from traditional approaches based on assumed linear models to learned nonlinear models. Although learned models have produced surprisingly good image/video restoration and compression results in controlled experiments, models trained on synthetic data generalize imperfectly to images/video captured by real cameras, and studies offering a theoretical explanation of these results are lacking. Furthermore, because there are no theoretical bounds on the performance limits of learned models, it is not possible to tell how much the results can be further improved. To fill this gap in the state of the art, this project aims to:
• Establish a theoretical foundation and performance bounds for nonlinear image/video restoration and compression leveraging recent advances in deep learning.
• Develop new architectures and training methods to achieve better generalization to real-world applications.
• Develop new architectures, visual loss functions and training methods for perceptual image/video restoration and compression.
• Investigate applications of self-supervised operational neural networks to deep image/video restoration and compression.
• Determine which network architectures capture the temporal correlation in video most effectively.
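To make the "operational neural network" aim above concrete: in an operational (generative) neuron, the fixed linear weighting of a conventional neuron is replaced by a learnable nonlinear expansion of the input, often a truncated Maclaurin-like polynomial. The following minimal, dependency-free Python sketch illustrates that idea for a single scalar input; the function name, the truncation order Q implied by the weight list, and the specific values are our own illustrative assumptions, not part of the project description:

```python
def operational_neuron(x, weights, bias=0.0):
    """Toy generative-neuron output for a scalar input x.

    Instead of a single linear weight w*x, the neuron learns one weight
    per polynomial term: bias + w1*x + w2*x**2 + ... + wQ*x**Q
    (a Maclaurin-like expansion; Q = len(weights)). This is a minimal
    sketch of the operational-neuron idea, not a full implementation.
    """
    # enumerate(..., start=1) pairs weights[q-1] with the power x**q
    return bias + sum(w * x**q for q, w in enumerate(weights, start=1))


# With a single weight the neuron reduces to an ordinary linear neuron:
linear_out = operational_neuron(2.0, [1.0])          # 1.0 * 2.0 = 2.0

# With two weights it also responds to the squared input:
nonlinear_out = operational_neuron(2.0, [1.0, 0.5])  # 2.0 + 0.5 * 4.0 = 4.0
```

In a full network the expansion coefficients would be trained jointly with the rest of the model, letting each neuron learn its own nonlinearity rather than relying on a fixed activation function.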