filename : Ber23a.pdf
entry : inproceedings
conference : CVPR 2023, Vancouver, Canada, 18-22 June, 2023
pages : 22347-22355
year : 2023
month : 06
title : Kernel Aware Resampler
subtitle :
author : Bernasconi, Michael and Djelouah, Abdelaziz and Salehi, Farnood and Gross, Markus and Schroers, Christopher
booktitle : Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
ISSN/ISBN :
editor :
publisher : IEEE
publ.place :
volume :
issue :
language : English
keywords : resampling, super-resolution, kernel-estimation
abstract : Deep learning based methods for super-resolution have become state-of-the-art and outperform traditional approaches by a significant margin. From the initial models designed for fixed integer scaling factors (e.g. x2 or x4), efforts were made to explore different directions, such as modeling blur kernels or addressing non-integer scaling factors. However, existing works do not provide a sound framework to handle these jointly. In this paper we propose a framework for generic image resampling that not only addresses all of the above-mentioned issues but also extends the set of possible transformations from upscaling to generic transforms. A key aspect in unlocking these capabilities is the faithful modeling of image warping and changes of the sampling rate during training data preparation. This allows a localized representation of the implicit image degradation that takes into account the reconstruction kernel, the local geometric distortion, and the anti-aliasing kernel. Using this spatially variant degradation map as conditioning for our resampling model, we can address with the same model both global transformations, such as upscaling or rotation, and locally varying transformations, such as lens distortion or undistortion. Another important contribution is the automatic estimation of the degradation map in this more complex resampling setting (i.e. blind image resampling).
Finally, we show that state-of-the-art results can be achieved by predicting kernels to apply to the input image instead of predicting colors directly. This makes our model applicable to types of data not seen during training, such as normals.
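The kernel-prediction idea in the last sentence can be sketched as follows: rather than regressing output colors directly, a network outputs a small filter per output pixel, which is then applied to the corresponding input neighborhood. The snippet below is a minimal illustration of this final filtering step only (the function name, the (H, W, k, k) kernel layout, and the same-size, warp-free setting are assumptions for illustration, not the paper's actual implementation):

```python
import numpy as np

def apply_pixel_kernels(image, kernels):
    """Apply a spatially varying kernel at each output pixel.

    image:   (H, W) grayscale input (same-size case, no warping).
    kernels: (H, W, k, k) per-pixel filters, e.g. as a network might
             predict them (hypothetical layout for illustration).
    """
    H, W, k, _ = kernels.shape
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty((H, W), dtype=image.dtype)
    for y in range(H):
        for x in range(W):
            # Weighted sum of the k x k input patch with this pixel's kernel.
            patch = padded[y:y + k, x:x + k]
            out[y, x] = np.sum(patch * kernels[y, x])
    return out

# Sanity check: identity kernels (1 at the center tap) reproduce the input.
img = np.arange(16, dtype=float).reshape(4, 4)
ident = np.zeros((4, 4, 3, 3))
ident[..., 1, 1] = 1.0
print(np.allclose(apply_pixel_kernels(img, ident), img))  # True
```

Because the output is a convex-like combination of input samples, the same predicted kernels can be reused on co-registered auxiliary channels (e.g. normal maps), which is one intuition for the generalization claim above.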