From CNNs to Shift-Invariant Twin Models Based on Complex Wavelets

Abstract

We propose a novel antialiasing method to increase shift invariance and prediction accuracy in convolutional neural networks. Specifically, we replace the first-layer combination "real-valued convolutions + max pooling" (ℝMax) by "complex-valued convolutions + modulus" (ℂMod), which is stable to translations. To justify our approach, we claim that ℂMod and ℝMax produce comparable outputs when the convolution kernel is band-pass and oriented (Gabor-like filter). In this context, ℂMod can be considered a stable alternative to ℝMax. Thus, prior to antialiasing, we force the convolution kernels to adopt such a Gabor-like structure. The corresponding architecture is called a mathematical twin, because it employs a well-defined mathematical operator to mimic the behavior of the original, freely-trained model. Our antialiasing approach achieves superior accuracy on ImageNet and CIFAR-10 classification tasks, compared to prior methods based on low-pass filtering. Arguably, our approach's emphasis on retaining high-frequency details contributes to a better balance between shift invariance and information preservation, resulting in improved performance. Furthermore, it has a lower computational cost and memory footprint than concurrent work, making it a promising solution for practical implementation.
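To make the ℝMax-versus-ℂMod contrast concrete, here is a minimal PyTorch sketch of the two first-layer operators. The class names (RMaxLayer, CModLayer) and all hyperparameters are illustrative assumptions, not the paper's actual implementation; the complex-valued convolution is emulated with two real convolutions holding the real and imaginary parts of the kernel, followed by a pointwise modulus.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMaxLayer(nn.Module):
    """Standard first layer: real-valued convolution + max pooling."""
    def __init__(self, in_ch=3, out_ch=64, k=7, stride=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, stride=stride,
                              padding=k // 2, bias=False)

    def forward(self, x):
        # Max pooling subsamples the feature map, which aliases
        # high-frequency content and breaks shift invariance.
        return F.max_pool2d(F.relu(self.conv(x)),
                            kernel_size=3, stride=2, padding=1)

class CModLayer(nn.Module):
    """Antialiased alternative: complex-valued convolution + modulus.

    The real and imaginary parts of a (Gabor-like) complex kernel are
    stored as two real convolutions; taking the modulus discards the
    phase, which is the component most sensitive to small input shifts.
    """
    def __init__(self, in_ch=3, out_ch=64, k=7, stride=4):
        super().__init__()
        self.conv_re = nn.Conv2d(in_ch, out_ch, k, stride=stride,
                                 padding=k // 2, bias=False)
        self.conv_im = nn.Conv2d(in_ch, out_ch, k, stride=stride,
                                 padding=k // 2, bias=False)

    def forward(self, x):
        # |w * x| = sqrt((Re(w) * x)^2 + (Im(w) * x)^2), channelwise;
        # the epsilon keeps the gradient finite at zero.
        return torch.sqrt(self.conv_re(x) ** 2
                          + self.conv_im(x) ** 2 + 1e-12)

x = torch.randn(1, 3, 224, 224)
# Both paths downsample by an overall factor of 4: (1, 64, 56, 56).
print(RMaxLayer()(x).shape, CModLayer()(x).shape)
```

Note that the strides are chosen so both layers produce outputs of the same spatial size, which is what makes ℂMod a drop-in replacement for the ℝMax block.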

Publication
32nd European Signal Processing Conference (EUSIPCO)