BANet: Blur-aware Attention Networks
for Dynamic Scene Deblurring





Abstract

Image motion blur usually results from moving objects or camera shake. Such blur is generally directional and non-uniform. Previous research efforts attempt to solve non-uniform blur by using self-recurrent multi-scale or multi-patch architectures accompanied by self-attention. However, self-recurrent frameworks typically lead to longer inference times, while inter-pixel or inter-channel self-attention may cause excessive memory usage. This paper proposes blur-aware attention networks (BANet) that accomplish accurate and efficient deblurring via a single forward pass. BANet utilizes region-based self-attention with multi-kernel strip pooling to disentangle blur patterns of different magnitudes and orientations, and cascaded parallel dilated convolution to aggregate multi-scale content features. Extensive experimental results on the GoPro and HIDE benchmarks demonstrate that the proposed BANet performs favorably against state-of-the-art methods in blurred image restoration and can produce deblurred results in real time.
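To give a feel for the strip-pooling idea mentioned above, here is a minimal NumPy sketch of multi-kernel strip pooling: each map is average-pooled with a 1-D strip along the horizontal or vertical axis, and maps from several strip lengths are combined. This is an illustrative sketch only, not the paper's implementation; the kernel sizes, edge padding, and averaging scheme here are assumptions.

```python
import numpy as np

def strip_pool(feat, k, axis):
    """Stride-1 average pooling with a length-k strip along one spatial axis.

    Edge padding keeps the output the same size as the input (an assumption;
    the actual BANet padding scheme may differ).
    """
    pad = [(0, 0)] * feat.ndim
    pad[axis] = (k // 2, k - 1 - k // 2)
    padded = np.pad(feat, pad, mode="edge")
    # Sliding windows of length k along `axis`, then average each window.
    windows = np.lib.stride_tricks.sliding_window_view(padded, k, axis=axis)
    return windows.mean(axis=-1)

def multi_kernel_strip_pool(feat, kernels=(3, 5, 7)):
    """Average horizontal and vertical strip-pooled maps over several strip lengths.

    Different strip lengths respond to blur of different magnitudes; the two
    axes capture different blur orientations.
    """
    out = np.zeros_like(feat, dtype=float)
    for k in kernels:
        out += strip_pool(feat, k, axis=0)  # vertical strips
        out += strip_pool(feat, k, axis=1)  # horizontal strips
    return out / (2 * len(kernels))

x = np.arange(36, dtype=float).reshape(6, 6)  # toy single-channel feature map
y = multi_kernel_strip_pool(x)
print(y.shape)  # same spatial size as the input
```

In the actual network, such direction-aware pooled maps would be used to form attention weights over the content features rather than being returned directly.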

Papers

Citation

Fu-Jen Tsai*, Yan-Tsung Peng*, Yen-Yu Lin, Chung-Chi Tsai, and Chia-Wen Lin, "BANet: Blur-aware Attention Networks for Dynamic Scene Deblurring", arXiv preprint arXiv:2101.07518, 2021.


BibTex
@article{BANet,
  author  = {Tsai, Fu-Jen and Peng, Yan-Tsung and Lin, Yen-Yu and Tsai, Chung-Chi and Lin, Chia-Wen},
  title   = {BANet: Blur-aware Attention Networks for Dynamic Scene Deblurring},
  journal = {arXiv preprint arXiv:2101.07518},
  year    = {2021}
}
Code and Results