AttaNet: Attention-Augmented Network for Fast and Accurate Scene Parsing


Abstract


Two factors have proven to be very important to the performance of semantic segmentation models: global context and multi-level semantics. However, generating features that capture both factors always leads to high computational complexity, which is problematic in real-time scenarios. In this paper, we propose a new model, called Attention-Augmented Network (AttaNet), to capture both global context and multi-level semantics while keeping efficiency high. AttaNet consists of two primary modules: the Strip Attention Module (SAM) and the Attention Fusion Module (AFM). Given that vertical strip regions appear significantly more often than horizontal ones in natural images, SAM utilizes a striping operation to drastically reduce the complexity of encoding global context in the vertical direction while retaining most of the contextual information, compared to non-local approaches. Moreover, AFM follows a cross-level aggregation strategy to limit the computation, and adopts an attention strategy to weight the importance of different feature levels at each pixel when fusing them, yielding an efficient multi-level representation. We have conducted extensive experiments on two semantic segmentation benchmarks. Our network achieves different levels of speed/accuracy trade-off on Cityscapes, e.g., 71 FPS/79.9% mIoU, 130 FPS/78.5% mIoU, and 180 FPS/70.1% mIoU, and leading performance on ADE20K as well.
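To give a rough sense of the savings over non-local attention that the abstract refers to, the snippet below counts pairwise affinities for full non-local attention versus column-wise strip attention. The 1/8-resolution Cityscapes feature map size used here is an illustrative assumption, not a figure from the paper.

```python
# Rough affinity-count comparison at an assumed 1/8-resolution feature map
# of a 1024x2048 Cityscapes image (H=128, W=256). The exact numbers depend
# on the backbone stride; only the scaling behaviour matters here.
H, W = 128, 256

non_local = (H * W) ** 2   # every pixel attends to every other pixel
strip     = (H * W) * W    # every pixel attends to W vertical strips

print(f"non-local affinities: {non_local:,}")          # 1,073,741,824
print(f"strip affinities:     {strip:,}")              # 8,388,608
print(f"reduction factor:     {non_local // strip}x")  # 128x (= H)
```

Under these assumptions, striping the keys along the vertical axis cuts the number of affinities by a factor of H, which is where the real-time speed-ups come from.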

Figure: results of training a network without and with SAM (w/o SAM vs. w/ SAM).

The benefits of our SAM are three-fold. First, since the striped feature map combines all pixels along the same spatial dimension, it provides strong supervision for capturing anisotropic or banded context. Second, we first ensure that the relationships between each pixel and all columns are considered, and then estimate the attention map along the horizontal axis, so our network can still generate dense contextual dependencies. Third, this module adds only a few parameters to the backbone network and therefore takes up very little GPU memory.
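The following PyTorch sketch illustrates one way such a strip-attention block could be realized: keys and values are pooled into one descriptor per column, and every pixel attends over those W descriptors along the horizontal axis. The module name, channel sizes, and the use of average pooling for the striping step are our assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class StripAttention(nn.Module):
    """Sketch of a strip-attention block: keys/values are pooled into
    vertical strips (one descriptor per column), so each pixel attends
    over W columns instead of H*W positions."""

    def __init__(self, in_channels, key_channels=64):
        super().__init__()
        self.query = nn.Conv2d(in_channels, key_channels, 1)
        self.key = nn.Conv2d(in_channels, key_channels, 1)
        self.value = nn.Conv2d(in_channels, in_channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        # Queries: one per pixel -> (B, H*W, C_k)
        q = self.query(x).flatten(2).transpose(1, 2)
        # Striping: average each column over the vertical axis,
        # giving one key/value per column -> (B, C_k, W) / (B, C, W)
        k = self.key(x).mean(dim=2)
        v = self.value(x).mean(dim=2)
        # Attention map along the horizontal axis: (B, H*W, W)
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)
        # Aggregate column descriptors back to every pixel
        out = (attn @ v.transpose(1, 2)).transpose(1, 2).reshape(b, c, h, w)
        return x + out
```

Because only the keys and values are striped while every pixel keeps its own query, the per-query cost scales with W rather than H*W, matching the complexity argument above.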

Figure: results of training a network without and with AFM (w/o AFM vs. w/ AFM).

Since low-level features contain excessive spatial details while high-level features are rich in semantics, simply aggregating multi-level information would weaken the effectiveness of information propagation. To address this issue, we introduce the Attention Fusion Module (AFM), which enables each pixel to choose individual contextual information from multi-level features during the aggregation phase. In this way, AFM can exploit more discriminative context for each class.
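A minimal PyTorch sketch of this kind of attention-weighted, per-pixel fusion of two feature levels is given below. The module name, the sigmoid gating design, and the channel choices are illustrative assumptions rather than the paper's exact AFM.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionFusion(nn.Module):
    """Sketch of attention-based fusion of a low-level and a high-level
    feature map: a per-pixel gate decides how much of each level to keep."""

    def __init__(self, low_channels, high_channels, out_channels):
        super().__init__()
        self.low_proj = nn.Conv2d(low_channels, out_channels, 1)
        self.high_proj = nn.Conv2d(high_channels, out_channels, 1)
        # Per-pixel weight predicted from the concatenated features
        self.gate = nn.Sequential(
            nn.Conv2d(2 * out_channels, out_channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels // 4, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, low, high):
        # Upsample the high-level map to the low-level resolution
        high = F.interpolate(self.high_proj(high), size=low.shape[2:],
                             mode='bilinear', align_corners=False)
        low = self.low_proj(low)
        a = self.gate(torch.cat([low, high], dim=1))  # (B, 1, H, W) in [0, 1]
        # Each pixel mixes the two levels with its own weight
        return a * low + (1 - a) * high
```

The scalar gate shown here is a two-level special case; weighting more than two levels could instead use a per-pixel softmax over the levels, which is in the same spirit as the attention strategy described above.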