Dynamic Steerable Blocks in Deep Residual Networks
| Authors | |
|---|---|
| Publication date | 2017 |
| Host editors | |
| Book title | Proceedings of the British Machine Vision Conference 2017 |
| ISBN (electronic) | |
| Event | 28th British Machine Vision Conference |
| Article number | 145 |
| Number of pages | 13 |
| Publisher | BMVA Press |
| Organisations | |
| Abstract | Filters in convolutional networks are typically parameterized in a pixel basis, which does not take prior knowledge about the visual world into account. We investigate the generalized notion of frames designed with image properties in mind as alternatives to this parametrization. We show that frame-based ResNets and DenseNets can consistently improve performance on CIFAR-10+, while having additional pleasant properties such as steerability. By exploiting these transformation properties explicitly, we arrive at dynamic steerable blocks. They are an extension of residual blocks that can seamlessly transform filters under pre-defined transformations, conditioned on the input at training and inference time. Dynamic steerable blocks learn the degree of invariance from data and locally adapt filters, allowing them to apply a different geometrical variant of the same filter to each location of the feature map. When evaluated on the Berkeley Segmentation contour detection dataset, our approach outperforms all competing approaches that do not utilize pre-training. |
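The steerability the abstract refers to can be illustrated with a classical example (a minimal sketch, not the paper's code; filter size, sigma, and function names here are illustrative assumptions): a first-order Gaussian-derivative filter rotated to any angle theta is an exact linear combination of just two basis filters, G_theta = cos(theta)·G_x + sin(theta)·G_y, so a network can synthesize any rotated variant from fixed basis responses.

```python
import numpy as np

def gaussian_derivative_basis(size=7, sigma=1.5):
    """Return the x- and y-derivative-of-Gaussian basis filters.

    (Illustrative parameters; the paper's frames are more general.)
    """
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    gx = -xx / sigma**2 * g  # derivative along x
    gy = -yy / sigma**2 * g  # derivative along y
    return gx, gy

def steer(gx, gy, theta):
    """Synthesize the filter oriented at angle theta (radians)
    as a linear combination of the two basis filters."""
    return np.cos(theta) * gx + np.sin(theta) * gy

gx, gy = gaussian_derivative_basis()
# Steering by 90 degrees turns the x-derivative into the y-derivative.
g90 = steer(gx, gy, np.pi / 2)
print(np.allclose(g90, gy))  # True
```

In a dynamic steerable block, the angle (or other transformation parameter) would be predicted from the input per spatial location rather than fixed, so each feature-map position receives its own geometrical variant of the same filter.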
| Document type | Conference contribution |
| Note | With supplement |
| Language | English |
| Published at | https://doi.org/10.5244/C.31.145 |
| Other links | https://ivi.fnwi.uva.nl/isis/publications/2017/JacobsenBMVC2017 |
| Downloads | paper145 (Final published version) |
| Supplementary materials | |