Semantic Projection Network for Zero- and Few-Label Semantic Segmentation
| | |
|---|---|
| Authors | |
| Publication date | 2019 |
| Book title | 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition |
| Book subtitle | Proceedings: 16-20 June 2019, Long Beach, California |
| ISBN | |
| ISBN (electronic) | |
| Series | CVPR |
| Event | IEEE Conference on Computer Vision and Pattern Recognition |
| Pages (from-to) | 8248-8257 |
| Publisher | Los Alamitos, CA: IEEE Computer Society |
| Organisations | |
| Abstract | Semantic segmentation is one of the most fundamental problems in computer vision, and pixel-level labelling in this context is particularly expensive. Hence, there have been several attempts to reduce the annotation effort, such as learning from image-level labels and bounding-box annotations. In this paper we take this one step further and focus on the challenging task of zero- and few-shot learning of semantic segmentation. We define this task as image segmentation that assigns a label to every pixel even though either no labeled sample of that class was present during training, i.e. zero-label semantic segmentation, or only a few labeled samples were present, i.e. few-label semantic segmentation. Our goal is to transfer knowledge from previously seen classes to novel classes. Our proposed semantic projection network (SPNet) achieves this goal by incorporating class-level semantic information into any network designed for semantic segmentation, in an end-to-end manner. We also propose a benchmark for this task on the challenging COCO-Stuff and PASCAL VOC12 datasets. Our model is effective not only in segmenting novel classes, i.e. alleviating expensive dense annotations, but also in adapting to novel classes without forgetting its prior knowledge, i.e. generalized zero- and few-label semantic segmentation. |
| Document type | Conference contribution |
| Language | English |
| Published at | https://doi.org/10.1109/CVPR.2019.00845 |
| Other links | http://www.proceedings.com/52034.html |
| Permalink to this page | |
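The abstract's core idea, scoring per-pixel features against fixed class embeddings so that novel classes can be handled by swapping in their embeddings, can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the function name, shapes, plain dot-product scoring, and random toy data are all assumptions.

```python
import numpy as np

def project_to_classes(features, class_embeddings):
    """Hypothetical semantic-projection step.

    features: (H, W, D) per-pixel visual features.
    class_embeddings: (C, D) fixed class embeddings (e.g. word vectors).
    Returns per-pixel class probabilities of shape (H, W, C).
    """
    scores = features @ class_embeddings.T           # (H, W, C) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)     # for numerical stability
    exp = np.exp(scores)
    return exp / exp.sum(axis=-1, keepdims=True)     # softmax over classes

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 4, 8))       # toy 4x4 feature map, 8-dim features
seen = rng.normal(size=(3, 8))           # embeddings of 3 seen classes
probs = project_to_classes(feats, seen)  # (4, 4, 3) seen-class probabilities

# Zero-label inference: score the same features against unseen-class
# embeddings without retraining the feature extractor.
unseen = rng.normal(size=(2, 8))
probs_unseen = project_to_classes(feats, unseen)  # (4, 4, 2)
```

Because the classifier weights are class embeddings rather than learned per-class parameters, extending to a new class only requires its embedding vector, which is what makes the zero-label setting possible.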
