Towards Mesh-based Deep Learning for Semantic Segmentation in Photogrammetry
| Authors | |
|---|---|
| Publication date | 2021 |
| Host editors | |
| Book title | XXIV ISPRS Congress "Imaging today, foreseeing tomorrow" |
| Book subtitle | Commission II |
| Series | ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences |
| Event | XXIV ISPRS Congress |
| Pages (from-to) | 59-66 |
| Publisher | ISPRS |
| Organisations | |
| Abstract | This research is the first to apply MeshCNN, a deep learning model designed specifically for 3D triangular meshes, in the photogrammetry domain. We highlight the challenges that arise when applying a mesh-based deep learning model to a photogrammetric mesh, especially with respect to data set properties, and we provide solutions for preparing a remotely sensed mesh for a machine learning task. The most notable pre-processing step proposed is a novel application of the Breadth-First Search algorithm for chunking a large mesh into computable pieces. Furthermore, this work extends MeshCNN so that photometric features derived from the mesh texture are considered in addition to the geometric information. Experiments show that including color information improves the predictive performance of the model by a large margin. Moreover, experimental results indicate that segmentation performance could be advanced substantially by the introduction of a high-quality benchmark for semantic segmentation on meshes. |
| Document type | Conference contribution |
| Language | English |
| Published at | https://doi.org/10.5194/isprs-annals-V-2-2021-59-2021 |
| Downloads | isprs-annals-V-2-2021-59-2021 (Final published version) |
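The abstract's Breadth-First Search chunking step can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes faces are given as triples of vertex indices, builds face adjacency over shared edges, and grows connected chunks by BFS until a face budget is reached. The names `build_face_adjacency` and `bfs_chunks` are hypothetical.

```python
from collections import deque, defaultdict

def build_face_adjacency(faces):
    """Map each undirected edge to the faces sharing it, then derive
    face-to-face adjacency. Faces are triples of vertex indices."""
    edge_to_faces = defaultdict(list)
    for fi, (a, b, c) in enumerate(faces):
        for edge in ((a, b), (b, c), (c, a)):
            edge_to_faces[tuple(sorted(edge))].append(fi)
    adjacency = defaultdict(set)
    for shared in edge_to_faces.values():
        for fi in shared:
            for fj in shared:
                if fi != fj:
                    adjacency[fi].add(fj)
    return adjacency

def bfs_chunks(faces, max_faces):
    """Partition face indices into edge-connected chunks of at most
    `max_faces` faces each by breadth-first traversal."""
    adjacency = build_face_adjacency(faces)
    unvisited = set(range(len(faces)))
    chunks = []
    while unvisited:
        seed = next(iter(unvisited))
        unvisited.discard(seed)
        queue = deque([seed])
        chunk = []
        while queue and len(chunk) < max_faces:
            fi = queue.popleft()
            chunk.append(fi)
            for nb in adjacency[fi]:
                if nb in unvisited:
                    unvisited.discard(nb)
                    queue.append(nb)
        # Faces still queued when the budget is hit seed later chunks.
        unvisited.update(queue)
        chunks.append(chunk)
    return chunks
```

For example, a strip of four triangles split with `max_faces=2` yields chunks that cover every face exactly once while respecting the budget, so each piece stays small enough to process independently.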