Walicka, A., & Pfeifer, N. (2023). Deep learning based classification of multinational airborne laser scanning data. In EGU General Assembly 2023. EGU General Assembly 2023, Wien, Austria. https://doi.org/10.5194/egusphere-egu23-8538
E120 - Department of Geodesy and Geoinformation; E120-07 - Research Unit Photogrammetry
Published in:
EGU General Assembly 2023
Date (published):
2023
Event name:
EGU General Assembly 2023
Event date:
23-Apr-2023 - 28-Apr-2023
Event place:
Wien, Austria
Keywords:
airborne laser scanning; deep learning
Abstract:
Airborne laser scanning (ALS) point clouds are commonly acquired to describe the 3D shape of the terrain and the attributes of the objects and landforms located on it. They have proved useful in a variety of applications, including the geometrical characterization of both man-made and natural objects and landforms. Classification is usually the first step of point cloud processing, so its accuracy strongly influences the results of all subsequent processing. Reliable and automatic point cloud classification is therefore of key importance in most ALS data applications.
Recently, deep learning techniques have attracted the attention of the community in the context of point cloud classification. However, the reproducibility of trained deep learning networks remains unexplored, because accessible and precisely classified 3D training data are scarce. Many countries have recently published their national ALS data sets. This initiative opens promising options for deep learning classification of point clouds, as it allows for comprehensive training of deep networks.
In this study we present investigations aimed at creating a universal, deep learning based classifier that is able to classify point clouds of varying characteristics. The experiments were carried out on selected parts of the data sets that have been made available by three European countries: Poland, Austria and Switzerland. The point clouds were classified into four classes: ground and water, vegetation, buildings and bridges, and others. The results show that high overall classification accuracy can be achieved for ground and water (above 98%), vegetation (92-97%, depending on the test site), and buildings and bridges (92-96%, depending on the test site). A lower accuracy was achieved for the class "others" because of the very high variability in the geometry of the objects belonging to it. Furthermore, in some cases, adding training data from a different country to the initial training data improved the classification accuracy of selected classes and reduced dataset-specific errors.
This study thus demonstrates that it is possible to create a universal, deep learning based classifier that maintains high classification accuracy while processing data sets of different characteristics.
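For illustration, a minimal Python sketch of how per-point labels could be merged into the four classes listed above, assuming the input points carry standard ASPRS/LAS class codes; the actual label harmonization used by the authors is not specified in the abstract, so the codes and the mapping below are assumptions.

import numpy as np

# Merged classes used in the abstract: 0 = ground and water, 1 = vegetation,
# 2 = buildings and bridges, 3 = others.
# ASPRS/LAS class code -> merged class index (illustrative assumption only).
ASPRS_TO_MERGED = {
    2: 0,   # ground
    9: 0,   # water
    3: 1,   # low vegetation
    4: 1,   # medium vegetation
    5: 1,   # high vegetation
    6: 2,   # building
    17: 2,  # bridge deck
}

def remap_labels(asprs_labels: np.ndarray) -> np.ndarray:
    # Points with any other code (e.g. unclassified, noise) fall into "others" (3).
    merged = np.full(asprs_labels.shape, 3, dtype=np.int64)
    for code, merged_class in ASPRS_TO_MERGED.items():
        merged[asprs_labels == code] = merged_class
    return merged

# Example: remap_labels(np.array([2, 5, 6, 1, 17, 9])) -> array([0, 1, 2, 3, 2, 0])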
Research Areas:
Environmental Monitoring and Climate Adaptation: 100%