dc.description.abstract | As a common output format of sensors used to scan real-world environments, point clouds are a ubiquitous representation of 3D geometry. Because they are a relatively inefficient, unordered data structure, point cloud processing and compression have been the target of intense research. Notably, this research has focused predominantly on simple object scenes, often with uniform sampling and little noise; this is rarely the case for larger room-scale environments. At the same time, much of the deep learning work on point clouds has concentrated on extracting semantic information, which requires large amounts of training data that do not exist for high-density point clouds. The hypothesis of this thesis is that compression can be achieved by fitting geometric primitives to the point cloud in a supervised manner. Such a system requires datasets with geometric ground-truth information and a realistic noise distribution. In this thesis, the noise distribution of room-scale point clouds is explored and a virtual VLP-16 LiDAR scanner with a realistic error model is implemented. Existing point cloud compression approaches are evaluated on noisy room-scale data. The results suggest that Gaussian noise applied to the distance between the sample and the sensor is an adequate approximation given the right parameters. The performance of existing signal-processing-based compression approaches does not significantly degrade on noisy room-scale data; however, their neural-network-based counterparts struggle with performance variability. Exploiting geometric information and advances in deep learning on point clouds for compression is an important area for further study. | eng |