We introduce TreeStructor, a novel approach for isolating and reconstructing forest trees. The key novelty is a deep neural model that uses neural ranking to assign pre-generated connectable 3D geometries to a point cloud. TreeStructor is trained on a large set of synthetically generated point clouds. The input to our method is a forest point cloud that we first decompose into point clouds that approximately represent trees and then into point clouds that represent their parts. We use a point cloud encoder-decoder to compute embedding vectors that retrieve the best-fitting surface mesh for each tree-part point cloud from a large set of predefined branch parts. Finally, the retrieved meshes are connected and oriented to obtain individual surface meshes of all trees represented by the forest point cloud. We qualitatively and quantitatively validate that our method can reconstruct forest trees with unprecedented accuracy and visual fidelity. TreeStructor outperforms the state-of-the-art reconstruction method by around 6% on quantitative metrics and achieves around 12% lower error than QSM on low-quality scanned data.
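To make the retrieval step concrete, the sketch below illustrates embedding-based ranking of pre-generated branch parts: a part point cloud is mapped to an embedding vector, and the best-fitting library mesh is selected by nearest-neighbor search in embedding space. The function and class names (`encode_part`, `retrieve_best_part`) and the hand-crafted placeholder encoder are assumptions for illustration only, not the authors' actual model or API.

```python
# Minimal sketch of embedding-based part retrieval, assuming a hypothetical
# encoder. The real method uses a learned point-cloud encoder-decoder; here a
# trivial hand-crafted feature stands in so the example runs end to end.
import numpy as np

def encode_part(points: np.ndarray) -> np.ndarray:
    """Placeholder encoder: maps an (N, 3) part point cloud to a fixed-size
    embedding (centroid + bounding-box extent)."""
    centroid = points.mean(axis=0)
    extent = points.max(axis=0) - points.min(axis=0)
    return np.concatenate([centroid, extent])

def retrieve_best_part(query_points: np.ndarray,
                       library_embeddings: np.ndarray) -> int:
    """Return the index of the library part whose embedding is closest
    (by cosine similarity) to the embedding of the query point cloud."""
    q = encode_part(query_points)
    q = q / (np.linalg.norm(q) + 1e-8)
    lib = library_embeddings / (
        np.linalg.norm(library_embeddings, axis=1, keepdims=True) + 1e-8)
    scores = lib @ q  # cosine similarity against every library part
    return int(np.argmax(scores))

# Toy usage: a small "library" of part embeddings and one query part cloud.
rng = np.random.default_rng(0)
library = np.stack([encode_part(rng.normal(size=(64, 3))) for _ in range(4)])
query = rng.normal(size=(64, 3))
print("best-fitting part index:", retrieve_best_part(query, library))
```

In the full pipeline, the retrieved meshes would then be connected and oriented to assemble per-tree surface meshes, as described in the abstract.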
@ARTICLE{10950450,
author={Zhou, Xiaochen and Li, Bosheng and Benes, Bedrich and Habib, Ayman and Fei, Songlin and Shao, Jinyuan and Pirk, Sören},
journal={IEEE Transactions on Geoscience and Remote Sensing},
title={TreeStructor: Forest Reconstruction with Neural Ranking},
year={2025},
volume={},
number={},
pages={1-1},
keywords={Vegetation;Point cloud compression;Forestry;Solid modeling;Three-dimensional displays;Vegetation mapping;Skeleton;Image reconstruction;Vectors;Laser radar;Neural Networks;Forest Modeling;3D Reconstruction;Remote Sensing},
doi={10.1109/TGRS.2025.3558312}}