Citation: Yongwei Miao, Weihao Gao, Ran Fan, Fuchang Liu. An Unsupervised Detail-Preserving Point Cloud Completion Network Guided by Projection Views (recommended from GDC 2023)[J]. Journal of Computer-Aided Design & Computer Graphics. DOI: 10.3724/SP.J.1089.2023-00441

An Unsupervised Detail-Preserving Point Cloud Completion Network Guided by Projection Views (recommended from GDC 2023)


    Abstract: Traditional supervised methods for point cloud shape completion typically require complete point cloud data as a prior, which often leads to poor generalization and low robustness of the completion network. Meanwhile, the completion results produced by existing unsupervised learning approaches often deviate from the input shape itself, making it difficult to recover the fine details of the original shape. Based on the generative adversarial network (GAN) framework, an unsupervised detail-preserving point cloud completion network is proposed, guided by the feature information of three projection views obtained from the underlying shape. The point cloud completion branch of the network adopts a point cloud generator to initially generate complete shape data, which recovers the overall structure of the input shape but loses its shape details. To compensate for the detail information of the input shape, the projection-view completion branch first obtains the three-view projection images of the missing point cloud shape. Then, a generator with a tree-structured graph convolution module is adopted to repair the three projection images of the underlying shape, obtaining completed views with detail preservation. Next, the feature extraction network ResNet-18 is applied to extract image features from the generated projection views; the feature distances between the three views are computed and the features are aligned. The feature distance between these aligned image features and the shape features extracted from the generated point cloud is then calculated and added to the loss of the discriminator, which judges whether the generated point cloud is real or fake. Finally, the parameters of the point cloud generator are optimized to generate a complete 3D shape with detail preservation.
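The three-view projection step described above can be sketched as follows. This is a minimal illustration that renders binary occupancy images over the three orthogonal coordinate planes; the paper's actual projection rendering may differ, and the helper name `three_view_projections` is a hypothetical choice for this sketch:

```python
import numpy as np

def three_view_projections(points, resolution=64):
    """Project a point cloud onto the three orthogonal coordinate
    planes (front, side, top), producing binary occupancy images.

    points: (N, 3) array of xyz coordinates.
    Returns a list of three (resolution, resolution) uint8 images.
    """
    # Normalize the cloud into the unit cube [0, 1].
    mins = points.min(axis=0)
    extent = np.maximum(points.max(axis=0) - mins, 1e-8)
    p = (points - mins) / extent
    # Quantize each coordinate to a pixel index.
    idx = np.clip((p * resolution).astype(int), 0, resolution - 1)

    views = []
    for drop_axis in range(3):  # drop x, y, z in turn
        keep = [a for a in range(3) if a != drop_axis]
        img = np.zeros((resolution, resolution), dtype=np.uint8)
        img[idx[:, keep[0]], idx[:, keep[1]]] = 1  # mark occupied pixels
        views.append(img)
    return views
```

In the full pipeline, such projection images of the partial input would be fed to the image-completion branch, while richer renderings (e.g., depth maps) could be substituted without changing the interface.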
The proposed unsupervised network is trained on the ShapeNet dataset for the shape completion task and validated on the KITTI and ModelNet40 datasets. Compared with existing unsupervised completion networks, the average CD error of our network is reduced by 11% to 41% and the average F1-score is improved by 0.8% to 14%, demonstrating its effectiveness in repairing the shape structure of the input point cloud and recovering its shape details. In addition, our completion network is robust to different degrees of data incompleteness and to model noise, and generalizes well to unseen objects.
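The evaluation metrics reported above can be sketched as follows: a minimal NumPy illustration of the symmetric Chamfer Distance (CD) and the threshold-based F1-score commonly used for point cloud completion. The exact normalization and distance threshold used in the paper are assumptions here:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between point sets a (N, 3) and b (M, 3):
    mean squared nearest-neighbor distance in both directions."""
    # Pairwise squared distances, shape (N, M).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def f1_score(pred, gt, threshold=0.01):
    """F1-score at a distance threshold: harmonic mean of precision
    (fraction of predicted points near ground truth) and recall
    (fraction of ground-truth points near the prediction)."""
    d2 = ((pred[:, None, :] - gt[None, :, :]) ** 2).sum(-1)
    precision = (np.sqrt(d2.min(axis=1)) < threshold).mean()
    recall = (np.sqrt(d2.min(axis=0)) < threshold).mean()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Lower CD indicates the completed cloud lies closer to the ground truth, while a higher F1-score indicates better coverage in both directions; the relative improvements quoted above are computed from such per-shape metrics averaged over the test set.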
