| Downloads | Citations | Reads |
| 38 | 0 | 143 |
Geometry-based point cloud compression (G-PCC) effectively reduces the network bandwidth and storage required for point cloud transmission, but the quality of the reconstructed point cloud often degrades significantly because points disappear. This paper proposes a multi-branch geometry post-processing method for G-PCC point cloud geometry, which extracts multi-scale geometric features and, at each scale, applies a k-nearest-neighbor-based max pooling layer to aggregate geometric neighborhood information, thereby predicting voxel occupancy probabilities for more accurate point cloud reconstruction. Under the common test conditions recommended by the Moving Picture Experts Group (MPEG), the method achieves average D1 (D2) BD-Rate gains of 91.89% (84.57%) and 75.24% (73.51%) over G-PCC (octree) and G-PCC (trisoup), respectively; an average D1 (D2) BD-Rate gain of 76.78% (70.37%) over the traditional LUT method; and an average BD-Rate gain of 23.95% (21.41%) over the deep-learning-based DGPP method. Moreover, the method has lower complexity than existing learning-based methods and thus broader application prospects.
Abstract: Geometry-based point cloud compression (G-PCC) effectively reduces the bandwidth and storage requirements for point cloud transmission. However, the quality of the reconstructed point cloud often degrades significantly due to point disappearance. To address this issue, this paper proposes a multi-branch geometry post-processing method for G-PCC compressed point clouds. The method extracts multi-scale geometric features and employs a k-nearest-neighbor (kNN) based max pooling layer at each scale to aggregate geometric neighborhood information, thereby predicting voxel occupancy probabilities for more accurate point cloud reconstruction. Evaluated under the Moving Picture Experts Group (MPEG) common test conditions, the proposed method achieves average BD-Rate gains of 91.89% (84.57%) and 75.24% (73.51%) on the D1 (D2) metric compared with G-PCC (octree) and G-PCC (trisoup), respectively. It also obtains average D1 (D2) BD-Rate gains of 76.78% (70.37%) over the traditional LUT method and 23.95% (21.41%) over the deep-learning-based DGPP method. Furthermore, the proposed approach has lower computational complexity than existing learning-based methods, indicating strong potential for practical applications.
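The kNN-based max-pooling aggregation described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's network (which operates on multi-scale voxel features); the function name and the brute-force neighbor search are assumptions made for clarity.

```python
import numpy as np

def knn_max_pool(points, features, k=4):
    """Max-pool each point's features over its k nearest neighbors.

    points:   (N, 3) array of point coordinates
    features: (N, C) array of per-point features
    Returns a (N, C) array of aggregated neighborhood features.
    """
    # Brute-force pairwise squared Euclidean distances, shape (N, N)
    diff = points[:, None, :] - points[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    # For each point, indices of its k nearest neighbors (itself included)
    idx = np.argsort(d2, axis=1)[:, :k]
    # Gather neighbor features (N, k, C) and reduce by channel-wise max
    return features[idx].max(axis=1)
```

In a multi-branch design, such an aggregation would be applied independently at each feature scale, with the pooled features feeding a classifier that scores voxel occupancy.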
[1] SCHWARZ S,PREDA M,BARONCINI V,et al.Emerging MPEG standards for point cloud compression[J].IEEE J Emerg Sel Top Circuits Syst,2019,9(1):133-148.
[2] GRAZIOSI D,NAKAGAMI O,KUMA S,et al.An overview of ongoing point cloud compression standardization activities:video-based (V-PCC) and geometry-based (G-PCC)[J].APSIPA Trans Signal Inf Process,2020,9(1):e13.
[3] BORGES T M,GARCIA D C,QUEIROZ R L.Fractional super-resolution of voxelized point clouds[J].IEEE Trans Image Process,2022,31:1380-1390.
[4] ALEXA M,BEHR J,COHEN-OR D,et al.Computing and rendering point set surfaces[J].IEEE Trans Vis Comput Graph,2003,9(1):3-15.
[5] HUANG H,WU S H,GONG M L,et al.Edge-aware point set resampling[J].ACM Trans Graph,2013,32(1):1-12.
[6] MAO A H,DU Z H,HOU J H,et al.PU-Flow:a point cloud upsampling network with normalizing flows[J].IEEE Trans Vis Comput Graph,2023,29(12):4964-4977.
[7] FAN X Q,LI G,LI D Q,et al.Deep geometry post-processing for decompressed point clouds[C]//2022 IEEE International Conference on Multimedia and Expo.Taipei:IEEE,2022:1-6.
[8] AKHTAR A,LI Z,VAN DER AUWERA G,et al.PU-Dense:sparse tensor-based point cloud geometry upsampling[J].IEEE Trans Image Process,2022,31:4133-4148.
[9] YUAN W T,KHOT T,HELD D,et al.PCN:point completion network[C]//2018 International Conference on 3D Vision.Verona:IEEE,2018:728-737.
[10] SULLIVAN G J,OHM J R,HAN W J,et al.Overview of the high efficiency video coding (HEVC) standard[J].IEEE Trans Circuits Syst Video Technol,2012,22(12):1649-1668.
[11] BROSS B,CHEN J L,OHM J R,et al.Developments in international video coding standardization after AVC,with an overview of versatile video coding (VVC)[J].Proc IEEE,2021,109(9):1463-1493.
[12] BROSS B,WANG Y K,YE Y,et al.Overview of the versatile video coding (VVC) standard and its applications[J].IEEE Trans Circuits Syst Video Technol,2021,31(10):3736-3764.
[13] YU L Q,LI X Z,FU C W,et al.PU-Net:point cloud upsampling network[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Salt Lake City:IEEE,2018:2790-2799.
[14] YU L Q,LI X Z,FU C W,et al.EC-Net:an edge-aware point set consolidation network[C]//15th European Conference on Computer Vision.Cham:Springer,2018:398-414.
[15] WANG Y F,WU S H,HUANG H,et al.Patch-based progressive 3D point set upsampling[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Long Beach:IEEE,2019:5951-5960.
[16] LI R H,LI X Z,FU C W,et al.PU-GAN:a point cloud upsampling adversarial network[C]//2019 IEEE/CVF International Conference on Computer Vision.Seoul:IEEE,2019:7202-7211.
[17] QIAN G C,ABUALSHOUR A,LI G H,et al.PU-GCN:point cloud upsampling using graph convolutional networks[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Nashville:IEEE,2021:11678-11687.
[18] PANG J H,LODHI M A,TIAN D.GRASP-Net:geometric residual analysis and synthesis for point cloud compression[C]//Proceedings of the 1st International Workshop on Advances in Point Cloud Compression,Processing and Analysis.New York:Association for Computing Machinery,2022:11-19.
[19] WANG J Q,DING D D,LI Z,et al.Multiscale point cloud geometry compression[C]//2021 Data Compression Conference.Snowbird:IEEE,2021:73-82.
[20] CHOY C,GWAK J,SAVARESE S.4D spatio-temporal ConvNets:Minkowski convolutional neural networks[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Long Beach:IEEE,2019:3070-3079.
Basic information:
DOI: 10.19926/j.cnki.issn.1674-232X.2023.09.151
CLC number: TP391.41; TP18
Citation:
[1] QIAN Y J,DING D D.Multi-branch based point cloud geometry post-processing method[J].Journal of Hangzhou Normal University (Natural Science Edition),2025,24(06):664-672.DOI:10.19926/j.cnki.issn.1674-232X.2023.09.151.
2025-11-30