RefRec: Pseudo-labels Refinement via Shape Reconstruction
for Unsupervised 3D Domain Adaptation
3DV 2021 (oral)

Department of Computer Science and Engineering (DISI)
University of Bologna, Italy

Abstract


Unsupervised Domain Adaptation (UDA) for point cloud classification is an emerging research problem with relevant practical motivations. Reliance on multi-task learning to align features across domains has been the standard way to tackle it. In this paper, we take a different path and propose RefRec, the first approach to investigate pseudo-labels and self-training in UDA for point clouds. We present two main innovations to make self-training effective on 3D data: i) refinement of noisy pseudo-labels by matching shape descriptors that are learned by the unsupervised task of shape reconstruction on both domains; ii) a novel self-training protocol that learns domain-specific decision boundaries and reduces the negative impact of mislabelled target samples and in-domain intra-class variability. RefRec sets the new state of the art in both standard benchmarks used to test UDA for point cloud classification, showcasing the effectiveness of self-training for this important problem.

Method overview


Our method comprises three steps. First, in the pseudo-label warm-up step, we train a reconstruction network $\Phi_{rec}$ on both the source and target domains. The weights of its encoder are used to initialize the backbone $\Phi^{w}_{cls}$ of a classifier, which is then trained on the source domain. In the refinement step, we use this classifier to split target samples into an easy set ($\mathcal{E}$) and a hard set ($\mathcal{H}$) according to their prediction confidence, and we refine their pseudo-labels by performing nearest neighbor queries in the auto-encoder feature space. Finally, in the self-training step, we train a target-specific classifier $\Psi_t$ with the refined pseudo-labels and with online pseudo-labels obtained via a mean teacher architecture.
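To make the refinement step concrete, below is a minimal PyTorch sketch of one plausible instantiation: each low-confidence (hard) target sample inherits the pseudo-label of its nearest high-confidence (easy) neighbor in the auto-encoder feature space. Function and variable names (phi_rec_encoder, phi_cls, conf_thresh) and the easy-to-hard matching rule are illustrative assumptions, not the official RefRec implementation.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def refine_pseudo_labels(phi_rec_encoder, phi_cls, target_points, conf_thresh=0.9):
        # Initial pseudo-labels and confidences from the source-trained classifier
        probs = F.softmax(phi_cls(target_points), dim=1)         # (N, num_classes)
        conf, pseudo_labels = probs.max(dim=1)

        # Shape descriptors from the reconstruction encoder, L2-normalized
        feats = F.normalize(phi_rec_encoder(target_points), dim=1)

        easy = conf >= conf_thresh   # E: confident target samples
        hard = ~easy                 # H: unreliable target samples

        if easy.any() and hard.any():
            # Assign each hard sample the label of its nearest easy neighbor
            # in the auto-encoder feature space (cosine similarity).
            sim = feats[hard] @ feats[easy].t()                  # (|H|, |E|)
            pseudo_labels[hard] = pseudo_labels[easy][sim.argmax(dim=1)]

        return pseudo_labels, easy

For the self-training step, the online pseudo-labels come from a mean teacher, i.e. a copy of the target classifier whose weights track the student through an exponential moving average. A minimal update, with an assumed decay value, could look like this:

    @torch.no_grad()
    def update_teacher(student, teacher, ema_decay=0.999):
        # teacher <- decay * teacher + (1 - decay) * student
        for p_s, p_t in zip(student.parameters(), teacher.parameters()):
            p_t.mul_(ema_decay).add_(p_s, alpha=1.0 - ema_decay)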

Results

Shape classification accuracy on the PointDA-10 dataset.
Method         MN→SN   MN→SC   SN→MN   SN→SC   SC→MN   SC→SN   Avg.
No Adaptation  80.2    43.1    75.8    40.7    63.2    67.2    61.7
PointDAN       80.2    45.3    71.2    46.9    59.8    66.2    61.6
DefRec         80.0    46.0    68.5    41.7    63.0    68.2    61.2
DefRec+PCM     81.1    50.3    54.3    52.8    54.0    69.0    60.3
3D Puzzle      81.6    49.7    73.6    41.9    65.9    68.1    63.5
RefRec         81.4    56.5    85.4    53.3    73.0    73.1    70.5
Oracle         93.2    64.2    95.0    64.2    95.0    93.2    -

(MN = ModelNet, SN = ShapeNet, SC = ScanNet)
Shape classification accuracy on the ScanObjectNN dataset.
Method         ModelNet → ScanObjectNN
No Adaptation  49.6
PointDAN       56.4
3D Puzzle      58.5
RefRec         61.3

Citation

    @inproceedings{cardace2021refrec,
      title        = {RefRec: Pseudo-labels Refinement via Shape Reconstruction for Unsupervised 3D Domain Adaptation},
      author       = {Cardace, Adriano and Spezialetti, Riccardo and Zama Ramirez, Pierluigi and Salti, Samuele and Di Stefano, Luigi},
      booktitle    = {2021 International Conference on 3D Vision (3DV)},
      year         = {2021},
      organization = {IEEE}
    }

Acknowledgements

We gratefully acknowledge NVIDIA Corporation for the donation of the Titan RTX 2080Ti GPU used in this work. We would also like to thank Injenia srl for partly supporting this research.