RefRec: Pseudo-labels Refinement via Shape Reconstruction for Unsupervised 3D Domain Adaptation (3DV 2021, oral)
Abstract
Unsupervised Domain Adaptation (UDA) for point cloud classification is an emerging research problem with relevant practical motivations. Reliance on multi-task learning to align features across domains has been the standard way to tackle it. In this paper, we take a different path and propose RefRec, the first approach to investigate pseudo-labels and self-training in UDA for point clouds. We present two main innovations to make self-training effective on 3D data: i) refinement of noisy pseudo-labels by matching shape descriptors that are learned by the unsupervised task of shape reconstruction on both domains; ii) a novel self-training protocol that learns domain-specific decision boundaries and reduces the negative impact of mislabelled target samples and in-domain intra-class variability. RefRec sets the new state of the art in both standard benchmarks used to test UDA for point cloud classification, showcasing the effectiveness of self-training for this important problem.
Method overview
Our method comprises three steps. First, in the pseudo-label warm-up step, we train a reconstruction network $\Phi_{rec}$ on both the source and target domains. The weights of its encoder are used to initialize the backbone $\Phi^{w}_{cls}$ of a classifier, which is then trained on the source domain. In the refinement step, we use this classifier to split target samples into an easy ($\mathcal{E}$) set and a hard ($\mathcal{H}$) set according to classifier confidence, and we refine their pseudo-labels by performing nearest-neighbor queries in the auto-encoder feature space. Finally, in the self-training step, we train a target-specific classifier $\Psi_t$ with the refined pseudo-labels and with online pseudo-labels obtained from a mean teacher architecture.
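The sketch below illustrates the refinement step in PyTorch, assuming target samples have already been scored by the source-trained classifier and encoded by the reconstruction encoder. The function name `refine_pseudo_labels`, the confidence threshold `tau`, the cosine-similarity metric, and the choice of inheriting the label of a single nearest easy neighbor are our own illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def refine_pseudo_labels(features, probs, tau=0.8):
    """Illustrative pseudo-label refinement for target samples.

    features: (N, D) shape descriptors from the reconstruction encoder Phi_rec
    probs:    (N, C) softmax outputs of the source-trained classifier on target data
    tau:      confidence threshold splitting easy (E) from hard (H) samples (assumed value)
    Returns refined pseudo-labels of shape (N,).
    """
    conf, pseudo = probs.max(dim=1)       # initial pseudo-labels and their confidences
    easy = conf >= tau                    # E: confident samples keep their pseudo-label
    hard = ~easy                          # H: pseudo-labels to be refined
    assert easy.any(), "need at least one confident sample to query against"

    feat = F.normalize(features, dim=1)   # cosine similarity in descriptor space
    sim = feat[hard] @ feat[easy].t()     # (|H|, |E|) similarities to easy samples
    nn_idx = sim.argmax(dim=1)            # nearest easy neighbor of each hard sample

    refined = pseudo.clone()
    refined[hard] = pseudo[easy][nn_idx]  # hard samples inherit the neighbor's label
    return refined
```

In practice one would feed this function the target-domain descriptors produced by $\Phi_{rec}$'s encoder and the softmax scores of $\Phi^{w}_{cls}$; the refined labels then serve as supervision for the target-specific classifier $\Psi_t$ in the self-training step.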
Results
Shape classification accuracy on the PointDA-10 dataset.

Method | ModelNet to ShapeNet | ModelNet to ScanNet | ShapeNet to ModelNet | ShapeNet to ScanNet | ScanNet to ModelNet | ScanNet to ShapeNet | Average
---|---|---|---|---|---|---|---
No Adaptation | 80.2 | 43.1 | 75.8 | 40.7 | 63.2 | 67.2 | 61.7
PointDAN | 80.2 | 45.3 | 71.2 | 46.9 | 59.8 | 66.2 | 61.6
DefRec | 80.0 | 46.0 | 68.5 | 41.7 | 63.0 | 68.2 | 61.2
DefRec+PCM | 81.1 | 50.3 | 54.3 | 52.8 | 54.0 | 69.0 | 60.3
3D Puzzle | 81.6 | 49.7 | 73.6 | 41.9 | 65.9 | 68.1 | 63.5
RefRec | 81.4 | 56.5 | 85.4 | 53.3 | 73.0 | 73.1 | 70.5
Oracle | 93.2 | 64.2 | 95.0 | 64.2 | 95.0 | 93.2 | -
Shape classification accuracy on the ModelNet to ScanObjectNN benchmark.

Method | ModelNet to ScanObjectNN
---|---
No Adaptation | 49.6 |
PointDAN | 56.4 |
3D Puzzle | 58.5 |
RefRec | 61.3 |