Connecting NeRFs, Images, and Text
CVPRW–INRV 2024

Department of Computer Science and Engineering (DISI)
University of Bologna, Italy

Abstract

Neural Radiance Fields (NeRFs) have emerged as a standard framework for representing 3D scenes and objects, introducing a novel data type for information exchange and storage. Concurrently, significant progress has been made in multimodal representation learning for text and image data. This paper explores a novel research direction that aims to connect the NeRF modality with other modalities, similar to established methodologies for images and text. To this end, we propose a simple framework that exploits pre-trained models for NeRF representations alongside multimodal models for text and image processing. Our framework learns a bidirectional mapping between NeRF embeddings and those obtained from corresponding images and text. This mapping unlocks several novel and useful applications, including NeRF zero-shot classification and NeRF retrieval from images or text.

Applications of our framework


Method

To learn a bidirectional mapping between images or text and NeRFs, we train two MLPs: one maps CLIP image embeddings to \({\tt nf2vec}\) NeRF embeddings, while the other computes the mapping in the opposite direction.
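A minimal sketch of the two mapping networks, assuming standard embedding sizes (512 for CLIP ViT-B/32; 1024 for \({\tt nf2vec}\) is an assumption) and an MSE regression loss, which may differ from the objective actually used:

```python
import torch
import torch.nn as nn

# Assumed embedding sizes: CLIP ViT-B/32 outputs 512-d vectors;
# the nf2vec embedding size of 1024 is a stand-in value.
CLIP_DIM, NERF_DIM = 512, 1024

def make_mlp(in_dim: int, out_dim: int, hidden: int = 1024) -> nn.Module:
    """A simple MLP mapping one embedding space to the other."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden),
        nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

clip_to_nerf = make_mlp(CLIP_DIM, NERF_DIM)  # CLIP -> nf2vec direction
nerf_to_clip = make_mlp(NERF_DIM, CLIP_DIM)  # nf2vec -> CLIP direction

# One training step on a batch of paired embeddings.
opt = torch.optim.Adam(
    list(clip_to_nerf.parameters()) + list(nerf_to_clip.parameters()), lr=1e-4
)
clip_emb = torch.randn(8, CLIP_DIM)  # stand-in for CLIP image embeddings
nerf_emb = torch.randn(8, NERF_DIM)  # stand-in for nf2vec NeRF embeddings
loss = nn.functional.mse_loss(clip_to_nerf(clip_emb), nerf_emb) \
     + nn.functional.mse_loss(nerf_to_clip(nerf_emb), clip_emb)
opt.zero_grad()
loss.backward()
opt.step()
```

Each MLP is trained on pairs of embeddings computed from the same object, so at inference time a single forward pass moves a query from one embedding space to the other.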

Training procedure


Zero-shot NeRF classification

To perform zero-shot NeRF classification, we map a NeRF embedding into a CLIP embedding and use the latter to query a gallery of textual embeddings identifying classes.

Our method performs better than baselines that use CLIP embeddings of several NeRF views as queries, and does so without requiring any rendering step.
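The classification step reduces to a nearest-neighbor search by cosine similarity. The following is an illustrative sketch, with random tensors standing in for real embeddings and a linear layer standing in for the trained MLP:

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(nerf_emb, nerf_to_clip, text_gallery):
    """Map a NeRF embedding into CLIP space, then return the index of the
    closest class-text embedding by cosine similarity."""
    q = F.normalize(nerf_to_clip(nerf_emb), dim=-1)  # (D,)
    g = F.normalize(text_gallery, dim=-1)            # (C, D)
    return int(torch.argmax(g @ q))                  # best-matching class

# Toy example: 10 classes, random stand-in embeddings.
torch.manual_seed(0)
nerf_to_clip = torch.nn.Linear(1024, 512)  # stand-in for the trained MLP
text_gallery = torch.randn(10, 512)        # e.g. CLIP("a photo of a <class>")
pred = zero_shot_classify(torch.randn(1024), nerf_to_clip, text_gallery)
```

In practice, `text_gallery` would hold CLIP text embeddings of prompts built from the class names, as is standard in CLIP zero-shot classification.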


Zero-shot NeRF classification

NeRF retrieval (from images or text)

We perform NeRF retrieval from either images or text by mapping a CLIP embedding into a NeRF embedding and using the latter to query a gallery of NeRF embeddings.

Our method shows promising results with both real images and text as queries.
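Retrieval follows the same pattern in the opposite direction: project the CLIP query into \({\tt nf2vec}\) space and rank the gallery by distance. A sketch with hypothetical dimensions and a stand-in for the trained MLP:

```python
import torch

def retrieve_nerfs(clip_emb, clip_to_nerf, nerf_gallery, k=3):
    """Map a CLIP (image or text) embedding into nf2vec space and return
    the indices of the k nearest NeRF embeddings by L2 distance."""
    q = clip_to_nerf(clip_emb)                     # (D,)
    dist = torch.cdist(q.unsqueeze(0), nerf_gallery)  # (1, N)
    return torch.topk(dist, k, largest=False).indices.squeeze(0)

torch.manual_seed(0)
clip_to_nerf = torch.nn.Linear(512, 1024)  # stand-in for the trained MLP
gallery = torch.randn(100, 1024)           # gallery of nf2vec embeddings
idx = retrieve_nerfs(torch.randn(512), clip_to_nerf, gallery)
```

Because the gallery consists of precomputed NeRF embeddings, retrieval requires no rendering of the stored NeRFs.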

NeRF retrieval

NeRF generation (from images or text)

Our method also allows generating novel NeRF views from both images and textual descriptions.


NeRF generation


Cite us

@inproceedings{ballerini2024clip2nerf,
  title     = {Connecting {NeRFs}, Images, and Text},
  author    = {Ballerini, Francesco 
               and Zama Ramirez, Pierluigi 
               and Mirabella, Roberto  
               and Salti, Samuele
               and Di Stefano, Luigi},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
  year      = {2024}
}