Weight Space Representation Learning on Diverse NeRF Architectures

University of Bologna, Italy
ICLR 2026

Our framework learns architecture-agnostic representations of NeRFs by processing their parameters as input. This enables downstream tasks to be performed on NeRFs independently of their neural parameterization and without reconstructing the underlying 3D object.

Abstract

Neural Radiance Fields (NeRFs) have emerged as a groundbreaking paradigm for representing 3D objects and scenes by encoding shape and appearance information into the weights of a neural network. Recent studies have demonstrated that these weights can be used as input for frameworks designed to address deep learning tasks; however, such frameworks require NeRFs to adhere to a specific, predefined architecture. In this paper, we introduce the first framework capable of processing NeRFs with diverse architectures and performing inference on architectures unseen at training time. We achieve this by training a Graph Meta-Network within an unsupervised representation learning framework, and show that a contrastive objective is conducive to obtaining an architecture-agnostic latent space. In experiments conducted across 13 NeRF architectures belonging to three families (MLPs, tri-planes, and, for the first time, hash tables), our approach demonstrates robust performance in classification, retrieval, and language tasks involving multiple architectures, including ones unseen at training time, while also matching or exceeding the results of existing frameworks limited to single architectures.
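The contrastive objective mentioned in the abstract can be illustrated with an InfoNCE-style loss: embeddings of the same NeRF object encoded from two different architectures form a positive pair, while embeddings of other objects in the batch act as negatives. The sketch below is a minimal NumPy illustration of this general idea; the function name, temperature value, and loss details are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """Illustrative InfoNCE loss between two batches of embeddings.

    z_a, z_b: (N, D) arrays; row i of each is an embedding of the same
    underlying object (e.g., one NeRF encoded from two architectures).
    """
    # L2-normalize so the dot product is cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature            # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives lie on the diagonal: row i of z_a should match row i of z_b
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss pulls embeddings of the same object from different architectures together while pushing apart embeddings of different objects, which is one standard way to encourage an architecture-agnostic latent space.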

BibTeX

@inproceedings{ballerini2026weight,
  title     = {Weight Space Representation Learning on Diverse {NeRF} Architectures},
  author    = {Ballerini, Francesco and Zama Ramirez, Pierluigi and Di Stefano, Luigi and Salti, Samuele},
  booktitle = {The Fourteenth International Conference on Learning Representations},
  year      = {2026}
}