<div class="csl-bib-body">
<div class="csl-entry">Koch, S., Wald, J., Colosi, M., Vaskevicius, N., Hermosilla, P., Tombari, F., & Ropinski, T. (2025). RelationField: Relate Anything in Radiance Fields. In <i>2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</i> (pp. 21706–21716). IEEE. https://doi.org/10.1109/CVPR52734.2025.02022</div>
</div>
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/223673
-
dc.description.abstract
Neural radiance fields are an emerging 3D scene representation and have recently even been extended to learn features for scene understanding by distilling open-vocabulary features from vision-language models. However, current methods primarily focus on object-centric representations, supporting object segmentation or detection, while understanding semantic relationships between objects remains largely unexplored. To address this gap, we propose RelationField, the first method to extract inter-object relationships directly from neural radiance fields. RelationField represents relationships between objects as pairs of rays within a neural radiance field, effectively extending its formulation to include implicit relationship queries. To teach RelationField complex, open-vocabulary relationships, relationship knowledge is distilled from multi-modal LLMs. To evaluate RelationField, we solve open-vocabulary 3D scene graph generation tasks and relationship-guided instance segmentation, achieving state-of-the-art performance in both tasks. See the project website at relationfield.github.io.
en
dc.language.iso
en
-
dc.subject
3d scene graph
en
dc.subject
3d scene understanding
en
dc.subject
open-vocabulary
en
dc.subject
radiance fields
en
dc.subject
spatial understanding
en
dc.title
RelationField: Relate Anything in Radiance Fields
en
dc.type
Inproceedings
en
dc.type
Konferenzbeitrag
de
dc.contributor.affiliation
University of Ulm (Ulm, DE)
-
dc.contributor.affiliation
Technical University of Munich, Germany
-
dc.contributor.affiliation
University of Ulm (Ulm, DE)
-
dc.relation.isbn
979-8-3315-4364-8
-
dc.relation.doi
10.1109/CVPR52734.2025
-
dc.relation.issn
1063-6919
-
dc.description.startpage
21706
-
dc.description.endpage
21716
-
dc.type.category
Full-Paper Contribution
-
dc.relation.eissn
2575-7075
-
tuw.booktitle
2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
-
tuw.peerreviewed
true
-
tuw.relation.publisher
IEEE
-
tuw.researchTopic.id
I5
-
tuw.researchTopic.name
Visual Computing and Human-Centered Technology
-
tuw.researchTopic.value
100
-
tuw.publication.orgunit
E193-01 - Forschungsbereich Computer Vision
-
tuw.publisher.doi
10.1109/CVPR52734.2025.02022
-
dc.description.numberOfPages
11
-
tuw.author.orcid
0009-0007-5777-3206
-
tuw.author.orcid
0000-0001-8141-2725
-
tuw.author.orcid
0000-0002-1409-5114
-
tuw.author.orcid
0000-0002-7857-5512
-
tuw.event.name
2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
-
tuw.event.startdate
10-06-2025
-
tuw.event.enddate
17-06-2025
-
tuw.event.online
On Site
-
tuw.event.type
Event for scientific audience
-
tuw.event.place
Nashville
-
tuw.event.country
US
-
tuw.event.presenter
Koch, Sebastian
-
wb.sciencebranch
Informatik
-
wb.sciencebranch
Mathematik
-
wb.sciencebranch.oefos
1020
-
wb.sciencebranch.oefos
1010
-
wb.sciencebranch.value
90
-
wb.sciencebranch.value
10
-
item.openairetype
conference paper
-
item.openairecristype
http://purl.org/coar/resource_type/c_5794
-
item.cerifentitytype
Publications
-
item.languageiso639-1
en
-
item.grantfulltext
none
-
item.fulltext
no Fulltext
-
crisitem.author.dept
University of Ulm (Ulm, DE)
-
crisitem.author.dept
E193-01 - Forschungsbereich Computer Vision
-
crisitem.author.dept
Technical University of Munich, Germany
-
crisitem.author.dept
Universität Ulm
-
crisitem.author.orcid
0009-0007-5777-3206
-
crisitem.author.orcid
0000-0002-6435-7709
-
crisitem.author.orcid
0000-0001-8141-2725
-
crisitem.author.orcid
0000-0002-1409-5114
-
crisitem.author.orcid
0000-0002-7857-5512
-
crisitem.author.parentorg
E193 - Institut für Visual Computing and Human-Centered Technology