Towards Semantically-Aware Few-Shot 3D Reconstruction
DOI:
https://doi.org/10.2195/lj_proc_wei_en_202510_01
Keywords:
3D reconstruction, semantic awareness, deep learning, occlusion handling, scene understanding
Abstract
Acquiring rich object-level information, including shape, texture, and geometry, is a fundamental building block across multiple domains. In this context, few-shot reconstruction has become a prominent research field because it achieves 3D reconstruction from a limited set of input images. By leveraging prior knowledge encoded within a trained neural network, these methods can recover unseen features beyond the information contained in the recorded sensor data. However, current approaches either model the entire environment without emphasizing specific regions of interest, or restrict the process to the target object by completely neglecting the surrounding context in a preprocessing step. One potential approach is to apply object masking in the images and then directly map semantic information from 2D to 3D through deep learning. Nevertheless, this task exhibits highly non-linear properties, and integrating semantic cues remains a significant challenge. In this work-in-progress paper, we explore a pipeline for semantically aware few-shot 3D reconstruction on real-world data.
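
The listing below is a minimal, illustrative sketch of the masking-then-reconstruction idea mentioned in the abstract, not the authors' actual pipeline. The class names MaskPredictor and FewShotReconstructor, the network layouts, and the voxel output are assumptions introduced here purely to show how a per-view object mask can restrict a few-shot reconstruction network to the target object.

# Illustrative sketch only; the paper's concrete architecture is not specified here.
# Assumptions (hypothetical): a per-pixel object-mask head and a few-shot
# reconstruction network mapping masked views to a coarse occupancy voxel grid.
import torch
import torch.nn as nn

class MaskPredictor(nn.Module):
    """Hypothetical stand-in for any 2D object segmenter producing soft masks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, images):                      # images: (B, 3, H, W)
        return torch.sigmoid(self.net(images))      # soft masks in [0, 1]

class FewShotReconstructor(nn.Module):
    """Hypothetical network mapping a few masked views to occupancy logits."""
    def __init__(self, n_views=3, grid=32):
        super().__init__()
        self.grid = grid
        self.encoder = nn.Sequential(
            nn.Conv2d(4 * n_views, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, grid ** 3),
        )

    def forward(self, masked_views):                # (B, n_views * 4, H, W)
        logits = self.encoder(masked_views)
        return logits.view(-1, self.grid, self.grid, self.grid)

# Toy usage: three input views of one object, 128 x 128 pixels each.
views = torch.rand(1, 3, 3, 128, 128)               # (B, n_views, C, H, W)
masker, recon = MaskPredictor(), FewShotReconstructor(n_views=3)

masked = []
for v in range(views.shape[1]):
    img = views[:, v]
    mask = masker(img)                               # restrict input to the target object
    masked.append(torch.cat([img * mask, mask], dim=1))  # keep the mask as a 4th channel
occupancy = recon(torch.cat(masked, dim=1))          # (B, 32, 32, 32) occupancy logits
print(occupancy.shape)

In this toy setup the mask is concatenated as an extra channel so that the reconstruction network still sees which pixels were suppressed, one simple way to carry 2D semantic cues into the 3D prediction.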
License
Copyright (c) 2025 Logistics Journal: Proceedings

This work is licensed under a Creative Commons Attribution 4.0 International License.