1,391 to 1,400 of 5,008 Results
Sep 25, 2024 - S-Lab for Advanced Intelligence
Lan, Mengcheng; Chen, Chaofeng; Ke, Yiping; Wang, Xinjiang; Feng, Litong; Zhang, Wayne, 2024, "ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation", https://doi.org/10.21979/N9/YY8L5O, DR-NTU (Data), V1
Open-vocabulary semantic segmentation requires models to effectively integrate visual representations with open-vocabulary semantic labels. While Contrastive Language-Image Pre-training (CLIP) models shine in recognizing visual concepts from text, they often struggle with segment...
Adobe PDF - 3.5 MB - MD5: 396eed51abcc20b34e2a977c570d33ee
Sep 25, 2024 - S-Lab for Advanced Intelligence
Lan, Mengcheng; Chen, Chaofeng; Ke, Yiping; Wang, Xinjiang; Feng, Litong; Zhang, Wayne, 2024, "ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference", https://doi.org/10.21979/N9/S6NTDJ, DR-NTU (Data), V1
Despite the success of large-scale pretrained Vision-Language Models (VLMs), especially CLIP, in various open-vocabulary tasks, their application to semantic segmentation remains challenging, producing noisy segmentation maps with mis-segmented regions. In this paper, we carefully...
Adobe PDF - 4.7 MB - MD5: ef5b43ea1ec6f6fa4a3204222101b526
Sep 25, 2024 - S-Lab for Advanced Intelligence
Yuan, Haobo; Li, Xiangtai; Zhou, Chong; Li, Yining; Chen, Kai; Loy, Chen Change, 2024, "Open-Vocabulary SAM: Segment and Recognize Twenty-thousand Classes Interactively", https://doi.org/10.21979/N9/L05ULT, DR-NTU (Data), V1
The CLIP and Segment Anything Model (SAM) are remarkable vision foundation models (VFMs). SAM excels in segmentation tasks across diverse domains, whereas CLIP is renowned for its zero-shot recognition capabilities. This paper presents an in-depth exploration of integrating these...
Adobe PDF - 3.4 MB - MD5: fa2b0303cf17c6b247314d2a9252040d
Sep 25, 2024 - S-Lab for Advanced Intelligence
Wu, Tianhao; Zheng, Chuanxia; Wu, Qianyi; Cham, Tat-Jen, 2024, "ClusteringSDF: Self-Organized Neural Implicit Surfaces for 3D Decomposition", https://doi.org/10.21979/N9/RJUHMC, DR-NTU (Data), V1
3D decomposition/segmentation remains a challenge as large-scale 3D annotated data is not readily available. Existing approaches typically leverage 2D machine-generated segments, integrating them to achieve 3D consistency. In this paper, we propose ClusteringSDF, a novel approach...
Adobe PDF - 5.7 MB - MD5: b796cd399fb8241b429ce3943ca1a23b
Sep 25, 2024 - S-Lab for Advanced Intelligence
Xu, Baixin; Hu, Jiangbei; Hou, Fei; Lin, Kwan-Yee; Wu, Wayne; Qian, Chen; He, Ying, 2024, "Parameterization-driven Neural Surface Reconstruction for Object-oriented Editing in Neural Rendering", https://doi.org/10.21979/N9/0C9BU9, DR-NTU (Data), V1
The advancements in neural rendering have increased the need for techniques that enable intuitive editing of 3D objects represented as neural implicit surfaces. This paper introduces a novel neural algorithm for parameterizing neural implicit surfaces to simple parametric domains... |
Adobe PDF - 9.4 MB - MD5: a85b602783d50cc3f5c9166ddd818e79
