71 to 80 of 235 Results
Sep 30, 2024 - DING Ovi Lian
Ding, Ovi Lian; Liu, Qinglin; Su, Pei-Chen; Chan, Siew Hwa, 2024, "Wet air co-electrolysis in high temperature solid oxide electrolysis cell for sustainable single-step production of ammonia feedstock", https://doi.org/10.21979/N9/YTS2JL, DR-NTU (Data), V1
Final report
Sep 30, 2024 - DING Ovi Lian
Ding, Ovi Lian, 2024, "Novel Self-Supported Robust Catalyst for Blue Hydrogen Production with Methane Cracking", https://doi.org/10.21979/N9/OP8BHR, DR-NTU (Data), V1
Data for catalyst performance in terms of carbon formation
Sep 27, 2024 - S-Lab for Advanced Intelligence
Wu, Tianxing; Si, Chenyang; Jiang, Yuming; Huang, Ziqi; Liu, Ziwei, 2024, "FreeInit: Bridging Initialization Gap in Video Diffusion Models", https://doi.org/10.21979/N9/JMCW1W, DR-NTU (Data), V1
Though diffusion-based video generation has witnessed rapid progress, the inference results of existing models still exhibit unsatisfactory temporal consistency and unnatural dynamics. In this paper, we delve deep into the noise initialization of video diffusion models, and disco...
Sep 27, 2024 - S-Lab for Advanced Intelligence
Lan, Yushi; Hong, Fangzhou; Yang, Shuai; Zhou, Shangchen; Dai, Bo; Pan, Xingang; Loy, Chen Change, 2024, "LN3Diff: Scalable Latent Neural Fields Diffusion for Speedy 3D Generation", https://doi.org/10.21979/N9/UZ06ZG, DR-NTU (Data), V1
The field of neural rendering has witnessed significant progress with advancements in generative models and differentiable rendering techniques. Though 2D diffusion has achieved success, a unified 3D diffusion pipeline remains unsettled. This paper introduces a novel framework ca...
Sep 27, 2024 - S-Lab for Advanced Intelligence
Chen, Yongwei; Wang, Tengfei; Wu, Tong; Pan, Xingang; Jia, Kui; Liu, Ziwei, 2024, "ComboVerse: Compositional 3D Assets Creation Using Spatially-Aware Diffusion Guidance", https://doi.org/10.21979/N9/BAZCX6, DR-NTU (Data), V1
Generating high-quality 3D assets from a given image is highly desirable in various applications such as AR/VR. Recent advances in single-image 3D generation explore feed-forward models that learn to infer the 3D model of an object without optimization. Though promising results h...
Sep 26, 2024 - S-Lab for Advanced Intelligence
Tang, Jiaxiang; Chen, Zhaoxi; Chen, Xiaokang; Wang, Tengfei; Zeng, Gang; Liu, Ziwei, 2024, "LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation", https://doi.org/10.21979/N9/27JLJB, DR-NTU (Data), V1
3D content creation has achieved significant progress in terms of both quality and speed. Although current feed-forward models can produce 3D objects in seconds, their resolution is constrained by the intensive computation required during training. In this paper, we introduce Lar...
Sep 25, 2024 - S-Lab for Advanced Intelligence
Lan, Mengcheng; Chen, Chaofeng; Ke, Yiping; Wang, Xinjiang; Feng, Litong; Zhang, Wayne, 2024, "ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation", https://doi.org/10.21979/N9/YY8L5O, DR-NTU (Data), V1
Open-vocabulary semantic segmentation requires models to effectively integrate visual representations with open-vocabulary semantic labels. While Contrastive Language-Image Pre-training (CLIP) models shine in recognizing visual concepts from text, they often struggle with segment...
Sep 25, 2024 - S-Lab for Advanced Intelligence
Lan, Mengcheng; Chen, Chaofeng; Ke, Yiping; Wang, Xinjiang; Feng, Litong; Zhang, Wayne, 2024, "ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference", https://doi.org/10.21979/N9/S6NTDJ, DR-NTU (Data), V1
Despite the success of large-scale pretrained Vision-Language Models (VLMs), especially CLIP, in various open-vocabulary tasks, their application to semantic segmentation remains challenging, producing noisy segmentation maps with mis-segmented regions. In this paper, we carefully...
Sep 25, 2024 - XU Bowen
New, T. H.; Xu, Bowen; Shi, Shengxian, 2024, "Replication Data for: Collisions of vortex rings with hemispheres", https://doi.org/10.21979/N9/0MLY8J, DR-NTU (Data), V1
The data support the findings in the paper "Collisions of vortex rings with hemispheres".
Sep 25, 2024 - S-Lab for Advanced Intelligence
Yuan, Haobo; Li, Xiangtai; Zhou, Chong; Li, Yining; Chen, Kai; Loy, Chen Change, 2024, "Open-Vocabulary SAM: Segment and Recognize Twenty-thousand Classes Interactively", https://doi.org/10.21979/N9/L05ULT, DR-NTU (Data), V1
The CLIP and Segment Anything Model (SAM) are remarkable vision foundation models (VFMs). SAM excels in segmentation tasks across diverse domains, whereas CLIP is renowned for its zero-shot recognition capabilities. This paper presents an in-depth exploration of integrating these...
