Sep 9, 2024
Appointment: Professor. Research topics: Dr Loy's research interests include computer vision and deep learning, with a focus on image/video restoration and enhancement, creative content generation, and representation learning.
Jun 20, 2024
Wu, Haoning; Zhang, Erli; Liao, Liang; Chen, Chaofeng; Hou, Jingwen; Wang, Annan; Sun, Wenxiu; Yan, Qiong; Lin, Weisi, 2024, "Replication Data for: Towards Explainable In-the-Wild Video Quality Assessment: A Database and a Language-Prompted Approach", https://doi.org/10.21979/N9/ELWDPE, DR-NTU (Data), V1
A large-scale in-the-wild video quality assessment (VQA) database, named Maxwell, created to gather more than two million human opinions across 13 specific quality-related factors, including technical distortions (e.g., noise, flicker) and aesthetic factors (e.g., content).
Jun 20, 2024
Wu, Haoning; Zhang, Zicheng; Zhang, Erli; Chen, Chaofeng; Liao, Liang; Wang, Annan; Li, Chunyi; Sun, Wenxiu; Yan, Qiong; Zhai, Guangtao; Lin, Weisi, 2024, "Replication Data for: Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-level Vision", https://doi.org/10.21979/N9/M41ERD, DR-NTU (Data), V1
We present Q-Bench, a holistic benchmark crafted to systematically evaluate the potential abilities of multi-modality large language models (MLLMs) in three realms: low-level visual perception, low-level visual description, and overall visual quality assessment.
Jun 20, 2024
Wu, Haoning; Zhang, Zicheng; Zhang, Erli; Chen, Chaofeng; Liao, Liang; Wang, Annan; Xu, Kaixin; Li, Chunyi; Hou, Jingwen; Zhai, Guangtao; Xue, Geng; Sun, Wenxiu; Yan, Qiong; Lin, Weisi, 2024, "Replication Data for: Q-Instruct: Improving Low-level Visual Abilities for Multi-modality Foundation Models", https://doi.org/10.21979/N9/GPLPNI, DR-NTU (Data), V1
A dataset consisting of human natural-language feedback on low-level vision.
