This is the data repository for research outputs of the project "Image/Video Quality Assessment for Human and Artificial Intelligence", led by Prof. Weisi Lin. The objectives of this project include:

1. Investigate the characteristics of the human visual system (HVS) and artificial intelligence (AI), and their requirements for image quality assessment (IQA) in the mobile tasks of interest to SenseTime, and build a related database for use within the project and in further applications by SenseTime

2. Develop an automatic visual quality assessment framework that can aid human evaluation in mobile phone application development

3. Explore the possibilities for the said framework to be used as an objective function in related optimization problems, or as a loss function in machine-learning-based visual computing systems for AI

4. Demonstrate uses selected by SenseTime, such as high dynamic range (HDR) image manipulation, compression, denoising/restoration/super-resolution, segmentation, 2D-to-3D conversion, stereo perception, 3D pose estimation, and matting/retouching.

1 to 3 of 3 Results
Jun 20, 2024
Wu, Haoning; Zhang, Erli; Liao, Liang; Chen, Chaofeng; Hou, Jingwen; Wang, Annan; Sun, Wenxiu; Yan, Qiong; Lin, Weisi, 2024, "Replication Data for: Towards Explainable In-the-Wild Video Quality Assessment: A Database and a Language-Prompted Approach", https://doi.org/10.21979/N9/ELWDPE, DR-NTU (Data), V1
A large-scale in-the-wild VQA database, named Maxwell, created to gather more than two million human opinions across 13 specific quality-related factors, including technical distortions (e.g., noise, flicker) and aesthetic factors (e.g., content). The GitHub project page is: https://g...
Jun 20, 2024
Wu, Haoning; Zhang, Zicheng; Zhang, Erli; Chen, Chaofeng; Liao, Liang; Wang, Annan; Li, Chunyi; Sun, Wenxiu; Yan, Qiong; Zhai, Guangtao; Lin, Weisi, 2024, "Replication Data for: Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-level Vision", https://doi.org/10.21979/N9/M41ERD, DR-NTU (Data), V1
We present Q-Bench, a holistic benchmark crafted to systematically evaluate the potential abilities of MLLMs in three realms: low-level visual perception, low-level visual description, and overall visual quality assessment. The GitHub project page is: https://github.com/Q-Future/Q-Be...
Jun 20, 2024
Wu, Haoning; Zhang, Zicheng; Zhang, Erli; Chen, Chaofeng; Liao, Liang; Wang, Annan; Xu, Kaixin; Li, Chunyi; Hou, Jingwen; Zhai, Guangtao; Xue, Geng; Sun, Wenxiu; Yan, Qiong; Lin, Weisi, 2024, "Replication Data for: Q-Instruct: Improving Low-level Visual Abilities for Multi-modality Foundation Models", https://doi.org/10.21979/N9/GPLPNI, DR-NTU (Data), V1
The first dataset of human natural-language feedback on low-level vision. The GitHub project page is: https://github.com/Q-Future/Q-Instruct and the public download link is: https://huggingface.co/datasets/q-future/Q-Instruct-DB