
👋 Hello, I'm ZHANG Erli!

👀 About me

I am a final-year undergraduate at Nanyang Technological University 🇸🇬, majoring in Computer Science. My current research interests include Large Multimodal Models, Visual Quality Assessment, and AI in Healthcare.

I'm currently applying for PhD programs starting in Fall 2024!

📖 Publications

  • Q-Bench
    • Conference: ICLR 2024 (Spotlight)
    • Description: A benchmark for multi-modality LLMs on low-level vision and visual quality assessment.
    • 📖 Paper
  • MaxVQA
    • Conference: ACMMM 2023 (Oral)
    • Description: Introduced a 16-dimensional VQA dataset and a method for more explainable VQA.
    • 📖 Paper
  • DOVER
    • Conference: ICCV 2023
    • Description: A state-of-the-art NR-VQA method that predicts disentangled aesthetic and technical quality.
    • 📖 Paper
    • Demo: Demo

📬 Contact Me

Pinned

  1. DOVER (forked from VQAssessment/DOVER)

    [ICCV 2023, Official Code] for paper "Exploring Video Quality Assessment on User Generated Contents from Aesthetic and Technical Perspectives". Official weights and demos provided.

  2. MaxVQA (forked from VQAssessment/ExplainableVQA)

    [ACMMM 2023] "Towards Explainable Video Quality Assessment: A Database and a Language-Prompted Approach"

  3. Q-Future/Q-Bench

    ① [ICLR 2024 Spotlight] (GPT-4V / Gemini-Pro / Qwen-VL-Plus + 16 open-source MLLMs) A benchmark for multi-modality LLMs (MLLMs) on low-level vision and visual quality assessment.
