VQualA 2025 — Program
Date: Sunday, October 19, 2025
Time: 1:00 PM – 5:00 PM HST
Format: Hybrid (in-person & online)
On-site: Room 303B
1:00 – 1:10 PM HST

Opening Remarks

Overview of VQualA 2025 goals and structure

1:10 – 1:40 PM HST
Invited Talk — Dr. Junfeng He (Research Scientist, Google)

Title: Evaluate and Improve AIGC via Modeling Human Feedback and Behavior

Abstract

In this talk, I will present our recent work on the evaluation and post-training of AIGC. In particular, I will describe how to build a rich human feedback (auto-rater) model that predicts raters’ detailed feedback on generated images, which can serve as an interpretable AIGC evaluation and reward model. Moreover, I will show how to improve image generation models by fine-tuning with our auto-rater’s predictions, e.g., achieving region-aware fine-tuning of T2I models to fix problematic regions (CVPR 2025 paper), or fine-tuning with multiple rewards. Finally, I will discuss a rich human behavior model that spans various kinds of visual content, with examples of how to use it to improve visual content for a better user experience.

1:40 – 2:50 PM HST

Paper Session I — Image and Video Quality Assessment

Each paper: 8 min talk + 2 min Q&A

2:50 – 3:00 PM HST

Discussion & Networking (Session I)

3:00 – 4:20 PM HST

Paper Session II — Face and Multimodal Quality Assessment

Each paper: 8 min talk + 2 min Q&A

4:20 – 4:30 PM HST

Discussion & Networking (Session II)

4:30 – 5:00 PM HST

VQualA 2025 — Overview of Challenges and Closing Remarks

Overview of challenge tracks and winning solutions

Closing Remarks

  • Acknowledgements to invited speaker, sponsors, participants, and organizing committee