Call for Papers | The 4th International Workshop on Human-centric Multimedia Analysis

Important Dates

  • Registration date: July 15th, 2024.
  • Release of training data: July 25th, 2024.
  • Open for submission on validation set: August 1st, 2024.
  • Submission deadline on validation set: September 20th, 2024.
  • Open for submission on test set: September 21st, 2024.
  • Submission deadline on test set: October 1st, 2024.
  • Submission deadline for method description paper: October 15th, 2024.
  • Winner announcement and speaker invitations: no later than October 20th, 2024.

Challenge Overview

Gait recognition (GR), which aims to identify a target person by her/his walking pattern in a video, has been studied for over two decades. Unlike face and fingerprint, gait can be captured remotely, without contact, and is hard to disguise, which gives it unique potential for security applications. However, gait recognition remains a very challenging task due to various uncertain factors in real-world scenes, such as occlusions, varied viewpoints, and arbitrary walking styles. Although existing GR methods achieve excellent performance in laboratory environments, they do not work well in real-world scenes. We therefore organize the ACM MM'24 Multimodal Gait Recognition (MGR) Challenge to promote the development of robust and practical gait recognition methods.


The ACM MM'24 Multimodal Gait Recognition (MGR) Challenge seeks to unleash the power of gait characteristics by encouraging participants to develop novel algorithms and systems that effectively combine and exploit different modalities of gait data. The goal is to create solutions that perform robust gait recognition in real-world environments, overcoming the limitations of traditional systems and paving the way for new applications and innovations.

Challenge Structure

The challenge will be structured in several phases to ensure a comprehensive evaluation of the submitted solutions:

  1. Track Selection: The challenge is divided into four tracks. A team may participate in at most two (≤ 2) of them, and may choose only one of Track 1 and Track 2. The dataset for each track can be downloaded from its CodaLab page below.
  2. Development Phase: During this phase, participants develop their gait recognition algorithms using the provided dataset. They are encouraged to explore innovative techniques for data fusion, noise reduction, and performance improvement (a minimal fusion sketch follows this list).
  3. Evaluation Phase: Submitted solutions will be evaluated on a predefined criterion, Rank-1 accuracy (an illustrative sketch also follows this list). A separate validation dataset, not included in the initial release, will be used for evaluation to ensure the fairness and reliability of the results.
  4. Paper Submission and Oral Presentation: The top three winners of each track are required to submit a technical report, and the first-place winners are required to deliver an oral presentation on-site or online.
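
As a starting point for the data-fusion ideas mentioned in the Development Phase, here is a minimal sketch of one baseline strategy: feature-level concatenation of per-modality embeddings. The modality names (silhouette and skeleton) and the normalize-then-concatenate design are illustrative assumptions, not part of the challenge specification.

```python
import numpy as np

def fuse_features(silhouette_feat: np.ndarray, skeleton_feat: np.ndarray) -> np.ndarray:
    """Naive feature-level fusion: L2-normalize each modality, then concatenate.

    Normalizing first keeps one modality from dominating the fused
    representation purely because of its scale. (Illustrative baseline only.)
    """
    s = silhouette_feat / (np.linalg.norm(silhouette_feat) + 1e-8)
    k = skeleton_feat / (np.linalg.norm(skeleton_feat) + 1e-8)
    return np.concatenate([s, k])

# Toy usage: a 256-d silhouette embedding fused with a 64-d skeleton embedding.
fused = fuse_features(np.random.randn(256), np.random.randn(64))
print(fused.shape)  # (320,)
```

More sophisticated alternatives (attention-based weighting, score-level fusion) are of course welcome; this is only a baseline to fuse against.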
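The official scoring code is defined on each track's CodaLab page. Purely as an illustration of the Rank-1 metric used in the Evaluation Phase, the sketch below counts a probe as correct when its nearest gallery embedding shares its identity; the Euclidean distance and the function signature are our assumptions, not the official protocol.

```python
import numpy as np

def rank1_accuracy(gallery_feats, gallery_ids, probe_feats, probe_ids):
    """Fraction of probes whose nearest gallery embedding has the same identity."""
    # Pairwise Euclidean distances: (num_probes, num_gallery).
    dists = np.linalg.norm(
        probe_feats[:, None, :] - gallery_feats[None, :, :], axis=-1
    )
    nearest = np.argmin(dists, axis=1)           # closest gallery sample per probe
    hits = gallery_ids[nearest] == probe_ids     # identity match at rank 1
    return hits.mean()

# Toy usage with random embeddings standing in for real gait features.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100, 256))
probes = gallery[:10] + 0.01 * rng.normal(size=(10, 256))
print(rank1_accuracy(gallery, np.arange(100), probes, np.arange(10)))  # 1.0
```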

Prizes and Recognition

The ACM MM'24 Multimodal Gait Recognition Challenge will offer attractive prizes for the top-performing teams of each track, including monetary awards and certificates of recognition. In addition, the winners will have the opportunity to present their work, gaining visibility and recognition in the academic and industrial communities.

Call to Action

We invite researchers, developers, and innovators to join us in this exciting challenge and contribute to the future of gait recognition technology. Whether you are an experienced professional or a passionate newcomer to the field, your participation will help drive the progress of multimodal gait recognition systems and unlock new possibilities for their application. We look forward to your participation and to seeing the groundbreaking solutions that will emerge from this competition. Let’s advance gait recognition together at ACM MM'24!

Resources

Here is an open-source codebase for your convenience: MGR-CodeBase.
There are also two wonderful projects for your reference: OpenGait and FastPoseGait.

Organizers

Wu Liu (University of Science and Technology of China)
Jinkai Zheng (Hangzhou Dianzi University)
Xinchen Liu (JD Explore Academy)
Chenggang Yan (Hangzhou Dianzi University)

Contact

If you have any questions, please contact Jinkai Zheng (zhengjinkai3@hdu.edu.cn).