The First International Workshop on Bodily Expressed Emotion Understanding (BEEU 2020), to be held in conjunction with the 2020 European Conference on Computer Vision (ECCV) in Glasgow, United Kingdom, on 23 August 2020, invites contributions in the form of research papers and participation in a data modeling challenge.
Understanding human bodily expressed emotion is of great importance in the fields of computer vision, robotics, psychology/psychiatry, and graphics. The BEEU workshop focuses on computer vision and machine learning methods for understanding human bodily expressed emotion.
Call for papers
We invite submissions of 1) original research papers presenting unpublished work and 2) extended abstracts of preliminary work. Original research papers should be no more than 14 pages (excluding references). Extended abstracts are up to 4 pages (including references). Detailed guidelines are available on the workshop website (http://cydar.ist.psu.edu/conference/BEEU2020/). Papers will be peer-reviewed and, if accepted, published in the ECCV workshop proceedings after the conference. Accepted submissions will be presented as oral, spotlight, or poster presentations at the workshop. Topics of interest include, but are not limited to:
- Bodily expression open datasets
- Computer vision methods to understand bodily expression
- Expressive human pose representation
- Human movement coding systems
- Applications in robotics, autonomous driving, medicine, and related areas
- Algorithmic fairness and data ethics related to emotion modeling
- Data sharing and open science with human subject data
Data modeling challenge
The challenge is based on BoLD (the Body Language Dataset; Luo, Y., Ye, J., Adams, R.B., Jr., Li, J., Newman, M.G., & Wang, J.Z. (2020). ARBEE: Towards automated recognition of bodily expression of emotion in the wild. International Journal of Computer Vision, 128(1):1-25.), a large and growing dataset of annotated short-video samples of bodily expression of emotions, developed at Penn State with partial support from Amazon. The dataset contains nearly 19,000 video clips from YouTube videos, annotated by more than 5,000 human subjects from seven ethnic groups and over a hundred countries. For more information about the challenge, please see https://cydar.ist.psu.edu/emotionchallenge.
To access the challenge data, please register at https://cydar.ist.psu.edu/emotionchallenge. Training and validation data will be made available to participants. At the end of the competition (4 August 2020), participants will be required to submit their trained models (in the form of working code) to the organizers to ensure reproducibility of results. All submissions will be evaluated on a held-out test dataset to ensure fair comparison. Participants with strong results are encouraged to submit a full paper to this workshop describing their approach to the challenge's task(s) and the results obtained.
- Jitendra Malik, Arthur J. Chick Professor of Electrical Engineering & Computer Sciences (EECS), University of California at Berkeley
- Norman Badler, Rachleff Professor, Department of Computer and Information Science (CIS), University of Pennsylvania
- Nikolaus Troje, Professor, Department of Biology, York University
- Agata Lapedriza, MIT Media Lab, and Associate Professor, Universitat Oberta de Catalunya
- Xin Lu (tentative), Engineering Manager and Scientist, Adobe Research
Submission deadlines are as follows:
- Paper submission deadline (research paper and abstract tracks): 4 July 2020 (16:59 US Pacific Time / 23:59 UTC)
- Challenge results and paper submission deadline (challenge track): 4 August 2020 (16:59 US Pacific Time / 23:59 UTC)
- Paper acceptance notification: 20 August 2020
- Camera-ready submission deadline: 12 September 2020 (16:59 US Pacific Time / 23:59 UTC)
- James Z. Wang, Penn State University (email@example.com)
- Reginald B. Adams, Jr., Penn State University (firstname.lastname@example.org)
- Yelin Kim, Amazon Lab126 (email@example.com)
Others who contributed to the workshop organization are Jia Li, Yu Luo, Michelle G. Newman, and Sarah M. Rajtmajer of Penn State University.