Novel applications of affective computing have emerged in recent years in domains ranging from health care to fifth-generation (5G) mobile networks. Many of these applications achieve improved emotion classification performance by fusing multiple sources of data (e.g., audio, video, brain, face, thermal, physiological, environmental, positional, and text). Multimodal affect recognition has the potential to revolutionize the way various industries and sectors use information about a person’s emotional state, particularly given the flexibility in the choice of modalities and measurement tools (e.g., surveillance versus mobile device cameras). Multimodal classification methods have proven highly effective at minimizing misclassification error in practice and under dynamic conditions. Further, multimodal classification models tend to be more stable over time than models relying on a single modality, increasing their reliability in sensitive applications such as mental health monitoring and automobile driver state recognition. To continue the field’s trend from lab to practice and to encourage new applications of affective computing, this workshop provides a forum for the exchange of ideas on future directions, including novel fusion methods and databases, innovations through interdisciplinary research, and emerging emotion sensing devices.

Call for papers

AMAR 2020 gathers researchers working in affective computing, human-computer interaction, brain-computer interfaces, mental and digital health, behavioral sciences, cybersecurity, and other disciplines that leverage, or could leverage, automated emotion recognition through the fusion of multiple modalities. The workshop will discuss how data from ubiquitous sensing devices (e.g., brain, face, thermal, physiological, environmental, and positional sensors) can be used to decode emotions in ways relevant to specific applications and domains. It aims to highlight both current and emerging use cases of affective computing to spark future work.

Topics of interest

Topics of interest include but are not limited to:

  • Health applications with a focus on multimodal affect
  • Multimodal affective computing for cybersecurity applications (e.g., biometrics and IoT security)
  • Inter-correlations and fusion of ubiquitous multimodal data as they relate to applied emotion recognition (e.g., face and EEG data)
  • Leveraging ubiquitous devices to create reliable multimodal applications for emotion recognition
  • Applications of in-the-wild vs. lab-controlled data
  • Facilitation and collection of multimodal (e.g., ubiquitous) data for applied emotion recognition
  • Engineering applications of multimodal affect (e.g., robotics, social engineering, domain-inspired hardware / sensing technologies, etc.)

Additional applications related to the topics above include but are not limited to:

  • Quantified self and self-regulation
  • Engagement measurement
  • Lie detection
  • Smart human-machine interfaces
  • Intelligent transportation systems
  • Video games
  • Immersive virtual experiences

NOTE: Topics that do not demonstrate an existing or potential application of affective computing / emotion recognition are not topics of interest for this workshop.

Important dates

  • Paper submission: February 1, 2020
  • Decision to Authors: February 14, 2020
  • Camera-ready papers due: February 28, 2020


Prospective authors are invited to submit papers of up to 4 pages, plus one additional page for references. Submissions to AMAR 2020 should have no substantial overlap with any other paper submitted to FG 2020 or already published. All persons who have made a substantial contribution to the work should be listed as authors, and all listed authors should have made a substantial contribution to the work. Papers presented at AMAR 2020 will appear in the IEEE Xplore digital library. Papers should follow the FG conference format and be anonymized. Paper submissions will be handled through EasyChair.

Organizers

  • Shaun Canavan, University of South Florida
  • Tempestt Neal, University of South Florida
  • Marvin Andujar, University of South Florida
  • Lijun Yin, Binghamton University