Abstract

Detecting and classifying human emotion is one of the most challenging problems at the confluence of psychology, affective computing, kinesiology, and data science. Previous studies have shown that human observers can perceive another’s emotions simply by observing physical cues such as facial expressions, prosody, body gestures, and walking style. This project aims to develop an automated, artificial-intelligence-based technique for perceiving human emotions from kinematic and kinetic variables—that is, from both the contextual and intrinsic qualities of human motion. The proposed research will examine the role of age and gender in gait-based emotion recognition using deep learning. After collecting full-body gaits across age and gender in a motion-capture lab, Bera, Shim, and Manocha will employ an autoencoder-based semi-supervised deep learning algorithm to learn perceived human emotions from walking style. They will then hierarchically pool these joint motions in a bottom-up manner, following the kinematic chains of the human body, and couple these data with both perceived emotion (as rated by an external observer) and self-reported emotion.
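
To illustrate the kind of architecture the abstract describes, below is a minimal sketch, assuming a PyTorch implementation. The joint groupings in CHAINS, the skeleton size, the layer widths, and the names (GaitAutoencoder, chain_enc) are illustrative assumptions, not the authors' implementation: each kinematic chain is pooled bottom-up into a latent gait code, which drives both a reconstruction head (trained on all gaits) and an emotion head (trained only on the labeled subset, making the scheme semi-supervised).

```python
import torch
import torch.nn as nn

# Hypothetical kinematic chains: joint indices pooled bottom-up.
# The grouping and 20-joint skeleton are assumptions for this sketch.
CHAINS = {
    "left_leg":  [1, 2, 3, 4],
    "right_leg": [5, 6, 7, 8],
    "spine":     [0, 9, 10, 11],
    "left_arm":  [12, 13, 14, 15],
    "right_arm": [16, 17, 18, 19],
}
NUM_JOINTS = 20   # assumed skeleton size
FEAT = 3          # 3-D joint position per frame

class GaitAutoencoder(nn.Module):
    """Semi-supervised autoencoder: per-chain pooling -> latent gait code
    -> (a) pose reconstruction, (b) perceived-emotion logits."""
    def __init__(self, latent=32, emotions=4):
        super().__init__()
        # One small encoder per kinematic chain (bottom-up pooling).
        self.chain_enc = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(len(idx) * FEAT, 16), nn.ReLU())
            for name, idx in CHAINS.items()
        })
        # Pool the chain codes into a whole-body latent code.
        self.body_enc = nn.Sequential(nn.Linear(16 * len(CHAINS), latent), nn.ReLU())
        self.decoder = nn.Linear(latent, NUM_JOINTS * FEAT)   # reconstruction head
        self.classifier = nn.Linear(latent, emotions)         # emotion head

    def forward(self, x):  # x: (batch, NUM_JOINTS, FEAT)
        pooled = [self.chain_enc[name](x[:, idx].flatten(1))
                  for name, idx in CHAINS.items()]
        z = self.body_enc(torch.cat(pooled, dim=1))
        return self.decoder(z), self.classifier(z)

model = GaitAutoencoder()
x = torch.randn(8, NUM_JOINTS, FEAT)            # one pose per sample, for brevity
recon, logits = model(x)
labels = torch.tensor([0, 1, 2, 3, 0, 1, 2, 3])
mask = torch.tensor([1, 1, 0, 0, 1, 0, 1, 0], dtype=torch.bool)  # labeled subset
# Reconstruction loss on every sample; classification loss only where labeled.
loss = nn.functional.mse_loss(recon, x.flatten(1)) \
     + nn.functional.cross_entropy(logits[mask], labels[mask])
loss.backward()
```

In practice a model of this kind would operate on gait sequences rather than single poses (e.g., with a recurrent or temporal-convolution encoder per chain), but the bottom-up pooling and the two-headed semi-supervised objective carry over unchanged.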

Key Outcomes

  • Project Dost: An AI-driven virtual humanoid agent designed to converse and behave like a real human. Dost is an end-to-end framework, from natural speech generation to realistic human motion.
  • Follow-up funding ($90,522) from the MPowering the State initiative
  • Paper presentations:
    • Can a Robot Trust You? A DRL-Based Approach to Trust-Driven Human-Guided Navigation (IEEE ICRA 2021)

Investigators

Jae Kun Shim

Professor
301-405-2492 | jkshim@umd.edu

Dinesh Manocha

Distinguished University Professor
301-405-2741 | dmanocha@umd.edu
