Intelligent systems and applications have become an integral part of daily life, and they depend on human-computer interaction technology to function effectively. This technology enables intelligent hardware to gather physiological and behavioral information from people and use it to accomplish specific tasks, bringing greater convenience and efficiency. Its significance is evident across many research fields.

One of the significant challenges in human-computer interaction is emotion recognition: intelligent machines must recognize human emotions to interact with people effectively. Facial micro-expressions have drawn increasing attention in recent years for this purpose. Micro-expressions are brief, involuntary facial expressions that occur when a person tries to conceal an emotion. They reveal a person's true emotional state and convey more information than ordinary facial expressions. Automatic recognition of micro-expressions therefore has potential applications in many fields, such as clinical diagnosis, security work, and human-computer interaction.

The Adaptive Spatiotemporal Attention Neural Network for CDMER

To address the challenges of cross-database micro-expression recognition (CDMER), researchers have proposed an adaptive spatiotemporal attention neural network (ASTANN). The approach uses a deep neural network with a spatiotemporal attention mechanism to focus on the subtle, fleeting features of micro-expressions.

The first step in the proposed approach is to preprocess the databases by extracting optical flow information. The optical flow is then combined with the facial images to generate new representations, three of which are selected to serve as the dynamic expression sequence and fed into the network for spatiotemporal feature extraction.
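
For illustration, here is a minimal sketch of this preprocessing step, assuming OpenCV's Farnebäck dense optical flow and a simple image-plus-flow stacking; the specific flow algorithm, channel layout, and frame-selection rule are assumptions made for this sketch, not necessarily those of the original method.

```python
import cv2
import numpy as np

def build_dynamic_sequence(frames, indices=(0, 1, 2)):
    """Combine aligned grayscale face crops with dense optical flow into new
    representations, then pick three of them as the dynamic expression sequence."""
    representations = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        # Dense optical flow between consecutive frames -> (H, W, 2) array.
        flow = cv2.calcOpticalFlowFarneback(
            prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        # Stack the face image with the horizontal and vertical flow components,
        # giving a 3-channel representation (image, flow_x, flow_y).
        rep = np.dstack([curr.astype(np.float32), flow[..., 0], flow[..., 1]])
        representations.append(rep)
    # Select three representations to serve as the dynamic expression sequence.
    return [representations[i] for i in indices]
```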

By employing spatiotemporal attention, the architecture automatically captures the useful information in micro-expression samples, which is sparse in both the spatial and temporal domains. The attention mechanism computes attention weights for each sample in both domains, highlighting the information that is most useful to the backbone network.
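
A simplified PyTorch sketch of how such spatial and temporal attention weights might be computed and applied is given below; the layer shapes and pooling choices are illustrative assumptions rather than the published ASTANN design.

```python
import torch
import torch.nn as nn

class SpatioTemporalAttention(nn.Module):
    """Illustrative spatiotemporal attention: weights each spatial location
    and each frame of a feature sequence before the backbone consumes it.
    The layer sizes here are assumptions, not the published design."""

    def __init__(self, channels):
        super().__init__()
        # Spatial attention: one weight per location, shared across channels.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid())
        # Temporal attention: one weight per frame from its pooled descriptor.
        self.temporal = nn.Sequential(
            nn.Linear(channels, 1),
            nn.Sigmoid())

    def forward(self, x):
        # x: (batch, time, channels, height, width)
        b, t, c, h, w = x.shape
        flat = x.reshape(b * t, c, h, w)
        # Highlight informative spatial regions in every frame.
        s_weights = self.spatial(flat)                  # (b*t, 1, h, w)
        flat = flat * s_weights
        # Highlight informative frames in the sequence.
        pooled = flat.mean(dim=(2, 3))                  # (b*t, c)
        t_weights = self.temporal(pooled).view(b, t, 1, 1, 1)
        return flat.view(b, t, c, h, w) * t_weights
```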

To optimize the network parameters and narrow the distribution gap between the source and target databases, a simple yet effective loss function is developed. The domain adaptation method embeds the correlation alignment (CORAL) loss into the first fully connected (FC) layer of the network, significantly improving performance on cross-database tasks.
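
The CORAL loss itself has a standard form: the squared Frobenius distance between the covariance matrices of source- and target-domain features. A PyTorch sketch, applied here to the activations of the first FC layer, could look as follows; how the term is weighted in the overall objective is an assumed hyperparameter.

```python
import torch

def coral_loss(source_feats, target_feats):
    """CORAL loss: distance between the second-order statistics (covariances)
    of source- and target-domain features, here taken from the first FC layer.
    source_feats, target_feats: (n_s, d) and (n_t, d) activation matrices."""
    d = source_feats.size(1)

    def covariance(f):
        f = f - f.mean(dim=0, keepdim=True)
        return f.t() @ f / (f.size(0) - 1)

    c_s = covariance(source_feats)
    c_t = covariance(target_feats)
    # Squared Frobenius norm of the covariance gap, normalized by 4 * d^2.
    return ((c_s - c_t) ** 2).sum() / (4 * d * d)

# Assumed overall objective: classification loss on labeled source samples
# plus a weighted CORAL term on the first-FC-layer features of both domains.
# total_loss = cls_loss + lam * coral_loss(fc1_source, fc1_target)
```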

The proposed approach is compared with state-of-the-art (SOTA) methods on two benchmark tasks, and the results show superior performance. In future work, the researchers aim to investigate whether combining multimodal information, such as text and audio, can further assist recognition and advance the field of CDMER.

The adaptive spatiotemporal attention neural network is a promising approach for CDMER. It uses spatiotemporal attention to capture the useful information in micro-expression samples and a simple yet effective domain adaptation method to optimize the network parameters. The approach outperforms SOTA methods, and future research may combine multimodal information to further improve recognition accuracy.
