Researchers at Edith Cowan University (ECU) are helping machines become more emotionally aware, using a new method that allows them to better recognise human facial expressions.
"As more digital systems, from virtual assistants to wellbeing apps, interact with people, it's becoming increasingly important that they understand how we feel," said ECU PhD student Mr Sharjeel Tahir.
Instead of training systems to interpret emotions using single images, the team led by ECU senior lecturer and artificial intelligence (AI) expert Dr Syed Afaq Shah explored a more human-like approach: showing a group of related facial expressions together, allowing the machine to 'see' a broader emotional context.
"Just like we don't judge how someone feels from one glance, our method uses multiple expressions to make more informed predictions," Mr Tahir explained. "It's a more reliable way to help machines understand emotions - even when faces are seen from different angles or under different lighting."
While this research doesn't involve physical robots, the findings could influence how future emotionally aware systems are developed - such as those used in mental health support, customer service, or interactive education.
"We're laying the groundwork for machines that don't just see faces, but understand them," Mr Tahir said.
Co-author and PhD student Mr Nima Mirnateghi noted that the proposed method delivers rich visual cues that enhance the AI model's ability to recognise emotions, while maintaining computational efficiency and achieving significantly higher accuracy than single-image approaches.
"By exposing the model to diverse features within a structured set, we found that it learns existing patterns far more effectively, refining its emotional recognition capabilities," he added.
Under the supervision of Dr Shah, Mr Tahir is now working on generating artificial empathy in artificial agents, allowing them to respond appropriately when presented with human emotions.
"There is a significant need for emotional support these days, and that gap could be filled by emotionally aware or emotionally intelligent machines or robots," he said.
Mr Mirnateghi said that the research has not only pushed the boundaries of emotion recognition in AI but has also sparked a deeper exploration into the underlying decision-making processes of AI models.
"Our research group is now focused on explainable AI in language models, uncovering the intricate mechanisms that dictate how Artificial agents interprets recognition patterns.
"By making these processes more transparent, we aim to create AI systems that are inherently understandable - bridging the gap between advanced computation and human intuition. For example, what makes a machine emotionally intelligent? That's one of the questions that our current research aims to explore," he said.