Cornell University researchers have invented an earphone that can continuously track full facial expressions by observing the contour of the cheeks — and can then translate expressions into emojis or silent speech commands.
With the ear-mounted device, called C-Face, users could express emotions to online collaborators without holding cameras in front of their faces — an especially useful communication tool as much of the world engages in remote work or learning.
With C-Face, avatars in virtual reality environments could express how their users are actually feeling, and instructors could get valuable information about student engagement during online lessons. It could also be used to direct a computer system, such as a music player, using only facial cues.
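To make the music-player idea concrete, a control layer on top of such a device might simply map recognized expression labels to playback commands. The labels and command names below are illustrative stand-ins, not C-Face's actual vocabulary, which the article does not specify:

```python
# Hypothetical sketch: routing recognized facial cues to a music player.
# The expression labels and commands are invented for illustration only.

EXPRESSION_COMMANDS = {
    "mouth_open": "play",
    "mouth_closed_tight": "pause",
    "cheek_puff_left": "previous_track",
    "cheek_puff_right": "next_track",
}

def dispatch(expression: str) -> str:
    """Map a recognized expression label to a player command; ignore unknowns."""
    return EXPRESSION_COMMANDS.get(expression, "no_op")

print(dispatch("mouth_open"))     # play
print(dispatch("eyebrow_raise"))  # no_op
```

The lookup-with-default pattern means unrecognized or low-confidence expressions fall through harmlessly rather than triggering spurious commands.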
“This device is simpler, less obtrusive and more capable than any existing ear-mounted wearable technologies for tracking facial expressions,” said Cheng Zhang, assistant professor of information science, director of Cornell’s SciFi Lab and senior author of a paper on C-Face. “In previous wearable technology aiming to recognize facial expressions, most solutions needed to attach sensors on the face, and even with so much instrumentation, they could only recognize a limited set of discrete facial expressions.”

Because C-Face reads the contours of the cheeks rather than the face itself, it can track expressions even when the wearer is wearing a mask. That could allow people to, for instance, convey their emotions during group calls without having to turn on their webcams.
The earphone uses two RGB cameras positioned below each ear. They record changes in cheek contour as the wearer’s facial muscles move.
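A heavily simplified sketch of that contour-to-expression step might look like the following. The real system uses deep learning on camera images; the nearest-template matcher and the three-number "contour features" here are stand-ins invented for illustration, not the researchers' actual model:

```python
# Illustrative-only sketch of a contour-to-expression classifier.
# C-Face's real pipeline is a deep learning model over camera images of the
# cheeks; this simple nearest-template matcher is a stand-in for that model.

import math

# Hypothetical templates: each expression as a short vector of contour measurements
TEMPLATES = {
    "smile":   [0.9, 0.2, 0.8],
    "neutral": [0.5, 0.5, 0.5],
    "frown":   [0.2, 0.8, 0.3],
}

def classify(contour_features):
    """Return the template label closest (Euclidean distance) to the features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda label: dist(TEMPLATES[label], contour_features))

print(classify([0.85, 0.25, 0.75]))  # smile
```

The recognized label could then feed the emoji or command mapping described earlier.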