Researchers in Japan make android child’s face strikingly more expressive

Researchers at Osaka University employ a quantitative approach to add rich nuance to the expressions of their child android's face

— by Osaka University, Japan

Japan's affection for robots is no secret. But is the feeling mutual in the country's amazing androids? We may now be a step closer to giving androids greater facial expressions with which to communicate.

While robots have featured in advances in healthcare, industrial, and other settings in Japan, capturing humanistic expression in a robotic face remains an elusive challenge. Although their system properties have been generally addressed, androids’ facial expressions have not been examined in detail. This is owing to factors such as the huge range and asymmetry of natural human facial movements, the restrictions of materials used in android skin, and of course the intricate engineering and mathematics driving robots’ movements.


A trio of researchers at Osaka University has now found a method for identifying and quantitatively evaluating facial movements on their android robot child head. Named Affetto, the android’s first-generation model was reported in a 2011 publication. The researchers have now found a system to make the second-generation Affetto more expressive. Their findings offer a path for androids to express greater ranges of emotion, and ultimately have deeper interaction with humans.


"Surface deformations are a key issue in controlling android faces," explains study co-author Minoru Asada. "Movements of their soft facial skin create instability, and this is a big hardware problem we grapple with. We sought a better way to measure and control it."

The researchers investigated 116 different facial points on Affetto to measure its three-dimensional movement. The facial points were underpinned by so-called deformation units. Each unit comprises a set of mechanisms that create a distinctive facial contortion, such as lowering or raising part of an eyelid or lip. Measurements from these were then subjected to a mathematical model to quantify their surface motion patterns.
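The paper's actual model is not reproduced in the article, but the general idea of relating deformation-unit commands to measured skin motion can be sketched as a simple fitting problem. The code below is purely illustrative: the number of deformation units, the number of trials, the linear form of the model, and the synthetic data are all assumptions, not the researchers' method. Only the figure of 116 facial points comes from the article.

```python
import numpy as np

# Illustrative sketch only. Assumed setup: each of n_trials activates the
# android's deformation units with a known command vector, and the 3-D
# displacement of 116 facial points is measured for each activation.
rng = np.random.default_rng(0)
n_points = 116   # facial measurement points (from the article)
n_units = 10     # hypothetical number of deformation units
n_trials = 50    # hypothetical number of measured activation patterns

# U: unit activation commands per trial; X: flattened 3-D displacements.
U = rng.uniform(0.0, 1.0, size=(n_trials, n_units))
true_map = rng.normal(size=(n_units, n_points * 3))
X = U @ true_map + 0.01 * rng.normal(size=(n_trials, n_points * 3))

# Fit a linear surface-motion model X ≈ U @ W by least squares.
W, *_ = np.linalg.lstsq(U, X, rcond=None)

# The fitted model predicts skin displacement for a new command vector,
# giving the kind of quantitative handle on facial motion described above.
command = rng.uniform(0.0, 1.0, size=(1, n_units))
predicted = (command @ W).reshape(n_points, 3)
print(predicted.shape)  # (116, 3)
```

Fitting such a forward model is one plausible way to make the "black box" of a soft robotic face measurable: once the map from unit commands to surface motion is known, it can be inverted or tuned to hit a target expression.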

Related: Why humans find faulty robots more likeable

While the researchers encountered challenges in balancing the applied force and in adjusting the synthetic skin, they were able to employ their system to adjust the deformation units for precise control of Affetto’s facial surface motions.

"Android robot faces have persisted in being a black box problem: they have been implemented but have only been judged in vague and general terms," says study first author Hisashi Ishihara. "Our precise findings will let us effectively control android facial movements to introduce more nuanced expressions, such as smiling and frowning."


Republishing guidelines: Open access and sharing research is part of Frontiers' mission. Unless otherwise noted, you can republish articles posted in the Frontiers news blog, as long as you include a link back to the original research. Selling the articles is not allowed.

