
University of the Ryukyus Repository
Title: 畳み込みニューラルネットワークを用いた表情表現の獲得と顔特徴量の分析
Title alternative: Feature Acquisition and Analysis for Facial Expression Recognition Using Convolutional Neural Networks
Authors: 西銘, 大喜
遠藤, 聡志
當間, 愛晃
山田, 考治
赤嶺, 有平
Authors alternative: Nishime, Taiki
Endo, Satoshi
Toma, Naruaki
Yamada, Koji
Akamine, Yuhei
Issue Date: 1-Sep-2017
Abstract: Facial expressions play as important a role in communication as words. Human facial expression recognition is difficult to judge uniquely, because recognition sways with individual differences and subjective perception. It is therefore difficult to evaluate reliability from recognition accuracy alone, and analysis that explains the results and the features learned by Convolutional Neural Networks (CNNs) becomes important. In this study, we carried out facial expression recognition from facial expression images using a CNN, and we analysed the CNN to understand the learned features and the prediction results. The emotions we focused on are "happiness", "sadness", "surprise", "anger", "disgust", "fear" and "neutral". Using 32,286 facial expression images, we obtained an emotion recognition score of about 57%; for two emotions (happiness, surprise) the recognition score exceeded 70%, but for anger and fear it was below 50%. In analysing the CNN, we focused on the learning process, the input, and the intermediate layers. Analysis of the learning progress confirmed that, as training data increase, the emotions become recognizable in the order "happiness", "surprise", "neutral", "anger", "disgust", "sadness", and "fear". From the analysis of the input and intermediate layers, we confirmed that features of the eyes and mouth strongly influence facial expression recognition, that intermediate-layer neurons had activation patterns corresponding to facial expressions, and that these activation patterns do not respond to partial features of facial expressions. From these results, we conclude that the CNN learned partial features of the eyes and mouth from the input and recognizes facial expressions using hidden-layer units whose active areas correspond to each facial expression.
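The abstract's pipeline (convolutional feature extraction over a face image, followed by classification into seven emotions) can be illustrated with a minimal, stdlib-only Python sketch. Everything here is an illustrative assumption: the kernel sizes, random weights, global average pooling, and single linear layer are stand-ins and do not reflect the paper's actual architecture or trained parameters.

```python
# Minimal sketch of CNN-style emotion classification (illustrative only):
# a 2-D convolution extracts local features (e.g. eye/mouth regions),
# ReLU + global average pooling summarizes each feature map, and a
# softmax over linear combinations yields seven emotion probabilities.
import math
import random

EMOTIONS = ["happiness", "sadness", "surprise", "anger",
            "disgust", "fear", "neutral"]

def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation of a grayscale image with one kernel."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

def relu_and_pool(fmap):
    """ReLU nonlinearity, then global average pooling -> one scalar per map."""
    vals = [max(0.0, v) for row in fmap for v in row]
    return sum(vals) / len(vals)

def softmax(logits):
    """Numerically stable softmax over the emotion logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(image, kernels, weights):
    """Pooled features from each kernel, linearly combined per emotion."""
    feats = [relu_and_pool(conv2d(image, k)) for k in kernels]
    logits = [sum(w * f for w, f in zip(ws, feats)) for ws in weights]
    return dict(zip(EMOTIONS, softmax(logits)))

# Toy 8x8 "face" image and random (untrained) parameters.
random.seed(0)
image = [[random.random() for _ in range(8)] for _ in range(8)]
kernels = [[[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
           for _ in range(4)]
weights = [[random.uniform(-1, 1) for _ in range(4)] for _ in EMOTIONS]
probs = predict(image, kernels, weights)
```

With trained weights, inspecting which image regions most change the pooled features corresponds to the input analysis described in the abstract (eyes and mouth dominating the prediction).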
URL: https://doi.org/10.1527/tjsai.F-H34
Type Local: Journal article (雑誌掲載論文)
ISSN: 1346-0714
Publisher: The Japanese Society for Artificial Intelligence (社団法人 人工知能学会)
URI: http://hdl.handle.net/20.500.12000/37607
Citation: Transactions of the Japanese Society for Artificial Intelligence (人工知能学会論文誌), Vol. 32, No. 5
Appears in Collections:Peer-reviewed Journal Articles (Faculty of Engineering)

Files in This Item:

File: Vol37no5F.pdf | Size: 1722 KB | Format: Adobe PDF