Abstract


The ultimate goal of this study is to construct an affective interface that realizes human-computer empathy through facial expression. As a basic study, this paper describes real-time processing methods for facial expression recognition and synthesis grounded in findings from psychology. For recognition, 18 feature values are first calculated from 16 feature points set on the facial organs, which are extracted from facial images captured by a color CCD camera; the degrees of the six basic expressions are then recognized from these feature values by fuzzy estimation. For synthesis, the expression to be presented is first converted into facial muscle contractions, and the skin deformation caused by those contractions on a 3D polygon model is calculated with muscle models whose contraction rates and time variations were determined in advance by subjective experiments. The authors have constructed prototype systems and confirmed that they operate in real time.
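The recognition step, which maps feature values to degrees of the six basic expressions by fuzzy estimation, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the membership-function parameters, the rule table, and the choice of which feature drives which expression are all hypothetical placeholders.

```python
# Hypothetical sketch: fuzzy estimation of expression degrees from
# facial feature values. Parameters below are illustrative, not the
# values used in the paper.

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

EXPRESSIONS = ["happiness", "sadness", "surprise", "anger", "disgust", "fear"]

# Hypothetical rules: each expression is driven by one feature index and
# a (a, b, c) membership function; real systems combine many features.
RULES = {
    "happiness": (0, (0.2, 0.6, 1.0)),  # e.g. mouth-corner rise
    "sadness":   (1, (0.2, 0.5, 0.8)),
    "surprise":  (2, (0.3, 0.7, 1.1)),  # e.g. eye opening
    "anger":     (3, (0.1, 0.5, 0.9)),  # e.g. brow lowering
    "disgust":   (4, (0.2, 0.5, 0.8)),
    "fear":      (5, (0.2, 0.5, 0.8)),
}

def expression_degrees(features):
    """Return the degree of each basic expression in [0, 1]."""
    degrees = {}
    for expr in EXPRESSIONS:
        idx, mf = RULES[expr]
        degrees[expr] = tri(features[idx], *mf)
    return degrees

# Six of the 18 feature values (illustrative, normalized to [0, 1]).
feats = [0.6, 0.1, 0.2, 0.3, 0.4, 0.5]
print(expression_degrees(feats))
```

Per-frame evaluation like this is cheap (a handful of arithmetic operations per rule), which is consistent with the real-time requirement stated above.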
