Facial expressions indicate the mood of a person. If a system can recognize a person's expressions, it can detect their emotions and intentions. Facial expression recognition is a field within face analysis that relies on computer vision, machine learning and image processing. Much research is taking place in these fields to improve face recognition. In this project, we look at detecting expressions using 3D range images. The objective is to differentiate between a smiling face and a neutral face using this technology.

Expressions are generated by the expansion or contraction of facial muscles. As a result of this muscular activity, facial features such as the nose, lips, eyelids, eyebrows and skin texture are temporarily deformed. This happens for a short duration, in the range of 250 ms to 5 s. The stages of each expression are onset (attack), apex (sustain) and offset (relaxation). In contrast to these spontaneous expressions, posed or deliberate expressions are very common in social interactions; they typically last longer than spontaneous expressions.
The first stage of this project is to collect smiling and neutral face images of a person and store them in a database. In 3D facial expression recognition, registration is the first step of preprocessing. To register the face image, a method based on the symmetric property of the face is used. Trilinear interpolation was used to convert the 3D scan from a triangulated mesh format to a range image with a sampling interval of 2.5 mm. The scanning process results in face surfaces that contain unwanted holes in areas covered by dark hair. To circumvent this problem, the cubic spline interpolation method was used to patch the holes.
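The hole-patching step can be sketched as follows. This is a minimal illustration, not the project's actual implementation: it assumes the range image is a 2D array of depth values with holes marked as NaN, and it uses SciPy's `griddata` with cubic (piecewise-cubic spline) interpolation to fill them; the function name `fill_holes` and the nearest-neighbour fallback for points outside the convex hull are choices of this sketch.

```python
import numpy as np
from scipy.interpolate import griddata

def fill_holes(range_image):
    """Patch NaN holes in a 2-D range image by cubic interpolation.

    range_image : 2-D float array of depth (Z) values; holes are NaN.
    Returns a copy with every hole filled.
    """
    h, w = range_image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    known = ~np.isnan(range_image)

    # Interpolate the missing depths from the surrounding valid samples.
    filled = griddata(
        (ys[known], xs[known]),   # coordinates of valid pixels
        range_image[known],       # their depth values
        (ys, xs),                 # evaluate over the full grid
        method="cubic",
    )

    # Cubic griddata leaves NaNs outside the convex hull of the valid
    # samples; fall back to nearest-neighbour there.
    still_nan = np.isnan(filled)
    if still_nan.any():
        filled[still_nan] = griddata(
            (ys[known], xs[known]), range_image[known],
            (ys[still_nan], xs[still_nan]), method="nearest",
        )
    return filled
```

On a smooth facial surface, the interpolated depths inside a small hole closely match the surrounding geometry, which is what makes this approach adequate for patching hair-induced gaps.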
Contraction of the Zygomatic Major muscle generates a smile. This muscle originates in the cheek bone and inserts into the muscles near the corner of the mouth. When a person smiles, the cheek muscle bulges and the corners of the mouth are lifted.
The steps used to extract the smiling expression from a 3D range facial image are:
- To obtain the coordinates of points A, B, C, D and E in the face range image shown in the figure, a new algorithm is developed. A and D are the extreme points of the base of the nose; B and E are the corners of the mouth; C is the middle of the lower lip.
- The first feature is the width of the mouth, BE, normalized by the length of AD. Obviously, while smiling the mouth becomes wider. This feature is denoted MW.
- The second feature is the depth of the mouth (the difference between the Z coordinates of points B and C, and of E and C) normalized by the height of the nose, to capture the fact that the smiling expression pulls the mouth back. This feature is denoted MD.
- The third feature is the uplift of the corners of the mouth relative to the middle of the lower lip, d1 and d2, as shown in Figure 1, normalized by the difference of the Y coordinates of points A and B, and of D and E, respectively. This feature is denoted LC.
- The fourth feature is the angle of AB and DE with the central vertical profile, denoted AG.
- The last two features are extracted from the semicircular areas defined by using AB and DE as diameters. The histograms of the range (Z coordinates) of all the points within these two semicircles are calculated.
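The feature computations above can be sketched in code. This is an illustrative interpretation, not the paper's exact formulas: it assumes the five landmarks are (x, y, z) triples in a registered frame with Y pointing up and Z towards the scanner, takes the nose height as a caller-supplied parameter, and averages the left and right sides for MD, LC and AG; the function names and these averaging choices are assumptions of this sketch.

```python
import numpy as np

def smile_features(A, B, C, D, E, nose_height):
    """Compute the four scalar smile features from the five landmarks.

    A, D : extreme points of the base of the nose
    B, E : corners of the mouth
    C    : middle of the lower lip
    """
    A, B, C, D, E = (np.asarray(p, dtype=float) for p in (A, B, C, D, E))

    # MW: width of the mouth BE normalised by the length of AD.
    MW = np.linalg.norm(E - B) / np.linalg.norm(D - A)

    # MD: how far the mouth corners are pulled back (in Z) relative to the
    # lower-lip middle C, normalised by the nose height.
    MD = ((C[2] - B[2]) + (C[2] - E[2])) / (2.0 * nose_height)

    # LC: uplift of the mouth corners (d1, d2) relative to C, normalised by
    # the Y differences of A-B and D-E respectively.
    d1, d2 = B[1] - C[1], E[1] - C[1]
    LC = 0.5 * (d1 / (A[1] - B[1]) + d2 / (D[1] - E[1]))

    # AG: mean angle of the segments AB and DE with the vertical direction
    # of the central profile.
    def angle_to_vertical(p, q):
        v = q - p
        cos_a = abs(v[1]) / np.linalg.norm(v)
        return np.degrees(np.arccos(np.clip(cos_a, 0.0, 1.0)))

    AG = 0.5 * (angle_to_vertical(A, B) + angle_to_vertical(D, E))
    return {"MW": MW, "MD": MD, "LC": LC, "AG": AG}

def range_histogram(z_values, bins=10):
    """Normalised histogram of the Z coordinates of the points inside one
    of the two semicircular areas (the last two features)."""
    hist, _ = np.histogram(np.asarray(z_values, dtype=float),
                           bins=bins, density=True)
    return hist
```

With plausible landmark positions, a smiling face yields a larger MW (wider mouth), a positive MD (corners pulled back) and a larger LC (corners lifted) than a neutral face, which is the separation the classifier relies on.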