Project suggestions from Prof Hatice Gunes

1. Detect the Face
The first step is to identify the face within a video frame. Researchers like Gunes often use standard detection techniques to isolate the facial area, ensuring that background noise does not interfere with the feature extraction.

2. Compute Optical Flow
To capture movement over time, optical flow is calculated between two consecutive frames. This process determines the magnitude and direction of pixel movement within the detected facial region.

3. Extract the Angle Feature
The optical-flow vectors are then reduced to an angle (such as a 2D head-motion angle) that summarizes the direction of movement within the facial region.

4. Model Temporal Dynamics
The gesture is segmented into its temporal phases (onset, apex, offset). This ensures the machine can recognize when a gesture starts, peaks in intensity, and ends.

5. Fuse Multi-modal Data
Gunes's work often emphasizes multi-modal fusion, where facial features are combined with other modalities such as body gestures or audio markers (e.g., MFCCs) to improve the accuracy of emotion recognition.

✅ Result
The generated feature is a compact descriptor (such as a 2D head motion angle) that allows a system to classify human affective states or non-verbal behaviors.
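The optical-flow and angle-extraction steps above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: in practice a dense flow field would come from an optical-flow routine such as OpenCV's `cv2.calcOpticalFlowFarneback`, whereas here a synthetic flow field and the helper name `dominant_motion_angle` are assumptions made for the example.

```python
import numpy as np

def dominant_motion_angle(flow):
    """Reduce a dense optical-flow field (H x W x 2, per-pixel (dx, dy)
    displacements) to a single 2D motion angle in degrees.

    Averaging the displacement vectors over the facial region gives the
    dominant direction of movement between two consecutive frames.
    """
    dx = flow[..., 0].mean()  # mean horizontal displacement
    dy = flow[..., 1].mean()  # mean vertical displacement
    return float(np.degrees(np.arctan2(dy, dx)))

# Synthetic flow field standing in for a real optical-flow result:
# every pixel moves one pixel right and one pixel down.
flow = np.ones((48, 48, 2))
angle = dominant_motion_angle(flow)  # 45.0 degrees
```

The resulting scalar angle (optionally paired with the mean magnitude) is the kind of compact per-frame feature that can then be tracked over time or fused with other modalities.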