The video file is a sample from the GTEA Gaze+ (Georgia Tech Egocentric Activity) dataset. This dataset is widely used in computer vision research for egocentric (first-person) action recognition and hand-object interaction analysis. The primary research paper associated with this dataset was published at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.

Key dataset details:
- Activities of daily living (cooking) recorded with a head-mounted camera and an eye-tracker.
- It is often used to benchmark models that predict where a person is looking and what action they are performing simultaneously.


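Benchmarks of this kind need to pair each video frame with the eye-tracker reading closest to it in time, since the tracker usually samples faster than the camera. A minimal sketch of that alignment step is below; the sampling rates, normalized gaze coordinates, and function name are illustrative assumptions, not the dataset's actual format.

```python
def gaze_for_frame(frame_idx, fps, gaze_samples, gaze_hz):
    """Map a video frame index to the nearest eye-tracker sample.

    gaze_samples: list of (x, y) normalized gaze points sampled at gaze_hz.
    The eye tracker typically runs faster than the camera, so several
    gaze samples fall within each frame; we take the nearest in time.
    """
    t = frame_idx / fps                # frame timestamp in seconds
    i = round(t * gaze_hz)             # nearest gaze-sample index
    i = min(i, len(gaze_samples) - 1)  # clamp at the end of the recording
    return gaze_samples[i]

# Hypothetical example: 30 fps video, 60 Hz eye tracker, 2 s of gaze data.
samples = [(0.5 + 0.001 * k, 0.5) for k in range(120)]
print(gaze_for_frame(30, fps=30, gaze_samples=samples, gaze_hz=60))
```

With a 60 Hz tracker and a 30 fps camera, frame 30 (at t = 1.0 s) maps to gaze sample 60; an action-recognition or gaze-prediction model would then consume the frame and its paired gaze point together.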