DAiSEE

Dataset for Affective States in E-Environments

Abhay Gupta, Arjun D'Cunha, Kamal Awasthi, Vineeth Balasubramanian

The difference between the real and virtual worlds is shrinking at an astounding pace. With more and more users working on computers to perform a myriad of tasks, from online learning to shopping, interaction with such systems is an integral part of life. In such cases, recognizing a user's level of engagement with the system they are interacting with can change the way the system interacts back with the user. This will not only lead to better engagement with the system but also pave the way for better human-computer interaction. Recognizing user engagement can therefore play a crucial role in several contemporary vision applications, including advertising, healthcare, autonomous vehicles, and e-learning. However, the lack of any publicly available dataset for recognizing user engagement severely limits the development of methodologies that can address this problem. To address this gap, we introduce DAiSEE, the first multi-label video classification dataset comprising 9068 video snippets captured from 112 users for recognizing the user affective states of boredom, confusion, engagement, and frustration "in the wild". The dataset has four levels of labels, namely very low, low, high, and very high, for each of the affective states; these labels are crowd-annotated and correlated with a gold-standard annotation created by a team of expert psychologists. We have also established benchmark results on this dataset using the state-of-the-art video classification methods available today. We believe that DAiSEE will provide the research community with challenges in feature extraction, context-based inference, and the development of suitable machine learning methods for related tasks, thus providing a springboard for further research.

[Sample clips illustrating the four affective states: Bored, Confused, Engaged, Frustrated]

Download

We kindly request you to fill out the form to download the dataset. Completing the form takes less than a minute. Please note that by completing the form you agree to the terms and conditions of using DAiSEE.

Kindly note that the downloadable file is about 15 GB. Make sure that you have a reliable internet connection and sufficient space on your system. The README file contains all the important information about the dataset. We have also included a few scripts to get you started :)
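
If you would like to start exploring clips before running the bundled scripts, the minimal sketch below samples frames from a single clip with OpenCV. It is not one of the included scripts, and the example path is a placeholder; the actual directory layout is described in the README.

    # Minimal frame-sampling sketch (not one of the bundled scripts).
    # Requires OpenCV (pip install opencv-python) and NumPy.
    import cv2
    import numpy as np

    def sample_frames(clip_path, num_frames=16):
        """Uniformly sample num_frames RGB frames from a video clip."""
        cap = cv2.VideoCapture(clip_path)
        total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        indices = np.linspace(0, max(total - 1, 0), num_frames).astype(int)
        frames = []
        for idx in indices:
            cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        cap.release()
        return np.stack(frames)

    # Placeholder path -- substitute a clip from your extracted copy:
    # frames = sample_frames("DAiSEE/DataSet/Train/<user_id>/<clip_id>/<clip_id>.avi")
    # print(frames.shape)  # (16, height, width, 3)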

Dataset Sample

Here are a few sample pictures from the dataset:

[Sample frames with their label levels: Bored:0, Engaged:0, Bored:1, Engaged:3, Confused:1, Confused:2, Bored:2, Frustrated:2, Engaged:2, Frustrated:3]
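
The numeric levels in these captions appear to follow the four-level scale described above (0 = very low through 3 = very high). The sketch below shows one way to load DAiSEE-style labels onto that scale; the CSV file name and column names are assumptions, so please check the README for the actual label file format.

    # Sketch for reading label levels into named categories.
    # The file name and column names below are assumptions; consult the
    # README for the actual label files shipped with the dataset.
    import csv

    LEVEL_NAMES = {0: "very low", 1: "low", 2: "high", 3: "very high"}
    STATES = ("Boredom", "Engagement", "Confusion", "Frustration")

    def read_labels(csv_path):
        """Return a dict mapping clip id -> {affective state: level (0-3)}."""
        labels = {}
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                labels[row["ClipID"]] = {s: int(row[s]) for s in STATES}
        return labels

    # Hypothetical usage:
    # labels = read_labels("DAiSEE/Labels/TrainLabels.csv")
    # clip_id, levels = next(iter(labels.items()))
    # print(clip_id, {s: LEVEL_NAMES[v] for s, v in levels.items()})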

Disclaimer

I agree to the following items:

  • To cite Gupta, D'Cunha, Awasthi & Balasubramanian (2018) and Kamath et al. (2016) in any paper of mine or my collaborators that makes any use of the dataset. The references are:
    • A. Gupta, A. D'Cunha, K. Awasthi, V. Balasubramanian, DAiSEE: Towards User Engagement Recognition in the Wild, arXiv preprint arXiv:1609.01885
    • A. Kamath, A. Biswas, V. Balasubramanian, A Crowdsourced Approach to Student Engagement Recognition in e-Learning Environments, IEEE Winter Conference on Applications of Computer Vision (WACV'16)
  • To use the images for research purposes only.
  • Not to provide the images to third parties.
  • If I reproduce images in electronic or print media, to use only those from the following subjects and to include a notice of copyright (© Vineeth Balasubramanian).

Contact us

Email us - daisee-dataset@iith.ac.in

Give us a week to answer your queries :)