Building an affective video dataset for emotion elicitation

Emotion Regulation Assistance System based on EEG features and Emotion Recognition, 2013. (This is a detailed description; for a short version, please refer to my CV: FengQianliCV.)

The main purpose of this project is to build a neuro-feedback system for emotion regulation. The system is designed to help train astronauts and soldiers, and to help patients who need to enhance their capability of self-regulating their emotional states. The project team consists of one master's student and three undergraduates. Our work (the three undergraduates') focuses on building an affective video and image dataset for emotion elicitation. As the leader of the undergraduate team, I am also responsible for all the paperwork of the project.

In case you are not familiar with this topic, let me briefly explain the motivation behind building this video dataset. (If you are not interested in the background, please skip the next two paragraphs.)

To build this system, an algorithm to classify different emotions is required, and training data is needed to train the classifier. The training data should be EEG signals corresponding to different emotional states. To obtain such data, we must record EEG signals while participants are experiencing the specific emotions we are interested in; that is to say, we need to record EEG signals during an emotion elicitation experiment.

There are multiple ways to elicit emotions: text, music, images, videos, and interaction with confederates. Among these methods, emotion elicited by reading text or listening to music is relatively weak and limited in categories, which may not be good enough to obtain EEG signals suitable for training. Image elicitation was the most widely used method at the time: one can use the IAPS (International Affective Picture System) to elicit specific emotions from participants by showing affective pictures. However, some studies indicate that IAPS does not perform well among Chinese participants because of cultural differences. Video is another elicitation method, and studies show that it has relatively high ecological validity. However, similar to the images, the existing affective video datasets were not built for Chinese audiences. Given this background, building our own affective video dataset is a good choice for this project.


To build the video dataset, over 100 video clips were first selected from online resources in mainland China to form a candidate video pool. These videos include films, TV shows, and family videos shared on the internet. The three of us then went through all the videos and marked each as high, medium, or low according to the elicitation intensity we felt. Any video marked "high" by any of us entered the second candidate pool for the evaluation experiment; this second pool contains 40 video clips.

The evaluation experiment was conducted online. After contacting any of the three group members, an interested participant received by email a consent form, a TAS-20 (Toronto Alexithymia Scale), a BDI (Beck Depression Inventory), detailed experiment instructions, and a set of URLs of the videos. The target participants were undergraduate students aged 18-22, the same group as the expected participants in the subsequent elicitation experiment. The evaluation experiment was advertised during the winter vacation of 2013 to ensure that participants were not disturbed by schoolwork pressure. 29 undergraduates from 19 universities took part in the experiment.

The self-report scale used in the online evaluation experiment contains 3 sections:

  1. a 7-point Likert scale for 6 basic emotions. Participants are asked to rate the strongest feeling of each emotion during video playback. "0" means "I did not experience this emotion at all"; "8" means "the emotion I felt in this video is the strongest I have ever experienced in my life."
  2. a 5-point Self-Assessment Manikin (SAM) scale for valence and arousal. In this section participants rate the valence and arousal they experienced during video playback.
  3. a section for the temporal order of elicited emotions. In this project we are more interested in eliciting a single emotion than compound or mixed emotions, but with only the above two sections it is impossible to know whether the reported emotions were elicited simultaneously or independently in time. So we ask participants to rank the emotions in the order they arose: "1" indicates the first elicited emotion, "2" the second, and so forth. Repeated ranks are allowed if emotions were elicited simultaneously.

After the experiment, 16 videos were selected according to the means and variances of their ratings and whether the target emotion was mixed with other emotions.
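The selection step can be sketched as a simple filter: keep a video if its strongest-rated (target) emotion has a high enough mean, low enough variance across participants, and clearly dominates the other emotions. The thresholds and the dominance criterion below are illustrative assumptions, not the exact cutoffs we used.

```python
# Sketch of the video-selection step. Thresholds (min_mean, max_var, purity)
# are hypothetical; the real selection used our own criteria.
from statistics import mean, pvariance

def select_videos(ratings, min_mean=4.0, max_var=2.0, purity=2.0):
    """ratings: {video_id: {emotion: [per-participant scores]}}.
    Returns the video_ids whose target emotion passes all three cuts."""
    selected = []
    for vid, by_emotion in ratings.items():
        means = {e: mean(scores) for e, scores in by_emotion.items()}
        target = max(means, key=means.get)          # strongest-rated emotion
        others = [m for e, m in means.items() if e != target]
        if (means[target] >= min_mean                         # strong enough
                and pvariance(by_emotion[target]) <= max_var  # consistent
                and (not others or means[target] >= purity * max(others))):
            selected.append(vid)                              # not mixed
    return selected

demo = {
    "clip01": {"sadness": [6, 5, 6, 5], "anger": [1, 2, 1, 1]},
    "clip02": {"fear": [3, 6, 1, 5], "disgust": [3, 5, 2, 4]},  # weak, mixed
}
print(select_videos(demo))  # -> ['clip01']
```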

The dataset also contains an affective image section, based on the IAPS (International Affective Picture System) but excluding the images that did not effectively elicit the target emotions in our participants.

This project was entered in the 2013 "Challenge Cup" Undergraduate Extracurricular Science and Technology Works Competition at Tianjin University and won third prize.