Briareo Dataset

About the Dataset

Natural user interfaces, in which interaction is not carried out through physical devices, are becoming more and more important in many computer vision fields, since they are extremely user-friendly and intuitive. This type of technology can be an effective way to monitor and reduce driver inattention during the driving activity. To this end, we propose a new dataset, called Briareo, specifically collected for the hand gesture recognition task in the automotive context. We focus on the acquisition of dynamic hand gestures, where each gesture is a combination of motion and one or more hand poses. Images have been collected from an innovative point of view: the acquisition devices are placed in the central tunnel between the driver and passenger seats, oriented towards the car ceiling. In this way, visual occlusions produced by the driver's body are mitigated. Three different sensors are used to acquire the dataset:

  • Traditional RGB camera
  • Time-of-Flight (ToF) depth sensor (pmdtec Pico Flexx)
  • Stereo Camera (Leap Motion)
The collected data has great variability: a high number of subjects and gestures has been recorded from different sources.


  • RGB camera
    • Traditional camera, widely available in different resolutions and form factors
    • Frame rate: 30 fps
  • ToF sensor
    • Pico Flexx
    • Resolution: 224 × 171
    • Frame rate: 45 fps
    • Form factor: 68mm × 17mm × 7.35mm - it can be easily integrated into a car cockpit!
    • Range: 0.1 - 1 meters
  • Stereo Camera
    • Leap Motion
    • Resolution: 640 × 240 raw images, 400 × 400 rectified images
    • Frame rate: 200 fps
    • Form factor: 70mm × 12mm × 3mm - it can be easily integrated into a car cockpit!
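As a rough sketch, the per-sensor frame geometry above can be captured in a small Python structure. The dictionary names, the raw-file layout, and the 16-bit depth encoding below are assumptions for illustration, not the dataset's actual naming scheme or format:

```python
import numpy as np

# Nominal frame geometry for each sensor, taken from the specs above.
# Keys and structure are hypothetical, not the dataset's official naming.
SENSORS = {
    "rgb":            {"fps": 30},                        # resolution depends on the camera used
    "tof":            {"shape": (171, 224), "fps": 45},   # Pico Flexx depth maps (H, W)
    "leap_raw":       {"shape": (240, 640), "fps": 200},  # Leap Motion raw IR images
    "leap_rectified": {"shape": (400, 400), "fps": 200},  # Leap Motion rectified IR images
}

def decode_depth(raw_bytes: bytes) -> np.ndarray:
    """Reshape a flat buffer into a Pico Flexx depth map.

    The uint16 per-pixel encoding is an assumption for illustration;
    the released depth maps may use a different format.
    """
    h, w = SENSORS["tof"]["shape"]
    return np.frombuffer(raw_bytes, dtype=np.uint16).reshape(h, w)
```

A buffer of 171 × 224 × 2 bytes would then decode to a single 224 × 171-pixel depth frame.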
The dataset is composed of 12 gesture classes:
  1. Fist
  2. Pinch
  3. Flip-over
  4. Telephone
  5. Right swipe
  6. Left swipe
  7. Top-down swipe
  8. Bottom-up swipe
  9. Thumb
  10. Index
  11. Clockwise rotation
  12. Counter-clockwise rotation
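For reference, the 12 classes above map naturally to integer labels. The zero-based ordering below simply follows the list and is an assumption; the official label encoding shipped with the dataset may differ:

```python
# Zero-based label mapping for the 12 gesture classes listed above.
# The numeric ordering is an assumption for illustration.
GESTURES = [
    "fist", "pinch", "flip-over", "telephone",
    "right swipe", "left swipe", "top-down swipe", "bottom-up swipe",
    "thumb", "index", "clockwise rotation", "counter-clockwise rotation",
]
LABEL_TO_ID = {name: i for i, name in enumerate(GESTURES)}
```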
A total of 40 subjects (33 males and 7 females) have taken part in the data collection. Every subject has performed each gesture 3 times, leading to 120 collected sequences per gesture class. Each sequence lasts at least 40 frames. At the end of this procedure, we record an additional sequence including all hand gestures in a single recording. The three cameras have been synchronized so that the frames at a given instant depict the same scene. The following data are released within the dataset: RGB images, depth maps and infrared intensities (Pico Flexx), raw and rectified infrared images (Leap Motion), and 3D hand joints (Leap Motion SDK).
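The collection statistics above imply the following counts (simple arithmetic, shown for clarity):

```python
# Dataset collection statistics, taken from the description above.
SUBJECTS = 40
GESTURE_CLASSES = 12
REPETITIONS = 3       # times each subject performs each gesture
MIN_FRAMES = 40       # minimum frames per sequence

sequences_per_gesture = SUBJECTS * REPETITIONS              # 120
total_sequences = sequences_per_gesture * GESTURE_CLASSES   # 1440

# Lower bound on frames in the per-gesture recordings
# (excluding the extra all-gestures sequence per subject).
min_total_frames = total_sequences * MIN_FRAMES

print(sequences_per_gesture, total_sequences)  # 120 1440
```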


To obtain a copy of the dataset, please send an email to s.pini[at]unimore[dot]it and guido.borghi[at]unimore[dot]it stating:

  1. Your name, title, and affiliation
  2. Your intended use of the data
  3. The following statement: 
You are hereby given permission to copy this data in electronic or hardcopy form for your own scientific use and to distribute it for scientific use to colleagues within your research group. Inclusion of rendered images or video made from this data in a scholarly publication (printed or electronic) is also permitted. In this case, credit must be given to the publication. However, the data may not be included in the electronic version of a publication, nor placed on the Internet. These restrictions apply to any representations (other than images or video) derived from the data, including but not limited to simplifications, remeshing, and the fitting of smooth surfaces. The making of physical replicas of this data is prohibited, and the data may not be distributed to students in connection with a class. For any other use, including distribution outside your research group, written permission is required. Any commercial use of the data is prohibited. Commercial use includes but is not limited to sale of the data, derivatives, replicas, images, or video, inclusion in a product for sale, or inclusion in advertisements (printed or electronic), on commercially-oriented web sites, or in trade shows.

An email will be sent to you with instructions for obtaining the dataset.


We sincerely thank all the people who participated in the experiments that led to the creation of this dataset.


We believe in open research and we are happy if you find this data useful. If you use it, please cite our paper.

@inproceedings{manganaro2019hand,
  title={Hand Gestures for the Human-Car Interaction: the {B}riareo dataset},
  author={Manganaro, Fabio and Pini, Stefano and Borghi, Guido and Vezzani, Roberto and Cucchiara, Rita},
  booktitle={20th International Conference on Image Analysis and Processing (ICIAP)},
  year={2019}
}