Andres Jessé Porfirio, MSc, 2013

LIBRAS Sign Language Hand Configuration Recognition Based on 3D Meshes (Reconhecimento das Configurações de Mão da LIBRAS a Partir de Malhas 3D)

Author: Andres Jessé Porfirio

Supervisor: Daniel Weingaertner

Automatic recognition of Sign Language signs is an important process that improves the accessibility of digital media for hearing-impaired people. Additionally, sign recognition enables communication between deaf people and hearing people who do not understand Sign Language. The sign recognition approach used in this work is based on the global parameters of LIBRAS (Brazilian Sign Language): hand configuration, location (point of articulation), movement, palm orientation and facial expression. These parameters are combined to compose signs, much as phonemes are combined to form words in spoken (oral) languages.

This work presents a way to recognize one of the LIBRAS global parameters, the hand configuration, from 3D meshes. Brazilian Sign Language has 61 hand configurations. This work used a database containing 610 videos of 5 different users signing each hand configuration twice at distinct times, totaling 10 captures per hand configuration. Two pictures, depicting the front and side views of the hand, were manually extracted from each video. These pictures were segmented and pre-processed, and then used as input to the 3D reconstruction.

The 3D meshes were generated from the front and side images of each hand configuration using the Shape from Silhouette technique. The hand configurations were then recognized from the 3D meshes using a Support Vector Machine (SVM) classifier. The features used to distinguish the meshes were obtained with the Spherical Harmonics method, a 3D mesh descriptor that is rotation, translation and scale invariant. Results achieved an average hit rate of 96.83% at Rank 3, demonstrating the efficiency of the method.
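The core of the two-view Shape from Silhouette reconstruction can be illustrated with a minimal sketch (assuming binary silhouette masks aligned so both views share the vertical axis; the array layout and function name are illustrative and not the actual script's API):

```python
import numpy as np

def carve_voxels(front_sil, side_sil):
    """Shape from Silhouette with two orthogonal views: a voxel (y, x, z)
    is kept only if it projects inside the front silhouette (rows=y,
    cols=x) AND inside the side silhouette (rows=y, cols=z)."""
    assert front_sil.shape[0] == side_sil.shape[0], "views must share height"
    # Broadcasting extrudes each silhouette along its viewing direction;
    # the logical AND intersects the two extrusions into a visual hull.
    return front_sil[:, :, None].astype(bool) & side_sil[:, None, :].astype(bool)

# Toy example: a 2x2 front view and a 2x2 side view give a 2x2x2 volume.
front = np.array([[1, 1],
                  [1, 0]])
side = np.array([[1, 0],
                 [1, 1]])
volume = carve_voxels(front, side)
print(volume.shape)  # (2, 2, 2)
```

The mesh surface would then be extracted from the boundary of this voxel volume; the actual pipeline performs this step inside Blender.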

Text & Files

  • Reconhecimento das Configurações de Mão da LIBRAS a Partir de Malhas 3D (Dissertation, Presentation)
  • Article “LIBRAS Sign Language Hand Configuration Recognition Based on 3D Meshes”. (SMC 2013)
  • LIBRAS-HC-RGBDS Video Database
  • Pre-Processed JPEG Database: IN_2i
  • Scripts Package
    • 3D reconstruction script;
    • Blender script invoked when the 3D reconstruction script is called;
    • Spherical Harmonics signature computation script;
    • getsig: Spherical Harmonics extraction tool by Michael Kazhdan;
    • Script used to separate the signatures per class;
    • Classification script;
    • Script used to generate logs during classification;
    • pre_processing_tools.tar.gz: tools used in the database pre-processing;
    • libsvm 3.14.

Setup Instructions

The 3D reconstruction and classification require the following environment setup:

  1. Install Blender 3D 2.63a:
    • For x64 Linux machines: blender-2.63a-linux-glibc27-x86_64.tar.bz2
    • Uncompress it, move it to /usr/local/ and create a symbolic link to the blender executable at /usr/local/bin/;
    • Test it: $ blender (the program must launch);
  2. Install Python 2.7.3;
  3. Libsvm 3.14 is already included in the Scripts package (compiled for x64 linux machines);
  4. Download the Scripts Package.
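The setup above can be sanity-checked with a small helper (a hypothetical convenience snippet, not part of the original Scripts Package):

```python
import shutil

def missing_tools(tools=("blender", "python")):
    """Return the required executables that are not found on the PATH."""
    return [t for t in tools if shutil.which(t) is None]

missing = missing_tools()
if missing:
    print("Please install:", ", ".join(missing))
else:
    print("Environment looks ready.")
```

Note that shutil.which requires Python 3.3+; under the Python 2.7 environment the original project targets, distutils.spawn.find_executable provides the same PATH lookup.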


Experiments Reproduction

The experiments can be reproduced by following this process:

  1. The first step is to convert the LIBRAS-HC-RGBDS videos to JPEG (each video generates one image sequence, where each image corresponds to one frame) and manually prepare the images for the 3D reconstruction: frame selection -> smooth -> crop -> noise correction. The result must be stored in the folder “IN_2i”.
    • To reproduce the experiment you can skip this step and use the provided IN_2i database. Download the Pre-Processed IN_2i file and the Scripts Package to proceed;
      • The correction of the images (JPEGs converted from LIBRAS-HC-RGBDS) was done by hand using the GIMP software, and some tools (check the pre_processing_tools.tar.gz package) were developed to help with this process:
        • a tool that smooths the image to remove Kinect noise;
        • a tool that crops the image, keeping just the hand area;
        • Note: these tools make use of the python-opencv package;
    • Uncompress the Database and the Scripts in the same folder;
  2. $python
  3. $python IN_2i OUT_2i
    • This script performs the 3D reconstruction using the Shape from Silhouette method.
    • The output is stored in the OUT_2i folder and contains all meshes in two formats: blend and ply;
    • You must delete the OUT_2i folder if it already exists; this is a safety measure to avoid overwriting old experiments.
  4. $python
    • This script computes the signatures from the OUT_2i/PLY meshes.
    • The result is stored in the OUT_2i/SIG folder;
  5. $python
    • This script organizes the signatures for the SVM classification;
    • The result is stored in the T2i folder;
  6. Run the classification script: it performs the SVM training and classification based on the T2i signatures;
    • The results are stored in the TESTS/SVM folder;
    • A log table is generated at TESTS/SVM/mlog.csv containing the results of 10 SVM runs.
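The 96.83% Rank 3 hit rate reported above counts a sample as a hit when its true class appears among the three best-scoring classes. A minimal sketch of that metric (the function name and score layout are illustrative; the actual computation is done by the classification script over the libsvm outputs):

```python
def rank_k_hit_rate(scores, labels, k=3):
    """scores[i] holds one decision value per class for sample i;
    a sample is a hit if its true label is among the k top-scoring classes."""
    hits = 0
    for row, true_label in zip(scores, labels):
        top_k = sorted(range(len(row)), key=lambda c: row[c], reverse=True)[:k]
        hits += true_label in top_k
    return hits / float(len(labels))

# Toy example with 3 samples and 4 classes:
scores = [[0.9, 0.5, 0.1, 0.2],   # true class 0 -> rank-1 hit
          [0.1, 0.2, 0.3, 0.9],   # true class 1 -> rank-3 hit
          [0.9, 0.8, 0.7, 0.1]]   # true class 3 -> miss at k=3
labels = [0, 1, 3]
print(rank_k_hit_rate(scores, labels, k=3))  # ~0.667
```

Averaging this rate over the 10 SVM runs logged in mlog.csv yields the reported figure.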



After processing, the resulting folder structure will be organized as follows:

  • IN_2i: Frames selected, cropped and corrected for the 3D reconstruction;
  • OUT_2i: Product of the 3D reconstruction and signatures;
    • OUT_2i/BLEND: Meshes in Blender 3D format;
    • OUT_2i/PLY: Meshes in Stanford PLY format;
    • OUT_2i/SIG: Spherical Harmonics signatures;
  • T2i: Signatures organized for the SVM classification;
  • TESTS: SVM classification output and logs;
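A finished run can be checked against this layout with a small helper (the expected paths come from the list above; the function itself is a hypothetical convenience, not part of the Scripts Package):

```python
import os

EXPECTED_DIRS = ("IN_2i", "OUT_2i/BLEND", "OUT_2i/PLY", "OUT_2i/SIG",
                 "T2i", "TESTS/SVM")

def missing_dirs(root="."):
    """Return the expected pipeline folders that do not exist under root."""
    return [d for d in EXPECTED_DIRS
            if not os.path.isdir(os.path.join(root, d))]
```

Running missing_dirs() after step 6 should return an empty list; any names it returns point to the pipeline stage that did not complete.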