
HearAI: Towards Deep Learning-Based Sign Language Recognition

11:50 - 12:10, 6th of October (Thursday) 2022 / ARENA STAGE

Deaf and hearing-impaired people face a significant communication barrier. Different countries use different sign languages, and there is no universal one: sign languages are natural human languages with their own grammatical rules and lexicons. Deep learning-based methods for sign language translation need large amounts of adequately labeled training data to perform well.

In the HearAI non-profit project, we addressed this problem and investigated different multilingual open sign language corpora labeled by linguists in the language-agnostic Hamburg Notation System (HamNoSys).

First, we simplified the difficult-to-understand structure of the HamNoSys without significant loss of gloss meaning by introducing numerical multilabels.
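The idea of replacing a HamNoSys string with independent numerical labels can be sketched as follows. This is a minimal illustration only: the attribute names, class indices, and the dict-based input are assumptions for the example, not the project's actual label set or parser.

```python
# Hypothetical sketch of numerical multilabel encoding for one gloss.
# Attribute names and class counts are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class GlossMultilabel:
    symmetry: int       # e.g. 0 = none, 1 = mirrored, ...
    hand_shape: int     # index into a hand-shape inventory
    hand_position: int  # palm/finger orientation class
    location: int       # body-relative location class


def encode(annotation: dict) -> GlossMultilabel:
    """Map a parsed HamNoSys annotation (here a plain dict) to one
    numerical label per notation component."""
    return GlossMultilabel(
        symmetry=annotation["symmetry"],
        hand_shape=annotation["hand_shape"],
        hand_position=annotation["hand_position"],
        location=annotation["location"],
    )


label = encode({"symmetry": 1, "hand_shape": 4, "hand_position": 2, "location": 7})
```

Decomposing the notation this way lets each component be treated as its own small classification problem instead of predicting one very large joint label.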

Second, we utilized estimated pose landmarks together with image-level features from selected video keyframes to recognize isolated glosses. We separately analyzed the dominant hand's location, position, and shape, as well as overall movement symmetry, which allowed us to explore in depth how useful HamNoSys is for gloss recognition.
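Recognizing each HamNoSys component separately can be pictured as independent classification heads over a shared feature vector. The sketch below uses random linear heads purely to show the structure; the head names, class counts, and feature dimension are assumptions, not the HearAI models.

```python
# Hypothetical sketch: one independent classification head per HamNoSys
# component, applied to a shared feature vector (e.g. pose landmarks
# concatenated with keyframe image features). Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
feat_dim = 128
heads = {            # attribute -> number of classes (assumed values)
    "location": 8,
    "hand_position": 6,
    "hand_shape": 12,
    "symmetry": 3,
}
# Untrained linear heads, used here only to demonstrate the layout.
weights = {name: rng.standard_normal((feat_dim, n)) for name, n in heads.items()}


def predict(features: np.ndarray) -> dict:
    """Return the arg-max class per attribute for one feature vector."""
    return {name: int(np.argmax(features @ w)) for name, w in weights.items()}


preds = predict(rng.standard_normal(feat_dim))
```

Keeping the heads separate is what makes it possible to measure, per component, how much each part of the notation contributes to gloss recognition.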

TRACK:
AI/ML Tech4Good