National Winner

Eshara - Arabic Sign Language Translation

An innovative sign language translation system that uses deep learning and avatar technology to facilitate seamless communication between hard-of-hearing individuals and their communities.

  • How Eshara works, in a nutshell.

  • Full walkthrough of our project, along with a demo of the working interface.

  • Avatar performing the animated sign gesture, initially typed out by the non-signer.

  • Our 4 avatars used to perform the animated signs, for an added personal touch.

  • Signer spelling out their name to the interface. Watch how the translations appear!

  • First prototype deployment of our model, not very user-friendly yet.

What it does

Eshara is an innovative Arabic sign language translation system that uses deep learning and avatar technology to facilitate seamless communication between deaf and hard-of-hearing individuals and their hearing counterparts.


Your inspiration

Our inspiration for this project came from interviews with the African Sign Language Resources Center. One question of theirs touched us profoundly: was this just a project for us, or were we genuinely committed to helping their community? Witnessing the daily challenges deaf and hard-of-hearing individuals face in communicating with those around them, and the lack of accessible solutions, motivated us to leverage technology to bridge this gap. Our desire to create an inclusive world where everyone can communicate effortlessly, regardless of hearing ability, led to the development of a system built on deep learning and avatars.


How it works

The project takes an innovative approach to bridging communication gaps for the hard-of-hearing community in the Middle East by combining video analysis with deep learning. A mobile phone records signers performing Arabic Sign Language. The system processes these recordings to extract key features of the signer's hand movements, then passes them to a custom-built deep learning model, trained and tested on a proprietary dataset of over 60,000 videos of Arabic Sign Language, to translate the signs accurately. The project also features a custom-made avatar system that translates natural language back into sign language for illiterate users: animated avatars perform the sign gestures corresponding to the input text, making communication bidirectional. The product enhances accessibility and inclusivity for the hard-of-hearing community in the Middle East.
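
As a rough illustration of the recognition direction, the sketch below extracts per-frame hand landmarks from a recorded clip with MediaPipe and feeds the resulting sequence to a trained sequence classifier. The feature layout, the `model` object, and the label list are illustrative assumptions, not Eshara's actual implementation.

```python
# Hypothetical sketch of the recognition pipeline: video -> hand landmarks
# -> sequence classifier. Requires opencv-python and mediapipe.
import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands

def extract_landmark_sequence(video_path: str) -> np.ndarray:
    """Return a (frames, 126) array: 2 hands x 21 landmarks x (x, y, z)."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp_hands.Hands(static_image_mode=False, max_num_hands=2) as hands:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            coords = np.zeros(2 * 21 * 3)  # stays zero when a hand is hidden
            if result.multi_hand_landmarks:
                for h, hand in enumerate(result.multi_hand_landmarks[:2]):
                    pts = [(lm.x, lm.y, lm.z) for lm in hand.landmark]
                    coords[h * 63:(h + 1) * 63] = np.asarray(pts).ravel()
            frames.append(coords)
    cap.release()
    return np.stack(frames)

# seq = extract_landmark_sequence("sign_clip.mp4")
# probs = model.predict(seq[np.newaxis, ...])  # 'model': trained classifier (assumed)
# print(labels[int(probs.argmax())])           # 'labels': assumed sign vocabulary
```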


Design process

Data collection and preparation. To build a robust model, we developed a comprehensive dataset of 62,000 videos. This dataset was crucial for training our deep learning model to accurately recognize and translate sign language.

Model training and evaluation. We experimented with different representations of the signs, training the model on each to determine the most effective one. After rigorous testing and evaluation, we selected the representation that provided the highest accuracy and reliability. The trained model was first deployed on a command-line interface, which let us verify its performance in real time and make adjustments based on the results.

Web application development. To make the model accessible to a broader audience, we developed a web application that hosts the model. Users sign gestures in front of their phone camera, and the app translates them into text in real time.

Avatar creation and customization. In parallel with the model development, we created custom avatars for translating text messages back into sign language. Each avatar was meticulously animated and customized to resemble a real human, using ourselves as inspiration, which added a personal touch.
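
To make the training step concrete, here is a minimal sketch of one possible sequence classifier over padded landmark sequences, matching the feature layout sketched earlier. The architecture, class count, and hyperparameters are assumptions for illustration, not the project's actual model or its chosen sign representation.

```python
# Illustrative training setup: a small LSTM classifier over padded
# landmark sequences. All sizes below are assumed, not Eshara's.
import tensorflow as tf

NUM_CLASSES = 50   # assumed number of sign classes
MAX_FRAMES = 90    # assumed sequence length after padding
FEATURES = 126     # 2 hands x 21 landmarks x 3 coordinates

model = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=0.0, input_shape=(MAX_FRAMES, FEATURES)),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X: (num_videos, MAX_FRAMES, FEATURES) padded landmark sequences
# y: (num_videos,) integer sign labels
# model.fit(X, y, validation_split=0.2, epochs=20, batch_size=32)
```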


How it is different

- Our extensive dataset covers a wide range of Arabic Sign Language gestures, providing a robust foundation for accurate and reliable model training.
- To our knowledge, ours is the largest Arabic Sign Language dataset, and the model trained on it achieves 98% accuracy and above, making it highly reliable.
- Our system not only translates sign language into text using deep learning but also converts text back into sign language through animated avatars, ensuring effective bidirectional communication (a sketch of this direction follows the list).
- We created custom avatars that are meticulously animated to look and move like real humans, adding a personal and relatable touch. The avatars are inspired by ourselves, making the interaction more engaging and lifelike.
- The system is designed to be scalable, allowing deployment on various platforms and making it accessible to a broad audience, so more people can benefit from our solution.
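
For the text-to-sign direction, one plausible design, consistent with the avatar system described above but entirely hypothetical in its details, is to map each input word to a pre-animated avatar clip and fall back to fingerspelling for out-of-vocabulary words. The clip catalogue and file names below are illustrative assumptions.

```python
# Hypothetical sketch: build the ordered queue of animation clips an avatar
# should play for a given Arabic text. SIGN_CLIPS and LETTER_CLIPS are
# invented placeholders, not Eshara's actual asset library.
SIGN_CLIPS = {"مرحبا": "clips/hello.fbx", "شكرا": "clips/thanks.fbx"}
LETTER_CLIPS = {ch: f"clips/letters/{ch}.fbx"
                for ch in "ابتثجحخدذرزسشصضطظعغفقكلمنهوي"}

def text_to_clip_queue(text: str) -> list[str]:
    """Map words to sign clips; fingerspell unknown words letter by letter."""
    queue = []
    for word in text.split():
        if word in SIGN_CLIPS:
            queue.append(SIGN_CLIPS[word])
        else:
            queue.extend(LETTER_CLIPS[ch] for ch in word if ch in LETTER_CLIPS)
    return queue

print(text_to_clip_queue("مرحبا"))  # ['clips/hello.fbx']
```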


Future plans

To increase accessibility, we intend to develop a mobile application version of our sign language translation system, enabling users to communicate seamlessly on the go from their smartphones or tablets, much like a messaging application. After that, we plan to extend the system to support additional sign languages beyond Arabic, making it a versatile tool for the global deaf and hard-of-hearing community. This will involve collaborating with sign language experts and collecting new datasets for each language.


Awards

Our sign language translation system has been recognized for its innovation and impact through the following awards:

- First Place in the Senior Design Competition at the American University of Sharjah.
- Second Place in the Advanced Engineering Competition at the University of Wollongong.

