Natural Sentence Generation Using Sign Language Gestures
DOI: https://doi.org/10.5281/zenodo.10546384

Keywords: WLASL, NLP, MobileNetV2, Deep Learning, CNN, T5, Tkinter, Sign Language

Abstract
Sign language is the primary means of communication for many hearing-impaired people, yet it creates a barrier when interacting with non-signers. American Sign Language (ASL) is widely used and effective for communication among hearing-impaired people. This application uses technology to bridge the communication gap between non-signers and those with hearing impairments, fostering inclusivity and understanding: it detects signs and generates natural sentences from the recognized sequence of words. By utilizing MobileNetV2 for sign language gesture detection and incorporating Natural Language Processing (NLP) techniques for sentence generation, the proposed system not only recognizes the visual expressions of American Sign Language but also converts them into coherent English sentences. This integration of computer vision and language processing technologies offers hearing-impaired and deaf-mute people a more effective means of interacting with the broader community.
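The abstract describes a two-stage pipeline: a MobileNetV2 classifier maps gestures to words (glosses), and an NLP model (T5) turns the word sequence into a sentence. The following is a minimal sketch of that control flow, not the authors' implementation; `classify_gesture` and `words_to_sentence` are illustrative names, and the real T5 call (shown in a comment, assuming the Hugging Face transformers API) is stubbed with a naive join so the sketch runs without model downloads.

```python
from typing import List

def classify_gesture(frame) -> str:
    """Placeholder for a MobileNetV2-based classifier that would map a
    video frame (or clip) to a WLASL gloss label."""
    ...

def words_to_sentence(words: List[str]) -> str:
    """Placeholder for the T5 sentence-generation stage. With the Hugging
    Face transformers library this would resemble (assumed usage):

        tokenizer = T5Tokenizer.from_pretrained("t5-small")
        model = T5ForConditionalGeneration.from_pretrained("t5-small")
        ids = tokenizer("make a sentence: " + " ".join(words),
                        return_tensors="pt").input_ids
        out = model.generate(ids)
        return tokenizer.decode(out[0], skip_special_tokens=True)

    Stubbed here with a naive join so the sketch is self-contained."""
    return " ".join(words).capitalize() + "."

# Example: a recognized gloss sequence from stage 1 feeds stage 2.
glosses = ["i", "go", "store"]
print(words_to_sentence(glosses))  # -> "I go store."
```

In the full system, a T5 model would smooth the gloss sequence into grammatical English (e.g. "I am going to the store"), which the naive join above cannot do.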
License
Copyright (c) 2024 Sushmitha Adiga, S. Jyothi, R. Umadevi, G. D. Snehalatha, C. Surbhi
This work is licensed under a Creative Commons Attribution 4.0 International License.