A Hybrid Approach to Arabic Sign Language Recognition by HSV Space and Deep Learning Classification

Noor Fadel Hussain
Hiba Al-Khafaji

Abstract

Understanding and automatically recognizing sign language is crucial for giving deaf and hard-of-hearing individuals equal opportunities in society. Through Arabic sign language, people share their ideas, thoughts, and feelings and engage in meaningful daily interactions. Although artificial intelligence and computer vision technologies have advanced considerably, studies focused on Arabic sign language processing remain limited and continue to be a research frontier compared with other world languages. This research proposes a methodology that leverages the HSV color space model for initial segmentation, eliminating background noise and refining the input data by isolating the hand or semantic sign region. The segmented data is then passed to a Convolutional Neural Network (CNN) for precise sign classification. The novelty of this method lies in the coordinated interplay between data simplification through color space manipulation and the representational power of the CNN, which together improve overall system performance for hand sign recognition.


The research also seeks to fill an existing gap in the field of Arabic sign language processing by presenting an integrated model, opening broad horizons for applications in education, healthcare, and smart government services. The proposed method showed promising results, achieving an Accuracy of 95.3%, a Precision of 93.5%, a Recall of 92.8%, and an F1-score of 93.1%.
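As an illustration of how these evaluation criteria can be computed, the following is a minimal sketch using scikit-learn; the label arrays `y_true` and `y_pred` are hypothetical placeholders, not values from the study.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical ground-truth and predicted class labels for a multi-class
# Arabic Sign Language test set (placeholders, not the study's data).
y_true = [0, 1, 2, 2, 1, 0, 3, 3]
y_pred = [0, 1, 2, 1, 1, 0, 3, 2]

# Macro averaging treats every sign class equally, which is how Precision,
# Recall, and F1 are typically reported for multi-class recognition.
print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="macro"))
print("Recall   :", recall_score(y_true, y_pred, average="macro"))
print("F1-score :", f1_score(y_true, y_pred, average="macro"))
```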

Background:


Recognizing sign languages has always been a vital topic in societies, especially with the recent rapid growth of the deaf and mute community. Sign language serves as the main form of communication for over 70 million deaf and hard-of-hearing individuals globally. Thus, the development of simple, accurate, and effective methods that utilize readily available resources deserves attention. Among the various forms of sign language, Arabic Sign Language is especially significant, as Arabic is predominantly spoken in the Middle East and North Africa. Yet, the communication gap between this population and the hearing members of society continues to hinder the educational, social, and professional integration of deaf and hard-of-hearing individuals. Hence, creating automated systems that can comprehend Arabic Sign Language is an essential step toward bridging communication gaps and supporting social integration [1].


Materials and Methods:


The proposed hand gesture recognition system relies on a multi-stage methodology aimed at high-resolution image processing and efficient feature extraction. The methodology begins with the data collection phase, in which a set of images and videos representing letters and words in Arabic Sign Language is gathered from a Kaggle dataset.
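As a rough illustration of this data collection and loading step, the following is a minimal sketch using TensorFlow/Keras; the directory name `arabic_sign_language_dataset`, the image size, the batch size, and the folder-per-class layout are assumptions, since the paper does not specify them here.

```python
import tensorflow as tf

# Hypothetical local copy of the Kaggle Arabic Sign Language image data,
# organized as one sub-directory per letter/word class (assumed layout).
DATA_DIR = "arabic_sign_language_dataset"
IMG_SIZE = (64, 64)   # assumed input resolution
BATCH_SIZE = 32       # assumed batch size

# Build training and validation splits directly from the folder structure.
train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR,
    validation_split=0.2,
    subset="training",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR,
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
)
```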


Results:


The proposed Arabic Sign Language recognition system was developed through two principal stages: (1) segmentation using the HSV color space with experimentally defined thresholds, and (2) classification via a Convolutional Neural Network (CNN). The segmentation stage, relying on the HSV model with a lower bound of (H=0, S=30, V=60) and an upper bound of (H=20, S=150, V=255), proved effective in isolating the hand region from complex backgrounds; it contributed significantly to noise reduction and the removal of irrelevant small objects, and it enhanced the clarity of the extracted hand region prior to classification.
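To make the segmentation stage concrete, the following is a minimal sketch using OpenCV with the reported HSV bounds; the input file name, the morphological kernel size, and the small-object cleanup choices are assumptions, not details taken from the paper.

```python
import cv2
import numpy as np

# Load an input frame (hypothetical file name) and convert BGR -> HSV.
frame = cv2.imread("sign_frame.jpg")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Reported thresholds: lower (H=0, S=30, V=60), upper (H=20, S=150, V=255).
lower = np.array([0, 30, 60], dtype=np.uint8)
upper = np.array([20, 150, 255], dtype=np.uint8)
mask = cv2.inRange(hsv, lower, upper)

# Morphological opening/closing to drop small irrelevant objects and fill
# holes in the hand region (kernel size is an assumed choice).
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Keep only the segmented hand region to pass on to the CNN classifier.
hand_region = cv2.bitwise_and(frame, frame, mask=mask)
cv2.imwrite("hand_region.png", hand_region)
```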


Conclusion:


Recognizing sign languages has always been a vital topic in societies, especially with the recent rapid growth of the deaf and mute community. Therefore, it is important to focus on simple, highly accurate techniques that can be applied with readily available resources. The proposed methodology is highly flexible and scalable: the HSV color space boundaries can be adjusted, or the CNN architecture optimized, to suit research requirements and the characteristics of different datasets.
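Since the conclusion notes that the CNN architecture can be tuned, the following is a minimal sketch of one plausible Keras classifier for the segmented hand images; the layer sizes, input shape, and number of classes are assumptions, as the paper's exact architecture is not given here.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 32           # assumed number of Arabic sign classes
INPUT_SHAPE = (64, 64, 3)  # assumed size of the segmented hand images

# A small, easily adjustable convolutional classifier (illustrative only).
model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=INPUT_SHAPE),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```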

How to Cite

[1] "A Hybrid Approach to Arabic Sign Language Recognition by HSV Space and Deep Learning Classification", JUBPAS, vol. 33, no. 4, pp. 312–327, Jan. 2026, doi: 10.29196/jubpas.v33i4.6173.
