A Hybrid Transfer Learning Framework for Efficient Facial Keypoint Detection


Shahad Eaad Abdulmahdi

Abstract

Background:


Facial keypoint detection is a fundamental supporting task for applications ranging from emotion and facial-expression recognition to human-computer interaction. Typical deep learning models struggle to balance accuracy, generalization, and computational efficiency.


Materials and Methods:


This paper proposes a hybrid deep learning model that combines three pre-trained CNNs, VGG16, ResNet50, and MobileNetV2, to improve both the accuracy and the speed of facial landmark localization. The pipeline comprises several preprocessing steps: normalization, grayscale conversion, and resizing of images to 150 × 150 pixels, with MTCNN used for face detection and region-of-interest (ROI) extraction. In the feature extraction phase, the output features of the three parallel CNN branches are concatenated, and fully connected layers predict the 2D coordinates of 15 key facial landmarks. The network is trained for 50 epochs with a learning rate of 0.001 and Mean Squared Error (MSE) as the loss function.
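
To make this architecture concrete, the sketch below shows one way to realize it in TensorFlow/Keras. The framework choice, the Adam optimizer, the channel replication for grayscale input, and the sizes of the regression head are illustrative assumptions (the abstract does not specify them); the backbones, the 150 × 150 input, MTCNN-based ROI extraction, feature concatenation, the 30-coordinate output, the 0.001 learning rate, and the MSE loss follow the description above.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, ResNet50, MobileNetV2
from mtcnn import MTCNN  # face detection for ROI extraction

INPUT_SHAPE = (150, 150, 3)  # grayscale replicated to 3 channels for ImageNet weights
NUM_KEYPOINTS = 15           # 15 landmarks -> 30 regressed (x, y) values

def preprocess(rgb_image):
    """Detect a face with MTCNN, crop the ROI, convert to grayscale,
    resize to 150 x 150, and normalize pixel values to [0, 1]."""
    faces = MTCNN().detect_faces(rgb_image)       # rgb_image: HxWx3 uint8 array
    x, y, w, h = faces[0]["box"]                  # assumes one detected face
    roi = rgb_image[y:y + h, x:x + w]
    gray = tf.image.rgb_to_grayscale(roi)
    gray = tf.image.resize(gray, INPUT_SHAPE[:2]) / 255.0
    return tf.repeat(gray, 3, axis=-1)            # channel replication (assumption)

# Three parallel frozen backbones process the same preprocessed image.
image = layers.Input(shape=INPUT_SHAPE, name="image")
features = []
for fn, name in [(VGG16, "vgg16"), (ResNet50, "resnet50"),
                 (MobileNetV2, "mobilenetv2")]:
    backbone = fn(include_top=False, weights="imagenet", input_shape=INPUT_SHAPE)
    backbone.trainable = False                    # transfer learning: freeze features
    features.append(layers.GlobalAveragePooling2D(name=name + "_gap")(backbone(image)))

# Fuse the branch features, then regress the landmark coordinates.
merged = layers.Concatenate(name="feature_fusion")(features)
x = layers.Dense(512, activation="relu")(merged)  # head width is an assumption
x = layers.Dropout(0.3)(x)
coords = layers.Dense(NUM_KEYPOINTS * 2, name="keypoints")(x)  # linear output

model = models.Model(image, coords)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # optimizer type assumed
              loss="mse", metrics=["mae"])
# model.fit(train_images, train_landmarks, epochs=50)

Freezing the backbones keeps training inexpensive, since only the fused regression head is updated; fine-tuning the upper backbone layers would be a natural variant if higher accuracy is needed.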


Results:


The hybrid model showed good convergence and stability, with final error values of MAE = 3.3, MAPE = 7.6, and RMSE = 4.39. In both quantitative and qualitative analyses, the predicted keypoints closely matched the ground-truth values across different facial images, demonstrating the model's robustness and high localization accuracy.
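
For reference, these metrics follow their standard definitions, where $N$ is the number of predicted coordinate values, $y_i$ a ground-truth value, and $\hat{y}_i$ the corresponding prediction:

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\lvert y_i - \hat{y}_i\rvert,\qquad \mathrm{MAPE} = \frac{100}{N}\sum_{i=1}^{N}\left\lvert\frac{y_i - \hat{y}_i}{y_i}\right\rvert,\qquad \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2}$$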


Conclusion:


The proposed hybrid architecture effectively combines several CNN models to balance representational depth, feature diversity, and computational efficiency. It attains high performance and generalizability in facial keypoint detection, making it a promising solution for state-of-the-art applications in emotion recognition, facial analytics, and human-computer interaction.

