A Hybrid Graph Neural Network Framework Integrating Handcrafted Features for Real-Time Iris Recognition in Motion

Authors
  • Usman A. ABDURRAHMAN

    Department of Information and Communication Technology, Northwest University, Kano, Kano State, Nigeria

  • Abdulkadir A. BICHI

    Department of Software Engineering, Northwest University, Kano, Kano State, Nigeria

  • Usman HARUNA

    Department of Software Engineering, Northwest University, Kano, Kano State, Nigeria

  • Akibu M. ABDULLAHI

    School of Computing and Informatics, Albukhary International University, Alor Setar, Malaysia

Keywords:
Iris recognition, graph neural networks, feature fusion, moving image sequences, real-time biometrics.
Abstract

Iris recognition in dynamic environments, such as surveillance footage or real-time video streams, remains a significant challenge due to motion blur, occlusion, and the high computational cost of processing sequential frames. While traditional texture-based methods like Gabor filters struggle with motion deformations, modern deep learning approaches, particularly Graph Neural Networks (GNNs), offer superior spatial analysis but often at the expense of real-time performance. This paper proposes a novel hybrid framework that addresses these limitations by integrating handcrafted feature descriptors directly into a GNN architecture. Rather than relying solely on learned embeddings, the model initializes graph nodes using a fusion of traditional texture patterns and deep features, providing a richer and more resilient representation of the iris structure from the outset. Furthermore, we introduce a lightweight message-passing mechanism optimized for edge deployment, significantly reducing latency to meet the 25–30 frames-per-second requirement of real-time systems. By combining the interpretability and speed of traditional methods with the adaptive power of graph-based deep learning, the proposed approach enhances recognition accuracy under motion conditions while ensuring scalability for large databases. The proposed method achieves 95.1% recognition accuracy across three datasets, improving upon the baseline GNN framework by 2.3 percentage points and traditional methods by 9.2 percentage points. It maintains robust performance under motion blur at 89.2% accuracy while operating at 35 frames per second on edge hardware. The system scales efficiently to one million users with query times of just 5.3 milliseconds. Experimental results demonstrate that this hybrid strategy outperforms both standalone deep networks and conventional algorithms, offering a viable path toward practical, real-world iris recognition in motion.
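The two ideas at the core of the abstract — initializing graph nodes with a fusion of handcrafted and learned features, and refining them with a lightweight message-passing step — can be sketched as follows. This is an illustrative toy example, not the authors' implementation: the `handcrafted_descriptor` (patch mean and variance standing in for Gabor/texture responses), the fixed embeddings, and the mean-aggregation update rule are all assumptions made for clarity.

```python
# Illustrative sketch only (not the paper's actual model).
# Nodes are iris patches; each node vector concatenates a handcrafted
# texture descriptor with a (here, hard-coded) learned embedding, then
# one cheap mean-aggregation message-passing round refines the vectors.

def handcrafted_descriptor(patch):
    """Stand-in for a Gabor/texture descriptor: patch mean and variance."""
    n = len(patch)
    mean = sum(patch) / n
    var = sum((x - mean) ** 2 for x in patch) / n
    return [mean, var]

def init_node(patch, embedding):
    """Fuse handcrafted and learned features into a single node vector."""
    return handcrafted_descriptor(patch) + list(embedding)

def message_pass(node_feats, adjacency):
    """One lightweight round: each node becomes the element-wise mean of
    its own vector and its neighbours' vectors (no learned weights)."""
    updated = []
    for i, feat in enumerate(node_feats):
        stacked = [feat] + [node_feats[j] for j in adjacency[i]]
        dim = len(feat)
        updated.append(
            [sum(v[d] for v in stacked) / len(stacked) for d in range(dim)]
        )
    return updated

# Toy graph: three patches connected in a line 0 - 1 - 2.
patches = [[0.1, 0.2, 0.3], [0.4, 0.4, 0.4], [0.9, 0.8, 0.7]]
embeddings = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # hypothetical CNN outputs
adjacency = {0: [1], 1: [0, 2], 2: [1]}

nodes = [init_node(p, e) for p, e in zip(patches, embeddings)]
refined = message_pass(nodes, adjacency)
```

Because aggregation here is a plain neighbourhood mean, each round costs O(edges × feature-dim), which gestures at why a lightweight scheme can hit real-time frame rates; the paper's actual mechanism and accuracy figures come from its own architecture and experiments.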

Published
23-03-2026
Section
Articles
License

Copyright (c) 2026 FUDMA Journal of Engineering and Technology


This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

How to Cite

A Hybrid Graph Neural Network Framework Integrating Handcrafted Features for Real-Time Iris Recognition in Motion. (2026). FUDMA Journal of Engineering and Technology, 2(1), 147-155. https://doi.org/10.33003/t9svh189
