SWAP Metrics Optimization in Mobile Face Anti-Spoofing Systems Using Knowledge Distillation

Ostap Stets
Ihor Konovalenko

Abstract

Face anti-spoofing (FAS) on mobile devices demands models that are not only accurate but also fast, lightweight, and energy-efficient, requirements encapsulated by the SWAP metrics (Speed, Weight, Accuracy, Power consumption). This paper investigates how knowledge distillation can optimize these SWAP metrics for neural networks in FAS. Large, high-performing teacher models are distilled into compact student models that retain high accuracy while drastically reducing model weight and improving inference speed. Recent research has shown that distilled FAS models can achieve accuracy on par with state-of-the-art networks at significantly lower computational cost, making real-time mobile deployment feasible. The paper presents practical formulas for the knowledge distillation loss and comparative evaluations of models against the SWAP criteria. It is concluded that knowledge distillation produces lightweight FAS models that run efficiently on mobile platforms (e.g., achieving nearly 7× faster inference for a distilled model with fewer than 1 M parameters, at approximately 99% of the teacher's accuracy) while consuming a fraction of the power. The article also outlines future research directions, including multi-modal distillation and adaptive architectures, that could further advance SWAP metrics optimization in this field.
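As a concrete reference point for the distillation loss mentioned above, the sketch below shows the standard formulation introduced by Hinton et al. (2015; reference 3 below): a weighted sum of hard-label cross-entropy and a temperature-softened KL-divergence term between teacher and student outputs. It is a minimal illustrative PyTorch implementation, not the exact formulation evaluated in the paper; the function name, the temperature, and the weighting factor alpha are assumptions chosen for clarity.

```python
# Illustrative sketch of the standard knowledge distillation loss (Hinton et al., 2015).
# Names, temperature, and alpha are example choices, not values taken from the paper.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 4.0,
                      alpha: float = 0.5) -> torch.Tensor:
    # Hard-label term: cross-entropy against the ground-truth live/spoof labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    # Soft-label term: KL divergence between temperature-softened teacher and student
    # distributions, scaled by T^2 to keep gradient magnitudes comparable across temperatures.
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    soft_loss = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

    # alpha balances ground-truth supervision against imitation of the teacher.
    return alpha * hard_loss + (1.0 - alpha) * soft_loss
```

During student training the teacher runs in inference mode only, so the extra cost is one additional forward pass per batch; at deployment the teacher is discarded, and only the compact student determines the Speed, Weight, and Power terms of SWAP.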

References

1. Li Z., Cai R., Li H., Lam K., Hu Y., Kot A. C. (2022). One-Class Knowledge Distillation for Face Presentation Attack Detection. IEEE Transactions on Information Forensics and Security, 17, pp. 1353–1368. Available at: https://doi.org/10.48550/arXiv.2205.03792.

2. Li H., Wang S., He P., Rocha A. (2020). Face Anti-Spoofing with Deep Neural Network Distillation. IEEE Journal of Selected Topics in Signal Processing, 14 (5), pp. 933–946. Available at: https://doi.org/10.1109/JSTSP.2020.3001719.

3. Hinton G., Vinyals O., Dean J. (2015). Distilling the Knowledge in a Neural Network. NIPS Deep Learning Workshop. Available at: https://doi.org/10.48550/arXiv.1503.02531.

4. Zhang J., Zhang Y., Shao F., Ma X., Feng S., Zhang S., Wu Y., Zhou D. (2024). Efficient Face Anti-Spoofing via Head-Aware Transformer Based Knowledge Distillation with 5 MB Model Parameters. Applied Soft Computing, 166: 112237. Available at: https://doi.org/10.1016/j.asoc.2024.112237.

5. Zhang J., Zhang Y., Shao F., Ma X., Zhou D. (2024). KDFAS: Multi-stage Knowledge Distillation Vision Transformer for Face Anti-spoofing. In: Liu Q. et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol. 14429. Springer. Available at: https://doi.org/10.1007/978-981-99-8469-5_13.

6. Kong Z., Zhang W., Wang T., Zhang K., Li Y., Tang X., Luo W. (2024). Dual Teacher Knowledge Distillation with Domain Alignment for Face Anti-spoofing. Available at: https://doi.org/10.48550/arXiv.2401.01102.

7. Xiao J., Wang W., Zhang L., Liu H. (2024). A MobileFaceNet-Based Face Anti-Spoofing Algorithm for Low-Quality Images. Electronics, 13 (14), 2801. Available at: https://doi.org/10.3390/electronics13142801.

8. Kim M., Tariq S., Woo S. S. (2021). FReTAL: Generalizing Deepfake Detection using Knowledge Distillation and Representation Learning. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1001–1012. Available at: https://doi.org/10.48550/arXiv.2105.13617.

9. Cao J., Liu Y., Ding J., Li L. (2022). Self-supervised Face Anti-spoofing via Anti-contrastive Learning. In: Yu S. et al. Pattern Recognition and Computer Vision. PRCV 2022. Lecture Notes in Computer Science, vol. 13535. Springer, Cham. Available at: https://doi.org/10.1007/978-3-031-18910-4_39.

10. Fang H., Liu A., Yuan H., Zheng J., Zeng D., Liu Y., Deng J., Escalera S., Liu X., Wan J., Lei Z. (2024). Unified Physical-Digital Face Attack Detection. Available at: https://doi.org/10.48550/arXiv.2401.17699.

11. Wang Y., Han Y., Wang C., Song S., Tian Q., Huang G. (2023). Computation-efficient Deep Learning for Computer Vision: A Survey. Available at: https://doi.org/10.48550/arXiv.2308.13998.

12. Zhang L., Gungor O., Ponzina F., Rosing T. (2024). E-QUARTIC: Energy Efficient Edge Ensemble of Convolutional Neural Networks for Resource-Optimized Learning. Available at: https://doi.org/10.48550/arXiv.2409.08369.

13. Chen D. (2024). A Note on Knowledge Distillation Loss Function for Object Classification. Available at: https://doi.org/10.48550/arXiv.2109.06458.

14. Cheng T., Zhang Y., Yin Y., Zimmermann R., Yu Z., Guo B. (2023). A Multi-Teacher Assisted Knowledge Distillation Approach for Enhanced Face Image Authentication. Proceedings of the 2023 ACM International Conference on Multimedia Retrieval (ICMR '23), pp. 135–143. Available at: https://doi.org/10.1145/3591106.3592280.

15. Kong C., Zheng K., Liu Y., Wang S., Rocha A., Li H. (2024). M3FAS: An Accurate and Robust MultiModal Mobile Face Anti-Spoofing System. Available at: https://doi.org/10.48550/arXiv.2301.12831.

16. Stets O. (2024). SWAP metrics optimization methods for mobile face anti-spoofing neural networks. Materials of the XII scientific and technical conference “Information models, systems and technologies” (Ternopil, 18–19 December 2024), p. 91. Available at: http://elartu.tntu.edu.ua/handle/lib/47417.
