Security of Machine Learning Models
With the immense amount of data produced every day, manual data analysis has become an impossible task. Machine learning approaches are therefore widely adopted in software engineering and cybersecurity applications to deal with large volumes of data proactively. Machine Learning (ML) enabled cybersecurity systems such as Intrusion Detection Systems (IDS), phishing detectors and spam filters have gained the attention of both research and industrial communities. ML solutions can proactively detect cyberattacks, saving the millions of dollars in losses these attacks cause and reducing the time required for manual security analytics.
On the other hand, recent studies have found that these systems are vulnerable to data manipulation by adversaries at both training and testing time. Furthermore, traditional practices for developing security-critical software are not applicable to these systems due to their complex structure.
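The test-time vulnerability mentioned above can be illustrated with a toy evasion attack. The sketch below is not taken from the publications listed here; it is a minimal, self-contained example (synthetic 2D data, a hand-rolled logistic regression, and an FGSM-style perturbation that moves a malicious sample against the model's weight vector) showing how a small input change can flip a classifier's decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary data: "benign" class 0 around (-1,-1),
# "malicious" class 1 around (+1,+1)
X0 = rng.normal(-1.0, 0.5, size=(100, 2))
X1 = rng.normal(+1.0, 0.5, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Train a logistic regression classifier by gradient descent
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

def predict(x):
    """1 = flagged as malicious, 0 = passed as benign."""
    return int((x @ w + b) > 0)

# Evasion: starting from a clearly malicious point, repeatedly nudge
# it against the sign of the weights (the direction that most quickly
# lowers the decision score) until the model mislabels it as benign.
x_adv = np.array([1.0, 1.0])
eps, steps = 0.1, 0
while predict(x_adv) == 1 and steps < 100:
    x_adv = x_adv - eps * np.sign(w)
    steps += 1
```

After the loop, `x_adv` is only a small distance from the original malicious point yet is classified as benign, which is exactly the kind of test-time manipulation that evasion-attack research on phishing URL detectors studies at scale.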
There is currently no standard for engineering secure ML-enabled cybersecurity systems. This research area aims to develop a model-driven framework that supports the development of such systems, and to systematically devise approaches for assessing and evaluating their quality.
Related Publications
- Sabir, Bushra, et al. "Machine Learning for Detecting Data Exfiltration." ACM Computing Surveys, 2021 (https://arxiv.org/abs/2012.09344).
- Sabir, Bushra, M. Ali Babar, and Raj Gaire. "ReinforceBug: A Framework to Generate Adversarial Textual Examples." NAACL-HLT 2021 (https://arxiv.org/abs/2103.08306).
- Sabir, Bushra, M. Ali Babar, and Raj Gaire. "An Evasion Attack against ML-based Phishing URL Detectors." arXiv preprint arXiv:2005.08454 (Submitted to ACMTOPS) (https://arxiv.org/abs/2005.08454).