Adversarial AI models for Cyber Security
Author(s)
Date Issued
2022
Date Available
2022-12-09T16:59:17Z
Abstract
Technology influences our lives in numerous ways. With the explosive growth of ubiquitous systems and data availability, many security threats arise, along with an appetite to manage and mitigate such risks. As a result, cyber security has become an indispensable necessity, taking center stage in protecting against known and unknown adversaries. Furthermore, with the proliferation of algorithms and computing systems, Machine Learning (ML) and Artificial Intelligence (AI) have become significant in tackling cyber security problems. The performance benefits provided by applications built using ML/AI are impactful only when the security and reliability properties of the system are robust. Designing robust and secure real-world Machine Learning Systems in Cyber Security (MLSCS) is a multi-disciplinary endeavor and requires an in-depth understanding of the machine learning life cycle. ML systems have to be resilient to malicious attacks at all stages of the ML life cycle and protect themselves from compromises of the system's integrity, availability, and confidentiality security objectives. A large body of work studying failure modes of ML systems operating in adversarial environments exists in the literature. Unfortunately, these studies miss the adversary's view of all stages of the ML life cycle, leaving systems exposed to larger attack surfaces. Furthermore, the adversary threat models and mitigation techniques discussed in the literature can be incoherent with stakeholders' goals, slowing down the defense process and leaving systems vulnerable. This thesis proposes Cloud Atlas, an adversary modeling framework for AI models based on the properties of adversarial science, with four principal components. It evaluates the security robustness of MLSCS under realistic threat models, covers all stages of the ML life cycle, and respects cyber security domain-specific constraints. More specifically, a detailed threat taxonomy encompassing all stages of the ML life cycle is proposed, which forms the basis of the threat modeling component. Novel offensive and defensive methods are designed, including a new Explainable Artificial Intelligence (XAI) based attack surface, to continuously evaluate the security and robustness of MLSCS in the assessment component. Recently proposed standards are extended in the reporting component to communicate relevant threats and weaknesses to stakeholders and end-users, thereby improving trust in the underlying system. Finally, the adversary risk mitigation component supports new methodologies to quantify and transfer risks.
Type of Material
Doctoral Thesis
Publisher
University College Dublin. School of Computer Science
Qualification Name
Ph.D.
Copyright (Published Version)
2022 the Author
Language
English
Status of Item
Peer reviewed
This item is made available under a Creative Commons License
File(s)
Name
104490991.pdf
Size
24.12 MB
Format
Adobe PDF
Checksum (MD5)
9b841d67f8af406cedd0f0d5e95d4e82