

Dissertation Defense - Qudrat E Alahy Ratul

April 26 @ 10:00 am - 12:00 pm MDT

Interpretation and Robustness of Black-Box ML Models for Secure Cyberspace


Presented by Qudrat E Alahy Ratul, Computing Doctoral Candidate, Computer Science emphasis
Join via Zoom

Abstract: The rapid advancement of machine learning (ML) has produced sophisticated black-box models capable of tackling a wide range of challenges in cybersecurity. However, the interpretability and robustness of these models are often overlooked, leaving them vulnerable to adversarial attacks and limiting their adoption in critical applications. This dissertation investigates the interpretation and robustness of black-box ML models for secure cyberspace, drawing on state-of-the-art techniques and methodologies to understand and enhance both properties, paving the way for more reliable and trustworthy ML-based systems.

The research makes four main contributions. First, we analyze the robustness of Automatic Scientific Claim Verification (ASCV) tools against adversarial rephrasing attacks, highlighting the need for resilient models and proposing a novel attack model that generates targeted adversarial examples. Second, we evaluate the effectiveness of attribution methods in enhancing ML interpretability, emphasizing their role in building trust between humans and AI systems. Third, we introduce the Generality and Precision Shapley Attributions (GAPS) method, which improves the trustworthiness of the explanations ML models provide, offering insight into their decision-making processes. Lastly, we present a few-shot transfer learning approach for fast user personalization in identifying private images, demonstrating the application of interpretable ML techniques in cybersecurity.
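To illustrate the Shapley-based attribution methods the abstract refers to, here is a minimal sketch of exact Shapley-value attribution for a toy model. This is a generic textbook computation, not the GAPS method itself; the model, baseline, and function names are hypothetical, and absent features are simply replaced by baseline values (one common convention among several).

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley attributions for each feature of input x.

    Features missing from a coalition are replaced by baseline values.
    Exponential in the number of features, so only viable for small n.
    """
    n = len(x)

    def value(coalition):
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return model(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for s in combinations(others, k):
                # Standard Shapley weight for a coalition of size k.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(s) | {i}) - value(set(s)))
    return phi

# Toy linear model: attributions recover w_i * (x_i - baseline_i).
model = lambda z: 2.0 * z[0] + 3.0 * z[1] - 1.0 * z[2]
print(shapley_values(model, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]))  # ≈ [2.0, 3.0, -1.0]
```

For a linear model the attributions equal the coefficients scaled by each feature's deviation from the baseline, which makes the toy case easy to check by hand; for genuinely black-box models, practical tools approximate this sum by sampling coalitions.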

Supervisory Committee: Dr. Edoardo Serra, Dr. Francesca Spezzano, Dr. Maria Soledad Pera
Graduate Faculty Representative: Dr. Amy Ulappa