
Mitigating AI-Enabled Cyber Attacks on Hardware, Software, and System Users

Time: Thu 2024-10-10 13.00

Location: D31, Lindstedtsvägen 5, Stockholm

Video link: https://kth-se.zoom.us/j/61272075034

Language: English

Subject area: Computer Science

Doctoral student: Fredrik Heiding, Network and Systems Engineering

Opponent: Senior Associate Professor Andreas Jacobsson, Malmö University, Malmö, Sweden

Supervisor: Adjunct Professor Ulrik Franke, Media Technology and Interaction Design (MID); Professor Pontus Johnson, Network and Systems Engineering



Abstract

This doctoral thesis addresses the rapidly evolving landscape of computer security threats posed by advances in artificial intelligence (AI), particularly large language models (LLMs). We demonstrate how AI can automate and enhance cyberattacks in order to identify the most pressing dangers and present feasible mitigation strategies. The study comprises two main branches: attacks targeting hardware and software systems, and attacks targeting system users, for example through phishing.

The first paper of the thesis identifies research communities within computer security red teaming. We created a Python tool to scrape and analyze 23,459 articles from the Scopus database, highlighting popular communities such as smart grids and attack graphs and providing a comprehensive overview of prominent authors, institutions, communities, and sub-communities.

The second paper conducts red-teaming assessments of connected devices commonly found in modern households, such as connected vacuum cleaners and door locks. Our experiments demonstrate how easily attackers can exploit these devices and emphasize the need for improved security measures and public awareness.

The third paper explores the use of LLMs to generate phishing emails. The findings show that while human experts still outperform LLMs, a hybrid approach combining human expertise and AI significantly reduces the cost and time required to launch phishing attacks while maintaining high success rates. We further analyze the economics of AI-enhanced phishing to show how LLMs affect attackers' incentives across various phishing use cases.

The fourth paper evaluates the potential of LLMs to automate and enhance cyberattacks on hardware and software systems. We create a framework for assessing LLMs' capability to conduct such attacks and evaluate the framework by running 31 AI-automated cyberattacks against devices from connected households. The results indicate that while LLMs can reduce attack costs, they do not significantly increase the attacks' damage or scalability. We expect this to change with future LLM versions, but the findings present an opportunity for proactive measures: developing benchmarks and defensive tools to control the misuse of LLMs.

urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-353243