
Stealthy Sensor Attacks on Feedback Systems

Attack Analysis and Detector-based Defense Mechanisms

Time: Tue 2021-11-09, 09:00

Location: https://kth-se.zoom.us/meeting/register/u50qd-mprTorHt3D83f70OXsEbQH_6_gVH64, F3, Lindstedtsvägen 26, Stockholm (English)

Subject area: Electrical Engineering

Doctoral student: David Umsonst, Reglerteknik

Opponent: Professor Hideaki Ishii, Tokyo Institute of Technology

Supervisor: Professor Henrik Sandberg, ACCESS Linnaeus Centre, Reglerteknik


Abstract

In this thesis, we investigate sensor attacks on feedback systems and how anomaly detectors can be used as a defense mechanism. We consider an attacker with access to all sensor measurements and full knowledge of the closed-loop system model. The attacker wants to maximize its impact on the system without triggering an alarm in the anomaly detector. The defender wants to mitigate the attacker's impact with detector-based defense mechanisms that also take the normal operating cost into account.
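As a rough sketch of this trade-off, the attacker's problem can be written as a constrained optimization; the symbols below (attack signal a, impact measure J, detector output y_D, alarm threshold tau) are illustrative placeholders rather than the thesis's notation:

\[
\max_{a} \; J(a) \quad \text{subject to} \quad y_D(t; a) \le \tau \quad \text{for all } t,
\]

that is, the attacker maximizes its impact subject to the detector output staying below the alarm threshold, so that no alarm is raised.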

The sensor attack consists of three stages, and we begin by analyzing each stage separately. First, to launch its stealthy attack, the attacker needs to estimate the internal controller state perfectly while only having access to the measurements. We prove that the attacker can perfectly estimate the controller state if and only if the linear controller dynamics have no eigenvalues outside the unit circle. The theory for controller state estimation is applied to reference estimation as well, where we show that the attacker can estimate common reference signals, such as constant and sinusoidal signals, regardless of the controller used. Second, the attacker estimates the internal anomaly detector state. When the detector has linear dynamics, the attacker is able to estimate the state while injecting a malicious signal that mimics the nominal detector output statistics in the sense of the Kullback-Leibler divergence. In the third stage, the attacker launches its worst-case attack on the closed-loop system. We provide a convex optimization approach to estimate the worst-case impact of an attack with an infinity-norm-based objective. All stages are evaluated in an experimental setup, which shows that both the controller and the detector play a critical role in the feasibility and the severity of the attack.
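A minimal numerical sketch of the first-stage feasibility condition, assuming a discrete-time controller with state matrix A_c (the matrix name, example values, and tolerance are illustrative, not taken from the thesis):

    import numpy as np

    def perfect_estimation_feasible(A_c, tol=1e-9):
        # Condition stated in the abstract: the attacker can perfectly estimate
        # the controller state iff the controller dynamics matrix has no
        # eigenvalue outside the unit circle. A_c and tol are illustrative.
        eigvals = np.linalg.eigvals(A_c)
        return bool(np.all(np.abs(eigvals) <= 1.0 + tol))

    # A controller with an integrator state (eigenvalue on the unit circle):
    # perfect estimation is feasible.
    print(perfect_estimation_feasible(np.array([[1.0]])))    # True

    # A controller with unstable internal dynamics: perfect estimation is not feasible.
    print(perfect_estimation_feasible(np.array([[1.2]])))    # False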

Next, we investigate detector-based defense mechanisms and begin by looking into metrics for comparing detectors under attack. The first metric we consider is based on the worst-case attack impact and the average time between false alarms. Since the attacker's objective, and thus the impact, is often unknown, we propose a second metric that has the advantage of being agnostic to the attacker's objective. We then present a fixed detector threshold setting based on a Stackelberg game framework, which minimizes the cost induced by false alarms and the attack impact. Further, a moving target defense is obtained by designing a dynamic threshold setting in which the threshold is periodically chosen at random from a discrete set. By analyzing one period in a Bayesian game framework, we determine the distribution over the discrete set that minimizes the operator's cost by solving a linear program. In the Bayesian framework, we also incorporate uncertainty about the attacker's objective into the analysis. Additionally, we determine a necessary and sufficient condition for when the optimal distribution does not concentrate all probability on a single threshold. To compare detectors and threshold-based defense mechanisms, we need to know which threshold yields a given false alarm rate under nominal conditions. Since this threshold tuning is often non-trivial, we derive three finite-sample guarantees for a data-driven threshold tuning, such that the tuned threshold guarantees an acceptable false alarm rate with high probability.
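The data-driven threshold tuning can be illustrated by a simple order-statistic rule on attack-free detector outputs; the function and parameter names and the synthetic data below are illustrative assumptions, and the three finite-sample guarantees derived in the thesis are not reproduced here:

    import numpy as np

    def quantile_threshold(nominal_outputs, alpha):
        # Choose a threshold from attack-free detector outputs so that the
        # empirical false alarm rate (fraction of samples above the threshold)
        # is at most alpha. Generic order-statistic rule for illustration.
        samples = np.sort(np.asarray(nominal_outputs))
        n = len(samples)
        k = int(np.ceil((1.0 - alpha) * (n + 1))) - 1   # zero-based order-statistic index
        k = min(max(k, 0), n - 1)
        return samples[k]

    # Usage with synthetic nominal detector outputs (e.g., a residual-energy statistic)
    rng = np.random.default_rng(0)
    nominal = rng.chisquare(df=2, size=5000)
    tau = quantile_threshold(nominal, alpha=0.01)
    print(f"threshold = {tau:.3f}, empirical false alarm rate = {np.mean(nominal > tau):.4f}")

With more nominal samples, the empirical false alarm rate concentrates around the target alpha; statements of this kind, holding with high probability for a finite number of samples, are what the thesis's guarantees formalize.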

urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-303502