Complexity-aware Decision-making with Applications to Large-scale and Human-in-the-loop Systems
Time: Fri 2023-06-09, 10:00
Location: D37, Lindstedtsvägen 5, Stockholm
Video link: https://kth-se.zoom.us/j/62018776394
Language: English
Subject area: Electrical Engineering
Doctoral student: Elis Stefansson, Reglerteknik
Opponent: Professor Iman Shames, Australian National University, Canberra, Australia
Supervisor: Professor Karl H. Johansson, Reglerteknik; Professor Henrik Sandberg, Reglerteknik
Abstract
This thesis considers control systems governed by autonomous decision-makers and humans. We formalise and compute low-complexity control policies with applications to large-scale systems, and propose human interaction models that enable controllers to compute interaction-aware decisions.
In the first part of the thesis, we consider complexity-aware decision-making, formalising the complexity of control policies and constructing algorithms that compute low-complexity control policies. First, we consider large-scale control systems given by hierarchical finite state machines (HFSMs) and present a planning algorithm that exploits the hierarchy to compute optimal policies efficiently. The algorithm also handles modifications to the system efficiently. We prove these properties and conduct simulations on HFSMs with up to 2 million states, including a robot application, where our algorithm outperforms both Dijkstra's algorithm and Contraction Hierarchies.
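As a rough illustration of the hierarchical idea (not the algorithm developed in the thesis), the Python sketch below precomputes entry-to-exit costs inside each sub-machine with Dijkstra's algorithm and lets the top level plan over the resulting compressed tables; a change inside one sub-machine then only invalidates that sub-machine's table. The sub-machines, state names and costs are invented for the example.

    import heapq

    def dijkstra(graph, source):
        """Shortest-path costs from source in a weighted digraph {u: [(v, w), ...]}."""
        dist = {source: 0.0}
        queue = [(0.0, source)]
        while queue:
            d, u = heapq.heappop(queue)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(queue, (nd, v))
        return dist

    # Each illustrative sub-machine: an internal transition graph plus entry/exit states.
    submachines = {
        "A": {"graph": {"a0": [("a1", 1.0)], "a1": [("a_exit", 2.0)]},
              "entries": ["a0"], "exits": ["a_exit"]},
        "B": {"graph": {"b0": [("b_exit", 5.0)], "b1": [("b_exit", 1.0)]},
              "entries": ["b0", "b1"], "exits": ["b_exit"]},
    }

    def compress(sub):
        """Precompute entry-to-exit costs of one sub-machine (reusable until it changes)."""
        table = {}
        for e in sub["entries"]:
            dist = dijkstra(sub["graph"], e)
            for x in sub["exits"]:
                table[(e, x)] = dist.get(x, float("inf"))
        return table

    # Top-level planning works on the compressed tables only; if one sub-machine
    # changes, only its table needs to be recomputed.
    tables = {name: compress(sub) for name, sub in submachines.items()}
    print(tables["A"][("a0", "a_exit")])  # 3.0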
Second, we present a planning objective for control systems modelled as finite state machines that yields an explicit trade-off between a policy's performance and its complexity. We measure complexity by Kolmogorov complexity since it captures the ultimate compression of an object on a universal Turing machine. We prove that this trade-off is hard to optimise, in the sense that dynamic programming is infeasible. Nonetheless, we present two heuristic algorithms that obtain low-complexity policies, and evaluate them on a simple navigation task for a mobile robot, where the resulting low-complexity policies agree with intuition.
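Kolmogorov complexity itself is uncomputable, so any concrete implementation must fall back on a computable proxy. The toy Python sketch below only illustrates the flavour of a performance-complexity trade-off, not the objective or heuristics analysed in the thesis: every policy of a five-state corridor is scored by its rollout cost plus a weighted compressed description length (zlib) used as a stand-in for Kolmogorov complexity. The environment, weight and proxy are assumptions made for the example.

    import itertools, zlib

    STATES, GOAL, LAMBDA = range(5), 4, 0.5  # tiny corridor; weight on the complexity term

    def rollout_cost(policy, start=0, horizon=20):
        """Steps needed to reach GOAL under the policy (capped at horizon)."""
        s = start
        for t in range(horizon):
            if s == GOAL:
                return t
            s = min(GOAL, s + 1) if policy[s] == "R" else max(0, s - 1)
        return horizon

    def complexity_proxy(policy):
        """Compressed description length as a computable stand-in for Kolmogorov complexity."""
        return len(zlib.compress("".join(policy).encode()))

    # Exhaustively score all policies by performance plus weighted complexity.
    best = min(
        itertools.product("LR", repeat=len(STATES)),
        key=lambda p: rollout_cost(p) + LAMBDA * complexity_proxy(p),
    )
    print(best, rollout_cost(best), complexity_proxy(best))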
In the second part of the thesis, we consider human-in-the-loop systems and predict human decision-making in such systems. First, we study how the interaction between a robot and a human in a control system can be predicted using game theory, focusing on an autonomous truck platoon interacting with a human-driven car. The interaction is modelled as a hierarchical dynamic game, where the hierarchical decomposition is temporal: a high-fidelity tactical horizon predicts immediate interactions, and a low-fidelity strategic horizon estimates long-term behaviour. This decomposition keeps the game computationally tractable, which we validate through simulations yielding situation-aware behaviour with natural and safe interactions.
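The sketch below caricatures such a temporal decomposition under simplified assumptions: a short tactical horizon is searched exhaustively with a simple point-mass model and a myopic prediction of the human driver, while everything beyond the horizon is summarised by a coarse strategic value. The dynamics, costs and follower model are placeholders and do not reproduce the game formulated in the thesis.

    ACTIONS = [-1.0, 0.0, 1.0]          # coarse acceleration choices [m/s^2]
    TACTICAL_H, DT = 3, 0.5             # short, high-fidelity tactical horizon

    def step(state, a_truck, a_car):
        """Point-mass tactical model; state = (x_truck, v_truck, x_car, v_car)."""
        xt, vt, xc, vc = state
        return (xt + vt * DT, vt + a_truck * DT, xc + vc * DT, vc + a_car * DT)

    def gap(state):
        xt, _, xc, _ = state
        return abs(xt - xc)

    def stage_cost(state, a_truck):
        return 10.0 / (gap(state) + 0.1) + 0.1 * a_truck ** 2   # small gaps and effort are costly

    def strategic_value(state):
        """Low-fidelity estimate of the cost accumulated beyond the tactical horizon."""
        _, vt, _, _ = state
        return 0.05 * (vt - 20.0) ** 2                          # e.g. deviation from cruise speed

    def plan(state, k=0):
        """Enumerate truck actions over the tactical horizon, predicting the human
        driver as a myopic gap-maximiser; beyond the horizon, use the strategic estimate."""
        if k == TACTICAL_H:
            return strategic_value(state), None
        best_cost, best_action = float("inf"), None
        for a_truck in ACTIONS:
            a_car = max(ACTIONS, key=lambda a: gap(step(state, a_truck, a)))
            nxt = step(state, a_truck, a_car)
            cost = stage_cost(nxt, a_truck) + plan(nxt, k + 1)[0]
            if cost < best_cost:
                best_cost, best_action = cost, a_truck
        return best_cost, best_action

    print(plan((0.0, 20.0, -15.0, 25.0)))   # (predicted cost, first truck acceleration)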
Second, we seek models that explain human decision-making, focusing on driver overtaking scenarios. The overtaking problem is formalised as a decision problem with perceptual uncertainty. We propose and numerically analyse risk-agnostic and risk-aware decision models that judge whether an overtake is desirable. We show how a driver's decision time and confidence level can be characterised through two model parameters, which together capture human risk-taking behaviour. Finally, we detail an experimental testbed for evaluating the decision-making process in overtaking scenarios and present preliminary experimental results from two human drivers.
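To illustrate the distinction between risk-agnostic and risk-aware decisions under perceptual uncertainty, the sketch below samples the gap remaining when a hypothetical overtake completes, given noisy estimates of the oncoming car's distance and speed; a risk-agnostic rule accepts the overtake if the expected gap is positive, while a risk-aware rule additionally requires the estimated collision probability to be small. The kinematics, noise levels and thresholds are invented for the example and do not correspond to the models or parameters studied in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    def clearance_samples(d_meas, v_on_meas, v_ego=20.0, v_overtake=30.0,
                          pass_dist=80.0, sigma_d=15.0, sigma_v=3.0, n=10_000):
        """Monte Carlo samples of the gap left to the oncoming car when the overtake
        completes, under Gaussian perception noise on its distance and speed."""
        d = rng.normal(d_meas, sigma_d, n)          # perceived distance to oncoming car [m]
        v_on = rng.normal(v_on_meas, sigma_v, n)    # perceived oncoming speed [m/s]
        t_pass = pass_dist / (v_overtake - v_ego)   # time needed to complete the overtake [s]
        return d - (v_on + v_overtake) * t_pass     # remaining gap at completion [m]

    samples = clearance_samples(d_meas=480.0, v_on_meas=25.0)

    risk_agnostic = samples.mean() > 0.0             # decide on the expected outcome only
    risk_aware = np.mean(samples < 0.0) < 0.01       # require a small collision probability
    print(risk_agnostic, risk_aware)                 # the two rules can disagree here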