Ultra-Reliable and Resilient Communication Service for Cyber-Physical Systems

Time: Thu 2023-08-24 14.00

Location: F3 (Flodis), Lindstedtsvägen 26 & 28, Stockholm

Language: English

Subject area: Information and Communication Technology, Computer Science

Doctoral student: Milad Ganjalizadeh, Kommunikationssystem, CoS, Ericsson Research, Ericsson AB, Stockholm, Sweden

Opponent: Professor Petar Popovski, Department of Electronic Systems, Aalborg University, Aalborg, Denmark

Supervisor: Associate Professor Marina Petrova, Kommunikationssystem, CoS; Professor Emeritus Jens Zander, Kommunikationssystem, CoS


Cyber-Physical Systems (CPSs) are becoming ubiquitous in modern society, enabling new applications that rely on the seamless interaction between computing, communication, and physical processes. In this context, ultra-reliable low-latency communication (URLLC) emerges as a crucial enabler, allowing the reliable real-time exchange of critical data.

In wireless networks, reliability is commonly evaluated as the percentage of packets delivered successfully, with timeliness sometimes considered as well. In CPSs, however, performance is typically assessed by operational metrics such as availability (the ability to provide service at any given time) and reliability (the ability to maintain consistent service over an extended period). To bridge the gap between these two domains, we study CPS performance in terms of wireless communication and derive a mapping function between well-known network metrics (such as packet error ratio) and operational metrics (namely communication service availability and reliability) for deterministic traffic arrivals. This thesis then addresses wireless system orchestration techniques that aim to facilitate URLLC for CPSs while accounting for spectrum and energy efficiency. It investigates two scenarios: i) a single service, where the focus is solely on URLLC, and ii) mixed services, where other services run simultaneously on the same network as URLLC.
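The intuition behind such a mapping can be illustrated with a small Monte Carlo sketch. Here the service is deemed "down" once more consecutive packets are lost than an assumed survival time tolerates, and availability is the fraction of time the service is up; the i.i.d. loss model and the parameter names are illustrative assumptions, not the thesis's actual derivation.

```python
import random

def simulate_availability(per, survival_time, n_packets=100_000, seed=0):
    """Rough availability estimate for deterministic (periodic) traffic.

    per           -- packet error ratio (assumed i.i.d. losses)
    survival_time -- number of consecutive losses the application tolerates
    Returns the fraction of packet slots in which the service is up.
    """
    rng = random.Random(seed)
    consecutive_losses = 0
    down_slots = 0
    for _ in range(n_packets):
        if rng.random() < per:
            consecutive_losses += 1
        else:
            consecutive_losses = 0
        # service is down once the loss burst exceeds the survival time
        if consecutive_losses > survival_time:
            down_slots += 1
    return 1 - down_slots / n_packets
```

With a loss-free link the estimate is exactly 1, and it degrades as the packet error ratio grows, which is the qualitative shape any such mapping must reproduce.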

In the first part, we assume that the impact of other nearby services on the URLLC service is negligible. Accordingly, we concentrate on diversity techniques and power control as the primary methods to enhance communication service availability and reliability, at the cost of redundant transmissions and additional resource usage. We devise a deep reinforcement learning (DRL) orchestrator that optimizes the number of hybrid automatic repeat request retransmissions and the transmission power to enhance these metrics. We use a deep Q-network framework along with a branching soft actor-critic (BSAC) framework to address scalability issues in per-device orchestration. Our 3GPP-compliant simulations show that our approach achieves significant gains in computational time and memory requirements compared to the state of the art. Moreover, our approach requires substantially less energy or spectrum to achieve the target metrics. Additionally, we offer valuable insights into the practical implementation of DRL solutions for URLLC service in real-world wireless communication systems.
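The scalability motivation for a branching architecture such as BSAC can be seen with simple action-space counting: a flat agent must enumerate every joint configuration across devices, while a branching agent outputs one head per per-device decision. The device and action counts below are illustrative, not taken from the thesis.

```python
def joint_action_space(n_devices, n_harq_levels, n_power_levels):
    """Flat agent: one discrete action per joint configuration
    of all devices -- grows exponentially with device count."""
    return (n_harq_levels * n_power_levels) ** n_devices

def branched_action_space(n_devices, n_harq_levels, n_power_levels):
    """Branching agent: one output head per per-device decision
    (retransmissions, power) -- grows linearly with device count."""
    return n_devices * (n_harq_levels + n_power_levels)
```

For example, with 10 devices and 4 levels each for retransmissions and power, the flat formulation has 16^10 joint actions, whereas the branched one needs only 80 output units, which is why per-device orchestration becomes tractable.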

In the second part, we examine mixed services, with an emphasis on distributed learning as the coexisting service. We consider 5G NR's quality-of-service mechanisms to prioritize URLLC traffic and develop models that characterize the distributed training workflow in terms of training delay, model size, and convergence. This leads to an optimization problem that uses device selection to minimize the distributed learning convergence time while meeting URLLC availability requirements. We transform this coexistence problem into a DRL problem and tackle it with our adjusted BSAC framework. Our simulations reveal that our approach achieves URLLC service availability comparable to the scenario in which all communication resources are dedicated solely to the URLLC service, and significantly higher than that of a static slicing approach with fixed resources dedicated to each slice. Finally, we propose a hierarchical reinforcement learning architecture for dynamic resource slicing on a large timescale, thereby enhancing network flexibility, scalability, and profitability.
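The flavor of the device-selection problem can be sketched with a greedy heuristic: admit learning devices by their contribution-per-resource ratio while keeping the resources they consume within the share that URLLC availability can spare. The field names, the "gain" and "cost" abstractions, and the greedy rule are all illustrative stand-ins for the thesis's actual DRL formulation.

```python
def select_devices(devices, resource_budget):
    """Greedy sketch of device selection for distributed learning.

    devices         -- dicts with 'id', 'gain' (contribution to
                       convergence), and 'cost' (resource usage)
    resource_budget -- resources left over after the URLLC
                       availability requirement is provisioned
    Returns the ids of the admitted devices.
    """
    chosen, used = [], 0.0
    # best contribution-per-resource first
    for d in sorted(devices, key=lambda d: d["gain"] / d["cost"], reverse=True):
        if used + d["cost"] <= resource_budget:
            chosen.append(d["id"])
            used += d["cost"]
    return chosen
```

A DRL agent replaces this fixed rule with a learned policy that can also react to time-varying channel conditions and training progress, which the static heuristic cannot.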