
EECS Congrats… our newly appointed Docents

Published Dec 05, 2018

EECS congratulates our newly appointed docents: Çiçek Çavdar, researcher at the Department of Communication Systems, and Josephine Sullivan, researcher at the Department of Robotics, Perception and Learning at KTH. Read what they talked about in their docent lectures.

Çiçek Çavdar, Future network architectures for 5G and beyond: Virtualization, cloudification, densification

Recently, cloud-RAN (CRAN) has been proposed to decouple the digital units (DUs) and radio units (RUs) of base stations (BSs) and to centralize the DUs in a central office, where virtualization and cloud computing technologies are leveraged to move the DUs into the "cloud". In this way, base stations become more affordable and radio networks can be densified to boost capacity.
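As a rough illustration of the CRAN split described above, here is a minimal sketch in Python. All names (RadioUnit, DigitalUnit, CentralOffice, attach) are hypothetical, chosen only to show the idea of RUs staying at cell sites while pooled DUs live in a central office, connected over the fronthaul.

from dataclasses import dataclass, field

@dataclass
class RadioUnit:
    site_id: str          # the RU remains at the cell site

@dataclass
class DigitalUnit:
    du_id: int            # the DU runs as a virtualized instance in the cloud

@dataclass
class CentralOffice:
    dus: list = field(default_factory=list)        # pooled, cloud-hosted DUs
    fronthaul: dict = field(default_factory=dict)  # RU site_id -> DU id

    def attach(self, ru: RadioUnit, du: DigitalUnit) -> None:
        """Connect an RU to a centralized DU over the fronthaul."""
        self.dus.append(du)
        self.fronthaul[ru.site_id] = du.du_id

office = CentralOffice()
office.attach(RadioUnit("site-A"), DigitalUnit(0))
office.attach(RadioUnit("site-B"), DigitalUnit(1))
print(office.fronthaul)   # {'site-A': 0, 'site-B': 1}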

Today, consensus has yet to be reached on how fronthaul traffic will be transported between RUs and DUs, and on how virtualization of network resources will extend from the radio network segment to the centralized baseband processing units. The question is not only how to design the communication systems but also how to design the computing systems at the edge or in the centralized cloud. In this talk, a 5G mobile network architecture called virtualized-CRAN (V-CRAN) will be presented, moving towards a cell-less 5G network architecture. The concept of a "virtualized BS" (V-BS), which can be optimally formed by exploiting enabling technologies such as software-defined radio (SDR) and Coordinated Multi-Point (CoMP) transmission/reception, will be introduced. Virtual BSs can be formed dynamically as user traffic moves across an area, and in this energy- and cost-optimized system the processing units can be put to sleep jointly with the BSs and transport systems. Several use cases of V-CRAN are presented to show how this evolution of the network architecture can enhance system throughput, energy efficiency, and mobility management.
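A toy sketch of the dynamic V-BS idea, under simplifying assumptions of my own (one shared DU serves the whole V-BS, and the function name and data layout are hypothetical): RUs that currently carry traffic are grouped into a virtual BS, while idle RUs and unused DUs are put to sleep to save energy.

def form_virtual_bs(active_users_per_ru: dict) -> dict:
    """Group loaded RUs into one V-BS served by a shared DU; idle RUs sleep."""
    members = sorted(ru for ru, n in active_users_per_ru.items() if n > 0)
    asleep = sorted(ru for ru, n in active_users_per_ru.items() if n == 0)
    return {"v_bs_members": members,
            "sleeping_rus": asleep,
            "dus_awake": 1 if members else 0}

# As user traffic moves across the area, the V-BS re-forms around it:
print(form_virtual_bs({"ru1": 3, "ru2": 0, "ru3": 1}))
# {'v_bs_members': ['ru1', 'ru3'], 'sleeping_rus': ['ru2'], 'dus_awake': 1}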

Josephine Sullivan, RPL, Computer Vision and the Deep Learning Transformation

Since 2012, with the introduction of deep and large convolutional neural networks trained on large datasets, the research field of computer vision has been transformed and dominated by the renaissance of neural networks in their new guise as efficiently trainable, high-capacity function approximators. The latter seems to be true, with the crucial caveat that you must have sufficient labelled training data. However, even in this data-rich training regime, the traits that led to the original disillusionment with neural networks in the 1990s - the dark arts needed to train them successfully and their black-box nature - still persist to some degree. In this talk I will give an overview of the exciting results, and the tweaks on familiar ideas, within neural networks that have brought so much success to computer vision, and then present current work on trying to "understand" what a network can and does learn.
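For concreteness, a minimal sketch of the kind of model the abstract refers to: a small convolutional network trained end-to-end as a high-capacity function approximator. The architecture and hyperparameters here are illustrative only, not the speaker's, and the random batch merely stands in for the large labelled dataset the abstract calls a crucial caveat.

import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(                          # tiny CNN for 32x32 RGB images
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                            # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                            # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                  # 10-way classifier head
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 3, 32, 32)                   # fake images (placeholder data)
y = torch.randint(0, 10, (8,))                  # fake labels (placeholder data)
loss = F.cross_entropy(model(x), y)             # one gradient step of training
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))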