Compression-based Data Identification and Representation Learning

Time: Fri 2020-10-02, 14:30

Location: Online defence via Zoom (in English)

Subject area: Electrical Engineering

Doctoral student: Hanwei Wu, Information Science and Engineering

Opponent: Associate Professor Nicola Conci, University of Trento

Supervisors: Markus Flierl, Information Science and Engineering; Professor Mikael Skoglund, Signals, Sensors and Systems, Information Science and Engineering

Abstract

Large-scale data generation, acquisition, and processing are happening at every moment in our society. This thesis explores the opportunities for applying lossy compression methods and concepts to improve information engineering techniques that transform large amounts of collected data into useful applications. Two specific applications are investigated: data identification and representation learning.

Lossy compression methods, such as product quantization and hierarchical vector quantization, can be used to build data structures for efficient retrieval and identification. There exists a trade-off between the rate of the compressed data and the general retrieval performance. This thesis focuses on studying this trade-off under the similarity identification framework, where the task is to identify the items in a database that are similar to a given query item under a given metric. The thesis studies the trade-off between the rate and the identifiability of the compressed data for correlated query and source signals. Signal processing methods such as the Karhunen-Loève transform and linear prediction are applied to exploit the linear dependence of the source and query signals. In addition, practical schemes based on tree-structured vector quantizers and transform-based models are proposed for similarity identification.

Representation learning aims to transform real-world observations into another feature space that is more amenable to particular applications. Ideally, the learned representation should contain only the essential information of the original data, with irrelevant information removed. This thesis focuses on integrating deep learning models with lossy compression methods and concepts to improve representation learning. For learning representations for large-scale image retrieval, the product quantizer is incorporated into the bottleneck stage of autoencoder models and trained in an end-to-end fashion for image retrieval tasks. The trained encoder neural network, concatenated with a product quantizer, is then used to produce short indices for each image for fast retrieval. For improving unsupervised representation learning, a quantization-based regularizer is introduced into autoencoder-based models to foster a similarity-preserving mapping at the encoder. It is demonstrated that the proposed regularization method results in improved latent representations for downstream tasks such as classification and clustering. Finally, a contrastive loss based on conditional mutual information (CMI) is proposed for learning representations of time series data. An encoder is first trained to maximize the mutual information between the latent variables and the trend information, conditioned on the encoded observed variables. The features extracted from the trained encoder are then used to learn a subsequent logistic regression model for predicting time series movements. The CMI maximization problem can be transformed into a classification problem of determining whether two encoded representations are sampled from the same class or not. It is shown that the proposed method is effective for improving the generalization of deep learning models that are trained on limited data.
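To make the quantization-based retrieval idea above concrete, the following is a minimal product quantization sketch in Python (NumPy only): each vector is split into subvectors, a small codebook is learned per subspace, and database items are stored as short codes of codebook indices. All function names and parameter choices here are illustrative assumptions, not the implementation studied in the thesis.

import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Plain k-means for one subspace; assumes len(X) >= k.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(0)
    return centers

def pq_train(X, m=4, k=256):
    # One k-way codebook per subspace; assumes X.shape[1] is divisible by m.
    s = X.shape[1] // m
    return [kmeans(X[:, i*s:(i+1)*s], k) for i in range(m)]

def pq_encode(X, codebooks):
    # Compress each vector into m small integers (its short code).
    m, s = len(codebooks), X.shape[1] // len(codebooks)
    codes = np.empty((len(X), m), dtype=np.int32)
    for i, C in enumerate(codebooks):
        d = ((X[:, None, i*s:(i+1)*s] - C[None]) ** 2).sum(-1)
        codes[:, i] = d.argmin(1)
    return codes

def pq_search(query, codes, codebooks):
    # Asymmetric distances: compare the raw query against quantized items
    # through per-subspace lookup tables, then rank all database items.
    m, s = len(codebooks), query.shape[0] // len(codebooks)
    tables = [((query[i*s:(i+1)*s] - C) ** 2).sum(-1)
              for i, C in enumerate(codebooks)]
    return sum(tables[i][codes[:, i]] for i in range(m)).argsort()

Keeping the query unquantized while the database is compressed is the usual asymmetric setup: it costs no extra storage and typically ranks items more accurately than comparing two quantized vectors.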
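The Karhunen-Loève transform mentioned above can be sketched as a projection onto the eigenvectors of the sample covariance, which decorrelates the signal components. This shows only the transform itself under assumed conventions, not how the thesis applies it to correlated query and source signals.

import numpy as np

def klt(X):
    # Rows of X are realizations of the signal. Returns the decorrelated
    # coefficients and the orthonormal basis, ordered by decreasing variance.
    Xc = X - X.mean(0)
    cov = Xc.T @ Xc / (len(X) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    basis = eigvecs[:, eigvals.argsort()[::-1]]   # reorder: largest first
    return Xc @ basis, basis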
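As a rough illustration of placing a product quantizer in an autoencoder bottleneck, the PyTorch module below quantizes each subvector against a learnable codebook and uses a straight-through gradient estimator, one common way to train through quantization end to end. The thesis's exact architecture and training scheme may differ; module and parameter names are assumptions.

import torch
import torch.nn as nn

class PQBottleneck(nn.Module):
    def __init__(self, dim=64, m=4, k=256):
        super().__init__()
        assert dim % m == 0
        self.m, self.s = m, dim // m
        # One learnable codebook of k centroids per subspace.
        self.codebooks = nn.Parameter(torch.randn(m, k, self.s))

    def forward(self, z):
        # z: (batch, dim) -> m subvectors of size s.
        zs = z.view(-1, self.m, self.s)
        d = torch.cdist(zs.transpose(0, 1), self.codebooks)  # (m, B, k)
        idx = d.argmin(-1)                                   # (m, B)
        q = torch.stack([self.codebooks[i][idx[i]]
                         for i in range(self.m)], 1).reshape(z.shape)
        # Straight-through: the forward pass uses the quantized q, while
        # gradients flow to z as if the quantizer were the identity.
        return z + (q - z).detach(), idx.t()  # codes: (B, m) short indices

A quantization-based regularizer in the spirit of the abstract could, for example, add a penalty proportional to the distance between encoder outputs and their quantized versions; this is offered only as a plausible reading, not the thesis's formulation.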
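Finally, the reduction of CMI maximization to pair classification can be sketched with a small discriminator that decides whether two encoded representations share a class; its logistic loss then serves as the contrastive training signal. The pairing scheme, network sizes, and names below are assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PairDiscriminator(nn.Module):
    def __init__(self, dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, z1, z2):
        # Score a pair of encoded representations with a single logit.
        return self.net(torch.cat([z1, z2], dim=-1)).squeeze(-1)

def contrastive_step(encoder, disc, x_a, x_b, same_class):
    # One training step on pairs (x_a, x_b) with binary labels:
    # same_class = 1 if the pair shares a class/trend label, else 0.
    # Minimizing this loss trains the encoder and discriminator jointly
    # to separate same-class pairs from different-class pairs.
    z_a, z_b = encoder(x_a), encoder(x_b)
    logits = disc(z_a, z_b)
    return F.binary_cross_entropy_with_logits(logits, same_class.float())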

urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280319