Source Code Representations of Deep Learning for Program Repair
Time: Mon 2023-12-11, 09:00
Location: F3 (Flodis), Lindstedtsvägen 26 & 28, Stockholm
Language: English
Subject area: Computer Science
Doctoral student: Zimin Chen, Theoretical Computer Science, TCS
Opponent: Professor Zhendong Su, ETH Zürich, Zürich, Switzerland
Supervisor: Professor Martin Monperrus, Theoretical Computer Science, TCS; Professor Benoit Baudry, Software and Computer Systems, SCS
Abstract
Deep learning, leveraging artificial neural networks, has demonstrated significant capabilities in understanding intricate patterns within data. In recent years, it has also been applied to the vast domain of source code, where it supports diverse software engineering tasks such as program repair, code summarization, and vulnerability detection. However, applying deep learning to source code poses unique challenges. This thesis focuses primarily on the challenge of representing source code to deep learning models for automated program repair, a task that aims to automatically fix program bugs.
Source code differs inherently from natural language: it is large in size and, because identifiers can be named freely, it has an open-ended vocabulary, which gives rise to the out-of-vocabulary problem. Furthermore, source code demands exact representation; even a minor error can cause a complete system failure. These characteristics underscore the importance of designing appropriate input and output representations for deep learning models, ensuring that they can efficiently and accurately process code for program repair. The core contributions of this thesis address these challenges.
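To make the out-of-vocabulary problem concrete, the sketch below (an illustration, not code from the thesis) builds a word-level vocabulary from a tiny "training" corpus and shows how freely named identifiers fall outside it, collapsing into an uninformative unknown token:

```python
# Illustrative sketch: a word-level vocabulary built over code tokens
# cannot cover freely named identifiers. Names and data are made up.

def build_vocab(token_lists):
    """Collect every distinct token seen during 'training'."""
    return {tok for toks in token_lists for tok in toks}

def encode(tokens, vocab, unk="<UNK>"):
    """Map unseen tokens to <UNK>, losing their identity."""
    return [tok if tok in vocab else unk for tok in tokens]

train = [
    ["int", "totalCount", "=", "0", ";"],
    ["return", "totalCount", "+", "1", ";"],
]
vocab = build_vocab(train)

# A new file uses a fresh identifier the model has never seen:
buggy_line = ["return", "userRetryLimit", "-", "1", ";"]
print(encode(buggy_line, vocab))
# ['return', '<UNK>', '<UNK>', '1', ';']  -- both the identifier and
# the unseen '-' operator are lost
```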
First, we propose a compact input representation that encapsulates the essential context for bug fixing: it retains the information needed to understand the bug while removing surrounding context that would only add noise to the model.
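The exact representation is defined in the thesis's included papers; as a simplified sketch, one can think of marking the buggy line with special tokens and keeping only a small window of surrounding lines. The marker tokens and the fixed context window below are illustrative assumptions:

```python
# Simplified sketch of a compact input representation. The marker
# tokens and the fixed context window are illustrative choices, not
# the exact scheme used in the thesis.

def compact_input(lines, bug_line_idx, context=2,
                  start="<BUG_START>", end="<BUG_END>"):
    """Keep only `context` lines around the bug and mark the buggy line."""
    lo = max(0, bug_line_idx - context)
    hi = min(len(lines), bug_line_idx + context + 1)
    window = list(lines[lo:hi])
    rel = bug_line_idx - lo
    window[rel] = f"{start} {window[rel]} {end}"
    return "\n".join(window)

source = [
    "int divide(int a, int b) {",
    "    check(a);",
    "    return a / b;",   # bug: no guard against b == 0
    "}",
]
print(compact_input(source, bug_line_idx=2))
```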
Second, we tackle the out-of-vocabulary problem by harnessing techniques from natural language processing, capitalizing on existing code elements for bug fixes, and drawing parallels to the redundancy assumption in traditional program repair approaches.
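One family of NLP techniques that shrinks the vocabulary is subword tokenization; real systems often learn subword units (e.g., byte-pair encoding), and the thesis additionally exploits the redundancy assumption by reusing code elements already present in the input. The sketch below shows only the simplest variant of the subword idea, splitting camelCase identifiers into reusable subtokens:

```python
import re

# Illustrative sketch: splitting camelCase identifiers into subtokens
# lets a small shared vocabulary ("user", "retry", "limit") cover many
# full identifiers. Real systems typically use learned subword
# vocabularies such as BPE; this regex split is a simplification.

def split_identifier(token):
    """Split camelCase/PascalCase tokens into lowercase subtokens."""
    parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", token)
    return [p.lower() for p in parts] or [token]

print(split_identifier("userRetryLimit"))   # ['user', 'retry', 'limit']
print(split_identifier("HTTPResponse"))     # ['http', 'response']
```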
Third, to cope with the precision that source code demands, we integrate bug information into the input representation and pivot the model's output from complete code generation to concise edit instructions, offering a more focused and accurate approach.
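To see why edits are a more compact target than full code, consider the sketch below, which derives a token-level edit script with Python's standard difflib; the edit format shown is an illustrative choice, not the thesis's exact output encoding:

```python
import difflib

# Illustrative sketch: predicting a short edit script instead of
# regenerating the whole statement. The opcode format comes from
# difflib; the thesis's actual edit encoding may differ.

buggy = "if ( a > b ) return a ; else return a ;".split()
fixed = "if ( a > b ) return a ; else return b ;".split()

matcher = difflib.SequenceMatcher(a=buggy, b=fixed)
edits = [(op, buggy[i1:i2], fixed[j1:j2])
         for op, i1, i2, j1, j2 in matcher.get_opcodes()
         if op != "equal"]
print(edits)
# [('replace', ['a'], ['b'])]  -- one token-level edit instead of
# regenerating all 13 output tokens
```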
Finally, we show that unifying the source code representation across multiple code-related tasks facilitates transfer and multi-task learning. Both strategies help mitigate the issues that arise when training on limited datasets.
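A minimal PyTorch sketch of the multi-task idea follows, assuming two hypothetical tasks (repair and bug detection) that share one encoder over a unified token representation; the architecture, sizes, and task heads are arbitrary illustrations, not the models used in the thesis:

```python
import torch
import torch.nn as nn

# Minimal sketch of multi-task learning over a shared code encoder.
# The architecture and the two heads are illustrative assumptions.

class SharedCodeModel(nn.Module):
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)  # shared encoder
        self.repair_head = nn.Linear(dim, vocab_size)      # per-token output
        self.detect_head = nn.Linear(dim, 2)               # buggy / not buggy

    def forward(self, tokens, task):
        hidden, _ = self.encoder(self.embed(tokens))
        if task == "repair":
            return self.repair_head(hidden)        # predictions per token
        return self.detect_head(hidden[:, -1, :])  # one label per sequence

model = SharedCodeModel()
batch = torch.randint(0, 1000, (4, 16))            # 4 snippets, 16 tokens
print(model(batch, task="repair").shape)           # torch.Size([4, 16, 1000])
print(model(batch, task="detect").shape)           # torch.Size([4, 2])
```

Because both heads back-propagate through the same encoder, gradients from the data-rich task can improve the representation used by the data-poor one, which is the intuition behind both transfer and multi-task learning.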