Leveraging machine learning for software redocumentation—A comprehensive comparison of methods in practice
Related papers
Maybe Deep Neural Networks are the Best Choice for Modeling Source Code
ArXiv, 2019
Statistical language modeling techniques have successfully been applied to source code, yielding a variety of new software development tools, such as tools for code suggestion and improving readability. A major issue with these techniques is that code introduces new vocabulary at a far higher rate than natural language, as new identifier names proliferate. Traditional language models, however, limit the vocabulary to a fixed set of common words. For code, this strong assumption has been shown to have a significant negative effect on predictive performance. Yet an open-vocabulary neural language model for code has not previously been introduced in the literature. We present a new open-vocabulary neural language model for code that is not limited to a fixed vocabulary of identifier names. We employ a segmentation into subword units, subsequences of tokens chosen based on a compression criterion, following previous work in machine translation. Our network achieves best-in-class...
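The compression-based segmentation this abstract describes is byte-pair encoding (BPE): repeatedly merging the most frequent adjacent symbol pair so that common identifier fragments become subword units. Below is a minimal Python sketch of that merge loop; the toy corpus and merge count are illustrative assumptions, not the paper's setup.

```python
# A minimal BPE-style merge loop over a toy corpus of identifiers.
from collections import Counter

corpus = ["getUserName", "getUserId", "setUserName", "parseUserInput"]

# Start from character-level symbols, with an end-of-word marker.
words = {tuple(w) + ("</w>",): 1 for w in corpus}

def pair_counts(words):
    counts = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            counts[(a, b)] += freq
    return counts

def merge(words, pair):
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

for _ in range(10):  # 10 merges here; real vocabularies use thousands
    counts = pair_counts(words)
    if not counts:
        break
    words = merge(words, counts.most_common(1)[0][0])

print(list(words))  # shared subwords such as "User" emerge across identifiers
```

Because unseen identifiers decompose into known subwords, the model's vocabulary stays fixed while remaining open to new names.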
A ML-LLM pairing for better code comment classification
arXiv (Cornell University), 2023
The "Information Retrieval in Software Engineering (IRSE) 1 " at FIRE 2023 shared task introduces code comment classification, a challenging task that pairs a code snippet with a comment that should be evaluated as either useful or not useful to the understanding of the relevant code. We answer the code comment classification shared task challenge by providing a twofold evaluation: from an algorithmic perspective, we compare the performance of classical machine learning systems and complement our evaluations from a data-driven perspective by generating additional data with the help of large language model (LLM) prompting to measure the potential increase in performance. Our best model, which took second place in the shared task, is a Neural Network with a Macro-F1 score of 88.401% on the provided seed data and a 1.5% overall increase in performance on the data generated by the LLM.
Comparing Deep Learning-based Approaches for Source Code Classification
2021
In recent years, various methods for source code classification using deep learning have been proposed. In these methods, classification is performed by training a neural network on a representation of the source code, such as its token sequence. It is therefore necessary to select an appropriate neural network and source code representation, because learning efficiency decreases when networks or representations that are ineffective for source code classification are used. However, it is not clear which neural networks, or which combinations of source code representations, are effective for building high-precision source code classification methods. In this study, we compare source code classification methods based on deep learning. First, we selected three neural networks that are widely used in existing research. Next, we compared the accuracy of a total of six source code classification methods in which each neural network was trained on the token sequence or a...
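One of the standard designs compared in such studies is an embedding layer followed by a recurrent network over the token sequence. The following is a minimal sketch of that architecture; all sizes and inputs are illustrative assumptions rather than the paper's configuration.

```python
# A minimal embedding + LSTM classifier over source-code token sequences.
import torch
import torch.nn as nn

VOCAB, CLASSES, EMB, HID = 500, 4, 64, 128

class TokenSeqClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB, padding_idx=0)
        self.lstm = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, CLASSES)

    def forward(self, token_ids):              # (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))  # (batch, seq_len, HID)
        return self.out(h[:, -1, :])           # classify from the last state

model = TokenSeqClassifier()
batch = torch.randint(1, VOCAB, (8, 32))       # 8 token sequences of length 32
logits = model(batch)                          # (8, CLASSES)
print(logits.shape)
```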
Source code representation for comment generation and program comprehension
2021
Code comment generation is the task of generating a high-level natural language description for a given code snippet. Comments help software developers maintain programs; however, comments are often missing or outdated. Many studies develop models to generate comments automatically, mainly using deep neural networks. A gap in current research is capturing character-level information and the syntactic differences between tokens. Moreover, the contextual meaning of code tokens is generally overlooked. In this thesis, we present the LAnguage Model and Named Entity Recognition Code comment generator (LAMNER-Code). A character-level language model is used to learn the semantic representation, and a Named Entity Recognition model is trained to learn the code entities. These representations are used in a Neural Machine Translation architecture to produce comments. We evaluate the comments generated by our model and other baselines against ground truth on a Java dataset...
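The representation idea here is that each code token carries two learned vectors, a semantic embedding and an entity-type embedding, concatenated before the translation encoder. A minimal sketch follows; the embedding tables and lookups are illustrative stand-ins for the trained LAMNER models.

```python
# Concatenating semantic and entity-type embeddings per token.
import torch
import torch.nn as nn

SEM_DIM, ENT_DIM = 128, 16
sem_emb = nn.Embedding(1000, SEM_DIM)   # stand-in for the character-level LM
ent_emb = nn.Embedding(8, ENT_DIM)      # stand-in for NER entity types

token_ids = torch.tensor([[5, 42, 7]])   # e.g. "public", "getName", "("
entity_ids = torch.tensor([[1, 3, 0]])   # e.g. MODIFIER, METHOD, OTHER
combined = torch.cat([sem_emb(token_ids), ent_emb(entity_ids)], dim=-1)
print(combined.shape)  # (1, 3, SEM_DIM + ENT_DIM) -> fed to the NMT encoder
```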
Leveraging Deep Learning for Abstractive Code Summarization of Unofficial Documentation
arXiv (Cornell University), 2023
Usually, programming languages have official documentation to guide developers with APIs, methods, and classes. However, researchers have identified insufficient or inadequate documentation examples and flaws in the API's complex structure as barriers to learning an API. As a result, developers may consult other sources (e.g., StackOverflow, GitHub) to learn more about an API. Recent research studies have shown that unofficial documentation is a valuable source of information for generating code summaries. We have therefore been motivated to leverage this type of documentation, along with deep learning techniques, to generate high-quality summaries for APIs discussed in informal documentation. This paper proposes an automatic approach using BART, a state-of-the-art transformer model, to generate summaries for APIs discussed on StackOverflow. We built an oracle of human-generated summaries to evaluate our approach, using the ROUGE and BLEU metrics, which are the most widely used evaluation metrics in text summarization. Furthermore, we evaluated our summaries empirically against a previous work in terms of quality. Our findings demonstrate that using deep learning algorithms can improve summary quality, outperforming the previous work by an average of 57% for precision, 66% for recall, and 61% for F-measure, while running 4.4 times faster.
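To make the setup concrete, here is a sketch of abstractive summarization with an off-the-shelf BART checkpoint via Hugging Face transformers. The paper fine-tunes BART on StackOverflow data; the public "facebook/bart-large-cnn" model and the sample post below are stand-ins.

```python
# Summarizing an informal API discussion with a pre-trained BART model.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
post = (
    "The requests.Session object persists cookies and connection pooling "
    "across requests, so reusing one session is much faster than calling "
    "requests.get repeatedly when talking to the same host."
)
result = summarizer(post, max_length=30, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```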
Are deep neural networks the best choice for modeling source code?
Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, 2017
Current statistical language modeling techniques, including deep-learning-based models, have proven to be quite effective for source code. We argue here that the special properties of source code can be exploited for further improvements. In this work, we enhance established language modeling approaches to handle the special challenges of modeling source code, such as frequent changes, larger and changing vocabularies, and deeply nested scopes. We present a fast, nested language modeling toolkit specifically designed for software, with the ability to add and remove text, and mix and swap out many models. Specifically, we improve upon prior cache-modeling work and present a model with a much more expansive, multi-level notion of locality that we show to be well-suited for modeling software. We present results on varying corpora in comparison with traditional N-gram models as well as RNN and LSTM deep-learning language models, and release all our source code for public use. Our evaluations suggest that carefully adapting N-gram models for source code can yield performance that surpasses even RNN- and LSTM-based deep-learning models.
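The core cache intuition is that tokens just seen in the current file are far more likely to recur than global statistics suggest, so a local "cache" distribution is mixed with the global model. The sketch below shows that mixture at the unigram level; real nested cache models are far more elaborate, and the weights and corpus here are toy assumptions.

```python
# Mixing a global unigram estimate with a local cache of recent tokens.
from collections import Counter, deque

class CachedUnigram:
    def __init__(self, corpus_tokens, cache_size=100, lam=0.7):
        self.global_counts = Counter(corpus_tokens)
        self.total = sum(self.global_counts.values())
        self.cache = deque(maxlen=cache_size)
        self.lam = lam  # weight on the global model

    def prob(self, token):
        g = self.global_counts.get(token, 0) / max(self.total, 1)
        c = self.cache.count(token) / max(len(self.cache), 1)
        return self.lam * g + (1 - self.lam) * c

    def observe(self, token):
        self.cache.append(token)

lm = CachedUnigram("public static void main String args".split())
for tok in ["myLocalVar", "myLocalVar"]:
    lm.observe(tok)
print(lm.prob("myLocalVar"), lm.prob("public"))  # cache boosts the local name
```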
STACC: Code Comment Classification using SentenceTransformers
arXiv (Cornell University), 2023
Code comments are a key resource for information about software artefacts. Depending on the use case, only some types of comments are useful, so automatic approaches to classify these comments have been proposed. In this work, we address this need by proposing STACC, a set of SentenceTransformers-based binary classifiers. These lightweight classifiers are trained and tested on the NLBSE Code Comment Classification tool competition dataset, and surpass the baseline by a significant margin, achieving an average F1 score of 0.74 against the baseline's 0.31, an improvement of 139%. A replication package, as well as the models themselves, are publicly available.
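The general recipe behind such classifiers is to embed each comment with a SentenceTransformer and fit a lightweight classifier head on the embeddings. A minimal sketch follows; the model name, toy data, and logistic-regression head are illustrative assumptions, and the authors' actual training pipeline may differ.

```python
# Sentence embeddings feeding a lightweight binary classifier head.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")
comments = ["Returns the index of the first match.", "TODO remove this hack",
            "Thread-safe wrapper around the cache.", "???"]
labels = [1, 0, 1, 0]  # 1 = belongs to the target comment category

X = encoder.encode(comments)  # (n_samples, embedding_dim) array
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(encoder.encode(["Computes the hash of the key."])))
```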
DeepVS: an efficient and generic approach for source code modelling usage
Electronics Letters, 2020
The source code suggestions provided by current IDEs mostly depend on static type learning, and they often propose suggestions irrelevant to the particular context. Recently, deep-learning-based approaches have shown great potential in the modeling of source code for various software engineering tasks. However, these techniques lack the generalization and robustness needed to adopt such models in a real-world software development environment. This letter presents DeepVS, an end-to-end deep neural code completion tool that learns from existing codebases by exploiting a bidirectional Gated Recurrent Unit (BiGRU) neural network. The proposed tool can provide source code suggestions instantly in an IDE using the pre-trained BiGRU network. The evaluation of this work is twofold: quantitative and qualitative. Through extensive evaluation on ten real-world open-source software systems, the proposed method shows significant performance enhancement and demonstrates its practicality. Moreover, the results suggest that the DeepVS tool is capable of suggesting zero-day (unseen) code tokens by learning coding patterns from real-world software systems.
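As a rough picture of the architecture, a BiGRU reads the preceding code context and a linear layer scores every vocabulary token as the next suggestion. The sketch below shows that shape; all sizes and inputs are illustrative assumptions, not DeepVS itself.

```python
# A BiGRU over preceding code tokens producing next-token suggestion scores.
import torch
import torch.nn as nn

VOCAB, EMB, HID = 1000, 64, 128

class BiGRUSuggester(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.gru = nn.GRU(EMB, HID, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * HID, VOCAB)  # a score for every vocab token

    def forward(self, context_ids):            # (batch, seq_len)
        h, _ = self.gru(self.emb(context_ids))
        return self.out(h[:, -1, :])           # predict the next token

model = BiGRUSuggester()
context = torch.randint(0, VOCAB, (1, 20))     # 20 preceding code tokens
top5 = model(context).topk(5).indices          # 5 highest-scoring suggestions
print(top5)
```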
CodeGRU: Context-aware deep learning with gated recurrent unit for source code modeling
Information and Software Technology, 2020
Context: Recently, deep-learning-based Natural Language Processing (NLP) models have shown great potential in the modeling of source code. However, a major limitation of these approaches is that they treat source code as simple text tokens and ignore its contextual, syntactic, and structural dependencies. Objective: In this work, we present CodeGRU, a gated recurrent unit based source code language model capable of capturing source code's contextual, syntactic, and structural dependencies. Method: We introduce a novel approach that captures source code context by leveraging source code token types. Further, we adopt a novel approach that learns variable-size context by taking into account the source code's syntax and structural information. Results: We evaluate CodeGRU on a real-world data set; it outperforms state-of-the-art language models and helps reduce the vocabulary size by up to 24.93%. Unlike previous works, we tested CodeGRU with an independent test set, which suggests that our methodology does not require the source code to come from the same domain as the training data when providing suggestions. We further evaluate CodeGRU on two software engineering applications: source code suggestion and source code completion. Conclusion: Our experiments confirm that source code's contextual information can be vital and can help improve software language models. The extensive evaluation of CodeGRU shows that it outperforms state-of-the-art models. The results further suggest that the proposed approach can help reduce the vocabulary size and is of practical use for software developers.
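The token-type idea can be illustrated by abstracting identifiers and literals to their lexical categories, so the model sees "NAME = NUMBER" instead of raw text and the vocabulary shrinks. CodeGRU targets Java, but the sketch below uses Python's standard-library tokenizer purely for illustration.

```python
# Abstracting identifiers and literals to token types with the stdlib tokenizer.
import io
import tokenize

src = "total_price = unit_price * 3\nlabel = 'sum'\n"
abstracted = []
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    if tok.type == tokenize.NAME:
        abstracted.append("NAME")
    elif tok.type == tokenize.NUMBER:
        abstracted.append("NUMBER")
    elif tok.type == tokenize.STRING:
        abstracted.append("STRING")
    elif tok.type == tokenize.OP:
        abstracted.append(tok.string)  # keep operators literally

print(" ".join(abstracted))  # NAME = NAME * NUMBER NAME = STRING
```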
ArXiv, 2021
Code understanding is an increasingly important application of Artificial Intelligence. A fundamental aspect of understanding code is understanding text about code, e.g., documentation and forum discussions. Pre-trained language models (e.g., BERT) are a popular approach for various NLP tasks, and there are now a variety of benchmarks, such as GLUE, to help improve the development of such models for natural language understanding. However, little is known about how well such models work on textual artifacts about code, and we are unaware of any systematic set of downstream tasks for such an evaluation. In this paper, we derive a set of benchmarks (BLANCA: Benchmarks for LANguage models on Coding Artifacts) that assess code understanding based on tasks such as predicting the best answer to a question in a forum post, finding related forum posts, or predicting classes related in a hierarchy from class documentation. We evaluate the performance of current state-of-the-art language model...
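One of the listed tasks, finding related forum posts, can be approached by encoding posts with a pre-trained model and comparing them by cosine similarity. The sketch below shows that approach with a generic BERT encoder; the model choice and mean pooling are illustrative assumptions, not the benchmark's protocol.

```python
# Scoring forum-post relatedness with BERT embeddings and cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

posts = ["How do I read a file line by line in Java?",
         "Best way to iterate over lines of a text file in Java",
         "How to center a div with CSS?"]

with torch.no_grad():
    enc = tok(posts, padding=True, truncation=True, return_tensors="pt")
    # Mean-pool token vectors (ignoring padding for simplicity).
    emb = bert(**enc).last_hidden_state.mean(dim=1)

emb = torch.nn.functional.normalize(emb, dim=1)
print(emb @ emb.T)  # the two Java posts should score most similar
```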