Evaluation of Strategies over Static Code Analysis Tools

An approach for a method and a tool supporting the evaluation of the quality of static code analysis tools

There is a lack of information concerning the quality of static code analysis tools. To overcome this, we developed a method and a tool that support quality engineers in determining the quality of static code analysis tools. This paper shows how the method works and where the tool supports it. We have already applied the combination of the method and its tool to two static code analysis tools in different versions. On this basis, we further illustrate some results of using the method.

Static Code Analysis Tools: A Systematic Literature Review

DAAAM Proceedings

Static code analysis tools are being increasingly used to improve code quality. Such tools can statically analyze the code to find bugs, security vulnerabilities, security hotspots, duplications, and code smells. The quality of the source code is a key factor in any software product and requires constant inspection and supervision. Static code analysis is a valid way to infer the behavior of a program without executing it. Many tools allow static analysis in different frameworks, for different programming languages, and for detecting different defects in the source code. Still, only a small number of tools provide support for domain-specific languages. This paper aims to present a systematic literature review focusing on the most frequently used static code analysis tools and on classifying the presented tools according to both the general-purpose and domain-specific programming languages they support and the types of defects each tool can detect.

Static Code Analysis

Static code analysis tools can reduce the number of bugs in a program and therefore reduce the cost of that program. Many developers do not use these tools, losing a lot of time on manual code analysis (in some cases there is no analysis at all) and a lot of money on the resources needed to do that analysis. In this paper we test and study the results of three static code analysis tools that, while inexpensive, can efficiently remove the most common vulnerabilities in software. It can be difficult to compare tools with different characteristics, but we can obtain interesting results by testing the tools together.

Preliminary Results On Using Static Analysis Tools For Software Inspection

15th International Symposium on Software Reliability Engineering, 2004

Software inspection has been shown to be an effective defect removal practice, leading to higher quality software with lower field failures. Automated software inspection tools are emerging for identifying a subset of defects in a less labor-intensive manner than manual inspection. This paper investigates the use of automated inspection for a large-scale industrial software system at Nortel Networks. We propose and utilize a defect classification scheme for enumerating the types of defects that can be identified by automated inspections. Additionally, we demonstrate that automated code inspection faults can be used as efficient predictors of field failures and are effective for identifying fault-prone modules.

On the Value of Static Analysis for Fault Detection in Software

IEEE Transactions on Software Engineering, Vol. 32, No. 4, pp. 240-253, 2006

No single software fault-detection technique is capable of addressing all fault-detection concerns. Similarly to software reviews and testing, static analysis tools (or automated static analysis) can be used to remove defects prior to release of a software product. To determine to what extent automated static analysis can help in the economic production of a high-quality product, we have analyzed static analysis faults and test and customer-reported failures for three large-scale industrial software systems developed at Nortel Networks. The data indicate that automated static analysis is an affordable means of software fault detection. Using the Orthogonal Defect Classification scheme, we found that automated static analysis is effective at identifying Assignment and Checking faults, allowing the later software production phases to focus on more complex, functional, and algorithmic faults. A majority of the defects found by automated static analysis appear to be produced by a few key types of programmer error and some of these types have the potential to cause security vulnerabilities. Statistical analysis results indicate the number of automated static analysis faults can be effective for identifying problem modules. Our results indicate static analysis tools are complementary to other fault-detection techniques for the economic production of a high-quality software product.

Automated Static Analysis: Survey of Tools

No error detection tool is capable of detecting, processing, and rectifying all errors. Our main aim is to increase the level of authenticity that ASA (Automatic Static Analysis) can provide. Static analysis tools are used to check for vulnerabilities in systems and programs, as the correctness or authenticity of a program is the greatest concern in developing and verifying it prior to release. These tools use a wide variety of functions to prevent many errors and loopholes from occurring at different stages of the program. Still, ASA tools produce a lot of false positives, which in turn require substantial human involvement to rectify. Here we review the different techniques and methods primarily used in Automatic Static Analysis, and also the coding concerns that arise in the process.

Investigating Automatic Static Analysis Results to Identify Quality Problems: An Inductive Study

2012 35th Annual IEEE Software Engineering Workshop, 2012

Background: Automatic static analysis (ASA) tools examine source code to discover "issues", i.e. code patterns that are symptoms of bad programming practices and that can lead to defective behavior. Studies in the literature have shown that these tools find defects earlier than other verification activities, but they produce a substantial number of false positive warnings. For this reason, an alternative approach is to use the set of ASA issues to identify defect prone files and components rather than focusing on the individual issues. Aim: We conducted an exploratory study to investigate whether ASA issues can be used as early indicators of faulty files and components and, for the first time, whether they point to a decay of specific software quality attributes, such as maintainability or functionality. Our aim is to understand the critical parameters and feasibility of such an approach to feed into future research on more specific quality and defect prediction models. Method: We analyzed an industrial C# web application using the Resharper ASA tool and explored if significant correlations exist in such a data set.

Identifying and Documenting False Positive Patterns Generated by Static Code Analysis Tools

2017 IEEE/ACM 4th International Workshop on Software Engineering Research and Industrial Practice (SER&IP)

Static code analysis tools are known to flag a large number of false positives. A false positive is a warning message generated by a static code analysis tool for a location in the source code that does not have any known problems. This thesis presents our approach and results in identifying and documenting false positives generated by static code analysis tools. The goal of our study was to understand the different kinds of false positives generated so we can (1) automatically determine if a warning message from a static code analysis tool truly indicates an error, and (2) reduce the number of false positives developers must triage. We used two open-source tools and one commercial tool in our study. Our approach led to a hierarchy of 14 core false positive patterns, with some patterns appearing in multiple variations. We implemented checkers to identify the code structures of false positive patterns and to eliminate them from the output of the tools. Preliminary results showed that we were able to reduce the number of warnings by 14.0%-99.9% with a precision of 94.2%-100.0% by applying our false positive filters in different cases.