Software assurance (SwA) is defined as "the level of confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its lifecycle, and that the software functions in the intended manner."[1]
SW Assurance = Quality + Security
Software Assurance happens across the Software Lifecycle - not in any one process or tool.
The earlier in the Software Lifecycle it happens the better.
Training is the foundation for a secure SDLC - it prevents defects from being put into the code in the first place. Detection is much more expensive than prevention, and correction is much more expensive than detection.
<50% of defects are associated with coding
A significant number of defects occur before any code is written, i.e. in the Requirements Analysis and Design phases.
Of the 4 defect types in the code, Static Analysis is optimal for only 1: generic defects that are visible in the code. In other words, Static Analysis will only find a subset of the defects in the code.
The % of issues findable by static analysis is MUCH closer to 10% than 100%.
An example for illustration purposes: playing with the numbers above gives ~12.5% of defects findable by static analysis (1/2 × 1/4 = 1/8), where 1/2 is the fraction of defects associated with coding and 1/4 is the one defect type (of the four) for which static analysis is optimal.
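As a minimal sketch, the same back-of-envelope arithmetic in Python; the fractions are the illustrative assumptions above, not measured data:

```python
# Back-of-envelope estimate from the text above; both fractions are
# illustrative assumptions, not measured data.
coding_fraction = 0.5         # ~half of all defects are associated with coding
optimal_type_fraction = 0.25  # static analysis is optimal for 1 of the 4 defect types

findable = coding_fraction * optimal_type_fraction
print(f"~{findable:.1%} of all defects findable by static analysis")  # ~12.5%
```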
By "BIG" I mean the ones that hit the headlines and have global impact.
Static Analysis does not find the big media vulnerabilities e.g.
These are typically found first through fuzzing - not static analysis (and painstaking analysis and reverse engineering by experts and months of work).
A third-party Software Composition Analysis tool (e.g. WhiteSource, Black Duck) should be used to stay informed of known vulnerabilities in third-party components (these tools also support open-source license compliance reporting).
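To make the idea concrete, here is a minimal, hypothetical sketch of what such a tool does conceptually: match declared dependencies against a database of known-vulnerable versions. The advisory entries below are invented for illustration; real tools query curated vulnerability databases such as the NVD.

```python
# Hypothetical sketch of the core of a Software Composition Analysis check.
# The advisory data below is invented for illustration; real tools
# (e.g. WhiteSource, Black Duck) query curated vulnerability databases.

# Hypothetical advisory database: package -> versions with known vulnerabilities.
KNOWN_VULNERABLE = {
    "openssl": {"1.0.1f"},
    "log4j-core": {"2.14.1"},
}

def scan_dependencies(dependencies):
    """Return a report line for each dependency pinned to a known-vulnerable version."""
    findings = []
    for package, version in dependencies.items():
        if version in KNOWN_VULNERABLE.get(package, set()):
            findings.append(f"{package} {version}: known vulnerable version")
    return findings

if __name__ == "__main__":
    deps = {"openssl": "1.0.1f", "zlib": "1.2.13"}  # example dependency manifest
    for line in scan_dependencies(deps):
        print(line)
```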
Using more than one Static Analysis tool - at different times in the SDLC - gives a better Return On Investment
Different test tools have different characteristics:
The probability that all tools fail to find an issue (i.e. the issue remains in the code) is the product of each tool's miss probability (its false-negative rate). For example, if 3 orthogonal tools are applied and each has a 50% chance of finding a given issue, then the probability that the issue is not found by any of the 3 tools is 12.5%:
∏ᵢ(1 − Cᵢ) = (1 − 0.5)(1 − 0.5)(1 − 0.5) = 0.125 = 12.5%
where Cᵢ is the probability that tool i catches (finds) a given issue.
For most tools the individual probability of not finding an issue is significantly higher than 50%, so in practice the combined miss probability remains well above 12.5% even with multiple tools.
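A small sketch of this calculation, assuming the tools are orthogonal (statistically independent); the catch rates in the second call are hypothetical:

```python
import math

def residual_miss_probability(catch_rates):
    """Probability an issue survives every tool: the product of each
    tool's miss probability (1 - Ci), assuming independent tools."""
    return math.prod(1.0 - c for c in catch_rates)

# Three orthogonal tools, each catching 50% of issues -> 12.5% residual miss:
print(residual_miss_probability([0.5, 0.5, 0.5]))   # 0.125

# With more realistic (hypothetical) catch rates the residual miss is far higher:
print(residual_miss_probability([0.2, 0.3, 0.25]))  # 0.42
```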
"The OWASP SonarQube project aims to provide open source SAST using the existing open source solutions. SonarQube is one of the world’s most popular continuous code quality tools and it's actively used by many developers and companies.
This project aims to enable more security functionalities to SonarQube and use it as an SAST. This project will use open source sonar plugins, rules, as well as other open source plugins especially FindSecBugs and its security rules. FindSecBugs enables the taint analysis"
In general, we want to optimize the signal-to-noise ratio (S/N), i.e. filter the findings - especially in a first-pass assessment, as this maximizes the return on investment/effort.
Filtering improves the quality of the result but sacrifices some of the signal to remove most of the noise.
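As a minimal sketch of such a first-pass filter (the Finding fields and the severity/confidence labels are assumptions; map them to whatever your tool's report format actually emits, e.g. SARIF properties):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    severity: str    # "high" | "medium" | "low"
    confidence: str  # "high" | "medium" | "low"

def first_pass(findings):
    """Trade a little signal for a lot less noise: keep only findings
    that are both high severity and high confidence."""
    return [f for f in findings if f.severity == "high" and f.confidence == "high"]

raw = [
    Finding("sql-injection", "high", "high"),
    Finding("dead-code", "low", "high"),
    Finding("possible-null-deref", "high", "low"),
]
print(first_pass(raw))  # only the sql-injection finding survives the first pass
```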
The OWASP Benchmark project (https://www.owasp.org/index.php/Benchmark) provides test code in Java that allows benchmarking and comparison of different tools.
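The Benchmark's scorecards rate a tool by its true-positive rate minus its false-positive rate over the labeled test cases; here is a sketch of that style of scoring, with made-up numbers for two hypothetical tools:

```python
def score(tp, fn, fp, tn):
    tpr = tp / (tp + fn)  # fraction of real vulnerabilities the tool flags
    fpr = fp / (fp + tn)  # fraction of safe test cases it flags anyway
    return tpr - fpr

# Hypothetical results for two tools on the same labeled test suite:
print(score(tp=600, fn=400, fp=200, tn=800))  # tool A: 0.6 - 0.2 = 0.40
print(score(tp=900, fn=100, fp=700, tn=300))  # tool B: 0.9 - 0.7 = 0.20
```

A noisy tool that flags everything scores poorly even with a high detection rate, which matches the signal-to-noise point above.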
For cost reasons we want to find defects as soon as possible, i.e. not have to wait for a scan of a nightly build or a release.
This can be done as code is being typed, at development time, e.g. with https://www.sonarlint.org/ (both SonarLint and SonarQube rely on the same static source code analyzers). For C/C++ code, I use VSCode on Linux with the Cppcheck plugin to check as I type.
| Cover | Title |
| --- | --- |