Enterprise Testing of the Software Supply Chain
Introduction
Veracode has been publishing State of Software Security (SOSS)1 reports since 2010.
This year we began investigating our dataset from perspectives that are not routinely
covered in our traditional SOSS reports, which allows us to extend our analysis to a
variety of topical areas. The first feature supplement was published in April and focused
on the vulnerabilities in software applications used by publicly traded companies.2
This is our second feature supplement for 2012.
The focus of this report is the state of enterprise programs that assess the security of software purchased from vendors
(where an enterprise is defined as a company with over $500 million in annual revenue). Security experts have long
advised enterprises to incorporate application security testing into their software procurement or vendor management
activities. Yet only recently has the idea that software vulnerabilities contribute to IT supply chain risks garnered more
media attention and enterprise interest. As a result, enterprises are looking for guidance on establishing application
security programs to expose and manage the risks associated with vendor supplied software. While our SOSS reports
have always analyzed vulnerabilities, remediation and compliance data across different supplier types (internally developed, commercial, open source and outsourced), this feature supplement will extend our investigation to include:
• Software security testing program metrics (e.g. program participation rates)
• How different program approaches impact vendor compliance with application security policies
This SOSS feature supplement draws on continuously updated information in Veracode’s cloud-based application
security services platform. Unlike a survey, the data comes from actual security analysis of web and non-web
applications across industry verticals, languages and platforms. This data also represents multiple security testing
methodologies (static binary, dynamic and manual) on a wide range of application types and programming languages.
The resulting intelligence is unique in the breadth and depth it offers.
In order to focus only on enterprises with programs for vendor application security testing, we derived the data
on which this report is based from 939 application builds submitted to the Veracode platform during an 18 month
time period from January 2011 to June 2012.3 Veracode analyzes the vendor submitted applications, attests to the
applications’ security posture to the requesting enterprise, and provides detailed, prioritized remediation guidance
to the software vendor.
We believe this SOSS feature supplement presents some interesting findings and we hope you enjoy reading the report.
1 Previous Volumes of State of Software Security reports are available at www.veracode.com/reports
2 Study of Software Related Cybersecurity Risks in Public Companies, Veracode April 2012 (info.veracode.com/state-of-software-security-volume-4-supplement.html)
3 It should be noted that in any study of this size, sampling issues arise because of the nature of the way the data was collected. For example, it is important
to remember that all applications in this study came from organizations that were motivated enough about application security to engage Veracode for an
independent application security assessment. Care has been taken to only present comparisons where a statistically significant sample size was present.
Executive Summary
The House Select Committee on Intelligence recently recommended that U.S. companies
refrain from purchasing telecommunications equipment from Chinese manufacturers.
Among many reasons for the recommendation, the 60 page report4 cited:
• “(T)he threat posed to U.S. national-security interests by vulnerabilities in the telecommunications supply chain.”
• “Vendors financing their own security evaluations create conflicts of interest that lead to skepticism about the
independence and rigor of the result.”
These statements have ignited a firestorm of discussions and media coverage about software supply chain vulnerabilities. However, the raging debate about Chinese cyber-spying and US protectionism does little to clarify the real
issue, which is that enterprises assume too much risk when they implicitly trust their software providers to develop
safe software.
This report provides analysis of the actual state of vendor application security testing programs currently being
implemented by our enterprise customers (where an enterprise is defined as a company with over $500 million
in annual revenue). In addition to our analysis of enterprise risks, the vendor assessment market and application
vulnerabilities, we also examine the state of enterprise assessment programs. Specifically, our analysis of enterprise
programs focuses on understanding how different enterprise approaches to implementing their programs impact
metrics such as vendor participation, applications assessed, and compliance with application security policies.
Key Findings
Testing vendor applications is a growing trend in many industries.
The volume of vendor supplied application assessments continues to grow with a 49% increase from the first
quarter of 2011 to the second quarter of 2012 (Figure 7). Enterprises in many industries are starting to secure their
software supply chains. In fact, Figure 2 shows 51% of the enterprises requesting assessments belong to industries
other than Financial Services, Software/IT Services and Technology (i.e. the three industry segments that historically
dominated vendor assessment requests in past SOSS reports). However, fewer than one in five of our enterprise
customers have requested a code-level security test from at least one vendor (Figure 3). The low percentage of
companies is an indication that formal vendor software testing programs are still in a relatively early stage of adoption.
4 Investigative Report on the U.S. National Security Issues Posed by Chinese Telecommunications Companies Huawei and ZTE (intelligence.house.gov/
press-release/investigative-report-us-national-security-issues-posed-chinese-telecommunications)
Enterprises with a programmatic approach to vendor application security testing have more vendors
and applications participating in their programs than enterprises with an ad-hoc approach.
We analyzed the program results experienced by our customers in terms of the number of participating applications
(Figure 13) and vendors (Figure 14). We found that enterprises fell into two distinct groups, which aligned with their
approach to implementing their programs:
• Ad-hoc approach: Where enterprises lacked a protocol for selecting applications for testing and appeared to
request application security testing from vendors on a case by case basis. Additionally, the enterprise requestor
often had limited business, contractual and technical details with which to respond to specific vendor questions and concerns; thus the requestor appeared to lack a strong mandate from their business and procurement teams. As a result, this group had fairly low numbers of vendors and applications participating in their programs (averaging 7 applications
and 4 vendors).
• Programmatic approach: Where enterprises developed a formal protocol for selecting and requesting vendor
applications for testing. Specifically, programmatic application testing programs tended to leverage best practices
for defining an overall program, such as ensuring collaboration between security, business and procurement teams;
specifying business, contractual and technical details as part of the policy; and providing a strong mandate for vendor
application testing. This group enjoyed much higher participation levels (averaging 71 applications and 38 vendors).
Setting a less rigorous compliance policy that vendors perceive as achievable encourages higher
vendor participation.
Figure 17 shows that in enterprises with a programmatic approach, 45% of vendor applications become compliant
within one week, whereas only 28% of applications are compliant within one week for ad-hoc programs. Additionally,
most of those applications achieved compliance upon first submission (Figure 16). These results indicate some
enterprises with a programmatic approach chose to design policies to enable a significant portion of vendors to
achieve compliance with relative ease. In other words, obtaining initial visibility into the state of vendor software
security is more important for these enterprises than demanding compliance with a tough security policy.
Vendors are more successful at complying with application security policies defined by the enterprise
than they are at meeting industry standards.
38% of vendor supplied applications complied with enterprise-defined policies (Figure 11). Vendors struggle to meet the more stringent requirements of policies guided by industry standards. Only 10% of applications comply with the OWASP
Top 10 and 30% with the CWE/SANS Top 25 (Figure 11). The results show secure development practices built into the
software development lifecycle (SDLC) are still not as widespread as they should be. However, the vendors that do strive
to comply with industry standards are well positioned to comply with any enterprise-defined security policy.
With 62% of applications failing to reach compliance on first submission, procedures for managing
non-compliant applications are an important aspect of an enterprise’s security policy.
Figures 18 and 19 delve deeper into the group of non-compliant applications. Figure 18 shows large percentages
of non-compliant applications with only one build submitted: 39% for enterprises with a programmatic approach
and 57% for enterprises with an ad-hoc approach. Figure 18 also shows that 11% of applications remain out of
compliance with enterprise policies, in spite of vendor submission of new builds for testing (resubmission is clear
evidence of vendor remediation efforts). Enterprises with a programmatic approach had only 20% of applications out of compliance for more than 24 weeks, compared with 39% of applications participating in ad-hoc programs
(Figure 19). The results suggest that enterprises with a programmatic approach may do a better job ushering their
vendors through the process than enterprises with an ad-hoc approach.
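Metrics like "compliant within one week" and "out of compliance for more than 24 weeks" can be derived from a log of assessment results, as in the sketch below. The records and field layout are hypothetical, invented purely to illustrate the calculation:

```python
# Sketch: deriving time-to-compliance metrics from a log of
# (app_id, days_since_first_submission, compliant) assessment results.
# All records here are invented for illustration.
results = [
    ("app1", 0, True),    # compliant on first submission
    ("app2", 0, False),
    ("app2", 5, True),    # compliant after resubmission within week one
    ("app3", 0, False),   # still out of compliance
]

def pct_compliant_within(results, days):
    """Percent of applications that reached compliance within `days` days."""
    apps = {app for app, _, _ in results}
    ok = {app for app, d, compliant in results if compliant and d <= days}
    return 100.0 * len(ok) / len(apps)

print(pct_compliant_within(results, 7))  # two of three apps, ~66.7
```

The same log, filtered for applications with no compliant result after 24 weeks, yields the long-tail figures discussed above.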
Recommendations
Clearly it is no longer acceptable to ignore the risk of security vulnerabilities in vendor supplied software. Enforcement
of enterprise security policies conducted solely through vendor surveys is no longer sufficient to manage the vulnerabilities entering the enterprise through vendor software. Enterprises should consider several factors when developing
their security policies for vendor applications, including vulnerability prevalence, severity of software flaws and the
business criticality of the applications. Other factors governing the vendor relationship are equally important, such
as escalation procedures, product release timelines and mitigation acceptance processes. Enterprises that want to
accelerate vendor participation in the early stages of their programs should not design security policies that expect
perfection from vendor supplied software.
Successful management of vendor application security testing programs, like any other enterprise effort, depends
on technology to automate specific tasks, processes to ensure broad adoption, and strong leaders who can drive
the necessary collaboration between security, business and procurement teams. With those three elements in
place, the enterprise can more effectively wield its purchasing leverage to drive early success in terms of vendor
participation. However, enterprises should also demonstrate that they are serious about seeing significant application
security improvements from vendors, documented through test results. Over time, enterprises should strengthen
their vendor acceptance policies to reflect industry standard levels.
Specifically, enterprises should take the following steps to begin the process of creating a vendor application
security testing program (where one does not exist) or refining existing programs to maximize vendor participation
and compliance:
Step 1: Policy Definition
• Identify the business goals of application analysis.
• Determine the security testing types and products/services to be used.
• Document the analysis timeline and frequency of testing.
• Document vulnerability remediation expectations.
• Define an exception and escalation process for uncooperative vendors.
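A Step 1 policy definition can be captured as structured data that the later steps can enforce consistently. The sketch below uses hypothetical field names and values that mirror the bullets above; it is not a Veracode or vendor format:

```python
# Illustrative vendor application security policy (all field names and
# values are hypothetical, modeled on the Step 1 bullets above).
policy = {
    "business_goal": "Reduce risk from business-critical vendor software",
    "testing_types": ["static binary", "dynamic", "manual"],
    "frequency": "every major release, at least annually",
    "remediation": {
        "very_high_severity_days": 30,   # remediation window by severity
        "high_severity_days": 90,
    },
    "escalation": "security and procurement review after 2 missed deadlines",
}

def remediation_deadline_days(policy, severity):
    """Look up the remediation window for a given finding severity."""
    return policy["remediation"][f"{severity}_severity_days"]

print(remediation_deadline_days(policy, "high"))  # 90
```

Encoding the policy as data, rather than prose alone, makes Steps 2 through 5 easier to automate and audit.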
Step 2: Requirements Mandate to Vendors
• Introduce the security analysis mandate to all vendors.
• State the reason behind the mandate and the goals to be achieved.
• Introduce the analysis options to vendors.
• Confirm willingness and ability of vendor to meet analysis and timeline requirements.
Step 3: Vendor Education and Commitment
• Provide participating vendors with written guidance.
• Address all questions and concerns in an open, responsive manner.
• Target a commitment from vendors to remediate in a timely manner.
• Educate vendors on all aspects of the analysis process, methodologies, IP protection, expectations and timelines.
Step 4: Communication and Execution
• Provide consistent project management (status meetings, reporting, etc.) in order to minimize delays in
vendor compliance.
• Drive vendor participation and cooperativeness by providing timely responses and accurate results.
• Acknowledge that vendor participation hinges on assurances to protect their IP.
Step 5: Results Communication
• Accept summary test results and allow vendors to limit disclosure of vulnerability specifics.
• Provide remediation advice and human guidance to vendor teams in order to enable compliance efforts.
• Provide feedback mechanisms for the vendor team to communicate remediation plans, then republish their results.
Enterprise Risk
All enterprises assume some security risk by using applications purchased from
software vendors. According to PricewaterhouseCoopers (PwC) 2012 Global State
of Information Security Survey, “Over the past 24 months, the number of security
incidents attributed to customers, partners, and suppliers has nearly doubled.”5
Figure 1: PwC Survey, Security Incidents Attributed to Customers, Partners and Suppliers (bar chart of 2009, 2010 and 2011 values for Customer and Partner or Supplier; chart data omitted here)
Source: PwC 2012 Global State of Information Security Survey. Question 22: “Estimated likely source of incident.” (Not all factors shown. Totals do not add up to 100%.)
5 Third Party Risk Management, PricewaterhouseCoopers LLP, April 2012 (isaca-centralohio.org/archive/presentations/2012_04_PWC_TPRM.pdf)
When one envisions the complexity of the enterprise software supply chain it is clear why this is the current state
of affairs. Enterprises rely on an incredibly large software portfolio to conduct business. Some of Veracode’s largest
customers admit to purchasing business applications from well over 20,000 individual software vendors, ranging
from the largest software vendors in the world to two-person companies. Most applications and software acquired
by enterprises from their vendors are insecure, as our past reporting has shown. SOSS Volume 4 highlighted that
only 38% of commercial non-web applications complied with the CWE/SANS Top 25 while only 14% of commercial
web applications complied with the OWASP Top 10.
The same PwC study referenced above also showed that enterprises are starting to respond to those risks, but only
23.6% of respondents stated they have security procedures with which partners and suppliers must comply (and
code-level application assessments may or may not have been included in the required procedures). The key barrier
to adoption is the ability to enforce those security policies with actionable testing and remediation plans. Historically,
security testing of vendor supplied applications has been limited to manual penetration testing by consultants or
source code analysis tools used by internal teams (in the rare cases when a vendor supplies source code) which may
cover only a fraction of vendor software used by an enterprise. Enforcement of enterprise security policies is often
conducted through vendor surveys, which is akin to trusting the vendor to attest to the security of their own code.
Additionally, the lack of efficient methods of testing the security of software obtained through mergers, acquisitions
and procurement processes leaves a significant gap in the enterprise’s ability to manage the business risks of the
software supply chain.
To fill this gap and obtain insight into the security posture of applications, regardless of their origin, more and more enterprises are relying on independent software testing programs, like the one Veracode launched in 2009. While vendors may not be pleased with discovering security issues in their software products, they do take actions to improve security in response to enterprise concerns.6
6 Security Manager’s Journal: Security has to extend to your customers By Mathias Thurman, Computerworld, October 22, 2012
(www.computerworld.com/s/article/9232556/Security_Manager_s_Journal_Security_has_to_extend_to_your_customers?taxonomyId=208)
Vendor Supplied Software Assessments
In this section we examine the assessment market to understand the organizational
dynamics at play within the enterprise and the types of applications being assessed.
While we acknowledge the Veracode customer base only represents a fraction of all
organizations that depend on software supply chains, the data still provides some
directional guidance for the market as a whole.
Enterprises with Assessment Programs
We begin by examining the distribution of enterprises with assessment programs by industry segment (Figure 2). The Financial Services, Software/IT Services and Technology industries account for 49% of enterprises that have requested application security assessments from their vendors. The remaining 51% of enterprises come from a broad spectrum of industries. The variety of industries represented here is a significant change from past volumes of SOSS. For example, in Volume 4 we reported that 93% of the enterprises were in either the Software/IT Services industry or the Financial Services industry.
Distribution of Enterprises Requesting Assessments by Industry
24% Other
21% Financial Services
14% Software and IT Services
14% Technology
6% Telecommunications
5% Healthcare
3% Business Services
3% Entertainment and Media
3% Government
2% Aerospace and Defense
2% Education
2% Utilities and Energy
Figure 2: Distribution of Enterprises Requesting Assessments by Industry
Figure 3 shows the percentage of enterprises (defined as companies with over $500 million in annual revenue) that
have requested an application security assessment from at least one vendor. Figure 3 indicates that only 16% of
enterprises (fewer than one in five enterprises) have begun requesting application security assessments from their
vendors. This percentage is consistent with the statistics reported in the April 2012 SOSS supplement analysis of
public companies.7 It is evident that formal vendor application security assessment programs are still in a relatively
early stage of market adoption.
Percentage of Enterprises Requesting Assessments from Software Vendors
84% Tested no vendor supplied applications
16% Tested at least one vendor supplied application
Figure 3: Percentage of Enterprises Requesting Assessments from Software Vendors
This low percentage of assessment requests should be of concern when one considers that enterprises need to
be proactive about application security, not only for their own benefit, but to assure customers and auditors as well.
A survey of 100 US and UK enterprises conducted by Quocirca8 to assess the scale of the software security problem
provides some insight. Figure 4 is based on Quocirca’s research and is not part of Veracode’s dataset. Figure 4
shows that an enterprise’s customers and auditors often seek guarantees about the applications that underpin an
enterprise’s business processes. 82% of enterprises surveyed indicated that they get some level of inquiry about
software security from auditors; 45% say it is a requirement. Similarly, 61% of enterprises surveyed indicated that
they get some level of inquiry about software security from customers; 34% say it is a requirement. For example, a customer of a payment processing service that must comply with the Payment Card Industry Data Security Standard (PCI-DSS) would ask for guarantees about the security of the applications underpinning the payment processing service.
7 Study of Software Related Cybersecurity Risks in Public Companies, Veracode April 2012 (info.veracode.com/state-of-software-security-volume-4-supplement.html)
8 “Outsourcing the problem of software security,” Quocirca, 2-24-2012 (info.veracode.com/Quocirca_Outsourcing_Software_security.html)
Figure 4: Quocirca Survey, Frequency of Software Security Inquiries by Enterprise Customers and Auditors (stacked bars for Overall, USA and UK auditors and customers, split into Requirement / Inquire, not mandatory / Inquire, infrequently / Never; chart data omitted here)
Source: Quocirca
Distribution and Volume of Application Assessments
Next we examine the nature of the vendor-supplied applications that are being assessed. Figure 5 shows that
enterprises focus their attention on business critical applications, with 77% of applications classified as Very High
or High business criticality. It is encouraging to see that Medium to Very Low criticality applications make up 23%
of the assessed vendor applications because it demonstrates some enterprises are beginning to view their entire
application portfolio as a source of risk that should be addressed. Data breach incidents are frequently multi-stage
attacks, where vulnerabilities in non-critical applications become a launching point for penetration deeper into
the enterprise. Proactively managing the security posture of lower criticality applications through policies and
independent assessments should enable enterprises to lower the business risk of using these applications.
Distribution of Vendor Supplied Applications by Business Criticality
21% Very High
56% High
19% Medium
4% Low
<1% Very Low
Figure 5: Distribution of Vendor Supplied Applications by Business Criticality
Figure 6 shows that enterprises subject Operations and Financial applications to significant scrutiny. Typically these applications deal with personally identifiable information (PII) such as credit card or other financial information, or they access proprietary information that enables competitive advantage. It stands to reason that an enterprise would want to gauge the security of these vendor-supplied applications before deploying them. What is surprising is the relatively low percentage of security applications that have been tested, since enterprises depend on security applications to manage many other aspects of enterprise risk. As we will see in Figure 12, security applications can be as insecure as other software products.
Distribution of Vendor Supplied Applications by Application Purpose
48% Business and IT Operations
18% Financial Services
17% Other
9% Customer Support
4% Security
4% Web Infrastructure
Figure 6: Distribution of Vendor Supplied Applications by Application Purpose
On a positive note, the volume of vendor supplied application testing continues to grow as seen in Figure 7. There has been a 49% increase from the first quarter of 2011 to the second quarter of 2012. The growth comes from enterprises in a wider range of industries beginning their programs (Figure 2) and from enterprises expanding the reach of their existing programs (Figure 13). One possible reason for the rapid growth is the realization that security assessments can be successfully implemented and seamlessly incorporated into procurement processes. Additionally, by leveraging several best practices compiled by analysts and early practitioners, enterprises can expect increased vendor participation and reduced risk as those vendors achieve policy compliance.
Operations and Financial applications represent 66% of the assessments.
Figure 7: Quarterly Trend of Number of Vendor Supplied Applications Assessed, 2011-1 through 2012-2 (p-value = 0.017; chart data omitted here)
Security of Vendor Supplied Applications
In this section we analyze the vulnerability categories that affect the widest number
of applications, the relative prevalence of prominent vulnerability categories across
different languages, and application compliance with various security policies.
Enterprises should consider both the prevalence and severity of software flaws when developing their vendor acceptance policies, with the goal of encouraging vendors to remediate the flaws that most significantly impact the enterprise’s security posture.
Vulnerability Prevalence
Figures 8 and 9 depict the most prevalent vulnerability categories as defined by the percentage of application builds
containing one or more instances of each category. We examine web and non-web applications separately because
the vulnerability categories that affect these two groups are very different. When one or more vulnerabilities are
found in an application, that application is considered to have been affected by that vulnerability category. Since
many applications have been analyzed across multiple builds, we are treating each application build as an individual
unit to be counted. Therefore if an application has one SQL injection vulnerability in build 1, then the build is counted
towards the percentage affected by SQL injection. If in build 2 no SQL injection vulnerabilities are found, then build 2
is counted towards the percentage not affected by SQL injection.
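The build-counting methodology above can be sketched in a few lines. The data shapes here are invented for illustration; they are not Veracode's actual schema:

```python
# Sketch of the build-counting methodology described above.
# Each record is (app_id, build_no, set of vulnerability categories found).
builds = [
    ("app1", 1, {"SQL Injection", "Cross-site Scripting (XSS)"}),
    ("app1", 2, {"Cross-site Scripting (XSS)"}),  # SQL injection fixed in build 2
    ("app2", 1, {"Directory Traversal"}),
]

def pct_builds_affected(builds, category):
    """Percent of individual builds containing >= 1 flaw in `category`."""
    affected = sum(1 for _, _, cats in builds if category in cats)
    return 100.0 * affected / len(builds)

# app1 build 1 is the only build of three affected by SQL injection.
print(round(pct_builds_affected(builds, "SQL Injection"), 1))  # 33.3
```

Because each build is counted separately, an application that fixes a flaw between builds contributes to both the affected and unaffected tallies, exactly as the text describes.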
The figures also indicate the vulnerability categories that appear on the OWASP Top 10 (2010) list for web applications and the CWE/SANS Top 25 (2011) list for non-web applications. These lists represent broad consensus from security experts around the world who have shared their expertise about the severity of vulnerabilities that occur in web and non-web applications. Vulnerability severity depends on the likelihood of exploitation by malicious actors (i.e. the relative ease of finding the flaws and launching attacks to exploit the flaws) and the potential business impact of the attacks exploiting the flaw. For example, SQL injection is often reported as the primary attack vector by organizations that investigate and track breaches; it affects 40% of vendor supplied web application builds. The 2012 Verizon Data Breach Investigations Report9 showed that when large organizations were breached using a hacking technique, SQL injection was used 14% of the time. This fact points to both the ease of identifying and launching attacks to exploit SQL injection vulnerabilities and the fact that those attacks often result in the exposure of large amounts of sensitive customer data.
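As an illustration of why SQL injection is so easy to exploit, the sketch below shows the classic injection pattern and its standard fix, parameterized queries, against a throwaway in-memory SQLite database. The table and payload are invented for the example:

```python
# Minimal sketch of the SQL injection pattern and its standard fix.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: user input concatenated directly into the query text,
# so the payload rewrites the WHERE clause and matches every row.
vulnerable = f"SELECT secret FROM users WHERE name = '{user_input}'"
leaked = conn.execute(vulnerable).fetchall()

# Fixed: a parameterized query treats the payload as data, not SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(leaked), len(safe))  # 1 0
```

The one-line difference between the two queries is precisely why this flaw category is both so prevalent and so cheap to exploit.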
9 2012 Data Breach Investigations Report, Verizon
For web applications, Figure 8 shows that four of the top five flaw categories are also among the OWASP Top 10. Information leakage is the most prevalent vulnerability category, affecting 79% of vendor-supplied web application builds. Cross-site scripting (XSS) is next at 71%, followed by cryptographic issues and directory traversal tied at 67% of application builds affected. SQL injection, a perennial favorite of attackers, is in eighth place, affecting 40% of vendor supplied web application builds. These percentages are higher than those reported across all web applications in Volume 4. For example, SOSS Volume 4 reported XSS at 68%, information leakage at 66%, directory traversal at 49% and SQL injection at 32%. The upcoming SOSS Volume 5 report will also show little variation from the web application percentages reported in Volume 4. The higher percentages seen in Figure 8 appear to imply that web application suppliers still have a long way to go in developing secure applications.
Top Vulnerability Categories
Percentage of Affected Vendor Supplied Web Application Builds
Indicate categories that are in the OWASP Top 10
Information Leakage 79%
Cross-site Scripting (XSS) 71%
Cryptographic Issues 67%
Directory Traversal 67%
CRLF Injection 63%
Time and State 51%
Insufficient Input Validation 48%
SQL Injection 40%
API Abuse 35%
Credentials Management 34%
Encapsulation 23%
OS Command Injection 21%
Session Fixation 19%
Race Conditions 18%
Error Handling 11%
Figure 8: Top Vulnerability Categories, Percentage of Affected Vendor Supplied Web Application Builds
For non-web applications, Figure 9 shows that five of the top six vulnerability categories are also among the CWE/SANS Top 25 most dangerous flaws. The most prevalent vulnerability category is cryptographic issues, affecting 62% of vendor-supplied non-web application builds (Figure 9). Cryptographic issues appear on both the OWASP and CWE/SANS lists. One reason cryptographic issues are a concern is that enterprises depend on encryption to protect data if a network breach has occurred. For example, algorithms that incorporate insufficient sources of entropy may be susceptible to attack. Alternatively, attackers may use hard-coded cryptographic keys to decrypt confidential information.
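The two cryptographic pitfalls named above, insufficient entropy and hard-coded keys, can be illustrated in a few lines. Python is used here purely for illustration (and assumes Python 3.9+ for `randbytes`); the report's findings are language-agnostic:

```python
# Illustration of the cryptographic pitfalls discussed above.
import random
import secrets

# Weak entropy: `random` is a deterministic PRNG not intended for secrets.
# Anyone who recovers (or guesses) its seed reproduces every "key" exactly.
weak_key = random.Random(1234).randbytes(16)

# Hard-coded key: the same bytes ship in every copy of the product, so a
# single disassembly defeats the encryption for all deployments.
HARDCODED_KEY = b"0123456789abcdef"

# Better: an OS-backed CSPRNG designed for cryptographic use.
strong_key = secrets.token_bytes(16)

print(len(strong_key))  # 16
```

The predictability of the weak key is easy to demonstrate: seeding a second generator with the same value reproduces the identical byte string.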
Error handling and directory traversal are next on our list at 58% and 57% respectively. These percentages are also higher than those reported across all non-web applications in Volume 4 and those we will report in our upcoming Volume 5. Volume 4 reported cryptographic issues at 46%, error handling at 24% and directory traversal at 34%. Our upcoming Volume 5 report will show cryptographic issues at 47%, error handling at 23% and directory traversal at 37%. The higher percentages seen in Figure 9 demonstrate the need for vendors to continue to work towards developing more secure software.
Top Vulnerability Categories
Percentage of Affected Vendor Supplied Non-Web Application Builds
(categories that are in the CWE/SANS Top 25 are flagged in the original figure)

  Cryptographic Issues           62%
  Error Handling                 58%
  Directory Traversal            57%
  Numeric Errors                 43%
  Buffer Management Errors       42%
  Buffer Overflow                41%
  Time and State                 35%
  Dangerous Functions            27%
  Untrusted Search Path          26%
  OS Command Injection           24%
  Information Leakage            23%
  Credentials Management         15%
  Race Conditions                15%
  CRLF Injection                 14%
  Insufficient Input Validation  14%
  Format String                  12%
  SQL Injection                  12%

Figure 9: Top Vulnerability Categories, Percentage of Affected Vendor Supplied Non-Web Application Builds
Language and Platform Analysis
Figure 10 shows that web-centric languages such as Java, .NET and PHP represent 70% of the applications
assessed. C/C++ accounts for 25% of vendor applications, indicating that commercial software continues to rely on
C/C++ code. At first glance, the relatively large percentage of applications developed on iOS may appear surprising,
since one would expect that vendors supplying mobile applications for enterprise app stores would offer both
Android and iOS versions and enterprises would have security concerns about both platforms. Veracode’s partnership
with Good Technology to test iOS applications for inclusion in enterprise app stores accounts for this imbalance.
Distribution of Applications by Language and Platform
40% Java
25% C/C++
23% .NET
7% PHP
4% iOS
1% Android
<1% ColdFusion
<1% J2ME
Figure 10: Distribution of Applications by Language and Platform
Next we explore the vulnerability categories that affect the most applications upon first submission by language family.
The information in Table 1 is intended to help enterprises and vendors understand the likelihood of finding different
vulnerability categories in Java, .NET and C/C++, which account for the largest number of applications in this dataset.10
When enterprises approach vendors to participate in application security programs, vendors frequently have
questions about what an assessment is likely to find. The answers to these questions depend on the language
used to code the applications, not because one language is necessarily more secure than others, but because
the types of vulnerabilities that occur vary by language. For example, logging is one of the many functions
susceptible to CRLF injection, and Java applications tend to log much more than applications written in other
languages. Thus one would expect to see a higher percentage of Java applications affected by CRLF injection
vulnerabilities than applications in other languages. Table 1 shows this to be true: CRLF injection affects 71%
of Java applications but only 41% of .NET applications, and it does not make the Top 15 list for C/C++ applications.
10 The upcoming Veracode State of Software Security Report Volume 5 will contain more language analysis.
Code quality, cryptographic issues and directory traversal affect most vendor applications written in Java and .NET
(the languages typically used to write web applications) on first submission. For Java, code quality tops the list with
86% of first submissions affected, while cryptographic issues, at 78%, claims the top spot for .NET. Suppliers of
web-based applications should also take note of the prevalence of SQL injection and cross-site scripting, as enter-
prises are increasingly concerned about the ease of discovery and exploitation of these vulnerabilities. SQL injection
affects 41% of Java applications and 32% of .NET applications upon first submission. Cross-site scripting is even
more pervasive, affecting 49% and 43% of Java and .NET applications respectively.
Error handling affects the most (87%) C/C++ applications on first submission. Buffer overflow flaws come in second
at 75%, while buffer management errors and numeric errors are tied for third place at 74%. These results show that
fairly simple programming mistakes persist in C/C++ development.
Vulnerability Distribution on First Submission by Language
Java .NET C/C++
Code Quality 86% Cryptographic Issues 78% Error Handling 87%
Cryptographic Issues 73% Code Quality 75% Buffer Overflow 75%
Directory Traversal 73% Directory Traversal 65% Buffer Management Errors 74%
CRLF Injection 71% Information Leakage 61% Numeric Errors 74%
Information Leakage 56% Time and State 46% Cryptographic Issues 66%
Time and State 56% Cross-site Scripting (XSS) 43% Directory Traversal 55%
Insufficient Input Validation 54% CRLF Injection 41% Dangerous Functions 51%
Cross-site Scripting (XSS) 49% Insufficient Input Validation 34% Time and State 44%
Credentials Management 44% SQL Injection 32% Code Quality 40%
API Abuse 42% OS Command Injection 23% Untrusted Search Path 27%
SQL Injection 41% Credentials Management 19% Format String 24%
Encapsulation 26% Untrusted Search Path 18% Race Conditions 23%
Session Fixation 25% Error Handling 18% OS Command Injection 20%
OS Command Injection 21% Buffer Management Errors 6% API Abuse 13%
Race Conditions 18% Buffer Overflow 6% Information Leakage 11%
Table 1: Vulnerability Distribution on First Submission by Language
Compliance with Security Policies on First Submission
Figure 11 illustrates the compliance upon initial submission of vendor supplied applications against a variety of policies.
Web applications are assessed against the OWASP Top 10. Non-web applications are assessed against the CWE/SANS
Top 25. More details about how the Veracode platform determines policy compliance can be found in the Appendix.
38% of vendor supplied applications comply with the enterprise’s security policy on the first submission.
Complying with an enterprise security policy on the first submission is the goal for software vendors, as it
means that no further remediation effort is required for the application to be accepted by the enterprise.
Yet with more than half of the applications failing to comply, our results show that secure software
development practices built into the software development lifecycle (SDLC) are still not as widespread
as they should be.

"Vendors are more successful at complying with application security policies defined by the enterprise
than they are at meeting industry standards. 38% of applications complied with enterprise-defined policies,
but only 10% complied with the OWASP Top 10 and 30% with the CWE/SANS Top 25."
Vendor supplied applications have a lower compliance rate with industry standards—only 10% for the OWASP
Top 10 and 30% for the CWE/SANS Top 25. These results suggest that enterprise policies are less stringent than
standards put forth by the security industry. The results also imply that vendors striving to comply with industry
standards are well positioned to comply with any enterprise-defined security policy. A Pearson’s chi-squared
analysis of the results shows that Passing Customer Policy and Passing CWE/SANS Policy are highly correlated,
with a p-value of < 0.001, as are Passing Customer Policy and Passing OWASP Policy, also with a p-value of
< 0.001. Thus, statistical analysis strongly supports the statement that if a build can pass the OWASP or
CWE/SANS standard, it will very likely comply with enterprise policies.
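The association test described above can be reproduced in miniature. The sketch below computes Pearson's chi-squared statistic for a hypothetical 2x2 pass/fail contingency table using only the standard library (with one degree of freedom, the upper-tail p-value is erfc(sqrt(chi2/2))). The counts are purely illustrative, since the report does not publish the underlying table.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson's chi-squared statistic and p-value for the 2x2 table
    [[a, b], [c, d]] (1 degree of freedom, no continuity correction)."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # With 1 degree of freedom, chi-squared is the square of a standard
    # normal, so the upper-tail probability is erfc(sqrt(chi2 / 2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical counts (illustrative only; the report does not publish them):
# rows = pass/fail the CWE/SANS policy, columns = pass/fail customer policy.
chi2, p = chi2_2x2(270, 30, 110, 590)
print(f"chi2 = {chi2:.1f}, p < 0.001: {p < 0.001}")
```

A table this lopsided yields a p-value far below 0.001, which is the kind of result behind the report's "highly correlated" conclusion.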
Compliance with Policies on First Submission
(Compliant / Out of Compliance)
  Enterprise Policy: 38% / 62%
  CWE/SANS Top 25:   30% / 70%
  OWASP Top 10:      10% / 90%

Figure 11: Compliance with Policies on First Submission
Compliance with Enterprise Policy by Application Purpose
We also explored how different application types performed against enterprise policy. Figure 12 shows compliance
rates upon first submission for each application type. Customer Support and Security had the worst compliance
rates; however, enterprises may be subjecting these applications to greater security scrutiny given their sensitivity.
Compliance with Enterprise Policy by Application Purpose on First Submission
(Acceptable / Not Acceptable)
  Customer Support:           20% / 80%
  Security:                   24% / 76%
  Business and IT Operations: 28% / 72%
  Other:                      32% / 68%
  Financial:                  41% / 59%
  Web Infrastructure:         65% / 35%

Figure 12: Compliance with Enterprise Policy by Application Purpose on First Submission
State of Enterprise Testing Programs
In this section we analyze the results of the vendor supplied application assessment
programs run by enterprises. To ensure this data was not skewed, results
from the top 2% of enterprises were eliminated from the analysis, as they were
deemed outliers. The remaining enterprises fell into two distinct groups.
One group had fairly low numbers of vendors participating in their programs, and therefore fewer applications were
assessed. The other group enjoyed much higher levels of vendor participation and program success. Figure 13
shows that the average number of applications assessed for group 1 enterprises is approximately 7, while the
average number of applications assessed for group 2 enterprises is approximately 71. Figure 14 shows that the
average number of vendors working with group 1 enterprises is approximately 4, while the average number of
vendors working with group 2 enterprises is approximately 38.
The key difference between the groups was their approach to implementing and managing their programs. Those
enterprises with an ad-hoc approach to requesting vendor assessments had less success, whereas those enterprises
with a more programmatic approach, including the implementation of best practices offered by analyst firms and
early adopters,11 experienced more success.
Enterprises with an ad-hoc approach lacked a protocol for selecting applications for testing and appeared to request
application security testing from vendors on a case-by-case basis. Additionally, the enterprise requestor often had
limited business, contractual and technical details to respond to specific vendor questions and concerns; thus the
requestor appeared to lack a strong mandate from their business and procurement teams.
Enterprises with a programmatic approach developed a formal protocol for selecting and requesting vendor
applications for testing. Specifically, programmatic application testing programs tended to leverage best practices
for defining an overall program, such as ensuring collaboration between security, business and procurement teams;
specifying business, contractual and technical details as part of the policy; and providing a strong mandate for vendor
application testing. Some of the best practices implemented by enterprises with a programmatic approach included:
• Collaborating with purchasing groups to include application security requirements in RFPs and contract language.
• Defining application security compliance policies that clearly outline required assessment methodologies,
acceptable and non-acceptable flaw types and frequency of assessments.
• Setting realistic timelines for vendors to meet the policy.
• Creating well defined escalation procedures for vendors remaining out of compliance.
• Clearly delineating the consequences of non-compliance with defined policies (e.g. reduction in annual software
maintenance fees, ceasing to do business with the vendor, etc.)
11 Secure Software Supply Chain Toolkit (info.veracode.com/vast-getting-started-toolkit.html)
Number of Applications Assessed by an Enterprise Program
Group 1: Enterprises with an Ad-Hoc Approach
Group 2: Enterprises with a Programmatic Approach

"Enterprises with a programmatic approach had approximately 10 times more vendors and applications
participating in their programs than enterprises with an ad-hoc approach."

Figure 13: Number of Applications Assessed by an Enterprise Program
Number of Vendors Participating in an Enterprise Program
Group 1: Enterprises with an Ad-Hoc Approach
Group 2: Enterprises with a Programmatic Approach

Figure 14: Number of Vendors Participating in an Enterprise Program
Achieving Compliance with Enterprise Policies
Next we investigated whether the differences in program approach extended to other metrics. Figure 15 shows
that there is a significant difference between the two groups in the number of applications that eventually achieved
policy compliance within the 18 month timeframe of our analysis. 52% of applications achieved policy compliance
when enterprises had a programmatic approach, while only 34% of applications achieved policy compliance
when enterprises had an ad-hoc approach. To delve deeper into this finding, we examine the group of compliant
applications with Figures 16 and 17 and the group of non-compliant applications with Figures 18 and 19.
Achieving Compliance with Enterprise Policy
(Compliant / Out of Compliance)
  Enterprises with a Programmatic Approach: 52% / 48%
  Enterprises with an Ad-Hoc Approach:      34% / 66%

Figure 15: Achieving Compliance with Enterprise Policy
First we examined the number of application builds vendors submitted on their path to compliance with enterprise
policies. Figure 16 shows the percentage of applications that achieved compliance on the first build submitted,
second build submitted, etc. With ad-hoc programs 30% of vendor applications complied with enterprise policy
on first submission, while 47% of vendor applications achieved compliance on first submission for enterprises
with a programmatic approach.
Number of Builds Submitted to Achieve Policy Compliance
(1 Build / 2 Builds / 3 Builds / 4 Builds / 5-10 Builds)
  Enterprises with a Programmatic Approach: 47% / 2% / 1% / 1% / 1%
  Enterprises with an Ad-Hoc Approach:      30% / 1% / 1% / <1% / <1%

Figure 16: Number of Builds Submitted to Achieve Policy Compliance
We also considered the amount of time taken for applications to achieve compliance. We found that
submitted applications take a shorter amount of time to reach compliance for enterprises with a
programmatic approach than those with ad-hoc programs. Figure 17 shows 45% of vendor applications
are compliant within one week for enterprises with a programmatic approach, whereas 28% of
applications are compliant within one week for ad-hoc programs. Once again vendors working with
enterprises using a programmatic approach outperform vendors working with enterprises with
ad-hoc programs.

"Obtaining initial visibility into the state of vendor software security is more important for these
enterprises than demanding compliance with a tough security policy. For enterprises using a
programmatic approach, 45% of vendor applications become compliant within one week."
The time to reach compliance depends on many factors, including the strength of the security policy, the complexity
of the remediations required, whether remediations can be incorporated into previously scheduled release cycles,
and the number of times the application must be remediated and retested before compliance is reached. However,
recall that Figure 11 shows that applications had an easier time complying with enterprise policies than with industry
standards, indicating that enterprise policies are weaker than industry standards. A similar conclusion can be drawn
here: the relative strength of the security policy is a significant factor in determining the difference in results between
ad-hoc and programmatic approaches. Enterprises with an ad-hoc approach appear to mandate stronger security
policies, and therefore fewer vendor supplied applications achieve compliance. One area of future investigation
would be to understand the relative strengths of the policies from enterprises in both groups and how the policies
change over time. In particular, we will begin tracking whether enterprises do strengthen their security policies to
reflect industry standard levels over time.
Time Taken to Achieve Policy Compliance*
(0-1 Week / 1-4 Weeks / 4-12 Weeks)
  Enterprises with a Programmatic Approach: 45% / 3% / 4%
  Enterprises with an Ad-Hoc Approach:      28% / 2% / 4%

Figure 17: Time Taken to Achieve Policy Compliance
* Slight differences between the total percentages in figures are due to rounding
Analysis of Non-Compliant Applications
Next we dive deeper into the group of applications that have yet to comply with enterprise policy. Figure 18 shows
that 11% of vendors resubmitted new builds of applications for testing but are still out of compliance with enterprise
policies. Resubmission is clear evidence that the vendor is taking the program seriously, is willing to take action to
remediate the identified flaws and is looking to validate improvements in its security posture. The Veracode platform
can document improvements in an application’s overall security quality score even when the application fails to comply
with enterprise policy on subsequent scans. This is one area where the escalation and decision-making aspects of the
enterprise’s security policy come into play. Depending on the enterprise’s goals and risk tolerance, the decision may
be made to continue acquiring the application or renewing the licensing agreement if the vendor’s resubmission
results show a positive trend and a good faith effort to improve its security posture.
Figure 18 also shows large percentages of non-compliant applications with only one build submitted:
39% for enterprises with a programmatic approach and 57% for enterprises with an ad-hoc approach.
These large percentages illustrate the need for enterprise procedures for managing non-compliant
applications.

"Procedures for managing non-compliant applications are an important aspect of an enterprise's
security policy, since large percentages of non-compliant applications have only one build submitted."
Number of Builds Submitted for Non-Compliant Applications*
(1 Build / 2 Builds / 3 Builds / 4 Builds / 5-10 Builds)
  Enterprises with a Programmatic Approach: 39% with 1 build, then 7%, 3% and 1% across subsequent builds
  Enterprises with an Ad-Hoc Approach:      57% with 1 build, then 6%, 3% and 2% across subsequent builds

Figure 18: Number of Builds Submitted for Non-Compliant Applications
* Slight differences between the total percentages in figures are due to rounding
Figure 19 shows the time spent out of compliance, which is the number of days between the vendor receiving
non-compliant results from their first submission and the end of the dataset time period. For example, if a vendor
received non-compliant results from their first submission on June 27, 2012 and the application remained in a
non-compliant state until July 1, 2012, then the time spent out of compliance would be listed as 0-1 week. Similarly,
if a vendor received non-compliant results from their first submission on March 15, 2012 and the application remained
in a non-compliant state until July 1, 2012, then the time spent out of compliance would be listed as 12-24 weeks.
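The day-counting rule just described maps naturally onto the week bands used in Figure 19. A minimal sketch, with band edges inferred from the figure's band labels:

```python
import datetime

# Sketch of the day-counting rule described above: days between the first
# non-compliant result and the end of the dataset window, mapped into the
# week bands of Figure 19. Band edges are inferred from the band labels.
BANDS = [(7, "0-1 Week"), (28, "1-4 Weeks"), (84, "4-12 Weeks"),
         (168, "12-24 Weeks"), (252, "24-36 Weeks")]

def compliance_band(first_fail, window_end):
    days = (window_end - first_fail).days
    for limit, label in BANDS:
        if days <= limit:
            return label
    return "36+ Weeks"

end = datetime.date(2012, 7, 1)
print(compliance_band(datetime.date(2012, 6, 27), end))  # 0-1 Week
print(compliance_band(datetime.date(2012, 3, 15), end))  # 12-24 Weeks
```

The two dates reproduce the report's own worked examples: 4 days out of compliance falls in the 0-1 week band, and 108 days falls in the 12-24 weeks band.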
Figure 19 shows both groups of enterprises having a small percentage of vendors that received their first submission
results in the last month of our research timeframe. These vendors may still be creating their initial remediation
plans. Additionally, 13% of applications reside in the 4-12 weeks band for enterprises with a programmatic approach,
compared with 7% of applications participating in ad-hoc programs. Some areas for future investigation include
tracking how many of these applications eventually become compliant and determining whether applications that
move into the longer ‘out of compliance’ timeframes have made security improvements.
Another interesting comparison between the two program types is in the number of applications that have spent
more than 24 weeks out of compliance. Enterprises with a programmatic approach had only 20% of applications
out of compliance for more than 24 weeks, compared with 39% of applications participating in ad-hoc programs.
The results suggest that enterprises with a programmatic approach may do a better job ushering their vendors
through the process. One potential reason for the difference is the clarity, or lack thereof, of the enterprise’s
non-compliance policy or a lack of escalation procedures to drive compliance within a defined timeframe.
Time Spent Out of Compliance*
(0-1 Week / 1-4 Weeks / 4-12 Weeks / 12-24 Weeks / 24-36 Weeks / 36+ Weeks)
  Enterprises with a Programmatic Approach: 1% / 5% / 13% / 9% / 3% / 17%
  Enterprises with an Ad-Hoc Approach:      2% / 3% / 7% / 16% / 16% / 23%

Figure 19: Time Spent Out of Compliance
* Slight differences between the total percentages in figures are due to rounding
The vendors with non-compliant applications may also be working with their customers on mitigation strategies to
minimize vulnerability risks until the vendors are able to remediate critical flaws. The Veracode platform gives enterprises
a workflow for accepting and tracking mitigation procedures proposed by vendors until the flaws are remediated
and the application becomes compliant with enterprise policy. The workflow data can therefore offer insights into
how many applications include mitigations. That analysis shows 92% of non-compliant applications have at least
one flaw mitigation procedure accepted by the requesting enterprise. These results show the importance of
codifying the procedures for managing non-compliant applications so that vendors have clear guidelines.
Appendix
Understanding How the Veracode Platform Determines Policy Compliance
The Veracode Platform automatically determines whether an application is compliant with OWASP Top 10,
CWE/SANS Top 25, the assigned Veracode Level and the assigned enterprise policy.
Veracode looks for specific flaws enumerated by the CWE list and uses standardized mappings to determine whether
a flaw belongs in the most recently published OWASP Top 10 and CWE/SANS Top 25 lists. Flaws discovered in an
application are compared to the respective standards. If even a single flaw belonging to the standard is discovered,
the application is deemed out of compliance with the standard. For our report, web applications are assessed
against the OWASP Top 10 while non-web applications are assessed against the CWE/SANS Top 25.
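The "even a single flaw" rule is easy to express as a set intersection. A minimal sketch, using a small illustrative subset of CWE IDs rather than the full standard mappings:

```python
# A minimal sketch of the rule described above: a build is out of compliance
# with a standard if even one discovered flaw maps to a CWE in that standard.
# The CWE IDs below are a tiny illustrative subset, not the full mappings.
SQL_INJECTION, XSS, BUFFER_OVERFLOW = 89, 79, 120

CWE_SANS_TOP_25 = {89, 79, 120}   # illustrative subset of the Top 25
OWASP_TOP_10_CWES = {89, 79}      # illustrative subset of the Top 10

def is_compliant(flaw_cwes, standard_cwes):
    """True only if no discovered flaw belongs to the standard."""
    return not (set(flaw_cwes) & standard_cwes)

# A build whose only flaw is a buffer overflow fails CWE/SANS compliance
# but, under this subset, passes the OWASP Top 10 check:
print(is_compliant([BUFFER_OVERFLOW], CWE_SANS_TOP_25))    # False
print(is_compliant([BUFFER_OVERFLOW], OWASP_TOP_10_CWES))  # True
```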
Veracode Levels are Veracode’s independent standard for evaluating an application’s software quality. Veracode
Levels are assigned based on the business criticality of the application. Each Veracode Level provides a predefined
security policy that aligns with different levels of risk the organization is willing to accept for applications of varying
business criticality. Table 2 shows the five Veracode Levels aligned with the five business criticality levels as defined
by the National Institute of Standards and Technology (NIST). When an application is assigned a business criticality
the Veracode platform automatically assigns the appropriate Veracode Level as a predefined policy and determines
whether the application is compliant with the Veracode Level. Each Veracode Level policy contains a combination
of the severity of the flaws found in the application, the types of tests being performed on the application and the
application’s overall security quality score. An application is deemed compliant when all three aspects of the
Veracode Level policy are met.
Predefined Policy Requirements for Veracode Levels

Business      Veracode   Flaw Severities Not        Required Test                 Minimum Security
Criticality   Level      Allowed in This Level      Methodologies                 Quality Score
Very High     VL5        Very High, High, Medium    Static and Manual             90
High          VL4        Very High, High, Medium    Static                        80
Medium        VL3        Very High, High            Static                        70
Low           VL2        Very High                  Static or Dynamic or Manual   60
Very Low      VL1        Not Applicable             Static or Dynamic or Manual   Not Applicable

Table 2: Predefined Policy Requirements for Veracode Levels
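The three-part Veracode Level check described above (severity ceiling, required test methodologies, minimum score) can be sketched as follows; the dictionary fields and function name are illustrative, not the platform's actual API, and the VL4 values follow Table 2.

```python
# Sketch of the three-part Veracode Level check described above: no flaw at a
# disallowed severity, all required test methodologies performed, and the
# minimum security quality score met. Field and function names are
# illustrative, not the platform's actual API. Values follow Table 2 (VL4).
VL4 = {
    "disallowed_severities": {"Very High", "High", "Medium"},
    "required_tests": {"static"},
    "min_score": 80,
}

def meets_level(level, flaw_severities, tests_performed, score):
    """All three policy aspects must be satisfied for compliance."""
    return (
        not (set(flaw_severities) & level["disallowed_severities"])
        and level["required_tests"] <= set(tests_performed)
        and score >= level["min_score"]
    )

# Low-severity flaws only, static scan done, score 85 -> compliant with VL4:
print(meets_level(VL4, ["Low"], ["static"], 85))   # True
# A single High-severity flaw breaks compliance regardless of score:
print(meets_level(VL4, ["High"], ["static"], 95))  # False
```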
Veracode provides enterprises several options for defining custom policies, including:
• Disallowing specific flaw severities
• Disallowing specific types of flaws (specified by CWE number)
• Attaining a minimum security quality score
• Using predefined policies such as OWASP Top 10, CWE/SANS Top 25 or industry standards such as PCI
• Specifying application test methodologies and frequencies (e.g. monthly, quarterly, annually)
• Meeting timelines for remediating specified flaw severities
A custom enterprise policy may include any or all of these options. An application is deemed compliant when all
aspects of the enterprise’s custom policy are met.
Analyzing Compliance Metrics for Long Standing Applications
Many of our long standing vendor customers have applications that oscillate in and out of compliance with enterprise
policies over long periods of time for a variety of reasons. For example, new functionality in a new application version
may introduce new vulnerabilities, causing the application to drop out of compliance with the enterprise policy.
Additionally, the enterprise policy may have changed over time. The Veracode platform may also have expanded
the scanning engine to look for a broader range of flaws. In an effort to aggregate and normalize the impact of the
various change reasons on our metrics, we designed the current SOSS analysis to treat a subset of application policy
state changes as if they are a first submission. In other words, SOSS queries look for ‘compliant’ to ‘non-compliant’
state changes that are caused by publishing results of a new scan. The SOSS queries then treat the state changing
scan as a new release of an application (a “SOSS-release” if you will). This analysis methodology recognizes two
sets of SOSS releases: ones that achieve compliance within the SOSS timeframe and ones that do not, based upon
all the possible reasons for state changes, from unacceptable to acceptable and from acceptable to unacceptable.
Our metrics are designed to provide insight into 1) how long it takes the releases that achieve compliance to do so,
and 2) how much time the vendor has had to remediate the releases that do not.
For example, consider a vendor that uploads application 153 version1 and receives compliant results on the first
scan, which was performed on March 15, 2012. SOSS analysis would count that as one compliant SOSS-release
and place it in the 0-1 week bucket in Figure 17: Time Taken To Achieve Policy Compliance. Suppose the vendor then
uploads application 153 version2 and receives non-compliant results on June 27th; this means that the compliance
status of application 153 has changed from a compliant to a non-compliant state because of the most recent June 27th
scan. This June 27th release is counted as Never Passed. Further, let us assume that the vendor does not make any
further submissions before the time period for the SOSS volume terminates. For Volume 5, this date is June 30, 2012.
The age assigned to the June 27th release is 0-1 week since it was only 3 days old when the SOSS analysis period
terminated. SOSS analysis counts this result in the 0-1 week bucket in Figure 19: Time Spent Out of Compliance.
This analysis methodology aggregates and normalizes all the reasons why the state may have changed and focuses
our metrics on when the vendor becomes aware of the policy compliance state change—i.e. when the vendor
receives the results of a new scan.
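The SOSS-release methodology can be sketched as a single pass over a chronological scan history, starting a new release at each compliant-to-non-compliant transition. This is a plausible reading of the description above, not Veracode's actual query logic.

```python
# A plausible sketch of the SOSS-release rule described above, not Veracode's
# actual query logic: walking a chronological scan history, a new SOSS-release
# starts at the first scan and at every compliant -> non-compliant transition;
# later scans within a release can flip its outcome to compliant.
def soss_releases(scans):
    """scans: list of (date, compliant) pairs in chronological order.
    Returns one (start_date, achieved_compliance) pair per SOSS-release."""
    releases = []
    prev = None
    for date, compliant in scans:
        if prev is None or (prev and not compliant):
            releases.append([date, compliant])   # new SOSS-release begins
        else:
            releases[-1][1] = releases[-1][1] or compliant
        prev = compliant
    return [tuple(r) for r in releases]

# The application 153 example: version1 compliant on March 15, version2
# non-compliant on June 27 -> two SOSS-releases, the second "Never Passed".
scans = [("2012-03-15", True), ("2012-06-27", False)]
print(soss_releases(scans))  # [('2012-03-15', True), ('2012-06-27', False)]
```

Under this sketch, a non-compliant release followed by a passing rescan counts as a single release that eventually achieved compliance, matching the two sets of SOSS releases the methodology describes.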