Contract Number: N69250-08-D-0302
            Task Order: 0007




     Security Assessment of
Cloud Computing Vendor Offerings

             Final Report
              Oct 10, 2009


         Vassil Roussev, Ph.D.
      Golden G. Richard III, Ph.D.
          Daniel Bilar, Ph.D.

    Department of Computer Science
       University of New Orleans
Contents

Executive Summary
Recommendations
1.  Introduction
2.  Background
    2.1.  The Promise and Early Reality of the Cloud
    2.2.  DoD Enterprises and the Cloud
    2.3.  Security Concerns on the Cloud
3.  Representation of the Navy Data Center System Environment
4.  Navy Security Concerns and Evaluation Criteria
5.  Vendor Assessment Overview
6.  Vendor Assessment: Amazon Web Services (IaaS)
    6.1.  Description
    6.2.  Security Assessment
7.  Vendor Assessment: Boomi Atmosphere (PaaS/SaaS)
    7.1.  Description
    7.2.  Security Assessment
8.  Vendor Assessment: force.com (PaaS)
    8.1.  Description
    8.2.  Security Assessment
9.  Vendor Assessment: Pervasive Software (PaaS/SaaS)
    9.1.  Description
    9.2.  Security Assessment
    9.3.  Case Studies of Interest
Author Short Bio: Vassil Roussev
Author Short Bio: Golden G. Richard, III
Author Short Bio: Daniel Bilar
List of Abbreviations
References




Executive Summary
Cloud computing is a relatively new term that describes a style of computing in which a
service provider offers computational resources as a service over the Internet. The main
characteristic of cloud computing is the promise that service availability can be scaled up
and down according to demand with the customer paying based on actual resource usage
and service level arrangements.
Cloud computing differs from prior approaches to scalability in that it is based on a new
business proposition, which turns IT capital costs into operating costs. Under this model,
a service provider builds a public cloud infrastructure and offers its services to customers, who pay only for what they use in terms of compute hours, storage, and network capacity.
The expectation is that, by efficiently sharing the infrastructure, customers will see the
cost and complexity of their IT operations go down significantly.
From a security perspective, sharing the physical IT infrastructure with other tenants
creates new attack vectors that are not yet well understood. Virtualization is perceived as
providing a very high level of protection, yet it is too early to estimate its long-term
effectiveness: after all, it is a software layer subject to implementation faults. Cloud
services are in their very early stages of adoption, with small-to-medium enterprises as their
first customers. In all likelihood, no determined adversary has systematically tried to
penetrate the defense mechanisms of a cloud provider. Just as importantly, providers do
not appear ready to deliver the specific offerings that a DoD enterprise needs.
Over the medium and the long run, there is little doubt that DoD's major IT operations
will, ultimately, move to scalable cloud services that will increase capabilities and reduce
costs. However, current offerings from public cloud providers are not, in our view,
suitable for DoD enterprises and do not meet their security requirements.
Large civilian enterprises have very similar concerns to those of DoD ones and have
taken the path to develop private clouds, which use the same technology but the actual
operation stays inside the enterprise data center. In our view, this is the general direction
in which DoD entities should focus their efforts. It is difficult to conceive a scenario
under which it becomes acceptable for DoD data and computations to physically leave
the enterprise. Moreover, DoD operations have a large enough scale to reap the benefits
of cloud services by sharing the infrastructure within DoD.
It is important to recognize that a generic cloud computing platform, by itself, provides
limited benefits. These come from virtualization, which by consolidating existing
deployments reduces overall hardware requirements, simplifies IT management, and
provides greater flexibility to react to variations in demand.
The true cost savings come from well-known sources—eliminating redundant
applications, consolidating operations, and sharing across enterprises. In cloud terms, it is
multi-tenancy—the sharing of a platform by multiple tenants—that provides the greatest
efficiency gains. It is also important to recognize that efficient scaling of services requires
that applications be engineered for the cloud and it is likely that most legacy applications
would need to be reengineered.




Recommendations
The overall assessment of this report is that cloud computing technology has significant
potential to substantially benefit DoD enterprises. Specifically, it can facilitate the
consolidation and streamlining of IT operations and can provide operational efficiencies,
such as rapid automatic scaling of service capacity based on demand. The same
technology can simplify the deployment of surge capacity, as well as fail-over, and
disaster recovery capabilities. Properly planned and deployed, cloud computing services
could ultimately bring higher levels of security by simplifying and speeding up the
process of deploying security upgrades, and by reducing deployed configurations to a
small number of standardized ones.
It is important to both separate the potential of cloud computing from the reality of
current offerings, and to critically evaluate the implications of using the technology
within a DoD enterprise. Our own view (supported by influential IT industry research) is
that the current state of cloud computing offerings does not live up to the level of hype
associated with them. This is not uncommon for new technologies that burst to the
forefront of the news cycle; realistically, however, it takes time for the technology to
mature and fulfill its promise. From a DoD perspective, the most important question to
ask is:
How does the business proposition of cloud computing translate into the DoD domain?
The vast majority of the benefits from using infrastructure-as-a-service (IaaS) offerings
can be had now by aggressively deploying virtualization in existing, in-house data
centers. Further efficiencies require a higher level of sharing at the platform (platform-as-a-
service, PaaS) and application levels (software-as-a-service, SaaS). In other words, there
need to be multiple tenants that share these deployments and associated costs. Due to
trust issues, it is difficult to envision a scenario where DoD enterprises share deployments
with non-DoD entities in a public cloud environment.
Further, regulatory requirements with respect to trust, security, and verification have
hidden costs that are generally unaccounted for in the baseline (civilian) model. The
responsibility of assuring compliance cannot be outsourced and would likely be made
more difficult and costly.
Based on the above observations, we make the following general recommendations. A
DoD enterprise should:
•  Only consider deploying cloud services on facilities that it physically controls. The general assumption is that the physical location of one's data in the cloud should be irrelevant as users will have the same experience. This is patently not true for DoD systems, as the trustworthiness and security of any facility that hosts data and computations must be assured at all times. Further, the supply chain of the physical hardware must also be trustworthy to ensure that no breaches can be initiated from it.
•  Consider vendors who offer technology to be deployed in a private cloud, rather than public cloud services. Expanding on the previous point, it is inconceivable at this point that DoD would relinquish physical control over sensitive data and computations to a third party. Therefore, DoD should look to adopt the technology of cloud computing but modify the business model for in-house use. For example, IaaS technology is headed for commoditization, with multiple competing vendors (VMWare, Cisco, Oracle, IBM, HP, etc.) working on technologies that automate the management of entire data centers. Such offerings can be expected to mature within the next 1-2 years.
•  Approach cloud service deployment in a bottom-up fashion. The deployment of cloud services is an evolutionary process, which ultimately requires re-engineering of applications, as well as business practices. The low-hanging fruit is data center virtualization, which decouples data and services from the physical machines. This enables considerable consolidation of hardware and software platforms and is the basic enabler of cloud mobility. Following that, it is the automation of virtualized environment management that can bring costs further down. At the next level, shared platform deployments, such as database engines and application servers, provide further efficiencies. Finally, the most difficult part is the consolidation of applications and shared deployment of cloud versions of these applications.
We fully expect that these themes are familiar to DoD IT managers and that some aspects of them are already implemented. This should not come as a surprise, as these are the true concerns of business IT efficiency and cloud computing does not magically wave them away. However, cloud computing does provide extra leverage in that, once a service is ready for the cloud, it can be deployed on a wide scale at a marginal incremental cost.
•  Look for opportunities to develop and share private cloud services within DoD. Sister DoD entities are natural candidates for sharing cloud service deployments—most have common functions and compliance requirements, such as human resources and payroll, that are prime candidates for multi-tenant arrangements. Unlike the public case, the consolidation of such operations can bring the cost of compliance down, as fewer systems would need to be certified.
•  Critically evaluate any deployment scenarios using Red Team exercises and similar techniques. As discussed further in the report, most of the service guarantees promised by vendors are based on the design of the systems and have not been independently verified. This may be acceptable in the public domain, where the price of failure for many enterprises is high but rarely catastrophic for the nation; however, DoD facilities must be held to a much higher standard. The only realistic way of assessing how DoD cloud services would perform under stress, or under a sustained attack by a determined adversary, is to periodically simulate those conditions and observe the system's behavior.




1. Introduction
Cloud computing is a relatively new term that describes a style of computing in which a
service provider offers computational resources as a service over the Internet. The main
characteristic of cloud computing is the promise that service availability can be scaled up
and down according to demand with the customer paying based on actual resource usage
and service level arrangements. Typically, the services are provided in a virtualized
environment with computation potentially migrating from machine to machine to allow
for optimal resource utilization. This process is completely transparent to the clients as
they see the same service regardless of the physical machine providing it. The name 'cloud computing' alludes to this concept, as the Internet is often depicted as a cloud on architectural diagrams; hence, the computation happens 'in the cloud'.
Historically, the basic concept was pioneered by IBM in the 1960s under the name utility computing; however, it is only in the past decade that the cost and capabilities of commodity hardware have made it possible to realize the idea on a massive scale.
In general, the service concept can be applied at three different levels, and actual vendor offerings may be a combination of these:
•  Software as a Service (SaaS). Under the SaaS model, application software is accessed and managed over the network and is usually hosted by a service provider. Licensing is tied to the number of concurrent users, rather than physical product copies. For example, Google provides all of its applications—Gmail, Google Docs, Picasa, etc.—as services, and most of them are not even available as standalone products.
•  Platform as a Service (PaaS). PaaS offers as a service a specific solution platform on top of which developers can build applications. For example, in traditional computing, LAMP (Linux, Apache, MySQL, Perl/PHP/Python) is a popular choice for developing Web applications. Example PaaS offerings include Google App Engine, force.com, and Microsoft's Windows Azure Platform.
•  Infrastructure as a Service (IaaS). IaaS goes one level deeper and provides an entire virtualized environment as a service. The services can be provided at the operating system level, or even the hardware level, where virtual compute, storage, and communications resources can be rented on an on-demand basis.
Given the wide variety of offerings, many of which are not directly comparable, this report provides a conceptual analysis of the field as it relates to the Navy-specific requirements. This drives the analysis of specific offerings and will extend the useful lifetime of this report by providing a framework for evaluating other offerings.




2. Background
      "The security of these cloud-based infrastructure services is like Windows in 1999. It's being
      widely used and nothing tremendously bad has happened yet. But it's just in early stages of
      getting exposed to the Internet, and you know bad things are coming."
               - John Pescatore, Gartner VP and security analyst, (quoted by Financial Times, Aug 3, 20091)
Before presenting the security implications of transitioning the IT infrastructure to the
cloud, it is important to understand the business case behind the trend and the
technological means by which it is being implemented. These have direct bearing on the
ultimate cost/benefit analysis with respect to security and it is important to understand
that some of the security assumptions behind the typical enterprise analysis may not hold
true for a DoD implementation. In turn, this may prevent at least some of the possible
efficiencies and may completely alter the outcome of the decision process.

2.1. The Promise and Early Reality of the Cloud
There are dozens of definitions of the Cloud and Vaquero et al. [22] recently completed
an in-depth survey on the topic. Based in part on those results, we offer one of the more
rigorous definitions:
      “Clouds are a large pool of easily usable and accessible virtualized resources (such
      as hardware, development platforms and/or services). These resources can be
      dynamically reconfigured to adjust to a variable load (scale), allowing also for an
      optimum resource utilization. This pool of resources is typically exploited by a pay-
      per-use model in which guarantees are offered by the Infrastructure Provider by
      means of customized service level agreements (SLA).”
The above statement can be broken into three basic criteria for a “true” cloud:
•  The management of the physical hardware is abstracted from the customer (tenant); this goes beyond virtualization—the location of the data and the computation is transparent to the tenant.
•  Infrastructure capacity is highly elastic, allowing tenants to consume almost exactly the amount of resources demanded by users—the infrastructure automatically scales up/down on a fine time scale.
•  Tenants incur no upfront capital expenditures—infrastructure costs are paid for incrementally as an operating expense.
The main selling point of all cloud computing offerings is that they provide significant
reductions of the total cost of ownership, offer more flexibility, and have low entrance
costs for new enterprises. It is important to realize that the main source of efficiency in a
public cloud infrastructure comes from sharing the cost with other tenants.
Cloud platforms can provide savings at each layer of the hardware/software stack,
starting with the hardware, all the way up to the application layer:
•  Data center maintenance
•  Infrastructure/network maintenance
•  Infrastructure management and provisioning
•  Database management, provisioning, and maintenance
•  Middleware/application server management
•  Application management
•  Patch deployment and testing
•  Upgrade management
Typical data center arrangements in which racks of servers and storage are rented and
then managed by the tenant offer the lowest level of sharing and, therefore, the lowest
level of overall cost reduction. At the other end of the spectrum is a multi-tenant
arrangement in which multiple parties share the database, middleware, and application
platforms and outsource all maintenance to the provider. This arrangement promises the
greatest efficiency improvements. For example, Gartner Research reports [12] that its
clients are experiencing project savings of 25% to 40% by deploying customer
relationship management solutions via the SaaS model.
The real value of cloud computing kicks in when a full application platform is leveraged
to either purchase applications written to take advantage of a multi-tenant platform, or to
(re-)develop the applications on this new stack. For example, moving existing mail
servers from an in-house data center to a virtual machine somewhere in the cloud will not
yield any notable savings as the tenant would still be responsible for maintenance,
patching, and provisioning. The latter point is important—provisioning is done for peak demand, the capacity is utilized only a small fraction of the time, and the cost is borne full time (according to McKinsey's survey [14], the average physical server is utilized at about 10%). In contrast, a true cloud solution, such as Google Mail, would relieve the tenant of all of these concerns, potentially saving non-trivial amounts of physical and human resources.
McKinsey's report offers a sobering view of current cloud deployments:
   Current cloud offerings are most attractive for small and medium-sized enterprises;
   adoption by larger enterprises (with the end goal of replacing the in-house data
   centers) faces significant hurdles:
   o Current cloud computing offerings are not cost-effective compared to large
     enterprise data centers. (This conclusion is based on the assumption that existing
     IT services would be replicated on the cloud and, therefore, most of the potential
     efficiencies would not be realized.)
Figure 1 illustrates the point by considering Amazon's EC2 service for both
       Windows and Linux. The overall conclusion is that most EC2 options are more
costly than the total cost of ownership (TCO) for a typical data center. It is possible to improve the cloud price through pre-pay agreements for Linux systems (but not for Windows). Further, based on case studies, it is estimated that the TCO for one
month of CPU processing is about $150 for an in-house data center and $366 on EC2 for a comparable instance. Consequently, cloud costs need to go down
substantially before they can justify wholesale data center replacement (a worked version of this comparison follows this list).




               Figure 1: EC2 monthly CPU equivalent price options [14]
o Security and reliability concerns will have to be mitigated and applications‟
  architecture needs to be re-designed for the cloud. Section 2.3 discusses this point
  in more detail.
o Business perceptions of increased IT flexibility and effectiveness will have to be
  properly managed. Currently, cloud computing is relatively early in its technology
  cycle with actual capabilities considerably lagging public perceptions. There is no
  reason to doubt the momentum, or long-term viability of the technology itself as
  all the major IT vendors—IBM, HP, Microsoft, Google, Cisco, VMWare, etc.—
  are working towards providing integrated solutions. Yet, mature solutions for
  larger enterprises appear a few years away.
o IT organizations will have to adapt to function in a cloud-centric world before all efficiencies can be realized.
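To make the comparison concrete, here is a minimal worked sketch (Python) of the arithmetic behind these findings. The $150 and $366 per-CPU-month figures are the McKinsey estimates quoted above [14]; the assumption that an elastic deployment can pay for only a 10% duty cycle is ours, for illustration.

    # Worked comparison of in-house vs. EC2 CPU costs (illustrative sketch).
    IN_HOUSE_TCO = 150.0   # $/CPU-month, in-house data center (McKinsey [14])
    EC2_PRICE = 366.0      # $/CPU-month, comparable EC2 instance (McKinsey [14])
    UTILIZATION = 0.10     # average server utilization reported by McKinsey

    # Scenario A: replicate the always-on in-house server in the cloud.
    # No efficiencies are realized; the cloud simply costs more.
    replicated_cloud = EC2_PRICE                # $366 vs. $150 in-house

    # Scenario B: exploit elasticity and pay only for time actually used
    # (assumes the workload can be compressed into a 10% duty cycle).
    elastic_cloud = EC2_PRICE * UTILIZATION     # about $37

    print(f"in-house, provisioned for peak:  ${IN_HOUSE_TCO:7.2f}/month")
    print(f"cloud, replicated 24/7:          ${replicated_cloud:7.2f}/month")
    print(f"cloud, elastic (10% duty cycle): ${elastic_cloud:7.2f}/month")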
Most of the gains in a private cloud deployment (using a cloud platform internally)
come from virtualizing server storage, network operations, and other critical building
blocks. In other words, by deploying virtualization aggressively, the enterprise can
achieve close to 90% of the utilization a cloud service provider (such as Google) can
achieve. Figure 2 illustrates the point.
The traditional IT stack consists of two layers: facilities—the actual (standardized)
data center facilities—and the IT platform—standardized servers, networking,
storage, and OS. Publicly-announced private clouds are essentially an aggressive
virtualization program on top of it—the virtualization layer contains hypervisors and
virtual machines in which OS instances execute.




Figure 2: Average server utilization rates [14]

2.2. DoD Enterprises and the Cloud
It should be clear that, as more and more of the functions are outsourced to the cloud
provider and are (implicitly) shared with other parties, a great deal of control over the
data and computation is handed over to the provider. For most private enterprises (the
most enthusiastic adopters so far) this may not present the same types of issues as for a
DoD entity:
       Legal Requirements. It may not be permissible for sensitive government data
       and processing to be entrusted to a cloud provider. Specifically, it is likely not
       acceptable for DoD to relinquish control of their physical location and it is clear
       that they would have to be guaranteed to stay on US territory. For example, some
       current cloud computing providers, such as force.com, use backup data centers
outside of the U.S. as a disaster mitigation strategy. Knowing where backup data is stored further complicates the picture, and the picture isn't clear—force.com makes conflicting claims on this issue, stating both that backup tapes never leave the data center [17] and that they are periodically moved offsite [18].
       Google Apps terms of service state that it has the right to store and process
       customer data in the United States "or in any other country in which Google or its
       agents maintain facilities".
       Compliance. DoD enterprises must meet specific standards in terms of trust and
       security, starting with physical security and including security clearances, fine-
       grained access control, and strict accounting. Such requirements are generally
       lacking in the commercial space and, given the early stages of the technology
       development cycle, any compliance claims on the part of providers need to be
       scrutinized vigorously.
       Legacy systems. Many DoD enterprises have to support and maintain numerous
       legacy systems. Although the same is true for the commercial sector, it is not clear
       that the DoD entity will have the resources and the freedom to replace those with
COTS solutions. Further, COTS solutions may not work as well (out of the box) even for relatively standard functions like personnel and supply management,
       simply because the government tends to do things differently.
Risk and Public Relations (PR). Objectively, the risk profile of DoD
       agencies is considerably higher than that of private enterprises as national security
concerns are at stake. Accordingly, the public's tolerance for even minor breaches
       and failures is almost non-existent. Therefore, decision makers must necessarily
       put extra weight on the risk side of the equation and invest additional resources in
       managing and mitigating those risks. Thus, a risk scenario that is statistically very
       unlikely and can be ignored by private enterprises may require consideration (and
       resources) in a DoD enterprise. With respect to IT, this would usually manifest
       itself in the form of customized applications and procedures. Those are additional
costs that cloud computing is unlikely to remedy, as they will have to be replicated on the cloud.

2.3. Security Concerns on the Cloud
Critical analysis reveals that cloud computing, both as a concept and as it is currently
implemented, brings a whole suite of security problems that stem primarily from the fact
that physical custody of data and computations is handed over to the provider.
Physical security. The first problem is that we have a completely new physical security
perimeter—for most enterprises this may not be noticeable but for a DoD entity
considerations could be different. First, is the location where the data is housed physically as secure as the in-house option? If the computation is allowed to 'float' in the cloud, are all the possible locations sufficiently secure?
Confidentiality. Traditionally, operating systems are engineered such that a super-user
has unlimited access to all resources. This approach was recognized as a problem in
database systems, so trusted DBMSs were developed in which the administrator can manipulate the structure of the database but does not have (by default) access to the records.
The super-user problem escalates in the cloud environment—now the hypervisor
administrator has control over all OS instances under his command. Further, if the
computation migrates, another administrator has access to it, and so on. The concept of
trusted cloud computing has been recently put forward as a research idea [19] but, to the
best of our knowledge, no technical solutions are yet deployed.
Another concern is the physical location of the data—if it is migrated and replicated
numerous times, what are the guarantees that no traces will be left behind? Along the
same lines, OS instances are routinely cloned and suspended and the memory images
contain sensitive data—keys, unique identifiers, password hashes, etc.
New attack scenarios. In the cloud, we have new neighbors that we know nothing
about—normally, if they are malicious, they would have to work hard to breach the
security perimeter before they can launch an attack. In the cloud, they are already sharing
the infrastructure and can exploit security flaws in virtual machine hypervisors, virtual machines, or in third-party applications for privilege escalation and attack from within the cloud.
In public cloud computing services, such as Amazon's, the barriers to entry are very low.
The services are highly automated and all that is needed is a credit card to open an
account and start probing the infrastructure from within.



Limits on security measures. Not having access to the physical infrastructure implies that
traditional measures that rely on physical access can no longer be deployed. Security
appliances, such as firewalls, IDS/IPS, spam filters, and data leak monitors, can no longer be
used. Even for software-based protection mechanisms there are no solutions that allow
the tenant to control policy decisions. This has been recognized as a problem and
VMWare has recently introduced the VMSafe interface, which allows third-party
modules to control security policy decisions. Even so, tenants also need to be concerned about the internal security procedures of the provider.
Testing and compliance. Part of ensuring the security of a system is to continuously test it
and, in some cases, such testing is part of the compliance requirement. By default,
providers prohibit malicious traffic and possibly even vulnerability scans [2], although it
may be possible to reach an arrangement as part of the SLA.
Incident response. Early detection and response is one of the critical components of
robust security and the cloud makes those tasks more challenging. To perform an
investigation, the administrator will have to rely on logs and other information from the
provider. It is essential that a) the provider collects the necessary information; and b) the
SLA provides for that information to be accessible in a timely manner.
Technical challenges. OS security is built for the physical hardware and in its transition
to the virtual environment there are several challenges that are yet to be addressed:
•  Natural sources of randomness. Cryptographic implementations rely on random numbers, and it is the job of the OS to collect random data from non-deterministic events. In a virtualized environment, it is possible that cloned instances would behave in a predictable-enough manner to execute attacks [20] (see the sketch below).
•  Trusted platform modules (TPM). Since the physical hardware is shared among several instances, the TPM-supported remote attestation process would have to be redesigned.
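The randomness concern can be illustrated with a minimal sketch (Python) that uses a user-space PRNG as a stand-in for the kernel's entropy pool; the cloning mechanics of a real hypervisor are more involved, but the failure mode is analogous.

    import random

    master = random.Random(1234)      # running instance accumulates PRNG state
    master.getrandbits(128)           # ...and uses it for a while

    snapshot = master.getstate()      # the VM image is snapshotted/cloned here

    clone_a = random.Random()
    clone_b = random.Random()
    clone_a.setstate(snapshot)        # both clones resume from identical state
    clone_b.setstate(snapshot)

    # Until fresh entropy is mixed in, the clones derive identical "secrets",
    # which is exactly the predictability that enables the attacks in [20].
    assert clone_a.getrandbits(256) == clone_b.getrandbits(256)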
3. Representation of the Navy Data Center System Environment
Currently, SPAWAR New Orleans supports over two dozen applications running in a
virtualized data center environment. At any given time, up to 300 VM instances are in
execution. Over time, it is expected that both the number of applications and the
workload will continue to grow steadily and increase demand on the operation. Part of
the mission is to support incident management, such as hurricane response, which
requires that redundant and reliable capacity be available. It is self-evident that, for most
of the applications, it is critical that they be available and functioning correctly under any
circumstances.
The baseline platform configuration consists of commodity Sun Microsystems and Dell
hardware, and Sun Solaris and Microsoft Windows operating systems. The virtualization
layer is implemented using VMWare products. Data storage is organized using a
centralized Storage Area Network, with local disk boot configuration and SAN-based
application installations.
Incident Response




Incident response systems support communication, coordination, and decision making
under emergency conditions, such as natural disasters. Specifically, the Navy provides a messaging platform, which allows for information to be broadly disseminated among first
responders. It also provides tracking of assets with information provided by transponders
and provides up-to-date information to decision makers through appropriate visualization.
Overall, most of the information exchanged via incident response applications is of a
sensitive nature and, as such, must be closely monitored and regulated. Specific concerns
include the location of assets, personally-identifiable information, as well as operational
security. The latter presents a difficult challenge as it often has to do with the
accumulation of individual data points over time that can, together, paint a bigger picture
of the enterprise. This data may include patterns of response, organizational structure,
and availability of assets that would normally be kept secret.
The threat to operational security in this context is well illustrated by recent research on
hidden databases on the Internet [13]. A hidden database is one that is not generally
available but does provide a query interface, such as a Web form, that can return a limited
sample. An example would be any reservation system which provides information on
availability for specific parameters. Researchers have shown that it is quite feasible to
devise automated queries that can exploit the limited interface to obtain representative
samples of the underlying database and to accurately estimate the overall content of the
hidden database. The solution to such concerns involves careful design of the application
interface so that returned data does not permit an adversary to collect representative
samples. Absent specifics of the Navy application pool, this concern is not discussed
further in this report.
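Independent of any specific Navy application, the estimation idea can be conveyed with a toy simulation (Python); a real attack must work through a constrained form interface and careful query planning, which this sketch abstracts away.

    import random

    KEYSPACE = 10**6
    SECRET_DB = {random.randrange(KEYSPACE) for _ in range(50_000)}

    def query_interface(key):
        # Stand-in for the public form: reveals only whether 'key' has a match.
        return key in SECRET_DB

    # Attacker: probe uniformly random keys through the limited interface and
    # scale the observed hit rate up to the whole key space.
    probes = 20_000
    hits = sum(query_interface(random.randrange(KEYSPACE)) for _ in range(probes))
    print(f"estimated records: {hits / probes * KEYSPACE:.0f}"
          f" (true size: {len(SECRET_DB)})")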
Logistics
Navy data centers provide logistic systems for Navy operations by tracking assets,
equipment, and maintenance schedules. The execution of such applications has concerns
similar to those discussed in the previous section, especially operational security.
Human Resources
Human resources applications provide the typical HR functions of a large enterprise as
applied to the specific needs of the Navy. In addition to hiring and discharging of service
members, the system keeps track of skill sets and pay certifications. An additional
concern is the non-disclosure of personally-identifiable information to third parties. This
includes restricting the release of information that can indirectly lead to such disclosure.
As with other operational security concerns, the counter-measures are application-
specific and are not discussed further.
4. Navy Security Concerns and Evaluation Criteria
In general terms, the Navy's IT operations have several major security concerns, some of
which are specific to its operation as a DoD enterprise. There are a number of factors that
a repeatable decision making process should incorporate when assessing cloud computing
vendor offerings. Because this is an unclassified report, there are limits on the amount of specific information that we could obtain and provide. Therefore, the outlined criteria should be treated as a high-level roadmap, and the specific details will need to be fleshed out when an actual decision point is at hand.
Release-code Auditing
One of the main concerns is release-code auditing, which requires that all source code be
audited before being put into production. This is, effectively, a major restriction on the
kind of services that can be provided by commercial vendors on general-purpose cloud
platforms.
It is likely economically and logistically infeasible for vendors to undergo a certification
process for all the code that could provide services to the Navy. For example, it would
not be feasible to use an existing scalable email platform, such as the one provided by Google, to improve the scalability of the Navy's operations.
Compliance with DoD Regulations
Another hurdle in the adoption of COTS cloud services is the need to ensure that all
services comply with internal IT requirements. Due to the sensitive nature of this
information, no specific issues are discussed in this report. It is worth noting, however,
that even if the problem can be resolved administratively and the vendor can demonstrate
compliance, the Navy would still have to dedicate resources to monitor this compliance
on a continuous basis.
The Common Criteria for Information Technology Security Evaluation (CC) is an
international standard (ISO/IEC 15408) for computer security certification. CC is a
framework in which computer system users can specify their security requirements,
vendors can then implement and/or make claims about the security attributes of their
products, and testing laboratories can evaluate the products to determine if they actually
meet the claims. DoD security requirements can be expressed, in part, by referring to
specific protection profiles and can help determine minimum qualifications. For example,
VMWare's ESX Server 3.0.2 and VirtualCenter 2.0.2 have earned EAL4+ recognition,
whereas Xen, the virtualization solution used by Amazon, has not been certified.
Overall, CC certification is a long and tedious process, and it speaks more to the commitment of the vendor than to the quality of the product. As a 2006 GAO report
[23] points out in its findings, there is “a lack of performance measures and difficulty in
documenting the effectiveness of the NIAP process”. Another side effect of certification
is that it takes up to 24 months for a product to go through an EAL4 certification process
[23] (p.8), which limits the availability of products with up-to-date features.
Given the fast pace of development in cloud computing products, basing a decision solely
on certification requirements may severely constrain DoD's choice. One possible
approach is to formulate narrower, agency-specific certification requirements to speed up
the certification. It is also important to recognize that the common EAL4 certification
does not cover some of the more sophisticated attack patterns that are relevant to cloud
computing, such as side channel attacks.
Another existing regulation that could provide a ready reference point for evaluation is the Health Insurance Portability and Accountability Act of 1996 (HIPAA). HIPAA is a
set of established federal standards, implemented through a combination of
administrative, physical and technical safeguards, intended to ensure the security and privacy of protected health information. These standards affect the use and disclosure of
protected health information by certain covered entities (such as healthcare providers
engaged in electronic transactions, health plans and healthcare clearinghouses) and their
business associates.
In technical terms, HIPAA requires the encryption of sensitive information 'in flight' and 'at rest'; dictates basic administrative and technical procedures for setting and enforcing access control policies; and requires in-depth auditing capabilities, data back-up procedures, and disaster recovery mechanisms. These requirements are based on
established best security practices and are common to all applications dealing with
sensitive information. From a DoD perspective, these are minimal standards and
additional safeguards are likely necessary for most applications. In that respect, HIPAA
can be considered a minimal qualification requirement.
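As a minimal illustration of the 'at rest' requirement, the following sketch (Python, using the open-source cryptography package) encrypts a record before it is written to storage. It shows only the mechanics; HIPAA compliance additionally demands key management, auditing, backup, and access-control procedures that no code snippet captures.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # in practice, held in a key-management system
    cipher = Fernet(key)

    record = b"patient=...; diagnosis=..."  # sensitive record (placeholder content)
    stored_bytes = cipher.encrypt(record)   # what actually lands on disk ("at rest")

    assert cipher.decrypt(stored_bytes) == record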
Overall, the emergence of any major standards targeted at cloud computing should be
taken as a cue that cloud computing is reaching a certain level of maturity, as standards
development tends to follow rather than lead. However, the emergence of standards
cannot, by itself, be relied upon as a timing mechanism to identify optimal points for
adopting new technologies. Standards processes have mixed record on their timeliness (at
best) and tend to follow rather than lead. In our view, given the current pace of
development by major IT companies, it is likely that, within the next two years, mature
offerings will start to differentiate themselves from the rest.
Limits on Physical Security
Currently, the security of the services is provided, in part, through the use of appliances,
such as firewalls and intrusion detection systems. These devices are used to
compartmentalize the execution of different applications and isolate faults and breaches.
Having a separate and specialized device creates an independent layer of protection that
considerably raises the bar for attackers—a wholesale breach requires that multiple layers
be breached.
In a cloud deployment, there is no direct analogy to this practice although the industry
has recognized the problem and is moving toward offering solutions. Specifically,
virtualization vendors are beginning to offer an interface (API) through which third-party
virtual appliances could be plugged in to enforce security policies and monitor traffic and
execution environments. The conceptual problem is that such components are dependent
on the security of the hypervisor—if it is breached there is the opportunity for attackers to
circumvent the virtual security appliances. Eventually, solutions to this problem will
likely be found—CPU vendors, such as Intel, are actively developing hardware support
for virtualization. Yet, at this relatively early stage of development, there is no proof that
virtual appliances offer the same level of protection as physical ones.
Underlying Hardware Security and Trust
Since cloud infrastructure is likely to be remote, it is impractical, if not impossible, for the Navy to scrutinize it to the heightened level its security concerns require, and the issue of hardware trojans must be addressed. With the advent of global markets,
vertically integrated chip manufacturing has given way to a cost-efficient, multi-stage supply chain whose individual stages may or may not pass through environments friendly to US interests (Figure 3).




                         Figure 3: IC manufacturing supply chain [9]
A DoD report crystallizes the issues as follows [9]:
       " Trustworthiness of custom and commercial systems that support military
       operations [..] has been jeopardized. Trustworthiness includes confidence that
       classified or mission critical information contained in chip designs is not
compromised, reliability is not degraded or unintended design elements inserted in
       chips as a result of design or fabrication in conditions open to adversary agents.
       Trust cannot be added to integrated circuits after fabrication; electrical testing and
       reverse engineering cannot be relied upon to detect undesired alterations in
       military integrated circuits. The shift from United States to foreign IC
       manufacture endangers the security of classified information embedded in chip
       designs; additionally, it opens the possibility that 'Trojan horses' and other
       unauthorized design inclusions may appear in unclassified integrated circuits used
       in military applications. More subtle shifts in process parameters or layout line
       spacing can drastically shorten the lives of components."
If, for conventional IT delivery platforms that are locally accessible, only the design specifications and the testing stages can be controlled, the situation is exacerbated for remote cloud computing platforms: a very real possibility exists of hardware-based malicious code hiding in the underlying integrated circuits. Potentially serious ramifications of hardware-based subversion range from unmitigable cross-tenant data leakage to VM identification and denial-of-service attacks [16].
We stress that this type of surreptitious hardware subversion is beyond the ability of cloud vendors' AV software to deal with; detecting such malicious code in hardware at all is an ongoing, open research problem. The risk is exacerbated, however, by the lack of infrastructure control inherent in using public cloud offerings, as well as by the cloud infrastructure owner's motivation to purchase and deploy whatever COTS hardware is available.
Client-side Vulnerabilities Mitigation
Cloud clients (in particular web browsers) incorporate more functionality than the mere
display of text and images, including rich dynamic content comprising media playback and interactive page elements such as drop-down menus and image roll-overs. These
features include extensions such as the JavaScript programming language, as well as additional client-side features such as application plugins (Acrobat Reader, QuickTime, Flash, Real, and Windows Media Player) and Microsoft-specific enhancements such as Browser Helper Objects and ActiveX (Microsoft Windows's interactive execution framework). Invariably, these extensions have shown implementation flaws that can be maliciously exploited as security vulnerabilities to gain unauthorized access or obtain sensitive information.
In August 2009, Sensepost demonstrated a proof-of-concept attack that brute-forces password reset links (most, if not all, cloud applications use some form of email- or secret-question-based password recovery), as well as several ways to steal cloud resources: stealing Amazon Windows instance licenses, paid application theft (via DevPay), and a "cloud DoS" based on exponential, virus-like growth of EC2-hosted VM instances [4].
When a user accesses a remote site with a client (say, a browser), he types in a URL and initiates the connection. Once connected, a relationship of trust is established in both directions: the user trusts the site (the user initiated the connection and now trusts the page and content display), and the site trusts the user (in executing actions from the user's browser). It is this trust, together with the various features incorporated into rich clients, that attackers can subvert through what are called Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF) attacks.
Cross-site scripting attacks mostly use legitimate web sites as a conduit, where the sites allow other (malicious) users to upload content or post links. It must be emphasized that, from the point of view of the clients, neither HTTPS (the encrypted channel with the little lock in the browser that denotes 'safety') nor logins protect against XSS or CSRF attacks. In addition, unlike XSS attacks, which necessitate user action by clicking on a link, CSRF attacks can also be executed without the user's involvement, since they exploit explicit software vulnerabilities (i.e., predictable invocation structures) on the cloud platform. As such, the onus to prevent CSRF attacks falls squarely on the cloud vendors' application developers. Some login and cryptographic token approaches, if conscientiously designed to prevent CSRF attacks, can be of help [5].
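One common token-based defense can be sketched as follows (Python): the server binds a secret-keyed token to each session and rejects form submissions that do not carry it; a forging site cannot compute the token. This is a minimal sketch of the general idea, not a complete framework.

    import hashlib, hmac, secrets

    SERVER_SECRET = secrets.token_bytes(32)   # per-deployment secret, never sent out

    def csrf_token(session_id: str) -> str:
        # Bind the token to the session so it cannot be reused across users.
        return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

    def form_is_authentic(session_id: str, submitted: str) -> bool:
        return hmac.compare_digest(csrf_token(session_id), submitted)

    # The server embeds csrf_token(session) in every form it serves; a forged
    # cross-site request does not know the token and is rejected on submission.
    assert form_is_authentic("session-1234", csrf_token("session-1234"))
    assert not form_is_authentic("session-1234", "attacker-guess")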
Availability, Scalability, and Resistance to Denial of Service Attacks
Practically all service providers claim uptime well above 99%, scaling on demand, and
robust network security. By and large, such claims are based on a few architectural
features (redundant power supply, multiple network connections, presence of network
firewalls) and should be scrutinized with a healthy dose of skepticism. Few (if any) of the
providers have performed large-scale tests to quantify how their services really behave
under stress. Customers should demand concrete proof that the provider can fulfill the
promises in the service level agreement (SLA).
In our view, the only way to adequately assess the performance-related vulnerabilities is
to perform a Red Team exercise. Such an exercise, unlike the SLA, would be able to
answer some very specific questions:
•  How many simultaneous infrastructure failures are necessary to take the system down?
•  How quickly can it recover?
•  How does the system perform under excessive load—does performance degrade gracefully, or does it collapse?
•  How well does the system really scale? In other words, as the load grows, how fast do resource requirements grow?
•  What happens if other tenants misbehave—does that affect performance?
•  How well can the service resist a massive brute-force DoS attack? The latter is not just a function of the deployed hardware and software but also includes the adequacy and the training of administrative staff and their ability to communicate and respond.
Short of a complete Red Team simulation, an experienced technical team should examine the architecture of the system in detail and try to answer as many of the above questions as possible. This will not give the same level of assurance but should be the minimum assessment performed on an offering selected for potential acquisition.
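Even a rudimentary load probe can reveal whether latency degrades gracefully as concurrency grows. A minimal sketch (Python) follows; the endpoint is hypothetical, and such probing should only ever be run against systems one is authorized to test.

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    TARGET = "https://service.example.test/health"   # hypothetical endpoint

    def one_request():
        start = time.time()
        try:
            urlopen(TARGET, timeout=10).read()
            return time.time() - start
        except Exception:
            return None                              # failures counted separately

    for workers in (1, 10, 50, 100):                 # step up the offered load
        with ThreadPoolExecutor(workers) as pool:
            results = list(pool.map(lambda _: one_request(), range(workers * 10)))
        ok = sorted(r for r in results if r is not None)
        summary = f"median latency {ok[len(ok) // 2]:.2f}s" if ok else "all failed"
        print(f"{workers:3d} workers: {len(ok)}/{len(results)} ok, {summary}")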
5. Vendor Assessment Overview
The presented assessments should be viewed as a small but typical sample of the types of
services offered on today's market. The research effort was hampered by a relative dearth
of specific information when it comes to the security of offered services. Although our
sample of surveyed products was by no means exhaustive, we found it difficult to extract
information from vendor representatives beyond what is already publicly available. Some vendors completely ignored our requests for information and were not included here.

6. Vendor Assessment: Amazon Web Services (IaaS)
6.1. Description
There are still differing opinions on what exactly constitutes cloud computing, yet there
appears to be a consensus that, whatever the definition might be, Amazon's Web Services (AWS) are a prime example. Frequently, IaaS is also referred to as utility computing.
Broadly, AWS consists of several basic services:
•  Amazon Elastic Compute Cloud (EC2) is a service that permits a customer to create a virtual OS disk image (in a proprietary Amazon format) that will be executed by a web service running on Amazon's hosted web infrastructure (see the sketch after this list).
•  Amazon Elastic Block Store (EBS) provides block-level storage volumes for use with EC2 instances. EBS volumes are off-instance storage that persists independently from the life of an instance. Volumes range from 1 GB to 1 TB and can be mounted as devices by EC2 instances; multiple volumes can be mounted to the same instance.
•  Amazon SimpleDB is a web service providing the core database functions of data indexing and querying. The database is not a traditional relational database, although it provides an SQL-like query interface.
•  Amazon Simple Storage Service (S3) is a service that essentially provides an Internet-accessible remote network share.
•  Amazon Simple Queue Service (SQS) is a messaging service that offers a reliable, scalable, hosted queue for storing messages as they travel between computers. Its main purpose is to automate workflow processes, and it provides means to integrate such workflows with EC2 and other AWS services.
•  Amazon Elastic MapReduce is a web service that emulates the MapReduce computational model adopted by Google. It utilizes a hosted Hadoop (http://hadoop.apache.org/) framework running on the EC2 infrastructure. Hadoop is the open-source implementation of the ideas behind Google's MapReduce and is supported by major companies, such as IBM and Yahoo!.
•  Amazon CloudFront is a recent addition that automates the process of creating a content distribution network.
•  Amazon Virtual Private Cloud (VPC) enables enterprises to connect their existing infrastructure to a set of isolated AWS compute resources via a Virtual Private Network (VPN) connection, and to extend their existing management capabilities, such as security services, firewalls, and intrusion detection systems, to include their AWS resources.
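The degree of automation is visible in how little is required to obtain a running server. The sketch below (Python) uses the open-source boto library to launch an EC2 instance; the credentials, machine image, key pair, and security group names are all placeholders.

    import boto.ec2

    # Connect with the tenant's API credentials (placeholders shown here).
    conn = boto.ec2.connect_to_region(
        "us-east-1",
        aws_access_key_id="AKIA...",
        aws_secret_access_key="...")

    # Launch one small instance from a (hypothetical) machine image.
    reservation = conn.run_instances(
        "ami-12345678",
        instance_type="m1.small",
        key_name="example-keypair",           # SSH key pair registered with AWS
        security_groups=["example-group"])    # firewall group (see Section 6.2)
    print(reservation.instances[0].id)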
It is fair to say that Amazon's service interfaces are emerging as one of the early de facto technical standards of cloud computing. One recent development that can further
accelerate this trend is the development of Eucalyptus (Elastic Utility Computing
Architecture Linking Your Programs To Useful Systems). It is an open-source software
infrastructure for implementing cloud computing on clusters. The current interface to
Eucalyptus is compatible with Amazon's EC2, S3, and EBS interfaces, but the
infrastructure is designed to support multiple client-side interfaces. Eucalyptus is
implemented using commonly available Linux tools and basic Web-service technologies
and is becoming a standard component of the Ubuntu Linux distribution.

6.2. Security Assessment
The primary source of information for this assessment is the AWS security whitepaper
[3], which focuses on three traditional security dimensions: confidentiality, integrity, and
availability. It does not cover all aspects of interest but does provide a good sampling of
the overall state of affairs.
Certifications and Accreditations
Amazon is cognizant of its customers' need to meet certification requirements; however, actual certification efforts appear to be at an early stage:
      “AWS is working with a public accounting firm to ensure continued Sarbanes Oxley
      (SOX) compliance and attain certifications such as recurring Statement on Auditing
      Standards No. 70: Service Organizations, Type II (SAS70 Type II) certification.”
Separately, Amazon provides a short white paper on building HIPAA-compliant applications [1]. From its content, it becomes clear that virtually all responsibility for complying with the regulations falls on the customer.
AWS provides key-based authentication for access to its virtual servers. Amazon EC2 creates a 2048-bit RSA key pair, with private and public keys and a unique identifier for each key pair, to facilitate secure access. The default setup for Amazon EC2's firewall is a deny-all mode, which automatically denies all inbound traffic unless the customer
explicitly opens an EC2 port. Administrators can create multiple security groups in order
to enforce different ingress policies as needed and can control each security group with a
PEM-encoded X.509 certificate and restrict traffic to each EC2 instance by protocol,
service port, or source IP address.
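As a concrete illustration of this model, the sketch below uses the open-source boto library (a third-party Python interface to AWS, not part of the material under review) to create a key pair and then open a single port in the otherwise deny-all firewall. All names and addresses are hypothetical.

    # Sketch: key-pair creation and explicit ingress on an otherwise
    # deny-all EC2 firewall, using the third-party boto library.
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')    # credentials come
                                                      # from the environment

    # EC2 generates the 2048-bit RSA key pair; only this call ever
    # returns the private key material, so it must be saved immediately.
    kp = conn.create_key_pair('navy-demo-key')        # hypothetical name
    open('navy-demo-key.pem', 'w').write(kp.material)

    # A new security group denies all inbound traffic by default;
    # the rule below is an explicit exception for SSH.
    sg = conn.create_security_group('admin', 'administrative access')
    sg.authorize(ip_protocol='tcp', from_port=22, to_port=22,
                 cidr_ip='192.0.2.0/24')              # corporate network only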
Physical Security
Physical security measures appear consistent with best industry practices but clearly do
not provide the same level of physical protection as an in-house DoD facility:
   “AWS data centers are housed in nondescript facilities, and critical facilities have
   extensive setback and military grade perimeter control berms as well as other
   natural boundary protection. Physical access is strictly controlled both at the
   perimeter and at building ingress points by professional security staff utilizing video
   surveillance, state of the art intrusion detection systems, and other electronic means.
   Authorized staff must pass two-factor authentication no fewer than three times to
   access data center floors. All visitors and contractors are required to present
   identification and are signed in and continually escorted by authorized staff.”
Backups
Backup policies are not described in any level of detail, and the brief description
provided is contradictory. It is only clear that some of the data is replicated at multiple
physical locations, while backing up the remaining part is the customer's responsibility.
Amazon Elastic Compute Cloud (EC2) Security
EC2 security is the most detailed part of the white paper and provides the greatest
amount of useful information, although many details are obscured.
Amazon's virtualization layer utilizes a “highly customized version” of the Xen
hypervisor (http://xen.org). Xen is based on the para-virtualization model, in which
guest operating system (OS) instances run against a virtual hardware interface exposed
by the hypervisor rather than directly on the hardware. Thus, all privileged access is
mediated and controlled by the hypervisor. The firewall is part of the hypervisor layer
and mediates all traffic between the network interface and the guest OS. All guest
instances are isolated from each other.
Virtual (guest) OS instances are built and are completely controlled by the customer as
physical machines would be. Customers have root access and all administrative control
over additional accounts, services, and applications. AWS administrators do not have
access to customer instances, and cannot log into the guest OS.
The firewall supports groups, thereby permitting different classes of instances to have
different rules. For example, in the case of a traditional three-tiered web application, web
servers would have ports 80/443 (HTTP/HTTPS) open to the world; application servers
would have an application-specific port open only to the web server group; and database
servers would have port 3306 (MySQL) open only to the application server group. All
three groups would permit administrative access on port 22 (SSH), but only from the
customer's corporate network. The firewall cannot be controlled by the host/instance
itself; changes require the customer's X.509 certificate and key to authorize.
AWS provides an API that allows automated management of virtual machine instances.
Calls to launch and terminate instances, change firewall parameters, and perform other
functions must be signed by an X.509 certificate or the customer's Amazon Secret
Access Key and can be encrypted in transit with SSL to maintain confidentiality.
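The general shape of this signing scheme can be sketched as follows: an HMAC-SHA256 digest over the canonicalized request parameters, keyed by the Secret Access Key, travels with the request. This is a simplified illustration of the mechanism, not a complete implementation of Amazon's signature format.

    # Sketch: signing an API request with a Secret Access Key
    # (simplified; Amazon's full canonicalization rules are omitted).
    import base64, hashlib, hmac
    from urllib.parse import urlencode

    SECRET_KEY = b'EXAMPLE-SECRET-ACCESS-KEY'      # hypothetical value

    def sign_request(params):
        # Canonicalize: sort the parameters and encode them as a
        # query string, so both sides hash identical bytes.
        canonical = urlencode(sorted(params.items()))
        digest = hmac.new(SECRET_KEY, canonical.encode('utf-8'),
                          hashlib.sha256).digest()
        params['Signature'] = base64.b64encode(digest).decode('ascii')
        return params

    signed = sign_request({'Action': 'RunInstances',
                           'Timestamp': '2009-10-10T00:00:00Z'})

Because the server recomputes the digest with its copy of the key, any tampering with the parameters in transit invalidates the signature.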
Customers have no access to physical storage devices, and the disk virtualization layer
automatically wipes storage blocks that are no longer in use to prevent data leaks. It is
recommended that tenants use encrypted file systems on top of the provided virtual block
device to maintain confidentiality.
Network security mechanisms address and mitigate the most common attack vectors.
DDoS attacks are mitigated using known techniques such as SYN cookies and connection
limiting; in addition, Amazon maintains some spare network capacity. VM instances
cannot spoof their own IP/MAC addresses, as the virtualization layer enforces correctness.
Port scanning is generally considered ineffective, as almost all ports are closed by
default. Packet sniffing is not possible, as the virtualization layer prohibits the virtual
NIC from being put into promiscuous mode; no traffic that is not addressed to the
instance will be delivered. Man-in-the-middle attacks are prevented through the use of
SSL-encrypted communication.
Amazon S3/SimpleDB Security
Storage security, as represented by the S3 and SimpleDB services, has received relatively
light treatment. Data at rest is not automatically encrypted by the infrastructure and it is
the application's responsibility to do so. One drawback is that, if application data is
stored encrypted on SimpleDB, the query interface is effectively disabled.
The S3 APIs provide both bucket- and object-level access controls, with defaults that
only permit authenticated access by the bucket and/or object creator. Write and Delete
permission is controlled by an Access Control List (ACL) associated with the bucket.
Permission to modify the bucket ACLs is itself controlled by an ACL, and it defaults to
creator-only access. Therefore, the customer maintains full control over who has access
to their data. Amazon S3 access can be granted based on AWS Account ID, DevPay
Product ID, or open to everyone.
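A sketch of these controls, once more via the third-party boto library and with hypothetical bucket, object, and user identifiers, shows how the defaults remain creator-only until grants are added explicitly.

    # Sketch: S3 bucket- and object-level access controls via boto.
    import boto

    conn = boto.connect_s3()                     # credentials from environment
    bucket = conn.create_bucket('navy-demo-bucket')    # hypothetical name

    # Newly written objects default to a private ACL: creator-only access.
    key = bucket.new_key('report.pdf')
    key.set_contents_from_string('...')

    # Explicitly widen access to a single object, leaving the bucket
    # (and the right to modify its ACL) creator-only.
    key.set_acl('public-read')

    # Grant another AWS account read access by canonical user id.
    bucket.add_user_grant('READ', '0123456789abcdef')  # hypothetical id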
When an object is deleted from Amazon S3, removal of the mapping from the public
name to the object starts immediately, and is generally processed across the distributed
system within several seconds. Once the mapping is removed, there is no external access
to the deleted object. That storage area is then made available only for write operations
and the data is eventually overwritten by newly stored data.
7. Vendor Assessment: Boomi AtomSphere (PaaS/SaaS)
7.1. Description
Boomi AtomSphere offers an on-demand integration platform for any combination of
Software-as-a-Service, Platform-as-a-Service, Infrastructure-as-a-Service, and on-
premise applications. Their main selling point is leveraging existing applications by
providing connectors for integrating SaaS offerings with on-premise, back-office
applications. This is a scenario that may be attractive for Navy purposes.
The integration processes are implemented through their proprietary, patent-pending
Atom, a dynamic runtime engine that can be deployed remotely or on premises. These
Atoms capture the components of end-to-end integration processes, including
transformation and business rules, processing logic and connectors.
They can be hosted by Boomi or other cloud vendors for SaaS-to-SaaS integration or
downloaded locally for SaaS to on-premise integrations. On-premise applications are
typically firewalled, with no direct access via the Internet, and no access even via a DMZ.
To handle this requirement, the Boomi Atom can be deployed on-premise to directly
connect the on-premise applications with one or more SaaS/Cloud applications. Changes
to firewalls (such as opening an inbound port) are not required, and the Atom supports
fully bi-directional movement of data between the applications being integrated. When
the Atom is deployed locally, no data enters Boomi's data center at any point.
Should SaaS-to-SaaS integration be required, with the applications accessed via a secure
Internet connection, Atoms can be hosted in Boomi's cloud, with Boomi managing the
uptime of the Atom. Customer data is isolated from other tenants in Boomi's platform
(though the standard multi-tenancy caveats mentioned above apply).
Finally, it is possible to deploy Atoms into any cloud infrastructure that supports Java,
such as Amazon, Rackspace, and OpSource, offering direct, non-Boomi connectivity
between applications. In this deployment style as well, no customer data enters Boomi's
data center.
7.2. Security Assessment
Boomi has deployed multiple controls at the infrastructure, platform, application, and
data levels, thus acknowledging the multidimensional security aspects of their product.
Infrastructure Security
The Boomi infrastructure meets the AICPA SAS70 Type II (and Level 1 PCI DSS) audit
requirements; SAS70 is the most widely recognized regulatory compliance mandate
issued by the American Institute of Certified Public Accountants. Its primary scope is the
inquiry, examination, and testing of a service organization's control environment. Data
centers, managed-service providers, and SaaS vendors represent such service
organizations [15].
Boomi's controls include best-of-breed routers, firewalls, and intrusion detection systems,
with DDoS protection bolstered by redundant IP connections to world-class carriers
terminated on a carrier-grade network. Power continuity is provided by redundant UPS
power and diesel generator backups, as well as HVAC facilities. In addition, they have
instituted multipoint monitoring of key metrics, with alerts for both mission-critical and
ongoing maintenance issues.
Platform and Application Security
As noted, Atoms can reside locally or be hosted in Boomi's data center. An Atom can
communicate information to the Boomi data center in two modes: continuous automatic
and user-initiated communications. During ongoing communications, Atoms merely send
operational information to Boomi, such as uptime status, tracking information cataloging
process executions, configuration updates, and code update checks. User-initiated
communication is undertaken only upon the request of an authorized user. The
information sent includes logging information about specific integration processes;
error, failure, and diagnostic messages; and the retrieval of schemata for the design of
new integration processes.
No inbound firewall ports need to be opened in order for an Atom to communicate with
Boomi's data center. Traffic is protected by standard 128-bit SSL encryption. Any
credential needed for application integration (such as a database password) is encrypted
using X.509 public/private key pairs and stored for the account. When an Atom is
deployed, the encrypted password is passed along, and the credentials supplied at
runtime unlock the password.
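The described pattern, in which a stored credential is kept encrypted under a public key and recovered only at runtime, can be sketched with the Python cryptography library as follows. This illustrates the general X.509/RSA technique only, not Boomi's actual implementation.

    # Sketch: protecting a stored credential with an RSA key pair
    # (illustrative of the pattern, not Boomi's implementation).
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # In practice the key pair would come from an X.509 certificate;
    # here we generate one so the sketch is self-contained.
    private_key = rsa.generate_private_key(public_exponent=65537,
                                           key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # What is stored with the account: only the ciphertext.
    stored = public_key.encrypt(b'database-password', oaep)

    # At runtime, the deployed credentials unlock the password.
    password = private_key.decrypt(stored, oaep)
    assert password == b'database-password'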
The AtomSphere platform (used to build, deploy, and manage the Atoms, regardless of
deployment style) is accessed via a standard web browser. Boomi uses the OWASP Top
Ten list to address the most critical client- and server-side web application security flaws.
The U.S. Defense Information Systems Agency recommends the OWASP Top Ten as
key best practices to be used as part of the DoD Information Technology Security
Certification and Accreditation Process (DITSCAP, now DIACAP) [10].
Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF), mentioned earlier,
are also listed in the Top Ten. Boomi's control to prevent XSS relies on proper XML
encoding through an authenticated AWS REST-based API when data is delivered to the
client. Timestamps, as well as the aforementioned AWS authentication, are used to
mitigate (though not eliminate) CSRF attacks [6].
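A timestamp-based CSRF mitigation of the kind alluded to can be sketched as follows: the server issues a token binding the session to an issue time under a server-side secret, and rejects state-changing requests whose token is missing, forged, or stale. The sketch is generic and does not represent Boomi's code; the secret and session values are hypothetical.

    # Sketch: a timestamped, HMAC-protected anti-CSRF token.
    import hashlib, hmac, time

    SERVER_SECRET = b'server-side-secret'      # hypothetical secret
    MAX_AGE = 900                              # tokens expire in 15 minutes

    def issue_token(session_id):
        ts = str(int(time.time()))
        mac = hmac.new(SERVER_SECRET,
                       ('%s|%s' % (session_id, ts)).encode(),
                       hashlib.sha256).hexdigest()
        return '%s|%s' % (ts, mac)

    def check_token(session_id, token):
        try:
            ts, mac = token.split('|')
        except ValueError:
            return False                       # malformed token
        expected = hmac.new(SERVER_SECRET,
                            ('%s|%s' % (session_id, ts)).encode(),
                            hashlib.sha256).hexdigest()
        fresh = int(time.time()) - int(ts) <= MAX_AGE
        return fresh and hmac.compare_digest(mac, expected)

    token = issue_token('session-42')
    assert check_token('session-42', token)    # genuine request accepted
    assert not check_token('attacker', token)  # forged session rejected

Note that an expired token is rejected even if its MAC is valid, which is why timestamps mitigate but do not eliminate the attack: a forged request replayed within the validity window would still succeed.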
Client Data Security
Boomi stresses that its AtomSphere platform does not, by default, retrieve, access, or
store client data. It merely supports the data mapping rules necessary to facilitate
integration, without saving data at Boomi's location unless specifically configured to do
so. Hence, data flowing through locally resident Atoms does not touch the Boomi data
center: it is transported directly to either the SaaS or a local application through an
Atom component (a connector) configured to user-specified security requirements.
Should the client prefer a zero-footprint deployment with Atoms hosted in Boomi's data
center, the data center infrastructure controls described above are used to safeguard the
integrity, confidentiality, and availability of those Atoms.
8. Vendor Assessment: force.com (PaaS)
8.1. Description
force.com offers a Platform-as-a-Service cloud architecture to clients, designed to support
turn-key, Internet-scale applications. Their primary selling points are a track record of
high system uptime, with an Internet-facing mechanism for tracking reliability, and the
ability to develop certain classes of web-based applications declaratively, with little need
to write code.
Client applications to be executed on the force.com cloud are stored as metadata, which is
interpreted and transformed into objects executed by the force.com multitenant runtime
engine. Applications for the force.com architecture can be developed declaratively using
a native application framework, via a Java-like programming language called Apex, or
via exposed APIs that allow applications to be developed in C#, Java, and C++. The
APIs support integration with other environments, e.g. to allow data to be accessed from
sources external to the force.com infrastructure. Applications that use the API do not
execute within the force.com cloud—they must be hosted elsewhere.
force.com imposes a strict application testing regimen on new applications before they
are deployed, to ensure that new applications do not seriously impact the performance of
existing applications running in the force.com cloud. An extensive set of resource limits
is also imposed, to prevent applications from monopolizing CPU and memory resources.
Operations that violate these resource limits result in runtime exceptions in the
application.
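The effect of such governor limits can be illustrated with a simple guard: each request receives a budget of metered operations, and exceeding it raises a runtime exception, as described above. The sketch below shows the pattern only; it is not force.com's Apex runtime, and the limit value is hypothetical.

    # Sketch: per-request resource limits enforced as runtime exceptions
    # (illustrative of the governor-limit pattern, not force.com's runtime).
    class LimitExceeded(Exception):
        pass

    class RequestContext:
        def __init__(self, max_queries=100):
            self.max_queries = max_queries
            self.queries = 0

        def charge_query(self):
            # Every metered operation decrements the request's budget.
            self.queries += 1
            if self.queries > self.max_queries:
                raise LimitExceeded('query limit of %d exceeded'
                                    % self.max_queries)

    ctx = RequestContext(max_queries=100)
    try:
        for _ in range(200):          # a runaway application loop
            ctx.charge_query()        # each data access is metered
    except LimitExceeded as e:
        print('request aborted:', e)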
8.2. Security Assessment
force.com deploys a number of mechanisms for increasing the security of applications
and associated data. These are described in the following sections.
Infrastructure Security
The force.com infrastructure's network is secured via external firewalls that block
unused protocols and via internal intrusion detection sensors deployed on all network
segments. All communication with force.com is encrypted via SSL/TLS. Third-party
certification is regularly performed to assess network security. Power and HVAC for
datacenters are fully redundant, with multiple UPSs, power distribution units, diesel
generators, and cooling systems. External network connectivity is provided via fiber
enclosed in concrete vaults. A number of physical security measures are deployed at
force.com datacenters, including 24-hour manned security, biometric scanning for access
to computer systems, full video surveillance, and bullet-proof, concrete-walled rooms.
Computers hosting cloud-based applications are enclosed in steel cages with
authentication control for physical access. On the other hand, a successful phishing
attack has been mounted against force.com employees, resulting in the leakage of a large
amount of customer contact data [11].
Platform and Application Security
Native force.com applications are stored as metadata and executed by a runtime engine.
The database-oriented nature of the force.com APIs and the lack of low-level APIs for
applications executing within the cloud severely limit the available attack surface:
force.com applications do not execute independently of the runtime engine (which has
extensive auditing and resource monitoring checks), and applications developed using
the force.com APIs do not execute within the force.com cloud at all; they merely have
access to data in the cloud. On the other hand, if an attack vector against the multitenant
runtime engine itself were developed, it appears that data and applications belonging to
other organizations could be manipulated, since data is commingled. No attack vectors
of this kind have been reported, and the feasibility of developing such attacks is
unknown.
Client Data Security
Measures to ensure client data security are vague—available force.com literature simply
states that “salesforce.com protects customer data by ensuring that only authorized users
can access it” and that “All data is encrypted in transfer.” One mechanism that might
have implications for DoD applications is the presence of a force.com platform “Recycle
Bin”, which stores deleted data for up to 30 days, during which the data is available for
restoration. It is unclear whether the platform implements secure deletion for data stored
in the force.com datacenter, and whether there is a mechanism for ensuring that deleted
data is removed from backups.
9. Vendor Assessment: Pervasive Software (PaaS/SaaS)
9.1. Description
Similar to Boomi, but wider and more differentiated, Pervasive offers an on-demand
integration suite for Software-as-a-Service and Platform-as-a-Service (confusingly
renamed Integration-as-a-Service, or IaaS). Pervasive emphasizes development speed
and application integration, as well as heterogeneous data connectivity (200+
connectors, including connectors to legacy COBOL, QSAM, and MVS formats).
In addition, its product line fields a remarkable capability to process non-relational,
semi-structured, and unstructured content, which is typically not explicitly formulated
and lies buried in most organizations. Specialized, pre-made, turn-key integration
solutions for industry sectors are offered as well. As with Boomi, hosting options are
available (data integration through Pervasive's DataCloud or any other cloud), as are
local premises. Thus, a full range of SaaS-to-SaaS, on-premises-to-SaaS, and
on-premises-to-on-premises integrations, as well as traditional B2B and large-scale bulk
data exchange, can be handled via Pervasive's platform.
Figure 4: Pervasive Data Integration Platform
Their flagship product, the Pervasive Data Integration Platform, consists of a unified set
of tools for rapid development of seamless connectors that capture, as in Boomi's case,
process logic, data mapping, and transformation rules. The connector 'secret sauce' lies
in the runtime Integration Engine, instantiated by the Integration Services (shown in blue
and purple in Figure 4).
9.2. Security Assessment
In contrast to Boomi's strong emphasis on security (attested to by several position and
technical papers), surprisingly little detail is available on Pervasive's security stance. No
security controls are mentioned for the DataCloud. One sentence in their technical
description of the Integration Engine is devoted to noting that each instantiated
Integration Engine runs in its own address space, with this isolation increasing reliability.
Several issues can be deduced from the type and scale of the technologies employed.
Management of deployed components through the Integration Manager (yellow in Figure
4) is effected through standard browsers; this is subject to the standard XSS and CSRF
issues delineated in the previous section. Lastly, the workhorse of their integration
platform, the Integration Engine, is designed to handle an impressive gamut of
applications (as evinced by Figure 5), from extract, transform, and load (ETL) projects
to B2B integration (e.g., HIPAA, HL7, ACORD, X12, and SOA adapters).
This in itself is not the problem; but coupled with the Engine being lightweight to deploy,
as Pervasive emphasized several times, thorough input validation seems unlikely. Before
any adoption for mission-critical deployment, it is recommended that these and other
issues (e.g., the fact that interaction with applications that embed the Integration Engine
is also accomplished through relatively complex COM APIs) be addressed.
Figure 5: Integration Engine API
Despite several contacts via email and telephone with sales people, account managers,
and senior system engineers, with entreaties for material addressing the security issues
formulated in the Navy's SOW, no further material was forthcoming as of the date of
this writing.
We stress that the paucity of available information need not necessarily reflect on the
quality of their security controls. Indirect evidence that security controls across their
IaaS, PaaS, and SaaS offerings meet minimum standards can be found in 100+ case
studies, which span over a dozen sectors. Pervasive's family of products has been
deployed as an integration solution in industries subject to information classification,
audit, and access control stipulations comparable to the Navy's; the health care sector is
one of them.
9.3. Case Studies of Interest
The health care sector is of interest because of its statutory (HIPAA) data security
requirements, process complexity, entity scale and legacy system integration. Pervasive
lists about a dozen and a half case studies, among them the State of New York, which
decided to modernize its Medicaid system with the Pervasive Data Integrator for
efficiency and HIPAA-compliance reasons.
Originally created to streamline healthcare processes and reduce costs, HIPAA mandates
minimum standards to protect and keep private an individual's health information. For
organizations to be compliant, they must design their systems and applications to meet
HIPAA's privacy and security standards and related administrative, technical, and
physical safeguards.
These standards are referred to as the Privacy and Security Rules. HIPAA's Privacy
Rule requires that individuals' health information be properly protected by covered
entities. Among other requirements, the Privacy Rule regulates encryption standards for
data in transmission (in flight) and in storage (at rest). HIPAA's Security Rule mandates
detailed administrative, physical, and technical safeguards to protect health information:
inter alia, this means implementation and deployment of access controls, encryption,
backup, and audit controls for the data in question, subject to appropriate classification
and risk levels. Other industry case studies of potential interest to the Navy may involve
the Transportation/Manufacturing (logistics), Public Sector/Government (statutes), and
Financial (speed) sectors.
Judicious study of health care case studies may also yield insights into issues of scale and
legacy system migration. In this context, we mention the State of California's tackling of
HIPAA compliance with Pervasive software. Their unique requirements (non-negotiable
integration of legacy systems and a traffic volume of 10 million transactions per month)
have Navy correspondences. Lastly, Louisiana's East Jefferson General Hospital's
transition from a proprietary database ETL tool to a Pervasive solution, in order to
optimize use of their data warehouse, may warrant a look as well.
Author Short Bio: Vassil Roussev
Vassil Roussev is an Associate Professor of Computer Science at the University of New
Orleans (UNO). He received his Ph.D. in Computer Science from the University of North
Carolina at Chapel Hill in 2003. Since then, he has been on the faculty at UNO and has
focused his research on several related areas: computer security, digital forensics,
distributed systems and cloud computing, high-performance computing, and human-
computer interaction. The overall theme of his research is to bring to bear the massive
computational power of scalable distributed systems, as well as visual analytics tools, to
solve challenges in security and forensics with short turnaround times. He is also working
on tighter integration of security and forensics tools as a means to enrich both areas of
research and practice.
Dr. Roussev has over 20 peer-reviewed publications (book chapters, journal articles,
and conference papers) in the area of computer security and forensics, including featured
articles in IEEE Security and Privacy and Communications of the ACM. His research and
educational projects have been funded by DARPA, ONR, DoD, SPAWAR New Orleans,
the State of Louisiana, and private companies, including a Sun Microsystems Academic
Excellence Grant.
Dr. Roussev is Director of the Networking, Security, and Systems Administration
Laboratory (NSSAL) at UNO, coaches the UNO Collegiate Cyber Defense Team, and
represents UNO on the Large Resource Allocations Committee of the Louisiana Optical
Network Initiative (http://loni.org). He is also a Co-PI on a $15 million project to create
the LONI Institute (http://institute.loni.org/). The LONI Institute seeks to develop a
state-wide collaborative R&D environment among Louisiana's research institutions with
a clear focus on advancing computational scientific research.
Author Short Bio: Golden G. Richard, III
Golden G. Richard III is Professor of Computer Science in the Department of Computer
Science at the University of New Orleans. He received a B.S. in Computer Science (with
honors) from the University of New Orleans in 1988, and an M.S. and Ph.D. in Computer
Science from The Ohio State University in 1991 and 1995, respectively. He joined UNO
in 1994. Dr. Richard's research interests include computer security, operating systems
internals, digital forensics, and reverse engineering. He is a GIAC-certified digital
forensics investigator and a member of the ACM, the IEEE Computer Society, USENIX,
the American Academy of Forensic Sciences (AAFS), and the United States Secret Service
Taskforce on Electronic Crime. At the University of New Orleans, he directs the Greater
New Orleans Center for Information Assurance and co-directs the Networking, Security,
and Systems Administration Laboratory (NSSAL).
Prof. Richard has over 30 years of experience in computing and is a recognized expert in
digital forensics. He and his collaborators and students at the University of New Orleans
have made important research contributions in high-performance digital forensics, file
carving, evidence correlation mechanisms, on-the-spot digital forensics, and OS support
for digital forensics. Furthermore, he and his collaborators pioneered the use of Graphics
Processing Units (GPUs) to speed processing of digital evidence. Recently, he developed
and taught one of the first courses in academia on reverse engineering of malicious
software. He is the author of numerous publications in security and networking as well
as two books for McGraw-Hill, the first on service discovery protocols (Service and
Device Discovery: Protocols and Programming, 2002) and the second on mobile
computing (Fundamentals of Mobile and Pervasive Computing, 2005).
Author Short Bio: Daniel Bilar
Education

        Dartmouth College (Thayer School of Engineering), PhD Engineering Sciences, 2003
        Thesis: Quantitative Risk Analysis of Computer Networks
        Cornell University (School of Engineering), MEng Operations Research and
        Information Engineering, 1997
        Brown University (Department of Computer Science), BA Computer Science, 1995

Current Affiliation

        Assistant Professor of Computer Science, University of New Orleans, August 2008 -
        present
        Co-Chair, 6th Workshop on Digital Forensics and Incident Analysis (Port Elizabeth,
        South Africa), 2010
        Advisory Board, Journal in Computer Virology (Springer, Paris), 2008-
        Professional Advisory Board, SANS GIAC Systems and Network Auditor, 2002-2005

Past Affiliations

        Endowed Faculty Fellow, Wellesley College (Wellesley MA), 2006-2008
        Visiting Professor of Computer Science, Colby College (Waterville ME), 2004-2006

Research Interests

Detection, Classification and Containment of Highly Evolved Malicious Software, Systems of
Systems Critical Infrastructure Modeling and Protection, Risk Analysis and Management of
Computer Networks

Dr. Bilar was a founding member of the Institute for Security Technology Studies at
Dartmouth College, conducting counter-terrorism technology research for the U.S.
Department of Justice and the Department of Homeland Security.
List of Abbreviations
CC          Common Criteria for Information Technology Security Evaluation
DBMS        Database management system
DoD         Department of Defense
EAL         Evaluation assurance level
EC2         Elastic Compute Cloud
IaaS        Infrastructure as a service
IT          Information technology
HIPAA       The Health Insurance Portability and Accountability Act of 1996
LAMP        Linux, Apache, MySQL, Perl/PHP/Python
NIAP        National Information Assurance Partnership
OS          Operating system
PaaS        Platform as a service
PR          Public relations
SaaS        Software as a service
SLA         Service level agreement
TCO         Total cost of ownership
TPM         Trusted platform module
References
[1]    Amazon.com, “Creating HIPAA-Compliant Medical Data Applications with
       AWS”, April 2009,
       http://awsmedia.s3.amazonaws.com/AWS_HIPAA_Whitepaper_Final.pdf.
[2]    Amazon.com, “Amazon Web Services. Customer Agreement”,
       http://aws.amazon.com/agreement/.
[3]    Amazon.com, “Amazon Web Services: Overview of Security Processes”, Sep 2008,
       http://s3.amazonaws.com/aws_blog/AWS_Security_Whitepaper_2008_09.pdf
[4]    Nicholas Arvanitis, Marco Slaviero, Haroon Meer, "Clobbering the Cloud", BH
       USA 2009, August 2009, http://www.sensepost.com/research/presentations/2009-
       08-SensePost-BH-USA-2009.pptx
[5]    Adam Barth, Collin Jackson and John C. Mitchell, "Robust Defenses for Cross-Site
       Request Forgery", Proceedings of the ACM CCS 2008, October 2008,
       http://www.adambarth.com/papers/2008/barth-jackson-mitchell-b.pdf
[6]    Boomi Inc, "Boomi OWASP Top Ten Response", August 2009,
       http://www.boomi.com/files/boomi_datasheet_owasp_response.pdf
[7]    Rajkumar Buyya, Chee Shin Yeo, and Srikumar Venugopal, "Market-Oriented
       Cloud Computing: Vision, Hype, and Reality for Delivering Computing as the 5th
       Utility", Proceedings of the 2009 9th IEEE/ACM International Symposium on
       Cluster Computing and the Grid, May 2009
[8]    DARPA Microsystems Technology Office, BAA 07-24, "TRUST in Integrated
       Circuits", March 2007, http://www.darpa.mil/MTO/solicitations/baa07-
       24/index.html
[9]    DOD Defense Science Board Task Force, "High Performance Microchip Supply",
       Feb 2005, http://www.acq.osd.mil/dsb/reports/2005-02-HPMS_Report_Final.pdf
[10]   DoD DISA, "Security Checklists", http://iase.disa.mil/stigs/checklist/index.html
[11]   eSecurity Planet, “Salesforce.com Scrambles To Halt Phishing Attacks”,
       http://www.esecurityplanet.com/trends/article.php/3709871/Salesforcecom-
       Scrambles-To-Halt-Phishing-Attacks.htm.
[12]   Gartner, Inc. “SaaS CRM Reduces Costs and Use of Consultants” by Michael
       Maoz. 15 October 2008
[13]   Panagiotis G. Ipeirotis, Luis Gravano, Mehran Sahami, “Probe, Count, and
       Classify: Categorizing Hidden Web Databases”, ACM SIGMOD 2001, Santa
       Barbara, California, USA
[14]   McKinsey & Co., “Report: Clearing the Air on Cloud Computing”, Apr 2009,
       http://uptimeinstitute.org/content/view/353/319/.
[15]   NDP LLC, "Why is SAS 70 Relevant to SaaS in Today's Regulatory Compliance
       Landscape?", 2009, http://www.sas70.us.com/industries/saas-and-sas70.php
[16]   Thomas Ristenpart, Eran Tromer, Hovav Shacham, and Stefan Savage, "Hey, You,
       Get Off of My Cloud: Exploring Information Leakage in Third-Party Compute
       Clouds", Proceedings of ACM CCS 2009, Nov. 2009,
       http://cseweb.ucsd.edu/~hovav/dist/cloudsec.pdf
[17] salesforce.com, “ISO 27001 Certified Security”,
     http://www.salesforce.com/platform/cloud-infrastructure/security.jsp.
[18] salesforce.com, “Three global centers and disaster recovery”,
     http://www.salesforce.com/platform/cloud-infrastructure/recovery.jsp.
[19] Nuno Santos, Krishna P. Gummadi, and Rodrigo Rodrigues, “Towards Trusted
     Cloud Computing”, USENIX Workshop on Hot Topics in Cloud Computing, San
     Diego, CA, Jun 2009.
     http://www.usenix.org/events/hotcloud09/tech/full_papers/santos.pdf
[20] Alex Stamos, Andrew Becherer and Nathan Wilcox, "Cloud Computing Models
     and Vulnerabilities - Raining on the Trendy New Parade", Blackhat USA Briefings,
     July 2009, https://media.blackhat.com/bh-usa-09/video/STAMOS/BHUSA09-
     Stamos-CloudCompSec-VIDEO.mov
[21]   UC Berkeley Reliable Adaptive Distributed Systems Laboratory, "Above the
       Clouds: A Berkeley View of Cloud Computing", Feb 2009,
       http://radlab.cs.berkeley.edu/
[22] Luis M. Vaquero, Luis Rodero-Merino, Juan Caceres, Maik Lindner, “A Break in
     the Clouds: Towards a Cloud Definition”, ACM SIGCOMM Computer
     Communication Review, Volume 39, Number 1, January 2009.
[23]   United States Government Accountability Office, "INFORMATION
       ASSURANCE: National Partnership Offers Benefits, but Faces Considerable
       Challenges", Mar 2006, http://www.gao.gov/new.items/d06392.pdf.
Rise of the Machines: Known As Drones...
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
 
A Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersA Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software Developers
 
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyesAssure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdf
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
Scale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterScale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL Router
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a reality
 

Cloud2009

Executive Summary

Cloud computing is a relatively new term that describes a style of computing in which a service provider offers computational resources as a service over the Internet. The main characteristic of cloud computing is the promise that service availability can be scaled up and down according to demand, with the customer paying based on actual resource usage and service level arrangements.

Cloud computing differs from prior approaches to scalability in that it is based on a new business proposition, which turns IT capital costs into operating costs. Under this model, a service provider builds a public cloud infrastructure and offers its services to customers, who pay only for what they use in terms of compute hours, storage, and network capacity. The expectation is that, by efficiently sharing the infrastructure, customers will see the cost and complexity of their IT operations go down significantly.

From a security perspective, sharing the physical IT infrastructure with other tenants creates new attack vectors that are not yet well understood. Virtualization is perceived as providing a very high level of protection, yet it is too early to estimate its long-term effectiveness: after all, it is a software layer subject to implementation faults. Cloud services are in their very early stages of adoption, with small-to-medium enterprises as their first customers. In all likelihood, no determined adversary has systematically tried to penetrate the defense mechanisms of a cloud provider. Just as importantly, providers do not appear ready to deliver the specific offerings that a DoD enterprise needs.

Over the medium and the long run, there is little doubt that DoD's major IT operations will, ultimately, move to scalable cloud services that will increase capabilities and reduce costs. However, current offerings from public cloud providers are not, in our view, suitable for DoD enterprises and do not meet their security requirements. Large civilian enterprises have very similar concerns to those of DoD entities and have taken the path of developing private clouds, which use the same technology but keep the actual operation inside the enterprise data center. In our view, this is the general direction in which DoD entities should focus their efforts. It is difficult to conceive of a scenario under which it becomes acceptable for DoD data and computations to physically leave the enterprise. Moreover, DoD operations have a large enough scale to reap the benefits of cloud services by sharing the infrastructure within DoD.

It is important to recognize that a generic cloud computing platform, by itself, provides limited benefits. These come from virtualization, which, by consolidating existing deployments, reduces overall hardware requirements, simplifies IT management, and provides greater flexibility to react to variations in demand. The true cost savings come from well-known sources: eliminating redundant applications, consolidating operations, and sharing across enterprises. In cloud terms, it is multi-tenancy, the sharing of a platform by multiple tenants, that provides the greatest efficiency gains. It is also important to recognize that efficient scaling of services requires that applications be engineered for the cloud, and it is likely that most legacy applications would need to be reengineered.
Recommendations

The overall assessment of this report is that cloud computing technology has significant potential to substantially benefit DoD enterprises. Specifically, it can facilitate the consolidation and streamlining of IT operations and can provide operational efficiencies, such as rapid automatic scaling of service capacity based on demand. The same technology can simplify the deployment of surge capacity, as well as fail-over and disaster recovery capabilities. Properly planned and deployed, cloud computing services could ultimately bring higher levels of security by simplifying and speeding up the process of deploying security upgrades, and by reducing deployed configurations to a small number of standardized ones.

It is important both to separate the potential of cloud computing from the reality of current offerings, and to critically evaluate the implications of using the technology within a DoD enterprise. Our own view (supported by influential IT industry research) is that the current state of cloud computing offerings does not live up to the level of hype associated with them. This is not uncommon for new technologies that burst to the forefront of the news cycle; realistically, however, it takes time for the technology to mature and fulfill its promise.

From a DoD perspective, the most important question to ask is: How does the business proposition of cloud computing translate into the DoD domain? The vast majority of the benefits from using infrastructure-as-a-service (IaaS) offerings can be had now by aggressively deploying virtualization in existing, in-house data centers. Further efficiencies require higher levels of sharing at the platform (platform-as-a-service, PaaS) and application (software-as-a-service, SaaS) levels. In other words, there need to be multiple tenants that share these deployments and the associated costs. Due to trust issues, it is difficult to envision a scenario where DoD enterprises share deployments with non-DoD entities in a public cloud environment. Further, regulatory requirements with respect to trust, security, and verification have hidden costs that are generally unaccounted for in the baseline (civilian) model. The responsibility of assuring compliance cannot be outsourced and would likely be made more difficult and costly.

Based on the above observations, we make the following general recommendations. A DoD enterprise should:

- Only consider deploying cloud services on facilities that it physically controls. The general assumption is that the physical location of one's data in the cloud should be irrelevant, as users will have the same experience. This is patently not true for DoD systems, as the trustworthiness and security of any facility that hosts data and computations must be assured at all times. Further, the supply chain of the physical hardware must also be trustworthy to ensure that no breaches can be initiated from it.
- Consider vendors who offer technology to be deployed in a private cloud, rather than public cloud services. Expanding on the previous point, it is inconceivable at this point that DoD would relinquish physical control over sensitive data and computations to a third party. Therefore, DoD should look to adopt the technology of cloud computing but modify the business model for in-house use. For example, IaaS technology is headed for commoditization, with multiple competing vendors (VMWare, Cisco, Oracle, IBM, HP, etc.) working on technologies that automate the management of entire data centers. Such offerings can be expected to mature within the next 1-2 years.

- Approach cloud service deployment in a bottom-up fashion. The deployment of cloud services is an evolutionary process, which ultimately requires re-engineering of applications, as well as of business practices. The low-hanging fruit is data center virtualization, which decouples data and services from the physical machines. This enables considerable consolidation of hardware and software platforms and is the basic enabler of cloud mobility. Following that, it is the automation of virtualized environment management that can bring the costs further down. At the next level, shared platform deployments, such as database engines and application servers, provide further efficiencies. Finally, the most difficult part is the consolidation of applications and the shared deployment of cloud versions of these applications. We fully expect that these themes are familiar to DoD IT managers and that some aspects of them are already implemented. This should not come as a surprise, as these are the true concerns of business IT efficiency, and cloud computing does not magically wave them away. However, cloud computing does provide extra leverage in that, once a service is ready for the cloud, it can be deployed on a wide scale at a marginal incremental cost.

- Look for opportunities to develop and share private cloud services within DoD. Sister DoD entities are natural candidates for sharing cloud service deployments; most have common functions and compliance requirements, such as human resources and payroll, that are prime candidates for multi-tenant arrangements. Unlike the public case, the consolidation of such operations can bring the cost of compliance down, as fewer systems would need to be certified.

- Critically evaluate any deployment scenarios using Red Team exercises and similar techniques. As discussed further in the report, most of the service guarantees promised by vendors are based on the design of the systems and have not been independently verified. This may be acceptable in the public domain, where the price of failure for many enterprises is high but rarely catastrophic for the nation; however, DoD facilities must be held to a much higher standard. The only realistic way of assessing how DoD cloud services would perform under stress, or under a sustained attack by a determined adversary, is to periodically simulate those conditions and observe the system's behavior.
1. Introduction

Cloud computing is a relatively new term that describes a style of computing in which a service provider offers computational resources as a service over the Internet. The main characteristic of cloud computing is the promise that service availability can be scaled up and down according to demand, with the customer paying based on actual resource usage and service level arrangements. Typically, the services are provided in a virtualized environment, with computation potentially migrating from machine to machine to allow for optimal resource utilization. This process is completely transparent to the clients, as they see the same service regardless of the physical machine providing it. The name 'cloud computing' alludes to this concept: the Internet is often depicted as a cloud on architectural diagrams, hence the computation happens 'in the cloud'. Historically, the basic concept was pioneered by IBM in the 1960s under the name utility computing; however, it has been no more than a decade since the cost and capabilities of commodity hardware have made it possible for the idea to be realized on a massive scale.

In general, the service concept can be applied at three different levels, and actual vendor offerings may be a combination of these:

- Software as a Service (SaaS). Under the SaaS model, application software is accessed and managed over the network and is usually hosted by a service provider. Licensing is tied to the number of concurrent users, rather than to physical product copies. For example, Google provides all of its applications (Gmail, Google Docs, Picasa, etc.) as services, and most of them are not even available as standalone products.

- Platform as a Service (PaaS). PaaS offers as a service a specific solution platform on top of which developers can build applications. For example, in traditional computing, LAMP (Linux, Apache, MySQL, Perl/PHP/Python) is a popular choice for developing Web applications. Example PaaS offerings include Google App Engine, force.com, and Microsoft's Windows Azure Platform.

- Infrastructure as a Service (IaaS). IaaS goes one level deeper and provides an entire virtualized environment as a service. The services can be provided at the operating system level, or even the hardware level, where virtual compute, storage, and communications resources can be rented on an on-demand basis.

Given the wide variety of offerings, many of which are not even directly comparable, this report provides a conceptual analysis of the field as it relates to Navy-specific requirements. This drives the analysis of specific offerings and will extend the useful lifetime of this report by providing a framework for evaluating other offerings.
2. Background

"The security of these cloud-based infrastructure services is like Windows in 1999. It's being widely used and nothing tremendously bad has happened yet. But it's just in early stages of getting exposed to the Internet, and you know bad things are coming."
- John Pescatore, Gartner VP and security analyst (quoted by the Financial Times, Aug 3, 2009; http://www.ft.com/cms/s/0/5aa4f33e-7fc4-11de-85dc-00144feabdc0.html)

Before presenting the security implications of transitioning the IT infrastructure to the cloud, it is important to understand the business case behind the trend and the technological means by which it is being implemented. These have direct bearing on the ultimate cost/benefit analysis with respect to security, and it is important to understand that some of the security assumptions behind the typical enterprise analysis may not hold true for a DoD implementation. In turn, this may prevent at least some of the possible efficiencies and may completely alter the outcome of the decision process.

2.1. The Promise and Early Reality of the Cloud

There are dozens of definitions of the Cloud, and Vaquero et al. [22] recently completed an in-depth survey on the topic. Based in part on those results, we offer one of the more rigorous definitions:

"Clouds are a large pool of easily usable and accessible virtualized resources (such as hardware, development platforms and/or services). These resources can be dynamically reconfigured to adjust to a variable load (scale), allowing also for an optimum resource utilization. This pool of resources is typically exploited by a pay-per-use model in which guarantees are offered by the Infrastructure Provider by means of customized service level agreements (SLA)."

The above statement can be broken into three basic criteria for a "true" cloud:

- The management of the physical hardware is abstracted from the customer (tenant); this goes beyond virtualization: the location of the data and the computation is transparent to the tenant.

- Infrastructure capacity is highly elastic, allowing tenants to consume almost exactly the amount of resources demanded by users; the infrastructure automatically scales up and down on a fine time scale.

- Tenants incur no upfront capital expenditures; infrastructure costs are paid for incrementally as an operating expense.

The main selling point of all cloud computing offerings is that they provide significant reductions in the total cost of ownership, offer more flexibility, and have low entrance costs for new enterprises. It is important to realize that the main source of efficiency in a public cloud infrastructure comes from sharing the cost with other tenants. Cloud platforms can provide savings at each layer of the hardware/software stack, starting with the hardware, all the way up to the application layer:
- Data center maintenance
- Infrastructure/network maintenance
- Infrastructure management and provisioning
- Database management, provisioning, and maintenance
- Middleware/application server management
- Application management
- Patch deployment and testing
- Upgrade management

Typical data center arrangements, in which racks of servers and storage are rented and then managed by the tenant, offer the lowest level of sharing and, therefore, the lowest level of overall cost reduction. At the other end of the spectrum is a multi-tenant arrangement in which multiple parties share the database, middleware, and application platforms and outsource all maintenance to the provider. This arrangement promises the greatest efficiency improvements. For example, Gartner Research reports [12] that its clients are experiencing project savings of 25% to 40% by deploying customer relationship management solutions via the SaaS model.

The real value of cloud computing kicks in when a full application platform is leveraged, either to purchase applications written to take advantage of a multi-tenant platform, or to (re-)develop the applications on this new stack. For example, moving existing mail servers from an in-house data center to a virtual machine somewhere in the cloud will not yield any notable savings, as the tenant would still be responsible for maintenance, patching, and provisioning. The latter point is important: provisioning is done for peak demand, the capacity is utilized only a small fraction of the time, and the cost is borne full time. (According to McKinsey's survey [14], the average physical server is utilized at about 10%.) In contrast, a true cloud solution, such as Google Mail, would relieve the tenant of all of these concerns, potentially saving non-trivial amounts of physical and human resources.

McKinsey's report offers a sobering view of current cloud deployments: current cloud offerings are most attractive for small and medium-sized enterprises, and adoption by larger enterprises (with the end goal of replacing the in-house data centers) faces significant hurdles:

- Current cloud computing offerings are not cost-effective compared to large enterprise data centers. (This conclusion is based on the assumption that existing IT services would be replicated on the cloud and, therefore, most of the potential efficiencies would not be realized.) Figure 1 illustrates the point by considering Amazon's EC2 service for both Windows and Linux. The overall conclusion is that most EC2 options are more costly than the TCO of a typical data center. It is possible to improve the cloud price through pre-pay agreements for Linux systems (but not for Windows). Further, based on case studies, it is estimated that the total cost of ownership (TCO) for one month of CPU processing is about $150 for an in-house data center and $366 on EC2 for a comparable instance. Consequently, cloud costs need to go down substantially before they can justify wholesale data center replacement. A back-of-the-envelope calculation at the end of this section makes the utilization argument concrete.
Figure 1: EC2 monthly CPU equivalent price options [14]

- Security and reliability concerns will have to be mitigated, and applications' architecture needs to be re-designed for the cloud. Section 2.3 discusses this point in more detail.

- Business perceptions of increased IT flexibility and effectiveness will have to be properly managed. Currently, cloud computing is relatively early in its technology cycle, with actual capabilities considerably lagging public perceptions. There is no reason to doubt the momentum, or long-term viability, of the technology itself, as all the major IT vendors (IBM, HP, Microsoft, Google, Cisco, VMWare, etc.) are working towards providing integrated solutions. Yet, mature solutions for larger enterprises appear a few years away.

- The IT organizations will have to adapt to function in a cloud-centric world before all efficiencies can be realized.

Most of the gains in a private cloud deployment (using a cloud platform internally) come from virtualizing server storage, network operations, and other critical building blocks. In other words, by deploying virtualization aggressively, the enterprise can achieve close to 90% of the utilization a cloud service provider (such as Google) can achieve. Figure 2 illustrates the point. The traditional IT stack consists of two layers: facilities (the actual, standardized data center facilities) and the IT platform (standardized servers, networking, storage, and OS). Publicly-announced private clouds are essentially an aggressive virtualization program on top of it: the virtualization layer contains hypervisors and virtual machines in which OS instances execute.
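To make the cost discussion above concrete, consider the following back-of-the-envelope calculation. All parameters are illustrative assumptions taken from the McKinsey figures quoted in this section, not vendor quotes; the point is that the comparison hinges on utilization.

    # Back-of-the-envelope TCO comparison using the McKinsey figures quoted
    # above. All numbers are illustrative assumptions, not vendor quotes.

    INHOUSE_TCO_PER_CPU_MONTH = 150.0   # in-house data center, per CPU-month [14]
    EC2_PRICE_PER_CPU_MONTH   = 366.0   # comparable EC2 instance, per CPU-month [14]
    AVG_UTILIZATION           = 0.10    # typical physical server utilization [14]

    # Cost per *useful* CPU-month: in-house capacity is paid for full time but
    # used only a fraction of the time; an elastic cloud instance is assumed
    # to be paid for only while it is doing useful work.
    inhouse_per_useful_month = INHOUSE_TCO_PER_CPU_MONTH / AVG_UTILIZATION
    cloud_per_useful_month = EC2_PRICE_PER_CPU_MONTH

    print(f"in-house: ${inhouse_per_useful_month:,.0f} per useful CPU-month")
    print(f"cloud:    ${cloud_per_useful_month:,.0f} per useful CPU-month")

    # Utilization at which raw in-house TCO matches the on-demand price:
    breakeven = INHOUSE_TCO_PER_CPU_MONTH / EC2_PRICE_PER_CPU_MONTH
    print(f"in-house wins on raw cost above {breakeven:.0%} utilization")

At the typical 10% utilization, the in-house cost per useful CPU-month ($1,500) is far above the on-demand price ($366); above roughly 41% utilization the in-house option wins on raw cost. This reconciles the two claims made in this section: raw EC2 prices exceed in-house TCO, yet elasticity can still pay off for poorly utilized capacity.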
Figure 2: Average server utilization rates [14]

2.2. DoD Enterprises and the Cloud

It should be clear that, as more and more functions are outsourced to the cloud provider and are (implicitly) shared with other parties, a great deal of control over the data and computation is handed over to the provider. For most private enterprises (the most enthusiastic adopters so far), this may not present the same types of issues as for a DoD entity:

- Legal Requirements. It may not be permissible for sensitive government data and processing to be entrusted to a cloud provider. Specifically, it is likely not acceptable for DoD to relinquish control of their physical location, and it is clear that they would have to be guaranteed to stay on US territory. For example, some current cloud computing providers, such as force.com, use backup data centers outside of the U.S. as a disaster mitigation strategy. Further complicating matters is knowing where backup data is stored, and this picture is not clear: force.com provides conflicting claims on this issue, stating both that backup tapes never leave the data center [17] and that they are periodically moved offsite [18]. Google Apps' terms of service state that Google has the right to store and process customer data in the United States "or in any other country in which Google or its agents maintain facilities".

- Compliance. DoD enterprises must meet specific standards in terms of trust and security, starting with physical security and including security clearances, fine-grained access control, and strict accounting. Such requirements are generally lacking in the commercial space and, given the early stage of the technology development cycle, any compliance claims on the part of providers need to be scrutinized vigorously.

- Legacy systems. Many DoD enterprises have to support and maintain numerous legacy systems. Although the same is true for the commercial sector, it is not clear that a DoD entity will have the resources and the freedom to replace those with COTS solutions. Further, COTS solutions may not work as well (out of the box) even for relatively standard functions like personnel and supply management, simply because the government tends to do things differently.
- Risk and Public Relations (PR) Risk. Objectively, the risk profile of DoD agencies is considerably higher than that of private enterprises, as national security concerns are at stake. Accordingly, the public's tolerance for even minor breaches and failures is almost non-existent. Therefore, decision makers must necessarily put extra weight on the risk side of the equation and invest additional resources in managing and mitigating those risks. Thus, a risk scenario that is statistically very unlikely, and can be ignored by private enterprises, may require consideration (and resources) in a DoD enterprise. With respect to IT, this would usually manifest itself in the form of customized applications and procedures. Those are additional costs that cloud computing is unlikely to remedy, as they will have to be replicated on the cloud.

2.3. Security Concerns on the Cloud

Critical analysis reveals that cloud computing, both as a concept and as it is currently implemented, brings a whole suite of security problems that stem primarily from the fact that physical custody of data and computations is handed over to the provider.

Physical security. The first problem is that we have a completely new physical security perimeter; for most enterprises this may not be noticeable, but for a DoD entity the considerations could be different. First, is the location where the data is housed physically as secure as the in-house option? If the computation is allowed to 'float' in the cloud, are all the possible locations sufficiently secure?

Confidentiality. Traditionally, operating systems are engineered such that a super-user has unlimited access to all resources. This approach was recognized as a problem in database systems, so trusted DBMSs were developed in which the administrator could manipulate the structure of the database but not have (by default) access to the records. The super-user problem escalates in the cloud environment: now the hypervisor administrator has control over all OS instances under his command. Further, if the computation migrates, another administrator has access to it, and so on. The concept of trusted cloud computing has recently been put forward as a research idea [19] but, to the best of our knowledge, no technical solutions are yet deployed. Another concern is the physical location of the data: if it is migrated and replicated numerous times, what are the guarantees that no traces will be left behind? Along the same lines, OS instances are routinely cloned and suspended, and the memory images contain sensitive data: keys, unique identifiers, password hashes, etc. (A partial tenant-side mitigation is sketched below.)

New attack scenarios. In the cloud, we have new neighbors that we know nothing about. Normally, if they are malicious, they would have to work hard to breach the security perimeter before they can launch an attack. In the cloud, they already share the infrastructure and can exploit security flaws in virtual machine hypervisors, virtual machines, or third-party applications for privilege escalation, and attack from within the cloud. In public cloud computing services, such as Amazon's, the barriers to entry are very low. The services are highly automated, and all that is needed is a credit card to open an account and start probing the infrastructure from within.
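With respect to the confidentiality concern above, one partial, tenant-side mitigation is to encrypt sensitive data before it ever reaches the provider, with keys that never leave the enterprise. The following minimal sketch uses the third-party Python cryptography package; the sample record is fabricated, and key management is deliberately simplified. Note that, as discussed for SimpleDB in Section 6.2, such encryption disables server-side processing of the protected fields.

    # Minimal sketch: tenant-side ("client-side") encryption so that
    # cleartext never reaches the provider. Requires the third-party
    # 'cryptography' package; key management is deliberately simplified.
    from cryptography.fernet import Fernet

    # Generated and stored inside the enterprise; never uploaded to the cloud.
    key = Fernet.generate_key()
    f = Fernet(key)

    record = b"ssn=078-05-1120;clearance=S"   # fabricated sample data
    ciphertext = f.encrypt(record)            # what the provider stores

    # A hypervisor administrator who dumps the stored blob sees only ciphertext.
    assert f.decrypt(ciphertext) == record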
Limits on security measures. Not having access to the physical infrastructure implies that traditional measures that rely on physical access can no longer be deployed. Security appliances, such as firewalls, IDS/IPS, spam filters, and data leak monitors, can no longer be used. Even for software-based protection mechanisms, there are no solutions that allow the tenant to control policy decisions. This has been recognized as a problem, and VMWare has recently introduced the VMSafe interface, which allows third-party modules to control security policy decisions. Even so, tenants also need to be concerned about the internal security procedures of the provider.

Testing and compliance. Part of ensuring the security of a system is to continuously test it and, in some cases, such testing is part of the compliance requirement. By default, providers prohibit malicious traffic and possibly even vulnerability scans [2], although it may be possible to reach an arrangement as part of the SLA.

Incident response. Early detection and response is one of the critical components of robust security, and the cloud makes those tasks more challenging. To perform an investigation, the administrator will have to rely on logs and other information from the provider. It is essential that a) the provider collects the necessary information; and b) the SLA provides for that information to be accessible in a timely manner.

Technical challenges. OS security is built for physical hardware, and in its transition to the virtual environment there are several challenges that are yet to be addressed:

- Natural sources of randomness. Cryptographic implementations rely on random numbers, and it is the job of the OS to collect random data from non-deterministic events. In a virtualized environment, it is possible that cloned instances would behave in a predictable-enough manner to allow an adversary to execute attacks [20] (see the sketch after this list).

- Trusted platform modules (TPM). Since the physical hardware is shared among several instances, the TPM-supported remote attestation process would have to be redesigned.
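The cloned-instance randomness concern can be illustrated with a toy experiment: if the state of the random number generator (or of the OS entropy pool) is captured in a VM snapshot, every clone resumed from that image derives the same "fresh" secrets. The sketch below uses Python's random module purely for illustration; real systems draw on OS entropy sources, but the failure mode after cloning is the same in kind [20].

    # Toy illustration of the cloned-instance randomness problem: if the
    # PRNG state (or the OS entropy pool) is frozen in a VM snapshot, every
    # clone resumed from that snapshot derives the same "random" secrets.
    import random

    SNAPSHOT_PRNG_STATE = 0xC10D  # stands in for entropy frozen in the image

    def session_key_after_resume(seed: int) -> int:
        rng = random.Random(seed)      # both clones resume with identical state
        return rng.getrandbits(128)    # "fresh" session key derived post-resume

    clone_a = session_key_after_resume(SNAPSHOT_PRNG_STATE)
    clone_b = session_key_after_resume(SNAPSHOT_PRNG_STATE)
    assert clone_a == clone_b          # distinct VMs, identical keys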
3. Representation of the Navy Data Center System Environment

Currently, SPAWAR New Orleans supports over two dozen applications running in a virtualized data center environment. At any given time, up to 300 VM instances are in execution. Over time, it is expected that both the number of applications and the workload will continue to grow steadily and increase demand on the operation. Part of the mission is to support incident management, such as hurricane response, which requires that redundant and reliable capacity be available. It is self-evident that, for most of the applications, it is critical that they be available and functioning correctly under any circumstances.

The baseline platform configuration consists of commodity Sun Microsystems and Dell hardware, and Sun Solaris and Microsoft Windows operating systems. The virtualization layer is implemented using VMWare products. Data storage is organized using a centralized Storage Area Network, with local disk boot configuration and SAN-based application installations.

Incident Response

Incident response systems support communication, coordination, and decision making under emergency conditions, such as natural disasters. Specifically, the Navy provides a messaging platform that allows information to be broadly disseminated among first responders. It also provides tracking of assets, with information provided by transponders, and delivers up-to-date information to decision makers through appropriate visualization.

Overall, most of the information exchanged via incident response applications is of a sensitive nature and, as such, must be closely monitored and regulated. Specific concerns include the location of assets and personally-identifiable information, as well as operational security. The latter presents a difficult challenge, as it often has to do with the accumulation of individual data points over time that can, together, paint a bigger picture of the enterprise. This data may include patterns of response, organizational structure, and availability of assets that would normally be kept secret.

The threat to operational security in this context is well illustrated by recent research on hidden databases on the Internet [13]. A hidden database is one that is not generally available but does provide a query interface, such as a Web form, that can return a limited sample. An example would be any reservation system that provides information on availability for specific parameters. Researchers have shown that it is quite feasible to devise automated queries that exploit the limited interface to obtain representative samples of the underlying database and to accurately estimate the overall content of the hidden database. The solution to such concerns involves careful design of the application interface, so that returned data does not permit an adversary to collect representative samples. Absent specifics of the Navy application pool, this concern is not discussed further in this report.
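The sampling risk described above can be sketched in a few lines. In the toy model below (the schema, attribute values, and number of probes are all invented for illustration), the interface returns at most K rows per query, yet automated probing over the form's parameter space still lets an adversary estimate the composition of the hidden table, in the spirit of [13].

    # Toy sketch of hidden-database sampling in the spirit of [13]: a form
    # interface that returns at most K rows per query still leaks enough to
    # estimate the table's composition. Schema and data are invented.
    import random

    K = 5  # cap on rows returned per query (the "limited interface")
    _assets = [{"region": random.choice("NSEW"), "type": random.choice("ABC"),
                "id": i} for i in range(10_000)]  # the hidden table

    def query(region, type_):
        hits = [r for r in _assets
                if r["region"] == region and r["type"] == type_]
        return hits[:K]  # the interface only ever exposes K rows

    # Adversary: automated probes over the parameter space, deduplicated.
    sample = {}
    for _ in range(500):
        for row in query(random.choice("NSEW"), random.choice("ABC")):
            sample[row["id"]] = row

    share = sum(r["region"] == "N" for r in sample.values()) / len(sample)
    print(f"estimated share of region-N assets: {share:.0%}")  # close to 25%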
Logistics

Navy data centers provide logistic systems for Navy operations by tracking assets, equipment, and maintenance schedules. The execution of such applications raises concerns similar to those discussed in the previous section, especially operational security.

Human Resources

Human resources applications provide the typical HR functions of a large enterprise, as applied to the specific needs of the Navy. In addition to hiring and discharging of service members, the system keeps track of skill sets and pay certifications. An additional concern is the non-disclosure of personally-identifiable information to third parties. This includes restricting the release of information that can indirectly lead to such disclosure. As with other operational security concerns, the counter-measures are application-specific and are not discussed further.

4. Navy Security Concerns and Evaluation Criteria

In general terms, the Navy's IT operations have several major security concerns, some of which are specific to its operation as a DoD enterprise. There are a number of factors that a repeatable decision-making process should incorporate when assessing cloud computing vendor offerings. As this is an unclassified report, there are limits on the amount of specific information that we could obtain and provide. Therefore, the outlined criteria should be treated as a high-level roadmap, and the specific details need to be fleshed out when an actual decision point is at hand.

Release-code Auditing

One of the main concerns is release-code auditing, which requires that all source code be audited before being put into production. This is, effectively, a major restriction on the kind of services that can be provided by commercial vendors on general-purpose cloud platforms. It is likely economically and logistically infeasible for vendors to undergo a certification process for all the code that could provide services to the Navy. For example, it would not be feasible to use an existing scalable email platform, such as the one provided by Google, to improve the scalability of the Navy's operations.

Compliance with DoD Regulations

Another hurdle in the adoption of COTS cloud services is the need to ensure that all services comply with internal IT requirements. Due to the sensitive nature of this information, no specific issues are discussed in this report. It is worth noting, however, that even if the problem can be resolved administratively and the vendor can demonstrate compliance, the Navy would still have to dedicate resources to monitor this compliance on a continuous basis.

The Common Criteria for Information Technology Security Evaluation (CC) is an international standard (ISO/IEC 15408) for computer security certification. CC is a framework in which computer system users can specify their security requirements, vendors can then implement and/or make claims about the security attributes of their products, and testing laboratories can evaluate the products to determine if they actually meet the claims. DoD security requirements can be expressed, in part, by referring to specific protection profiles, which can help determine minimum qualifications. For example, VMWare's ESX Server 3.0.2 and VirtualCenter 2.0.2 have earned EAL4+ recognition, whereas Xen, the virtualization solution used by Amazon, has not been certified.

Overall, the CC certification is a long and tedious process and speaks more directly to the commitment of the vendor than to the quality of the product. As a 2006 GAO report [23] points out in its findings, there is "a lack of performance measures and difficulty in documenting the effectiveness of the NIAP process". Another side effect of certification is that it takes up to 24 months for a product to go through an EAL4 certification process [23] (p. 8), which limits the availability of products with up-to-date features. Given the fast pace of development in cloud computing products, basing a decision solely on certification requirements may severely constrain DoD's choice. One possible approach is to formulate narrower, agency-specific certification requirements to speed up the certification. It is also important to recognize that the common EAL4 certification does not cover some of the more sophisticated attack patterns that are relevant to cloud computing, such as side channel attacks.
Another existing regulation that could provide a ready reference point for evaluation is the Health Insurance Portability and Accountability Act of 1996 (HIPAA). HIPAA is a set of established federal standards, implemented through a combination of administrative, physical, and technical safeguards, intended to ensure the security and privacy of protected health information. These standards affect the use and disclosure of protected health information by certain covered entities (such as healthcare providers engaged in electronic transactions, health plans, and healthcare clearinghouses) and their business associates.

In technical terms, HIPAA requires the encryption of sensitive information 'in-flight' and 'at-rest'; dictates basic administrative and technical procedures for setting and enforcing access control policies; and requires in-depth auditing capabilities, data back-up procedures, and disaster recovery mechanisms. These requirements are based on established best security practices and are common to all applications dealing with sensitive information. From a DoD perspective, these are minimal standards, and additional safeguards are likely necessary for most applications. In that respect, HIPAA can be considered a minimal qualification requirement.
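For reference, the 'in-flight' half of that requirement is conventionally met with TLS. A minimal illustration using only the Python standard library follows; the host name is a placeholder, not an actual Navy or vendor endpoint.

    # Minimal illustration of encryption "in-flight": wrap a TCP connection
    # in TLS with certificate verification. Standard library only; the host
    # name is a placeholder.
    import socket
    import ssl

    HOST = "records.example.mil"  # placeholder endpoint

    ctx = ssl.create_default_context()        # verifies certs, checks hostname
    with socket.create_connection((HOST, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
            print(tls.version())              # e.g. 'TLSv1.2'
            tls.sendall(b"GET /status HTTP/1.0\r\nHost: "
                        + HOST.encode() + b"\r\n\r\n")
            print(tls.recv(1024))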
Overall, the emergence of any major standards targeted at cloud computing should be taken as a cue that cloud computing is reaching a certain level of maturity, as standards development tends to follow rather than lead. However, the emergence of standards cannot, by itself, be relied upon as a timing mechanism to identify optimal points for adopting new technologies; standards processes have, at best, a mixed record on their timeliness. In our view, given the current pace of development by major IT companies, it is likely that, within the next two years, mature offerings will start to differentiate themselves from the rest.

Limits on Physical Security

Currently, the security of the services is provided, in part, through the use of appliances, such as firewalls and intrusion detection systems. These devices are used to compartmentalize the execution of different applications and to isolate faults and breaches. Having a separate and specialized device creates an independent layer of protection that considerably raises the bar for attackers: a wholesale breach requires that multiple layers be breached. In a cloud deployment, there is no direct analog to this practice, although the industry has recognized the problem and is moving toward offering solutions. Specifically, virtualization vendors are beginning to offer an interface (API) through which third-party virtual appliances can be plugged in to enforce security policies and monitor traffic and execution environments. The conceptual problem is that such components depend on the security of the hypervisor: if it is breached, attackers have the opportunity to circumvent the virtual security appliances. Eventually, solutions to this problem will likely be found; CPU vendors, such as Intel, are actively developing hardware support for virtualization. Yet, at this relatively early stage of development, there is no proof that virtual appliances offer the same level of protection as physical ones.

Underlying Hardware Security and Trust

Since cloud infrastructure is likely to be remote, it is impractical or impossible for the Navy to scrutinize it against its heightened level of security concerns, and the issue of hardware trojans must be addressed. With the advent of global markets, vertically integrated chip manufacturing has given way to a cost-efficient, multi-stage supply chain whose individual stages may or may not pass through environments friendly to US interests (Figure 3).

Figure 3: IC manufacturing supply chain [9]

A DoD report crystallizes the issues thusly [9]:

"Trustworthiness of custom and commercial systems that support military operations [..] has been jeopardized. Trustworthiness includes confidence that classified or mission critical information contained in chip designs is not compromised, reliability is not degraded or untended design elements inserted in chips as a result of design or fabrication in conditions open to adversary agents. Trust cannot be added to integrated circuits after fabrication; electrical testing and reverse engineering cannot be relied upon to detect undesired alterations in military integrated circuits. The shift from United States to foreign IC manufacture endangers the security of classified information embedded in chip designs; additionally, it opens the possibility that 'Trojan horses' and other unauthorized design inclusions may appear in unclassified integrated circuits used in military applications. More subtle shifts in process parameters or layout line spacing can drastically shorten the lives of components."

If, for conventional, locally accessible IT delivery platforms, at least the design specifications and the testing stages can be controlled, the situation is exacerbated for remote cloud computing platforms: a very real possibility exists of hardware-based malicious code hiding in the underlying integrated circuits. Potentially serious ramifications of hardware-based subversion range from unmitigatable cross-tenant data leakage to VM identification and denial of service attacks [16]. We stress that this type of surreptitious hardware subversion is not within the ability of cloud vendors' AV software to deal with; detecting such malicious code in hardware at all remains an ongoing, open research problem. The risk is exacerbated, however, by the lack of infrastructure control inherent in using public cloud offerings, as well as by the motivation of the cloud infrastructure owner to purchase and deploy whatever COTS hardware is available.

Client-side Vulnerabilities Mitigation

Cloud clients (in particular, web browsers) incorporate more functionality than the mere display of text and images, including rich dynamic content comprising media playback and interactive page elements such as drop-down menus and image roll-overs. These features include extensions such as the Javascript programming language, additional client-side features such as application plugins (Acrobat Reader, QuickTime, Flash, Real, and Windows Media Player), and Microsoft-specific enhancements such as Browser Helper Objects and ActiveX (Microsoft Windows's interactive execution framework). Invariably, these extensions have shown implementation flaws that can be maliciously exploited as security vulnerabilities to gain unauthorized access and obtain sensitive information. In August 2009, Sensepost demonstrated a proof-of-concept password brute-forcing attack using password reset links (most, if not all, cloud applications use some form of password recovery, whether email- or secret-question-based), as well as several ways to steal cloud resources: stealing the Windows license of an Amazon cloud instance, paid application theft (via DevPay), and a "cloud DoS" with exponential, virus-like growth of EC2-hosted VM instances [4].

When a user accesses a remote site with a client (say, a browser), he types in a URL and initiates the connection. Once connected, a relationship of trust is established in both directions: the user trusts the page and content display (having initiated the connection), and conversely the site trusts the user's browser (in executing actions from it). It is this trust, together with the various features incorporated into rich clients, that attackers can subvert through what are called Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF) attacks. Cross-site scripting attacks mostly use legitimate web sites as a conduit, where web sites allow other (malicious) users to upload content or post links on the web site. It must be emphasized that, from the point of view of the clients, neither HTTPS (the encrypted channel with the little lock in the browser that denotes 'safety') nor logins protect against XSS or CSRF attacks. In addition, unlike XSS attacks, which necessitate user action by clicking on a link, CSRF attacks can also be executed without the user's involvement, since they exploit explicit software vulnerabilities (i.e., predictable invocation structures) on the cloud platform. As such, the onus to prevent CSRF attacks falls squarely on the cloud vendor's application developers. Some login and cryptographic token approaches, if conscientiously designed to prevent CSRF attacks, can be of help [5].
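The cryptographic-token defense just mentioned can be sketched in a few lines: the server binds an unguessable secret to each session and rejects state-changing requests that do not echo it back. The sketch below is framework-free Python and deliberately simplified; production systems should rely on their web framework's built-in CSRF protection.

    # Framework-free sketch of the synchronizer-token CSRF defense referenced
    # above: state-changing requests must echo a per-session secret that an
    # attacker's cross-site request cannot know.
    import hmac
    import secrets

    _session_tokens = {}  # session_id -> CSRF token (server-side state)

    def issue_token(session_id):
        token = secrets.token_urlsafe(32)      # unguessable, per session
        _session_tokens[session_id] = token
        return token                           # embedded in the HTML form

    def verify(session_id, submitted_token):
        expected = _session_tokens.get(session_id, "")
        return hmac.compare_digest(expected, submitted_token)  # constant-time

    sid = "sess-42"
    form_token = issue_token(sid)
    assert verify(sid, form_token)          # legitimate form post succeeds
    assert not verify(sid, "forged-value")  # cross-site forgery is rejected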
Availability, Scalability, and Resistance to Denial of Service Attacks

Practically all service providers claim uptime well above 99%, scaling on demand, and robust network security. By and large, such claims are based on a few architectural features (redundant power supply, multiple network connections, presence of network firewalls) and should be scrutinized with a healthy dose of skepticism. Few (if any) of the providers have performed large-scale tests to quantify how their services really behave under stress. Customers should demand concrete proof that the provider can fulfill the promises in the service level agreement (SLA). In our view, the only way to adequately assess the performance-related vulnerabilities is to perform a Red Team exercise. Such an exercise, unlike the SLA, would be able to answer some very specific questions:

- How many simultaneous infrastructure failures are necessary to take the system down?
- How quickly can it recover?
- How does the system perform under excessive load: does performance degrade gracefully, or does it collapse?
- How well does the system really scale? In other words, as the load grows, how fast do resource requirements grow?
- What happens if other tenants misbehave: does that affect performance?
- How well can the service resist a massive brute-force DoS attack?

The latter is not just a function of the deployed hardware and software but also includes the adequacy and training of the administrative staff and their ability to communicate and respond. Short of a complete Red Team simulation, an experienced technical team should examine the architecture of the system in detail and try to answer as many of the above questions as possible. This will not give the same level of assurance, but it should be the minimum assessment performed on an offering selected for potential acquisition.
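Even a crude probe of the graceful-degradation question above is informative: drive a test endpoint at increasing concurrency and observe whether latency grows smoothly or falls off a cliff. A standard-library-only sketch follows; the URL is a placeholder, and, per the Testing and Compliance discussion earlier in this section, such probes should only be run against systems one is authorized to test.

    # Crude probe of "does performance degrade gracefully?": measure median
    # latency at increasing concurrency. Standard library only; the URL is a
    # placeholder. Run only against systems you are authorized to load-test.
    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://staging.example.mil/health"  # placeholder test endpoint

    def timed_request(_):
        start = time.monotonic()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        return time.monotonic() - start

    for workers in (1, 8, 32, 128):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            latencies = list(pool.map(timed_request, range(workers * 4)))
        print(f"{workers:4d} concurrent: "
              f"median {statistics.median(latencies):.3f}s")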
5. Vendor Assessment Overview

The presented assessments should be viewed as a small but typical sample of the types of services offered on today's market. The research effort was hampered by a relative dearth of specific information when it comes to the security of the offered services. Although our sample of surveyed products was by no means exhaustive, we found it difficult to extract information from vendor representatives beyond what is already publicly available. Some completely ignored our requests for information and were not included here.

6. Vendor Assessment: Amazon Web Services (IaaS)

6.1. Description

There are still differing opinions on what exactly constitutes cloud computing, yet there appears to be a consensus that, whatever the definition might be, Amazon's Web Services (AWS) are a prime example. Frequently, IaaS is also referred to as utility computing. Broadly, AWS consists of several basic services:

- Amazon Elastic Computing Cloud (EC2) is a service that permits a customer to create a virtual OS disk image (in a proprietary Amazon format) that will be executed by a web service running on Amazon's hosted web infrastructure.

- Amazon Elastic Block Store (EBS) provides block-level storage volumes for use with EC2 instances. EBS volumes are off-instance storage that persists independently from the life of an instance. Volumes range from 1 GB to 1 TB and can be mounted as devices by EC2 instances. Multiple volumes can be mounted to the same instance.

- Amazon SimpleDB is a web service providing the core database functions of data indexing and querying. The database is not a traditional relational database, although it provides an SQL-like query interface.

- Amazon Simple Storage Service (S3) is a service that essentially provides an Internet-accessible remote network share.

- Amazon Simple Queue Service (SQS) is a messaging service that offers a reliable, scalable, hosted queue for storing messages as they travel between computers. Its main purpose is to automate workflow processes, and it provides the means to integrate with EC2 and other AWS offerings.

- Amazon Elastic MapReduce is a web service that emulates the MapReduce computational model adopted by Google (the programming model is sketched after this list). It utilizes a hosted Hadoop (http://hadoop.apache.org/) framework running on the EC2 infrastructure. Hadoop is the open-source implementation of the ideas behind Google's MapReduce and is supported by major companies, such as IBM and Yahoo!.

- Amazon CloudFront is a recent addition that automates the process of creating a content distribution network.

- Amazon Virtual Private Cloud (VPC) enables enterprises to connect their existing infrastructure to a set of isolated AWS compute resources via a Virtual Private Network (VPN) connection, and to extend their existing management capabilities, such as security services, firewalls, and intrusion detection systems, to include their AWS resources.
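The MapReduce model referenced above can be conveyed in a few lines of plain Python. This is a single-process illustration of the programming model only; it involves neither Hadoop nor any Amazon service.

    # Single-process illustration of the MapReduce programming model that
    # Elastic MapReduce hosts (via Hadoop); this sketches the model only.
    from collections import defaultdict

    def map_phase(doc):
        for word in doc.split():
            yield (word.lower(), 1)            # emit (key, value) pairs

    def shuffle(pairs):
        groups = defaultdict(list)             # group values by key
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(key, values):
        return (key, sum(values))              # fold each key's values

    docs = ["the cloud scales", "the cloud is shared"]
    pairs = (pair for doc in docs for pair in map_phase(doc))
    counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
    print(counts["cloud"])  # 2

In a real deployment, the map and reduce phases run in parallel across many machines, and the shuffle step is performed by the framework; the programming model the tenant writes against is the same.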
It is fair to say that Amazon's service interfaces are emerging as one of the early de facto technical standards of cloud computing. One recent development that can further accelerate this trend is Eucalyptus (Elastic Utility Computing Architecture Linking Your Programs To Useful Systems), an open-source software infrastructure for implementing cloud computing on clusters. The current interface to Eucalyptus is compatible with Amazon's EC2, S3, and EBS interfaces, but the infrastructure is designed to support multiple client-side interfaces. Eucalyptus is implemented using commonly available Linux tools and basic Web-service technologies and is becoming a standard component of the Ubuntu Linux distribution.

6.2. Security Assessment

The primary source of information for this assessment is the AWS security whitepaper [3], which focuses on three traditional security dimensions: confidentiality, integrity, and availability. It does not cover all aspects of interest but does provide a good sampling of the overall state of affairs.

Certifications and Accreditations

Amazon is cognizant of its customers' need to meet certification requirements; however, actual certification efforts appear to be at an early stage: "AWS is working with a public accounting firm to ensure continued Sarbanes Oxley (SOX) compliance and attain certifications such as recurring Statement on Auditing Standards No. 70: Service Organizations, Type II (SAS70 Type II) certification."

Separately, Amazon provides a short white paper on building HIPAA-compliant applications [1]. From its content, it becomes clear that virtually all responsibility for complying with the regulations falls on the customer.

AWS provides key-based authentication to access its virtual servers. Amazon EC2 creates a 2048-bit RSA key pair, with private and public keys and a unique identifier for each key pair, to facilitate secure access.
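The key-pair mechanism itself is easy to reproduce locally. The sketch below generates a key pair of the same kind using the third-party Python cryptography package (recent versions); it calls no AWS API and only illustrates the mechanism.

    # Generate a 2048-bit RSA key pair of the kind EC2 issues for instance
    # access. Uses the third-party 'cryptography' package (recent versions);
    # no AWS API is involved; this only illustrates the mechanism.
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    private_key = rsa.generate_private_key(public_exponent=65537,
                                           key_size=2048)

    pem = private_key.private_bytes(              # kept by the customer
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.TraditionalOpenSSL,
        encryption_algorithm=serialization.NoEncryption(),
    )
    ssh_pub = private_key.public_key().public_bytes(  # installed on the VM
        encoding=serialization.Encoding.OpenSSH,
        format=serialization.PublicFormat.OpenSSH,
    )
    print(pem.decode()[:40], "...")
    print(ssh_pub.decode()[:40], "...")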
6.2. Security Assessment

The primary source of information for this assessment is the AWS security whitepaper [3], which focuses on three traditional security dimensions: confidentiality, integrity, and availability. It does not cover all aspects of interest, but it does provide a good sampling of the overall state of affairs.

Certifications and Accreditations

Amazon is cognizant of its customers' need to meet certification requirements; however, actual certification efforts appear to be at an early stage: "AWS is working with a public accounting firm to ensure continued Sarbanes Oxley (SOX) compliance and attain certifications such as recurring Statement on Auditing Standards No. 70: Service Organizations, Type II (SAS70 Type II) certification."

Separately, Amazon provides a short white paper on building HIPAA-compliant applications [1]. From its content, it becomes clear that virtually all responsibility for complying with the regulations falls on the customer.

AWS provides key-based authentication to access its virtual servers. Amazon EC2 creates a 2048-bit RSA key pair, with private and public keys and a unique identifier for each key pair, to facilitate secure access. The default setup for Amazon EC2's firewall is a deny-all mode that automatically denies all inbound traffic unless the customer explicitly opens an EC2 port. Administrators can create multiple security groups in order to enforce different ingress policies as needed; they control each security group with a PEM-encoded X.509 certificate and can restrict traffic to each EC2 instance by protocol, service port, or source IP address.

Physical Security

Physical security measures appear consistent with best industry practices but clearly do not provide the same level of physical protection as an in-house DoD facility: "AWS data centers are housed in nondescript facilities, and critical facilities have extensive setback and military grade perimeter control berms as well as other natural boundary protection. Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance, state of the art intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication no fewer than three times to access [data center floors]. All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff."

Backups

Backup policies are not described in any level of detail, and the brief description that is provided is contradictory. It is only clear that some of the data is replicated at multiple physical locations, while the remainder is the customer's responsibility.

Amazon Elastic Compute Cloud (EC2) Security

EC2 security is the most descriptive part of the white paper and provides the greatest amount of useful information, although many details are obscured. Amazon's virtualization layer utilizes a "highly customized version" of the Xen hypervisor (http://xen.org). Xen is based on the para-virtualization model, in which the hypervisor runs directly on the hardware and presents a virtual hardware interface to modified guest operating system (OS) instances. Thus, all privileged access is mediated and controlled by the hypervisor. The firewall is part of the hypervisor layer and mediates all traffic between the network interface and the guest OS. All guest instances are isolated from each other.

Virtual (guest) OS instances are built by, and are completely controlled by, the customer, just as physical machines would be. Customers have root access and full administrative control over additional accounts, services, and applications. AWS administrators do not have access to customer instances and cannot log into the guest OS.

The firewall supports groups, thereby permitting different classes of instances to have different rules. For example, in the case of a traditional three-tiered web application, web servers would have ports 80/443 (HTTP/HTTPS) open; application servers would have an application-specific port open only to the web server group; and database servers would have port 3306 (MySQL) open only to the application server group. All three groups would permit administrative access on port 22 (SSH), but only from the customer's corporate network.
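To make the group-based policy concrete, the sketch below renders the three-tier example as declarative data. It is purely illustrative: the rule format, the application port, and the corporate address range are our own inventions, not the AWS API.

    # Hypothetical rendering of the three-tier policy described above.
    # A source is either a CIDR block or a reference to another group.
    CORPORATE_NET = "203.0.113.0/24"  # placeholder for the customer network

    SECURITY_GROUPS = {
        "web": [
            {"proto": "tcp", "port": 80,   "source": "0.0.0.0/0"},    # HTTP, public
            {"proto": "tcp", "port": 443,  "source": "0.0.0.0/0"},    # HTTPS, public
            {"proto": "tcp", "port": 22,   "source": CORPORATE_NET},  # SSH, admins only
        ],
        "app": [
            {"proto": "tcp", "port": 8080, "source": "group:web"},    # app port, web tier only
            {"proto": "tcp", "port": 22,   "source": CORPORATE_NET},
        ],
        "db": [
            {"proto": "tcp", "port": 3306, "source": "group:app"},    # MySQL, app tier only
            {"proto": "tcp", "port": 22,   "source": CORPORATE_NET},
        ],
    }
    # Deny-all default: inbound traffic matching none of these rules is dropped.

The important property is the default: anything not explicitly opened is denied, so a compromised web server still cannot reach the database port directly.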
The firewall cannot be controlled by the host/instance itself; changes require authorization with the customer's X.509 certificate and key. AWS provides an API that allows automated management of virtual machine instances. Calls to launch and terminate instances, change firewall parameters, and perform other functions must be signed by an X.509 certificate or the customer's Amazon Secret Access Key, and they can be encrypted in transit with SSL to maintain confidentiality.

Customers have no access to physical storage devices, and the disk virtualization layer automatically wipes storage that is no longer in use to prevent data leaks. It is recommended that tenants use encrypted file systems on top of the provided virtual block device to maintain confidentiality.

Network security mechanisms address or mitigate the most common attack vectors. DDoS attacks are mitigated using known techniques such as SYN cookies and connection limiting; in addition, Amazon maintains some spare network capacity. VM instances cannot spoof their own IP/MAC addresses, as the virtualization layer enforces correctness. Port scanning is generally considered ineffective because almost all ports are closed by default. Packet sniffing is not possible: the virtualization layer prohibits putting the virtual NIC into promiscuous mode, so no traffic that is not addressed to the instance will be delivered. Man-in-the-middle attacks are prevented through the use of SSL-encrypted communication.

Amazon S3/SimpleDB Security

Storage security, as represented by the S3 and SimpleDB services, has received relatively light treatment. Data at rest is not automatically encrypted by the infrastructure; it is the application's responsibility to do so. One drawback is that, if application data is stored encrypted in SimpleDB, the query interface is effectively disabled.

The S3 APIs provide both bucket- and object-level access controls, with defaults that only permit authenticated access by the bucket and/or object creator. Write and Delete permission is controlled by an Access Control List (ACL) associated with the bucket. Permission to modify the bucket ACL is itself controlled by an ACL, and it defaults to creator-only access. Therefore, the customer maintains full control over who has access to their data. Amazon S3 access can be granted based on AWS Account ID or DevPay Product ID, or opened to everyone.

When an object is deleted from Amazon S3, removal of the mapping from the public name to the object starts immediately and is generally processed across the distributed system within several seconds. Once the mapping is removed, there is no external access to the deleted object. The storage area is then made available only for write operations, and the data is eventually overwritten by newly stored data.
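Since encryption of data at rest is left to the application, the tenant-side step looks roughly like the sketch below. This is a minimal illustration assuming the third-party Python cryptography package; real deployments would need actual key management, which is reduced here to a single in-memory variable.

    # Requires the third-party package:  pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()    # tenant-held key, never stored with the provider
    cipher = Fernet(key)

    record = b"example sensitive application record"  # hypothetical payload
    ciphertext = cipher.encrypt(record)               # what actually gets written to S3/SimpleDB

    # ... later, after retrieving the object from the store ...
    assert cipher.decrypt(ciphertext) == record

The trade-off noted above applies: once attribute values are stored this way, SimpleDB's server-side queries can no longer match against them, so filtering must happen client-side after decryption.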
7. Vendor Assessment: Boomi Atmosphere (PaaS/SaaS)

7.1. Description

Boomi AtomSphere offers an on-demand integration platform for any combination of Software-as-a-Service, Platform-as-a-Service, Infrastructure-as-a-Service, and on-premise applications. Its main selling point is leveraging existing applications by providing connectors that integrate SaaS offerings with on-premise, back-office applications. This is a scenario that may be attractive for Navy purposes.

The integration processes are implemented through Boomi's proprietary, patent-pending Atom, a dynamic runtime engine that can be deployed remotely or on premises. These Atoms capture the components of end-to-end integration processes, including transformation and business rules, processing logic, and connectors. They can be hosted by Boomi or other cloud vendors for SaaS-to-SaaS integration, or downloaded locally for SaaS to on-premise integrations.

On-premise applications are typically firewalled, with no direct access via the Internet and no access even via a DMZ. To handle this requirement, the Boomi Atom can be deployed on-premise to directly connect the on-premise applications with one or more SaaS/cloud applications. Changes to firewalls (such as opening an inbound port) are not required, and the Atom supports fully bi-directional movement of data between the applications being integrated. Deployed locally, no data enters Boomi's data center at any point.

Should SaaS-to-SaaS integration be required, with the applications accessed via a secure Internet connection, Atoms can be hosted in Boomi's cloud, with Boomi managing the uptime of the Atom. Customer data is isolated from other tenants in Boomi's platform (though the standard multi-tenancy caveats mentioned above apply).

Finally, it is possible to deploy Atoms into any cloud infrastructure that supports Java, such as Amazon, Rackspace, and OpSource, offering direct, non-Boomi connectivity between applications. In this deployment style as well, no customer data enters Boomi's data center.

7.2. Security Assessment

Boomi has deployed multiple controls at the infrastructure, platform, application, and data levels, thus acknowledging the multidimensional security aspects of its product.

Infrastructure Security

The Boomi infrastructure meets the AICPA SAS70 Type II (and Level 1 PCI DSS) audit requirements; SAS70 is the most widely recognized regulatory compliance mandate issued by the American Institute of Certified Public Accountants. Its primary scope is the inquiry, examination, and testing of a service organization's control environment. Data centers, managed-service providers, and SaaS vendors represent such service organizations [15].

Boomi's controls include best-of-breed routers, firewalls, and intrusion detection systems, with DDoS protection bolstered by redundant IP connections to world-class carriers terminated on a carrier-grade network. Physical power continuity is provided by redundant UPS power and diesel generator backups, as well as HVAC facilities. In addition, Boomi has instituted multipoint monitoring of key metrics, with alerts for both mission-critical and ongoing maintenance issues.

Platform and Application Security

As noted, Atoms can reside locally or be hosted in Boomi's data center. An Atom can communicate information to the Boomi data center in two modes: continuous automatic
and user-initiated communication. During ongoing communication, Atoms merely send operational information to Boomi, such as online uptime status, tracking information cataloging process executions, configuration updates, and code update checks. User-initiated communication is undertaken only upon the request of an authorized user; the information exchanged includes logging information about specific integration processes; error, failure, and diagnostic messages; and the retrieval of schemata for the design of new integration processes. No inbound firewall ports need to be opened for an Atom to communicate with Boomi's data center. Traffic is protected by standard 128-bit SSL encryption.

Any credential password needed for application integration (such as a database password) is encrypted with X.509 private/public key pairs and stored for the account. When an Atom is deployed, the encrypted password is passed along, and the supplied credentials unlock the password at runtime.

The AtomSphere platform (used to build, deploy, and manage the Atoms, regardless of deployment style) is accessed via a standard web browser. Boomi uses the OWASP Top Ten list to address the most critical client- and server-side web application security flaws. The U.S. Defense Information Systems Agency recommends the OWASP Top Ten as key best practices to be used as part of the DoD Information Technology Security Certification and Accreditation Process (DITSCAP, now DIACAP) [10]. Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF), mentioned earlier, are also listed in the Top Ten. Boomi's control to prevent XSS relies on proper XML encoding through an authenticated AWS REST-based API when data is delivered to the client. Timestamps, as well as the aforementioned AWS authentication, are used to mitigate (though not eliminate) CSRF attacks [6]; the sketch below illustrates the general technique.
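The following is our own generic sketch of timestamp-based CSRF mitigation, not Boomi's implementation. The server binds a token to the user's session and to a freshness window; a forged cross-site request, which cannot read the token out of the victim's page, fails validation.

    import hashlib
    import hmac
    import time

    SECRET = b"server-side secret, never sent to the browser"  # hypothetical
    MAX_AGE_SECONDS = 600  # how long an issued token remains valid

    def issue_token(session_id: str) -> str:
        # Embedded in the application's forms at page-render time.
        ts = str(int(time.time()))
        mac = hmac.new(SECRET, f"{session_id}|{ts}".encode(), hashlib.sha256).hexdigest()
        return f"{ts}:{mac}"

    def check_token(session_id: str, token: str) -> bool:
        # Reject malformed, stale, or forged tokens.
        try:
            ts_str, mac = token.split(":")
            ts = int(ts_str)
        except ValueError:
            return False
        if time.time() - ts > MAX_AGE_SECONDS:
            return False
        expected = hmac.new(SECRET, f"{session_id}|{ts}".encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(mac, expected)

Note that the timestamp bounds the replay window rather than eliminating it, which is consistent with the "mitigate, though not eliminate" caveat above.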
Client Data Security

Boomi stresses that its AtomSphere platform does not, by default, retrieve, access, or store client data. It merely supports the data mapping rules necessary to facilitate integration, without saving data at Boomi's location unless specifically configured to do so. Hence, data flowing through a locally deployed Atom does not touch the Boomi data center: it is transported directly to either the SaaS or the local application through an Atom component (a connector) configured to user-specified security requirements. Should the client prefer a zero-footprint deployment with Atoms hosted in Boomi's data center, the data center infrastructure controls described above are used to safeguard the integrity, confidentiality, and availability of those Atoms.

8. Vendor Assessment: force.com (PaaS)

8.1. Description

force.com offers a Platform-as-a-Service cloud architecture designed to support turn-key, Internet-scale applications. Its primary selling points are a track record of high system uptime, with an Internet-facing mechanism for tracking reliability, and the ability to declaratively develop certain classes of web-based applications with little need to write code.

Client applications to be executed on the force.com cloud are stored as metadata, which is interpreted and transformed into objects executed by the force.com multitenant runtime engine. Applications for the force.com architecture can be developed declaratively using a native application framework, via a Java-like programming language called Apex, or via exposed APIs that allow applications to be developed in C#, Java, and C++. The APIs support integration with other environments, e.g., to allow data to be accessed from sources external to the force.com infrastructure. Applications that use the API do not execute within the force.com cloud; they must be hosted elsewhere.

force.com imposes a strict application testing regimen on new applications before they are deployed, to ensure that new applications do not seriously impact the performance of existing applications running in the force.com cloud. An extensive set of resource limits is also imposed, to prevent applications from monopolizing CPU resources and memory. Operations that violate these resource limits result in runtime exceptions in the application.

8.2. Security Assessment

force.com deploys a number of mechanisms for increasing the security of applications and associated data. These are described in the following sections.

Infrastructure Security

The force.com infrastructure's network is secured via external firewalls that block unused protocols and via internal intrusion detection sensors deployed on all network segments. All communication with force.com is encrypted via SSL/TLS. Third-party certification is regularly performed to assess network security. Power and HVAC for the datacenters are fully redundant, with multiple UPSs, power distribution units, diesel generators, and cooling systems. External network connectivity is provided via fiber enclosed in concrete vaults. A number of physical security measures are deployed at force.com datacenters, including 24-hour manned security, biometric scanning for access to computer systems, full video surveillance, and bullet-proof, concrete-walled rooms. Computers hosting cloud-based applications are enclosed in steel cages with authentication control for physical access. On the other hand, a successful phishing attack has been mounted against force.com employees, resulting in the leakage of a large amount of customer contact data [11].

Platform and Application Security

Native force.com applications are stored as metadata and executed by a runtime engine. The database-oriented nature of the force.com APIs and the lack of low-level APIs for applications executing within the cloud severely limit the possibility of attack: force.com applications do not execute independently of the runtime engine (which has extensive auditing and resource monitoring checks), and applications developed using the force.com APIs do not execute within the force.com cloud at all; they merely have access to data in the cloud. On the other hand, if an attack vector against the multitenant runtime engine itself were developed, then it appears that data and applications belonging to other
organizations could be manipulated, since data is comingled. No attack vectors of this kind have been reported, and the feasibility of developing such attacks is unknown.

Client Data Security

Measures to ensure client data security are described only vaguely: available force.com literature simply states that "salesforce.com protects customer data by ensuring that only authorized users can access it" and that "All data is encrypted in transfer." One mechanism that might have implications for DoD applications is the force.com platform "Recycle Bin", which stores deleted data for up to 30 days, during which the data is available for restoration. It is unclear whether the platform implements secure deletion for data stored in the force.com datacenter, and whether there is a mechanism for ensuring that deleted data is removed from backups.

9. Vendor Assessment: Pervasive Software (PaaS/SaaS)

9.1. Description

Similar to Boomi, but wider in scope and more differentiated, Pervasive offers an on-demand integration suite for Software-as-a-Service and Platform-as-a-Service (confusingly renamed Integration-as-a-Service, or IaaS). Pervasive emphasizes development speed, application integration, and heterogeneous data connectivity (200+ connectors, among them connectors to legacy COBOL, QSAM, and MVS formats). In addition, its product line fields a remarkable capability to process non-relational, semi-structured, and unstructured content, which is typically not explicitly formulated and lies buried in most organizations. Specialized, pre-made, turn-key integration solutions for particular industry sectors are offered as well.

As with Boomi, hosting options are available (data integration through Pervasive's DataCloud or any other cloud), as is deployment on local premises. Thus, a full range of SaaS-to-SaaS, on-premises-to-SaaS, and on-premises-to-on-premises integrations, as well as traditional B2B and large-scale bulk data exchange, can be handled via Pervasive's platform.
Figure 4: Pervasive Data Integration Platform

Pervasive's flagship product, the Pervasive Data Integration Platform, consists of a unified set of tools for the rapid development of seamless connectors that capture, as in Boomi's case, process logic, data mapping, and transformation rules. The connectors' "secret sauce" lies in the runtime Integration Engine, instantiated by the Integration Services (shown in blue and purple in Figure 4).

9.2. Security Assessment

In contrast to Boomi's strong emphasis on security (attested to by several position and technical papers), surprisingly little detail is available on Pervasive's security stance. No security controls are mentioned for the DataCloud. A single sentence in the technical description of the Integration Engine notes that each instantiated Integration Engine runs in its own address space, their isolation increasing reliability.

Several issues can be deduced from the type and scale of the technologies employed. Management of deployed components through the Integration Manager (yellow in Figure 4) is effected through standard browsers; this is subject to the standard XSS and CSRF issues delineated in the previous section. Lastly, the workhorse of the integration platform, the Integration Engine, is designed to handle an impressive gamut of applications (as evinced by Figure 5), from extract, transform, and load (ETL) projects to B2B integration (e.g., HIPAA, HL7, ACORD, X12, and SOA adapters). This breadth is not in itself a problem, but coupled with the Engine being lightweight to deploy, as Pervasive emphasizes several times, thorough input validation seems unlikely. Before any adoption for mission-critical deployment, it is recommended that these and other
issues (e.g., the fact that interaction with applications that embed the Integration Engine is also accomplished through relatively complex COM APIs) be addressed.

Figure 5: Integration Engine API

Despite several email contacts and telephone conversations with sales people, account managers, and senior system engineers, with entreaties for material addressing the security issues formulated in the Navy's SOW, no further material was forthcoming as of the date of this writing. We stress that the paucity of available information need not necessarily reflect on the quality of Pervasive's security controls. Indirect evidence that security controls across its IaaS, PaaS, and SaaS offerings must meet minimum standards can be found in 100+ case studies spanning over a dozen sectors. Pervasive's family of products has been deployed as an integration solution in industries subject to information classification, audit, and access control stipulations comparable to the Navy's, the health care sector being one of them.

9.3. Case Studies of Interest

The health care sector is of interest because of its statutory (HIPAA) data security requirements, process complexity, entity scale, and legacy system integration. Pervasive lists about a dozen and a half case studies, among them the State of New York, which decided to modernize its Medicaid system with the Pervasive Data Integrator for efficiency and HIPAA compliance reasons.

Originally created to streamline healthcare processes and reduce costs, HIPAA mandates minimum standards to protect and keep private an individual's health information. For organizations to be compliant, they must design their systems and applications to meet
HIPAA's privacy and security standards and the related administrative, technical, and physical safeguards. These standards are referred to as the Privacy and Security Rules. HIPAA's Privacy Rule requires that individuals' health information be properly protected by covered entities. HIPAA's Security Rule mandates detailed administrative, physical, and technical safeguards to protect health information, including standards for data in transmission (in flight) and in storage (at rest). Inter alia, this means implementation and deployment of access controls, encryption, backup, and audit controls for the data in question, subject to appropriate classification and risk levels.

Other industry case studies of potential interest to the Navy involve the Transportation/Manufacturing (logistics), Public Sector/Government (statutes), and Financial (speed) sectors. Judicious study of the health care case studies may also yield insights into issues of scale and legacy system migration. In this context, we mention the State of California's tackling of HIPAA compliance with Pervasive software: its unique requirements (non-negotiable integration of legacy systems and a traffic volume of 10 million transactions per month) have Navy correspondences. Lastly, the transition of Louisiana's East Jefferson General Hospital from a proprietary database ETL tool to a Pervasive solution, undertaken to optimize use of its data warehouse, may warrant a look as well.
Author Short Bio: Vassil Roussev

Vassil Roussev is an Associate Professor of Computer Science at the University of New Orleans (UNO). He received his Ph.D. in Computer Science from the University of North Carolina at Chapel Hill in 2003. After that, he joined the faculty at UNO and has focused his research on several related areas: computer security, digital forensics, distributed systems and cloud computing, high-performance computing, and human-computer interaction. The overall theme of his research is to bring to bear the massive computational power of scalable distributed systems, as well as visual analytics tools, to solve challenges in security and forensics with short turnaround times. He is also working on tighter integration of security and forensics tools as a means to enrich both areas of research and practice.

Dr. Roussev has over 20 peer-reviewed publications (book chapters, journal articles, and conference papers) in the area of computer security and forensics, including featured articles in IEEE Security and Privacy and Communications of the ACM. His research and educational projects have been funded by DARPA, ONR, DoD, SPAWAR New Orleans, the State of Louisiana, and private companies, including a Sun Microsystems Academic Excellence Grant.

Dr. Roussev is Director of the Networking, Security, and Systems Administration Laboratory (NSSAL) at UNO, coaches the UNO Collegiate Cyber Defense Team, and represents UNO at the Large Resource Allocations Committee of the Louisiana Optical Network Initiative (http://loni.org). He is also a Co-PI on a $15 million project to create the LONI Institute (http://institute.loni.org/), which seeks to develop a state-wide collaborative R&D environment among Louisiana's research institutions with a clear focus on advancing computational scientific research.
Author Short Bio: Golden G. Richard, III

Golden G. Richard III is Professor of Computer Science in the Department of Computer Science at the University of New Orleans. He received a B.S. in Computer Science (with honors) from the University of New Orleans in 1988, and an M.S. and Ph.D. in Computer Science from The Ohio State University in 1991 and 1995, respectively. He joined UNO in 1994. Dr. Richard's research interests include computer security, operating systems internals, digital forensics, and reverse engineering. He is a GIAC-certified digital forensics investigator and a member of the ACM, IEEE Computer Society, USENIX, the American Academy of Forensic Sciences (AAFS), and the United States Secret Service Taskforce on Electronic Crime. At the University of New Orleans, he directs the Greater New Orleans Center for Information Assurance and co-directs the Networking, Security, and Systems Administration Laboratory (NSSAL).

Prof. Richard has over 30 years of experience in computing and is a recognized expert in digital forensics. He and his collaborators and students at the University of New Orleans have made important research contributions in high-performance digital forensics, file carving, evidence correlation mechanisms, on-the-spot digital forensics, and OS support for digital forensics. Furthermore, he and his collaborators pioneered the use of Graphics Processing Units (GPUs) to speed processing of digital evidence. Recently, he developed and taught one of the first courses in academia on reverse engineering of malicious software. He is the author of numerous publications in security and networking as well as two books for McGraw-Hill, the first on service discovery protocols (Service and Device Discovery: Protocols and Programming, 2002) and the second on mobile computing (Fundamentals of Mobile and Pervasive Computing, 2005).
Author Short Bio: Daniel Bilar

Education
Dartmouth College (Thayer School of Engineering), Ph.D. Engineering Sciences, 2003. Thesis: Quantitative Risk Analysis of Computer Networks
Cornell University (School of Engineering), M.Eng. Operations Research and Information Engineering, 1997
Brown University (Department of Computer Science), B.A. Computer Science, 1995

Current Affiliations
Assistant Professor of Computer Science, University of New Orleans, August 2008-present
Co-Chair, 6th Workshop on Digital Forensics and Incident Analysis (Port Elizabeth, South Africa), 2010
Advisory Board, Journal in Computer Virology (Springer, Paris), 2008-
Professional Advisory Board, SANS GIAC Systems and Network Auditor, 2002-2005

Past Affiliations
Endowed Faculty Fellow, Wellesley College (Wellesley, MA), 2006-2008
Visiting Professor of Computer Science, Colby College (Waterville, ME), 2004-2006

Research Interests
Detection, classification, and containment of highly evolved malicious software; systems-of-systems critical infrastructure modeling and protection; risk analysis and management of computer networks

Dr. Bilar was a founding member of the Institute for Security and Technology Studies at Dartmouth College, conducting counter-terrorism technology research for the US Department of Justice and Department of Homeland Security.
List of Abbreviations

CC      Common Criteria for Information Technology Security Evaluation
DBMS    Database management system
DoD     Department of Defense
EAL     Evaluation assurance level
EC2     Elastic Compute Cloud
IaaS    Infrastructure as a service
IT      Information technology
HIPAA   The Health Insurance Portability and Accountability Act of 1996
LAMP    Linux, Apache, MySQL, Perl/PHP/Python
NIAP    National Information Assurance Partnership
OS      Operating system
PaaS    Platform as a service
PR      Public relations
SaaS    Software as a service
SLA     Service level agreement
TCO     Total cost of ownership
TPM     Trusted platform module
References

[1] Amazon.com, "Creating HIPAA-Compliant Medical Data Applications with AWS", April 2009, http://awsmedia.s3.amazonaws.com/AWS_HIPAA_Whitepaper_Final.pdf.
[2] Amazon.com, "Amazon Web Services Customer Agreement", http://aws.amazon.com/agreement/.
[3] Amazon.com, "Amazon Web Services: Overview of Security Processes", Sep 2008, http://s3.amazonaws.com/aws_blog/AWS_Security_Whitepaper_2008_09.pdf.
[4] Nicholas Arvanitis, Marco Slaviero, and Haroon Meer, "Clobbering the Cloud", Black Hat USA 2009, August 2009, http://www.sensepost.com/research/presentations/2009-08-SensePost-BH-USA-2009.pptx.
[5] Adam Barth, Collin Jackson, and John C. Mitchell, "Robust Defenses for Cross-Site Request Forgery", Proceedings of ACM CCS 2008, October 2008, http://www.adambarth.com/papers/2008/barth-jackson-mitchell-b.pdf.
[6] Boomi Inc., "Boomi OWASP Top Ten Response", August 2009, http://www.boomi.com/files/boomi_datasheet_owasp_response.pdf.
[7] Rajkumar Buyya, Chee Shin Yeo, and Srikumar Venugopal, "Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering Computing as the 5th Utility", Proceedings of the 9th IEEE/ACM International Symposium on Cluster Computing and the Grid, May 2009.
[8] DARPA Microsystems Technology Office, BAA 07-24, "TRUST in Integrated Circuits", March 2007, http://www.darpa.mil/MTO/solicitations/baa07-24/index.html.
[9] DoD Defense Science Board Task Force, "High Performance Microchip Supply", Feb 2005, http://www.acq.osd.mil/dsb/reports/2005-02-HPMS_Report_Final.pdf.
[10] DoD DISA, "Security Checklists", http://iase.disa.mil/stigs/checklist/index.html.
[11] eSecurity Planet, "Salesforce.com Scrambles To Halt Phishing Attacks", http://www.esecurityplanet.com/trends/article.php/3709871/Salesforcecom-Scrambles-To-Halt-Phishing-Attacks.htm.
[12] Gartner, Inc., "SaaS CRM Reduces Costs and Use of Consultants", by Michael Maoz, 15 October 2008.
[13] Panagiotis G. Ipeirotis, Luis Gravano, and Mehran Sahami, "Probe, Count, and Classify: Categorizing Hidden Web Databases", ACM SIGMOD 2001, Santa Barbara, California, USA.
[14] McKinsey & Co., "Report: Clearing the Air on Cloud Computing", Apr 2009, http://uptimeinstitute.org/content/view/353/319/.
[15] NDP LLC, "Why is SAS 70 Relevant to SaaS in Today's Regulatory Compliance Landscape?", 2009, http://www.sas70.us.com/industries/saas-and-sas70.php.
[16] Thomas Ristenpart, Eran Tromer, Hovav Shacham, and Stefan Savage, "Hey, You, Get Off of My Cloud: Exploring Information Leakage in Third-Party Compute Clouds", Proceedings of ACM CCS 2009, Nov 2009, http://cseweb.ucsd.edu/~hovav/dist/cloudsec.pdf.
[17] salesforce.com, "ISO 27001 Certified Security", http://www.salesforce.com/platform/cloud-infrastructure/security.jsp.
[18] salesforce.com, "Three global centers and disaster recovery", http://www.salesforce.com/platform/cloud-infrastructure/recovery.jsp.
[19] Nuno Santos, Krishna P. Gummadi, and Rodrigo Rodrigues, "Towards Trusted Cloud Computing", USENIX Workshop on Hot Topics in Cloud Computing, San Diego, CA, Jun 2009, http://www.usenix.org/events/hotcloud09/tech/full_papers/santos.pdf.
[20] Alex Stamos, Andrew Becherer, and Nathan Wilcox, "Cloud Computing Models and Vulnerabilities: Raining on the Trendy New Parade", Black Hat USA Briefings, July 2009, https://media.blackhat.com/bh-usa-09/video/STAMOS/BHUSA09-Stamos-CloudCompSec-VIDEO.mov.
[21] UC Berkeley Reliable Adaptive Distributed Systems Laboratory, "Above the Clouds: A Berkeley View of Cloud Computing", Feb 2009, http://radlab.cs.berkeley.edu/.
[22] Luis M. Vaquero, Luis Rodero-Merino, Juan Caceres, and Maik Lindner, "A Break in the Clouds: Towards a Cloud Definition", ACM SIGCOMM Computer Communication Review, Volume 39, Number 1, January 2009.
[23] United States Government Accountability Office, "Information Assurance: National Partnership Offers Benefits, but Faces Considerable Challenges", Mar 2006, http://www.gao.gov/new.items/d06392.pdf.