So much has been written, advertised, and discussed about cloud computing that it is worth defining the term for common understanding. Cloud computing generally describes a method to supplement, consume, and deliver IT services over the Internet. Web-based network resources, software, and data services are shared under multi-tenancy and provided on demand to customers. It is this central tenet of sharing - and the standardization it implies - that enables cloud computing's core benefits: providers can amortize their costs across many clients and pass the savings on to them. This paradigm shift in computing infrastructure is a logical consequence of the ease of access to remote and virtual computing sites provided by the Internet. The U.S. National Institute of Standards and Technology (NIST) defines four cloud deployment models:
- Community Cloud – Shares infrastructure between several organizations from a specific community with common concerns (e.g., security, compliance, jurisdiction), whether managed internally or by a third party and hosted internally or externally.
- Public Cloud – The cloud infrastructure is provisioned by the cloud provider for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them.
- Private Cloud – Infrastructure provisioned solely for a single organization, whether managed internally or by a third party and hosted internally or externally.
- Hybrid Cloud – A composition of two or more clouds (private, community, or public) that remain unique entities but are bound together, offering the benefits of multiple deployment models. It can also be defined as multiple cloud systems connected in a way that allows programs and data to be moved easily from one deployment system to another.
The cloud helps you in two ways: by reducing the cost of operating your business while at the same time increasing business agility and flexibility. You need to start testing all your applications immediately. However, this can be an overwhelming endeavor, requiring large upfront capital and human investments:
- Hardware to procure, set up, and maintain for the test/staging environment
- Software to procure, install, and maintain to automate the testing
- People with the right expertise and experience in security to hire, train, and retain
- Process to define and refine so that everything is standardized and efficient
Some of the symptoms of that problem: it takes an incredibly long time to get a server up and running and an application up and running. Multiple organizations with different areas of expertise are involved - server, storage, network and management, facilities. Whenever you plug a server into this infrastructure, you have to cable it up to all those domains. But it's not just the cable clutter that is slowing things down; every time you do one of these things, there's a process associated with it, and that process carries a lot of manual overhead as well. The end result is that the architecture we have built in the data center over the last 10-15 years - the "rack, stack, and wire" world, as it has been called - forces incredible organizational complexity on customers when they move to a larger deployment: physical complexity as well as process complexity. We understand what the problem is and where it came from.

[This slide is a simplification of the time-consuming process of standing up complex infrastructure. Make sure the customer sees their own process in all or part of the diagram.]

Provisioning new application infrastructure can take weeks or months due to complexity. The process typically involves reviews and approvals, meetings and more meetings, plus the unpacking and implementation of the systems. The bottom left-hand corner reflects the siloed nature of data center teams that must coordinate the build process across servers, network, storage, and facilities, as in the previous slide. These meetings, handoffs, and wait times between teams are just one aspect of the complexity in the overall provisioning process.

[Note the stop sign and heading back to the beginning in an endless loop.] And sometimes the process can get derailed, requiring a return to the starting point and creating further delays.
Key points: Many business leaders recognize this and are already moving to adopt cloud services faster than many IT leaders are comfortable with. Business users have been quick to recognize the cloud's advantages in speeding innovation, accelerating business processes, and reducing time to revenue. The increasing simplicity of rich cloud services, combined with the increasing IT sophistication of the consumers and employees of modern enterprises, has put pressure on IT to speed the adoption of cloud services - and in some cases users bypass IT altogether, signing up for public cloud services like those from Salesforce and Google, often accessed from a smartphone, tablet, or laptop owned and managed by the employee rather than one traditionally provided by the enterprise. In theory this is great news, especially if you're a service provider, but the reality is that cloud adoption will stall in the enterprise unless we can address a number of critical challenges. Enterprise IT leaders who have been slower to adopt cloud solutions cite well-founded concerns about maintaining security, service levels, and portfolio governance seamlessly across the entire IT value chain, while ensuring that the decisions they make about cloud technology suppliers today don't prevent them from innovating in the future.
To succeed, we need to rethink the role of the CIO and of IT: moving from IT as the sole "supplier" or builder of services to becoming the builder AND the broker of IT services. That means going beyond building world-class, reliable services inside the datacenter to creating a core competency in aligning business needs with the optimal mix of internally and externally available services, then seamlessly blending them into a reliable, secure, and compliant end-to-end experience. It starts with being able to source and consume the services you need from the market - building a network of trusted suppliers that can be relied upon to deliver at a predictable price and performance. CIOs also need to build the capability to act as an internal service provider, matching the transparency and flexibility of externally available services in those areas where economies of scale, competitive advantage, risk, or compliance make it more sensible to provide their own services. Whether they seek to leverage public or private cloud services, both business and IT processes require transformation if the enterprise is to maximize the benefits of cloud technologies and be ready for accelerated innovation and improved agility. Finally, CIOs need to manage and secure the entire IT value chain using the same consistent, seamless tools and processes, or they risk creating silos that introduce cost, complexity, and risk to hybrid environments.
We think that cloud is the third generation of computing, after mainframes and client-server. It represents the maturation of the Internet. It is important to have a common definition of the cloud.
The slide above offers Gartner's definition of cloud services, as well as the fundamental characteristics around which the market has progressively reached consensus. Almost everyone agrees today that cloud is an "evolved" way of delivering and consuming services that leverages new technologies such as virtualization and automation, but also changes in the mindset of consumers (e.g., it is now entirely accepted to wire money from your account to another using your bank's internet portal). Some would argue that it is essentially about leveraging new business models, or "consuming by the glass," which is a drastic change in how IT used to deliver service (see the dedicated whitepaper on this). Also, when we talk about "what" we deliver as a service, we should be more specific: looking at the typical technology layers within the enterprise, the majority of our customers talk about infrastructure, platform, and applications. Cloud enables the delivery and consumption of those layers "as a service."
We use NIST's cloud definition as the standard. It is important to understand that there are many different types of clouds: SaaS, a full business application; PaaS, a rapid application development environment; IaaS, basic compute and storage. They can be deployed in different ways, but they are all characterized by resource pooling with elasticity, multi-tenancy, and metered service.

Cloud Computing – a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Cloud Deployment Models
- Community Cloud – Shares infrastructure between several organizations from a specific community with common concerns (e.g., security, compliance, jurisdiction), whether managed internally or by a third party and hosted internally or externally.
- Public Cloud – The cloud infrastructure is provisioned by the cloud provider for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them.
- Private Cloud – Infrastructure provisioned solely for a single organization, whether managed internally or by a third party and hosted internally or externally.
- Hybrid Cloud – A composition of two or more clouds (private, community, or public) that remain unique entities but are bound together, offering the benefits of multiple deployment models. It can also be defined as multiple cloud systems connected in a way that allows programs and data to be moved easily from one deployment system to another.

Cloud Service Models
- Software as a Service (SaaS) – Employs the provider's applications running on a cloud infrastructure.
The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The provider manages or controls the underlying cloud infrastructure, with the possible exception of limited user-specific application configuration settings.
- Platform as a Service (PaaS) – Consumer-created or acquired applications supported by the provider are deployed onto the cloud infrastructure, which the provider manages or controls. The consumer has control over the deployed applications and possible configuration settings for the application-hosting environment.
- Infrastructure as a Service (IaaS) – The consumer provisions processing, storage, networks, and other fundamental computing resources on which the consumer can deploy and run arbitrary software, including operating systems and applications. The provider manages or controls the underlying cloud infrastructure, while the consumer has control over operating systems, storage, and deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

With IT transformation to the cloud, many of the traditional layers have been abstracted away from the customer's perspective. Customers care more about who is accessing which application and data, and less about which platform the application is running on. Cloud service providers take on increasing security responsibility as they move from IaaS to PaaS to SaaS.

IaaS: For example, a LAMP stack (Linux, Apache web server, MySQL DB, and Perl/PHP/Python) deployed on Amazon's EC2 would be classified as a public, off-premise, third-party-managed IaaS solution, even though the instances and the applications/data within them are owned by the cloud consumer. Here Amazon is responsible for infrastructure security at the physical and network levels.
The consumer is responsible for securing the OS, the Apache web server, and the MySQL DB.

PaaS: Google App Engine includes a dynamic web server, persistent storage, automatic scaling and load balancing, a Java/Python runtime and development environment, task queues, and more. Here Google secures the platform (e.g., the JVM) and helps secure the application by providing tools to integrate with Google accounts.

SaaS: Salesforce.com provides a purposeful set of applications hosted in the cloud. Salesforce.com takes care of protection at all layers - physical, network, system, database, application, and users.
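The shared-responsibility split across the three service models can be summarized in a short sketch. The layer names and the exact boundary indices below are a simplification chosen for illustration, not an exhaustive or authoritative mapping:

```python
# Illustrative sketch of the shared-responsibility model described above.
# Layer names and boundaries are simplified assumptions, not a standard.
LAYERS = ["facilities", "network", "hypervisor", "os", "middleware", "application", "data"]

# For each service model, the index of the first layer the *consumer* secures;
# everything below that index is the provider's responsibility.
CONSUMER_BOUNDARY = {"iaas": 3, "paas": 5, "saas": 7}  # saas: provider secures every layer

def responsibilities(model):
    """Split the stack into (provider-secured, consumer-secured) layer lists."""
    boundary = CONSUMER_BOUNDARY[model]
    return LAYERS[:boundary], LAYERS[boundary:]

provider, consumer = responsibilities("iaas")
print(consumer)  # under IaaS the consumer secures the OS and everything above it
```

Note how the consumer's list shrinks to nothing for SaaS, mirroring the EC2/App Engine/Salesforce.com progression in the examples above.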
The security of the cloud depends not only on the physical location of the assets (internal or external), but also on the sensitivity of the information, who consumes it (multi-tenant vs. single-tenant), and who is responsible for governance, security, and compliance. Security risks depend on:
- The data classification of the assets, resources, and information being managed
- Who manages them and how
- Which controls are selected and how they are integrated
- Compliance requirements
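As a toy illustration of how these factors combine, the checklist above could feed a coarse risk screen. The classification labels, weights, and thresholds here are invented for this sketch and are not drawn from any standard:

```python
# Toy risk screen combining the four factors listed above.
# Labels, weights, and thresholds are illustrative assumptions only.
def cloud_risk(data_classification, multi_tenant, external_hosting, compliance_scope):
    """Return a coarse risk tier ("low"/"medium"/"high") from the factors above."""
    score = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}[data_classification]
    score += 1 if multi_tenant else 0       # shared consumption raises exposure
    score += 1 if external_hosting else 0   # assets outside your physical control
    score += 1 if compliance_scope else 0   # e.g., subject to regulatory requirements
    return "high" if score >= 4 else "medium" if score >= 2 else "low"

print(cloud_risk("restricted", multi_tenant=True, external_hosting=True, compliance_scope=True))  # high
```

A real assessment would of course weight controls and integration quality, per the third bullet; the point is only that the same workload lands in different tiers depending on classification and tenancy.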
Fortify gives you advanced technologies to ensure your applications are secure. Fortify inspects applications at the source code level (static testing) and while they are running (dynamic testing). Fortify supports more languages than any other application security vendor, with significant strengths in mobile application security. And it's not just built for custom applications: Fortify can determine whether vulnerabilities exist in commercial, custom, and open source software. Even more differentiated, Fortify can be delivered as software you purchase or as a service. With unmatched flexibility and depth of coverage, Fortify ensures you have a world-class application security program in place.
You simply upload an application's binaries and/or provide a URL for testing, using a highly secure cloud environment designed to safeguard sensitive uploads and intellectual property. HP Fortify on Demand then conducts a static and/or dynamic test, and security experts verify the results. It presents correlated findings in an unbiased, tamper-proof report, with results in just days regardless of application size.
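The submission step described above (upload binaries for a static scan, or provide a URL for a dynamic scan) can be sketched as follows. The endpoint and field names are invented for illustration; this is NOT the actual HP Fortify on Demand API, and no network call is made:

```python
# Hypothetical sketch of the scan-submission step described above.
# Endpoint and field names are invented; this is not the real product API.
def build_scan_request(app_name, binary_path=None, target_url=None, scan_type="static"):
    """Assemble the metadata for a scan submission (no network call is made)."""
    if not binary_path and not target_url:
        raise ValueError("provide a binary to upload or a URL to test")
    request = {
        "endpoint": "https://scans.example.com/api/v1/scans",  # hypothetical endpoint
        "app_name": app_name,
        "scan_type": scan_type,  # "static" (binaries/source) or "dynamic" (running app)
    }
    if binary_path:
        request["upload"] = binary_path
    if target_url:
        request["target_url"] = target_url
    return request

req = build_scan_request("payroll-app", binary_path="payroll.war", scan_type="static")
```

The validation mirrors the workflow in the text: a submission must carry either an artifact to analyze statically or a live URL to probe dynamically before a scan can be queued.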