The document discusses infrastructure virtualization and cloud solutions. It defines virtualization as the creation of virtual versions of hardware platforms, operating systems, storage devices, and network resources; describes how desktops, servers, networks, storage, and applications can be virtualized; and outlines the advantages and limitations of each. It also covers application virtualization and the main cloud service and deployment models, concluding that cloud computing provides benefits alongside risks that can be managed.
2. What is Virtualization?
• Wikipedia: ‘... is the creation of a virtual (rather than actual) version of something,
such as a hardware platform, operating system (OS), storage device, or network
resources.’
• Webopedia: ‘... to create a virtual version of a device or resource, such as a server,
storage device, network or even an operating system where the framework
divides the resource into one or more execution environments.’
iMatrix ICT Cloud - www.imatrixcloud.com.au
3. What can be Virtualized?
• Desktop
• Server
• Network
• Storage
• Application
6. Advantages
• Management
• Security
• OS migration
• VDI Image
• Snapshot technology
• Go green
• Independence
Desktop Virtualization
7. Limitations
• Potential security risk if the network is not properly managed
• Challenges in setting up and maintaining drivers for printers
and other peripherals
• Difficulty in running certain complex applications (e.g.
Multimedia)
• Increased downtime in the event of network failures
• Reliance on connectivity to corporate or public networks
• Can be difficult for a user to permanently delete his/her data
• Complexity and high cost of VDI deployment and management
Desktop Virtualization
10. Limitations
• Applications with high demands on processing power still require
dedicated servers
• Risk of overloading a server’s CPU by creating too many virtual
servers
• Migration:
- A VM can migrate only if both physical machines use processors
from the same manufacturer
- If a physical server requires maintenance and migration of its VMs
is not an option, all of its VMs will be unavailable
Server Virtualization
15. Risks
• Backing out a failed implementation
• Interoperability and vendor support
• Complexity
– Management of environment
– Infrastructure design
– Software or device itself
• Meta-data management
• Performance and scalability
Storage Virtualization
16. Implementation
• Host-based
– Simple to design and code
– Supports any storage array
– Improves storage utilization without thin provisioning restrictions
• Storage device-based
– No additional hardware or infrastructure requirements
– Provides most of the benefits of storage virtualization
– Does not add latency to individual I/Os
• Network-based
– True heterogeneous storage virtualization
– Caching of data
– Single management interface
– Replication of data across devices
Storage Virtualization
18. Application Virtualization
• Application Streaming
– On-demand software distribution
• Desktop Virtualization / Virtual Desktop Infrastructure (VDI)
– Application is hosted in a VM or blade PC
Benefits
• Simplified operating system migrations
• Improved security – applications can be isolated from OS
• Reduced system integration and administration costs
• Uses fewer resources than a separate virtual machine
20. Cloud – Service Models
• Infrastructure as a service (IaaS)
– Virtual machines, storage, firewalls, load balancers, networks
• Platform as a service (PaaS)
– Operating systems, programming language execution environment,
database, and web server
• Software as a service (SaaS)
– Access application software from networked devices (desktop PC,
laptop, tablet and smartphone)
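The split of responsibility implied by the three service models above can be sketched in code. Note the layer names and the exact cut-off points are a common textbook convention assumed for this sketch, not taken from the slides:

```python
# Illustrative sketch of IaaS / PaaS / SaaS responsibility splits.
# Layer names and cut-offs are assumed conventions, not from the slides.
LAYERS = ["network", "storage", "servers", "virtualization",
          "operating_system", "runtime", "application"]

# Highest layer the provider manages under each service model.
PROVIDER_MANAGES_UP_TO = {
    "IaaS": "virtualization",   # customer manages the OS and above
    "PaaS": "runtime",          # customer manages only the application
    "SaaS": "application",      # provider manages the full stack
}

def customer_managed(model: str) -> list[str]:
    """Return the layers the customer still manages under a given model."""
    cutoff = LAYERS.index(PROVIDER_MANAGES_UP_TO[model]) + 1
    return LAYERS[cutoff:]

print(customer_managed("IaaS"))  # ['operating_system', 'runtime', 'application']
print(customer_managed("SaaS"))  # []
```

This is why, for example, an IaaS customer still patches operating systems, while a SaaS customer only accesses the application over the network.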
21. Cloud – Deployment models
• Public cloud
– Applications, storage and other resources made available to the
general public, free or on a pay-per-use model
• Community cloud
– Infrastructure shared between several organizations with common
concerns
• Hybrid cloud
– Combination of two or more models (private, community and public)
• Private cloud
– Infrastructure operated solely for a single organization, typically
for in-house applications
22. Benefits
• Reduced costs
• Scalability
• Increased storage
• Business continuity
• Highly automated
• Collaboration efficiency
• Flexibility
• Minimal IT footprint
• More mobility (anytime, anywhere)
• Allows IT to shift focus
Cloud Computing
24. Cloud Computing
In conclusion:
– Cloud computing is not risk-free
– Almost all of the risks can be managed with minimal effort and
balanced against the benefits it provides
– Enterprises should decide whether adopting a cloud solution
complies with all of their organizational requirements
26. Thank you!
Gabriel Ionescu
Editor's notes
Management: Central management of all desktops and full control over what is installed and used.
Security: The desktop image can be locked down from external devices, or copying data from the image to the local machine can be prevented. If the device is stolen, the information is protected, as it is stored in the data centre.
OS migration: A new OS image can be pushed from a central location to a group of users, or to all of them.
Snapshot technology: Ability to roll desktops back to different states.
Go green: A thin-client VDI uses less electricity than a desktop computer.
Independence: Can be used on any device (thin client, PC, Apple, Linux, etc.), as long as you can connect to your VDI with ICA or RDP.
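The snapshot-and-rollback capability mentioned in these notes can be sketched as a toy model. This is illustrative Python only, not a real hypervisor or VDI management API:

```python
# Toy model of VDI snapshot/rollback: a desktop image is modelled as a
# dict of state; snapshots store deep copies the desktop can roll back to.
# Illustrative only -- real platforms expose this via their own APIs.
import copy

class VirtualDesktop:
    def __init__(self, image):
        self.image = image
        self.snapshots = {}

    def snapshot(self, name):
        # Deep-copy so later changes do not alter the saved state.
        self.snapshots[name] = copy.deepcopy(self.image)

    def rollback(self, name):
        # Restore the desktop to the exact state saved under `name`.
        self.image = copy.deepcopy(self.snapshots[name])

desktop = VirtualDesktop({"os": "Windows 10", "apps": ["office"]})
desktop.snapshot("clean")
desktop.image["apps"].append("unwanted")  # desktop drifts from known state
desktop.rollback("clean")                 # roll back to the saved snapshot
print(desktop.image["apps"])              # ['office']
```

The deep copy is the essential design choice: a snapshot must be independent of the live image, otherwise later changes would corrupt the saved state.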
There are essentially four models for VDI operation:
- Hosted (delivered as a service)
- Centralized
- Remote Synchronization
- Client-Hosted
Both the Hosted and Centralized models rely upon a constant network or internet connection to the server where the VDI instance is running. This model is similar in concept to thin clients, in that the client device only displays the virtual desktop; for this reason, a constant network connection is required. The Remote Synchronization model allows users to copy a VDI instance to a system and then run the virtual desktop without a connection. In this model, users normally use virtual machines running on a centralized server, but can copy an image to be used locally when travelling. This disconnected, or untethered, mode of operation has its own set of advantages and disadvantages compared to traditional desktops and centralized VDI desktops. The Client-Hosted model uses centralized servers only to manage virtual machine images, always running the virtual machines on laptops or desktops. Local execution eliminates the infrastructure required for VDI execution servers in the data centre and also reduces network bandwidth consumption, since the virtual machines execute locally rather than over a remote network.
Isolation: One of the key reasons to employ virtualization is to isolate applications from each other. Running everything on one machine would be great if it all worked, but it often results in undesirable interactions or even outright conflicts. The cause is often software problems or business requirements, such as the need for isolated security. Virtual machines allow you to isolate each application (or group of applications) in its own sandbox environment.
Standardization: A standardized hardware platform reduces support costs and increases the sharing of IT resources.
Consolidation: Virtual machines also increase utilization and promote consolidation. Consolidating servers results in easier management and decreased hardware costs.
Ease of testing: Virtual machines let you test scenarios easily. Most virtual machine software today provides snapshot and rollback capabilities. This means you can stop a virtual machine, create a snapshot, perform more operations in the virtual machine, and then roll back again and again until you have finished your testing.
Mobility: Virtual machines are easy to move between physical machines. Most of the virtual machine software on the market today stores the whole disk of the guest environment as a single file in the host environment. Snapshot and rollback capabilities are implemented by storing changes in state in a separate file on the host.
Reduced delivery time: Increase the speed of deployment for your software applications by leveraging templated server deployments.
Reduced costs: Virtual server software (such as VMware or Microsoft Virtual Server) allows multiple servers to run within a single server platform. This provides significant cost savings and additional flexibility.
Flexibility: The most significant benefit of virtual servers is their flexibility: the specification (CPU, memory and disks) can easily be changed on the fly, within minutes.
Leverage: Server resources can be applied to virtual servers on demand, meaning they can be directed to where they are most needed, when they are most needed.
Get more out of your existing resources: Pool common infrastructure resources and break the legacy “one application to one server” model with server consolidation. Reduce hardware and operating costs by as much as 50% and energy costs by 80%. Reduce the time it takes to provision new servers by up to 70%. Decrease downtime and improve reliability with business continuity and built-in disaster recovery. Deliver IT services on demand, now and in the future, independent of hardware, OS, application or infrastructure providers.
Reduce data centre costs by reducing your physical infrastructure and improving your server-to-admin ratio: Fewer servers and less related IT hardware mean reduced real estate and reduced power and cooling requirements. Better management tools let you improve your server-to-admin ratio, so personnel requirements are reduced as well.
Increase availability of hardware and applications for improved business continuity: Securely back up and migrate entire virtual environments with no interruption in service. Eliminate planned downtime and recover immediately from unplanned issues.
Gain operational flexibility: Respond to market changes with dynamic resource management, faster server provisioning and improved desktop and application deployment.
Improve desktop manageability and security: Deploy, manage and monitor secure desktop environments that users can access locally or remotely, with or without a network connection, on almost any standard desktop, laptop or tablet PC.
For servers dedicated to applications with high demands on processing power, virtualization isn't a good choice, because virtualization essentially divides the server's processing power up among the virtual servers. It's also unwise to overload a server's CPU by creating too many virtual servers on one physical machine: the more virtual machines a physical server must support, the less processing power each can receive. In addition, there's a limited amount of disk space on physical servers, and too many virtual servers could impact the server's ability to store data. Another limitation is migration. Right now, it's only possible to migrate a virtual server from one physical machine to another if both physical machines use the same manufacturer's processor. If a network uses one server that runs on an Intel processor and another that uses an AMD processor, it's impossible to port a virtual server from one physical machine to the other. If a physical server requires maintenance, porting its virtual servers over to other machines can reduce the amount of application downtime; if migration isn't an option, all the applications running on the virtual servers hosted on that physical machine will be unavailable during maintenance.
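The overcommit caution in the note above can be made concrete with a small vCPU-to-physical-core calculation. The warning threshold used here is an assumed example value for the sketch, not a vendor recommendation:

```python
# Sketch: how packing virtual servers onto one host divides CPU capacity.
# The 4:1 warning threshold is an assumed illustration, not a vendor limit.
def vcpu_ratio(physical_cores: int, vms: int, vcpus_per_vm: int) -> float:
    """Ratio of allocated vCPUs to physical cores (the overcommit factor)."""
    return (vms * vcpus_per_vm) / physical_cores

# 20 VMs with 2 vCPUs each on a 16-core host:
ratio = vcpu_ratio(physical_cores=16, vms=20, vcpus_per_vm=2)
print(f"overcommit ratio: {ratio:.1f}:1")  # overcommit ratio: 2.5:1

if ratio > 4:  # assumed threshold for this sketch
    print("warning: host CPU likely overloaded")
```

The higher this ratio, the smaller the share of processing power each virtual server can receive when all VMs are busy at once, which is exactly the limitation the slide describes.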
Components of a virtual network:
- Network hardware, such as switches and network adapters, also known as network interface cards (NICs)
- Network elements such as firewalls and load balancers
- Networks, such as virtual LANs (VLANs), and containers such as virtual machines (VMs) and Solaris Containers
- Network storage devices
- Network mobile elements such as laptops, tablets, and cell phones
- Network media, such as Ethernet and Fibre Channel
Non-disruptive data migration: The host only knows about the logical disk (the mapped LUN), so any changes to the meta-data mapping are transparent to the host. This means the actual data can be moved or replicated to another physical location without affecting the operation of any client. When the data has been copied or moved, the meta-data can simply be updated to point to the new location, freeing up the physical storage at the old location. The process of moving the physical location is known as data migration. Most implementations allow this to be done in a non-disruptive manner, that is, concurrently while the host continues to perform I/O to the logical disk (LUN). The mapping granularity dictates how quickly the meta-data can be updated, how much extra capacity is required during the migration, and how quickly the previous location is marked as free: the smaller the granularity, the faster the update, the less extra space required, and the quicker the old storage can be freed up.
Improved utilization: Utilization can be increased by virtue of the pooling, migration, and thin provisioning services. When all available storage capacity is pooled, system administrators no longer have to search for disks that have free space to allocate to a particular host or server. A new logical disk can simply be allocated from the available pool, or an existing disk can be expanded. Pooling also means that all the available storage capacity can potentially be used. In a traditional environment, an entire disk would be mapped to a host; this may be larger than required, wasting space. In a virtual environment, the logical disk (LUN) is assigned only the capacity required by the host using it.
Fewer points of management: With storage virtualization, multiple independent storage devices, even if scattered across a network, appear to be a single monolithic storage device and can be managed centrally. However, traditional storage controller management is still required: that is, the creation and maintenance of RAID arrays, including error and fault management.
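The mapping mechanism behind non-disruptive migration described above can be sketched as follows. This is a deliberately simplified model: real virtualizers track mappings per extent and copy data while host I/O continues, and all names here are illustrative, not a real product API:

```python
# Simplified sketch of the meta-data mapping behind non-disruptive
# migration: the host addresses only the logical disk (LUN); the
# virtualizer maps it to a physical location, copies the data, then
# repoints the mapping and frees the old location. Illustrative only.
class StorageVirtualizer:
    def __init__(self):
        self.mapping = {}   # logical LUN -> physical location (meta-data)
        self.arrays = {}    # physical location -> stored data

    def provision(self, lun, location, data):
        self.arrays[location] = data
        self.mapping[lun] = location

    def read(self, lun):
        # The host only names the LUN; the physical location is hidden.
        return self.arrays[self.mapping[lun]]

    def migrate(self, lun, new_location):
        old = self.mapping[lun]
        self.arrays[new_location] = self.arrays[old]  # copy the data
        self.mapping[lun] = new_location              # repoint the meta-data
        del self.arrays[old]                          # free the old storage

sv = StorageVirtualizer()
sv.provision("lun0", "array-A", b"payload")
sv.migrate("lun0", "array-B")
print(sv.read("lun0"))  # b'payload' -- the host is unaffected by the move
```

The host reads the same LUN before and after the move; only the virtualizer's meta-data changed, which is why the migration is transparent.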
Backing out a failed implementation: Once the abstraction layer is in place, only the virtualizer knows where the data actually resides on the physical medium. Backing out of a virtual storage environment therefore requires the reconstruction of the logical disks as contiguous disks that can be used in a traditional manner.
Interoperability and vendor support: Interoperability is a key enabler for any virtualization software or device. It applies to the actual physical storage controllers and the hosts, their operating systems, multipath software and connectivity hardware. Interoperability requirements differ based on the implementation chosen. For example, virtualization implemented within a storage controller adds no extra overhead to host-based interoperability, but will require additional support for other storage controllers if they are to be virtualized by the same software.
Complexity: Complexity affects several areas:
- Management of environment: Although a virtual storage infrastructure benefits from a single point of logical disk and replication service management, the physical storage must still be managed. Problem determination and fault isolation can also become complex, due to the abstraction layer.
- Infrastructure design: Traditional design ethics may no longer apply; virtualization brings a whole range of new ideas and concepts to think about.
- The software or device itself: Some implementations are more complex to design and code, network-based designs (especially in-band, symmetric ones) in particular: these implementations actually handle the I/O requests, so latency becomes an issue.
Meta-data management: Meta-data management also has implications for performance. Any virtualization software or device must be able to keep all copies of the meta-data atomic and quickly updateable. Some implementations restrict the ability to provide certain fast-update functions, such as point-in-time copies and caching, where very fast updates are required to ensure minimal latency to the actual I/O being performed.
Performance and scalability: Due to the nature of virtualization, the mapping of logical to physical requires some processing power and lookup tables; therefore every implementation will add some small amount of latency.
Host-based: Requires additional software running on the host, as a privileged task or process. In some cases volume management is built into the operating system; in other instances it is offered as a separate product.
Storage device-based: A primary storage controller provides the virtualization services and allows the direct attachment of other storage controllers. Depending on the implementation, these may be from the same or different vendors.
Network-based: Storage virtualization operating on network-based devices that use iSCSI or Fibre Channel (FC) networks to connect as a SAN.
Benefits of application virtualization:
- Allows applications to run in environments that do not suit the native application
- May protect the operating system and other applications from poorly written or buggy code
- Uses fewer resources than a separate virtual machine
- Allows applications that are not written correctly to run, for example applications that try to store user data in a read-only, system-owned location
- Allows incompatible applications to run side-by-side, at the same time and with minimal regression testing against one another
- Reduces system integration and administration costs by maintaining a common software baseline across multiple computers in an organization
- Simplified operating system migrations
- Accelerated application deployment, through on-demand application streaming
- Improved security, by isolating applications from the operating system
- Enterprises can easily track license usage
- Fast application provisioning to the desktop based on a user's roaming profile
- Allows applications to be copied to portable media and then imported to client computers without the need to install them
Reduced cost: Cloud technology is paid for incrementally, saving organizations money.
Increased storage: Organizations can store more data than on private computer systems.
Highly automated: IT personnel no longer need to worry about keeping software up to date; access to automatic updates for your IT requirements may be included in your service fee.
Flexibility/Scalability: Cloud computing offers much more flexibility than past computing methods. Businesses can scale their operation and storage needs up or down quickly to suit their situation, allowing flexibility as their needs change.
More mobility: Employees can access information wherever they are, rather than having to remain at their desks.
Allows IT to shift focus: No longer having to worry about constant server updates and other computing issues, government organizations will be free to concentrate on innovation.
Business continuity: Whether you experience a natural disaster, power failure or other crisis, having your data stored in the cloud ensures it is backed up and protected in a secure and safe location. Being able to access your data again quickly allows you to conduct business as usual, minimising downtime and loss of productivity.
Collaboration efficiency: Cloud offers the ability to communicate and share more easily than traditional methods allow. If you are working on a project across different locations, you can use cloud computing to give employees, contractors and third parties access to the same files.
Legal implications: With cloud computing, data from multiple customers is typically commingled on the same servers. That means legal action taken against another customer, completely unrelated to your business, could have a ripple effect: your data could become unavailable to you just because it was stored on the same server as data belonging to someone else that was subject to legal action. For example, a search warrant issued for the data of another customer could result in your data being seized as well.
Security: No IT system is completely secure, not even an internal data centre with multiple firewalls and restricted access, housed in a fire-resistant, earthquake-proof building.
Compatibility: Cloud computing networks may not be compatible with existing IT infrastructure.
Availability: Availability at critical junctures is a potential concern; one needs to find out how data and services remain available during bandwidth interruptions and DDoS (Distributed Denial of Service) attacks.
Compliance: The actual data resides on multiple servers in multiple locations, often in multiple countries. There may be issues regarding the legality of certain data being stored outside national borders.
Monitoring: Since the data is in the hands of the provider, monitoring may be a problem unless the proper processes are put in place. However, end-to-end monitoring over clouds is certainly possible.
Lock-in: How easily a client can shift to another provider, if dissatisfied with the current service, without having to incur initial setup costs again.
Standardization: In the absence of codified standards, it is very difficult to judge whether the service provided is good or bad.