The Windows Azure platform opens the door to developing applications using the new paradigm: "the cloud". Scalable, redundant applications that run closer to the end user. All of this builds on the knowledge you already have and the new Visual Studio 2010.
4. Business Drivers for IT Projects: increase market share and revenue by investing in product development and customer-facing interaction channels; increase efficiency and lower TCO by investing in technologies and processes that drive efficiency and lower cost through optimization.
5. Control vs. Economy of Scale (chart: control on one axis, economy of scale on the other, each ranging from low to high)
6. Build vs. Buy (same control vs. economy-of-scale chart). This is not new…
7. On premises vs. in the cloud (same control vs. economy-of-scale chart). This is new…
8. Leveraging Economy of Scale: ratio of cost buckets between a medium and a very large datacenter. Source: Above the Clouds: A Berkeley View of Cloud Computing.
10. Location Matters: short-distance access to hydroelectric power vs. long-distance transmission over the grid vs. energy generation based on expensive resources.
11. Location Matters, price per kWh: 3.6¢ Idaho, 10.0¢ California, 18.8¢ Hawaii.
12. Datacenter Options: application runs on-premises (buy my own hardware and manage my own data center); application runs at a hoster (pay someone to host my application using hardware that I specify); application runs on a cloud platform (pay someone for a pool of computing resources that can be applied to a set of applications).
19. IaaS: the user gets access to a pool of resources through a VM abstraction; the user needs to patch and maintain the system; scales only if stateless or well partitioned. E.g. Amazon EC2.
20. PaaS: the user gets access to a pool of resources through a programming model; the system is managed by the cloud provider; the platform (programming model) can provide scalability. E.g. Windows Azure, Google App Engine.
21. SaaS: the user gets access to applications; the system is completely managed by the cloud provider; the cloud provider may offer customization and extensibility options. E.g. Salesforce.com, Microsoft BPOS, Google Apps.
24. Wall Street firm on Amazon EC2 (chart: number of EC2 instances per day from Wednesday 4/22/2009 through Tuesday 4/28/2009, peaking near 3,000 on weekdays and dropping to about 300 CPUs on weekends).
25. Scale through PaaS: the platform abstracts the physical infrastructure, and the ability to scale is built into the programming model. Google App Engine, Microsoft Windows Azure, Amazon Elastic MapReduce, …
26. What is Windows Azure? An operating system for the cloud: hardware abstraction across multiple servers; distributed, scalable, available storage; deployment, monitoring and maintenance; automated service management, load balancers, DNS; programming environments; interoperability; designed for utility computing.
27. Why Windows Azure? The OS takes care of your service in the cloud (deployment, availability, patching, hardware configuration); you worry about writing the service.
28. What is Windows Azure? Features: automated service management, compute, storage, developer experience.
58. Provide rich tooling and templates to easily build your application (compute and storage).
59. Windows Azure API: all operations are accessed through RoleEnvironment. RoleEnvironment handles retrieving configuration information, LocalResource locations (local disk), and access to role information such as the InstanceEndpoints needed for inter-role communication. It also exposes events during the instance lifecycle, giving you a chance to respond to topology or configuration changes.
60. Service Models: describes your service.
<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="CloudService1" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole">
    <ConfigurationSettings>
      <Setting name="AccountName"/>
    </ConfigurationSettings>
    <LocalStorage name="scratch" sizeInMB="50"/>
    <InputEndpoints>
      <!-- Must use port 80 for http and port 443 for https when running in the cloud -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
  </WebRole>
  <WorkerRole name="WorkerRole">
    <ConfigurationSettings>
      <Setting name="AccountName"/>
      <Setting name="TableStorageEndpoint"/>
    </ConfigurationSettings>
  </WorkerRole>
</ServiceDefinition>
64. Storage: blobs, drives, tables, queues. Designed for the cloud: 3 replicas, guaranteed consistency. Accessible directly from the internet via a REST API; a .NET client library is supported. Does not require compute. The storage account drives a unique URL, e.g. https://<youraccount>.blob.core.windows.net
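The addressing scheme above (account-scoped host name, container, blob) can be sketched in a few lines. Python is used purely for illustration, and the account, container and blob names below are made up:

```python
def blob_url(account: str, container: str, blob: str) -> str:
    """Build the REST URL for a blob, following the pattern on the slide:
    each storage account gets a unique host under blob.core.windows.net,
    and blobs are scoped by container."""
    return f"https://{account}.blob.core.windows.net/{container}/{blob}"

# Hypothetical names, for illustration only:
print(blob_url("myaccount", "photos", "2010/logo.png"))
# https://myaccount.blob.core.windows.net/photos/2010/logo.png
```

Because the API is plain REST, any HTTP client can hit such a URL; the .NET client library in the SDK is a convenience, not a requirement.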
65. Blobs: blobs are stored in containers; 1 or more containers per account; scoping is at the container level (…/Container/blobname; $root is the special name for the root container). Two types of blob, page and block: page blobs support random read/write, block blobs have block semantics. Metadata (name/value pairs, 8 KB total) is accessed independently. Container access is private or public.
66. Queues: a simple asynchronous dispatch queue. Create and delete queues. Messages are retrieved at least once; max size 8 KB. Operations: Enqueue, Dequeue, RemoveMessage.
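The at-least-once behavior described above (a dequeued message is hidden for a while, and reappears unless it is explicitly removed) can be sketched with a small in-memory model. This is an illustration of the pattern, not the Azure client library; names and the timeout mechanics are simplified:

```python
import time

class SimpleQueue:
    """In-memory sketch of the Azure-queue pattern: Dequeue hides a
    message for a visibility timeout; if the consumer never calls
    RemoveMessage, the message reappears (at-least-once delivery)."""

    def __init__(self, visibility_timeout: float = 30.0):
        self.visibility_timeout = visibility_timeout
        self._messages = []   # list of [visible_at, receipt, body]
        self._next_id = 0

    def enqueue(self, body: str):
        self._messages.append([0.0, self._next_id, body])
        self._next_id += 1

    def dequeue(self):
        now = time.monotonic()
        for entry in self._messages:
            if entry[0] <= now:
                entry[0] = now + self.visibility_timeout  # hide the message
                return entry[1], entry[2]                 # (receipt, body)
        return None

    def remove_message(self, receipt: int):
        self._messages = [e for e in self._messages if e[1] != receipt]

# Timeout of 0 makes the "consumer crashed" case visible immediately:
q = SimpleQueue(visibility_timeout=0.0)
q.enqueue("resize image 42")
receipt, body = q.dequeue()
again = q.dequeue()        # not removed, so the same message reappears
q.remove_message(receipt)  # normal completion: delete it for good
print(q.dequeue())         # None
```

This failure-recovery loop (dequeue, process, remove; let the timeout republish on crash) is exactly why the speaker notes tell you to write processing code that is safe to repeat.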
67. Tables: entities and properties (rows and columns); tables are scoped by account. Designed for billions of entities and beyond; scale-out using partitions (partition key and row key); operations are performed on partitions; efficient queries; no limit on the number of partitions. Use ADO.NET Data Services.
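The partition-key/row-key model above can be sketched as a dictionary of dictionaries. This is an illustrative data-structure sketch, not the table service API; the entity shapes are made up, and (per the speaker notes) entities in one table need not share a schema:

```python
class SimpleTable:
    """Sketch of the table model: entities are addressed by
    (PartitionKey, RowKey); a query targeted at one partition touches
    only that partition's servers, which is why targeted queries are
    the efficient case."""

    def __init__(self):
        self._partitions = {}   # PartitionKey -> {RowKey: entity}

    def insert(self, entity: dict):
        pk, rk = entity["PartitionKey"], entity["RowKey"]
        self._partitions.setdefault(pk, {})[rk] = entity

    def get(self, pk: str, rk: str) -> dict:
        return self._partitions[pk][rk]

    def query_partition(self, pk: str) -> list:
        # Targeted query: scans a single partition only
        return list(self._partitions.get(pk, {}).values())

t = SimpleTable()
# Heterogeneous entities in the same table, as the notes describe:
t.insert({"PartitionKey": "US", "RowKey": "1", "firstname": "Ana"})
t.insert({"PartitionKey": "US", "RowKey": "2", "firstname": "Bob", "lastname": "Lee"})
t.insert({"PartitionKey": "PT", "RowKey": "1", "orderId": 42})
print(len(t.query_partition("US")))  # 2
```

In the real service the partitions live on different storage servers, which is what makes billions of rows feasible and cross-partition scans expensive.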
69. Service Lifecycle: create a service package (binaries + content + service metadata); deploy via the web portal or the Service Management API; add and remove capacity via the web portal or API; deployed across fault domains; upgrade with zero downtime.
70. Automated Service Management: you tell us what, we take care of how. The what: service metadata describing the service. The how: no OS footprint; the service is copied to instances; instances are copied to physical hardware; physical hardware is booted from VHD; all patching is performed offline.
71. Service Monitoring: an SDK component providing distributed monitoring and data collection for cloud apps. Supports the standard diagnostics APIs (Trace and Debug work normally); manage multiple role instances centrally; scalable. Choose what to collect and when to collect it: event logs, trace/debug output, performance counters, IIS logs, crash dumps, arbitrary log files.
72. Design Considerations: scale and availability are the design points. Storage isn't a relational database. Be stateless: stateless front ends, state kept in storage. Use queues to decouple components. Instrument your application (Trace). Once you are on, stay on: think about patching and updates.
73. The CxO Value Proposition: time to market (no upfront CAPEX, instant access to compute resources); total cost of ownership (CAPEX vs. OPEX, economy of scale, elasticity); productivity (a higher level of abstraction for compute resources); risk mitigation (allows for trial and error, absorbs unpredictable utilization patterns).
74. Cloud Pricing. Pricing options: pay as you go is flexible but unpredictable; flat fees, quotas and limits allow for better control; decisions depend on business model, predictability and IT budgeting. Pricing dimensions: compute (# CPUs, hours deployed vs. hours utilized, dedicated local storage and memory); bandwidth (inbound, outbound, inter-DC; can become very pricey for large data sets, so disk shipments are an alternative); storage (average used capacity vs. "fixed-size" storage, storage transactions (read, write), used query time).
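As a back-of-the-envelope illustration of the "hours deployed vs. hours utilized" distinction above, here is a sketch with purely hypothetical rates (these are not actual Azure prices):

```python
# Hypothetical rates, for illustration only (NOT actual Azure prices):
RATE_PER_INSTANCE_HOUR = 0.12   # $ per deployed instance-hour
RATE_PER_GB_OUT = 0.15          # $ per GB of outbound bandwidth

def monthly_compute_cost(instances: int, hours_deployed: float) -> float:
    # Compute billing of this style counts hours *deployed*, not hours
    # actually utilized: an idle deployed instance still costs money.
    return instances * hours_deployed * RATE_PER_INSTANCE_HOUR

def bandwidth_cost(gb_out: float) -> float:
    return gb_out * RATE_PER_GB_OUT

# Two instances deployed around the clock for a 30-day month:
print(round(monthly_compute_cost(2, 30 * 24), 2))   # 172.8
```

The same arithmetic explains why elasticity matters: scaling down to one instance for the idle half of each day would roughly quarter-to-halve the bill, which a fixed on-premises server cannot do.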
75. The Bandwidth Challenge: storage vs. bandwidth, i.e. the time and cost of transferring data (cf. AWS Import/Export). Compute and storage location matters: compute where the data is stored instead of moving the data to the compute resource.
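A quick calculation shows why transfer time drives this challenge. The 80% sustained-throughput figure below is an assumption for illustration:

```python
def transfer_hours(data_gb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Rough time to move data_gb over a link of link_mbps, assuming we
    sustain `efficiency` of the nominal rate (0.8 is an assumption)."""
    bits = data_gb * 8 * 10**9                       # decimal GB -> bits
    seconds = bits / (link_mbps * 10**6 * efficiency)
    return seconds / 3600

# Moving 1 TB over a 100 Mbps link at 80% sustained throughput:
print(round(transfer_hours(1000, 100), 1))  # 27.8
```

More than a day for a single terabyte is why shipping disks (AWS Import/Export) exists, and why placing compute next to the stored data is usually the better design.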
76. The Regulation Challenge: Safe Harbor Act (datacenter locations matter); Patriot Act (Big Brother might get access to your data); industry regulations, e.g. the Payment Card Industry Data Security Standard (PCI DSS).
77. Where to Start? Find a business case first! Compute scenarios: capabilities with few integration requirements (input, compute, output); high scalability/performance requirements; challenging utilization requirements. Storage scenarios: archiving solutions; cross-corporate data sharing; look for capabilities with infrequent compute.
80. Upcoming in-person meetings: 19/06/2010 (June), 10/07/2010 (July), 14/08/2010 (August), 18/09/2010 (September). Save these dates in your calendar! :)
81. Thank you! Pedro Rosa http://pt.linkedin.com/in/pedrobarraurosa http://twitter.com/pedrorosa
Editor's notes
You can draw the comparison between the desktop/server OS and the cloud OS. The desktop OS abstracts away things like printers, display drivers, memory, etc., so you only have to worry about building your desktop application. The cloud OS does the same for the cloud, but instead of printers and display drivers it abstracts across multiple servers and networking components, and provides a "cloud file system" for storage, a programming environment, etc. The last 2 points: 1. Interoperability: storage and the other services use REST-based protocols; additionally, we support things like PHP, MySQL, Apache, etc., with the release of inter-role communication. 2. Designed for utility computing: rather than charging a per-seat license, you will be charged by consumption. The pricing is available on the windowsazure.com website.
Windows Azure is not about letting you set up and run an entire OS with your application. Instead it is about running your service on commodity servers that are managed by Microsoft. Microsoft takes care of deploying your service, patching the OS, keeping your service running, configuring hardware and infrastructure, etc. All of this is automated. All you need to worry about is writing the service.
Here are some of the features we’ll walk through in the next few minutes.
This is the exploding cloud diagram
Windows Azure runs on Windows Server 2008 with .NET 3.5 SP1. At MIX09, we opened up support for Full Trust and FastCGI. Full Trust is starred here because while it gives you access to p/invoke into native code, that code still runs in user mode (not as administrator). For most native code that is just fine, but if you wanted to call into some Win32 APIs, it might not work in all cases because your code is not running under a system administrator account. There are 2 roles in play: a web role, which is just a web site (ASP.NET, WCF, images, CSS, etc.), and a worker role, which is similar to a Windows service: it runs in the background and can be used to decouple processing. There is a diagram later that shows the architecture, so don't worry about how it fits together just yet. Key points: the inbound protocols are HTTP and HTTPS and now TCP ports; outbound is any TCP socket (but not UDP). All servers are stateless, and all public access goes through load balancers. You can use inter-role communication to communicate point to point between roles (non-public, non-load-balanced).
This should give a short introduction to storage. Key points: it is durable (once you write something, we write it to disk), scalable (multiple servers hold your data), and available (as with compute, we make sure the storage service is always running; there are 3 instances of your data at all times). Quickly work through the different types of storage. Blobs: similar to the file system; use them for content that changes, uploads, unstructured data, images, movies, etc. Drives: an NTFS-mountable drive backed by blob storage; this enables interop and traditional file system capabilities on top of blob storage. Tables: semi-structured; provides a partitioned entity store (more on partitions in the Building Azure Services talk); allows tables containing billions of rows, partitioned across multiple servers. Queues: a simple queue for decoupling compute web and worker roles. All access is through a REST interface. You can access storage from outside of the data center (you don't need compute), via anything that can make an HTTP request. It also means table storage can be accessed via ADO.NET Data Services.
Remind them the cloud is all the hardware across the board. Point out the automated service management.
The developer SDK is a "cloud in a box", allowing you to develop and debug locally without requiring a connection to the cloud. You can do this without Visual Studio, as there are command-line tools for running the "cloud in a box" and publishing to the cloud. There is also a separate download for the Visual Studio 2008 tools, which provide VS debugging and templates. Requirements are any version of Visual Studio (including Web Developer Express) and Vista SP1, or Win7 RC or later.
There is a small API for the cloud that allows you to do some simple things, such as logging, reading from a service configuration file, and local file system access. The API is small and easy to learn.
To allow us to deploy and operate your service in the cloud, we need to know the structure of your service. You describe your service and its operating parameters through a service model. This service model tells us which roles you have and any service configuration, and can also describe the number of instances you need for each role within your service. While this model is simple today, it will be extended to allow you to describe a much richer operational model, e.g. scale-out and scale-down based upon consumption and performance. This file is also where you store configuration that may change once deployed. Since all files within a role are read-only, you cannot change an app.config or web.config file after deployment; the only configuration you can change is in the service model.
Key points here are that all external connections come through a load balancer. If you are familiar with the previous model, you will notice that two new features are diagrammed here as well, namely inter-role communication (notice there is no load balancer) and TCP ports directly to worker roles (or web roles). We will still use storage to communicate asynchronously and reliably via queues in many cases; inter-role communication fills in when you need direct synchronous communication.
In this next section, we'll dig a little deeper into storage. Recall there are 3 types of storage. Recall the design point is the cloud: there are 3 replicas of the data, and we implement guaranteed consistency. In the future there will be some transaction support, and this is why we use guaranteed consistency. Access is via a storage account; you can have multiple storage accounts per project. Although the API is REST, there is a supported .NET storage client in the SDK that you can use within your project. This makes working with storage much easier.
Blobs: blobs are stored in containers. There are 0 or more blobs per container and 0 or more containers per account (you can have 0 containers, but then you would not have any blobs either). The typical URL in the cloud is http://accountname.blob.core.windows.net/container/blobpath. Blob paths can contain the / character, so you can give the illusion of multiple folders, but there is only 1 level of containers. Blob capacity at CTP is 50 GB. There is an 8 KB dictionary that can be associated with a blob for metadata. Blobs can be private or public: private requires a key to read and write; public requires a key to write but NO KEY to read. Use blobs where you would have used the file system in the past.
Queues are simple. Messages are placed in queues; the max size is 8 KB (and the body is a string). A message can be read from the queue, at which point it is hidden. Once whatever read the message has finished processing it, it should remove the message from the queue. If not, the message is returned to the queue after a specific user-defined time limit. This can be used to handle code failures, etc.
Tables are simply collections of entities. Entities must have a PartitionKey and RowKey, and can also contain up to 256 other properties. Entities within a table need not be the same shape! E.g., Entity 1: PartitionKey, RowKey, firstname; Entity 2: PartitionKey, RowKey, firstname, lastname; Entity 3: PartitionKey, RowKey, orderId, orderData, zipCode. Partitions are used to spread data across multiple servers; this happens automatically based on the partition key you provide. Table "heat" is also monitored, and data may be moved to different storage endpoints based upon usage. Queries should be targeted at a partition, since there are no indexes to speed up performance (indexes may be added at a later date). It is important to convey that while you could copy tables in from a local data source (e.g. SQL), they would not perform well in the cloud; data access needs to be rethought at this level. Those wanting a more traditional SQL-like experience should investigate SDS.
Once you have built and tested your service, you will want to deploy it. The key to deployment and operations is the service model. To deploy, first you build your service: this takes the project output plus content (images, CSS, etc.) and makes a single file. It also creates an instance of your service metadata. Next you visit the web portal and upload the 2 solution files; from there the "cloud" takes care of deploying them onto the correct number of machines and getting the service to run. To increase and decrease capacity today, you edit the configuration from the web portal. With more than 1 instance, you are deployed across fault domains, meaning separate hardware racks. In the portal you have a production and a staging area, with different URLs. You can upload the next version of your project into staging, then flip the switch, which essentially changes the load balancers to point to the new version.
So how do we do the automated deployment and manage your service? First, remember the service metadata tells us exactly what we need to deploy, how many instances, etc. There is no OS footprint, so your service can be copied around the data center without any configuration requirements. The OS itself is on a VHD, so it was copied to the hardware, and the hardware itself was also booted from a VHD that was copied around. Therefore, to roll out a new version of your software, or of the OS that hosts it, all we need to do is copy it to a new machine and spin it up. It also means we can patch and test the OS offline. No live patching!
Now your service is deployed; how do YOU monitor it? With the diagnostics and monitoring API, you can deploy your roles and remotely configure which sources each instance should monitor. This configuration can be by role or by instance. You can configure standard tracing in your application, monitor the event logs or performance counters, and collect log files such as IIS logs (or any log file) as well as crash dumps of your application. Since this information can be pushed into your storage account on demand or on a scheduled basis, it is both highly scalable and easily manageable from outside of Windows Azure.
Some key things to remember. The design points are scalability and availability: think in terms of lots of small servers rather than a single BIG server. Table storage is semi-structured; IT'S NOT A RELATIONAL DATABASE AND NEVER WILL BE (that is SDS). Everything is stateless (you can maintain state in table or blob storage if YOU want to). Decouple everything using queues, and write code to be repeatable without breaking anything; in other words, design for failure! Instrument and log your application yourself. Work on the idea that once you are on, you stay on. How will you patch/update your service once it is switched on?