In these Microsoft slides we can see what SQL Server 2014 brings in areas such as: memory-optimized tables, cardinality estimation changes, backup encryption, architecture improvements, AlwaysOn, Resource Governor changes, and data files in Azure.
4. SQL Server engine
In-memory OLTP
New high-performance, memory-optimized online
transaction processing (OLTP) engine integrated into
SQL Server and architected for modern hardware
trends
[Architecture diagram: the In-Memory OLTP engine (memory-optimized tables and indexes, natively compiled SPs and schema, In-Memory OLTP compiler, memory-optimized table filegroup) sits alongside the classic engine components (buffer pool for tables and indexes, lock manager, proc/plan cache for ad-hoc T-SQL, interpreter and plans, parser, catalog, optimizer, transaction log, data filegroup).]
5. Suitable application characteristics
• Application is suited for in-memory processing
‐ All performance critical data already fits in memory
‐ Transaction locking or physical latching causing stalls and
blocking
• Application is “OLTP-Like”
‐ Relatively short-lived transactions
‐ High degree of concurrent transactions from many
connections
‐ Examples: Stock trading, travel reservations, order
processing
• Application porting simplified if
‐ Stored procedures used
‐ Performance problems isolated to subset of tables and SPs
6. SQL Server integration
• Same manageability,
administration, and
development experience
• Integrated queries and
transactions
• Integrated HA and
backup/restore
Main-memory optimized
• Optimized for in-memory
data
• Indexes (hash and range)
exist only in memory
• No buffer pool
• Stream-based storage for
durability
High concurrency
• Multiversion optimistic
concurrency control with full
ACID support
• Core engine uses lock-free
algorithms
• No lock manager, latches, or
spinlocks
T-SQL compiled to
machine code
• T-SQL compiled to machine
code via C code generator
and Visual C compiler
• Invoking a procedure is just a
DLL entry-point
• Aggressive optimizations at
compile time
[Pillar diagram — Drivers: steadily declining memory price and NVRAM; many-core processors; stalling CPU clock rate; TCO. Tech pillars: main-memory optimized; high concurrency; T-SQL compiled to machine code; SQL Server integration. Benefits: high-performance data operations; frictionless scale-up; efficient business-logic processing; hybrid engine and integrated experience.]
In-memory OLTP architecture
7. Design considerations for memory-optimized tables
Table constructs
Fixed schema; no ALTER TABLE; must drop/recreate/reload
No LOB data types; row size limited to 8,060
No constraints support (primary key only)
No identity or calculated columns, or CLR
Data and table size considerations
Size of tables = (row size * # of rows)
Size of hash index = (bucket_count * 8 bytes)
Max size SCHEMA_AND_DATA = 512 GB
IO for durability
SCHEMA_ONLY vs. SCHEMA_AND_DATA
Memory-optimized filegroup
Data and delta files
Transaction log
Database recovery
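The sizing formulas above can be checked with a quick back-of-the-envelope calculation. A sketch, where the 200-byte average row size and 1,000,000-row count are assumptions for illustration:

```sql
-- Hypothetical sizing example for a memory-optimized table.
-- Assumed: 1,000,000 rows of ~200 bytes each, plus one hash index
-- created with BUCKET_COUNT = 1000000 (8 bytes per bucket).
DECLARE @rows     BIGINT = 1000000,
        @row_size BIGINT = 200,      -- bytes, assumed average
        @buckets  BIGINT = 1000000;

SELECT
    (@rows * @row_size) / 1048576.0 AS table_size_mb,      -- roughly 190 MB
    (@buckets * 8)      / 1048576.0 AS hash_index_size_mb; -- roughly 7.6 MB
```

Both figures fit comfortably under the 512 GB SCHEMA_AND_DATA limit; the point of the exercise is to size BUCKET_COUNT and server memory before creating the table, since the schema cannot be altered afterwards.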
8. Create Table DDL
CREATE TABLE [Customer](
[CustomerID] INT NOT NULL
PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
[Name] NVARCHAR(250) NOT NULL
INDEX [IName] HASH WITH (BUCKET_COUNT = 1000000),
[CustomerSince] DATETIME NULL
)
WITH ( MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
Callouts: MEMORY_OPTIMIZED = ON makes the table memory-optimized; DURABILITY = SCHEMA_AND_DATA makes it durable; secondary indexes are specified inline; the index on [Name] is a hash index.
9. Create Procedure DDL
CREATE PROCEDURE [dbo].[InsertOrder] @id INT, @date DATETIME
WITH
NATIVE_COMPILATION,
SCHEMABINDING,
EXECUTE AS OWNER
AS
BEGIN ATOMIC
WITH
(TRANSACTION
ISOLATION LEVEL = SNAPSHOT,
LANGUAGE = 'us_english')
-- insert T-SQL here
END
Callouts: NATIVE_COMPILATION makes the proc natively compiled; native procs must be schema-bound; the atomic block creates a transaction if there is none, otherwise a savepoint; an execution context (EXECUTE AS) is required; session settings are fixed at create time.
10. High-concurrency design considerations
Impact of no locks or latches
Write-write conflict: design the application to handle the condition with TRY…CATCH retry logic
Applications dependent on locking may not be good candidates
Multiple versions of records
Increases the memory needed by memory-optimized tables
Garbage collection is used to reclaim old versions
Transaction scope
Supported isolation levels: Snapshot, Repeatable Read, Serializable
Commit-time validation; again, retry logic is needed to deal with failure
11. Example: Write conflict
Time | Transaction T1 (SNAPSHOT) | Transaction T2 (SNAPSHOT)
1 | BEGIN |
2 | | BEGIN
3 | UPDATE t SET c1='bla' WHERE c2=123 |
4 | | UPDATE t SET c1='bla' WHERE c2=123 (write conflict)
First writer wins
12. Guidelines for usage
1. Declare isolation level – no locking hints
2. Use retry logic to handle conflicts and validation
failures
3. Avoid using long-running transactions
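Guideline 2 can be sketched as a TRY…CATCH retry loop. The procedure name and retry cap below are illustrative assumptions; error 41302 is the write-conflict error, and 41305/41325 are among the validation-failure errors raised under optimistic concurrency:

```sql
-- Sketch of retry logic for write conflicts and validation failures
-- against memory-optimized tables. dbo.InsertOrder and @max_retries
-- are illustrative assumptions, not part of the product.
DECLARE @retry INT = 0, @max_retries INT = 3;

WHILE @retry <= @max_retries
BEGIN
    BEGIN TRY
        EXEC dbo.InsertOrder @id = 1, @date = '2014-04-01';
        BREAK;  -- success: leave the loop
    END TRY
    BEGIN CATCH
        -- 41302: write conflict; 41305/41325: validation failures
        IF ERROR_NUMBER() IN (41302, 41305, 41325) AND @retry < @max_retries
            SET @retry += 1;   -- transient conflict: try again
        ELSE
            THROW;             -- permanent error: surface it
    END CATCH
END
```

The same pattern applies from application code: catch the conflict, start a new transaction, and re-run the statement a bounded number of times.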
13. Design considerations for natively compiled stored procedures
Natively compiled stored procedures vs. non-native compilation:
• Performance — Native: high; significantly fewer instructions to go through. Non-native: no different from T-SQL calls in SQL Server today.
• Migration strategy — Native: application changes and development overhead. Non-native: easier app migration, since interpreted T-SQL can still access memory-optimized tables.
• Access to objects — Native: can interact only with memory-optimized tables. Non-native: all objects; transactions can span memory-optimized and B-tree tables.
• Support for T-SQL constructs — Native: limited T-SQL surface area. Non-native: full surface area (limits only on memory-optimized interaction).
• Optimization, stats, and query plan — Native: statistics are utilized at CREATE (compile) time. Non-native: statistics updates can be used to modify the plan at runtime.
• Flexibility — Native: limited (no ALTER PROCEDURE; compile-time isolation level). Non-native: ad-hoc query patterns.
14. Performance gains
[Architecture diagram with callouts: client app → TDS handler and session management → either the classic path (parser, catalog, algebrizer, optimizer; interpreter for T-SQL, query plans, and expressions; query interop; access methods; buffer pool for tables and indexes; proc/plan cache) or the In-Memory OLTP path (In-Memory OLTP compiler; natively compiled SPs and schema as generated DLLs; In-Memory OLTP engine for memory-optimized tables and indexes; memory-optimized table filegroup).
• Natively compiled procedures are 10-30x more efficient
• Reduced log bandwidth and contention; log latency remains
• Checkpoints are background sequential IO
• No improvements in the communication stack, parameter passing, or result-set generation]
15. In-memory data structures
Rows
• New row format
Structure of the row is optimized for memory residency and
access
• One copy of row
Indexes point to rows, they do not duplicate them
Indexes
• Hash index for point lookups
• Memory-optimized nonclustered index for range and
ordered scans
• Do not exist on disk – recreated during recovery
16. Memory-optimized table: Row format
Key Points
• Begin/End timestamp determines row’s validity
• No data or index page; just rows
• Row size limited to 8060 bytes to allow data to be moved to disk-based table
• Not every SQL table schema is supported
Row layout: row header, then payload (table columns).
Header fields: Begin Ts (8 bytes), End Ts (8 bytes), StmtId (4 bytes), IdxLinkCount (2 bytes), plus index link pointers: 8 bytes * (IdxLinkCount – 1).
17. Key lookup: B-tree vs. memory-optimized table
[Diagram contrasting a traversal of a disk-based non-clustered index with a hash index on Name: the matching index record in the hash bucket links directly to rows R1, R2, R3 in memory.]
19. In-Memory OLTP limitations
• No DML triggers
• No FOREIGN KEY or CHECK constraints
• No IDENTITY columns
• 8,060-byte hard limit on row length
• No UNIQUE indexes other than for the PRIMARY KEY
• A maximum of 8 indexes, including the index supporting the PRIMARY KEY
• No schema changes are allowed once a table is created (drop/recreate)
• Indexes are always, and only, created as part of table creation
20. Index comparison for different kinds of operations
Operation | Memory hash | Memory nonclustered | Disk
Index scan, retrieve all table rows | Yes | Yes | Yes
Index seek (=) | Yes (full key required) | Yes | Yes
Index seek (>, <, <=, >=, BETWEEN) | No (index scan) | Yes | Yes
Retrieve rows in a sort order matching the index definition | No | Yes | Yes
Retrieve rows in a sort order matching the reverse of the index definition | No | No | Yes
21. Memory management
Data resides in memory at all times
• Must configure SQL Server with sufficient memory to store
memory-optimized tables
• Failure to allocate memory will fail transactional workload at
run-time
• Integrated with SQL Server memory manager and reacts to
memory pressure where possible
Integration with Resource Governor
• “Bind” a database to a resource pool
• Mem-opt tables in a database cannot exceed the limit of
the resource pool
• Hard limit (80% of phys. memory) to ensure system remains
stable under memory pressure
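Binding a database to a resource pool is done with a system stored procedure. A sketch, where the pool and database names are illustrative:

```sql
-- Create a pool to cap the memory available to memory-optimized
-- tables, then bind the database to it (names are assumptions).
CREATE RESOURCE POOL InMemPool
    WITH (MAX_MEMORY_PERCENT = 40);
ALTER RESOURCE GOVERNOR RECONFIGURE;

EXEC sp_xtp_bind_db_resource_pool
    @database_name = N'InMemDB',
    @pool_name     = N'InMemPool';

-- The binding takes effect the next time the database comes online:
ALTER DATABASE InMemDB SET OFFLINE;
ALTER DATABASE InMemDB SET ONLINE;
```

If the tables grow past the pool's limit, inserts and updates begin to fail rather than destabilizing the whole server, which is the point of the binding.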
22. Garbage collection
Stale Row Versions
• Updates, deletes, and aborted insert operations create
row versions that (eventually) are no longer visible to
any transaction
• Slow down scans of index structures
• Create unused memory that needs to be reclaimed (i.e.
Garbage Collected)
Garbage Collection (GC)
• Analogous to version store cleanup task for disk-based
tables to support Read Committed Snapshot (RCSI)
• System maintains ‘oldest active transaction’ hint
GC Design
• Non-blocking, Cooperative, Efficient, Responsive,
Scalable
• A dedicated system thread for GC
• Active transactions work cooperatively and pick up
parts of GC work
23. Cooperative garbage collection
Key Points
• Scanners can remove expired rows when
found
• Offloads work from GC thread
• Ensures that frequently visited areas of the
index are cleaned regularly
• A row needs to be removed from all indexes
before memory can be freed
• Garbage collection is most efficient if all
indexes are frequently accessed
[Diagram: row versions with (Begin, End) timestamps — (100, 200) John Smith Kirkland; (200, ∞) John Smith Redmond; (50, 100) Jim Spring Kirkland; (300, ∞) Ken Stone Boston. TX4 begins at 210; the oldest active transaction hint is 175, so versions whose End timestamp precedes 175 are garbage.]
24. Durability
Memory-optimized tables can be durable or non-durable
• Default is ‘durable’
• Non-durable tables are useful for transient data
Durable tables are persisted in a single memory-optimized
file group
Storage used for memory-optimized tables has a different access pattern
than for disk-based tables
File group can have multiple containers (volumes)
Additional containers aid in parallel recovery; recovery happens at
the speed of IO
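Creating the memory-optimized filegroup and its containers can be sketched as follows; the database name and paths are illustrative assumptions. Note that for memory-optimized containers, FILENAME is a directory, not a file:

```sql
-- Add a memory-optimized filegroup and two containers on separate
-- volumes, so recovery can load data/delta files in parallel.
ALTER DATABASE InMemDB
    ADD FILEGROUP InMemFG CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE InMemDB
    ADD FILE (NAME = 'InMemDB_c1', FILENAME = 'D:\data\InMemDB_c1')
    TO FILEGROUP InMemFG;

ALTER DATABASE InMemDB
    ADD FILE (NAME = 'InMemDB_c2', FILENAME = 'E:\data\InMemDB_c2')
    TO FILEGROUP InMemFG;
```

Spreading containers across volumes is what lets recovery run at the aggregate IO speed of the drives.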
25. On-disk storage
Filestream is the underlying storage mechanism
Checksums and single-bit correcting ECC on files
Data files
• ~128MB in size, write 256KB chunks at a time
• Stores only the inserted rows (i.e. table content)
• Chronologically organized streams of row versions
Delta files
• File size is not constant, write 4KB chunks at a time.
• Stores IDs of deleted rows
26. Logging for memory-optimized tables
Uses SQL transaction log to store content
Each log record contains a log record header followed by opaque
memory optimized-specific log content
All logging for memory-optimized tables is logical
• No log records for physical structure modifications.
• No index-specific / index-maintenance log records.
• No UNDO information is logged
Recovery Models
All three recovery models are supported
27. Backup for memory-optimized tables
Integrated with SQL Database Backup
• Memory-optimized filegroup is backed up as part of a SQL Server
database backup
• Existing backup scripts work with minimal or no changes
• Transaction log backup includes memory-optimized log records
Not supported
Differential backup
28. Recovery for memory-optimized tables
Analysis Phase
Finds the last completed checkpoint
Data Load
• Load from set of data/delta files from the last completed
checkpoint
• Parallel Load by reading data/delta files using 1 thread / file
Redo phase to apply tail of the log
• Apply the transaction log from last checkpoint
• Concurrent with REDO on disk-based tables
No UNDO phase for memory-optimized tables
Only committed transactions are logged
35. In-memory OLTP summary
What’s being delivered
• High-performance, memory-optimized OLTP engine integrated
into SQL Server and architected for modern hardware trends
Main benefits
• Optimized for in-memory data, with up to 20–30x throughput gains
‐ Indexes (hash and range) exist only in memory; no buffer pool, B-trees
‐ T-SQL compiled to machine code via C code generator and Visual C
compiler
‐ Core engine uses lock-free algorithms; no lock manager, latches, or
spinlocks
• Multiversion optimistic concurrency control with full ACID
support
• On-ramp existing applications
• Integrated experience with same manageability, administration,
and development experience
37. Cardinality
• Execution plans use cardinality estimates to predict the number of rows at each
step of a query
• Although the new cardinality estimator still works from statistics, the algorithm has
changed dramatically
• Databases at the native compatibility level (120) use the new estimator
• Add OPTION (QUERYTRACEON 9481) to run a query under the old estimator
(per-query basis)
• Databases at compatibility level 110 can run a query under the new estimator with
the QUERYTRACEON 2312 hint (per-query basis)
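The two per-query trace flags are applied as query hints; the table and predicate here are illustrative:

```sql
-- On a compatibility level 120 database, force the legacy (pre-2014)
-- cardinality estimator for a single query:
SELECT * FROM dbo.Orders WHERE OrderDate > '2014-01-01'
OPTION (QUERYTRACEON 9481);

-- On a compatibility level 110 database, opt a single query in to
-- the new estimator:
SELECT * FROM dbo.Orders WHERE OrderDate > '2014-01-01'
OPTION (QUERYTRACEON 2312);
```

Comparing the estimated plans of the hinted and unhinted versions is the quickest way to see which estimator a given query benefits from.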
38. Scenarios
• The new estimator uses an average cardinality for recently added ascending
data (the ascending-key problem) without a statistics update
‐ Old version estimates 0 rows
‐ New version uses average cardinality
• The new estimator assumes filtered predicates on the same table have some
correlation
‐ Old vs. new behavior depends on the specific sample
• The new estimator assumes filtered predicates on different tables are
independent
‐ Old vs. new behavior depends on the specific sample
39. Suggestions
• Check the performance of your workloads and do the math:
‐ Compatibility level 110
‐ Compatibility level 110, adding OPTION (QUERYTRACEON 2312) to some queries
‐ Native compatibility level (120)
‐ Native compatibility level (120), adding OPTION (QUERYTRACEON 9481) to some queries
42. Separation of duties enhancement
• Four new permissions
‐ CONNECT ANY DATABASE (server scope)
‐ IMPERSONATE ANY LOGIN (server scope)
‐ SELECT ALL USER SECURABLES (server scope)
‐ ALTER ANY DATABASE EVENT SESSION (database scope)
• Main benefit
‐ Greater role separation to restrict multiple DBA roles
‐ Ability to create new roles for database administrators who are not
sysadmin (super user)
‐ Ability to create new roles for users or apps with specific purposes
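The new server-scope permissions are granted like any other. A sketch of a monitoring login that can read everything without being sysadmin; the login name and password are illustrative assumptions:

```sql
-- Hypothetical auditing login: can connect to every database and
-- read all user data, but is not a member of sysadmin.
CREATE LOGIN AuditReader WITH PASSWORD = '<strong password>';

GRANT CONNECT ANY DATABASE       TO AuditReader;
GRANT SELECT ALL USER SECURABLES TO AuditReader;

-- Optionally, allow impersonation for troubleshooting scenarios:
GRANT IMPERSONATE ANY LOGIN TO AuditReader;
```

Because these are ordinary permissions, they can also be revoked or denied selectively, which is what enables the finer role separation the slide describes.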
43. Best practices for separation of duties
• Eliminate the use of superusers (SA login, SYSADMIN server
role, DBO database user)
• Use permission system rather than superuser
• Use CONTROL SERVER (server level) and CONTROL DATABASE
(database level) instead and use DENY for specifics
• Always document the use of ownership chains
44. Backup encryption
• Increase security of backups stored separate from the instance
(another environment such as the Cloud)
• Encryption keys can be stored on-premises while backup files
reside in the cloud
• Support non-encrypted databases (don’t need to turn on
Transparent Data Encryption anymore)
• Different policies for databases and their backups
45. T-SQL BACKUP / RESTORE
BACKUP DATABASE <dbname> TO <device> = <path to device>
WITH
ENCRYPTION
(
ALGORITHM = <Algorithm_name> ,
{ SERVER CERTIFICATE = <Encryptor_Name> |
SERVER ASYMMETRIC KEY = <Encryptor_Name> }
);
No changes to RESTORE
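A concrete instance of the syntax above; the database, certificate, and path names are assumptions. The certificate must exist in master before it can encrypt a backup:

```sql
-- One-time setup: a database master key and a certificate in master.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE CERTIFICATE BackupCert WITH SUBJECT = 'Backup encryption certificate';

-- Encrypted backup using AES_256 with the certificate above:
BACKUP DATABASE MyDB
TO DISK = 'D:\backups\MyDB.bak'
WITH ENCRYPTION (
    ALGORITHM = AES_256,
    SERVER CERTIFICATE = BackupCert
);
```

Back up the certificate and its private key separately; without them the backup cannot be restored, which is exactly the protection the feature provides.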
47. Additional details
• AES 128, AES 192, AES 256, and Triple DES
• Unique backup key is generated for each backup
• Certificate
• Asymmetric key from an EKM provider only
• All operations require certificate or key
• Appended backup is not supported
• Compression has no effect on pre-encrypted databases
49. AlwaysOn in SQL Server 2014
• What’s being delivered
• Increase number of secondaries from four to eight
• Increase availability of readable secondaries
• Support for Windows Server 2012 CSV
• Enhanced diagnostics
• Main benefits
• Further scale out read workloads across (possibly geo-distributed) replicas
• Use readable secondaries despite network failures (important in geo-distributed environments)
• Improve SAN storage utilization
• Avoid drive letter limitation (max 24 drives) via CSV paths
• Increase resiliency of storage failover
• Ease troubleshooting
50. Increase number of Availability Group secondaries
Description
• Increase number of secondaries (4–8)
• Max number of sync secondaries is still two
Reason
• Customers want to use readable secondaries
• One technology to configure and manage
• Many times faster than replication
• Customers are asking for more database replicas (4–8)
• To reduce query latency (large-scale environments)
• To scale out read workloads
51. Support for Windows Server Cluster Shared Volumes
Description
Allow FCI customers to configure CSV paths for system and user databases
Reason
• Avoid drive letter limitation on SAN (max 24 drives)
• Improves SAN storage utilization and management
• Increased resiliency of storage failover (abstraction of temporary disk-level
failures)
• Migration of SQL Server customers using PolyServe (to be discontinued in 2013)
54. Resource Governor goals
• Ability to differentiate workloads
• Ability to monitor resource usage per group
• Limit controls to enable throttled execution or to
prevent/minimize the probability of runaway queries
• Prioritize workloads
• Provide predictable execution of workloads
• Specify resource boundaries between workloads
56. Complete resource governance
• What’s being delivered
‐ Add max/min IOPS per volume to Resource Governor pools
‐ Add DMVs and perfcounters for IO statistics per pool per volume
‐ Update SSMS Intellisense for new T-SQL
‐ Update SMO and DOM for new T-SQL and objects
• Main benefits
‐ Better isolation (CPU, memory, and IO) for multitenant workloads
‐ Guaranteed performance in private cloud and hosting scenarios
57. Resource pools
• Represents physical resources
of server
• Can have one or more
workloads assigned to pool
• Pool divided into shared and
non-shared
• Pools control min/max for
CPU/memory and now IOPS
CREATE RESOURCE POOL pool_name
[ WITH
( [ MIN_CPU_PERCENT = value ]
[ [ , ] MAX_CPU_PERCENT = value ]
[ [ , ] CAP_CPU_PERCENT = value ]
[ [ , ] AFFINITY {SCHEDULER = AUTO |
(Scheduler_range_spec) | NUMANODE =
(NUMA_node_range_spec)} ]
[ [ , ] MIN_MEMORY_PERCENT = value ]
[ [ , ] MAX_MEMORY_PERCENT = value ]
[ [ , ] MIN_IOPS_PER_VOLUME = value ]
[ [ , ] MAX_IOPS_PER_VOLUME = value ])
]
58. Resource pools
• Minimums across all resource pools cannot exceed 100
percent
• The non-shared portion provides minimums
• The shared portion provides maximums
• Pools can define min/max for CPU/memory/IOPS
‐ Minimums are defined by the non-shared portion
‐ Maximums are defined by the shared portion
59. Steps to implement Resource Governor
• Create workload groups
• Create a function to classify requests into workload groups
• Register the classifier function from the previous step with
Resource Governor
• Enable Resource Governor
• Monitor resource consumption for each workload group
• Use monitor to establish pools
• Assign workload group to pool
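The steps above can be sketched end to end; the pool, group, and classifier names (and the login the classifier checks) are illustrative assumptions:

```sql
-- Steps 1, 6-7: a pool with CPU and IOPS caps, and a workload group in it.
CREATE RESOURCE POOL poolAdhoc
    WITH (MAX_CPU_PERCENT = 50, MAX_IOPS_PER_VOLUME = 500);
CREATE WORKLOAD GROUP groupAdhoc USING poolAdhoc;
GO

-- Steps 2-3: a schema-bound classifier in master routes requests.
CREATE FUNCTION dbo.fnClassifier() RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    RETURN CASE WHEN SUSER_SNAME() = 'AdhocUser'  -- assumed login
                THEN 'groupAdhoc' ELSE 'default' END;
END;
GO

-- Step 4: register the classifier and enable Resource Governor.
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

The classifier runs for every new session, so keep it trivial; a slow classifier throttles all logins.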
60. Resource Governor scenarios
• Scenario 1: I just got a new version of SQL Server and would
like to make use of resource governor. How can I use it in my
environment?
• Scenario 2 (based on Scenario 1): Based on monitoring results I
would like to see an event any time a query in the ad-hoc
group (groupAdhoc) runs longer than 30 seconds.
• Scenario 3 (based on Scenario 2): I want to further restrict the
ad-hoc group so it does not exceed 50 percent of CPU usage
when all requests are cumulated.
61. Monitoring Resource Governor
• System views
‐ sys.resource_governor_resource_pools
‐ sys.resource_governor_configuration
• DMVs
‐ sys.dm_resource_governor_resource_pools
‐ sys.dm_resource_governor_resource_pool_volumes
‐ sys.dm_resource_governor_configuration
• New performance counters
‐ SQL Server:Resource Pool Stats
‐ SQL Server:Workload group
• XEvents
‐ file_read_enqueued
‐ file_write_enqueued
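The DMVs above can be queried directly. For example, the new per-pool, per-volume IO statistics alongside per-pool CPU and memory usage (column lists are trimmed for readability):

```sql
-- Per-pool, per-volume IO statistics (new with IO governance in 2014):
SELECT *
FROM sys.dm_resource_governor_resource_pool_volumes;

-- CPU and memory usage per resource pool:
SELECT pool_id, name, total_cpu_usage_ms, used_memory_kb
FROM sys.dm_resource_governor_resource_pools;
```

Sampling these views before setting MIN/MAX_IOPS_PER_VOLUME is the practical way to establish the baselines the implementation steps call for.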
63. Backup to Windows Azure
What’s being delivered
• SQL Server supports backups to and restores from the Windows Azure Blob storage service
(UI, T-SQL, PowerShell commandlets)
Main benefit: Take advantage of Windows Azure Blob storage
• Flexible, reliable (3-copies geo-DR), and limitless off-site storage
• No need of backup media management
• No overhead of hardware management
CREATE CREDENTIAL mystoragecred
WITH IDENTITY = 'mystorage',
SECRET = '<your storage access key>';
BACKUP DATABASE mydb TO URL = 'https://mystorage.blob.core.windows.net/backup-container/mydb-20130411.bak'
WITH CREDENTIAL = 'mystoragecred',
FORMAT, COMPRESSION, STATS = 5,
MEDIANAME = 'mydb backup 20130411', MEDIADESCRIPTION = 'Backup of mydb';
64. Backup to Windows Azure
[Comparison diagram:]
Box (on-premises media):
• On-site/off-site storage costs
• Device management costs
• XDrives limited to 1 terabyte; max 16 drives
• Manage drives and policy
Windows Azure Blob storage:
• Near-"bottomless" storage
• Off-site, geo-redundant
• No provisioning, no device management
• Media safety (decay-free)
• Remote accessibility
65. Backup to Windows Azure
• Simple configuration UI
• Easy creation of Azure credential
• No overhead
66. Backup to Windows Azure Tool
• What is it?
‐ Stand-alone Tool that adds backup to Windows
Azure capabilities and backup encryption to prior
versions of SQL Server
• Benefits
‐ One Cloud Backup strategy across prior versions of
SQL Server including 2005, 2008, and 2008 R2
‐ Adds backup encryption to prior versions, locally or
in the cloud
‐ Takes advantage of backup to Azure
‐ Easy configuration
67. Managed backup to Azure
• What’s being delivered
‐ Agent that manages and automates SQL Server
backup policy
• Main benefit
‐ Large-scale management and no need to manage
backup policy
Context-aware – for example, workload/throttling
Minimal knobs – control retention period
Manage whole instance or particular databases
‐ Take advantage of backup to Azure
Inherently off-site
Geo-redundant
Minimal storage costs
Zero hardware management
Example:
EXEC smart_admin.sp_set_db_backup
@database_name='TestDB',
@storage_url=<storage url>,
@retention_days=30,
@credential_name='MyCredential',
@enable_backup=1
69. SQL Server data and log files in Windows Azure
storage
• What’s being delivered
‐ Ability to move data and log files to Windows Azure
Storage, while keeping the compute node of SQL
Server on-premises
‐ Transparent Data Encryption (TDE) is supported
• Main benefits
‐ No application changes required
‐ Centralized copy of data and log files
‐ Enjoy unlimited storage capacity in Azure Storage
(built in HA, SLA, geo-DR)
‐ Secure because TDE encryption key can be stored
on-premises
‐ Restore database is simply an attach operation
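The feature is driven by a credential named after the container URL, secured with a Shared Access Signature; the storage account, container, and database names below are illustrative assumptions:

```sql
-- Credential named for the blob container, using a SAS token
-- (account/container names are placeholders):
CREATE CREDENTIAL [https://myacct.blob.core.windows.net/data]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<SAS token>';

-- Data and log files live in the container; compute stays on-premises:
CREATE DATABASE HybridDB
ON (NAME = HybridDB_data,
    FILENAME = 'https://myacct.blob.core.windows.net/data/HybridDB.mdf')
LOG ON (NAME = HybridDB_log,
    FILENAME = 'https://myacct.blob.core.windows.net/data/HybridDB.ldf');
```

Because the files are already centralized in Azure Storage, "restoring" on another compute node is just re-running CREATE DATABASE ... FOR ATTACH against the same URLs.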
73. Deploy databases to Windows Azure VM
• What’s being delivered
‐ New wizard to deploy databases to SQL
Server in Windows Azure VM
‐ Can also create a new Windows Azure VM
if needed
• Main benefits
‐ Easy to use
Perfect for database administrators new to
Azure and for ad hoc scenarios
‐ Complexity hidden
Detailed Azure knowledge not needed
Almost no overhead: defining factor for
time-to-transfer is database size
74. Call to
action
• Download Trial SQL Server 2014
http://technet.microsoft.com/en-US/evalcenter/dn205290.aspx
• Trial Azure
http://azure.microsoft.com/en-us/pricing/free-trial/
• SQL Server
http://www.microsoft.com/SQLServer
Editor's notes
select * from [sql]
select * from [sql] where c1>100000
Insert 5000 rows into [sql]
Try the estimated plan (Ctrl+L) for all of those queries at the same time
select * from [sql] where c1>100000 option (querytraceon 2312)
select * from [sql] where c1>100000 option (querytraceon 9481)
select * from [sql] where c1>100000
You can use the Transact-SQL ALTER TABLE...SWITCH statement to quickly and efficiently transfer subsets of your data in the following ways:
Assigning a table as a partition to an already existing partitioned table.
Switching a partition from one partitioned table to another.
Reassigning a partition to form a single table.
Not only are we enabling new unique hybrid scenarios, we are also simplifying cloud adoption for our customers. With SQL Server 2014 we will also ship a new migration wizard that helps DBAs easily migrate their on-premises SQL Server instance to Windows Azure, again directly through SSMS as you can see in the screenshot: point and click, and your instance will be up and running in Azure in no time.
Once you become familiar with Windows Azure, you can start to take advantage of Windows Azure Virtual Machines to run many scenarios in the cloud, including fast dev/test of your SQL Server applications, moving existing applications, hybrid scenarios like we talked about, and BI scenarios in the cloud, because you have full SQL Server functionality including all of the BI services. In addition, you have full control over the VM, so if you want to put your corporate anti-virus on the VM, you can.
Once you have become comfortable with the Windows Azure environment, you can take advantage of the Windows Azure SQL Database service, which speeds development of new database applications even further: because it is a service, you don't have to manage the database or patch the OS; we take care of that, and all you do is develop your application. The SQL Database service also has unique cloud features like dynamic scalability of the database using Federations, as Flavorus did to achieve their business goals. It also offers an SLA for the database, and you don't have to think about setting up high availability because it is built in to the database service by default. This is where we see cloud applications achieving fast development, less maintenance, and faster time to market.