Windows Server 2012 R2
1.
2. DOWNLOAD Windows Server 2012 R2
Preview
aka.ms/ws2012r2
DOWNLOAD System Center 2012 R2
Preview
aka.ms/sc2012r2
Microsoft Virtual Academy (MVA)
http://www.microsoftvirtualacademy.com
7. Massive scalability for the
most demanding workloads
Hosts
•
Support for up to 320 logical processors
& 4TB physical memory per host
•
Support for up to 1,024 virtual machines
per host
Clusters
•
Support for up to 64 physical nodes &
8,000 virtual machines per cluster
Virtual Machines
•
Support for up to 64 virtual processors and
1TB memory per VM
8. VMs built on Optimized,
Software-Based Devices
Ease of Management & Operations
•
PXE boot from Optimized vNIC
•
Hot-Add CD/DVD Drive
Dynamic Storage
•
VMs have UEFI firmware with support
for GPT partitioned OS boot disks >2TB
•
Faster Boot from Virtual SCSI with Online
Resize & increased performance
Security
•
Removal of emulated devices reduces
attack surface
•
VM UEFI firmware supports Secure Boot
9. VHDX Provides Increased
Scale, Protection &
Alignment
Features
• Storage capacity up to 64 TBs
• Corruption protection during power failures
• Optimal structure alignment for large-sector
disks
[Diagram: VHDX on-disk layout — header region (header, metadata table); intent log; data region, with large allocations and 1 MB alignment (Block Allocation Table (BAT), user data blocks, sector bitmap blocks); metadata region, with small allocations and unaligned data (user metadata, file metadata)]
Benefits
• Increases storage capacity
• Protects data
• Helps to ensure quality performance on
large-sector disks
10. Online VHDX Resize
provides VM storage
flexibility
Expand Virtual SCSI Disks
1. Grow VHD & VHDX files whilst attached
to a running virtual machine
2. Then expand volume within the guest
Shrink Virtual SCSI Disks
1. Reduce volume size inside the guest
2. Shrink the size of the VHD
or VHDX file whilst the VM is running
11. Achieve higher levels of
density for your Hyper-V
hosts
Windows Server 2008 R2 SP1
•
Introduced Dynamic Memory to enable
reallocation of memory automatically
between running virtual machines
Enhanced in Windows Server 2012 & R2
•
Minimum & Startup Memory
•
Smart Paging
•
Memory Ballooning
•
Runtime Configuration
[Diagram: VM1 memory bar showing minimum memory, memory in use, and maximum memory, drawing on the Hyper-V physical memory pool; the administrator can increase maximum memory without a restart]
12. Control allocation of Storage
IOPS between VM Disks
•
Allows an administrator to specify a
maximum IOPS cap
•
Takes into account incoming &
outgoing IOPS
•
Configurable on a VHDX by VHDX
basis for granular control whilst VM is
running
•
Prevents VMs from consuming all
of the available I/O bandwidth to
the underlying physical resource
•
Supports Dynamic, Fixed
& Differencing
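The IOPS cap described above can be pictured as a token bucket that refills at the configured rate. This is a minimal invented simulation for illustration, not Hyper-V's actual Storage QoS mechanism; the `IopsCap` class and its refill policy are assumptions.

```python
class IopsCap:
    """Toy token-bucket sketch of a per-VHDX maximum-IOPS cap.

    Each second the bucket refills with up to `max_iops` tokens; an I/O
    operation (incoming or outgoing) proceeds only if a token is available.
    """

    def __init__(self, max_iops, now=0.0):
        self.max_iops = max_iops
        self.tokens = float(max_iops)
        self.last = now

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at one second's worth.
        self.tokens = min(self.max_iops,
                          self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

cap = IopsCap(max_iops=100, now=0.0)
# 150 I/O requests arriving within the same second: only 100 pass.
allowed = sum(cap.allow(now=0.0) for _ in range(150))
print(allowed)  # 100
```

Because the cap is checked per disk, each VHDX can carry its own limit while the VM keeps running.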
13. Comprehensive feature
support for virtualized Linux
Significant Improvements in
Interoperability
•
Multiple supported Linux distributions
and versions on Hyper-V.
•
Includes Red Hat, SUSE, OpenSUSE,
CentOS, and Ubuntu
Comprehensive Feature Support
•
64 vCPU SMP
•
Full Dynamic Memory Support
•
Live Backup
•
Deeper Integration Services Support
•
Virtual SCSI, Hot-Add & Online Resize
[Diagram: Hyper-V architecture — worker processes, WMI provider, management service, configuration store, Windows kernel, virtual service provider, independent hardware vendor drivers, Hyper-V, server hardware]
14. • Increase flexibility of virtual machine
placement & administrator efficiency
Complete Flexibility for
Virtual Machine Migrations
[Diagram: source Hyper-V host and destination Hyper-V host, each with a virtual machine and its memory, linked by an IP connection; live migration transfers configuration data, memory content, and modified memory pages]
Migration begins: disk contents are copied from the source VHD to the new destination VHD.
Live migration continues: reads and writes go to the source VHD; writes are mirrored to the destination VHD while outstanding changes are replicated.
Live migration completes.
• Simultaneously live migrate VM & virtual
disks between hosts
• Nothing shared but an Ethernet cable
• No clustering or shared storage
requirements
• Reduce downtime for migrations across
cluster boundaries
Source device
Target device
16. Duplication of a Virtual
Machine whilst Running
Export a clone of a running VM
•
Point-in-time image of running VM
exported to an alternate location
•
Useful for troubleshooting VM
without downtime for primary VM
Export from an existing checkpoint
•
Export a full cloned virtual machine
from a point-in-time, existing checkpoint
of a virtual machine
•
Checkpoints automatically merged into
single virtual disk
VM1 VM2
17. Simplified upgrade process
from 2012 to 2012 R2
•
Customers can upgrade from Windows
Server 2012 Hyper-V to Windows Server
2012 R2 Preview Hyper-V with no VM
downtime
•
Supports Shared Nothing Live Migration
for migration when changing storage
locations
•
If using SMB share, migration transfers
only the VM running state for faster
completion
•
Automated with PowerShell
•
One-way Migration Only
Hyper-V Cluster Upgrade without Downtime
2012 Cluster Nodes
2012 R2 Cluster Nodes
18. Complete Flexibility for
Deploying App-Level HA
•
Full support for running clustered
workloads on Hyper-V host cluster
•
Guest Clusters that require shared storage
can utilize software iSCSI, Virtual FC or
SMB
•
Full support for Live Migration of Guest
Cluster Nodes
•
Full Support for Dynamic Memory of Guest
Cluster Nodes
•
Restart Priority, Possible & Preferred
Ownership, & AntiAffinityClassNames
help ensure optimal operation
Guest cluster running on a Hyper-V cluster
Guest cluster nodes supported with Live Migration
Guest cluster node restarts on physical host failure
19. Guest Clustering No Longer
Bound to Storage Topology
•
VHDX files can be presented to multiple
VMs simultaneously, as shared storage
•
VM sees shared virtual SAS disk
•
Unrestricted number of VMs can
connect to a shared VHDX file
•
Utilizes SCSI-persistent reservations
•
VHDX can reside on a Cluster Shared
Volume on block storage, or on
File-based storage
•
Supports both Dynamic and Fixed VHDX
Flexible choices for placement of Shared VHDX
20. Replicate to 3rd Location for
Extra Level of Resiliency
•
Once a VM has been successfully replicated to
the replica site, the replica can be replicated
to a 3rd location
•
Chained Replication
•
Extended Replica contents match the
original replication contents
•
Extended Replica replication frequencies
can differ from original replica
•
Useful for scenarios such as SMB ->
Service Provider -> Service Provider DR
Site
Replication can be enabled from primary to secondary
Replication can be configured from the 1st replica to a 3rd site
21. Enhancing VMConnect for
the Richest Experience
Improved VMBus Capabilities enable:
•
Audio over VMConnect
•
Copy & Paste between Host & Guest
•
Smart Card Redirection
•
Remote Desktop Over VMBus
Enabled for Hyper-V on both Server
& Client
Fully supports Live Migration of VMs
22.
23.
24.
25.
26.
27.
28. Enable your end users
Allow users to work on the
devices of their choice and
provide consistent access to
corporate resources.
Unify your environment
Users
Devices
Apps
Data
Deliver unified application and
device management on-premises and in the cloud.
Protect your data
Management. Access. Protection.
Help protect corporate
information and manage risk.
29. Powered by Windows Server 2012
Desktop
sessions
Firewall
Pooled
VMs
Personal
VMs
1 platform • 1 experience • 3 deployment choices
Corporate Office
Branch Office
Home
Public Location
30. Set up a simple VDI
deployment easily and
quickly
Use wizard-based setup and
deployment for multiple scenarios
Create virtual machines
automatically with settings
31. What should I deploy?
User Profile Disk
• Available with pooled virtual machine
collections and remote desktop session host
collections
• Stores all user settings and data
• Contains roaming user profile, Folder
Redirection cache, and user environment
virtualization
Benefits with pooled virtual machine and
remote desktop session host collections
• Roams with the user within a collection
• Appears as a local disk and improves
application compatibility
User environment virtualization
• To apply roaming settings across collections
Folder Redirection
• To apply roaming user data across
collections
• To centralize user data backup
32. Remote Desktop Connection Broker
With Windows Server 2012
• Active/active high availability mode for brokers
• Scale-out file server and resiliency
• Requires Microsoft SQL Server
• Automatic data migration from single instance
to high availability
[Diagram: Remote Desktop Web Access web farm and Remote Desktop Gateway web farm in front of Remote Desktop Connection Brokers backed by SQL Server clustering and a shared database; a Hyper-V cluster with Remote Desktop Virtualization Hosts, a Remote Desktop Session Host farm, and a Remote Desktop Licensing cluster]
33. DirectAccess
Adaptive graphics
remoting based on
content type
Crisp text always
Aero always on, rich new
Windows UI
Reconnect feature for ease of
movement across devices
Ability to serve desktop apps to
Windows RT tablet users
Full multitouch and
gesture remoting
Full single sign-on
RemoteApp programs
integrate seamlessly with
local desktop
38. Virtual IP address management
NIC Teaming
• Provides network fault tolerance and continuous
availability when network adapters fail by teaming
multiple network interfaces.
• New in R2: Enhanced LBFO performance.
• Vendor agnostic and shipped inbox.
• Provides local or remote management through
Windows PowerShell or UI.
• Enables teams of up to 32 network adapters.
• Aggregates bandwidth from multiple network
adapters.
• Includes multiple modes: switch dependent and
switch independent.
IPAM distributed architecture
• Domain europe.corp.woodbridge.com — IPAM Server (UK), Site: UK, branch office; IPAM server (Redmond), Site: Redmond, head office; each site with DHCP, DNS, DC, and NPS servers
• Domain fareast.corp.woodbridge.com — IPAM Server (Hyderabad), Site: Hyderabad, branch office; IPAM Server (Bangalore), Site: Bangalore, branch office; each site with DHCP, DNS, DC, and NPS servers
39.
40. Server with a GUI
Minimal Server
Interface
Server Core
41. Feature availability by installation option

Feature                 Server Core   Minimal Server Interface   Server with a GUI   Desktop Experience
Command Prompt          Yes           Yes                        Yes                 Yes
PowerShell/.NET         Yes           Yes                        Yes                 Yes
Server Manager          No            Yes                        Yes                 Yes
MMC                     No            Yes                        Yes                 Yes
Control Panel           No            No                         Yes                 Yes
CPL Applets             No            Some                       Yes                 Yes
Explorer Shell          No            No                         Yes                 Yes
Taskbar                 No            No                         Yes                 Yes
System Tray             No            No                         Yes                 Yes
Internet Explorer       No            No                         Yes                 Yes
Help                    No            No                         Yes                 Yes
Themes                  No            No                         No                  Yes
Start screen (Metro)    No            No                         Yes                 Yes
Metro-style apps        No            No                         No                  Yes
Media Player            No            No                         No                  Yes
47. DOWNLOAD Windows Server 2012 R2
Preview
aka.ms/ws2012r2
DOWNLOAD System Center 2012 R2
Preview
aka.ms/sc2012r2
Microsoft Virtual Academy (MVA)
http://www.microsoftvirtualacademy.com
But how does it work? Offloaded Data Transfer (ODX) support is a feature of the storage stack of Hyper‑V in Windows Server 2012. ODX, when used with offload-capable SAN storage hardware, lets a storage device perform a file copy operation without the main processor of the Hyper‑V host actually reading the content from one storage location and writing it to another.

ODX uses a token-based mechanism for reading and writing data within or between intelligent storage arrays. Instead of routing the data through the host, a small token is copied between the source and destination. The token simply serves as a point-in-time representation of the data. For example, when you copy a file or migrate a virtual machine between storage locations (either within or between storage arrays), a token that represents the virtual machine file is copied, which removes the need to copy the underlying data through the servers. In a token-based copy operation, the steps are as follows:
1. A user initiates a file copy or move in Windows Explorer, a command-line interface, or a virtual machine migration.
2. Windows Server automatically translates this transfer request into an ODX operation (if supported by the storage array) and receives a token representation of the data.
3. The token is copied between the source and destination systems.
4. The token is delivered to the storage array.
5. The storage array performs the copy internally and returns progress status.

ODX is especially significant in the cloud space when you must provision new virtual machines from virtual machine template libraries, or when virtual hard disk operations require large blocks of data to be copied, as in virtual hard disk merges, storage migration, and live migration.

These copy operations are handled by a storage device that is able to perform offloads (such as an offload-capable iSCSI or Fibre Channel SAN, or a file server based on Windows Server 2012), which frees up the Hyper‑V host processors to carry more virtual machine workloads. Having an ODX-compliant array provides a wide range of benefits:
• ODX frees up the main processor to handle virtual machine workloads and lets you achieve native-like performance when your virtual machines read from and write to storage.
• ODX greatly reduces the time needed to copy large amounts of data.
• With ODX, copy operations don't use processor time.
• Virtualized workloads now operate as efficiently as they would in a non-virtualized environment.
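The token-based copy steps can be illustrated with a toy model. Everything here (the `StorageArray` class and its method names) is invented for the sketch; real ODX is built on SCSI token commands implemented by the array, not Python objects.

```python
import secrets

class StorageArray:
    """Toy model of an offload-capable array (illustration only, not ODX)."""
    def __init__(self):
        self.luns = {}      # LUN name -> bytearray of stored data
        self.tokens = {}    # token -> point-in-time snapshot of source data

    def populate_token(self, lun, offset, length):
        # Host asks the array for a token representing the source data.
        token = secrets.token_hex(16)
        self.tokens[token] = bytes(self.luns[lun][offset:offset + length])
        return token

    def write_using_token(self, token, lun, offset):
        # Host hands the small token back; the array moves the data
        # internally, so the payload never crosses the host's CPU.
        data = self.tokens[token]
        self.luns[lun][offset:offset + len(data)] = data
        return len(data)

array = StorageArray()
array.luns["src"] = bytearray(b"virtual machine template contents")
array.luns["dst"] = bytearray(len(array.luns["src"]))

tok = array.populate_token("src", 0, len(array.luns["src"]))  # step 2: get token
copied = array.write_using_token(tok, "dst", 0)               # steps 3-5: array copies
print(array.luns["dst"].decode())
```

Only the 16-byte token travels between host and array; the bulk data movement stays inside the storage device.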
As stated earlier, Dynamic Memory was introduced with Windows Server 2008 R2 SP1 and is used to reallocate memory between virtual machines that are running on a Hyper-V host. Improvements made within Windows Server 2012 Hyper-V include:
• Minimum memory setting – the ability to set a minimum value for the memory assigned to a virtual machine that is lower than the startup memory setting.
• Hyper-V smart paging – paging used to enable a virtual machine to reboot while the Hyper-V host is under extreme memory pressure.
• Memory ballooning – the technique used to reclaim unused memory from a virtual machine and give it to another virtual machine that needs memory.
• Runtime configuration – the ability to adjust the minimum and maximum memory settings on the fly while the virtual machine is running, without requiring a reboot.

Because a memory upgrade previously required shutting down the virtual machine, a common challenge for administrators was raising the maximum amount of memory for a virtual machine as demand increased. For example, consider a virtual machine running SQL Server and configured with a maximum of 8 GB of RAM. Because of an increase in the size of the databases, the virtual machine now requires more memory. In Windows Server 2008 R2 with SP1, you must shut down the virtual machine to perform the upgrade, which requires planning for downtime and decreases business productivity. With Windows Server 2012, you can apply that change while the virtual machine is running: as memory pressure on the virtual machine increases, an administrator can change its maximum memory value without any downtime for the VM. The VM's Hot-Add memory process then asks for more memory, and that memory becomes available for the virtual machine to use.
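A rough sketch of that scenario, runtime reconfiguration followed by hot-add, as a toy model. The classes and numbers are invented for illustration; this is not Hyper-V's memory balancer, and in practice the change is made with the Hyper-V management tools (e.g. the Set-VMMemory cmdlet).

```python
class Vm:
    def __init__(self, name, minimum, startup, maximum):
        self.name, self.minimum, self.maximum = name, minimum, maximum
        self.assigned = startup   # MB currently backed by physical memory

class DynamicMemoryHost:
    """Toy model of runtime configuration plus hot-add (illustration only)."""
    def __init__(self, physical_mb):
        self.free = physical_mb
        self.vms = []

    def start(self, vm):
        assert self.free >= vm.assigned
        self.free -= vm.assigned
        self.vms.append(vm)

    def set_maximum(self, vm, new_max):
        # Runtime configuration: raising the cap needs no VM restart.
        assert new_max >= vm.minimum
        vm.maximum = new_max

    def hot_add(self, vm, demand_mb):
        # Grant memory up to the (possibly just-raised) maximum
        # and the host's free physical pool.
        grant = min(demand_mb, vm.maximum - vm.assigned, self.free)
        vm.assigned += grant
        self.free -= grant
        return grant

host = DynamicMemoryHost(physical_mb=32768)
sql = Vm("sql", minimum=2048, startup=4096, maximum=8192)
host.start(sql)
host.hot_add(sql, 8000)            # capped at the 8 GB maximum
host.set_maximum(sql, 12288)       # admin raises the cap while the VM runs
print(host.hot_add(sql, 8000))     # 4096: more memory can now be granted
```

The point of the sketch is the ordering: the cap is raised first, with the VM running, and the subsequent demand is satisfied by hot-add rather than a restart.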
Your computing resources are limited. You need to know how different workloads draw upon these resources, even when they are virtualized. In Windows Server 2012, Hyper‑V introduces Resource Metering, a technology that helps you track historical data on the use of virtual machines. With Resource Metering, you can gain insight into the resource use of specific servers. You can use this data to perform capacity planning, to monitor consumption by different business units or customers, or to capture data needed to help redistribute the costs of running a workload. You could also use the information that this feature provides to help build a billing solution, so that customers of your hosting services can be charged appropriately for resource usage.

Hyper‑V in Windows Server 2012 lets providers build a multitenant environment, in which virtual machines can be served to multiple clients in a more isolated and secure way. Because a single client may have many virtual machines, aggregating resource usage data can be a challenging task. However, Windows Server 2012 simplifies this task by using resource pools, a feature available in Hyper‑V. Resource pools are logical containers that collect the resources of the virtual machines that belong to one client, permitting single-point querying of the client's overall resource use.

Hyper‑V Resource Metering has the following features:
• Uses resource pools, logical containers that collect the resources of the virtual machines that belong to one client and allow single-point querying of the client's overall resource use.
• Works with all Hyper‑V operations.
• Helps ensure that movement of virtual machines between Hyper‑V hosts (such as through live, offline, or storage migration) doesn't affect the collected data.
• Uses Network Metering Port ACLs to differentiate between Internet and intranet traffic, so providers can measure incoming and outgoing network traffic for a given IP address range.

Resource Metering can measure the following:
• Average CPU use – average CPU, in megahertz, used by a virtual machine over a period of time.
• Average memory use – average physical memory, in megabytes, used by a virtual machine over a period of time.
• Minimum memory use – lowest amount of physical memory, in megabytes, assigned to a virtual machine over a period of time.
• Maximum memory use – highest amount of physical memory, in megabytes, assigned to a virtual machine over a period of time.
• Maximum disk allocation – highest amount of disk space capacity, in megabytes, allocated to a virtual machine over a period of time.
• Incoming network traffic – total incoming network traffic, in megabytes, for a virtual network adapter over a period of time.
• Outgoing network traffic – total outgoing network traffic, in megabytes, for a virtual network adapter over a period of time.
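Per-pool aggregation can be pictured with a small sketch. The sample numbers and the `pool_usage` helper are invented for illustration; real metering data comes from the Hyper-V metering cmdlets (Measure-VM and Measure-VMResourcePool), not from this code.

```python
from collections import defaultdict

# Hypothetical samples: (vm, pool, avg_cpu_mhz, avg_mem_mb, net_in_mb, net_out_mb).
samples = [
    ("web1", "contoso",  800, 2048, 120, 300),
    ("web2", "contoso",  650, 1024,  90, 210),
    ("db1",  "fabrikam", 1500, 8192,  40,  35),
]

def pool_usage(samples):
    """Single-point query of a client's overall use: total each metric per pool.

    (A real report would average the CPU/memory figures rather than sum them;
    summing keeps this toy example simple.)
    """
    totals = defaultdict(lambda: [0, 0, 0, 0])
    for _vm, pool, cpu, mem, net_in, net_out in samples:
        for i, value in enumerate((cpu, mem, net_in, net_out)):
            totals[pool][i] += value
    return dict(totals)

print(pool_usage(samples)["contoso"])  # [1450, 3072, 210, 510]
```

Grouping by pool is what makes a per-client bill a single query instead of a walk over every VM.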
With Windows Server 2012 Hyper-V, you can also perform a "Shared Nothing" Live Migration, where you move a virtual machine, live, from one physical system to another even if they don't have connectivity to the same shared storage. This is useful, for example, in a branch office where you may be storing the virtual machines on local disk and you want to move a VM from one node to another. It is also especially useful when you have two independent clusters and you want to move a virtual machine, live, between them without having to expose their shared storage to one another. You can also use "Shared Nothing" Live Migration to migrate a virtual machine from one datacenter to another, provided your bandwidth is large enough to transfer all of the data between the datacenters.

When you perform a live migration of a virtual machine between two computers that do not share an infrastructure, Hyper-V first performs a partial migration of the virtual machine's storage by creating a virtual machine on the remote system and creating the virtual hard disk on the target storage device.
1. While reads and writes occur on the source virtual hard disk, the disk contents are copied over the network to the new destination virtual hard disk. This copy is performed by transferring the contents of the VHD between the two servers over the IP connection between the Hyper-V hosts.
2. After the initial disk copy is complete, disk writes are mirrored to both the source and destination virtual hard disks while outstanding disk changes are replicated.
3. After the source and destination virtual hard disks are synchronized, the virtual machine live migration process is initiated, following the same process that was used for live migration with shared storage. After the virtual machine's storage is migrated, the virtual machine migrates while it continues to run and provide network services.
4. After the live migration is complete and the virtual machine is successfully running on the destination server, the files on the source server are deleted.
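The copy-then-mirror storage phases can be sketched as a toy model. The `StorageMigration` class and its block-level dictionaries are invented for illustration; Hyper-V's actual transfer protocol works differently.

```python
class StorageMigration:
    """Toy model of the initial-copy and mirror phases (illustration only)."""
    def __init__(self, source_vhd):
        self.source = source_vhd          # dict: block number -> data
        self.dest = {}
        self.mirroring = False

    def guest_write(self, block, data):
        self.source[block] = data
        if self.mirroring:                # phase 2: writes go to both disks
            self.dest[block] = data

    def migrate(self, writes_during_copy):
        # Phase 1: initial copy while the VM keeps writing to the source only.
        for block, data in list(self.source.items()):
            self.dest[block] = data
        outstanding = []
        for block, data in writes_during_copy:
            self.guest_write(block, data) # these land while the copy runs
            outstanding.append(block)
        # Phase 2: mirror new writes and replicate the outstanding changes.
        self.mirroring = True
        for block in outstanding:
            self.dest[block] = self.source[block]
        # Disks are now synchronized: hand off to the normal live migration.
        return self.dest == self.source

m = StorageMigration({0: b"boot", 1: b"data"})
print(m.migrate(writes_during_copy=[(1, b"data-v2"), (2, b"new")]))  # True
```

The key idea the sketch captures is that writes arriving during the initial copy are tracked as outstanding changes and replayed before the disks are declared synchronized.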
Let's consider a simple scenario: you have a client and a server, and you want to request, say, 500 KB of data from the server. The request typically goes through the TCP/IP stack: you place a request for the data with the server, the data gets read, gets broken down into smaller packets, and is transferred back to you. You assemble all these packets into the 500 KB of data that you asked for and send it back up the stack. All of this takes CPU resources. You can't send the entire 500 KB at once no matter how fast your NIC is; you still have to chop it into small packets and send it up and down the stack. Now think about having multiple NICs doing a lot of I/O-intensive operations: your CPU is busy doing this over and over again. You don't want your CPU to be used for this; you want it to work on servicing database requests, indexing, and so on.

Microsoft worked with many network adapter vendors to bring a class of specialized NICs that support high-speed data transfers with SMB Direct into Windows Server 2012. These NICs have a better processor in them and support Remote Direct Memory Access (RDMA), so they can transfer data between themselves without involving the host CPU.

How does it work? Say I need to read 500 KB of data. First I find a place in memory where that data should reside, then I register that with the NIC and get a token back. I send this token to the other side through SMB Direct and say I need to read 500 KB of data. The host on the other side uses this token, identifies the memory that needs to be copied, passes the memory location and the token to its NIC, and tells the two NICs to talk to each other and transfer the data directly. So the two NICs actually do the transfer, while the two CPUs are busy doing something else. RDMA can be incredibly fast, with 1-2 ms latency when doing transfers.

InfiniBand, RoCE, and iWARP network interfaces are supported. New in Windows Server 2012 R2, RDMA technology is introduced to Hyper-V live migrations through the SMB protocol. As with regular SMB file transfers, RDMA offloads CPU work to the NICs during live migration. This means that live migrations can now take advantage of high-speed networking, and they can also stream over multiple networks for improved bandwidth. Live migration with RDMA delivers the highest performance for live migrations, supporting transfer speeds of up to 56 Gb/s.
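The register-then-token flow described above can be sketched as a toy model. The `RdmaNic` class and its method names are invented for illustration; real RDMA NICs are programmed through verbs APIs, and the "token" corresponds to a registered memory region's key.

```python
class RdmaNic:
    """Toy model of the RDMA token flow (illustration only)."""
    def __init__(self, host_memory):
        self.mem = host_memory        # bytearray owned by this NIC's host
        self.regions = {}             # token -> (offset, length)
        self._next = 0

    def register(self, offset, length):
        # Host pins a destination region with its NIC and gets a token back.
        self._next += 1
        token = f"rkey-{self._next}"
        self.regions[token] = (offset, length)
        return token

    def rdma_write(self, peer, token, data):
        # The peer NIC resolves the token and the two NICs move the payload
        # directly; neither host CPU copies the bytes.
        offset, length = peer.regions[token]
        peer.mem[offset:offset + length] = data[:length]

client = RdmaNic(bytearray(32))
server = RdmaNic(bytearray(b"500k of database pages..."))

tok = client.register(0, 25)                       # client: "put the data here"
server.rdma_write(client, tok, bytes(server.mem))  # server NIC pushes directly
print(bytes(client.mem[:25]).decode())
```

Only the small token is exchanged over SMB Direct; the payload lands straight in the client's registered memory.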
Enterprise-class performance and scale have traditionally been associated with high-end storage solutions. But not everyone can afford an expensive SAN; most customers are looking for options that give them the same kind of reliability, resiliency, and availability that high-end solutions offer, but at the cost of industry-standard hardware. With Windows Server 2012, there are no second-class citizens: no matter what storage solution you have, it helps you make the most of your investment and use your resources efficiently. A fundamental assumption made when building the product was that the underlying hardware can fail or be unreliable for several reasons, planned or unplanned, and that the services running on top should be insulated from these failures as much as possible. With that in mind, let's look at some of the features added in Windows Server 2012.
Virtualization has existed at the Hyper-V layer over the last couple of releases of Windows Server. With Windows Server 2012, you also get the ability to virtualize your storage. Storage Spaces lets you consolidate all your SAS- and SATA-connected disks, whether they are SSDs or traditional HDDs, into storage pools. You can then assign these pools to different departments within your enterprise, or to customers, so that data is isolated and administration is easy. Once you have created these pools, you can create logical disks from them, called storage spaces. These logical disks, for the most part, look and act like regular disks, but they can be configured with different resiliency schemes, mirroring or parity, depending on the performance and space requirements.

When you create a storage space, you can choose either thin or fixed provisioning. Thin provisioning lets you grow your storage investment only when you need to: you can create a logical disk (space) that is bigger than your pool and add physical disks only when there is an actual need. Suppose your Hyper-V VMs are stored on logical disks created using Storage Spaces. With trim provisioning, when a large file gets deleted inside one of the VMs, the VM communicates this to the host, the host passes it down to Storage Spaces, and Spaces automatically reclaims this storage and makes it available to other disks within the same pool or other pools. You are optimizing storage utilization with on-demand provisioning and automated capacity reclamation.

Storage Spaces is compatible with other Windows Server 2012 storage features, like SMB Direct and SMB failover clustering, so you can use simple, inexpensive storage devices to create powerful and resilient storage infrastructures on a limited budget.
Storage Spaces enables you to deliver a new category of highly capable storage solutions to all Windows customer segments at a dramatically lower price point. At the same time, you can maximize your operations by leveraging commodity storage to supply high-performance and feature-rich storage to servers, clusters, and applications alike.
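Thin provisioning and trim-driven reclamation can be sketched as a toy model. The `StoragePool` class and its bookkeeping are invented for illustration and are not the Storage Spaces allocator.

```python
class StoragePool:
    """Toy thin-provisioning model (illustration only)."""
    def __init__(self, physical_gb):
        self.physical = physical_gb
        self.allocated = 0            # GB actually backed by physical disks
        self.spaces = {}              # name -> [provisioned_gb, used_gb]

    def create_space(self, name, provisioned_gb):
        # Thin: a space may be provisioned larger than the pool's capacity.
        self.spaces[name] = [provisioned_gb, 0]

    def write(self, name, gb):
        prov, used = self.spaces[name]
        if used + gb > prov or self.allocated + gb > self.physical:
            raise RuntimeError("add physical disks to the pool")
        self.spaces[name][1] += gb
        self.allocated += gb

    def trim(self, name, gb):
        # Guest deletes a large file -> host -> Spaces reclaims the capacity
        # for any space in the pool.
        self.spaces[name][1] -= gb
        self.allocated -= gb

pool = StoragePool(physical_gb=100)
pool.create_space("vmdisks", provisioned_gb=500)   # bigger than the pool
pool.write("vmdisks", 80)
pool.trim("vmdisks", 30)                           # deleted data reclaimed
print(pool.allocated)  # 50
```

The sketch shows the two ideas together: provisioned size is a promise, while physical allocation grows only with real writes and shrinks again on trim.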
Hot data is data that changes frequently and is stored on the faster, but more expensive, solid-state drives; all data starts as hot data. Cold data is data that changes infrequently and is stored on the slower, but cheaper, hard disk drives. If cold data becomes hot, it is automatically moved to the solid-state drives; if hot data becomes cold, it is moved to the hard disk drives.
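A tiering pass over that hot/cold rule can be sketched in a few lines. The thresholds, extent records, and `retier` function are all invented for illustration; Storage Spaces tiering uses its own heat tracking and scheduling.

```python
def retier(extents, promote_threshold=8, demote_threshold=2):
    """Toy tiering pass: move extents between 'ssd' and 'hdd' by access count."""
    moves = []
    for extent in extents:
        if extent["tier"] == "hdd" and extent["accesses"] >= promote_threshold:
            extent["tier"] = "ssd"     # cold data turned hot: promote
            moves.append((extent["id"], "ssd"))
        elif extent["tier"] == "ssd" and extent["accesses"] <= demote_threshold:
            extent["tier"] = "hdd"     # hot data gone cold: demote
            moves.append((extent["id"], "hdd"))
        extent["accesses"] = 0         # reset for the next measurement window
    return moves

extents = [
    {"id": "a", "tier": "ssd", "accesses": 0},    # idle: demote
    {"id": "b", "tier": "hdd", "accesses": 40},   # busy: promote
    {"id": "c", "tier": "ssd", "accesses": 25},   # busy: stays on SSD
]
moves = retier(extents)
print(moves)  # [('a', 'hdd'), ('b', 'ssd')]
```

Running the pass periodically is what keeps the expensive SSD tier reserved for the data that is currently hot.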
Data deduplication is a new storage efficiency feature in Windows Server 2012 that helps address the ever-growing demand for file storage. Instead of expanding the storage used to host the data, the amount of space used by that data is reduced through variable-size chunking and compression. Windows automatically scans your disks, identifies duplicate chunks in the stored data, and stores each chunk only once. Because only one copy is stored for duplicate data, you not only optimize your existing storage infrastructure, you also realize even greater savings by postponing the need to purchase storage upgrades and extending the lifespan of current storage investments.

The disk space savings seen with data deduplication during testing, both internally and by ESG Lab, have been phenomenal. Data deduplication can deliver storage savings of 25-60% for general file shares and 98% for OS VHDs, far above what was possible with Single Instance Storage (SIS) or NTFS compression. Data deduplication also throttles CPU and memory usage to allow implementation on large volumes without impacting server performance, and compression runs can be scheduled for off-peak times to reduce any impact on data access. Reliability and data integrity aren't problems for data deduplication, thanks to metadata redundancy that helps prevent data loss due to unexpected power outages. Checksums, along with data integrity and consistency checks, also help prevent corruption on volumes configured to use data deduplication.

Data deduplication is not for:
• Live VMs
• SQL databases
• ReFS file shares
• Client machines
• Boot data
• Cluster shared volumes
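The chunk-once idea can be sketched as a toy deduplicating store. Real deduplication picks chunk boundaries with a rolling hash over the content; to keep this example deterministic, a literal '/' byte stands in for the hash's boundary condition (cuts still depend on content, not on fixed offsets). All names here are invented, and this is not the Windows dedup engine's algorithm.

```python
import hashlib

def chunks(data, boundary=ord("/"), min_size=8):
    """Toy content-defined chunking: cut after each boundary byte."""
    out, start = [], 0
    for i, b in enumerate(data):
        if b == boundary and i - start >= min_size:
            out.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        out.append(data[start:])
    return out

def dedup(data):
    """Store each distinct chunk once; return bytes kept after deduplication."""
    store = {}
    for c in chunks(data):
        store[hashlib.sha256(c).hexdigest()] = c   # keyed by content hash
    return sum(len(c) for c in store.values())

block = b"the quick brown fox jumps over the lazy dog /" * 50
print(dedup(block), len(block))  # 45 2250: fifty identical chunks stored once
```

Because boundaries follow the content, the same data shifted to a different offset still produces the same chunks and deduplicates, which fixed-size blocking cannot do.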
New Windows PowerShell ISE features. The Windows PowerShell Integrated Scripting Environment (ISE) 3.0 includes many new features to ease beginning users into Windows PowerShell and to provide advanced editing support for scripters. Some of the new features:
• The Show-Command pane lets users find and run cmdlets in a dialog box.
• IntelliSense provides context-sensitive command completion for cmdlet and script names, parameter names and enumerated values, and property and method names. IntelliSense also supports paths, types, and variables.
• Integrated script snippets give access to a library of Windows PowerShell code templates from within the ISE. Pressing Ctrl+J shows a list of script templates; selecting one inserts a partially completed script into the editor.
• Collapsible regions in scripts and XML files make navigation in long scripts easier.

The ForEach-Object and Where-Object cmdlets have been updated to support an intuitive command structure that more closely models natural language. Users can construct commands without script blocks, braces, the current-object automatic variable ($_), or dot operators to get properties and methods. In short, the "punctuation" that plagued beginning users is no longer required.
Windows PowerShell 3.0 provides a comprehensive management platform for all aspects of the data center: servers, network, and storage. Windows PowerShell 3.0 includes 260 core cmdlets. Windows Server 2012 includes more than 2,300 total cmdlets in 85 available modules.
Value added to the PowerShell ecosystem:
• End users – remote management, access anywhere
• Partners – return on their PowerShell investment

Requirements:
• Client – browser (HTML + Ajax)
• Gateway – Windows Server 2012 with the PowerShell Web Access role
• Target – PowerShell remoting
With the new release of Windows PowerShell, sessions aren't just persistent; they are resilient. Robust session connectivity allows sessions to remain in a connected state even when network connectivity is briefly disrupted. Remote sessions can remain connected for up to 4 minutes, even if the client computer crashes or becomes inaccessible, and tasks on the managed nodes continue to run on their own, making the end-to-end system more reliable. If connectivity cannot be restored within 4 minutes, execution on the managed nodes is suspended with no loss of data, and remote sessions automatically transition to a disconnected state, allowing them to be reconnected after network connectivity is restored. Corruption of application and system state from premature termination of running tasks due to unexpected client disconnection is virtually eliminated.
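The 4-minute resiliency window can be sketched as a tiny state machine. The `RobustSession` class is invented for illustration; the real behavior lives in PowerShell remoting and WinRM, not in this code.

```python
class RobustSession:
    """Toy model of the 4-minute resiliency window (illustration only)."""
    GRACE = 240  # seconds a broken session stays in the Connected state

    def __init__(self):
        self.state = "Connected"
        self.lost_at = None

    def connection_lost(self, now):
        self.lost_at = now            # tasks on the managed node keep running

    def tick(self, now):
        if self.lost_at is not None and now - self.lost_at >= self.GRACE:
            self.state = "Disconnected"   # suspended, no data loss

    def reconnect(self, now):
        if self.lost_at is not None and now - self.lost_at < self.GRACE:
            self.lost_at = None           # within the window: resume transparently
        elif self.state == "Disconnected":
            self.state = "Connected"      # explicit reconnect after the window
            self.lost_at = None
        return self.state

s = RobustSession()
s.connection_lost(now=0)
s.tick(now=120)
print(s.reconnect(now=120))   # Connected: a blip within 4 minutes heals itself
s.connection_lost(now=200)
s.tick(now=500)
print(s.state)                # Disconnected: window elapsed, session suspended
print(s.reconnect(now=500))   # Connected: reconnected once the network returns
```

The two outcomes mirror the text: a short disruption is absorbed silently, while a longer one suspends work safely until an explicit reconnect.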