SlideShare une entreprise Scribd logo
1  sur  22
Télécharger pour lire hors ligne
SharePoint Server 2013 Disaster Recovery in Windows Azure
This topic has not yet been rated - Rate this topic

Applies to: Windows Server 2012, Windows Azure, SQL Server 2012, SharePoint Server 2013 Enterprise, Office Professional 2013, data and storage, AD
DS
Topic Last Modified: 2014-03-03
Summary: Using Windows Azure Infrastructure Services, you can create a disaster-recovery environment for your on-premises SharePoint farm.
This article describes how to design and implement this solution.
Use this article with the following solution model: SharePoint Disaster Recovery in Windows Azure
Figure 1: SharePoint Disaster Recovery Solution Hosted in Windows Azure

Visio version [1]
PDF version [2]
In this article:
Use Windows Azure Infrastructure Services for disaster recovery[3]
Many organizations do not have a disaster-recovery environment for SharePoint. This is because it can be expensive to build and maintain a
recovery environment on-premises. Windows Azure Infrastructure Services provides compelling options for disaster recovery environments that
are more flexible and less expensive than the on-premises alternatives.
Advantages include:
Hosted secondary datacenter Use Windows Azure Infrastructure Services instead of investing in a secondary datacenter in a different
region.
Lower-cost disaster-recovery environments Maintain and pay for fewer resources than an on-premises disaster-recovery environment.
The number of resources depend on which disaster-recovery environment you choose: cold standby, warm standby, or hot standby.
Windows Azure Infrastructure Services is elastic Easily scale-out your recovery SharePoint farm in the event of a disaster to meet load
requirements. Scale in when you no longer need the resources.
Entry-level options for companies getting started with disaster recovery are possible, as well as advanced options for enterprises with high
resiliency requirements. Definitions for cold, warm, and hot standby environments are a little different when the environment is hosted in a
cloud platform. The following table shows how we think about these environments when building a SharePoint recovery farm in Windows Azure.

Table: Recovery environments
Type of recovery
environment

Description
A fully-sized farm is provisioned, updated, and running on standby.
The farm is built and VMs are running and updated.
Recovery includes attaching content databases, provisioning service applications, and crawling content.
The farm can be a smaller version of the production farm and then scaled out to serve the full user base.
The farm is fully built, but the VMs are stopped.
Maintaining the environment includes starting the VMs from time-to-time, patching, updating, and verifying the
environment.
Start the full environment in the event of a disaster.
It is important to evaluate your organization’s Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). These requirements will
determine which environment is the most appropriate investment for your organization.
The guidance in this article describes how to implement a warm standby environment. This guidance can be adapted for a cold standby
environment, though additional procedures will be needed to support this environment. This guidance does not address hot standby
environments.
For more information about disaster recovery solutions see High availability and disaster recovery concepts in SharePoint 2013[4] and Choose a
disaster recovery strategy for SharePoint 2013[5].
Solution description [6]
The warm standby disaster recovery solution requires the following environment.
On-premises SharePoint production farm
Recovery SharePoint farm in Windows Azure
Site-to-site VPN connection between the two environments
The following figure illustrates these three elements.
Figure 2: Elements of a warm standby solution in Windows Azure

SQL Server Log shipping together with Distributed File System Replication (DFSR) are used to copy database backups and transaction logs to the
recovery farm in Windows Azure.
DFSR is used to transfer logs from the production environment to the recovery environment. In a WAN scenario DFSR is more efficient than
shipping the logs directly to the secondary server in Windows Azure.
Logs are replayed to the SQL servers in the recovery environment in Windows Azure.
Log-shipped databases are not attached to the farm until a recovery exercise is performed.
Failover includes the following steps:
Stop log shipping.
Stop accepting traffic to the primary farm.
Replay the final transaction logs.
Attach the content databases to the farm.
Restore service applications from the replicated services databases.
Update DNS records to point to the recovery farm.
Start a full crawl.
Recovery objectives provided by this solution are summarized in the following table.

Table: Solution recovery objectives
Item

Description

Sites

Sites and content are available in the recovery environment.

and
content
A new

In this warm standby solution, search is not restored from search databases. Search components in the recovery farm are configured as

instance similarly as possible to the production farm. After the sites and content are restored, a full crawl is started to rebuild the search index.
of

You do not need to wait for the crawl to complete to make the sites and content available.

search
Services Services that store data in databases are restored from the log-shipped databases. Services that do not store data in databases are
simply started.
Not all services with databases need to be restored. The following services do not need to be restored from databases and can simply
be started after failover:
Usage and Health Data Collection
State service
Word automation
Any service without a database

Additional items that can be addressed by Microsoft Consulting Services or a partner are summarized in the following table.

Table: Other disaster recovery resources
Item

Description

Synchronizing

Ideally the recovery farm is configured identically as possible to the production farm. You can work with a consultant or

custom farm

partner to evaluate whether custom farm solutions are replicated and the process for keeping the two environments in sync.

solutions
Connections to

It might not be practical to replicate connections to backend data systems, such as BDC connections and search content

data sources onpremises

sources.

Search restore
scenarios

Because enterprise search deployments tend to be fairly unique and very complex, restoring search from databases requires a
greater investment. You can work with a consultant or partner to identify and implement search restore scenarios that your
organization might require.

This solution guidance assumes that the on-premises farm is already designed and deployed.
Detailed architecture[7]
Ideally the recovery farm in Windows Azure is configured as identically as possible to the production farm on-premises:
Same representation of server roles
Same configuration of customizations
Same configuration of search components
The environment in Windows Azure can be a smaller version of the production farm. If you plan to scale out the recovery farm out after failover,
it is important that each type of server role is represented initially.
Some configurations might not be practical to replicate in the failover environment. Be sure to test the failover procedures and environment to
ensure that the failover farm provides the service level that is expected.
This solution doesn't prescribe a specific topology for a SharePoint farm. The focus of this solution is using Windows Azure for the failover farm
and implementing log shipping and DFSR between the two environments.
Warm standby environments[8]
In a warm standby environment, all VMs in the Windows Azure environment are running. The environment is ready for a failover exercise or
event.
The following figure illustrates a disaster recovery solution from an on-premises SharePoint farm to a Windows Azure-based SharePoint farm
that is configured as a warm standby environment.
Figure 3: Topology and key elements of production farm and warm standby recovery farm

In this illustration:
Two environments are illustrated side-by-side — the on-premises SharePoint farm and the warm standby farm in Windows Azure.
A file share is included in each environment.
Each farm includes four tiers. To achieve high availability, each tier includes two servers or VMs that are configured identically for a specific
role, such as front-end services, distributed cache, backend services, and databases. It isn't important in this illustration to call out specific
components. The two farms are configured identically.
The fourth tier is the database tier. Log shipping is used to copy logs from the secondary database server in the on-premises environment to
the file share in the same environment.
DFSR is used to copy files from the file share in the on-premises environment to the file share in the Windows Azure environment.
Log shipping is used to replay the logs from the file share in the Windows Azure environment to the primary replica in the SQL Server
AlwaysOn availability group in the recovery environment.
Cold standby environments[9]
In a cold standby environment, most of the SharePoint farm VMs can be shut down. (We recommend occasionally starting the virtual machines,
such as every two weeks or once a month, so each can sync with the domain.) The following VMs in the Windows Azure recovery environment
must remain running to ensure continuous operations of log shipping and DFSR:
The file share
The primary database server
At least one VM running Windows Server Active Directory and DNS
The following figure shows a Windows Azure failover environment in which the file share VM and the primary SharePoint database VM are
running. All other SharePoint VMs are stopped. The VM running Windows Server Active Directory and DNS is not shown.
Figure 4: Cold standby recovery farm with running VMs
After failover to a cold standby environment, all VMs are started and the method used to achieve high availability of the database servers must be
configured, such as SQL Server AlwaysOn availability groups.
If multiple storage groups are implemented (databases are spread across more than one set of highly available SQL Servers) the primary database
for each storage group must be running to accept the logs associated with its storage group.
Designing the Windows Azure environment[10]
Before deploying VMs in Windows Azure it is important to design the Windows Azure environment, including the virtual network and site-tosite VPN connection that is a requirement for this solution. For this design task, see Windows Azure Architectures for SharePoint 2013[11].
Microsoft proof of concept environment[12]
The design goal for our test environment was to deploy and recover a SharePoint farm that we’d expect to see in a customer environment. We
made several assumptions, but knew that the farm needed to provide all of the out of the box functionality, without any customizations. The
topology would be designed for high availability using best practice guidance from the field and product group.
The following table describes the Hyper-V virtual machines we created and configured for the on-premises test environment.

Table: Virtual machines for on-premises test
Server name

Role
Domain controller with Active Directory

Configuration
2 processors
512 – 4 GB
RAM
1 x 127 GB
Hard Disk

Server configured with the Routing and Remote Access Service role

2 processors
2 - 8 GB
RAM
1 x 127 GB
Hard Disk

File server with shares for backups and end point for DFSR

4 processors
2 - 12 GB
RAM
1 x 127 GB
1 x 1 TB
(SAN)
1 x 750 GB
SP-WFE1, SP-

Front end Web servers

4 processors

WFE2
16 GB RAM
SP-APP1, SP-

Application servers

4 processors

APP2, SP-APP3
2 – 16 GB
RAM
SP-SQL-HA1,

Database servers, configured with SQL Server 2012 AlwaysOn availability groups to provide high availability.

SP-SQL-HA2

This configuration uses SP-SQL-HA1 and SP-SQL-HA2 as the primary and secondary replicas

4 processors
2 – 16 GB
RAM

The following table describes drive configurations for the Hyper-V virtual machines that we created and configured for the front end web and
application, servers for the on-premises test environment.

Table: Virtual machine drive requirements for Front end Web and Application servers for on-premises test
Drive LetterSizeDirectory Name

Path

System drive

<DriveLetter>:Program FilesMicrosoft SQL Server

Log drive (40 GB) <DriveLetter>:Program FilesMicrosoft SQL ServerMSSQL10_50.MSSQLSERVERMSSQLDATA
Page (36 GB

<DriveLetter>:Program FilesMicrosoft SQL ServerMSSQLDATA

The following table describes drive configurations for the Hyper-V virtual machines that we created and configured for the database servers for
the on-premises test environment. On the Database Engine Configuration page access the Data Directories tab to set and confirm the settings
shown in the following table.

Table: Virtual machine drive requirements for Database server for on-premises test
Drive LetterSizeDirectory Name

Path

Data root directory

<DriveLetter>:Program FilesMicrosoft SQL Server

User database directory

<DriveLetter>:Program FilesMicrosoft SQL ServerMSSQL10_50.MSSQLSERVERMSSQLDATA

User database log directory <DriveLetter>:Program FilesMicrosoft SQL ServerMSSQL10_50.MSSQLSERVERMSSQLDATA
Temp DB directory

<DriveLetter>:Program FilesMicrosoft SQL ServerMSSQL10_50.MSSQLSERVERMSSQLDATA

Temp DB log directory

<DriveLetter>:Program FilesMicrosoft SQL ServerMSSQL10_50.MSSQLSERVERMSSQLDATA

Setting up the test environment[13]
During the different deployment phases the test team typically worked on the on-premises part of architecture first, and then on the
corresponding parts of the Windows Azure environment. This reflects the general real world cases, where in-house production farms are already
running. What’s even more important is the fact that you should know the current production workload, capacity, and typical performance. In
addition to building a disaster recovery that can meet business requirements, you need to size the recovery farm servers to deliver a minimum
level of service. In a cold or warm standby environment a recovery farm is typically smaller than a production farm. After the recovery farm is
stable and in production, the farm can be scaled up and out to meet workload requirements.
We deployed our test environment in the following three phases:
Set up the hybrid infrastructure
Provision the servers
Deploy the SharePoint farms
Set up the hybrid infrastructure[14]
This phase involves setting up a domain environment for the on-premises farm and for the recovery farm in Windows Azure. In addition to the
normal tasks associated with configuring Active Directory, the test team implemented a routing solution and a VPN connection between the two
environments.
Provision the servers[15]
In addition to the farm servers, it was necessary to provision servers for the domain controllers, and a server configured to handle routing and
remote access service (RRAS) as well as the Site-to-Site VPN. Two file servers were provisioned for the DFSR service and several client computers
were provisioned for testers.
Deploy the SharePoint farm[16]
The SharePoint farm was deployed in two stages in order to simplify stabilizing the environments and troubleshooting if it was required. During
the first stage each farm was deployed on the minimum number of servers for each tier of the topology and to support the functionality that was
required.
We created the farm and joined additional servers in the following order:
Note:
We created the database server with SQL Server installed before creating the SharePoint 2013 servers.
1. Provision SP-SQL-HA1 and SP-SQL-HA2.
2. Configure AlwaysOn and create the 3 availability groups for the farm.
Note:
Because this was a new deployment we could create the availability groups before deploying SharePoint. We created three groups based on
MCS best practice guidance. The SharePoint databases are distributed on the following groups:
Provision SP-APP1 to host Central Administration.
Provision SP-WFE1 and SP-WFE2 to host the distributed cache. Use the skipRegisterAsDistributedCachehost parameter when running
psconfig.exe at the command line. For more information, see Plan for feeds and the Distributed Cache service in SharePoint Server
2013[17].

Note:
It was recognized that if resources or schedule became an issue, the initial environment would be suitable for finishing this proof of concept
disaster recovery project.
After we configured the distributed cache, added test users, and added test content, we started stage two of the deployment. This required that
we scale out the tiers and configure the farm servers to support the high availability topology described in the farm architecture.
Operations[18]
After the farm environments were stable and functional testing was completed, the test team started the operations tasks required to configure
the on-premises recovery environment.
Configure full and differential backups
Configure DFSR on the file servers that would be used to transfer transaction logs between the on-premises environment and the
Windows Azure environment
Configure log shipping on the primary database server
Stabilize, validate and troubleshoot log shipping as required. This included identifying and documenting any behaviors that might cause
issues, such as network latency which would cause log shipping or DFSR file synchronization failures.
For detailed information about preparing the end to end recovery environment, see Phase 5: Setup log shipping to the recovery farm.
Disaster recovery roadmap[19]
The roadmap described in the following table assumes that you already have a SharePoint Server 2013 farm deployed in production.

Table: Roadmap for disaster recovery
Phase

Description

Phase 1 Review and inventory the on-premises production to farm to ensure that:
The SQL Server AlwaysOn and Log Shipping requirements and configurations in in this article can be met.
Production farm customizations and publishing solutions are fully supported in Windows Azure.
Phase 2 Design and implement a hybrid network environment.
Phase 3 Deploy the SharePoint recovery farm in Windows Azure.
Phase 4 Setup DFSR between the farms.
Phase 5 Setup log shipping to the recovery farm.
Phase 6 Validate failover and recovery solution. This includes the following procedures and technologies:
Stop log shipping
Recover content
Crawl content
Recover services
Manage DNS records

Phase 1: Review and inventory the on-premises product farm[20]
A successful disaster recovery environment must accurately reflect the production farm that you want to recover. The size of the recovery farm is
not the most important thing in the recovery farm’s design, deployment, and testing. Farm scale varies from organization to organization based
on their business requirements. It may be possible to use a scaled down farm for a short outage, or until performance and capacity demands
require you to scale the farm.
The most important thing is the ability to provide a SharePoint farm that meets your SLA requirements and provides the functionality that you
need to support your business. The only way to achieve your recovery goals is to have a detailed inventory of your farm configuration,
applications, and users.
Note:
This should already be part of your disaster recovery plan.
Phase 2: Implement a hybrid network environment[21]
The key components and architecture that’s needed for this hybrid network infrastructure have already been described. In this phase of the
project you have to deploy and configure these components. Based on our test configuration, the following steps are required:
1.
2.
3.
4.

Create the domain controller, and create the AD subnets and sites.
Deploy a gateway server or install a router that can connect to the Windows Azure VPN gateway.
Configure the gateway server (RRAS in our model) to connect the VPN gateway.
Deploy a domain controller in Windows Azure and configure it as a subscription server.

For detailed configuration guidance for setting up a domain controller in Windows Azure, see Install a Replica Active Directory Domain
Controller in Windows Azure Virtual Networks[22].
Phase 3: Deploy the SharePoint recovery farm in Windows Azure[23]
The SharePoint recovery farm in Windows Azure, while scaled down, should be configured as closely as possible to the production farm. When
you deploy SharePoint and configure the farm there are some things you don’t have to configure. A good example is site collections, which are
registered in the farm sitemap automatically when you attach the content database created from the backups.
Before we created any virtual machines in Windows Azure we applied the design principles we’ve described and Windows Azure best practice
guidance. There are several best practice articles available; Guidance[24] is a good starting point.
We created the cloud services we wanted to use and decided to hold off on creating availability sets until we created the virtual machines.
The following table describes the virtual machines, cloud services, and availability sets we setup for our recovery farm.

Table: Recovery farm infrastructure
Server name

Configuration

Cloud service

Availability
set

spDRAD

Domain controller with Active Directory

2 processors

spDRAD

512 – 4 GB RAM
1 x 127 GB Hard
Disk
AZ-SP-FS

File server with shares for backups and end point for DFSR

A5

sp-

configuration:

databaseservers

DATA_SET

2 processors
14 GB RAM
1 x 127 GB
1 x 135
1 x 127 GB
1 x 150 GB
AZ-WFE1, AZ -WFE2

Front end Web servers

A5

sp-webservers

WFE_SET

APP_SET

configuration:
2 processors
14 GB RAM
1 x 127 GB Hard
Disk
AZ -APP1, AZ -APP2, AZ Application servers

A5

sp-

-APP3

configuration:

applicationservers

2 processors
14 GB RAM
1 x 127 GB Hard
Disk
AZ -SQL-HA1, AZ -SQLHA2

Database servers, primary and secondary replicas for AlwaysOn
availability groups

A5
configuration:

spdatabaseservers

DATA_SET

2 processors
14 GB RAM

When we deployed the recovery farm in Windows Azure we used the phase-based strategy that we used for the on-premises farm.
The following steps were repeated in the recovery environment:
Provision AZ-SQL-HA1 and AZ-SQL-HA2.
Configure AlwaysOn and create the 3 availability groups for the farm.
Note:
Because this was a new deployment we could create the availability groups before deploying SharePoint. We created three groups based on
MCS best practice guidance. The SharePoint databases are distributed across these groups as follows:
Provision AZ-APP1 to host Central Administration.
Provision AZ-WFE1 and AZ-WFE2 to host the distributed cache, which is installed by using the skipRegisterAsDistributedCachehost
parameter when running psconfig.exe at the command line. For more information, see Plan for feeds and the Distributed Cache service
in SharePoint Server 2013[25].

After configuring the distributed cache, adding test users, and adding test content, we started stage two of the deployment. This required scaling
out the tiers and configuring the farm servers to support the high availability topology described in the farm architecture.
Phase 4: Setup DFSR between the farms[26]
In order to setup file replication using DFSR we used DNS Management snap-in. However, before the DFSR setup, we had to log on to the onpremises fileserver (FS1) and the Windows Azure file server (az-sp-fs) and enable the service in Windows.
From the Server Manager Dashboard we complete the following steps:
Configure this local server
Start the Add Roles and Features Wizard
Open the File and Storage Services node
Select DFS Namespaces and DFS replication
Click Next to finish the wizard steps
Figure 14: DFS Replication Health Report

The preceding screen capture shows the detailed reporting that DFS Management provides. These reports include configuration results and
replication health.
The following table provides links to DFSR reference articles and blog posts.

Table: Reference articles for DFSR
Replication [27]

DFS Management TechNet topic with links for Replication

DFS Replication: Survival Guide[28]

Wiki with links to DFS information

DFS Replication: Frequently Asked Questions[29]

DFS Replication TechNet topic

Jose Barreto’s Blog[30]

Principal Program Manager on File Server team at Microsoft

The Storage Team at Microsoft – File Cabinet Blog[31] About files services and storage features in Windows Server
Phase 5: Setup log shipping to the recovery farm[32]
Log shipping is the critical component for setting up disaster recovery in this hybrid environment. Log Shipping enables you to automatically
send transaction log files for databases from a primary database server instance to a secondary database server instance. In our on-premises test
environment, which uses AlwaysOn availability groups with two replicas for high availability, we configured log shipping on both replicas. This
is needed because either replica must be able to ship transaction logs. Only the replica that is active and owns the database can ship logs.
However, if there was a failover event and the secondary replica became active, it would have to ship transaction logs instead of the failed replica.
After the transaction logs are received in the Windows Azure environment they are restored, one at a time to each SharePoint database on the
secondary database server.
Note:
Some organizations use a third database server as a monitor to record the history and status of backup and restore operations. This optional
monitor server creates alerts when backup operations fail.
For detailed information about log shipping refer to the articles in the following table.

Table: Reference articles for log shipping
URL

Description

About Log Shipping (SQL Server)[33]

Describes log shipping transaction log backups and the options that are available.

Configure Log Shipping (SQL Server)

Describes how to configure log shipping in SQL Server 2012 by using SQL Server Management Studio or

[34]

Transact-SQL.

View the Log Shipping Report (SQL

Explains how to view the Transaction Log Shipping Status report in SQL Server Management Studio. You

Server Management Studio)[35]

can run a status report at a monitor server, primary server, or secondary server.

Before you begin [36]
Make sure that you can meet the following log shipping prerequisites.
The SQL Server logins you use are domain accounts that have the permission levels needed for log shipping. The log-shipping stored
procedures require membership in the sysadmin fixed server role.
The primary database must use the full or bulk-logged recovery model.
Caution:
If you switch the database to simple recovery log shipping will quit working.
Before you configure log shipping, you must create a share to make the transaction log backups available to the secondary server. This is a
share of the directory where the transaction log backups will be generated.
In addition to your recovery point objectives (RPO) you want to ensure that the recovered farm data is as complete and uncorrupt as possible. To
reach these goals you have to plan and schedule every aspect of log shipping very carefully.
Performance considerations[37]
Log shipping consists of three jobs. Each job performs one of the following operations:
1. Back up the transaction log at the primary server instance.
2. Copy the transaction log file to the secondary server instance.
3. Restore the log backup on the secondary server instance.
Each of the preceding jobs operates on a schedule and runs for an interval, which can have a significant impact on database server, and by
default, SharePoint farm performance.
In order to correctly set the backup, copy, and restore job intervals for log shipping you have to analyze the amount of data that is being log
shipped. The amount of log shipped data is affected by the daily amount of change in the content databases. The percentage of change can vary
greatly, depending on the content, maintenance changes and usage peaks.
To get an accurate percentage of change you have to calculate the sum of changes in the transaction log backups for each content database that
you log ship over a given interval. Use this data to calculate the percentage of change compared to the primary database.
The following guidance is derived from the field’s log shipping experience with several releases of SharePoint Server.
Avoid performance degradation due to all jobs starting at the same time by making sure that all log shipping jobs are offset with at least a 1
minute shift from the previous job.
It is better to back up and copy many small transaction logs instead of a few large transaction logs.
Schedule log backups and copying at frequent intervals. You can restore the transaction logs at less-frequent intervals. For example, start by
using backup and copy intervals of 5 minutes, and a restore interval of 15 minutes.
Skills and experience[38]
In this hybrid disaster recovery solution multiple technologies are used. To make sure that these technologies interact as expected each
component in the on-premises and Windows Azure environment must be installed and configured correctly. We recommend that the person or
team who sets up this solution has a strong working knowledge of, and hands-on skills with the following technologies:
Finally, we recommend scripting skills that you can use to automate tasks associated with these technologies. It is possible to use the available
user interfaces to complete all the tasks described in this solution. However, a manual approach is time consuming, error prone, and delivers
inconsistent results.
In addition to Windows PowerShell there are also Windows PowerShell libraries for SQL Server, SharePoint Server and Windows Azure. Don’t
forget T-SQL, which can also reduce the time needed to configure and maintain your disaster recovery environment.
The log shipping infrastructure[39]
The log shipping infrastructure that is needed for this disaster recovery solution is shown in the following figure.
Figure 13: Log shipping infrastructure and data flow

The preceding image shows the log shipping infrastructure and data flow. This picture shows the SQL Server database servers and the file servers
in the production farm and the Windows Azure recovery farm. These farms are nearly identical and each contains a primary and secondary replica
for each AlwaysOn availability group. The file servers, FIL1 and AZ-FIL1 are configured the same, especially the number of hard disks and disk
sizes.
To provide high availability each replica in an availability group stores a backup (full, differential, and transaction logs) of the other replica.
The primary and secondary replicas (SQL-HA1 and SQL-HA2) each make backups that are stored on its partner in the availability group.
Transaction log shipping is configured on the secondary replica to minimize the impact of backups on the production databases. These
transaction logs are written to a shared folder on the on-premises file server (FIL1). The Windows Server Distributed File System (DFS)
Replication Service copies the transaction logs from FIL1 to AZ-FIL1. The transaction logs on AZ-FIL1 are restored to AZ-SQL-HA1, the primary
replica for the availability group in the recovery farm.
Steps required to configure and validate log shipping[40]
The steps required to configure, run, and validate log shipping are condensed and summarized in the following list.
1. Take database backups
1. Configure Full and Differential backups to a local folder and also a shared folder on the file server
2. Verify backups are made to both local share and shared folder
2. Set up and test Distributed File System (DFS) Replication
1. Create Namespace and Replication to transfer transaction logs and backup files between the on-premises and Windows Azure farms on the
shared folder in file server
2. Verify all transfers after log shipping runs
3. Set up and test log shipping
1. Set up log shipping on primary database server – use the following script: Primary- Logshippingsetupparameter
STNCUTO
E OON N
UEMD
S SB
G
O
-@rmevr:PiaySre nm
-PiSre
rmr evr ae
-@eSre :D/eodr Sre Nm
-Scevr
RScnay evr ae
-@eIsac :D/eodr FD
-Scntne
RScnay QN
-@oan :Dmi Nm
-Dmi
oan ae
-@kDie:Pouto Bcu sre Nm
-Bprv
rdcin akp evr ae
-@Bae:DtbsNm
-DNm
aaaeae
DCAE@SBcuJbdSuiuietfe, @SPiaydSuiuietfe ,@PAdRtoeA it
ELR L_akpoIA nqedniir
L_rmrIA nqedniir S_d_eCd s n
DCAE@iea naca(0,Scntnea naca(5) @rmevra naca(0,Scevra naca(0,
ELR Tm s vrhr1)@eIsac s vrhr20, PiSre s vrhr5)@eSre s vrhr5)
@oana naca(0,DNm a naca(a)@kDiea naca(5)@M a naca(a)@one it
Dmi s vrhr5)@Bae s vrhrmx,Bprv s vrhr20,CD s vrhrmx,Cutr n
--------------------------------------------------------------------------I OJC_D(tmd.B.Lghpig,U)I NTNL DO TBE#oSipn
F BETI 'epbDO#oSipn''' S O UL RP AL Lghpig
Cet tbe#oSipn (LDsnaca(a)
rae al Lghpig
SB vrhrmx)
St@rmevr=08C1PQ1
e PiSre '3-HSS0'
St@eSre =08S1SQ1
e Scevr '3-NSS0'
St@eIsac =08S1SQ108.g.stnt
e Scntne '3-NSS0.3dmdmf.e'
St@oan=08.g.stnt
e Dmi '3dmdmf.e'
St@kDie=08C1PK108.g.stnt
e Bprv '3-HSB0.3dmdmf.e'
St@Bae='oilD'
e DNm
Sca_B
St@ie='10
e Tm
03'
ST@Bae=UPRRPAE@Bae '' ')
E DNm
PE(ELC(DNm,
, ')
ST@Bae='' +RPAE@Bae '' '' '' +''
E DNm
''
ELC(DNm, ,, ', ')
''
St@M = 'Slc '+
e CD
eet
''ELR @PAdRtoeA it @SBcuJbdSuiuietfe, @SPiaydSuiuietfe
'DCAE S_d_eCd s n, L_akpoIA nqedniir L_rmrIA nqedniir
EE @PAdRtoe=mse.b.padlgsipn_rmr_aaae'+CA(0 +
XC S_d_eCd
atrdos_d_o_hpigpiaydtbs
HR1)
'dtbs =''' +d.ae+''''+CA(0 +
@aaae '''
bNm
'''
HR1)
'@akpdrcoy='''+@kDie+'L +''+d.ae+'''+'' +CA(0 +
,bcu_ietr
''
Bprv
S'
'
bNm
''
'' HR1)
'@akpsae='+''' +@kDie+'L +''+d.ae+'''+'' +CA(0 +
,bcu_hr
'''
Bprv
S'
'
bNm
''
''
HR1)
'@akpjbnm ='''+'Sakp'+''+d.ae+'''+'' +CA(0 +
,bcu_o_ae
''
LBcu_
'
bNm
''
'' HR1)
'@akprtninpro =42
,bcu_eeto_eid
30
,bcu_opeso =1
@akpcmrsin
,bcu_hehl =10
@akptrsod
8
,trsodaeteald=1
@hehl_lr_nbe
,hsoyrtninpro =56
@itr_eeto_eid
70
,bcu_o_d=@SBcuJbdOTU
@akpjbi
L_akpoI UPT
,piayi =@SPiaydOTU
@rmr_d L_rmrI UPT
,oewie=1'+
@vrrt
'F(@RO =0AD@PAdRtoe=0
I @ERR
N S_d_eCd
)
BGN
EI
DCAE@SBcUShdlUDsuiuietfe ,L_akpceueDSit
ELR L_akpceueIA nqedniir @SBcUShdlIA n
EE md.b.padshdl
XC sbdos_d_ceue
@ceuenm ='''+'Sakpceue' @rmevr+''+''+d.ae+'''+'' +CA(0 +
shdl_ae
''
LBcuShdl_+ PiSre
_
'
bNm
''
''
HR1)
'@nbe =1
,eald
,fe_ye=4
@rqtp
,fe_nevl=1
@rqitra
,fe_udytp =4
@rqsba_ye
,fe_udyitra =1
@rqsba_nevl
3
,fe_eurnefco =0
@rqrcrec_atr
,atv_tr_ae=2000
@ciesatdt
0956
,atv_n_ae=9913
@cieeddt
9921
,atv_tr_ie='+@ie +CA(0 +
@ciesattm
Tm
HR1)
'@cieedtm =250
,atv_n_ie
390
,shdl_i =@SBcUShdlUDOTU
@ceueud
L_akpceueI UPT
,shdl_d=@SBcUShdlI OTU
@ceuei
L_akpceueD UPT
EE md.b.patc_ceue@o_d=@SBcuJbd,shdl_d=@SBcUShdlI
XC sbdos_tahshdl jbi
L_akpoI @ceuei
L_akpceueD
EE md.b.pudt_o @o_d=@SBcuJbd,eald=1
XC sbdos_paejb jbi
L_akpoI @nbe
ED
N
EE mse.b.padlgsipn_lr_o
XC atrdos_d_o_hpigaetjb
EE mse.b.padlgsipn_rmr_eodr
XC atrdos_d_o_hpigpiayscnay
@rmr_aaae='' +'''+d.ae+'''+'' +CA(0 +
piaydtbs
'
''
bNm
''
''
HR1)
'@eodr_evr='''+@eIsac +''' +CA(0 +
,scnaysre
''
Scntne
'''
HR1)
'@eodr_aaae=''+'''+d.ae+'''+'' +CA(0 +
,scnaydtbs
'
''
bNm
''
''
HR1)
'@vrrt =1''+
,oewie
'
'[SB]FO ssdtbssd weenm i ( +@Bae+''+
LDs RM y.aaae b hr ae n '
DNm
)
'n
ad

d.ae nti ('atr''mdl''md','epb''mtisp','eicp' )
bnm
o n 'mse','oe','sb''tmd','ercos''prsoe'

ad Nt(xss(eetlsScnaydtbs fo md.b.o_hpigScnaydtbsslswee d.ae=lsScnaydtbs)
n
o eit slc s.eodr_aaae rm sbdolgsipn_eodr_aaae s hr bNm
s.eodr_aaae
o eit (eetlppiaydtbs fo md.b.o_hpigpiaydtbsslpwee d.ae=lppiaydtbs)
r xss slc s.rmr_aaae rm sbdolgsipn_rmr_aaae s hr
bNm
s.rmr_aaae
)
'
Isr #oSipn (SB)
net Lghpig LDs
Ee (@M)
xc CD
St@one =@rwon
e Cutr @ocut
Wie(cutr>0
hl @one
)
Bgn
ei
slc tp1 @M =LDsfo #oSipn
eet o
CD SB rm Lghpig
ee s_xctsl@M
xc peeueq CD
st@one =@one -1
e cutr
cutr
dlt tp()fo #oSipn
eee o 1 rm Lghpig
Ed
n
I OJC_D(tmd.B.Lghpig,U)I NTNL DO TBE#oSipn
F BETI 'epbDO#oSipn''' S O UL RP AL Lghpig
- *** Ed Srp t b rna Piay [B-SSL0] ***
- *** n: cit o e u t rmr: D1PMQ-1
***

2. Set up log shipping on secondary database server – use the following script: Secondary-Logshippingsetupparameter scripts
- *** Bgn Srp t b rna Scnay 93BID***
- *** ei: cit o e u t eodr: . UL***
STNCUTO
E OON N
UEMD
S SB
G
O
-@rmevr:PiaySre nm
-PiSre
rmr evr ae
-@eSre :D/eodr Sre Nm
-Scevr
RScnay evr ae
-@eIsac :D/eodr FD
-Scntne
RScnay QN
-@oan :Dmi Nm
-Dmi
oan ae
-@rmrBprv :Pouto Bcu sre Nm
-PiaykDie
rdcin akp evr ae
-@kDie:Scndr Bcu sre Nm
-Bprv
eoaay akp evr ae
-@Bae:DtbsNm
-DNm
aaaeae
DCAE@SBcuJbdSuiuietfe, @SPiaydSuiuietfe ,@PAdRtoeA it
ELR L_akpoIA nqedniir
L_rmrIA nqedniir S_d_eCd s n
DCAE@iea naca(0,Scntnea naca(5) @rmevra naca(0,Scevra naca(0,
ELR Tm s vrhr1)@eIsac s vrhr20, PiSre s vrhr5)@eSre s vrhr5)

@oana naca(0,DNm a naca(a)@rmrBprv a naca(5)@kDiea naca(5)@M a naca(a)@M2a naca(a)@o
Dmi s vrhr5)@Bae s vrhrmx,PiaykDie s vrhr20,Bprv s vrhr20,CD s vrhrmx,CD s vrhrmx,Cu
DCAE @eiee ca()@Bnaca(0) @trPsit @eghit
ELR
Dlmtr hr1,D vrhr20, Sato n, Lnt n
I OJC_D(tmd.B.Lghpig,U)I NTNL DO TBE#oSipn
F BETI 'epbDO#oSipn''' S O UL RP AL Lghpig
Cet tbe#oSipn (LDsnaca(a)
rae al Lghpig
SB vrhrmx)
I OJC_D(tmd.B.Ds,U)I NTNL DO TBE#B
F BETI 'epbDO#B''' S O UL RP AL Ds
Cet TBE#B (aenaca(0)
rae AL Ds Nm vrhr20)
St@rmevr=08C1PQ1
e PiSre '3-HSS0'
St@eSre =08S1SQ1
e Scevr '3-NSS0'
St@eIsac =08S1SQ108.g.stnt
e Scntne '3-NSS0.3dmdmf.e'
St@oan=08.g.stnt
e Dmi '3dmdmf.e'
ST@rmrBprv ='3-HSB0.3dmdmf.e'
E PiaykDie 08C1PK108.g.stnt
St@kDie=08S1SK108.g.stnt
e Bprv '3-NSB0.3dmdmf.e'
St@Bae='oilD'
e DNm
Sca_B
St@ie='10
e Tm
03'
-PrigFnto
-asn ucin
ST@eiee =''
E Dlmtr
,
WIELN@Bae >0
HL E(DNm)
BGN
EI
ST@trPs=CAIDX@eiee,@Bae
E Sato
HRNE(Dlmtr DNm)
I @trPs<0ST@trPs=0
F Sato
E Sato
ST@egh=LN@Bae -@trPs-1
E Lnt
E(DNm)
Sato
I @egh<0ST@egh=0
F Lnt
E Lnt
I @trPs>0
F Sato
BGN
EI
ST@B=RrmLrmSBTIG@Bae 1 @trPs-1)
E D
ti(ti(USRN(DNm, , Sato
))
ST@Bae=SBTIG@Bae @trPs+1 LN@Bae -@trPs
E DNm
USRN(DNm, Sato
, E(DNm) Sato)
ED
N
ES
LE
BGN
EI
ST@B=RrmLrm@Bae)
E D
ti(ti(DNm)
ST@Bae='
E DNm
'
ED
N
ISR #B (ae VLE(D)
NET Ds Nm) AUS@B
ED
N
-ST@Bae=UPRRPAE@Bae '' ')
-E DNm
PE(ELC(DNm,
, ')
-ST@Bae='' +RPAE@Bae '' '' '' +''
-E DNm
'' ELC(DNm, ,, ', ')
''
St@M ='eet'+
e CD Slc

''DCAE@SScnay_oyoI A uiuietfe,@SScnay_etrJbdA uiuietfe ,L_eodr_ScnaydA uiuietfe ,@S
' ELR L_eodr_CpJbd S nqedniir L_eodr_RsoeoI S nqedniir @SScnay_eodrI S nqedniir L

DCAE@SScnayoyoShdlUDsuiuietfe ,L_eodrCpJbceueDSit @SScnayetrJbceueIA uiuietfe ,L_
ELR L_eodrCpJbceueIA nqedniir @SScnayoyoShdlIA n, L_eodrRsoeoShdlUDs nqedniir @S
EE @SAdRtoe=mse.b.padlgsipn_eodr_rmr
XC L_d_eCd
atrdos_d_o_hpigscnaypiay
@rmr_evr='''+@rmevr+'''+ CA(0 +
piaysre
''
PiSre
'''
HR1)
'@rmr_aaae=' +'+ '''''+d.ae+'''''+ CA(0 +
,piaydtbs
'
''''
bNm
''''
HR1)
'+',bcu_oredrcoy='+''' +@rmrBprv +'L'+d.ae+''''+ CA(0 +
'@akpsuc_ietr
'''
PiaykDie S'
bNm
'''
HR1)
',bcu_etnto_ietr = '+''' +@kDie+'L'+d.ae+''''+ CA(0 +
@akpdsiaindrcoy
'''
Bprv
S'
bNm
'''
HR1)
'@oyjbnm =''LCp_B-SSL0_'+d.ae+''''+ CA(0 +
,cp_o_ae ''SoyD1PMQ-1'
bNm
'''
HR1)
'@etr_o_ae=''LRsoe' @rmevr+'' +d.ae+''''+ CA(0 +
,rsoejbnm
''Setr_+ PiSre
_'
bNm
'''
HR1)
'@iertninpro =42
,fl_eeto_eid
30
,oewie=1
@vrrt
,cp_o_d=@SScnay_oyoI OTU
@oyjbi
L_eodr_CpJbd UPT
,rsoejbi =@SScnay_etrJbdOTU
@etr_o_d L_eodr_RsoeoI UPT
,scnayi =@SScnay_eodrI OTU
@eodr_d
L_eodr_Scnayd UPT
I (@RO =0AD@SAdRtoe=0
F @ERR
N L_d_eCd
)
BGN
EI
EE md.b.padshdl
XC sbdos_d_ceue
@ceuenm =''ealCpJbceue''
shdl_ae ''DfutoyoShdl''
,eald=1
@nbe
,fe_ye=4
@rqtp
,fe_nevl=1
@rqitra
,fe_udytp =4
@rqsba_ye
,fe_udyitra =1
@rqsba_nevl
5
,fe_eurnefco =0
@rqrcrec_atr
,atv_tr_ae=2000
@ciesatdt
0956
,atv_n_ae=9913
@cieeddt
9921
,atv_tr_ie='+@ie+'
@ciesattm
Tm
,atv_n_ie=250
@cieedtm
390
,shdl_i =@SScnayoyoShdlUDOTU
@ceueud
L_eodrCpJbceueI UPT
,shdl_d=@SScnayoyoShdlI OTU
@ceuei
L_eodrCpJbceueD UPT
EE md.b.patc_ceue
XC sbdos_tahshdl
@o_d=@SScnay_oyoI
jbi
L_eodr_CpJbd
,shdl_d=@SScnayoyoShdlI
@ceuei
L_eodrCpJbceueD
EE md.b.padshdl
XC sbdos_d_ceue
@ceuenm =''ealRsoeoShdl''
shdl_ae ''DfutetrJbceue''
,eald=1
@nbe
,fe_ye=4
@rqtp
,fe_nevl=1
@rqitra
,fe_udytp =4
@rqsba_ye
,fe_udyitra =1
@rqsba_nevl
5
,fe_eurnefco =0
@rqrcrec_atr
,atv_tr_ae=2000
@ciesatdt
0956
,atv_n_ae=9913
@cieeddt
9921
,atv_tr_ie='+@ie+'
@ciesattm
Tm
,atv_n_ie=250
@cieedtm
390
,shdl_i =@SScnayetrJbceueI OTU
@ceueud
L_eodrRsoeoShdlUD UPT
,shdl_d=@SScnayetrJbceueDOTU
@ceuei
L_eodrRsoeoShdlI UPT
EE md.b.patc_ceue
XC sbdos_tahshdl
@o_d=@SScnay_etrJbd
jbi
L_eodr_RsoeoI
,shdl_d=@SScnayetrJbceueD
@ceuei
L_eodrRsoeoShdlI
ED
N
I (@RO =0AD@SAdRtoe=0
F @ERR
N L_d_eCd
)
BGN
EI
EE @SAdRtoe =mse.b.padlgsipn_eodr_aaae
XC L_d_eCd2
atrdos_d_o_hpigscnaydtbs
@eodr_aaae='+ ''''+d.ae+''''+ CA(0 +'
scnaydtbs
'''
bNm
'''
HR1)
,piaysre ='''+@rmevr+'''
@rmr_evr ''
PiSre
''
,piaydtbs =' ''''+d.ae+''''+ CA(0 +
@rmr_aaae
+ '''
bNm
'''
HR1)
'@etr_ea =0
,rsoedly
,rsoemd =1
@etr_oe
,dsonc_sr=1
@icnetues
,rsoetrsod=10
@etr_hehl
8
,trsodaeteald=1
@hehl_lr_nbe
,hsoyrtninpro=56
@itr_eeto_eid 70
,oewie=1
@vrrt
ED
N
I (@ro =0AD@SAdRtoe=0
F @err
N L_d_eCd
)
BGN
EI
EE md.b.pudt_o @o_d=@SScnay_oyoI ,eald=0
XC sbdos_paejb jbi
L_eodr_CpJbd @nbe
EE md.b.pudt_o @o_d=@SScnay_etrJbd,eald=1
XC sbdos_paejb jbi
L_eodr_RsoeoI @nbe
ED'' +'LDs FO #B d'
N '
[SB] RM Ds b
-Pit@M
-rn CD
Isr #oSipn (SB)
net Lghpig LDs
Ee (@M)
xc CD
St@one =@rwon
e Cutr @ocut
Wie(cutr>0
hl @one
)
Bgn
ei
slc tp1 @M =LDsfo #oSipn
eet o
CD SB rm Lghpig
ee s_xctsl@M
xc peeueq CD
st@one =@one -1
e cutr
cutr
dlt tp()fo #oSipn
eee o 1 rm Lghpig
Ed
n
I OJC_D(tmd.B.Lghpig,U)I NTNL DO TBE#oSipn
F BETI 'epbDO#oSipn''' S O UL RP AL Lghpig
I OJC_D(tmd.B.Ds,U)I NTNL DO TBE#B
F BETI 'epbDO#B''' S O UL RP AL Ds
- *** Ed Srp t b rna Scnay 93Bid***
- *** n: cit o e u t eodr: . ul ***

3. Verify that the transaction logs are shipped to the share and that DFS is replicating these logs to the share on the Windows Azure file
server. Open the Job Activity Monitor in SQL Server to verify the transaction logs are shipped successfully. Open the shared folders on both
file servers in the production and Windows Azure farms to verify DFS is transferring the transaction logs.
Phase 6: Validate failover and recovery[41]
The goal of this final phase is to verify that the disaster recovery solution works as planned. In order to do this you have to create a failover event
that shuts down the production farm and starts up the recovery farm as a replacement. You can start a failover scenario manually or by using
scripts.
The first step is to stop incoming user requests for farm services or content. You can do this by disabling DNS or by shutting down the front end
Web servers. After the farm is ‘down’ you can failover to the recovery farm.
Stop log shipping[42]
Log shipping must be stopped before farm recovery. The log shipping on the secondary server must be stopped first and then it must be stopped
on the primary server. Use the following script to stop log shipping on the secondary server first and then the primary server.
- RmvsLgSipn fo sre
- eoe o hpig rm evr
- Cmad ms b eeue o teScnaysre FRT te tePiay
- omns ut e xctd n h eodr evr IS, hn h rmr
STNCUTO
E OON N
DCAE @rD naca(a)
ELR PiB vrhrmx
,ScBnaca(5)
@eD vrhr20
,Pir naca(5)
@rSv vrhr20
,Scr naca(5)
@eSv vrhr20
St@rD='
e PiB '
ST@rD =UPR@rD)
E PiB
PE(PiB
ST@rD =RPAE@rD,'' ')
E PiB
ELC(PiB
, '
ST@rD ='' +RPAE@rD,'' '' '' +''
E PiB
'' ELC(PiB ,, ', ')
''
St@eD =@rD
e ScB
PiB

Ee ('eet 'ee mse.s_eeelgsipn_eodr_aaae' +'''' +pmpiaydtbs + ''''
xc
Slc 'xc atr.pdlt_o_hpigscnaydtbs ' '''' r.rmr_aaae
''''
fo md.b.o_hpigmntrpiaypmINRJI md.b.o_hpigpiayscnaissc O pmpiaydtbs=e.eodr_aaae
rm sbdolgsipn_oio_rmr r NE ON sbdolgsipn_rmr_eodre e N r.rmr_aaaescscnaydtbs
weepmpiaydtbs i ('+@rD +'))
hr r.rmr_aaae n
PiB
'

Ee ('eet 'ee mse.s_eeelgsipn_rmr_eodr ' +'''' +pmPiayDtbs +''',''' +scScnaySre +''','''
xc
Slc 'xc atr.pdlt_o_hpigpiayscnay ' '''' r.rmr_aaae ''' ''' e.eodr_evr ''' '''
fo md.b.o_hpigmntrpiaypmINRJI md.b.o_hpigpiayscnaissc O pmpiaydtbs=e.eodr_aaae
rm sbdolgsipn_oio_rmr r NE ON sbdolgsipn_rmr_eodre e N r.rmr_aaaescscnaydtbs
weepmpiaydtbs i ('+@rD +'))
hr r.rmr_aaae n
PiB
'
Ee ('eet 'ee mse.s_eeelgsipn_rmr_aaae' +'''' +pmpiaydtbs + ''''
xc
Slc 'xc atr.pdlt_o_hpigpiaydtbs ' '''' r.rmr_aaae
''''
fo md.b.o_hpigmntrpiaypmINRJI md.b.o_hpigpiayscnaissc O pmpiaydtbs=e.eodr_aaae
rm sbdolgsipn_oio_rmr r NE ON sbdolgsipn_rmr_eodre e N r.rmr_aaaescscnaydtbs
weepmpiaydtbs i ('+@rD +'))
hr r.rmr_aaae n
PiB
'

Ee ('eet 'ee mse.s_eeelgsipn_eodr_rmr ' +'''' +pmpiaysre +''',''' +pmpiaydtbs + ''''
xc
Slc 'xc atr.pdlt_o_hpigscnaypiay ' '''' r.rmr_evr ''' ''' r.rmr_aaae
''''
fo md.b.o_hpigmntrpiaypmINRJI md.b.o_hpigpiayscnaissc O pmpiaydtbs=e.eodr_aaae
rm sbdolgsipn_oio_rmr r NE ON sbdolgsipn_rmr_eodre e N r.rmr_aaaescscnaydtbs
weepmpiaydtbs i ('+@rD +'))
hr r.rmr_aaae n
PiB
'

Restore the backups[43]
Ensure that you meet the following prerequisites for restoring backups:
Backups must be restored in the order in which they were created. Before you can restore a particular transaction log backup, you must first
restore the following previous backups without rolling back uncommitted transactions (that is WITH NORECOVERY).
The full database backup and the last differential backup, if any, taken before the particular transaction log backup. Before the most recent full
or differential database backup was created, the database must have been using the full recovery model or bulk-logged recovery model.
All transaction log backups taken after the full database backup or the differential backup (if you restore one) and before the particular
transaction log backup. Log backups must be applied in the sequence in which they were created, without any gaps in the log chain.
To recover the content database on the secondary server so that the sites render, all database connections must be removed before recovery. To
restore the database, run the following command-SQL statement:
rsoedtbs WSCnetwt rcvr
etr aaae S_otn ih eoey

Visual C++
When you use T-SQL explicitly specify either WITH NORECOVERY or WITH RECOVERY in every RESTORE statement to eliminate ambiguity.
This is very important when writing scripts. After the full and differential backups are restored, the transaction logs can be restored in SQL
Server Management Studio. Also, because log shipping is already stopped the content database is in a standby state, so you must change the
state to Full Access.
In Management Studio, right click the WSS_Content database, point to Tasks, Restore, and then click Transaction Log, if you have not restored
the full backup then this will not be available. For more information, see Restore a Transaction Log Backup[44].
Crawl the content source[45]
You must start a full crawl for each content source to restore the Search Service. Note that you lose some analytics information from the onpremises farm, such as search recommendations. (If you require full analytics influences, you must do regular backup and restore operations on
the Search service application.) Before starting the full crawls, use the Windows PowerShell cmdlet, RestoreSPEnterpriseSearchServiceApplication and specify the log shipped and replicated Search Administration database,
Search_Service__DB_<GUID>. This cmdlet gives the search configuration, schema, managed props, rules, sources, and creates a default set of the
other components.
To start a full crawl, follow these steps:
1. In the SharePoint 2013 Central Administration go to Application Management > Service Applications > Manage service applications and
click the Search service application that you want to crawl.
2. On the Search Administration page click Content Sources, point to the content source that you want, click the arrow and then click Start
Full Crawl.
Recover farm services[46]
The following table shows how to recover services that have log shipped databases, the services that have databases but are not recommended to
restore these with log shipping, and the services that do not have databases.

Table: Service application database reference
Restore these services from log-shipped databases

These services have databases but we recommend
that you start this services without restoring their

These services do not store data in
databases. Start these services after

databases

failover

Machine Translation Service

Usage and Health Data Collection

Excel Services

Managed Metadata Service

State service

PerformancePoint Services

Secure Store Service

Word automation

PowerPoint Conversion

User Profile

Visio Graphics Service
Work Management

Note:
Only the Profile and Social Tagging databases
are supported. The Synchronization database is
not supported.
Microsoft SharePoint Foundation Subscription
Settings Service

The following example shows how to restore the Managed Metadata service from a database:
This will use the existing Managed_Metadata_DB database. This database is log shipped, but there is no active service application on the
secondary farm so it needs to be connected once the service application is in place.
First, use nwsmngdeaaaevcapiainspecifying the -aaaesic with the name, of the restored database.
e-paaemtdtsrieplcto
dtbs wth
Next, configure the new Managed Metadata Service Application on the secondary server as follows:
Name: Managed Metadata Service
Database server: Use the database name from the shipped transaction log
Database name: Managed_Metadata_DB
Application Pool: SharePoint Service Applications
Manage DNS records[47]
You have to create DNS records manually to point to your SharePoint farm.
In most cases where you have multiple web-front-end servers, it makes sense to take advantage of the Network Load Balancing feature in
Windows Server 2012 or a hardware load balancer to distribute requests among the web-front-end servers in your farm. Network load balancing
can also help reduce risk: if one of your web-front-end servers fails, NLB can distribute requests to the other servers.
Typically, when you set up network load balancing, your cluster is assigned a single IP address. You then create a DNS host record in the DNS
provider for your network that points to the cluster. (For this project, we put a DNS server in Azure for resiliency in the case of an on-premises
datacenter failure.) For instance, you can create a DNS record, in DNS Manager in Active Directory, called http://sharepoint.contoso.com that
points to the IP address for your load-balanced cluster.
For external access to your SharePoint farm, you can create a host record on an external DNS server with the same URL that clients use on your
intranet, for example http://sharepoint.contoso.com that points to an external IP address in your firewall. (A best practice in this case is to set up
split DNS, so your internal DNS server is authoritative for contoso.com and routes requests directly to the SharePoint farm cluster, rather than
routing DNS requests to your external DNS server.) You can then map the external IP address to the internal IP address of your on-premises
cluster so clients find the resources they’re looking for.
From here, you can run into a couple different disaster-recovery scenarios:
The on-premises SharePoint farm is unavailable, because of hardware failure in the on-premises SharePoint farm, for example. In this
case, after you completed the steps for fail over to the Windows Azure SharePoint farm, you can configure network load balancing on the recovery
SharePoint farm’s web-front end servers, the same way you did with the on-premises farm. You can then redirect the host record in your internal
DNS provider to point to the recovery farm’s cluster IP address. Note that it can take some time before cached DNS records on clients are
refreshed and point to the recovery farm.
The on-premises datacenter is lost completely. This can occur because of a natural disaster like a fire or flood. In this case, for an enterprise,
you’d likely have a secondary datacenter hosted in another region, as well as your Azure subnet that has its own directory services and DNS. As in
the previous disaster scenario, you can redirect your internal and external DNS records to point to the Azure SharePoint farm. Again, take note
that DNS-record propagation can take some time.
If you’re using host-named site collections, as recommended in Host-named site collection architecture and deployment (SharePoint 2013)[48],
you might have several site collections hosted by the same web application in your SharePoint farm, with unique DNS names like
http://sales.contoso.com and http://marketing.contoso.com. In this case, you can create DNS records for each site collection that point to your
cluster IP address. Once a request reaches your SharePoint web-front-end servers, they handle routing each request to the appropriate site
collection.
1. http://go.microsoft.com/fwlink/p/?LinkId=392554
2. http://go.microsoft.com/fwlink/p/?LinkId=392555
3. javascript:void(0)
4. http://go.microsoft.com/fwlink/?LinkID=393114
5. http://go.microsoft.com/fwlink/p/?linkid=203228
6. javascript:void(0)
7. javascript:void(0)
8. javascript:void(0)
9. javascript:void(0)
10. javascript:void(0)
11. http://technet.microsoft.com/en-us/library/dn635309(v=office.15).aspx
12. javascript:void(0)
13. javascript:void(0)
14. javascript:void(0)
15. javascript:void(0)
16. javascript:void(0)
17. http://go.microsoft.com/fwlink/p/?linkid=270985
18. javascript:void(0)
19. javascript:void(0)
20. javascript:void(0)
21. javascript:void(0)
22. http://go.microsoft.com/fwlink/?LinkId=392687
23. javascript:void(0)
24. http://go.microsoft.com/fwlink/?LinkId=392691
25. http://go.microsoft.com/fwlink/p/?linkid=270985
26. javascript:void(0)
27. http://go.microsoft.com/fwlink/?LinkId=392732
28. http://go.microsoft.com/fwlink/?LinkId=392737
29. http://go.microsoft.com/fwlink/?LinkId=392738
30. http://go.microsoft.com/fwlink/?LinkId=392739
31. http://go.microsoft.com/fwlink/?LinkId=392740
32. javascript:void(0)
33. http://go.microsoft.com/fwlink/?LinkId=392694
34. http://go.microsoft.com/fwlink/?LinkId=392695
35. http://go.microsoft.com/fwlink/?LinkId=392693
36. javascript:void(0)
37. javascript:void(0)
38. javascript:void(0)
39. javascript:void(0)
40. javascript:void(0)
41. javascript:void(0)
42. javascript:void(0)
43. javascript:void(0)
44. http://go.microsoft.com/fwlink/?LinkId=392778
45. javascript:void(0)
46. javascript:void(0)
47. javascript:void(0)
48. http://go.microsoft.com/fwlink/?LinkId=393120
SharePoint Server 2013 Disaster Recovery in Windows Azure

Contenu connexe

En vedette

SharePoint Saturday Netherlands 2016 - SharePoint and Office 365 performances...
SharePoint Saturday Netherlands 2016 - SharePoint and Office 365 performances...SharePoint Saturday Netherlands 2016 - SharePoint and Office 365 performances...
SharePoint Saturday Netherlands 2016 - SharePoint and Office 365 performances...Patrick Guimonet
 
Buenas prácticas en infraestructura en SharePoint 2013
Buenas prácticas en infraestructura en SharePoint 2013Buenas prácticas en infraestructura en SharePoint 2013
Buenas prácticas en infraestructura en SharePoint 2013Miguel Tabera
 
SQL Server and SharePoint - Best Practices presented by Steffen Krause, Micro...
SQL Server and SharePoint - Best Practices presented by Steffen Krause, Micro...SQL Server and SharePoint - Best Practices presented by Steffen Krause, Micro...
SQL Server and SharePoint - Best Practices presented by Steffen Krause, Micro...European SharePoint Conference
 
SQL Server 2016 and SharePoint 2016 - Lars PLatzdasch - SQL Konferenz 2016
SQL Server 2016 and SharePoint 2016  - Lars PLatzdasch - SQL Konferenz 2016SQL Server 2016 and SharePoint 2016  - Lars PLatzdasch - SQL Konferenz 2016
SQL Server 2016 and SharePoint 2016 - Lars PLatzdasch - SQL Konferenz 2016Lars Platzdasch
 
A Deep Dive into SharePoint 2016 architecture and deployment
A Deep Dive into SharePoint 2016 architecture and deploymentA Deep Dive into SharePoint 2016 architecture and deployment
A Deep Dive into SharePoint 2016 architecture and deploymentSPC Adriatics
 
Optimizing SQL Server 2012 Deep dive for SharePoint 2013 Lars Platzdasch SQL ...
Optimizing SQL Server 2012 Deep dive for SharePoint 2013 Lars Platzdasch SQL ...Optimizing SQL Server 2012 Deep dive for SharePoint 2013 Lars Platzdasch SQL ...
Optimizing SQL Server 2012 Deep dive for SharePoint 2013 Lars Platzdasch SQL ...Lars Platzdasch
 
SharePoint Performance: Best Practices from the Field
SharePoint Performance: Best Practices from the FieldSharePoint Performance: Best Practices from the Field
SharePoint Performance: Best Practices from the FieldJason Himmelstein
 
Sql best practices for SharePoint 2010
Sql best practices for SharePoint 2010Sql best practices for SharePoint 2010
Sql best practices for SharePoint 2010Samuel Zürcher
 

En vedette (8)

SharePoint Saturday Netherlands 2016 - SharePoint and Office 365 performances...
SharePoint Saturday Netherlands 2016 - SharePoint and Office 365 performances...SharePoint Saturday Netherlands 2016 - SharePoint and Office 365 performances...
SharePoint Saturday Netherlands 2016 - SharePoint and Office 365 performances...
 
Buenas prácticas en infraestructura en SharePoint 2013
Buenas prácticas en infraestructura en SharePoint 2013Buenas prácticas en infraestructura en SharePoint 2013
Buenas prácticas en infraestructura en SharePoint 2013
 
SQL Server and SharePoint - Best Practices presented by Steffen Krause, Micro...
SQL Server and SharePoint - Best Practices presented by Steffen Krause, Micro...SQL Server and SharePoint - Best Practices presented by Steffen Krause, Micro...
SQL Server and SharePoint - Best Practices presented by Steffen Krause, Micro...
 
SQL Server 2016 and SharePoint 2016 - Lars PLatzdasch - SQL Konferenz 2016
SQL Server 2016 and SharePoint 2016  - Lars PLatzdasch - SQL Konferenz 2016SQL Server 2016 and SharePoint 2016  - Lars PLatzdasch - SQL Konferenz 2016
SQL Server 2016 and SharePoint 2016 - Lars PLatzdasch - SQL Konferenz 2016
 
A Deep Dive into SharePoint 2016 architecture and deployment
A Deep Dive into SharePoint 2016 architecture and deploymentA Deep Dive into SharePoint 2016 architecture and deployment
A Deep Dive into SharePoint 2016 architecture and deployment
 
Optimizing SQL Server 2012 Deep dive for SharePoint 2013 Lars Platzdasch SQL ...
Optimizing SQL Server 2012 Deep dive for SharePoint 2013 Lars Platzdasch SQL ...Optimizing SQL Server 2012 Deep dive for SharePoint 2013 Lars Platzdasch SQL ...
Optimizing SQL Server 2012 Deep dive for SharePoint 2013 Lars Platzdasch SQL ...
 
SharePoint Performance: Best Practices from the Field
SharePoint Performance: Best Practices from the FieldSharePoint Performance: Best Practices from the Field
SharePoint Performance: Best Practices from the Field
 
Sql best practices for SharePoint 2010
Sql best practices for SharePoint 2010Sql best practices for SharePoint 2010
Sql best practices for SharePoint 2010
 

Plus de Vision Concepts Infrastructure Services Solution

Plus de Vision Concepts Infrastructure Services Solution (20)

Palo Alto Networks VM-Series firewall now available on NetScaler SDX Platform
Palo Alto Networks VM-Series firewall now available on NetScaler SDX PlatformPalo Alto Networks VM-Series firewall now available on NetScaler SDX Platform
Palo Alto Networks VM-Series firewall now available on NetScaler SDX Platform
 
Citrix Application Lifecycle Management
Citrix Application Lifecycle ManagementCitrix Application Lifecycle Management
Citrix Application Lifecycle Management
 
Using Windows Azure as a SharePoint Disaster Recovery Environment
Using Windows Azure as a SharePoint Disaster Recovery EnvironmentUsing Windows Azure as a SharePoint Disaster Recovery Environment
Using Windows Azure as a SharePoint Disaster Recovery Environment
 
Delivery-on-Demand Company Raises $16 Million, Questions About Safety and Out...
Delivery-on-Demand Company Raises $16 Million, Questions About Safety and Out...Delivery-on-Demand Company Raises $16 Million, Questions About Safety and Out...
Delivery-on-Demand Company Raises $16 Million, Questions About Safety and Out...
 
Security Testing Using Infrastructure-As-Code
Security Testing Using Infrastructure-As-CodeSecurity Testing Using Infrastructure-As-Code
Security Testing Using Infrastructure-As-Code
 
Could Google end sweatshop labor?
Could Google end sweatshop labor?Could Google end sweatshop labor?
Could Google end sweatshop labor?
 
How to Change the IT Architect Hiring Practice
How to Change the IT Architect Hiring PracticeHow to Change the IT Architect Hiring Practice
How to Change the IT Architect Hiring Practice
 
NetApp To Offer Integrated Storage Array And Virtualization Software
NetApp To Offer Integrated Storage Array And Virtualization SoftwareNetApp To Offer Integrated Storage Array And Virtualization Software
NetApp To Offer Integrated Storage Array And Virtualization Software
 
Big Data - Security Concerns
Big Data - Security ConcernsBig Data - Security Concerns
Big Data - Security Concerns
 
Apple adding hundreds of new engineers and operations staff in China to speed...
Apple adding hundreds of new engineers and operations staff in China to speed...Apple adding hundreds of new engineers and operations staff in China to speed...
Apple adding hundreds of new engineers and operations staff in China to speed...
 
Data Analytics and the Ubiquitous Internet of Things
Data Analytics and the Ubiquitous Internet of ThingsData Analytics and the Ubiquitous Internet of Things
Data Analytics and the Ubiquitous Internet of Things
 
Citrix UniPrint NetGain Case Study
Citrix UniPrint NetGain Case StudyCitrix UniPrint NetGain Case Study
Citrix UniPrint NetGain Case Study
 
Uniprint Infinity Citrix Printing Solution
Uniprint Infinity Citrix Printing SolutionUniprint Infinity Citrix Printing Solution
Uniprint Infinity Citrix Printing Solution
 
The CIO Pocket MBA
The CIO Pocket MBAThe CIO Pocket MBA
The CIO Pocket MBA
 
Citrix Top 10 Trends
Citrix Top 10 Trends Citrix Top 10 Trends
Citrix Top 10 Trends
 
Citrix and Samsung Deliver Android Enterprise Security on Latest Samsung Gala...
Citrix and Samsung Deliver Android Enterprise Security on Latest Samsung Gala...Citrix and Samsung Deliver Android Enterprise Security on Latest Samsung Gala...
Citrix and Samsung Deliver Android Enterprise Security on Latest Samsung Gala...
 
Learn KVM and Receive Exclusive Invitation to Linux Foundation Collaboration ...
Learn KVM and Receive Exclusive Invitation to Linux Foundation Collaboration ...Learn KVM and Receive Exclusive Invitation to Linux Foundation Collaboration ...
Learn KVM and Receive Exclusive Invitation to Linux Foundation Collaboration ...
 
Using Concurrent Multipath Transmission for Transport Virtualization
Using Concurrent Multipath Transmission for Transport VirtualizationUsing Concurrent Multipath Transmission for Transport Virtualization
Using Concurrent Multipath Transmission for Transport Virtualization
 
GuideIT - Virtual Economies of Scale
GuideIT - Virtual Economies of Scale GuideIT - Virtual Economies of Scale
GuideIT - Virtual Economies of Scale
 
Build Your 2008R2 2-Node Cluster
Build Your 2008R2 2-Node ClusterBuild Your 2008R2 2-Node Cluster
Build Your 2008R2 2-Node Cluster
 

Dernier

Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Enterprise Knowledge
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...apidays
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?Antenna Manufacturer Coco
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreternaman860154
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEarley Information Science
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...Martijn de Jong
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)wesley chun
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUK Journal
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
Advantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessAdvantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessPixlogix Infotech
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Drew Madelung
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024The Digital Insurer
 

Dernier (20)

Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
Advantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessAdvantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your Business
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 

SharePoint Server 2013 Disaster Recovery in Windows Azure

  • 1. SharePoint Server 2013 Disaster Recovery in Windows Azure This topic has not yet been rated - Rate this topic Applies to: Windows Server 2012, Windows Azure, SQL Server 2012, SharePoint Server 2013 Enterprise, Office Professional 2013, data and storage, AD DS Topic Last Modified: 2014-03-03 Summary: Using Windows Azure Infrastructure Services, you can create a disaster-recovery environment for your on-premises SharePoint farm. This article describes how to design and implement this solution. Use this article with the following solution model: SharePoint Disaster Recovery in Windows Azure Figure 1: SharePoint Disaster Recovery Solution Hosted in Windows Azure Visio version [1] PDF version [2] In this article: Use Windows Azure Infrastructure Services for disaster recovery[3] Many organizations do not have a disaster-recovery environment for SharePoint. This is because it can be expensive to build and maintain a recovery environment on-premises. Windows Azure Infrastructure Services provides compelling options for disaster recovery environments that are more flexible and less expensive than the on-premises alternatives. Advantages include: Hosted secondary datacenter Use Windows Azure Infrastructure Services instead of investing in a secondary datacenter in a different region. Lower-cost disaster-recovery environments Maintain and pay for fewer resources than an on-premises disaster-recovery environment. The number of resources depend on which disaster-recovery environment you choose: cold standby, warm standby, or hot standby. Windows Azure Infrastructure Services is elastic Easily scale-out your recovery SharePoint farm in the event of a disaster to meet load requirements. Scale in when you no longer need the resources. Entry-level options for companies getting started with disaster recovery are possible, as well as advanced options for enterprises with high resiliency requirements. Definitions for cold, warm, and hot standby environments are a little different when the environment is hosted in a cloud platform. The following table shows how we think about these environments when building a SharePoint recovery farm in Windows Azure. Table: Recovery environments Type of recovery environment Description
  • 2. A fully-sized farm is provisioned, updated, and running on standby. The farm is built and VMs are running and updated. Recovery includes attaching content databases, provisioning service applications, and crawling content. The farm can be a smaller version of the production farm and then scaled out to serve the full user base. The farm is fully built, but the VMs are stopped. Maintaining the environment includes starting the VMs from time-to-time, patching, updating, and verifying the environment. Start the full environment in the event of a disaster. It is important to evaluate your organization’s Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). These requirements will determine which environment is the most appropriate investment for your organization. The guidance in this article describes how to implement a warm standby environment. This guidance can be adapted for a cold standby environment, though additional procedures will be needed to support this environment. This guidance does not address hot standby environments. For more information about disaster recovery solutions see High availability and disaster recovery concepts in SharePoint 2013[4] and Choose a disaster recovery strategy for SharePoint 2013[5]. Solution description [6] The warm standby disaster recovery solution requires the following environment. On-premises SharePoint production farm Recovery SharePoint farm in Windows Azure Site-to-site VPN connection between the two environments The following figure illustrates these three elements. Figure 2: Elements of a warm standby solution in Windows Azure SQL Server Log shipping together with Distributed File System Replication (DFSR) are used to copy database backups and transaction logs to the recovery farm in Windows Azure. DFSR is used to transfer logs from the production environment to the recovery environment. In a WAN scenario DFSR is more efficient than shipping the logs directly to the secondary server in Windows Azure. Logs are replayed to the SQL servers in the recovery environment in Windows Azure. Log-shipped databases are not attached to the farm until a recovery exercise is performed. Failover includes the following steps: Stop log shipping. Stop accepting traffic to the primary farm. Replay the final transaction logs. Attach the content databases to the farm.
  • 3. Restore service applications from the replicated services databases. Update DNS records to point to the recovery farm. Start a full crawl. Recovery objectives provided by this solution are summarized in the following table. Table: Solution recovery objectives Item Description Sites Sites and content are available in the recovery environment. and content A new In this warm standby solution, search is not restored from search databases. Search components in the recovery farm are configured as instance similarly as possible to the production farm. After the sites and content are restored, a full crawl is started to rebuild the search index. of You do not need to wait for the crawl to complete to make the sites and content available. search Services Services that store data in databases are restored from the log-shipped databases. Services that do not store data in databases are simply started. Not all services with databases need to be restored. The following services do not need to be restored from databases and can simply be started after failover: Usage and Health Data Collection State service Word automation Any service without a database Additional items that can be addressed by Microsoft Consulting Services or a partner are summarized in the following table. Table: Other disaster recovery resources Item Description Synchronizing Ideally the recovery farm is configured identically as possible to the production farm. You can work with a consultant or custom farm partner to evaluate whether custom farm solutions are replicated and the process for keeping the two environments in sync. solutions Connections to It might not be practical to replicate connections to backend data systems, such as BDC connections and search content data sources onpremises sources. Search restore scenarios Because enterprise search deployments tend to be fairly unique and very complex, restoring search from databases requires a greater investment. You can work with a consultant or partner to identify and implement search restore scenarios that your organization might require. This solution guidance assumes that the on-premises farm is already designed and deployed. Detailed architecture[7] Ideally the recovery farm in Windows Azure is configured as identically as possible to the production farm on-premises: Same representation of server roles Same configuration of customizations Same configuration of search components The environment in Windows Azure can be a smaller version of the production farm. If you plan to scale out the recovery farm out after failover, it is important that each type of server role is represented initially. Some configurations might not be practical to replicate in the failover environment. Be sure to test the failover procedures and environment to ensure that the failover farm provides the service level that is expected.
  • 4. This solution doesn't prescribe a specific topology for a SharePoint farm. The focus of this solution is using Windows Azure for the failover farm and implementing log shipping and DFSR between the two environments. Warm standby environments[8] In a warm standby environment, all VMs in the Windows Azure environment are running. The environment is ready for a failover exercise or event. The following figure illustrates a disaster recovery solution from an on-premises SharePoint farm to a Windows Azure-based SharePoint farm that is configured as a warm standby environment. Figure 3: Topology and key elements of production farm and warm standby recovery farm In this illustration: Two environments are illustrated side-by-side — the on-premises SharePoint farm and the warm standby farm in Windows Azure. A file share is included in each environment. Each farm includes four tiers. To achieve high availability, each tier includes two servers or VMs that are configured identically for a specific role, such as front-end services, distributed cache, backend services, and databases. It isn't important in this illustration to call out specific components. The two farms are configured identically. The fourth tier is the database tier. Log shipping is used to copy logs from the secondary database server in the on-premises environment to the file share in the same environment. DFSR is used to copy files from the file share in the on-premises environment to the file share in the Windows Azure environment. Log shipping is used to replay the logs from the file share in the Windows Azure environment to the primary replica in the SQL Server AlwaysOn availability group in the recovery environment. Cold standby environments[9] In a cold standby environment, most of the SharePoint farm VMs can be shut down. (We recommend occasionally starting the virtual machines, such as every two weeks or once a month, so each can sync with the domain.) The following VMs in the Windows Azure recovery environment must remain running to ensure continuous operations of log shipping and DFSR: The file share The primary database server At least one VM running Windows Server Active Directory and DNS The following figure shows a Windows Azure failover environment in which the file share VM and the primary SharePoint database VM are running. All other SharePoint VMs are stopped. The VM running Windows Server Active Directory and DNS is not shown. Figure 4: Cold standby recovery farm with running VMs
  • 5. After failover to a cold standby environment, all VMs are started and the method used to achieve high availability of the database servers must be configured, such as SQL Server AlwaysOn availability groups. If multiple storage groups are implemented (databases are spread across more than one set of highly available SQL Servers) the primary database for each storage group must be running to accept the logs associated with its storage group. Designing the Windows Azure environment[10] Before deploying VMs in Windows Azure it is important to design the Windows Azure environment, including the virtual network and site-tosite VPN connection that is a requirement for this solution. For this design task, see Windows Azure Architectures for SharePoint 2013[11]. Microsoft proof of concept environment[12] The design goal for our test environment was to deploy and recover a SharePoint farm that we’d expect to see in a customer environment. We made several assumptions, but knew that the farm needed to provide all of the out of the box functionality, without any customizations. The topology would be designed for high availability using best practice guidance from the field and product group. The following table describes the Hyper-V virtual machines we created and configured for the on-premises test environment. Table: Virtual machines for on-premises test Server name Role Domain controller with Active Directory Configuration 2 processors 512 – 4 GB RAM 1 x 127 GB Hard Disk Server configured with the Routing and Remote Access Service role 2 processors 2 - 8 GB RAM 1 x 127 GB Hard Disk File server with shares for backups and end point for DFSR 4 processors
  • 6. 2 - 12 GB RAM 1 x 127 GB 1 x 1 TB (SAN) 1 x 750 GB SP-WFE1, SP- Front end Web servers 4 processors WFE2 16 GB RAM SP-APP1, SP- Application servers 4 processors APP2, SP-APP3 2 – 16 GB RAM SP-SQL-HA1, Database servers, configured with SQL Server 2012 AlwaysOn availability groups to provide high availability. SP-SQL-HA2 This configuration uses SP-SQL-HA1 and SP-SQL-HA2 as the primary and secondary replicas 4 processors 2 – 16 GB RAM The following table describes drive configurations for the Hyper-V virtual machines that we created and configured for the front end web and application, servers for the on-premises test environment. Table: Virtual machine drive requirements for Front end Web and Application servers for on-premises test Drive LetterSizeDirectory Name Path System drive <DriveLetter>:Program FilesMicrosoft SQL Server Log drive (40 GB) <DriveLetter>:Program FilesMicrosoft SQL ServerMSSQL10_50.MSSQLSERVERMSSQLDATA Page (36 GB <DriveLetter>:Program FilesMicrosoft SQL ServerMSSQLDATA The following table describes drive configurations for the Hyper-V virtual machines that we created and configured for the database servers for the on-premises test environment. On the Database Engine Configuration page access the Data Directories tab to set and confirm the settings shown in the following table. Table: Virtual machine drive requirements for Database server for on-premises test Drive LetterSizeDirectory Name Path Data root directory <DriveLetter>:Program FilesMicrosoft SQL Server User database directory <DriveLetter>:Program FilesMicrosoft SQL ServerMSSQL10_50.MSSQLSERVERMSSQLDATA User database log directory <DriveLetter>:Program FilesMicrosoft SQL ServerMSSQL10_50.MSSQLSERVERMSSQLDATA Temp DB directory <DriveLetter>:Program FilesMicrosoft SQL ServerMSSQL10_50.MSSQLSERVERMSSQLDATA Temp DB log directory <DriveLetter>:Program FilesMicrosoft SQL ServerMSSQL10_50.MSSQLSERVERMSSQLDATA Setting up the test environment[13] During the different deployment phases the test team typically worked on the on-premises part of architecture first, and then on the corresponding parts of the Windows Azure environment. This reflects the general real world cases, where in-house production farms are already running. What’s even more important is the fact that you should know the current production workload, capacity, and typical performance. In addition to building a disaster recovery that can meet business requirements, you need to size the recovery farm servers to deliver a minimum level of service. In a cold or warm standby environment a recovery farm is typically smaller than a production farm. After the recovery farm is stable and in production, the farm can be scaled up and out to meet workload requirements. We deployed our test environment in the following three phases: Set up the hybrid infrastructure
  • 7. Provision the servers Deploy the SharePoint farms Set up the hybrid infrastructure[14] This phase involves setting up a domain environment for the on-premises farm and for the recovery farm in Windows Azure. In addition to the normal tasks associated with configuring Active Directory, the test team implemented a routing solution and a VPN connection between the two environments. Provision the servers[15] In addition to the farm servers, it was necessary to provision servers for the domain controllers, and a server configured to handle routing and remote access service (RRAS) as well as the Site-to-Site VPN. Two file servers were provisioned for the DFSR service and several client computers were provisioned for testers. Deploy the SharePoint farm[16] The SharePoint farm was deployed in two stages in order to simplify stabilizing the environments and troubleshooting if it was required. During the first stage each farm was deployed on the minimum number of servers for each tier of the topology and to support the functionality that was required. We created the farm and joined additional servers in the following order: Note: We created the database server with SQL Server installed before creating the SharePoint 2013 servers. 1. Provision SP-SQL-HA1 and SP-SQL-HA2. 2. Configure AlwaysOn and create the 3 availability groups for the farm. Note: Because this was a new deployment we could create the availability groups before deploying SharePoint. We created three groups based on MCS best practice guidance. The SharePoint databases are distributed on the following groups: Provision SP-APP1 to host Central Administration. Provision SP-WFE1 and SP-WFE2 to host the distributed cache. Use the skipRegisterAsDistributedCachehost parameter when running psconfig.exe at the command line. For more information, see Plan for feeds and the Distributed Cache service in SharePoint Server 2013[17]. Note: It was recognized that if resources or schedule became an issue, the initial environment would be suitable for finishing this proof of concept disaster recovery project. After we configured the distributed cache, added test users, and added test content, we started stage two of the deployment. This required that we scale out the tiers and configure the farm servers to support the high availability topology described in the farm architecture. Operations[18] After the farm environments were stable and functional testing was completed, the test team started the operations tasks required to configure the on-premises recovery environment. Configure full and differential backups Configure DFSR on the file servers that would be used to transfer transaction logs between the on-premises environment and the Windows Azure environment Configure log shipping on the primary database server Stabilize, validate and troubleshoot log shipping as required. This included identifying and documenting any behaviors that might cause issues, such as network latency which would cause log shipping or DFSR file synchronization failures. For detailed information about preparing the end to end recovery environment, see Phase 5: Setup log shipping to the recovery farm.
  • 8. Disaster recovery roadmap[19] The roadmap described in the following table assumes that you already have a SharePoint Server 2013 farm deployed in production. Table: Roadmap for disaster recovery Phase Description Phase 1 Review and inventory the on-premises production to farm to ensure that: The SQL Server AlwaysOn and Log Shipping requirements and configurations in in this article can be met. Production farm customizations and publishing solutions are fully supported in Windows Azure. Phase 2 Design and implement a hybrid network environment. Phase 3 Deploy the SharePoint recovery farm in Windows Azure. Phase 4 Setup DFSR between the farms. Phase 5 Setup log shipping to the recovery farm. Phase 6 Validate failover and recovery solution. This includes the following procedures and technologies: Stop log shipping Recover content Crawl content Recover services Manage DNS records Phase 1: Review and inventory the on-premises product farm[20] A successful disaster recovery environment must accurately reflect the production farm that you want to recover. The size of the recovery farm is not the most important thing in the recovery farm’s design, deployment, and testing. Farm scale varies from organization to organization based on their business requirements. It may be possible to use a scaled down farm for a short outage, or until performance and capacity demands require you to scale the farm. The most important thing is the ability to provide a SharePoint farm that meets your SLA requirements and provides the functionality that you need to support your business. The only way to achieve your recovery goals is to have a detailed inventory of your farm configuration, applications, and users. Note: This should already be part of your disaster recovery plan. Phase 2: Implement a hybrid network environment[21] The key components and architecture that’s needed for this hybrid network infrastructure have already been described. In this phase of the project you have to deploy and configure these components. Based on our test configuration, the following steps are required: 1. 2. 3. 4. Create the domain controller, and create the AD subnets and sites. Deploy a gateway server or install a router that can connect to the Windows Azure VPN gateway. Configure the gateway server (RRAS in our model) to connect the VPN gateway. Deploy a domain controller in Windows Azure and configure it as a subscription server. For detailed configuration guidance for setting up a domain controller in Windows Azure, see Install a Replica Active Directory Domain Controller in Windows Azure Virtual Networks[22]. Phase 3: Deploy the SharePoint recovery farm in Windows Azure[23] The SharePoint recovery farm in Windows Azure, while scaled down, should be configured as closely as possible to the production farm. When you deploy SharePoint and configure the farm there are some things you don’t have to configure. A good example is site collections, which are registered in the farm sitemap automatically when you attach the content database created from the backups. Before we created any virtual machines in Windows Azure we applied the design principles we’ve described and Windows Azure best practice
  • 9. guidance. There are several best practice articles available; Guidance[24] is a good starting point. We created the cloud services we wanted to use and decided to hold off on creating availability sets until we created the virtual machines. The following table describes the virtual machines, cloud services, and availability sets we setup for our recovery farm. Table: Recovery farm infrastructure Server name Configuration Cloud service Availability set spDRAD Domain controller with Active Directory 2 processors spDRAD 512 – 4 GB RAM 1 x 127 GB Hard Disk AZ-SP-FS File server with shares for backups and end point for DFSR A5 sp- configuration: databaseservers DATA_SET 2 processors 14 GB RAM 1 x 127 GB 1 x 135 1 x 127 GB 1 x 150 GB AZ-WFE1, AZ -WFE2 Front end Web servers A5 sp-webservers WFE_SET APP_SET configuration: 2 processors 14 GB RAM 1 x 127 GB Hard Disk AZ -APP1, AZ -APP2, AZ Application servers A5 sp- -APP3 configuration: applicationservers 2 processors 14 GB RAM 1 x 127 GB Hard Disk AZ -SQL-HA1, AZ -SQLHA2 Database servers, primary and secondary replicas for AlwaysOn availability groups A5 configuration: spdatabaseservers DATA_SET 2 processors 14 GB RAM When we deployed the recovery farm in Windows Azure we used the phase-based strategy that we used for the on-premises farm. The following steps were repeated in the recovery environment: Provision AZ-SQL-HA1 and AZ-SQL-HA2. Configure AlwaysOn and create the 3 availability groups for the farm. Note: Because this was a new deployment we could create the availability groups before deploying SharePoint. We created three groups based on
  • 10. MCS best practice guidance. The SharePoint databases are distributed across these groups as follows: Provision AZ-APP1 to host Central Administration. Provision AZ-WFE1 and AZ-WFE2 to host the distributed cache, which is installed by using the skipRegisterAsDistributedCachehost parameter when running psconfig.exe at the command line. For more information, see Plan for feeds and the Distributed Cache service in SharePoint Server 2013[25]. After configuring the distributed cache, adding test users, and adding test content, we started stage two of the deployment. This required scaling out the tiers and configuring the farm servers to support the high availability topology described in the farm architecture. Phase 4: Setup DFSR between the farms[26] In order to setup file replication using DFSR we used DNS Management snap-in. However, before the DFSR setup, we had to log on to the onpremises fileserver (FS1) and the Windows Azure file server (az-sp-fs) and enable the service in Windows. From the Server Manager Dashboard we complete the following steps: Configure this local server Start the Add Roles and Features Wizard Open the File and Storage Services node Select DFS Namespaces and DFS replication Click Next to finish the wizard steps Figure 14: DFS Replication Health Report The preceding screen capture shows the detailed reporting that DFS Management provides. These reports include configuration results and replication health. The following table provides links to DFSR reference articles and blog posts. Table: Reference articles for DFSR
  • 11. Replication [27] DFS Management TechNet topic with links for Replication DFS Replication: Survival Guide[28] Wiki with links to DFS information DFS Replication: Frequently Asked Questions[29] DFS Replication TechNet topic Jose Barreto’s Blog[30] Principal Program Manager on File Server team at Microsoft The Storage Team at Microsoft – File Cabinet Blog[31] About files services and storage features in Windows Server Phase 5: Setup log shipping to the recovery farm[32] Log shipping is the critical component for setting up disaster recovery in this hybrid environment. Log Shipping enables you to automatically send transaction log files for databases from a primary database server instance to a secondary database server instance. In our on-premises test environment, which uses AlwaysOn availability groups with two replicas for high availability, we configured log shipping on both replicas. This is needed because either replica must be able to ship transaction logs. Only the replica that is active and owns the database can ship logs. However, if there was a failover event and the secondary replica became active, it would have to ship transaction logs instead of the failed replica. After the transaction logs are received in the Windows Azure environment they are restored, one at a time to each SharePoint database on the secondary database server. Note: Some organizations use a third database server as a monitor to record the history and status of backup and restore operations. This optional monitor server creates alerts when backup operations fail. For detailed information about log shipping refer to the articles in the following table. Table: Reference articles for log shipping URL Description About Log Shipping (SQL Server)[33] Describes log shipping transaction log backups and the options that are available. Configure Log Shipping (SQL Server) Describes how to configure log shipping in SQL Server 2012 by using SQL Server Management Studio or [34] Transact-SQL. View the Log Shipping Report (SQL Explains how to view the Transaction Log Shipping Status report in SQL Server Management Studio. You Server Management Studio)[35] can run a status report at a monitor server, primary server, or secondary server. Before you begin [36] Make sure that you can meet the following log shipping prerequisites. The SQL Server logins you use are domain accounts that have the permission levels needed for log shipping. The log-shipping stored procedures require membership in the sysadmin fixed server role. The primary database must use the full or bulk-logged recovery model. Caution: If you switch the database to simple recovery log shipping will quit working. Before you configure log shipping, you must create a share to make the transaction log backups available to the secondary server. This is a share of the directory where the transaction log backups will be generated. In addition to your recovery point objectives (RPO) you want to ensure that the recovered farm data is as complete and uncorrupt as possible. To reach these goals you have to plan and schedule every aspect of log shipping very carefully. Performance considerations[37] Log shipping consists of three jobs. Each job performs one of the following operations: 1. Back up the transaction log at the primary server instance. 2. Copy the transaction log file to the secondary server instance. 3. Restore the log backup on the secondary server instance.
  • 12. Each of the preceding jobs operates on a schedule and runs for an interval, which can have a significant impact on database server, and by default, SharePoint farm performance. In order to correctly set the backup, copy, and restore job intervals for log shipping you have to analyze the amount of data that is being log shipped. The amount of log shipped data is affected by the daily amount of change in the content databases. The percentage of change can vary greatly, depending on the content, maintenance changes and usage peaks. To get an accurate percentage of change you have to calculate the sum of changes in the transaction log backups for each content database that you log ship over a given interval. Use this data to calculate the percentage of change compared to the primary database. The following guidance is derived from the field’s log shipping experience with several releases of SharePoint Server. Avoid performance degradation due to all jobs starting at the same time by making sure that all log shipping jobs are offset with at least a 1 minute shift from the previous job. It is better to back up and copy many small transaction logs instead of a few large transaction logs. Schedule log backups and copying at frequent intervals. You can restore the transaction logs at less-frequent intervals. For example, start by using backup and copy intervals of 5 minutes, and a restore interval of 15 minutes. Skills and experience[38] In this hybrid disaster recovery solution multiple technologies are used. To make sure that these technologies interact as expected each component in the on-premises and Windows Azure environment must be installed and configured correctly. We recommend that the person or team who sets up this solution has a strong working knowledge of, and hands-on skills with the following technologies: Finally, we recommend scripting skills that you can use to automate tasks associated with these technologies. It is possible to use the available user interfaces to complete all the tasks described in this solution. However, a manual approach is time consuming, error prone, and delivers inconsistent results. In addition to Windows PowerShell there are also Windows PowerShell libraries for SQL Server, SharePoint Server and Windows Azure. Don’t forget T-SQL, which can also reduce the time needed to configure and maintain your disaster recovery environment. The log shipping infrastructure[39] The log shipping infrastructure that is needed for this disaster recovery solution is shown in the following figure. Figure 13: Log shipping infrastructure and data flow The preceding image shows the log shipping infrastructure and data flow. This picture shows the SQL Server database servers and the file servers in the production farm and the Windows Azure recovery farm. These farms are nearly identical and each contains a primary and secondary replica for each AlwaysOn availability group. The file servers, FIL1 and AZ-FIL1 are configured the same, especially the number of hard disks and disk sizes. To provide high availability each replica in an availability group stores a backup (full, differential, and transaction logs) of the other replica. The primary and secondary replicas (SQL-HA1 and SQL-HA2) each make backups that are stored on its partner in the availability group. Transaction log shipping is configured on the secondary replica to minimize the impact of backups on the production databases. These transaction logs are written to a shared folder on the on-premises file server (FIL1). The Windows Server Distributed File System (DFS)
  • 13. Replication Service copies the transaction logs from FIL1 to AZ-FIL1. The transaction logs on AZ-FIL1 are restored to AZ-SQL-HA1, the primary replica for the availability group in the recovery farm. Steps required to configure and validate log shipping[40] The steps required to configure, run, and validate log shipping are condensed and summarized in the following list. 1. Take database backups 1. Configure Full and Differential backups to a local folder and also a shared folder on the file server 2. Verify backups are made to both local share and shared folder 2. Set up and test Distributed File System (DFS) Replication 1. Create Namespace and Replication to transfer transaction logs and backup files between the on-premises and Windows Azure farms on the shared folder in file server 2. Verify all transfers after log shipping runs 3. Set up and test log shipping 1. Set up log shipping on primary database server – use the following script: Primary- Logshippingsetupparameter STNCUTO E OON N UEMD S SB G O -@rmevr:PiaySre nm -PiSre rmr evr ae -@eSre :D/eodr Sre Nm -Scevr RScnay evr ae -@eIsac :D/eodr FD -Scntne RScnay QN -@oan :Dmi Nm -Dmi oan ae -@kDie:Pouto Bcu sre Nm -Bprv rdcin akp evr ae -@Bae:DtbsNm -DNm aaaeae DCAE@SBcuJbdSuiuietfe, @SPiaydSuiuietfe ,@PAdRtoeA it ELR L_akpoIA nqedniir L_rmrIA nqedniir S_d_eCd s n DCAE@iea naca(0,Scntnea naca(5) @rmevra naca(0,Scevra naca(0, ELR Tm s vrhr1)@eIsac s vrhr20, PiSre s vrhr5)@eSre s vrhr5) @oana naca(0,DNm a naca(a)@kDiea naca(5)@M a naca(a)@one it Dmi s vrhr5)@Bae s vrhrmx,Bprv s vrhr20,CD s vrhrmx,Cutr n --------------------------------------------------------------------------I OJC_D(tmd.B.Lghpig,U)I NTNL DO TBE#oSipn F BETI 'epbDO#oSipn''' S O UL RP AL Lghpig Cet tbe#oSipn (LDsnaca(a) rae al Lghpig SB vrhrmx) St@rmevr=08C1PQ1 e PiSre '3-HSS0' St@eSre =08S1SQ1 e Scevr '3-NSS0' St@eIsac =08S1SQ108.g.stnt e Scntne '3-NSS0.3dmdmf.e' St@oan=08.g.stnt e Dmi '3dmdmf.e' St@kDie=08C1PK108.g.stnt e Bprv '3-HSB0.3dmdmf.e' St@Bae='oilD' e DNm Sca_B St@ie='10 e Tm 03' ST@Bae=UPRRPAE@Bae '' ') E DNm PE(ELC(DNm, , ') ST@Bae='' +RPAE@Bae '' '' '' +'' E DNm '' ELC(DNm, ,, ', ') '' St@M = 'Slc '+ e CD eet ''ELR @PAdRtoeA it @SBcuJbdSuiuietfe, @SPiaydSuiuietfe 'DCAE S_d_eCd s n, L_akpoIA nqedniir L_rmrIA nqedniir EE @PAdRtoe=mse.b.padlgsipn_rmr_aaae'+CA(0 + XC S_d_eCd atrdos_d_o_hpigpiaydtbs HR1) 'dtbs =''' +d.ae+''''+CA(0 + @aaae ''' bNm ''' HR1) '@akpdrcoy='''+@kDie+'L +''+d.ae+'''+'' +CA(0 + ,bcu_ietr '' Bprv S' ' bNm '' '' HR1) '@akpsae='+''' +@kDie+'L +''+d.ae+'''+'' +CA(0 + ,bcu_hr ''' Bprv S' ' bNm '' '' HR1) '@akpjbnm ='''+'Sakp'+''+d.ae+'''+'' +CA(0 + ,bcu_o_ae '' LBcu_ ' bNm '' '' HR1) '@akprtninpro =42 ,bcu_eeto_eid 30 ,bcu_opeso =1 @akpcmrsin ,bcu_hehl =10 @akptrsod 8 ,trsodaeteald=1 @hehl_lr_nbe ,hsoyrtninpro =56 @itr_eeto_eid 70 ,bcu_o_d=@SBcuJbdOTU @akpjbi L_akpoI UPT ,piayi =@SPiaydOTU @rmr_d L_rmrI UPT
  • 14. ,oewie=1'+ @vrrt 'F(@RO =0AD@PAdRtoe=0 I @ERR N S_d_eCd ) BGN EI DCAE@SBcUShdlUDsuiuietfe ,L_akpceueDSit ELR L_akpceueIA nqedniir @SBcUShdlIA n EE md.b.padshdl XC sbdos_d_ceue @ceuenm ='''+'Sakpceue' @rmevr+''+''+d.ae+'''+'' +CA(0 + shdl_ae '' LBcuShdl_+ PiSre _ ' bNm '' '' HR1) '@nbe =1 ,eald ,fe_ye=4 @rqtp ,fe_nevl=1 @rqitra ,fe_udytp =4 @rqsba_ye ,fe_udyitra =1 @rqsba_nevl 3 ,fe_eurnefco =0 @rqrcrec_atr ,atv_tr_ae=2000 @ciesatdt 0956 ,atv_n_ae=9913 @cieeddt 9921 ,atv_tr_ie='+@ie +CA(0 + @ciesattm Tm HR1) '@cieedtm =250 ,atv_n_ie 390 ,shdl_i =@SBcUShdlUDOTU @ceueud L_akpceueI UPT ,shdl_d=@SBcUShdlI OTU @ceuei L_akpceueD UPT EE md.b.patc_ceue@o_d=@SBcuJbd,shdl_d=@SBcUShdlI XC sbdos_tahshdl jbi L_akpoI @ceuei L_akpceueD EE md.b.pudt_o @o_d=@SBcuJbd,eald=1 XC sbdos_paejb jbi L_akpoI @nbe ED N EE mse.b.padlgsipn_lr_o XC atrdos_d_o_hpigaetjb EE mse.b.padlgsipn_rmr_eodr XC atrdos_d_o_hpigpiayscnay @rmr_aaae='' +'''+d.ae+'''+'' +CA(0 + piaydtbs ' '' bNm '' '' HR1) '@eodr_evr='''+@eIsac +''' +CA(0 + ,scnaysre '' Scntne ''' HR1) '@eodr_aaae=''+'''+d.ae+'''+'' +CA(0 + ,scnaydtbs ' '' bNm '' '' HR1) '@vrrt =1''+ ,oewie ' '[SB]FO ssdtbssd weenm i ( +@Bae+''+ LDs RM y.aaae b hr ae n ' DNm ) 'n ad d.ae nti ('atr''mdl''md','epb''mtisp','eicp' ) bnm o n 'mse','oe','sb''tmd','ercos''prsoe' ad Nt(xss(eetlsScnaydtbs fo md.b.o_hpigScnaydtbsslswee d.ae=lsScnaydtbs) n o eit slc s.eodr_aaae rm sbdolgsipn_eodr_aaae s hr bNm s.eodr_aaae o eit (eetlppiaydtbs fo md.b.o_hpigpiaydtbsslpwee d.ae=lppiaydtbs) r xss slc s.rmr_aaae rm sbdolgsipn_rmr_aaae s hr bNm s.rmr_aaae ) ' Isr #oSipn (SB) net Lghpig LDs Ee (@M) xc CD St@one =@rwon e Cutr @ocut Wie(cutr>0 hl @one ) Bgn ei slc tp1 @M =LDsfo #oSipn eet o CD SB rm Lghpig ee s_xctsl@M xc peeueq CD st@one =@one -1 e cutr cutr dlt tp()fo #oSipn eee o 1 rm Lghpig Ed n I OJC_D(tmd.B.Lghpig,U)I NTNL DO TBE#oSipn F BETI 'epbDO#oSipn''' S O UL RP AL Lghpig - *** Ed Srp t b rna Piay [B-SSL0] *** - *** n: cit o e u t rmr: D1PMQ-1 *** 2. Set up log shipping on secondary database server – use the following script: Secondary-Logshippingsetupparameter scripts - *** Bgn Srp t b rna Scnay 93BID*** - *** ei: cit o e u t eodr: . UL*** STNCUTO E OON N UEMD S SB G O -@rmevr:PiaySre nm -PiSre rmr evr ae -@eSre :D/eodr Sre Nm -Scevr RScnay evr ae -@eIsac :D/eodr FD -Scntne RScnay QN -@oan :Dmi Nm -Dmi oan ae
  • 15. -@rmrBprv :Pouto Bcu sre Nm -PiaykDie rdcin akp evr ae -@kDie:Scndr Bcu sre Nm -Bprv eoaay akp evr ae -@Bae:DtbsNm -DNm aaaeae DCAE@SBcuJbdSuiuietfe, @SPiaydSuiuietfe ,@PAdRtoeA it ELR L_akpoIA nqedniir L_rmrIA nqedniir S_d_eCd s n DCAE@iea naca(0,Scntnea naca(5) @rmevra naca(0,Scevra naca(0, ELR Tm s vrhr1)@eIsac s vrhr20, PiSre s vrhr5)@eSre s vrhr5) @oana naca(0,DNm a naca(a)@rmrBprv a naca(5)@kDiea naca(5)@M a naca(a)@M2a naca(a)@o Dmi s vrhr5)@Bae s vrhrmx,PiaykDie s vrhr20,Bprv s vrhr20,CD s vrhrmx,CD s vrhrmx,Cu DCAE @eiee ca()@Bnaca(0) @trPsit @eghit ELR Dlmtr hr1,D vrhr20, Sato n, Lnt n I OJC_D(tmd.B.Lghpig,U)I NTNL DO TBE#oSipn F BETI 'epbDO#oSipn''' S O UL RP AL Lghpig Cet tbe#oSipn (LDsnaca(a) rae al Lghpig SB vrhrmx) I OJC_D(tmd.B.Ds,U)I NTNL DO TBE#B F BETI 'epbDO#B''' S O UL RP AL Ds Cet TBE#B (aenaca(0) rae AL Ds Nm vrhr20) St@rmevr=08C1PQ1 e PiSre '3-HSS0' St@eSre =08S1SQ1 e Scevr '3-NSS0' St@eIsac =08S1SQ108.g.stnt e Scntne '3-NSS0.3dmdmf.e' St@oan=08.g.stnt e Dmi '3dmdmf.e' ST@rmrBprv ='3-HSB0.3dmdmf.e' E PiaykDie 08C1PK108.g.stnt St@kDie=08S1SK108.g.stnt e Bprv '3-NSB0.3dmdmf.e' St@Bae='oilD' e DNm Sca_B St@ie='10 e Tm 03' -PrigFnto -asn ucin ST@eiee ='' E Dlmtr , WIELN@Bae >0 HL E(DNm) BGN EI ST@trPs=CAIDX@eiee,@Bae E Sato HRNE(Dlmtr DNm) I @trPs<0ST@trPs=0 F Sato E Sato ST@egh=LN@Bae -@trPs-1 E Lnt E(DNm) Sato I @egh<0ST@egh=0 F Lnt E Lnt I @trPs>0 F Sato BGN EI ST@B=RrmLrmSBTIG@Bae 1 @trPs-1) E D ti(ti(USRN(DNm, , Sato )) ST@Bae=SBTIG@Bae @trPs+1 LN@Bae -@trPs E DNm USRN(DNm, Sato , E(DNm) Sato) ED N ES LE BGN EI ST@B=RrmLrm@Bae) E D ti(ti(DNm) ST@Bae=' E DNm ' ED N ISR #B (ae VLE(D) NET Ds Nm) AUS@B ED N -ST@Bae=UPRRPAE@Bae '' ') -E DNm PE(ELC(DNm, , ') -ST@Bae='' +RPAE@Bae '' '' '' +'' -E DNm '' ELC(DNm, ,, ', ') '' St@M ='eet'+ e CD Slc ''DCAE@SScnay_oyoI A uiuietfe,@SScnay_etrJbdA uiuietfe ,L_eodr_ScnaydA uiuietfe ,@S ' ELR L_eodr_CpJbd S nqedniir L_eodr_RsoeoI S nqedniir @SScnay_eodrI S nqedniir L DCAE@SScnayoyoShdlUDsuiuietfe ,L_eodrCpJbceueDSit @SScnayetrJbceueIA uiuietfe ,L_ ELR L_eodrCpJbceueIA nqedniir @SScnayoyoShdlIA n, L_eodrRsoeoShdlUDs nqedniir @S EE @SAdRtoe=mse.b.padlgsipn_eodr_rmr XC L_d_eCd atrdos_d_o_hpigscnaypiay @rmr_evr='''+@rmevr+'''+ CA(0 + piaysre '' PiSre ''' HR1) '@rmr_aaae=' +'+ '''''+d.ae+'''''+ CA(0 + ,piaydtbs ' '''' bNm '''' HR1) '+',bcu_oredrcoy='+''' +@rmrBprv +'L'+d.ae+''''+ CA(0 + '@akpsuc_ietr ''' PiaykDie S' bNm ''' HR1) ',bcu_etnto_ietr = '+''' +@kDie+'L'+d.ae+''''+ CA(0 + @akpdsiaindrcoy ''' Bprv S' bNm ''' HR1) '@oyjbnm =''LCp_B-SSL0_'+d.ae+''''+ CA(0 + ,cp_o_ae ''SoyD1PMQ-1' bNm ''' HR1) '@etr_o_ae=''LRsoe' @rmevr+'' +d.ae+''''+ CA(0 + ,rsoejbnm ''Setr_+ PiSre _' bNm ''' HR1) '@iertninpro =42 ,fl_eeto_eid 30 ,oewie=1 @vrrt ,cp_o_d=@SScnay_oyoI OTU @oyjbi L_eodr_CpJbd UPT
  • 16. ,rsoejbi =@SScnay_etrJbdOTU @etr_o_d L_eodr_RsoeoI UPT ,scnayi =@SScnay_eodrI OTU @eodr_d L_eodr_Scnayd UPT I (@RO =0AD@SAdRtoe=0 F @ERR N L_d_eCd ) BGN EI EE md.b.padshdl XC sbdos_d_ceue @ceuenm =''ealCpJbceue'' shdl_ae ''DfutoyoShdl'' ,eald=1 @nbe ,fe_ye=4 @rqtp ,fe_nevl=1 @rqitra ,fe_udytp =4 @rqsba_ye ,fe_udyitra =1 @rqsba_nevl 5 ,fe_eurnefco =0 @rqrcrec_atr ,atv_tr_ae=2000 @ciesatdt 0956 ,atv_n_ae=9913 @cieeddt 9921 ,atv_tr_ie='+@ie+' @ciesattm Tm ,atv_n_ie=250 @cieedtm 390 ,shdl_i =@SScnayoyoShdlUDOTU @ceueud L_eodrCpJbceueI UPT ,shdl_d=@SScnayoyoShdlI OTU @ceuei L_eodrCpJbceueD UPT EE md.b.patc_ceue XC sbdos_tahshdl @o_d=@SScnay_oyoI jbi L_eodr_CpJbd ,shdl_d=@SScnayoyoShdlI @ceuei L_eodrCpJbceueD EE md.b.padshdl XC sbdos_d_ceue @ceuenm =''ealRsoeoShdl'' shdl_ae ''DfutetrJbceue'' ,eald=1 @nbe ,fe_ye=4 @rqtp ,fe_nevl=1 @rqitra ,fe_udytp =4 @rqsba_ye ,fe_udyitra =1 @rqsba_nevl 5 ,fe_eurnefco =0 @rqrcrec_atr ,atv_tr_ae=2000 @ciesatdt 0956 ,atv_n_ae=9913 @cieeddt 9921 ,atv_tr_ie='+@ie+' @ciesattm Tm ,atv_n_ie=250 @cieedtm 390 ,shdl_i =@SScnayetrJbceueI OTU @ceueud L_eodrRsoeoShdlUD UPT ,shdl_d=@SScnayetrJbceueDOTU @ceuei L_eodrRsoeoShdlI UPT EE md.b.patc_ceue XC sbdos_tahshdl @o_d=@SScnay_etrJbd jbi L_eodr_RsoeoI ,shdl_d=@SScnayetrJbceueD @ceuei L_eodrRsoeoShdlI ED N I (@RO =0AD@SAdRtoe=0 F @ERR N L_d_eCd ) BGN EI EE @SAdRtoe =mse.b.padlgsipn_eodr_aaae XC L_d_eCd2 atrdos_d_o_hpigscnaydtbs @eodr_aaae='+ ''''+d.ae+''''+ CA(0 +' scnaydtbs ''' bNm ''' HR1) ,piaysre ='''+@rmevr+''' @rmr_evr '' PiSre '' ,piaydtbs =' ''''+d.ae+''''+ CA(0 + @rmr_aaae + ''' bNm ''' HR1) '@etr_ea =0 ,rsoedly ,rsoemd =1 @etr_oe ,dsonc_sr=1 @icnetues ,rsoetrsod=10 @etr_hehl 8 ,trsodaeteald=1 @hehl_lr_nbe ,hsoyrtninpro=56 @itr_eeto_eid 70 ,oewie=1 @vrrt ED N I (@ro =0AD@SAdRtoe=0 F @err N L_d_eCd ) BGN EI EE md.b.pudt_o @o_d=@SScnay_oyoI ,eald=0 XC sbdos_paejb jbi L_eodr_CpJbd @nbe EE md.b.pudt_o @o_d=@SScnay_etrJbd,eald=1 XC sbdos_paejb jbi L_eodr_RsoeoI @nbe ED'' +'LDs FO #B d' N ' [SB] RM Ds b
  • 17. -Pit@M -rn CD Isr #oSipn (SB) net Lghpig LDs Ee (@M) xc CD St@one =@rwon e Cutr @ocut Wie(cutr>0 hl @one ) Bgn ei slc tp1 @M =LDsfo #oSipn eet o CD SB rm Lghpig ee s_xctsl@M xc peeueq CD st@one =@one -1 e cutr cutr dlt tp()fo #oSipn eee o 1 rm Lghpig Ed n I OJC_D(tmd.B.Lghpig,U)I NTNL DO TBE#oSipn F BETI 'epbDO#oSipn''' S O UL RP AL Lghpig I OJC_D(tmd.B.Ds,U)I NTNL DO TBE#B F BETI 'epbDO#B''' S O UL RP AL Ds - *** Ed Srp t b rna Scnay 93Bid*** - *** n: cit o e u t eodr: . ul *** 3. Verify that the transaction logs are shipped to the share and that DFS is replicating these logs to the share on the Windows Azure file server. Open the Job Activity Monitor in SQL Server to verify the transaction logs are shipped successfully. Open the shared folders on both file servers in the production and Windows Azure farms to verify DFS is transferring the transaction logs. Phase 6: Validate failover and recovery[41] The goal of this final phase is to verify that the disaster recovery solution works as planned. In order to do this you have to create a failover event that shuts down the production farm and starts up the recovery farm as a replacement. You can start a failover scenario manually or by using scripts. The first step is to stop incoming user requests for farm services or content. You can do this by disabling DNS or by shutting down the front end Web servers. After the farm is ‘down’ you can failover to the recovery farm. Stop log shipping[42] Log shipping must be stopped before farm recovery. The log shipping on the secondary server must be stopped first and then it must be stopped on the primary server. Use the following script to stop log shipping on the secondary server first and then the primary server. - RmvsLgSipn fo sre - eoe o hpig rm evr - Cmad ms b eeue o teScnaysre FRT te tePiay - omns ut e xctd n h eodr evr IS, hn h rmr STNCUTO E OON N DCAE @rD naca(a) ELR PiB vrhrmx ,ScBnaca(5) @eD vrhr20 ,Pir naca(5) @rSv vrhr20 ,Scr naca(5) @eSv vrhr20 St@rD=' e PiB ' ST@rD =UPR@rD) E PiB PE(PiB ST@rD =RPAE@rD,'' ') E PiB ELC(PiB , ' ST@rD ='' +RPAE@rD,'' '' '' +'' E PiB '' ELC(PiB ,, ', ') '' St@eD =@rD e ScB PiB Ee ('eet 'ee mse.s_eeelgsipn_eodr_aaae' +'''' +pmpiaydtbs + '''' xc Slc 'xc atr.pdlt_o_hpigscnaydtbs ' '''' r.rmr_aaae '''' fo md.b.o_hpigmntrpiaypmINRJI md.b.o_hpigpiayscnaissc O pmpiaydtbs=e.eodr_aaae rm sbdolgsipn_oio_rmr r NE ON sbdolgsipn_rmr_eodre e N r.rmr_aaaescscnaydtbs weepmpiaydtbs i ('+@rD +')) hr r.rmr_aaae n PiB ' Ee ('eet 'ee mse.s_eeelgsipn_rmr_eodr ' +'''' +pmPiayDtbs +''',''' +scScnaySre +''',''' xc Slc 'xc atr.pdlt_o_hpigpiayscnay ' '''' r.rmr_aaae ''' ''' e.eodr_evr ''' ''' fo md.b.o_hpigmntrpiaypmINRJI md.b.o_hpigpiayscnaissc O pmpiaydtbs=e.eodr_aaae rm sbdolgsipn_oio_rmr r NE ON sbdolgsipn_rmr_eodre e N r.rmr_aaaescscnaydtbs weepmpiaydtbs i ('+@rD +')) hr r.rmr_aaae n PiB '
  • 18. Ee ('eet 'ee mse.s_eeelgsipn_rmr_aaae' +'''' +pmpiaydtbs + '''' xc Slc 'xc atr.pdlt_o_hpigpiaydtbs ' '''' r.rmr_aaae '''' fo md.b.o_hpigmntrpiaypmINRJI md.b.o_hpigpiayscnaissc O pmpiaydtbs=e.eodr_aaae rm sbdolgsipn_oio_rmr r NE ON sbdolgsipn_rmr_eodre e N r.rmr_aaaescscnaydtbs weepmpiaydtbs i ('+@rD +')) hr r.rmr_aaae n PiB ' Ee ('eet 'ee mse.s_eeelgsipn_eodr_rmr ' +'''' +pmpiaysre +''',''' +pmpiaydtbs + '''' xc Slc 'xc atr.pdlt_o_hpigscnaypiay ' '''' r.rmr_evr ''' ''' r.rmr_aaae '''' fo md.b.o_hpigmntrpiaypmINRJI md.b.o_hpigpiayscnaissc O pmpiaydtbs=e.eodr_aaae rm sbdolgsipn_oio_rmr r NE ON sbdolgsipn_rmr_eodre e N r.rmr_aaaescscnaydtbs weepmpiaydtbs i ('+@rD +')) hr r.rmr_aaae n PiB ' Restore the backups[43] Ensure that you meet the following prerequisites for restoring backups: Backups must be restored in the order in which they were created. Before you can restore a particular transaction log backup, you must first restore the following previous backups without rolling back uncommitted transactions (that is WITH NORECOVERY). The full database backup and the last differential backup, if any, taken before the particular transaction log backup. Before the most recent full or differential database backup was created, the database must have been using the full recovery model or bulk-logged recovery model. All transaction log backups taken after the full database backup or the differential backup (if you restore one) and before the particular transaction log backup. Log backups must be applied in the sequence in which they were created, without any gaps in the log chain. To recover the content database on the secondary server so that the sites render, all database connections must be removed before recovery. To restore the database, run the following command-SQL statement: rsoedtbs WSCnetwt rcvr etr aaae S_otn ih eoey Visual C++ When you use T-SQL explicitly specify either WITH NORECOVERY or WITH RECOVERY in every RESTORE statement to eliminate ambiguity. This is very important when writing scripts. After the full and differential backups are restored, the transaction logs can be restored in SQL Server Management Studio. Also, because log shipping is already stopped the content database is in a standby state, so you must change the state to Full Access. In Management Studio, right click the WSS_Content database, point to Tasks, Restore, and then click Transaction Log, if you have not restored the full backup then this will not be available. For more information, see Restore a Transaction Log Backup[44]. Crawl the content source[45] You must start a full crawl for each content source to restore the Search Service. Note that you lose some analytics information from the onpremises farm, such as search recommendations. (If you require full analytics influences, you must do regular backup and restore operations on the Search service application.) Before starting the full crawls, use the Windows PowerShell cmdlet, RestoreSPEnterpriseSearchServiceApplication and specify the log shipped and replicated Search Administration database, Search_Service__DB_<GUID>. This cmdlet gives the search configuration, schema, managed props, rules, sources, and creates a default set of the other components. To start a full crawl, follow these steps: 1. In the SharePoint 2013 Central Administration go to Application Management > Service Applications > Manage service applications and click the Search service application that you want to crawl. 2. On the Search Administration page click Content Sources, point to the content source that you want, click the arrow and then click Start Full Crawl. Recover farm services[46] The following table shows how to recover services that have log shipped databases, the services that have databases but are not recommended to restore these with log shipping, and the services that do not have databases. Table: Service application database reference
  • 19. Restore these services from log-shipped databases These services have databases but we recommend that you start this services without restoring their These services do not store data in databases. Start these services after databases failover Machine Translation Service Usage and Health Data Collection Excel Services Managed Metadata Service State service PerformancePoint Services Secure Store Service Word automation PowerPoint Conversion User Profile Visio Graphics Service Work Management Note: Only the Profile and Social Tagging databases are supported. The Synchronization database is not supported. Microsoft SharePoint Foundation Subscription Settings Service The following example shows how to restore the Managed Metadata service from a database: This will use the existing Managed_Metadata_DB database. This database is log shipped, but there is no active service application on the secondary farm so it needs to be connected once the service application is in place. First, use nwsmngdeaaaevcapiainspecifying the -aaaesic with the name, of the restored database. e-paaemtdtsrieplcto dtbs wth Next, configure the new Managed Metadata Service Application on the secondary server as follows: Name: Managed Metadata Service Database server: Use the database name from the shipped transaction log Database name: Managed_Metadata_DB Application Pool: SharePoint Service Applications Manage DNS records[47] You have to create DNS records manually to point to your SharePoint farm. In most cases where you have multiple web-front-end servers, it makes sense to take advantage of the Network Load Balancing feature in Windows Server 2012 or a hardware load balancer to distribute requests among the web-front-end servers in your farm. Network load balancing can also help reduce risk: if one of your web-front-end servers fails, NLB can distribute requests to the other servers. Typically, when you set up network load balancing, your cluster is assigned a single IP address. You then create a DNS host record in the DNS provider for your network that points to the cluster. (For this project, we put a DNS server in Azure for resiliency in the case of an on-premises datacenter failure.) For instance, you can create a DNS record, in DNS Manager in Active Directory, called http://sharepoint.contoso.com that points to the IP address for your load-balanced cluster. For external access to your SharePoint farm, you can create a host record on an external DNS server with the same URL that clients use on your intranet, for example http://sharepoint.contoso.com that points to an external IP address in your firewall. (A best practice in this case is to set up split DNS, so your internal DNS server is authoritative for contoso.com and routes requests directly to the SharePoint farm cluster, rather than routing DNS requests to your external DNS server.) You can then map the external IP address to the internal IP address of your on-premises cluster so clients find the resources they’re looking for. From here, you can run into a couple different disaster-recovery scenarios: The on-premises SharePoint farm is unavailable, because of hardware failure in the on-premises SharePoint farm, for example. In this case, after you completed the steps for fail over to the Windows Azure SharePoint farm, you can configure network load balancing on the recovery SharePoint farm’s web-front end servers, the same way you did with the on-premises farm. You can then redirect the host record in your internal DNS provider to point to the recovery farm’s cluster IP address. Note that it can take some time before cached DNS records on clients are refreshed and point to the recovery farm. The on-premises datacenter is lost completely. This can occur because of a natural disaster like a fire or flood. In this case, for an enterprise, you’d likely have a secondary datacenter hosted in another region, as well as your Azure subnet that has its own directory services and DNS. As in the previous disaster scenario, you can redirect your internal and external DNS records to point to the Azure SharePoint farm. Again, take note
  • 20. that DNS-record propagation can take some time. If you’re using host-named site collections, as recommended in Host-named site collection architecture and deployment (SharePoint 2013)[48], you might have several site collections hosted by the same web application in your SharePoint farm, with unique DNS names like http://sales.contoso.com and http://marketing.contoso.com. In this case, you can create DNS records for each site collection that point to your cluster IP address. Once a request reaches your SharePoint web-front-end servers, they handle routing each request to the appropriate site collection. 1. http://go.microsoft.com/fwlink/p/?LinkId=392554 2. http://go.microsoft.com/fwlink/p/?LinkId=392555 3. javascript:void(0) 4. http://go.microsoft.com/fwlink/?LinkID=393114 5. http://go.microsoft.com/fwlink/p/?linkid=203228 6. javascript:void(0) 7. javascript:void(0) 8. javascript:void(0) 9. javascript:void(0) 10. javascript:void(0) 11. http://technet.microsoft.com/en-us/library/dn635309(v=office.15).aspx 12. javascript:void(0) 13. javascript:void(0) 14. javascript:void(0) 15. javascript:void(0) 16. javascript:void(0) 17. http://go.microsoft.com/fwlink/p/?linkid=270985 18. javascript:void(0) 19. javascript:void(0) 20. javascript:void(0) 21. javascript:void(0) 22. http://go.microsoft.com/fwlink/?LinkId=392687 23. javascript:void(0) 24. http://go.microsoft.com/fwlink/?LinkId=392691 25. http://go.microsoft.com/fwlink/p/?linkid=270985 26. javascript:void(0) 27. http://go.microsoft.com/fwlink/?LinkId=392732 28. http://go.microsoft.com/fwlink/?LinkId=392737 29. http://go.microsoft.com/fwlink/?LinkId=392738 30. http://go.microsoft.com/fwlink/?LinkId=392739 31. http://go.microsoft.com/fwlink/?LinkId=392740 32. javascript:void(0) 33. http://go.microsoft.com/fwlink/?LinkId=392694 34. http://go.microsoft.com/fwlink/?LinkId=392695 35. http://go.microsoft.com/fwlink/?LinkId=392693 36. javascript:void(0) 37. javascript:void(0) 38. javascript:void(0) 39. javascript:void(0) 40. javascript:void(0) 41. javascript:void(0)
  • 21. 42. javascript:void(0) 43. javascript:void(0) 44. http://go.microsoft.com/fwlink/?LinkId=392778 45. javascript:void(0) 46. javascript:void(0) 47. javascript:void(0) 48. http://go.microsoft.com/fwlink/?LinkId=393120