System Installation Workbook
Spokane
Version 2.7
Date: Apr 2013

ABOUT NETAPP
NetApp creates innovative storage and data management solutions that deliver outstanding
cost efficiency and accelerate performance breakthroughs. Discover our passion for helping
companies around the world go further, faster at www.netapp.com.
NetApp, Inc.
495 East Java Drive
Sunnyvale, CA 94089 USA
Telephone: +1 (408) 822-6000
Fax: +1 (408) 822-4501
Support telephone: +1 (888) 4-NETAPP
Copyright and trademark information
© Copyright 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced
without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp,
the NetApp logo, Go further, faster, and Data ONTAP are trademarks or registered trademarks of NetApp,
Inc. in the United States and/or other countries.
All other brands or products are trademarks or registered trademarks of their respective holders and
should be treated as such.

Table of contents
1 SITE REQUIREMENTS
1.1 Physical characteristics – Storage controllers and disk drives
1.2 System power requirements – Storage controllers and disk drives
1.2.1 FAS20xx series systems
1.2.2 FAS22xx series systems
1.2.3 FAS30xx series systems
1.2.4 FAS31xx series systems
1.2.5 FAS32xx series systems
1.2.6 FAS60xx series systems
1.2.7 FAS62xx series systems
1.2.8 DS14 series disk shelves
1.2.9 DS2246 disk shelves
1.2.10 DS4243 disk shelves
1.2.11 DS4246 disk shelves
1.2.12 DS4486 disk shelves
1.3 System Cabinet
1.4 System cabinet configurations
1.5 Network cabling requirements
1.5.1 Ethernet Configuration Recommendations
2 DATA ONTAP® 7-MODE CONFIGURATION DETAILS
2.1 Basic configuration
2.1.1 IFGRPs
2.1.2 Network interface configuration
2.1.3 Default gateway
2.1.4 Administration host (Optional)
2.1.5 Time zone
2.1.6 Language encoding for multiprotocol files
2.1.7 Domain Name Services (DNS) resolution
2.1.8 Network Information Services (NIS) resolution
2.1.9 Remote Management Settings (RLM/SP/BMC)
2.1.10 Alternate Control Path (ACP) management for SAS shelves
2.1.11 CIFS configuration
2.1.12 Configure Virtual LANs (Optional)
2.1.13 AutoSupport settings
2.1.14 Customer/RMA details
2.1.15 Time synchronization
2.1.16 SNMP management settings (Optional)
3 DATA ONTAP 7-MODE INSTALLATION AND VERIFICATION CHECKLISTS
4 DATA ONTAP CLUSTER-MODE CONFIGURATION DETAILS
4.1 Cluster information
4.1.1 Cluster
4.1.2 Licensing
4.1.3 Admin Vserver
4.1.4 Time synchronization
4.1.5 Time zone
4.2 Node information
4.2.1 Physical port identification
4.2.2 Node management LIF
4.3 Cluster network information
4.3.1 Interface groups (IFGRP)
4.3.2 Configure Virtual LANs (VLANs)
4.3.3 Logical Interfaces (LIFs)
4.4 Intercluster network information
4.5 Vserver information
4.5.1 Creating Vserver
4.5.2 Creating Volumes on the Vserver
4.5.3 IP Network Interface on the Vserver
4.5.4 FCP Network Interface on the Vserver
4.5.5 LDAP services
4.5.6 CIFS protocol
4.5.7 iSCSI protocol
4.5.8 FCP protocol
4.6 Support information
4.6.1 Remote Management Settings (RLM/BMC/SP)
4.6.2 AutoSupport settings
4.6.3 Customer/RMA details
A. DATA ONTAP CLUSTER-MODE INSTALLATION AND VERIFICATION CHECKLISTS
A.1 Definitions

List of Tables
Table 1: Electrical requirements – FAS20xx series
Table 2: Electrical requirements – FAS2220
Table 3: Electrical requirements – FAS2240 series (one controller module, no mezzanine card and either 450-GB or 600-GB disk drives for FAS2240; 1-TB or 2-TB disk drives for FAS2240-4)
Table 4: Electrical requirements – FAS30xx series
Table 5: Electrical requirements – FAS31xx series
Table 6: Electrical requirements – FAS3210 with one 256-GB Flash Cache module (one controller module)
Table 7: Electrical requirements – FAS3240 with one 256-GB, one 512-GB, or one 1-TB Flash Cache module per controller module (two controller modules)
Table 8: Electrical requirements – FAS3270 with one 256-GB, one 512-GB, or one 1-TB Flash Cache module per controller module (two controller modules)
Table 9: Electrical requirements – FAS6030/FAS6040
Table 10: Electrical requirements – FAS6210 single-controller module; FAS6240 and FAS6280 with I/O expansion
Table 11: Electrical requirements – DS14mk2 AT, 7.2K speed
Table 12: Electrical requirements – DS14mk2 FC, 15K speed
Table 13: Electrical requirements – DS2246, SAS drives
Table 14: Electrical requirements – DS4243, SAS drives
Table 15: Electrical requirements – DS4243, SATA drives
Table 16: Electrical requirements – DS4246, SATA drives; 6 100-GB SSD drives with 18 1-TB or 18 3-TB disk drives
Table 17: Electrical requirements – DS4486

WELCOME
Dear Customer,
Thank you for choosing a NetApp storage system and Professional Services installation.
To ensure a seamless deployment and integration into your environment, please complete the information
requested in this document before our engineer arrives on site. This will ensure that as many questions as
possible are answered before the day of the installation, so you can start using your system.
The first part of the document includes environmental information about our products, which may help you
with your computer room planning.
The second part of the workbook covers the information that the professional services engineer will need
on the day of installation. Please obtain the required information and return a completed copy of this
document to the engineer before they arrive.
We look forward to working with you.

Yours faithfully
(NetApp Services Engineering)

Preface
This document describes how to install a NetApp system.
AUDIENCE

The primary audience for this document is PS consultants and IT administration engineers.
NON-DISCLOSURE REQUIREMENTS

© Copyright 2012 NetApp. All rights reserved. This document contains the confidential and proprietary
information of NetApp, Inc. Do not reproduce or distribute without the prior written consent of NetApp.
INFORMATION ABOUT THIS DOCUMENT

All information about this document including version history, review and approval, typographical
conventions, references, and a glossary of terms can be found in the final chapter of this document.

1 Site requirements
Please download and read the latest version of the Site Requirements Guide available at
http://support.netapp.com/

1.1 Physical characteristics – Storage controllers and disk drives
Hardware | Height | Width | Depth | Weight | Rack units
FAS62xx series | 10.2 in (25.86 cm) | 17.6 in (44.68 cm) | 29 in (73.66 cm), including cable management tray | Single controller module: 99.2 lbs (45 kg); Controller and I/O expansion module: 125.7 lbs (57 kg); Two controller modules: 130.1 lbs (59 kg) | 6
FAS60xx series | 10.32 in (26.21 cm) | 17.53 in (44.52 cm) | 29 in (73.66 cm), including cable management tray | 122 lbs (55.34 kg) | 6
FAS32xx series | 5.12 in (13.0 cm) | 17.61 in (44.7 cm) | 24 in (60.7 cm) | Single controller module: 67.3 lbs (30.5 kg); Controller and I/O expansion module: 74.5 lbs (33.8 kg); Two controller modules: 79.5 lbs (36.1 kg) | 3
FAS31xx series | 10.75 in (27.3 cm) | 17.73 in (45.0 cm) | 24 in (60.7 cm) | Single controller module: 102 lbs (46.27 kg); Two controller modules: 121 lbs (54.89 kg) | 6
FAS30xx series | 5.13 in (13 cm) | 17.73 in (45.0 cm) | 24 in (60.7 cm) | 68 lbs (30.84 kg) | 3
FAS2240-4 | 7 in (17.9 cm) | 17.73 in (45.0 cm) | 28 in (71.1 cm), including the cable management arm | Single controller module: 102.3 lbs (46.4 kg); Two controller modules: 107.8 lbs (48.9 kg) | 4
FAS2240-2 | 3.3 in (8.4 cm) | 17.6 in (44.7 cm) | 23.1 in (58.7 cm), including the cable management arm | Single controller module: 50.7 lbs (23 kg); Two controller modules: 56 lbs (25.4 kg) | 2
FAS2220 | 3.4 in (8.4 cm) | 17.6 in (44.7 cm) | 24.1 in (61.2 cm), including the cable management arm | Single controller module: 57.8 lbs (26.2 kg); Two controller modules: 62.4 lbs (28.3 kg) | 2
FAS2050 | 6.9 in (17.5 cm) | 17.6 in (44.7 cm) | 22.5 in (57.2 cm) | Full (chassis with all disk drives): 110 lbs (49.9 kg); Empty (no internal disks): 91 lbs (41.3 kg) | 4
FAS2040 | 3.5 in (8.9 cm) | 17.6 in (44.7 cm) | 22.5 in (57.2 cm) | Full (chassis with all disk drives): 66 lbs (29.9 kg); Empty (no internal disks): 57 lbs (25.9 kg) | 2
FAS2020 | 3.5 in (8.9 cm) | 17.6 in (44.7 cm) | 22.5 in (57.2 cm) | Full (chassis with all disk drives): 66 lbs (29.9 kg); Empty (no internal disks): 57 lbs (25.9 kg) | 2

Hardware | Height | Width | Depth | Weight | Rack units
DS14 series | 5.25 in (13.3 cm) | 17.6 in (44.7 cm) | DS14mk2 FC / DS14mk4 FC: 20 in (50.8 cm); DS14mk2 AT: 22 in (55.2 cm) | DS14mk2/mk4 FC with disk drives: 77 lbs (35 kg); DS14mk2 AT with disk drives: 68 lbs (30.8 kg); empty: 50.06 lbs (23 kg) | 3
DS2246 | 3.4 in (8.5 cm) | 19 in (48.0 cm) | 19.1 in (48.4 cm) | With disk drives: 49 lbs (22.2 kg); without disk drives: 34.6 lbs (15.7 kg); empty: 17.4 lbs (7.9 kg) | 2
DS4243 | 7 in (17.8 cm) | 19 in (48.0 cm) | 24 in (61 cm) | With disk drives: 110 lbs (49.9 kg); without disk drives: 53.7 lbs (24.4 kg); empty: 21.1 lbs (9.6 kg) | 4
DS4246 | 7 in (17.8 cm) | 17.7 in (45 cm) | 24 in (61 cm) | With disk drives: 110 lbs (49.9 kg); without disk drives: 53.7 lbs (24.4 kg); empty: 21.1 lbs (9.6 kg) | 4
DS4486 | 6.87 in (17.44 cm) | 17.6 in (44.7 cm) | 27 in (68.6 cm), depth from mounting flange to rear chassis bulkhead | With disk drives: 150 lbs (68 kg); with four carriers, IOMs, and PSUs: 82 lbs (37 kg) | 4

Note: The DS14 series includes DS14, DS14mk2 FC, and DS14mk4 FC with an ESH (ESH refers to ESH2 and ESH4), and DS14mk2 AT.

Hardware | Height | Width | Depth | Weight | Rack units
Cisco 5010 | 1.72 in (4.4 cm) | 17.3 in (43.9 cm) | 30 in (76.2 cm) | 35 lbs (15.88 kg) | 1
Cisco 5020 | 3.47 in (8.8 cm) | 17.3 in (43.9 cm) | 30 in (76.2 cm) | 50 lbs (22.68 kg) | 2
Cisco 2960 | 1.73 in (4.4 cm) | 17.5 in (44.45 cm) | 9.3 in (23.62 cm) | 8 lbs (3.63 kg) | 1

* 1U = 1.75 inches
Note: Please plan for at least 36 inches (91.4 centimeters) of clearance at both the front and back of the
system. This amount of space allows you to reach the back panel for cabling the system. It also allows
you to slide the motherboard tray out from the back of the system when removing or installing hardware.

1.2 System power requirements – Storage controllers and disk drives
Note: The following section contains the power requirements for the available FAS series and disk
shelves. However, the tables cover values for single-controller modules. If you need additional information,
such as configurations with two controllers, a mezzanine card, an I/O expansion module, or a Flash Cache
module, refer to the latest Site Requirements Guide before you proceed with the installation.

1.2.1 FAS20xx series systems
Table 1: Electrical requirements – FAS20xx series
Columns for each input voltage (100 to 120V, then 200 to 240V): worst-case single PSU | typical per PSU | typical system, two PSUs.
FAS2020 | Input current measured, A | 1-TB SATA | 3.37 | 1.61 | 3.22 | 1.69 | 0.83 | 1.66
FAS2020 | Input current measured, A | 2-TB SATA | 3.36 | 1.65 | 3.29 | 1.69 | 0.84 | 1.68
FAS2020 | Input power measured, W | 1-TB SATA | 332 | 158 | 316 | 327 | 152.5 | 305
FAS2020 | Input power measured, W | 2-TB SATA | 334 | 162 | 324 | 326 | 160 | 320
FAS2040 | Input current measured, A | 1-TB SATA | 3.62 | 1.77 | 3.53 | 1.81 | 0.90 | 1.8
FAS2040 | Input current measured, A | 2-TB SATA | 3.34 | 1.61 | 3.22 | 1.66 | 0.84 | 1.67
FAS2040 | Input power measured, W | 1-TB SATA | 357 | 173 | 345 | 347 | 169 | 337
FAS2040 | Input power measured, W | 2-TB SATA | 329 | 158 | 315 | 319 | 156 | 312
FAS2050 | Input current measured, A | 1-TB SATA | 5.07 | 2.26 | 4.51 | 2.46 | 1.20 | 2.40
FAS2050 | Input power measured, W | 1-TB SATA | 504 | 220 | 439 | 474 | 224 | 447

1.2.2 FAS22xx series systems
Table 2: Electrical requirements – FAS2220
Columns for each input voltage (100V, then 200V): worst-case single PSU | typical per PSU | typical system, two PSUs. Drive sizes are per disk.
FAS2220 | Input current measured, A | 1-TB | 4.18 | 1.3 | 2.6 | 2 | 0.67 | 1.33
FAS2220 | Input current measured, A | 2-TB | 4.26 | 1.34 | 2.63 | 2.14 | 0.68 | 1.36
FAS2220 | Input current measured, A | 3-TB | 4.32 | 1.37 | 2.74 | 2.14 | 0.69 | 1.38
FAS2220 | Input power measured, W | 1-TB | 417 | 129 | 258 | 396 | 123 | 246
FAS2220 | Input power measured, W | 2-TB | 425 | 131 | 261 | 423 | 126 | 252
FAS2220 | Input power measured, W | 3-TB | 431 | 136 | 271 | 423 | 129 | 257
Table 3: Electrical requirements – FAS2240 series (one controller module, no mezzanine card and either 450-GB or 600-GB disk drives for FAS2240; 1-TB or 2-TB disk drives for FAS2240-4)
Columns for each input voltage (100V, 200V, then 215V): worst-case single PSU | typical per PSU | typical system. For the FAS2240-4, the worst-case values at 200V and 215V are for 2+2 PSUs and the system values are for four PSUs; FAS2240-2 system values are for two PSUs.
FAS2240-2 | Input current measured, A | 4.76 | 1.8 | 3.60 | 2.31 | 0.88 | 1.76 | 2.15 | 0.82 | 1.64
FAS2240-2 | Input power measured, W | 474 | 178 | 356 | 456 | 170 | 339 | 456 | 168 | 336
FAS2240-4 | Input current measured, A | 5.34 | 1.21 | 4.85 | 2.68 | 0.63 | 2.5 | 2.53 | 0.59 | 2.37
FAS2240-4 | Input power measured, W | 533 | 121 | 482 | 517 | 117 | 468 | 515 | 117 | 466

1.2.3 FAS30xx series systems
Table 4: Electrical requirements – FAS30xx series
Columns for each input voltage (100 to 120V, 200 to 240V, then -40 to -60V): worst-case single PSU | typical per PSU | typical system, two PSUs.
FAS3020 | Input current measured, A | 3.39 | 1.2 | 2.4 | 1.77 | 0.71 | 1.40 | 8.2 | 2.85 | 5.7
FAS3020 | Input power measured, W | 336 | 118 | 236 | 329 | 115 | 229 | 328 | 113 | 226
FAS3040 | Input current measured, A | 3.66 | 1.7 | 3.4 | 1.9 | 0.95 | 1.9 | 7.94 | 3.7 | 7.4
FAS3040 | Input power measured, W | 363 | 169 | 338 | 358 | 165 | 330 | 318 | 148 | 296
FAS3050 | Input current measured, A | 3.88 | 1.7 | 3.4 | 2.04 | 0.95 | 1.9 | 9.49 | 4.0 | 8.0
FAS3050 | Input power measured, W | 386 | 164 | 328 | 384 | 164 | 327 | 380 | 160 | 319
FAS3070 | Input current measured, A | 4.03 | 1.85 | 3.7 | 2.06 | 1.05 | 2.1 | 10.57 | 4.7 | 9.4
FAS3070 | Input power measured, W | 400 | 181 | 362 | 387 | 178 | 355 | 423 | 188 | 376

1.2.4 FAS31xx series systems
Table 5: Electrical requirements – FAS31xx series
Columns for each input voltage (100 to 120V, 200 to 240V, then -40 to -60V): worst-case single PSU | typical per PSU | typical system, two PSUs.
FAS3140 | Input current measured, A | 3.98 | 1.89 | 3.77 | 1.97 | 0.97 | 1.93 | 8.38 | 4.88 | 9.75
FAS3140 | Input power measured, W | 396 | 187 | 373 | 385 | 183 | 366 | 336 | 195 | 389
FAS3160 | Input current measured, A | 4.80 | 2.25 | 4.50 | 2.38 | 1.16 | 2.32 | 10.07 | 5.90 | 11.79
FAS3160 | Input power measured, W | 476 | 220 | 440 | 460 | 225 | 450 | 404 | 235 | 470
FAS3170 | Input current measured, A | 5.07 | 2.37 | 4.74 | 2.52 | 1.19 | 2.38 | 10.75 | 6.09 | 12.18
FAS3170 | Input power measured, W | 505 | 235 | 470 | 493 | 230 | 459 | 430 | 243 | 486

1.2.5 FAS32xx series systems
Table 6: Electrical requirements – FAS3210 with one 256-GB Flash Cache module (one controller module)
Columns for each input voltage (100 to 120V, 200 to 240V, then -40 to -60V): worst-case single PSU | typical per PSU | typical system, two PSUs.
FAS3210 | Input current measured, A | 4.22 | 1.52 | 3.03 | 2.11 | 0.83 | 1.66 | 10.45 | 3.65 | 7.30
FAS3210 | Input power measured, W | 421 | 150 | 299 | 411 | 147 | 293 | 418 | 146 | 292

Table 7: Electrical requirements – FAS3240 with one 256-GB, one 512-GB, or one 1-TB Flash Cache module per controller module (two controller modules)
Columns for each input voltage (100 to 120V, 200 to 240V, then -40 to -60V): worst-case single PSU | typical per PSU | typical system, two PSUs.
FAS3240 | Input current measured, A | 6.37 | 2.35 | 4.70 | 3.15 | 1.21 | 2.41 | 15.9 | 5.90 | 11.8
FAS3240 | Input power measured, W | 635 | 233 | 466 | 620 | 228 | 456 | 636 | 236 | 472

Table 8: Electrical requirements – FAS3270 with one 256-GB, one 512-GB, or one 1-TB Flash Cache module per controller module (two controller modules)
Columns for each input voltage (100 to 120V, 200 to 240V, then -40 to -60V): worst-case single PSU | typical per PSU | typical system, two PSUs.
FAS3270 | Input current measured, A | 7.28 | 2.78 | 5.56 | 3.58 | 1.42 | 2.83 | 18.2 | 6.95 | 13.9
FAS3270 | Input power measured, W | 728 | 278 | 552 | 707 | 271 | 541 | 728 | 278 | 556

1.2.6 FAS60xx series systems
Table 9: Electrical requirements – FAS6030/FAS6040
Columns for each input voltage (100 to 120V, then 200 to 240V): worst-case single PSU | typical per PSU | typical system, two PSUs.
FAS6030/FAS6040 | Input current measured, A | 9.75 | 2.87 | 5.74 | 4.87 | 1.57 | 3.14
FAS6030/FAS6040 | Input power measured, W | 968 | 279 | 557 | 934 | 217 | 541
FAS6070/FAS6080 | Input current measured, A | 11.68 | 3.63 | 7.25 | 5.76 | 1.96 | 3.91
FAS6070/FAS6080 | Input power measured, W | 1,162 | 352 | 704 | 1,115 | 231 | 693

1.2.7 FAS62xx series systems
Table 10: Electrical requirements – FAS6210 single-controller module; FAS6240 and FAS6280 with I/O expansion
Columns for each input voltage (100 to 120V, then 200 to 240V): worst-case single PSU | typical per PSU | typical system, two PSUs.
FAS6210 | Input current measured, A | 5 | 2.25 | 4.5 | 2.5 | 1.15 | 2.3
FAS6210 | Input power measured, W | 490 | 215 | 430 | 480 | 208 | 415
FAS6240 | Input current measured, A | 9.3 | 3.3 | 6.6 | 4.5 | 1.65 | 3.3
FAS6240 | Input power measured, W | 920 | 312.5 | 625 | 875 | 308 | 615
FAS6280 | Input current measured, A | 9.6 | 3.5 | 6.9 | 4.7 | 1.75 | 3.5
FAS6280 | Input power measured, W | 950 | 332.5 | 665 | 910 | 323 | 645

1.2.8 DS14 series disk shelves
Table 11: Electrical requirements – DS14mk2 AT, 7.2K speed
Columns for each input voltage (100 to 120V, 200 to 240V, then -40 to -60V): worst-case single PSU | typical per PSU | typical system, two PSUs. Sizes are per-disk capacities in GB unless noted.
DS14mk2 AT, input current measured (A):
250 | 2.79 | 1.36 | 2.72 | 1.38 | 0.70 | 1.39 | 7.38 | 2.84 | 5.67
320 | 2.85 | 1.56 | 3.12 | 1.43 | 0.78 | 1.56 | 7.4 | 2.82 | 5.64
500 | 2.94 | 1.45 | 2.9 | 1.43 | 0.74 | 1.47 | 8.04 | 3.11 | 6.22
750 | 3.42 | 1.61 | 3.22 | 1.63 | 0.53 | 1.60 | 8.42 | 6.63 | 7.25
1-TB | 3.15 | 1.55 | 3.10 | 1.55 | 0.78 | 1.56 | 8.33 | 3.24 | 6.48
DS14mk2 AT, input power measured (W):
250 | 279 | 136 | 271 | 271 | 132 | 264 | 295 | 114 | 227
320 | 284 | 155 | 310 | 283 | 152 | 304 | 296 | 113 | 226
500 | 293 | 144 | 288 | 286 | 142 | 283 | 322 | 125 | 249
750 | 341 | 161 | 321 | 323 | 155 | 309 | 337 | 145 | 290
1-TB | 315 | 154 | 308 | 309 | 150 | 300 | 333 | 130 | 259

Table 12: Electrical requirements – DS14mk2 FC, 15K speed
Columns for each input voltage (100 to 120V, 200 to 240V, then -40 to -60V): worst-case single PSU | typical per PSU | typical system, two PSUs. Sizes are per-disk capacities in GB.
DS14mk2 FC, input current measured (A):
72 | 3.41 | 1.82 | 3.63 | 1.67 | 0.89 | 1.78 | 10.04 | 3.98 | 7.95
144 | 3.96 | 1.88 | 3.75 | 1.93 | 0.94 | 1.88 | 10.40 | 4.13 | 8.25
288 | 4.43 | 2.16 | 4.32 | 2.23 | 1.07 | 2.13 | 11.98 | 4.36 | 8.72
450 | 4.43 | 2.16 | 4.32 | 2.23 | 1.07 | 2.13 | N/A
DS14mk2 FC, input power measured (W):
72 | 340 | 181 | 362 | 331 | 173 | 345 | 402 | 159 | 318
144 | 395 | 187 | 373 | 383 | 183 | 365 | 416 | 165 | 330
288 | 443 | 216 | 431 | 443 | 208 | 415 | 479 | 175 | 349
450 | 443 | 216 | 431 | 443 | 208 | 415 | N/A
DS14mk2 FC, heat dissipated (BTU/hr, equivalent of the 450-GB power figures):
450 | 1,512 | 735 | 1,470 | 1,512 | 707 | 1,414 | N/A

1.2.9 DS2246 disk shelves
Table 13: Electrical requirements – DS2246, SAS drives
Columns for each input voltage (100 to 120V, then 200 to 240V at 200V actual): worst-case single PSU | typical per PSU | typical system, two PSUs. Sizes are per-disk capacities in GB.
DS2246, input current measured (A):
450 | 4.28 | 1.38 | 2.76 | 2.29 | 0.79 | 1.58
600 | 4.22 | 1.39 | 2.77 | 2.29 | 0.82 | 1.64
900 | 4.22 | 1.39 | 2.77 | 2.29 | 0.82 | 1.64
DS2246, input power measured (W):
450 | 428 | 137 | 274 | 420 | 135 | 270
600 | 422 | 134 | 267 | 418 | 133 | 266
900 | 422 | 134 | 267 | 418 | 133 | 266

1.2.10 DS4243 disk shelves
Table 14: Electrical requirements – DS4243, SAS drives
Columns for each input voltage (100 to 120V, 200 to 240V at 200V actual, then 200 to 240V at 215V actual): worst-case single PSU | typical per PSU | typical system, two PSUs. Sizes are per-disk capacities in GB.
DS4243-SAS, total input current measured (A):
300 | 5.5 | 3.0 | 6.0 | 2.8 | 1.5 | 3.0 | 2.6 | 1.4 | 2.8
450 | 6.00 | 3.15 | 6.30 | 3.00 | 1.60 | 3.20 | 2.80 | 1.50 | 3.00
600 | 5.98 | 2.86 | 5.71 | 2.99 | 1.44 | 2.87 | N/A
DS4243-SAS, total input power measured (W):
300 | 550 | 300 | 600 | 560 | 300 | 600 | 559 | 301 | 602
450 | 600 | 315 | 630 | 600 | 320 | 640 | 602 | 323 | 645
600 | 595 | 284 | 567 | 584 | 274 | 547 | N/A
Table 15: Electrical requirements – DS4243, SATA drives
Columns for each input voltage (100 to 120V, 200 to 240V at 200V actual, then 200 to 240V at 215V actual): worst-case single PSU | typical per PSU | typical system, two PSUs. Sizes are per-disk capacities in GB unless noted.
DS4243-SATA, input current measured (A):
500 | 4.30 | 2.20 | 4.40 | 2.10 | 1.10 | 2.20 | 1.90 | 1.05 | 2.10
1-TB | 4.41 | 2.21 | 4.42 | 2.21 | 1.14 | 2.27 | 1.90 | 1.05 | 2.10
2-TB | 4.72 | 2.31 | 4.62 | 2.42 | 1.21 | 2.42 | N/A
3-TB | 4.95 | 2.30 | 4.60 | 2.43 | 1.19 | 2.38 | N/A
100 (SSD) | 1.96 | 0.82 | 1.63 | 1.0 | 0.45 | 0.9 | 0.95 | 0.42 | 0.84
DS4243-SATA, input power measured (W):
500 | 430 | 220 | 440 | 420 | 220 | 440 | 409 | 226 | 452
1-TB | 439 | 219 | 438 | 429 | 212 | 424 | 409 | 226 | 452
2-TB | 469 | 229 | 458 | 470 | 228 | 456 | N/A
3-TB | 495 | 228 | 456 | 476 | 224 | 448 | N/A
100 (SSD) | 196 | 82 | 163 | 200 | 90 | 180 | 205 | 90 | 180

1.2.11 DS4246 disk shelves
Table 16: Electrical requirements – DS4246, SATA drives; 6 100-GB SSD drives with 18 1-TB or 18 3-TB disk drives
Columns for each input voltage (100 to 120V, then 200 to 240V at 200V actual): worst-case single PSU | typical per PSU | typical system, two PSUs.
DS4246 | Input current measured, A | 1-TB | 3.91 | 1.7 | 3.41 | 2.11 | 0.9 | 1.84
DS4246 | Input current measured, A | 3-TB | 4.11 | 1.9 | 3.72 | 2.25 | 1.1 | 2.14
DS4246 | Input power measured, W | 1-TB | 386 | 168 | 335 | 388 | 166 | 331
DS4246 | Input power measured, W | 3-TB | 406 | 123 | 368 | 418 | 199 | 397

1.2.12 DS4486 disk shelves
Table 17: Electrical requirements – DS4486
Columns for each input voltage (100 to 120V, then 200 to 240V at 200V actual): worst-case single PSU | typical per PSU pair | typical system, two PSUs.
DS4486 | Input current measured, A | 3-TB | 8.71 | 3.29 | 6.57 | 4.59 | 1.73 | 3.46
DS4486 | Input power measured, W | 3-TB | 870 | 329 | 657 | 919 | 346 | 692

1.3 System Cabinet
Dimensions:
Cabinet | 42U (X870B-R6) | 42U Deep (X870C-R6)
Height | 78.7 in (200 cm) | 78.7 in (200 cm)
Depth | 37.4 in (95 cm) | 44.3 in (112.50 cm)
Width | 23.6 in (60 cm) | 23.6 in (60 cm)
Weight, empty | 287 lb (130.2 kg) | 307 lb (138 kg)
Weight, loaded | 1500 lb (680 kg) | 2307 lb (1046 kg)
Clearance, front | 30 in (76.3 cm) | 30 in (76.3 cm)
Clearance, rear | 30 in (76.3 cm) | 30 in (76.3 cm)
Clearance, top | 12 in (30 cm) | 12 in (30 cm)

Note: Consult your co-location facility manager or vendor documentation if installing into third-party
cabinets.

1.4 System cabinet configurations
Config | PDUs | PDU Part # | Plug Type | Service Outlet | Cords | Amps | Outlets | Approx. Power
NEMA 30A Single Phase | 4 | X8712C-R6 | NEMA L6-30P | 30A | 2 | 48 | 24 | 10kW @ 208V
NEMA 30A 3-Phase Delta | 2 | X8719A-R6 | NEMA L15-30P | 30A | 1 | 41.5 | 24 | 8.6kW @ 208V
NEMA 30A 3-Phase Delta | 2 | X8720A-R6 | NEMA L21-30P | 30A | 1 | 41.5 | 24 | 8.6kW @ 208V
IEC 32A Single Phase | 4 | X8713C-R6 | IEC 60309-32A P+N+E | 32A | 2 | 64 | 24 | 14.7kW @ 230V
IEC 32A 3-Phase Wye | 2 | X8718A-R6 | IEC 60309-32A 3P+N+E | 32A | 1 | 96 | 24 | 22.1kW @ 230V

Note: PDU count is per cabinet; cords, amps, and outlets are per side.

1.5 Network cabling requirements
Network Device | Cabling Requirements
100Base-TX | Cat 5/5e/6 UTP cable with RJ-45 connector
Gigabit Ethernet (Optical) | Multimode OM-1, OM-2, OM-3, or OM-4 fiber optic cable with LC connector
Gigabit Ethernet (Copper) | Cat 5e/6 UTP cable with RJ-45 connector
10 Gigabit Ethernet (Optical) | 10Gbase-SR SFP+ transceiver with LC connector* and a multimode OM-3 or OM-4 fiber optic cable
10 Gigabit Ethernet (Copper) | 10Gbase copper SFP+ twin-ax cable*
Fibre Channel | Multimode OM-1, OM-2, OM-3, or OM-4 fiber optic cable with LC connector

* Must be provided by NetApp or on the NetApp compatibility list.

Note: Refer to TR-3552, "Optical Network Installation Guide," for more information on optical networking
requirements and distance limitations for a particular cable type and data rate.

1.5.1 Ethernet Configuration Recommendations
Switch ports connected to 100Base-TX storage controller ports should be configured manually for speed
and duplex (100 Mbit, full duplex) when possible.
Flow control should be enabled on Gigabit and 10 Gigabit network ports. Configure the storage controller
with Send on and Receive off, and configure the switch with Send off and Receive on.
PortFast can be enabled on all switch ports connected to the storage controller to allow the ports to enter
the forwarding state faster.
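As an illustration only, the controller side of the flow-control recommendation can be set per interface from the 7-Mode console (the port name e0a is an assumption for this example); the matching switch-side settings ("flowcontrol receive on" and "spanning-tree portfast" on Cisco-style switches) are applied on the connected switch ports:
fas1> ifconfig e0a flowcontrol send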

2 Data ONTAP® 7-Mode configuration details
Please work with your Professional Services representative to complete this worksheet prior to the
installation date. The requested information enables us to configure your equipment quickly and
efficiently. Depending on the desired configuration, some fields may not be applicable.
Note: This worksheet does NOT replace the requirement for reading and understanding the appropriate
Data ONTAP manuals that describe the operations of Data ONTAP in 7-Mode. Data ONTAP manuals
can be found at the NetApp Support Site under documentation.
Customer checklist of site preparation requirements (check all that apply):
Adequate rack space for the NetApp system and disk shelves has been provided.
The power requirements for the NetApp system and disk shelves have been satisfied.
The network patch cabling and switch port configuration is complete.
Company Name: PHS
NetApp Sales Order #: 600122473
Storage Controller Model: FAS2240-4
Data ONTAP® Version: 8.1.2

2.1 Basic configuration
System information | Controller 1 | Controller 2
Host name (nas + the last 4 of the S/N) | nasxxxx | nasxxxx
Aggregate Type (32-bit or 64-bit) | 64-bit | 64-bit
Serial Number | |

2.1.1 IFGRPs
Interface Groups (IFGRPs) bond multiple network ports together for increased bandwidth and/or fault
tolerance.
Note: For systems without an e0P port, leave one network port available for ACP connections to SAS
disk shelves.
Interface details | Controller 1 | Controller 2
Number of interface groups to configure | Vif1 | Vif1
Names of the interface groups (for example, ifgrp1, iscsi_ifgrp2) | Ifgrp1 | Ifgrp1
IFGRP type (multi, single, LACP). Multi: all ports are active; Single: one port active, other ports on standby for failover; LACP: network switch manages traffic | ifgrp1: LACP; ifgrp2: LACP; ifgrp3: | ifgrp1: LACP; ifgrp2: LACP; ifgrp3:
Multi-mode IFGRP load balancing style (IP, MAC, round-robin, or port based) | ifgrp1: IP; ifgrp2: IP; ifgrp3: | ifgrp1: IP; ifgrp2: IP; ifgrp3:
Number of links (network ports) in each IFGRP | ifgrp1: 2; ifgrp2: 2; ifgrp3: | ifgrp1: 2; ifgrp2: 2; ifgrp3:
Name of network ports in each IFGRP (for example, ifgrp1 = e0a, e1d; ifgrp3 = ifgrp1, ifgrp2) | ifgrp1: e0a, e0c; ifgrp2: e0b, e0d; ifgrp3: |

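A minimal sketch of how an LACP interface group like the one recorded above could be created from the 7-Mode console, using the group name, load-balancing method, and member ports from the worksheet; the same command is normally added to /etc/rc so the group persists across reboots:
fas1> ifgrp create lacp ifgrp1 -b ip e0a e0c
fas1> ifgrp status ifgrp1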
2.1.2 Network interface configuration
If you created IFGRPs, then use their names; otherwise use port names (for example, e0a).
Some controllers have an e0M interface for environments with a subnet dedicated to managing servers.
Include the e0M settings if you have a management subnet.
Note: For systems without an e0P port, leave one network port available for ACP connections to SAS
disk shelves.
Controller name | Interface name | IP address | Network mask | Partner interface name or IP address | Media type | Enable Jumbo frames?
nasxxxx (A) | e0M | 10.108.193.10 | 255.255.255.0 | | Ethernet | No
nasxxxx (A) | e0P | 10.108.193.12 | 255.255.255.0 | | Ethernet | No
nasxxxx (B) | e0M | 10.108.193.11 | 255.255.255.0 | | Ethernet | No
nasxxxx (B) | e0P | 10.108.193.13 | 255.255.255.0 | | Ethernet | No
nasxxxx (A) | VIF | 170.173.144.10 | 255.255.255.0 | 170.173.144.11 | VIF | Yes
nasxxxx (B) | VIF | 170.173.144.11 | 255.255.255.0 | 170.173.144.10 | VIF | Yes

2.1.3 Default gateway
Gateway details | Controller 1 | Controller 2
Default Gateway IP address | 170.173.144.1 | 170.173.144.1
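For illustration, the interface and default-gateway values recorded above might be applied on Controller 1 (and mirrored in /etc/rc) roughly as follows; the interface group name assumes the group created in section 2.1.1:
fas1> ifconfig ifgrp1 170.173.144.10 netmask 255.255.255.0 partner ifgrp1
fas1> route add default 170.173.144.1 1
fas1> ifconfig -a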

2.1.4 Administration host (Optional)
You can limit the systems or subnets authorized to mount the root volume.
Host details | Controller 1 | Controller 2
Admin host/subnet IP | |
2.1.5 Time zone
What time zone should the systems set their clocks to (for example, US/Pacific)?
Time zone details | Controller 1 | Controller 2
Time zone | Pacific | Pacific
Physical Location (for example, Bldg 4, Dallas) | 101 W. 8th Avenue, Spokane, WA 99204 | 101 W. 8th Avenue, Spokane, WA 99204

2.1.6 Language encoding for multiprotocol files
The default is POSIX and only needs to be changed for systems storing files using international
alphabets.
Encoding details | Controller 1 | Controller 2
Language for multiprotocol files | English | English

2.1.7 Domain Name Services (DNS) resolution
DNS resolution | Values
DNS Domain Name | wa.providence.org
DNS Server IP addresses (up to 3) | 170.173.161.38; 170.173.113.228; 170.173.132.39

2.1.8 Network Information Services (NIS) resolution
NIS resolution | Values
NIS Domain Name |
NIS Server IP addresses |

2.1.9 Remote Management Settings (RLM/SP/BMC)
All systems include a Remote LAN Module (RLM), Baseboard Management Controller (BMC), or Service
Processor (SP) to provide out-of-band control of the storage system. NetApp recommends configuring
these interfaces for easier, secure management and troubleshooting.
RLM/BMC | Controller 1 | Controller 2
IP Address | 10.108.193.12 | 10.108.193.13
Network Mask | 255.255.255.0 | 255.255.255.0
Gateway | 10.108.193.1 | 10.108.193.1
Mail server hostname | smtplegacy.providence.org | smtplegacy.providence.org
Mail server IP | 170.173.161.56 | 170.173.161.55

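As a sketch, the out-of-band interface can be configured interactively with the values above and then verified; which command applies depends on whether the platform has an SP, RLM, or BMC:
fas1> sp setup
fas1> sp status
On RLM-based systems the equivalent commands are rlm setup and rlm status.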
2.1.10 Alternate Control Path (ACP) management for SAS shelves
For system models prior to the FAS3200 series, dedicate an onboard NIC port to ACP. Newer systems
with dedicated e0P ports assign IP addresses automatically.
| Controller 1 | Controller 2
Interface name (if not using e0P) | |
Private subnet (default: 192.168.0.0/22) | |
Network Mask | |

2.1.11 CIFS configuration
Systems with a CIFS license run the CIFS setup wizard immediately after the Setup wizard completes.
NT4 domains require a server account to be created before running CIFS setup. You can abort the
wizard with Ctrl+C and run it later if necessary.
Note: The installation engineer will require someone with Domain Administrator privileges to help
perform this section. When CIFS is configured, a domain administrator should move the controllers out
of OU=Computers into an OU for servers. This ensures Group Policy Objects can be applied to the
controllers.
CIFS configuration | Controller 1 | Controller 2
Authentication mode | Choose one of: Active Directory domain, NT 4 domain, Workgroup, /etc/passwd or NIS/LDAP | Choose one of: Active Directory domain, NT 4 domain, Workgroup, /etc/passwd or NIS/LDAP
Domain name | wa.providence.org | wa.providence.org
NetBIOS name | |
Do you want the system visible via WINS (Y/N)? | |
WINS IP addresses (up to 3) | |
Multiprotocol or NTFS only? | Multiprotocol | Multiprotocol

2.1.12 Configure Virtual LANs (Optional)
VLANs are used to segment network domains using 802.1Q protocol standards.
Controller name | Interface name | VLAN IDs to activate | Enable GVRP?
nasxxxx (A) | e0M | 264 |
nasxxxx (B) | e0M | 264 |
nasxxxx (A) | VIF | 867 |
nasxxxx (B) | VIF | 867 |

Note: To trunk VLANs across an interface or IFGRP, set "switchport mode trunk" on the corresponding
switch interface or logical interface. This allows 802.1Q trunking, so that traffic across it is VLAN tagged.
You must then create the relevant VLAN interfaces on the storage controller.
If you want a port or EtherChannel interface to be the only access port for a particular VLAN, set
"switchport mode access" on that interface. Then give the storage controller interface an IP address on
that VLAN. No other information is required to VLAN tag the frames.
Reboot the controllers at this point for the settings to go into effect.
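A hedged example of tagging VLAN 867 on the data interface group from the worksheet; when a VLAN is tagged, the data IP address from section 2.1.2 is assigned to the tagged interface (ifgrp1-867) rather than to the base interface group:
fas1> vlan create ifgrp1 867
fas1> ifconfig ifgrp1-867 170.173.144.10 netmask 255.255.255.0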
2.1.13 AutoSupport settings
AutoSupport is an automated diagnostic reporting function designed to notify you and NetApp of
event-triggered messages. In addition, it provides weekly logs, NetApp health triggers, and performance
statistics. This ensures prompt support responsiveness and system-wide proactive health checks.
Note: The system must remain on a support contract, and the level of responsiveness depends on the
level of service purchased.
AutoSupport Settings | Controller 1 | Controller 2
Configure AutoSupport on: | Yes | Yes
SMTP Server Name or IP | smtplegacy.providence.org | smtplegacy.providence.org
AutoSupport Transport | One of: HTTPS (default), HTTP, SMTP | One of: HTTPS (default), HTTP, SMTP
AutoSupport From E-Mail address | nasxxxx@providence.org | nasxxxx@providence.org
AutoSupport To E-Mail address(es) | James.Abella@providence.org; Henry.Pan@providence.org | James.Abella@providence.org; Henry.Pan@providence.org

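For reference, a sketch of how the AutoSupport values above map to 7-Mode options (adjust the transport to match whatever is chosen in the table):
fas1> options autosupport.support.transport https
fas1> options autosupport.mailhost smtplegacy.providence.org
fas1> options autosupport.from nasxxxx@providence.org
fas1> options autosupport.to James.Abella@providence.org,Henry.Pan@providence.org
fas1> options autosupport.doit "Test"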
2.1.14 Customer/RMA details
Verify this information by logging into the http://www.now.netapp.com website. This information is
required to ensure that Technical Support personnel can reach you and that replacement parts are sent
to the correct address.
Customer/RMA details | Primary contact | Secondary contact
Contact Name | James Abella | Henry Pan
Contact Address | 1801 Lind Ave SW, Renton, WA | 1801 Lind Ave SW, Renton, WA
Contact Phone | (805) 218-3791 | 425-525-3328
Contact E-mail Address | James.Abella@providence.org | Henry.Pan@providence.org
RMA Address | |
RMA Attention to Name | |

2.1.15 Time synchronization
Time synchronization details | Values
Time services protocol (ntp) | ntp
Time Servers (up to 3 internal or external hostnames or IP addresses) | Time.providence.org
Max time skew (<5 minutes for CIFS) | < 5 minutes

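A minimal sketch of applying the time settings above on each controller:
fas1> options timed.proto ntp
fas1> options timed.servers time.providence.org
fas1> options timed.enable on
fas1> timezone US/Pacific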
2.1.16 SNMP management settings (Optional)
Fill out if you have SNMP monitoring applications (for example, Operations Manager). Set by using the
'snmp' command and its options.
SNMP settings | Controller 1 | Controller 2
SNMP Trap Host | |
SNMP Community | |
Data Fabric Manager Server Name or IP | |
Data Fabric Manager Protocol | Choose one of: HTTP, HTTPS | Choose one of: HTTP, HTTPS
Data Fabric Manager Port | 8080 | 8080

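If SNMP monitoring is used, the settings above translate to commands along these lines (the community string and traphost shown are assumptions for this example):
fas1> snmp community add ro public
fas1> snmp traphost add <DFM server hostname or IP>
fas1> options snmp.enable on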
3 Data ONTAP 7-Mode installation and verification checklists
The installer will perform the following checks to ensure that your new systems are configured correctly
and are ready to turn over to you.
Physical installation

Status

Check and verify all ordered components were delivered to the customer site.
Confirm the NetApp controllers are properly installed in the cabinets.
Confirm there is sufficient airflow and cooling in and around the NetApp system.
Confirm all power connections are secured adequately.
Confirm the racks are grounded (if not in NetApp cabinets).
Confirm there is sufficient power distribution to NetApp controllers & disk shelves.
Confirm power cables are properly arranged in the cabinet.
Confirm that LEDs and LCDs are displaying the correct information.
Confirm that cables from NetApp controllers to disk shelves and among disk shelves are not
crimped or stretched (fiber cable service loops should be larger than your fist).
Confirm that fiber cables laid between cabinets are properly connected and are not prone to
physical damage.
Confirm disk shelves IDs are set correctly.
Confirm that fiber channel 2Gb/4Gb loop speeds are set correctly on DS14 shelves and proper
LC-LC cables are used.
Confirm that Ethernet cables are arranged and labeled properly.
Confirm all Fiber cables are arranged and labeled properly.
Confirm the Cluster Interconnect Cables are connected (for HA pairs).
Confirm there is sufficient space behind the cabinets to perform hardware maintenance.

Power On and Diagnostics

Status

Power up the disk shelves to ensure that the disks spin up and are initialized properly.
Connect to the controller's serial console port and establish a console connection using a terminal
emulator such as Tera Term, PuTTY, or HyperTerminal.
Note: Log all console output to a text file.
Power on the controllers.
Boot the controller and press Ctrl+C at the second prompt for 'Special Boot Menu options'.
Go to Maintenance Mode by selecting option 5.
Check the onboard fibre ports status:
*> fcadmin config
Change the port mode if necessary from targets to initiators (for SAN requirements).
Verify the cable connections to all shelves:
*> fcadmin device_map
*> sasadmin shelf (for SAS shelves)
Verify disk ownership assignments:
*> disk show -a
Assign disks to each node using the disk assign command if necessary.
Verify the Multipath High Availability (MPHA) cabling. Each disk must have an A and B path:
*> storage show disk -p
Verify the system has one root aggregate assigned:
*> aggr status

Follow these steps for both cluster nodes, halt and then reboot each system into Data ONTAP:
*> halt
LOADER> boot_ontap
Verify power and cooling are at acceptable levels:
fas1> environment status
Verify expansion cards are installed in the correct slots:
fas1> sysconfig -c
Verify all local and partner shelves are visible to the system:
fas1> fcadmin device_map
Verify that all disks are owned:
fas1> disk show -n
Use the WireGauge tool to verify that all the shelves are cabled correctly.

Installation and configuration
Confirm the correct version of Data ONTAP software, Disk Qualification Package and disk, shelf,
motherboard and RLM/BMC firmware is installed on each controller
fas1> version -b
fas1> sysconfig -a
Confirm ALL controllers are named as per the customer naming standards
Confirm the root volume is sufficiently sized ( 250GB minimum)
fas1> vol size <root volume name>
Confirm all the licenses are installed
fas1> license
Check the /etc/rc and /etc/hosts files:
fas1> rdfile /etc/rc
fas1> rdfile /etc/hosts
Verify all configured Ethernet network interfaces (individual and ifgrp) are configured correctly as
per the customer requirements: IP address, media type, flow control and speed.
Confirm any interfaces not required to perform host name resolution are configured with the "wins" option
For clustered systems, verify they have partner interfaces for failover
Where necessary, confirm the network switches are configured to support dynamic or static
multi-mode ifgrps (LACP or Etherchannel) as per customer requirement.
Has the customer accessed the system console using the RLM / SP / BMC?
Verify network connectivity and DNS resolution is configured properly:
fas1> ping <hostname of mail server>
Verify configured IFGRPs function properly by disconnecting one or more cables
fas1> ifgrp status
Pull cables
fas1> ping <hostname of mail server>
fas1> ifgrp status
Reinsert cables
Confirm each controller is configured to synchronise time with a centralised source
fas1> options timed
fas1> timezone
fas1> date
Confirm that AutoSupport is configured and functioning correctly.
fas1> options autosupport.doit "Test"
Confirm the default 'home' share is stopped from each controller (and vFiler)
If necessary, confirm that telnet and RSH is disabled and SSH is enabled
If required, confirm SNMP is configured on all controllers to the appropriate traphost


Download documentation pack and upload to controller(s)

CIFS configuration

Status

If necessary, run through CIFS setup and join the controllers to the customer's Active Directory
(requires an AD account with suitable permissions).
Confirm the NetApp controller's local administrator account was created while configuring the
CIFS service (and the password is set appropriately).
Confirm the permissions to the root volume (c$) and /etc folder (etc$) are configured
appropriately (that is, NOT Everyone Full Control).
Confirm that the appropriate Windows Domain Administrators group(s) are members of the NetApp
controller's local administrator group.
Create a share.
Have the customer map the share to a host, write data to it.
Create a Snapshot and confirm that Snapshot visibility is configured appropriately (for example,
hidden to regular CIFS clients)
Confirm that qtrees storing CIFS data have the appropriate security style specified:
fas1> qtree status
Confirm that qtrees storing CIFS data have the appropriate 'oplocks' setting.

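For the share-creation step above, a sketch of the commands typically involved (the qtree path and share name are hypothetical):
fas1> qtree create /vol/vol1/eng
fas1> qtree security /vol/vol1/eng ntfs
fas1> cifs shares -add eng /vol/vol1/eng -comment "Engineering share"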
NFS configuration

Status

Create a qtree and confirm the appropriate security style is specified
fas1> qtree create <path>
fas1> qtree status
Export the qtree.
Check the /etc/exports file and update the same with new mount entries with appropriate
permissions.
Have the customer mount the qtree from a host and write data to it.
Take a Snapshot and confirm that Snapshot visibility is configured appropriately (for example,
hidden to regular clients)

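A sketch of the NFS steps above (the qtree path and client hostname are hypothetical):
fas1> qtree create /vol/vol1/nfsdata
fas1> qtree security /vol/vol1/nfsdata unix
fas1> exportfs -p rw=host1,root=host1 /vol/vol1/nfsdata
fas1> exportfs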
iSCSI configuration

Status

Make sure the iSCSI service is started.
Verify that an iSCSI host attach or support kit has been installed on the host.
If appropriate, verify SnapDrive has been installed on the host.
Create a qtree, igroup, and LUN on the system (using SnapDrive if necessary).
Have the customer establish an iSCSI session from the host.
Create a file system on the LUN, write some data to it and confirm the data is on the LUN.
Reboot the host and confirm that the LUN is still attached.

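A sketch of the iSCSI steps above (the initiator name, igroup, LUN size, and path are hypothetical):
fas1> iscsi start
fas1> igroup create -i -t windows ig_host1 iqn.1991-05.com.microsoft:host1
fas1> lun create -s 100g -t windows /vol/vol1/lun0
fas1> lun map /vol/vol1/lun0 ig_host1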
FCP configuration

Status

Make sure the FCP service is started
fas1> fcp status
Verify an FCP host attach or support kit has been installed on the host.
If appropriate, verify that SnapDrive has been installed on the host.
Create a qtree, igroup, and LUN on the system (using SnapDrive if necessary).
Have the customer establish an FCP session from the host.
Have the customer create a file system on the LUN and write some data to it.


Have the customer reboot the host and confirm the LUN is still attached.

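Similarly, a sketch of the FCP steps above (the WWPN, igroup name, and LUN path are hypothetical):
fas1> fcp start
fas1> fcp show adapters
fas1> igroup create -f -t windows ig_fcp_host1 10:00:00:00:c9:6b:76:48
fas1> lun create -s 100g -t windows /vol/vol1/lun1
fas1> lun map /vol/vol1/lun1 ig_fcp_host1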
Verification checklist

Status

Where necessary, make sure the CLUSTER license is enabled.
Verify the storage failover options on both systems in the HA pair are identical.
Temporarily disable AutoSupport:
fas1> options autosupport.enable off
Test manual Cluster Failover (in both directions) and ensure success, rectify any errors and
prove network connectivity continues to function correctly during failover.
fas1> cf enable
fas1> cf takeover
fas1> partner
fas2/fas1*> ifconfig -a
fas2/fas1*> ifgrp status
fas2/fas1*> partner
fas1> cf giveback
Test Uncontrolled storage Failover (in both directions) by disconnecting one controller from
power. Rectify any errors.
Test component failure of a PSU (Check status of LEDs and console).
Test component failure of a LAN cable (Interface Group Test), include ifgrp favor.
Test component failure of a fibre cable to a disk shelf (path test). For Multipath HA cabling, ensure all
disks have an A and B channel:
storage show disk -p
Run the WireGauge tool to ensure the shelf cabling is correct.
When installing a new system into a new NetApp cabinet, switch off one cabinet PDU, and make
sure all controllers and shelves remain powered on. Check the status of LEDs and console.
Insert an entry into the system log indicating installation is complete:
fas1> logger * * * System Install complete <installer name> <date> * * *
Backup the system configuration:
fas1> config dump <date>.cfg
Re-enable AutoSupport:
fas1> options autosupport.enable on

Post installation checklist

Status

Give new customers a brief tour of FilerView or Systems Manager to explain the basic functions
of managing their new system.
Log onto the NOW website and give the customer a brief tour of the site. Show them how to
access documentation, download software and firmware, search the Knowledge Base, and verify
their RMA information.
Discuss training available through NetApp University with new customers.
Since they are the basis for most Data ONTAP functionality, have the customer explain how
Snapshots work. Correct any misconceptions.
Create and send a Trip Report within 24 hours to the customer, partner sales team and NetApp
sales team.
When all tasks are completed, have customer sign a Certificate of Completion.

4 Data ONTAP Cluster-Mode configuration details
Please work with your professional services representative to complete this worksheet prior to the
installation date. The requested information enables us to configure your equipment quickly and
efficiently. Depending on the desired configuration, some fields may not be applicable.
Note: This worksheet does not replace the requirement for reading and understanding the appropriate
Data ONTAP manuals that describe the operations of Data ONTAP in Cluster-Mode. Data ONTAP
manuals can be found at the NetApp Support site under documentation.
Customer checklist of site preparation requirements (check all that apply):
Adequate rack space for the NetApp system and disk shelves has been provided.
The power requirements for the NetApp system and disk shelves have been satisfied.
The network patch cabling and switch port configuration is complete.
Company Name:

NetApp Sales Order #:

Data ONTAP® Version:

4.1 Cluster information
It is assumed that the cluster will contain four nodes. If there are more than four nodes, replicate the
appropriate section to add additional node information.
Starting from Data ONTAP 8.1 the 'cluster create' and 'cluster join' commands have built-in
wizards.
The wizard generates hostnames, IP addresses for the cluster LIF and subnet masks for the cluster LIF. It
is recommended to use the cluster setup wizard while creating a new cluster or attempting to join an
existing cluster.
The wizard has the following rules:
The names for the nodes in the cluster are derived from the name of the cluster. If the cluster is
named clust1, the nodes will be named clust1-01, clust1-02, and so on. The node name can be
changed later with the cluster::> system node modify command.
The cluster LIFs will be assigned IP addresses in the 169.254.0.0 range with a Class B subnet
(255.255.0.0) if the default is taken.
The initial cluster creation and configuration is performed on the first node that is booted. The initial
setup script asks whether the operator wants to create a cluster or join a cluster. The first node will be "create"
and subsequent nodes will be "join".
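A minimal sketch of this flow; the prompts shown are illustrative and vary by Data ONTAP release, and the wizard also starts automatically on the console of an unconfigured node:
::> cluster setup
(on the first node, answer "create" and supply the cluster name, cluster base license key, and the management addresses recorded below)
::> cluster setup
(on each additional node, answer "join" and supply the name of the cluster created on the first node)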

4.1.1

Cluster
The cluster base aggregate will contain the root volume for the cluster Vserver.
Cluster name

4.1.2

Cluster Base Aggregate

Licensing
A base license is required, but additional features also need licensing.
License

Values

4.1.3

Admin Vserver
The Cluster Administration Vserver is used to manage the cluster activities. It is different from the node
Vservers and is used by System Manager to access the cluster.
Type of information

Value

Cluster administrator password
The password for the 'admin' account that the cluster requires
before granting cluster administrator access at the console or
through a secure protocol.
The default rules for passwords are as follows:
A password must be at least eight characters long.
A password must contain at least one letter and one number.
Cluster management LIF IP address
A unique IP address for the cluster management LIF. The cluster
administrator uses this address to access the cluster admin
Vserver and manage the cluster. Typically, this address should be
on the data network.
Cluster management LIF netmask
The subnet mask that defines the range of valid IP addresses on
the cluster management network.
Cluster management LIF default gateway
The IP address for the router on the cluster management network.
DNS domain name
The name of your network's DNS domain. The domain name
cannot contain an underscore (_) and must consist of
alphanumeric characters. To enter multiple DNS domain names,
separate each name with either a comma or a space.
Name server IP addresses
The IP addresses of the DNS name servers. Separate each
address with either a comma or a space.
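After the wizard completes, the values recorded here can be checked against the running configuration; a minimal sketch (the -role filter value is assumed to be available in your release):
cluster::>network interface show -role cluster-mgmt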

4.1.4

Time synchronization
Time synchronization details

Values

Time services protocol (NTP)
Time Servers (up to 3 internal or external
hostnames or IP addresses)
Max time skew (<5 minutes for CIFS)
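A minimal sketch of configuring NTP, assuming Data ONTAP 8.1-style commands (parameter names may vary by release; later releases use the cluster time-service commands instead):
cluster::>system services ntp config modify -enabled true
cluster::>system services ntp server create -node <node name> -server <time server>
cluster::>system services ntp server show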

4.1.5

Time zone
What time zone should the systems set their clocks to (for example, US/Pacific)?
Time Zone
Location

4.2

Node information
Individual controllers are called nodes. Each node has a unique name. Unlike the cluster name, the node
name can be changed after it is initially defined.
System information | Node 1 | Node 2 | Node 3 | Node 4
Serial number
Node name

4.2.1

Physical port identification
Each port serves a specific type of function or role. These roles are:
Node Management
Data
Intercluster
Cluster
Node Management ports are required to maintain the connection between the node and site services such as
NTP and AutoSupport. Data ports are used to transfer data or communicate between the cluster and the
applications. Intercluster LIFs are used to set up peer relationships between clusters for replicating data
between clusters. Cluster ports are used specifically to transfer data between nodes within a cluster.
Note: Due to BURT 322675, NetApp recommends setting up an interface group for the node
management LIF on each node of the cluster. The instructions below cover scenarios that have or do
not have a fix for this BURT. Follow the section that is relevant to your case. Some of these instructions
might diverge from the guidelines on the NetApp Support site. Check for updated versions of this
document for latest information.
For versions of Data ONTAP that do not have a fix for BURT 322675, create a single-mode interface
group of the following ports. Use this interface group as the port for the node management LIF. The
interface group should be created before using the 'cluster setup' wizard on the node.
For versions of Data ONTAP that have a fix for BURT 322675:
System model | Port grouping
FAS3040 & FAS3070 | e0a and e0c
V3040 & V3070 | e0a and e0c
FAS3140, FAS3160 & FAS3170 | e0a and e0b
V3140, V3160 & V3170 | e0a and e0b
FAS3210, FAS3240 & FAS3270 | e0a and e0b
V3210, V3240 & V3270 | e0a and e0b
FAS6030, FAS6040, FAS6070 & FAS6080 | e0a and e0c
V6030, V6040, V6070 & V6080 | e0a and e0c
FAS6210, FAS6240 & FAS6280 | e0a and e0b
V6210, V6240 & V6280 | e0a and e0b

Some controllers have an e0M interface for environments with a subnet dedicated to managing servers.
Include the e0M settings if you have a management subnet.
Note: For systems without an e0P port, leave one network port available for ACP connections to SAS
disk shelves.

Note: The following table is used to define port roles. If the fix for BURT 322675 is not installed, the IFGRP
column should be used and the associated ports noted. If the fix is installed, omit the IFGRP column.
Node Name | IFGRP | Ports | MTU | Port Role

4.2.2

Node management LIF
Each node has a management port that is used to communicate with it.
Node Name | Port or IFGRP | LIF Name | IP Address | Netmask | Gateway

4.3

Cluster network information
Starting from Data ONTAP 8.1 the 'cluster create' and 'cluster join' commands have built-in wizards to
generate hostnames, IP addresses for the cluster LIF, and subnet masks for the cluster LIF. NetApp
recommends using the cluster setup wizard whenever you create a new cluster or attempt to join an
existing cluster.
The wizard has the following rules:
The names for the nodes in the cluster are derived from the name of the cluster. If the cluster is
named cmode, the nodes will be named cmode-01, cmode-02, and so on.
The cluster LIFs are assigned IP addresses in the 169.254.0.0 range with a Class B subnet (255.255.0.0).
Once the cluster has been defined and the nodes are joined to the cluster, other elements can be created.
These elements can be created using System Manager, Element Manager, or CLI.

4.3.1

Interface groups (IFGRP)
Interface groups bond multiple network ports together for increased bandwidth and/or fault tolerance.
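A minimal sketch of creating a dynamic multimode (LACP) interface group and adding member ports; the ifgrp and port names are placeholders:
cluster::>network port ifgrp create -node <node name> -ifgrp a0a -distr-func ip -mode multimode_lacp
cluster::>network port ifgrp add-port -node <node name> -ifgrp a0a -port e0c
cluster::>network port ifgrp add-port -node <node name> -ifgrp a0a -port e0d
cluster::>network port ifgrp show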
IFGRP name | Node | Distribution function | Mode | Ports

4.3.2

Configure Virtual LANs (VLANs)
(Optional) VLANs are used to segment network domains. The VLAN has a specific name that is a
combination of the associated network port and the switch VLAN ID.
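A minimal sketch, assuming VLAN ID 100 on interface group a0a (both are placeholders):
cluster::>network port vlan create -node <node name> -vlan-name a0a-100
cluster::>network port vlan show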
VLAN name | Node | Associated Network Port | Switch VLAN ID

4.3.3

Logical Interfaces (LIFs)
Logical Interfaces are the point at which the customer interfaces with the cluster.
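A minimal sketch of creating a data LIF; the protocol list and port are placeholders, and some details (for example routing groups) differ between releases:
cluster::>network interface create -vserver <vserver> -lif <LIF name> -role data -data-protocol nfs,cifs -home-node <node name> -home-port e0c -address <IP address> -netmask <netmask>
cluster::>network interface show -vserver <vserver>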
LIF name | Home node | Home port | Netmask | Routing group | Failover group
4.4

Intercluster network information
Intercluster ports are used for cross-cluster communication. An intercluster port should be routable to the
following (see the sketch after this list):
Another intercluster port
Data port of another cluster.
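A minimal sketch of creating an intercluster LIF on one node (some releases also require a -vserver argument naming the node Vserver):
cluster::>network interface create -lif <LIF name> -role intercluster -home-node <node name> -home-port <port> -address <IP address> -netmask <netmask>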
Node name | Port | LIF name | IP address | Netmask | Gateway

4.5

Vserver information
Applications access data residing in the cluster through a Vserver. Vservers can be used
to support single or multiple protocols, user groups, or any other delineation the customer chooses.
Additionally, Vservers can restrict allocation of data to specific aggregates.
To create a Vserver, you can use any of the available administrative interfaces: System Manager,
Element Manager, or CLI. The Vserver Setup wizard has the following sub-wizards, which you can run
after you create a Vserver:
Network setup
Storage setup
Services setup
Data access protocol setup
Use the following section as a guide to create Vservers. Replicate this section as many times as required.

4.5.1

Creating Vserver
Type of information

Value

Vserver name
The name of a Vserver can contain alphanumeric characters and
the following special characters: ".", "-", and "_". However, the
name of a Vserver must not start with a
number or a special character.

Protocols
Protocols that you want to configure or allow on that Vserver.

Name Services
Name Services that you want to configure on the Vserver

Aggregate name
Aggregate name on which you want to create the Vserver's root
volume. The default aggregate name is used if you do not specify
one.

Language Setting
Language you want the volumes to use.
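A minimal sketch of creating a Vserver from the values above (depending on release, additional parameters such as -ns-switch may also be required):
cluster::>vserver create -vserver <vserver name> -rootvolume <root volume name> -aggregate <aggregate name> -rootvolume-security-style unix -language C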

4.5.2

Creating Volumes on the Vserver
Volume name

Aggregate name

Volume size

Junction path (NAS only)
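A minimal sketch of creating and junctioning a volume from the values above:
cluster::>volume create -vserver <vserver> -volume <volume name> -aggregate <aggregate name> -size <volume size> -junction-path /<volume name>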

4.5.3

IP Network Interface on the Vserver
End-user applications connect to the data in the cluster only through interfaces defined on Vservers. The
following table covers the first four LIFs. Replicate the 'Interface' columns or the entire table if more
interfaces are required.
Type of Information | Interface 1 | Interface 2 | Interface 3 | Interface 4

LIF name
The default LIF name is used if you do not
specify one.
IP address
Subnet mask

Home node
Home node is the node on which you want to
create a logical interface. The default home node
is used if you do not specify one.

Home port
Home port is the port on which you want to
create a logical interface. The default home port
is used if you do not specify one.

Routing Group
Protocols
Protocols that can use the LIF.
Failover Group
DNS Zone

4.5.4

FCP Network Interface on the Vserver
Type of information

Value

LIF name
The default LIF name is used if you do not specify one.

Home node
Home node is the node on which you want to create a logical
interface. The default home node is used if you do not specify one.

Home port
Home port is the port on which you want to create a logical
interface. The default home port is used if you do not specify one

4.5.5

LDAP services
Type of information

Value

LDAP server IP address
LDAP server port number
The default LDAP server port number is used if you do not specify
one.

LDAP server minimum bind authentication level
Bind DN and password
Base DN

4.5.6

CIFS protocol
Type of information

Value

Domain name
CIFS share name
The default CIFS share name is used if you do not specify one.
Note: You must not use Unicode characters in CIFS
share names. You can use alphanumeric characters and the
following special characters: ".", "!", "@", "#", "$",
"%", "&", "(", ")", ",", "_", "'", "{", "}", "~", and "-".
CIFS share path
The default CIFS share path is used if you do not specify one.
CIFS access control list
The default CIFS access control list is used if you do not specify
one.
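A minimal sketch of creating the CIFS server and a share from the values above (the CIFS server must be able to reach the domain's DNS and domain controllers):
cluster::>vserver cifs create -vserver <vserver> -cifs-server <CIFS server name> -domain <domain name>
cluster::>vserver cifs share create -vserver <vserver> -share-name <CIFS share name> -path <CIFS share path>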

4.5.7

iSCSI protocol
Type of information

Value

igroup name
The default igroup name is used if you do not specify one.
Names of the initiators

Operating system of the initiators
LUN names
The default LUN name is used if you do not specify one.
Volume name
The volume that the LUN will reside on.
LUN sizes
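A minimal sketch of enabling iSCSI and provisioning a LUN from the values above; exact parameter names vary slightly between releases:
cluster::>vserver iscsi create -vserver <vserver>
cluster::>lun create -vserver <vserver> -path /vol/<volume name>/<LUN name> -size <LUN size> -ostype <OS type>
cluster::>lun igroup create -vserver <vserver> -igroup <igroup name> -protocol iscsi -ostype <OS type> -initiator <initiator name>
cluster::>lun map -vserver <vserver> -path /vol/<volume name>/<LUN name> -igroup <igroup name>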

4.5.8

FCP protocol
Type of Information

Value

igroup name
The default igroup name is used if you do not specify one.
WWPN
World wide port number (WWPN) of the initiators.
Operating system of the initiators.
LUN names
The default LUN name is used if you do not specify one.
Volume name
The volume that the LUN will reside on.
LUN sizes

4.6

Support information
The following section describes the support features.

4.6.1

Remote Management Settings (RLM/BMC/SP)
You can access the cluster's system console remotely by using the system console redirection feature
provided by the remote management device of a node. Depending on your storage system model, the
remote management device can be the Service Processor (SP), the Remote LAN Module (RLM), or the
Baseboard Management Controller (BMC). NetApp recommends configuring these interfaces for easier,
secure management and troubleshooting.
Node name | IP address | Netmask | Default gateway | Mail server hostname | Mail server IP address

4.6.2

AutoSupport settings
AutoSupport is a 'phone home' function that notifies you and NetApp of any hardware problems, so that
replacement hardware can be delivered automatically to resolve the issue. (The system must remain on a
support contract, and the level of responsiveness depends on the level of the service contract: 2 hours to
Next Business Day.)
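A minimal sketch of configuring and testing AutoSupport on all nodes from the values below; the parameter names are those used by system node autosupport modify and may vary by release:
cluster::>system node autosupport modify -node * -state enable -transport https -mailhosts <SMTP server> -from <from address> -to <to addresses>
cluster::>system node autosupport invoke -node <node name> -type test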
Enable
AutoSupport? If
not, provide
justification.

SMTP Server Name
or IP

AutoSupport
transport

AutoSupport from
e-mail address

AutoSupport to
e-mail
address(es)

One of:
HTTPS (default)
HTTP
SMTP

4.6.3

Customer/RMA details
Verify this information by logging into the NetApp support site: http://now.netapp.com. This information is
required to ensure that the Technical Support personnel can reach you and the replacement parts are
sent to the correct address.
Customer/RMA details

Primary contact

Secondary contact

Contact name
Contact address
Contact phone
Contact e-mail address
RMA address
RMA attention to name

A. Data ONTAP Cluster-Mode installation and verification
checklists
The installer will perform the following checks to ensure that your new systems are configured correctly
and are ready to turn over to you.
Physical installation

Status

Check and verify all ordered components were delivered to the customer site.
Confirm the NetApp controllers are properly installed in the cabinets.
Confirm there is sufficient airflow and cooling in and around the NetApp system.
Confirm all power connections are secured adequately.
Confirm the racks are grounded (if not in NetApp cabinets).
Confirm there is sufficient power distribution to NetApp controllers & disk shelves.
Confirm power cables are properly arranged in the cabinet.
Confirm that LEDs and LCDs are displaying the correct information.
Confirm that cables from NetApp controllers to disk shelves and among disk shelves are not
crimped or stretched (fiber cable service loops should be larger than your fist).
Confirm that fiber cables laid between cabinets are properly connected and are not prone to
physical damage.
Confirm disk shelves IDs are set correctly.
Confirm that fiber channel 2Gb/4Gb loop speeds are set correctly on DS14 shelves and proper
LC-LC cables are used.
Confirm that Ethernet cables are arranged and labeled properly.
Confirm all Fiber cables are arranged and labeled properly.
Confirm the Cluster Interconnect Cables are connected (for HA pairs).
Confirm there is sufficient space behind the cabinets to perform hardware maintenance.
Confirm that the Cisco Nexus Cluster Interconnect switches are properly placed in the cabinet.
Confirm that the Cisco IP switches are properly placed in the cabinet.
Confirm that the Cisco FCP switches are properly placed in the cabinet.
Confirm that the latest "Reference Configuration File" for the Cisco Nexus switches has been
installed.
Confirm that any VLANs required have been defined to the appropriate switches.
Confirm that the Ethernet cables are properly connected to the Cisco IP switches.
Confirm that the FCP cables are properly connected to the Cisco Fabric switches.

Power On and Perform Cluster Creation, Node and Vserver configuration
Power up the disk shelves to ensure that the disks spin up and are initialized properly.
Connect the console to the serial port cable and establish a console connection using a terminal
emulator such as Tera Term, PuTTY, or HyperTerminal.
Note: Log all console output to a text file.
Power on the controllers.
On the first controller console, reply to the initial Cluster Setup prompt with "create" to
initialize the cluster and the first node.
On the next controller console, reply to the initial Cluster Setup prompt with "join" to
initialize the second node and join the cluster.
On each subsequent controller, perform the same task as the second controller to join them as
nodes in the cluster.
Install System Manager 2.0 on a Windows or Linux system.

Use System Manager 2.0 to install remaining licenses on the cluster.
Note: If any of the nodes are V-Series, the V-Series license needs to be added at the node level
for each node that is a V-Series controller. You have 72 hours from the Cluster Setup script
completion to install the license on the local nodes.
cluster::>run -node node1
node1> license add <V-Series license>
node1> exit
Use System Manager 2.0 to create the first Vservers.
Use the WireGauge tool to verify that all the shelves are cabled correctly and switches are
properly connected.

Miscellaneous configuration

Status

Where necessary, confirm the network switches are configured to support dynamic or static
multi-mode IFGRPs (LACP or Etherchannel) as per customer requirement.
Has the customer accessed the system console using the RLM / BMC / SP?
Verify network connectivity and DNS resolution is configured properly:
cluster::>network ping -node <node name> -destination <hostname of DNS server>
Verify that configured IFGRPs with more than one port function properly by disconnecting one or
more cables.
Confirm each node's date and time zone are set correctly:
cluster::>system node date show
cluster::>timezone
Display whether NTP is used in the cluster
cluster::>system services ntp config show
cluster::>system services ntp server show
Confirm that AutoSupport is configured and functioning correctly.
cluster::>system node autosupport show
Confirm that Telnet and RSH are disabled and SSH is enabled
If required, confirm SNMP is configured on all controllers to the appropriate traphost
Download documentation pack and provide to customer

CIFS configuration (per Vserver servicing CIFS)
Check the export policy rules to ensure that the CIFS access protocol will allow access
cluster::>vserver export-policy rule show
If necessary, run through CIFS setup and join the controllers to the customer's Active Directory
(requires an AD account with suitable permissions).
Confirm the NetApp controller's local administrator account was created while configuring the
CIFS service (and the password is set appropriately).
Confirm the permissions to the root volume (c$) and /etc folder (etc$) are configured
appropriately (that is, NOT Everyone Full Control).
Confirm that appropriate Windows Domain Administrators group(s) are members of the cluster's
local administrator group.
Create a share.
Have the customer map the share to a host, write data to it.
Create a Snapshot and confirm that Snapshot visibility is configured appropriately (for example,
hidden to regular CIFS clients)
Confirm that qtrees storing CIFS data have the appropriate security style specified:
cluster::>volume qtree show -vserver <vserver> -volume <volume name> -qtree <qtree name>
Confirm that qtrees storing CIFS data have the appropriate 'oplocks' setting.

Take a Snapshot and confirm that Snapshot visibility is configured appropriately (for example,
hidden to regular clients)

NFS configuration (per Vserver servicing NFS)

Status

Create a qtree and confirm the appropriate security style is specified
cluster::>volume qtree create -vserver <vserver> -volume <volume name>
-qtree <qtree name> -security-style {unix|ntfs|mixed}
cluster::>volume qtree show -vserver <vserver> -volume <volume name> -qtree <qtree name>
Check the export policy rules to ensure that the NFS access protocol will allow access (see the
sketch after this list):
cluster::>vserver export-policy rule show
Have the customer mount the qtree from a host and write data to it.
Take a Snapshot and confirm that Snapshot visibility is configured appropriately (for example,
hidden to regular clients)
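If the export policy does not yet allow the client, a rule can be added; a minimal sketch, assuming the default policy and an NFS client subnet (a -ruleindex value may also be required):
cluster::>vserver export-policy rule create -vserver <vserver> -policyname default -clientmatch <client subnet> -protocol nfs -rorule sys -rwrule sys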

iSCSI configuration (per Vserver servicing iSCSI)

Status

Make sure the iSCSI service is started.
Verify that an iSCSI host attach or support kit has been installed on the host.
If appropriate, verify SnapDrive has been installed on the host.
Create a qtree, igroup, and LUN on the system (using SnapDrive if necessary).
Have the customer establish an iSCSI session from the host.
Create a file system on the LUN, write some data to it and confirm the data is on the LUN.
Reboot the host and confirm that the LUN is still attached.

FCP configuration (per Vserver servicing FCP)

Status

Make sure the FCP service is started
Verify an FCP host attach or support kit has been installed on the host.
If appropriate, verify that SnapDrive has been installed on the host.
Create a qtree, igroup, and LUN on the system (using SnapDrive if necessary).
Have the customer establish an FCP session from the host.
Have the customer create a file system on the LUN and write some data to it.
Have the customer reboot the host and confirm the LUN is still attached.

Verification checklist

Status

Where necessary, make sure the CLUSTER license is enabled.
Verify the cluster options on all nodes in the cluster are identical.
Temporarily disable AutoSupport on nodes of the cluster.
cluster::>system node autosupport modify -node <node name> -state
disable
Test manual node takeover (in both directions) and ensure success; rectify any errors and prove
that network connectivity continues to function correctly during failover.
cluster::>system storage failover takeover -ofnode <node> -bynode <node>
cluster::>system storage failover show-giveback
cluster::>system storage failover giveback -ofnode <node> -fromnode <node>
Test Uncontrolled Cluster Failover (in both directions) by disconnecting one controller from
power. Rectify any errors.
Repeat above test for all HA pairs in the cluster
Test component failure of a PSU (Check status of LEDs and console).

Test component failure of a LAN cable
Run the WireGauge tool to ensure the shelf cabling is correct.
When installing a new system into a new NetApp cabinet, switch off one cabinet PDU, and make
sure all controllers and shelves remain powered on. Check the status of LEDs and console.
Re-enable AutoSupport:
cluster::>system node autosupport modify -node <node name> -state
enable

Post installation checklist

Status

Give new customers a brief tour of Systems Manager and Element Manager to explain the basic
functions of managing their new cluster.
Log onto the NOW website and give the customer a brief tour of the site. Show them how to
access documentation, download software and firmware, search the Knowledge Base, and verify
their RMA information.
Discuss training available through NetApp University with new customers.
Since they are the basis for most Data ONTAP functionality, have the customer explain how
Snapshots work. Correct any misconceptions.
Create and send a Trip Report within 24 hours to the customer, partner sales team and NetApp
sales team.
When all tasks are completed, have customer sign a Certificate of Completion.

A.1 Definitions
This section contains the glossary of terms used throughout this document.
CIFS: Common Internet File Service
DNS: Domain Name System
DR: Disaster Recovery
DRC: Disaster Recovery Center (data center)
FAS: Fabric Attached Storage
FC: Fibre Channel
FlexVol: Flexible volume
IOPS: Input/Output Operations per Second
iSCSI: Internet Protocol – Small Computer Systems Interface
MAN: Managed / Metro Area Network
LUN: Logical Unit Number
NAS: Network Attached Storage
NFS: Network File System
NIS: Network Information Service
NTP: Network Time Protocol
PDU: Power Distribution Units
PDC: Primary Data Center
RPM: Rotations Per Minute
RAID: Redundant Array of Independent Disks
SAN: Storage Area Network
SNMP: Simple Network Management Protocol
SATA: Serial Advanced Technology Attachment
UPS: Uninterruptible Power Supply
VIF: Virtual Interface
VLAN: Virtual Local Area Network
WINS: Windows Internet Naming Service

© Copyright 2012 NetApp, Inc. All rights reserved.
www.netapp.com

Contenu connexe

Tendances

Integration between S/4HANA and SAP Ariba Network
Integration between S/4HANA and  SAP Ariba Network Integration between S/4HANA and  SAP Ariba Network
Integration between S/4HANA and SAP Ariba Network Da Costa Emmanuel
 
はじめてのOracle Cloud Infrastructure (Oracle Cloudウェビナーシリーズ: 2021年6月16日)
はじめてのOracle Cloud Infrastructure (Oracle Cloudウェビナーシリーズ: 2021年6月16日)はじめてのOracle Cloud Infrastructure (Oracle Cloudウェビナーシリーズ: 2021年6月16日)
はじめてのOracle Cloud Infrastructure (Oracle Cloudウェビナーシリーズ: 2021年6月16日)オラクルエンジニア通信
 
しばちょう先生による特別講義! RMANバックアップの運用と高速化チューニング
しばちょう先生による特別講義! RMANバックアップの運用と高速化チューニングしばちょう先生による特別講義! RMANバックアップの運用と高速化チューニング
しばちょう先生による特別講義! RMANバックアップの運用と高速化チューニングオラクルエンジニア通信
 
Maximum Availability Architecture - Best Practices for Oracle Database 19c
Maximum Availability Architecture - Best Practices for Oracle Database 19cMaximum Availability Architecture - Best Practices for Oracle Database 19c
Maximum Availability Architecture - Best Practices for Oracle Database 19cGlen Hawkins
 
Oracle RAC features on Exadata
Oracle RAC features on ExadataOracle RAC features on Exadata
Oracle RAC features on ExadataAnil Nair
 
SAP on Azure Cloud Workshop Material Japanese 20190221
SAP on Azure Cloud Workshop Material Japanese 20190221SAP on Azure Cloud Workshop Material Japanese 20190221
SAP on Azure Cloud Workshop Material Japanese 20190221Hitoshi Ikemoto
 
はじめてのOracle Cloud Infrastructure (Oracle Cloudウェビナーシリーズ: 2021年9月22日)
はじめてのOracle Cloud Infrastructure (Oracle Cloudウェビナーシリーズ: 2021年9月22日)はじめてのOracle Cloud Infrastructure (Oracle Cloudウェビナーシリーズ: 2021年9月22日)
はじめてのOracle Cloud Infrastructure (Oracle Cloudウェビナーシリーズ: 2021年9月22日)オラクルエンジニア通信
 
GoldenGateテクニカルセミナー3「Oracle GoldenGate Technical Deep Dive」(2016/5/11)
GoldenGateテクニカルセミナー3「Oracle GoldenGate Technical Deep Dive」(2016/5/11)GoldenGateテクニカルセミナー3「Oracle GoldenGate Technical Deep Dive」(2016/5/11)
GoldenGateテクニカルセミナー3「Oracle GoldenGate Technical Deep Dive」(2016/5/11)オラクルエンジニア通信
 
データ分析基盤、どう作る?システム設計のポイント、教えます - Developers.IO 2019 (20191101)
データ分析基盤、どう作る?システム設計のポイント、教えます - Developers.IO 2019 (20191101)データ分析基盤、どう作る?システム設計のポイント、教えます - Developers.IO 2019 (20191101)
データ分析基盤、どう作る?システム設計のポイント、教えます - Developers.IO 2019 (20191101)Yosuke Katsuki
 
あなたのクラウドは大丈夫?NRI実務者が教えるセキュリティの傾向と対策 (Oracle Cloudウェビナーシリーズ: 2021年11月24日)
あなたのクラウドは大丈夫?NRI実務者が教えるセキュリティの傾向と対策 (Oracle Cloudウェビナーシリーズ: 2021年11月24日)あなたのクラウドは大丈夫?NRI実務者が教えるセキュリティの傾向と対策 (Oracle Cloudウェビナーシリーズ: 2021年11月24日)
あなたのクラウドは大丈夫?NRI実務者が教えるセキュリティの傾向と対策 (Oracle Cloudウェビナーシリーズ: 2021年11月24日)オラクルエンジニア通信
 
Zero Data Loss Recovery Applianceによるデータベース保護のアーキテクチャ
Zero Data Loss Recovery Applianceによるデータベース保護のアーキテクチャZero Data Loss Recovery Applianceによるデータベース保護のアーキテクチャ
Zero Data Loss Recovery Applianceによるデータベース保護のアーキテクチャオラクルエンジニア通信
 
Oracle Database 11g,12cからのアップグレード対策とクラウド移行 (Oracle Cloudウェビナーシリーズ: 2021年7...
Oracle Database 11g,12cからのアップグレード対策とクラウド移行 (Oracle Cloudウェビナーシリーズ: 2021年7...Oracle Database 11g,12cからのアップグレード対策とクラウド移行 (Oracle Cloudウェビナーシリーズ: 2021年7...
Oracle Database 11g,12cからのアップグレード対策とクラウド移行 (Oracle Cloudウェビナーシリーズ: 2021年7...オラクルエンジニア通信
 
今さら聞けない HANAのハナシの基本のほ
今さら聞けない HANAのハナシの基本のほ今さら聞けない HANAのハナシの基本のほ
今さら聞けない HANAのハナシの基本のほKoji Shinkubo
 

Tendances (20)

DataGuard体験記
DataGuard体験記DataGuard体験記
DataGuard体験記
 
Integration between S/4HANA and SAP Ariba Network
Integration between S/4HANA and  SAP Ariba Network Integration between S/4HANA and  SAP Ariba Network
Integration between S/4HANA and SAP Ariba Network
 
はじめてのOracle Cloud Infrastructure (Oracle Cloudウェビナーシリーズ: 2021年6月16日)
はじめてのOracle Cloud Infrastructure (Oracle Cloudウェビナーシリーズ: 2021年6月16日)はじめてのOracle Cloud Infrastructure (Oracle Cloudウェビナーシリーズ: 2021年6月16日)
はじめてのOracle Cloud Infrastructure (Oracle Cloudウェビナーシリーズ: 2021年6月16日)
 
Oracle GoldenGate EM Plugin 13c セットアップガイド
Oracle GoldenGate EM Plugin 13c セットアップガイドOracle GoldenGate EM Plugin 13c セットアップガイド
Oracle GoldenGate EM Plugin 13c セットアップガイド
 
しばちょう先生による特別講義! RMANバックアップの運用と高速化チューニング
しばちょう先生による特別講義! RMANバックアップの運用と高速化チューニングしばちょう先生による特別講義! RMANバックアップの運用と高速化チューニング
しばちょう先生による特別講義! RMANバックアップの運用と高速化チューニング
 
Maximum Availability Architecture - Best Practices for Oracle Database 19c
Maximum Availability Architecture - Best Practices for Oracle Database 19cMaximum Availability Architecture - Best Practices for Oracle Database 19c
Maximum Availability Architecture - Best Practices for Oracle Database 19c
 
Oracle GoldenGate Veridata概要
Oracle GoldenGate Veridata概要Oracle GoldenGate Veridata概要
Oracle GoldenGate Veridata概要
 
Oracle GoldenGate FAQ
Oracle GoldenGate FAQOracle GoldenGate FAQ
Oracle GoldenGate FAQ
 
Oracle RAC features on Exadata
Oracle RAC features on ExadataOracle RAC features on Exadata
Oracle RAC features on Exadata
 
SAP on Azure Cloud Workshop Material Japanese 20190221
SAP on Azure Cloud Workshop Material Japanese 20190221SAP on Azure Cloud Workshop Material Japanese 20190221
SAP on Azure Cloud Workshop Material Japanese 20190221
 
はじめてのOracle Cloud Infrastructure (Oracle Cloudウェビナーシリーズ: 2021年9月22日)
はじめてのOracle Cloud Infrastructure (Oracle Cloudウェビナーシリーズ: 2021年9月22日)はじめてのOracle Cloud Infrastructure (Oracle Cloudウェビナーシリーズ: 2021年9月22日)
はじめてのOracle Cloud Infrastructure (Oracle Cloudウェビナーシリーズ: 2021年9月22日)
 
OCI Logging 概要
OCI Logging 概要OCI Logging 概要
OCI Logging 概要
 
Oracle GoldenGate入門
Oracle GoldenGate入門Oracle GoldenGate入門
Oracle GoldenGate入門
 
GoldenGateテクニカルセミナー3「Oracle GoldenGate Technical Deep Dive」(2016/5/11)
GoldenGateテクニカルセミナー3「Oracle GoldenGate Technical Deep Dive」(2016/5/11)GoldenGateテクニカルセミナー3「Oracle GoldenGate Technical Deep Dive」(2016/5/11)
GoldenGateテクニカルセミナー3「Oracle GoldenGate Technical Deep Dive」(2016/5/11)
 
Oracle Database Applianceのご紹介(詳細)
Oracle Database Applianceのご紹介(詳細)Oracle Database Applianceのご紹介(詳細)
Oracle Database Applianceのご紹介(詳細)
 
データ分析基盤、どう作る?システム設計のポイント、教えます - Developers.IO 2019 (20191101)
データ分析基盤、どう作る?システム設計のポイント、教えます - Developers.IO 2019 (20191101)データ分析基盤、どう作る?システム設計のポイント、教えます - Developers.IO 2019 (20191101)
データ分析基盤、どう作る?システム設計のポイント、教えます - Developers.IO 2019 (20191101)
 
あなたのクラウドは大丈夫?NRI実務者が教えるセキュリティの傾向と対策 (Oracle Cloudウェビナーシリーズ: 2021年11月24日)
あなたのクラウドは大丈夫?NRI実務者が教えるセキュリティの傾向と対策 (Oracle Cloudウェビナーシリーズ: 2021年11月24日)あなたのクラウドは大丈夫?NRI実務者が教えるセキュリティの傾向と対策 (Oracle Cloudウェビナーシリーズ: 2021年11月24日)
あなたのクラウドは大丈夫?NRI実務者が教えるセキュリティの傾向と対策 (Oracle Cloudウェビナーシリーズ: 2021年11月24日)
 
Zero Data Loss Recovery Applianceによるデータベース保護のアーキテクチャ
Zero Data Loss Recovery Applianceによるデータベース保護のアーキテクチャZero Data Loss Recovery Applianceによるデータベース保護のアーキテクチャ
Zero Data Loss Recovery Applianceによるデータベース保護のアーキテクチャ
 
Oracle Database 11g,12cからのアップグレード対策とクラウド移行 (Oracle Cloudウェビナーシリーズ: 2021年7...
Oracle Database 11g,12cからのアップグレード対策とクラウド移行 (Oracle Cloudウェビナーシリーズ: 2021年7...Oracle Database 11g,12cからのアップグレード対策とクラウド移行 (Oracle Cloudウェビナーシリーズ: 2021年7...
Oracle Database 11g,12cからのアップグレード対策とクラウド移行 (Oracle Cloudウェビナーシリーズ: 2021年7...
 
今さら聞けない HANAのハナシの基本のほ
今さら聞けない HANAのハナシの基本のほ今さら聞けない HANAのハナシの基本のほ
今さら聞けない HANAのハナシの基本のほ
 

Similaire à NetApp system installation workbook Spokane

WHITE PAPER▶ Software Defined Storage at the Speed of Flash
WHITE PAPER▶ Software Defined Storage at the Speed of FlashWHITE PAPER▶ Software Defined Storage at the Speed of Flash
WHITE PAPER▶ Software Defined Storage at the Speed of FlashSymantec
 
Dell PowerEdge Deployment Guide
Dell PowerEdge Deployment GuideDell PowerEdge Deployment Guide
Dell PowerEdge Deployment GuideKara Krautter
 
eFileCabinet Manual Version 4.0
eFileCabinet Manual Version 4.0eFileCabinet Manual Version 4.0
eFileCabinet Manual Version 4.0eFileCabinet
 
GoldenGate Whitepaper Oracle 8i 9i to 10g 11g Database Migration
GoldenGate Whitepaper Oracle 8i 9i to 10g 11g Database MigrationGoldenGate Whitepaper Oracle 8i 9i to 10g 11g Database Migration
GoldenGate Whitepaper Oracle 8i 9i to 10g 11g Database MigrationFumiko Yamashita
 
Coherence developer's guide
Coherence developer's guideCoherence developer's guide
Coherence developer's guidewangdun119
 
pdfcoffee.com_i-openwells-basics-training-3-pdf-free.pdf
pdfcoffee.com_i-openwells-basics-training-3-pdf-free.pdfpdfcoffee.com_i-openwells-basics-training-3-pdf-free.pdf
pdfcoffee.com_i-openwells-basics-training-3-pdf-free.pdfJalal Neshat
 
Tx2014 Feature and Highlights
Tx2014 Feature and Highlights Tx2014 Feature and Highlights
Tx2014 Feature and Highlights Heath Turner
 
Oracle database edition-12c
Oracle database edition-12cOracle database edition-12c
Oracle database edition-12cAsha BG
 
Cc admin
Cc adminCc admin
Cc adminVenk Re
 
Youwe sap-ecc-r3-hana-e commerce-with-magento-mb2b-100717-1601-206
Youwe sap-ecc-r3-hana-e commerce-with-magento-mb2b-100717-1601-206Youwe sap-ecc-r3-hana-e commerce-with-magento-mb2b-100717-1601-206
Youwe sap-ecc-r3-hana-e commerce-with-magento-mb2b-100717-1601-206Dennis Reurings
 
Dw guide 11 g r2
Dw guide 11 g r2Dw guide 11 g r2
Dw guide 11 g r2sgyazuddin
 
Eloqua web services api 1.2 user guide v1.0.1
Eloqua web services api 1.2 user guide v1.0.1Eloqua web services api 1.2 user guide v1.0.1
Eloqua web services api 1.2 user guide v1.0.1chris_gosselin
 
Sage Intelligence 100 Microsoft Excel Tips and Tricks
Sage Intelligence 100 Microsoft Excel Tips and TricksSage Intelligence 100 Microsoft Excel Tips and Tricks
Sage Intelligence 100 Microsoft Excel Tips and TricksBurCom Consulting Ltd.
 
Plesk 8.1 for Linux/UNIX
Plesk 8.1 for Linux/UNIXPlesk 8.1 for Linux/UNIX
Plesk 8.1 for Linux/UNIXwebhostingguy
 
Plesk 8.1 for Linux/UNIX
Plesk 8.1 for Linux/UNIXPlesk 8.1 for Linux/UNIX
Plesk 8.1 for Linux/UNIXwebhostingguy
 
Sap Solman Instguide Dba Cockpit Setup
Sap Solman Instguide Dba Cockpit SetupSap Solman Instguide Dba Cockpit Setup
Sap Solman Instguide Dba Cockpit Setupwlacaze
 
Sc101 t um_10_jan07
Sc101 t um_10_jan07Sc101 t um_10_jan07
Sc101 t um_10_jan07俊宏 賀
 

Similaire à NetApp system installation workbook Spokane (20)

WHITE PAPER▶ Software Defined Storage at the Speed of Flash
WHITE PAPER▶ Software Defined Storage at the Speed of FlashWHITE PAPER▶ Software Defined Storage at the Speed of Flash
WHITE PAPER▶ Software Defined Storage at the Speed of Flash
 
Dell PowerEdge Deployment Guide
Dell PowerEdge Deployment GuideDell PowerEdge Deployment Guide
Dell PowerEdge Deployment Guide
 
eFileCabinet Manual Version 4.0
eFileCabinet Manual Version 4.0eFileCabinet Manual Version 4.0
eFileCabinet Manual Version 4.0
 
GoldenGate Whitepaper Oracle 8i 9i to 10g 11g Database Migration
GoldenGate Whitepaper Oracle 8i 9i to 10g 11g Database MigrationGoldenGate Whitepaper Oracle 8i 9i to 10g 11g Database Migration
GoldenGate Whitepaper Oracle 8i 9i to 10g 11g Database Migration
 
Coherence developer's guide
Coherence developer's guideCoherence developer's guide
Coherence developer's guide
 
pdfcoffee.com_i-openwells-basics-training-3-pdf-free.pdf
pdfcoffee.com_i-openwells-basics-training-3-pdf-free.pdfpdfcoffee.com_i-openwells-basics-training-3-pdf-free.pdf
pdfcoffee.com_i-openwells-basics-training-3-pdf-free.pdf
 
Tx2014 Feature and Highlights
Tx2014 Feature and Highlights Tx2014 Feature and Highlights
Tx2014 Feature and Highlights
 
Oracle database edition-12c
Oracle database edition-12cOracle database edition-12c
Oracle database edition-12c
 
Cr8.5 usermanual
Cr8.5 usermanualCr8.5 usermanual
Cr8.5 usermanual
 
Oracle_9i_Database_Getting_started
Oracle_9i_Database_Getting_startedOracle_9i_Database_Getting_started
Oracle_9i_Database_Getting_started
 
WEBGUIDE.PDF
WEBGUIDE.PDFWEBGUIDE.PDF
WEBGUIDE.PDF
 
Cc admin
Cc adminCc admin
Cc admin
 
Youwe sap-ecc-r3-hana-e commerce-with-magento-mb2b-100717-1601-206
Youwe sap-ecc-r3-hana-e commerce-with-magento-mb2b-100717-1601-206Youwe sap-ecc-r3-hana-e commerce-with-magento-mb2b-100717-1601-206
Youwe sap-ecc-r3-hana-e commerce-with-magento-mb2b-100717-1601-206
 
Dw guide 11 g r2
Dw guide 11 g r2Dw guide 11 g r2
Dw guide 11 g r2
 
Eloqua web services api 1.2 user guide v1.0.1
Eloqua web services api 1.2 user guide v1.0.1Eloqua web services api 1.2 user guide v1.0.1
Eloqua web services api 1.2 user guide v1.0.1
 
Sage Intelligence 100 Microsoft Excel Tips and Tricks
Sage Intelligence 100 Microsoft Excel Tips and TricksSage Intelligence 100 Microsoft Excel Tips and Tricks
Sage Intelligence 100 Microsoft Excel Tips and Tricks
 
Plesk 8.1 for Linux/UNIX
Plesk 8.1 for Linux/UNIXPlesk 8.1 for Linux/UNIX
Plesk 8.1 for Linux/UNIX
 
Plesk 8.1 for Linux/UNIX
Plesk 8.1 for Linux/UNIXPlesk 8.1 for Linux/UNIX
Plesk 8.1 for Linux/UNIX
 
Sap Solman Instguide Dba Cockpit Setup
Sap Solman Instguide Dba Cockpit SetupSap Solman Instguide Dba Cockpit Setup
Sap Solman Instguide Dba Cockpit Setup
 
Sc101 t um_10_jan07
Sc101 t um_10_jan07Sc101 t um_10_jan07
Sc101 t um_10_jan07
 

Plus de Accenture

Certify 2014trends-report
Certify 2014trends-reportCertify 2014trends-report
Certify 2014trends-reportAccenture
 
Calabrio analyze
Calabrio analyzeCalabrio analyze
Calabrio analyzeAccenture
 
Tier 2 net app baseline design standard revised nov 2011
Tier 2 net app baseline design standard   revised nov 2011Tier 2 net app baseline design standard   revised nov 2011
Tier 2 net app baseline design standard revised nov 2011Accenture
 
Perf stat windows
Perf stat windowsPerf stat windows
Perf stat windowsAccenture
 
Performance problems on ethernet networks when the e0m management interface i...
Performance problems on ethernet networks when the e0m management interface i...Performance problems on ethernet networks when the e0m management interface i...
Performance problems on ethernet networks when the e0m management interface i...Accenture
 
Migrate volume in akfiler7
Migrate volume in akfiler7Migrate volume in akfiler7
Migrate volume in akfiler7Accenture
 
Migrate vol in akfiler7
Migrate vol in akfiler7Migrate vol in akfiler7
Migrate vol in akfiler7Accenture
 
Data storage requirements AK
Data storage requirements AKData storage requirements AK
Data storage requirements AKAccenture
 
C mode class
C mode classC mode class
C mode classAccenture
 
Akfiler upgrades providence july 2012
Akfiler upgrades providence july 2012Akfiler upgrades providence july 2012
Akfiler upgrades providence july 2012Accenture
 
Reporting demo
Reporting demoReporting demo
Reporting demoAccenture
 
Net app virtualization preso
Net app virtualization presoNet app virtualization preso
Net app virtualization presoAccenture
 
Providence net app upgrade plan PPMC
Providence net app upgrade plan PPMCProvidence net app upgrade plan PPMC
Providence net app upgrade plan PPMCAccenture
 
WSC Net App storage for windows challenges and solutions
WSC Net App storage for windows challenges and solutionsWSC Net App storage for windows challenges and solutions
WSC Net App storage for windows challenges and solutionsAccenture
 
50,000-seat_VMware_view_deployment
50,000-seat_VMware_view_deployment50,000-seat_VMware_view_deployment
50,000-seat_VMware_view_deploymentAccenture
 
Tr 3998 -deployment_guide_for_hosted_shared_desktops_and_on-demand_applicatio...
Tr 3998 -deployment_guide_for_hosted_shared_desktops_and_on-demand_applicatio...Tr 3998 -deployment_guide_for_hosted_shared_desktops_and_on-demand_applicatio...
Tr 3998 -deployment_guide_for_hosted_shared_desktops_and_on-demand_applicatio...Accenture
 
Tr 3749 -net_app_storage_best_practices_for_v_mware_vsphere,_dec_11
Tr 3749 -net_app_storage_best_practices_for_v_mware_vsphere,_dec_11Tr 3749 -net_app_storage_best_practices_for_v_mware_vsphere,_dec_11
Tr 3749 -net_app_storage_best_practices_for_v_mware_vsphere,_dec_11Accenture
 
Snap mirror source to tape to destination scenario
Snap mirror source to tape to destination scenarioSnap mirror source to tape to destination scenario
Snap mirror source to tape to destination scenarioAccenture
 
Ref arch for ve sg248155
Ref arch for ve sg248155Ref arch for ve sg248155
Ref arch for ve sg248155Accenture
 

Plus de Accenture (20)

Certify 2014trends-report
Certify 2014trends-reportCertify 2014trends-report
Certify 2014trends-report
 
Calabrio analyze
Calabrio analyzeCalabrio analyze
Calabrio analyze
 
Tier 2 net app baseline design standard revised nov 2011
Tier 2 net app baseline design standard   revised nov 2011Tier 2 net app baseline design standard   revised nov 2011
Tier 2 net app baseline design standard revised nov 2011
 
Perf stat windows
Perf stat windowsPerf stat windows
Perf stat windows
 
Performance problems on ethernet networks when the e0m management interface i...
Performance problems on ethernet networks when the e0m management interface i...Performance problems on ethernet networks when the e0m management interface i...
Performance problems on ethernet networks when the e0m management interface i...
 
Migrate volume in akfiler7
Migrate volume in akfiler7Migrate volume in akfiler7
Migrate volume in akfiler7
 
Migrate vol in akfiler7
Migrate vol in akfiler7Migrate vol in akfiler7
Migrate vol in akfiler7
 
Data storage requirements AK
Data storage requirements AKData storage requirements AK
Data storage requirements AK
 
C mode class
C mode classC mode class
C mode class
 
Akfiler upgrades providence july 2012
Akfiler upgrades providence july 2012Akfiler upgrades providence july 2012
Akfiler upgrades providence july 2012
 
NA notes
NA notesNA notes
NA notes
 
Reporting demo
Reporting demoReporting demo
Reporting demo
 
Net app virtualization preso
Net app virtualization presoNet app virtualization preso
Net app virtualization preso
 
Providence net app upgrade plan PPMC
Providence net app upgrade plan PPMCProvidence net app upgrade plan PPMC
Providence net app upgrade plan PPMC
 
WSC Net App storage for windows challenges and solutions
WSC Net App storage for windows challenges and solutionsWSC Net App storage for windows challenges and solutions
WSC Net App storage for windows challenges and solutions
 
50,000-seat_VMware_view_deployment
50,000-seat_VMware_view_deployment50,000-seat_VMware_view_deployment
50,000-seat_VMware_view_deployment
 
Tr 3998 -deployment_guide_for_hosted_shared_desktops_and_on-demand_applicatio...
Tr 3998 -deployment_guide_for_hosted_shared_desktops_and_on-demand_applicatio...Tr 3998 -deployment_guide_for_hosted_shared_desktops_and_on-demand_applicatio...
Tr 3998 -deployment_guide_for_hosted_shared_desktops_and_on-demand_applicatio...
 
Tr 3749 -net_app_storage_best_practices_for_v_mware_vsphere,_dec_11
Tr 3749 -net_app_storage_best_practices_for_v_mware_vsphere,_dec_11Tr 3749 -net_app_storage_best_practices_for_v_mware_vsphere,_dec_11
Tr 3749 -net_app_storage_best_practices_for_v_mware_vsphere,_dec_11
 
Snap mirror source to tape to destination scenario
Snap mirror source to tape to destination scenarioSnap mirror source to tape to destination scenario
Snap mirror source to tape to destination scenario
 
Ref arch for ve sg248155
Ref arch for ve sg248155Ref arch for ve sg248155
Ref arch for ve sg248155
 

Dernier

Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024The Digital Insurer
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProduct Anonymous
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...apidays
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
Evaluating the top large language models.pdf
Evaluating the top large language models.pdfEvaluating the top large language models.pdf
Evaluating the top large language models.pdfChristopherTHyatt
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Drew Madelung
 
Tech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfTech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfhans926745
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUK Journal
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEarley Information Science
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfEnterprise Knowledge
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Enterprise Knowledge
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 

Dernier (20)

Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Evaluating the top large language models.pdf
Evaluating the top large language models.pdfEvaluating the top large language models.pdf
Evaluating the top large language models.pdf
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
Tech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfTech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdf
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 

NetApp system installation workbook Spokane

  • 1. System Installation Workbook Spokane Version 2.7 Date: Apr 2013 ABOUT NETAPP NetApp creates innovative storage and data management solutions that deliver outstanding cost efficiency and accelerate performance breakthroughs. Discover our passion for helping companies around the world go further, faster at www.netapp.com. NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 USA Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support telephone: +1 (888) 4-NETAPP
  • 2. Copyright and trademark information © Copyright 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, and Data ONTAP are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. © Copyright 2012 NetApp, Inc. All rights reserved. www.netapp.com
• 3. Table of contents
1 SITE REQUIREMENTS ... 6
1.1 Physical characteristics - Storage controllers and disk drives ... 6
1.2 System power requirements - Storage controllers and disk drives ... 8
1.2.1 FAS20xx series systems ... 8
1.2.2 FAS22xx series systems ... 8
1.2.3 FAS30xx series systems ... 9
1.2.4 FAS31xx series systems ... 9
1.2.5 FAS32xx series systems ... 10
1.2.6 FAS60xx series systems ... 11
1.2.7 FAS62xx series systems ... 11
1.2.8 DS14 series disk shelves ... 11
1.2.9 DS2246 disk shelves ... 12
1.2.10 DS4243 disk shelves ... 12
1.2.11 DS4246 disk shelves ... 13
1.2.12 DS4486 disk shelves ... 13
1.3 System Cabinet ... 14
1.4 System cabinet configurations ... 14
1.5 Network cabling requirements ... 15
1.5.1 Ethernet Configuration Recommendations ... 15
2 DATA ONTAP® 7-MODE CONFIGURATION DETAILS ... 16
2.1 Basic configuration ... 16
2.1.1 IFGRPs ... 16
2.1.2 Network interface configuration ... 16
2.1.3 Default gateway ... 17
2.1.4 Administration host (Optional) ... 17
2.1.5 Time zone ... 17
2.1.6 Language encoding for multiprotocol files ... 17
2.1.7 Domain Name Services (DNS) resolution ... 17
2.1.8 Network Information Services (NIS) resolution ... 17
2.1.9 Remote Management Settings (RLM/SP/BMC) ... 18
2.1.10 Alternate Control Path (ACP) management for SAS shelves ... 18
2.1.11 CIFS configuration ... 18
2.1.12 Configure Virtual LANs (Optional) ... 18
2.1.13 AutoSupport settings ... 19
2.1.14 Customer/RMA details ... 19
2.1.15 Time synchronization ... 19
2.1.16 SNMP management settings (Optional) ... 19
3 DATA ONTAP 7-MODE INSTALLATION AND VERIFICATION CHECKLISTS ... 21
4 DATA ONTAP CLUSTER-MODE CONFIGURATION DETAILS ... 25
4.1 Cluster information ... 25
4.1.1 Cluster ... 25
4.1.2 Licensing ... 25
4.1.3 Admin Vserver ... 26
4.1.4 Time synchronization ... 26
4.1.5 Time zone ... 26
4.2 Node information ... 26
4.2.1 Physical port identification ... 26
4.2.2 Node management LIF ... 27
4.3 Cluster network information ... 28
4.3.1 Interface groups (IFGRP) ... 28
4.3.2 Configure Virtual LANs (VLANs) ... 28
4.3.3 Logical Interfaces (LIFs) ... 28
4.4 Intercluster network information ... 29
4.5 Vserver information ... 29
4.5.1 Creating Vserver ... 29
4.5.2 Creating Volumes on the Vserver ... 29
4.5.3 IP Network Interface on the Vserver ... 30
4.5.4 FCP Network Interface on the Vserver ... 30
4.5.5 LDAP services ... 30
4.5.6 CIFS protocol ... 31
4.5.7 iSCSI protocol ... 31
4.5.8 FCP protocol ... 31
4.6 Support information ... 31
4.6.1 Remote Management Settings (RLM/BMC/SP) ... 31
4.6.2 AutoSupport settings ... 32
4.6.3 Customer/RMA details ... 32
A. DATA ONTAP CLUSTER-MODE INSTALLATION AND VERIFICATION CHECKLISTS ... 33
A.1 Definitions ... 36
• 4. List of Tables
Table 1: Electrical requirements - FAS20xx series ... 8
Table 2: Electrical requirements - FAS2220 ... 8
Table 3: Electrical requirements - FAS2240 series (one controller module, no mezzanine card, and either 450-GB or 600-GB disk drives for FAS2240; 1-TB or 2-TB disk drives for FAS2240-4) ... 8
Table 4: Electrical requirements - FAS30xx series ... 9
Table 5: Electrical requirements - FAS31xx series ... 9
Table 6: Electrical requirements - FAS3210 with one 256-GB Flash Cache module, one controller module ... 10
Table 7: Electrical requirements - FAS3240 with one 256-GB, one 512-GB, or one 1-TB Flash Cache module per controller module, two controller modules ... 10
Table 8: Electrical requirements - FAS3270 with one 256-GB, one 512-GB, or one 1-TB Flash Cache module per controller module, two controller modules ... 10
Table 9: Electrical requirements - FAS6030/FAS6040 ... 11
Table 10: Electrical requirements - FAS6210 single-controller module; FAS6240 and FAS6280 with I/O expansion ... 11
Table 11: Electrical requirements - DS14mk2 AT, 7.2K speed ... 11
Table 12: Electrical requirements - DS14mk2 FC, 15K speed ... 12
Table 13: Electrical requirements - DS2246, SAS drives ... 12
Table 14: Electrical requirements - DS4243, SAS drives ... 12
Table 15: Electrical requirements - DS4243, SATA drives ... 13
Table 16: Electrical requirements - DS4246, SATA drives; 6 100-GB SSD drives with 18 1-TB or 18 3-TB disk drives ... 13
Table 17: Electrical requirements - DS4486 ... 13
• 5. WELCOME
Dear Customer,
Thank you for choosing a NetApp storage system and Professional Services installation. To ensure a seamless deployment and integration into your environment, please complete the information requested in this document before our engineer arrives on site. This will ensure that as many questions as possible are answered before the day of the installation, so you can start using your system.
The first part of the document includes environmental information about our products, which may help you with your computer room planning. The second part of the workbook covers the information that the Professional Services engineer will need on the day of installation. Please obtain the required information and return a completed copy of this document to the engineer before they arrive.
We look forward to working with you.
Yours faithfully,
NetApp Services Engineering
• 6. Preface
This document describes how to install a NetApp system.
AUDIENCE
The primary audience for this document is Professional Services (PS) consultants and IT administration engineers.
NON-DISCLOSURE REQUIREMENTS
© Copyright 2012 NetApp. All rights reserved. This document contains the confidential and proprietary information of NetApp, Inc. Do not reproduce or distribute without the prior written consent of NetApp.
INFORMATION ABOUT THIS DOCUMENT
All information about this document, including version history, review and approval, typographical conventions, references, and a glossary of terms, can be found in the final chapter of this document.
  • 7. 1 Site requirements Please download and read the latest version of the Site requirements guide available at http://support.netapp.com/ 1.1 Physical characteristics -Storage controllers and disk drives Hardware Height Width Depth Weight FAS62xx series 10.2 in (25.86 cm) 17.6 in (44.68 cm) 29 in (73.66 cm) Including cable management tray Single controller module 99.2 lbs(45 kg) Controller and I/O expansion module 125.7 lbs(57 kg) Two controller modules 130.1lbs(59 kg) 29 in (73.66 cm) Including cable management tray 122 lbs(55.34 kg) 24 in (60.7 cm) Single controller module 67.3 lbs(30.5 kg) Controller and I/O expansion module 74.5 lbs(33.8 kg) Two controller modules 79.5 lbs(36.1 kg) Single controller module 102 lbs(46.27 kg) Two controller modules 121 lbs(54.89 kg) FAS60xx series FAS32xx series FAS31xx series 10.32 in (26.21 cm) 5.12 in (13.0 cm) 10.75 in (27.3 cm) 17.53 in (44.52 cm) 17.61 in (44.7 cm) 17.73 in (45.0 cm) 24 in (60.7 cm) Rack units 6 6 FAS30xx series 5.13 in (13 cm) 17.73 in (45.0 cm) 24 in (60.7 cm) 68 lbs(30.84 kg) FAS2240-4 7 in (17.9 cm) 17.73 in (45.0 cm) 28 in (71.1 cm) Including the cable management arm installed Single controller module 107.8 lbs(48.9 kg) 23.1 in (58.7 cm) Including the cable management arm installed Single controller module 50.7 lbs(23 kg) Two controller modules 56 lbs(25.4 kg) 24.1 in (61.2 cm) Including the cable management arm installed Single controller Module 57.8 lbs(26.2 kg) Two controller modules 62.4 lbs (28.3 kg) 22.5 in (57.2 cm) Full(chassis with all disk drives) 110 lbs(49.9 kg) Empty(No internal disks) 91 lbs(41.3 kg) Full(chassis with all disk drives) 66 lbs(29.9 kg) Empty(No internal disks) 57 lbs(25.9 kg) Full(chassis with 66 lbs(29.9 kg) 6 102.3 lbs(46.4 kg) Two controller modules 3 FAS2240-2 FAS2220 FAS2050 FAS2040 FAS2020 3.3 in (8.4 cm) 3.4 in (8.4 cm) 6.9 in (17.5 cm) 3.5 in (8.9 cm) 3.5 in 17.6 in (44.7 cm) 17.6 in (44.7 cm) 17.6 in (44.7 cm) 17.6 in (44.7 cm) 17.6 in 22.5 in (57.2 cm) 22.5 in 3 © Copyright 2012 NetApp, Inc. All rights reserved. www.netapp.com 4 2 2 4 2 2
  • 8. (8.9 cm) (44.7cm) all disk drives) (57.2 cm) Empty(No internal disks) 57 lbs(25.9 kg) Hardware Height Width Depth DS14 series 5.25 in (13.3 cm) 17.6 in (44.7 cm) DS14mk2FC DS14mk4FC 20 in (50.8 cm) With disk drives 77 lbs(35 kg) DS14mk2AT 22in (55.2 cm) With disk drives 68 lbs(30.8 kg) Empty 50.06 lbs(23 kg) With disk drives 49 lbs(22.2 kg) Without disk drives 34.6 lbs(15.7 kg) Empty 17.4 lbs(7.9 kg) With disk drives 110 lbs(49.9 kg) Without disk drives 53.7 lbs(24.4 kg) Empty 21.1 lbs(9.6 kg) With disk drives 110 lbs (49.9 kg) Without disk drives 53.7 lbs (24.4 kg) Empty 21.1 lbs(9.6kg) With disk drives 150 lbs(68kg) With four carriers, IOMs and PSU‘s 82lbs (37kg) DS2246 DS4243 DS4246 DS4486 3.4 in (8.5cm) 7 in (17.8 cm) 7 in (17.8 cm) 6.87 in (17.44 cm) 19 in (48.0cm) 19 in (48.0 cm) 17.7 in (45 cm) 17.6 in (44.7 cm) Weight 19.1 in (48.4 cm) 24 in(61 cm) 24 in (61 cm) 27 in (68.6 cm) Depth from mounting flange to rear chassis bulk head Rack Units 3 2 4 4 4 Note: The DS14 series includes DS14,DS14mk2FC,DS14mk4FC with an ESH (ESH refers to ESH2 and ESH4) and DS14mk2AT Hardware Height Width Depth Weight Rack Units Cisco5010 1.72in(4.4cm) 17.3in(43.9cm) 30in(76.2cm) 35lbs(15.88kg) 1 Cisco5020 3.47(8.8cm) 17.3in(43.9cm) 30in(76.2cm) 50lbs(22.68kg) 2 Cisco2960 1.73in(4.4cm) 17.5in(44.45cm) 9.3in(23.62) 8lbs(3.63kg) 1 * 1U = 1.75 inches Note: Please plan for at least 36 inches (91.4 centimeters) of clearance on both front and back of the system. This amount of space allows you to reach the back panel for cabling the system. It also allows you to slide the motherboard tray out from the back of the system when removing or installing hardware. © Copyright 2012 NetApp, Inc. All rights reserved. www.netapp.com
  • 9. 1.2 System power requirements - Storage controllers and disk drives Note: The following section contains the power requirements for the available FAS series and disk shelves. However, the tables cover values for one-controller modules. If you need additional information such as inclusion of two controllers, mezzanine card, I/O Expansion module, Flash Cache module etc., please refer to the latest Site Requirements guide, before you proceed with the installation. 1.2.1 FAS20xx series systems Table 1: Electrical Requirements – FAS20xx series 100 to 120V Worstcase Single PSU 1-TB SATA Typical Per PSU System two PSUs Worstcase single PSU 3.37 1.61 3.22 2-TB SATA 3.36 1.65 1-TB SATA 332 158 2-TB SATA 334 1-TB SATA 2-TB SATA Parameter Drives (in GB) 200 to 240 V Typical Per PSU System two PSUs 1.69 0.83 1.66 3.29 1.69 0.84 1.68 316 327 152.5 305 162 324 326 160 320 3.62 1.77 3.53 1.81 0.90 1.8 3.34 1.61 3.22 1.66 0.84 1.67 1-TB SATA 357 173 345 347 169 337 2-TB SATA 329 158 315 319 156 312 Input current measured, A 1-TB SATA 5.07 2.26 4.51 2.46 1.20 2.40 Input power measured,W 1-TB SATA 504 220 439 474 224 447 FAS2020 Input current measured, A Input power measured,W FAS2040 Input current measured, A Input power measured,W FAS2050 1.2.2 FAS22xx series systems Table 2: Electrical requirements – FAS2220 T a b Parameter l e FAS2220 Input current 3 measured, A : E Input power l measured,W e c 100V 200 V Drives (in GB) Worst case single PSU Typical Typical System two PSUs Worst-case single PSU Per PSU Per PSU System two PSUs 1-TB 4.18 1.3 2.6 2 0.67 1.33 2-TB 3-TB 4.26 1.34 2.63 2.14 0.68 1.36 4.32 1.37 2.74 2.14 0.69 1.38 1-TB 417 129 258 396 123 246 2-TB 425 131 261 423 126 252 3-TB 431 136 271 423 129 257 © Copyright 2012 NetApp, Inc. All rights reserved. www.netapp.com
  • 10. Table 3: Electrical requirements — FAS2240 series (one controller module, no mezzanine card and either 450-GB or 600-GB disk drives for FAS2240; 1TB or 2TB disk drives for FAS2240-4) Input Voltage 100V 200V 215V Worstcase, single PSU Typical Per PSU System, two PSUs/ System, four PSU Worstcase, single PSU /2+2 PSU Typical System, two /four PSU Worstcase, single PSU /2+2 PSU Typical Per PSU Per PSU System, two PSUs / System four PSU Input current measured, A 4.76 1.8 3.60 2.31 0.88 1.76 2.15 0.82 1.64 Input power measured,W 474 178 356 456 170 339 456 168 336 5.34 1.21 4.85 2.68 0.63 2.5 2.53 0.59 2.37 533 121 482 (four PSU‘s) 517 (2+2 PSU‘s) 117 468 (four PSU‘s) 515 (2+2 PSU‘s) 117 466 (four PSU‘s) FAS2240-2 FAS2240-4 Input current measured, A Input power measured,W 1.2.3 FAS30xx series systems Table 4: Electrical requirements –FAS30xx series Input Voltage 100 to 120V 200 to 240V -40 to -60V Worstcase, single PSU Typical Per PSU System, two PSUs Worstcase, single PSU Typical System, two PSUs Worstcase, single PSU Typical Per PSU Per PSU System, two PSUs Input current measured, A 3.39 1.2 2.4 1.77 0.71 1.40 8.2 2.85 5.7 Input power measured,W 336 118 236 329 115 229 328 113 226 Input current measured, A 3.66 1.7 3.4 1.9 0.95 1.9 7.94 3.7 7.4 Input power measured,W 363 169 338 358 165 330 318 148 296 Input current measured, A 3.88 1.7 3.4 2.04 0.95 1.9 9.49 4.0 8.0 Input power measured,W 386 164 328 384 164 327 380 160 319 Input current measured, A 4.03 1.85 3.7 2.06 1.05 2.1 10.57 4.7 9.4 Input power measured,W 400 181 362 387 178 355 423 188 376 FAS3020 FAS3040 FAS3050 FAS3070 1.2.4 FAS31xx series systems Table 5: Electrical requirements – FAS31xx series Input Voltage 100 to 120V Worstcase, single PSU 200 to 240V Typical Per PSU System, two PSUs Worstcase, single PSU -40 to -60V Typical Per PSU System, two PSUs Worstcase, single PSU © Copyright 2012 NetApp, Inc. All rights reserved. www.netapp.com Typical Per PSU System, two PSUs
  • 11. FAS3140 Input current measured, A 3.98 1.89 3.77 1.97 0.97 1.93 8.38 4.88 9.75 Input power measured,W 396 187 373 385 183 366 336 195 389 Input current measured, A 4.80 2.25 4.50 2.38 1.16 2.32 10.07 5.90 11.79 Input power measured,W 476 220 440 460 225 450 404 235 470 Input current measured, A 5.07 2.37 4.74 2.52 1.19 2.38 10.75 6.09 12.18 Input power measured,W 505 235 470 493 230 459 430 243 486 FAS3160 FAS3170 1.2.5 FAS32xx series systems Table 6: Electrical requirements – FAS3210 with one 256-GB Flash Cache module— one controller module Input Voltage 100 to 120V 200 to 240V -40 to -60V Worstcase, single PSU Typical Per PSU System, two PSUs Worstcase, single PSU Typical System, two PSUs Worstcase, single PSU Typical Per PSU Per PSU System, two PSUs Input current measured, A 4.22 1.52 3.03 2.11 0.83 1.66 10.45 3.65 7.30 Input power measured,W 421 150 299 411 147 293 418 146 292 FAS3210 Table 7: Electrical requirements – FAS3240 with one 256-GB, one 512-GB, or one 1-TB Flash Cache module per controller module—two controller modules Input Voltage 100 to 120V 200 to 240V -40 to -60V Worstcase, single PSU Typical Per PSU System, two PSUs Worstcase, single PSU Typical System, two PSUs Worstcase, single PSU Typical Per PSU Per PSU System, two PSUs Input current measured, A 6.37 2.35 4.70 3.15 1.21 2.41 15.9 5.90 11.8 Input power measured,W 635 233 466 620 228 456 636 236 472 FAS3240 Table 8: Electrical requirements – FAS3270 with one 256-GB, one 512-GB, or one 1-TB Flash Cache module per controller module—two controller modules Input Voltage 100 to 120V 200 to 240V Worstcase, single PSU Typical Typical Per PSU System, two PSUs Worstcase, single PSU -40 to -60V Per PSU Input current measured, A 7.28 2.78 5.56 3.58 Input power measured,W 728 278 552 707 Typical System, two PSUs Worstcase, single PSU Per PSU System, two PSUs 1.42 2.83 18.2 6.95 13.9 271 541 728 278 556 FAS3270 © Copyright 2012 NetApp, Inc. All rights reserved. www.netapp.com
  • 12. 1.2.6 FAS60xx series systems Table 9: Electrical requirements -FAS6030/FAS6040 Input Voltage 100 to 120V 200 to 240V Worst-case, single PSU Typical Per PSU System, two PSUs Worst-case, single PSU Typical Per PSU System, two PSUs Input current measured, A 9.75 2.87 5.74 4.87 1.57 3.14 Input power measured,W 968 279 557 934 217 541 Input current measured, A 11.68 3.63 7.25 5.76 1.96 3.91 Input power measured, W 1,162 352 704 1,115 231 693 FAS6030 /FAS6040 FAS6070/FAS6080 1.2.7 FAS62xx series systems Table 10: Electrical requirements-FAS 6210 single-controller module;FAS6240 & FAS6280 with I/O expansion Input Voltage 100 to 120V 200 to 240V Worst-case, single PSU Typical Worst-case, single PSU Per PSU System, two PSUs Typical Per PSU System, two PSUs Input current measured, A 5 2.25 4.5 2.5 1.15 2.3 Input power measured, W 490 215 430 480 208 415 Input current measured, A 9.3 3.3 6.6 4.5 1.65 3.3 Input power measured, W 920 312.5 625 875 308 615 Input current measured, A 9.6 3.5 6.9 4.7 1.75 3.5 Input power measured, W 950 332.5 665 910 323 645 FAS6210 FAS6240 FAS6280 1.2.8 DS14 series disk shelves Table 11: Electrical requirements - DS14mk2 AT 7.2K speed 100 to 120V 200 to 240V -40 to – 60V Size GB WorstCase, single PSU Typical Typical System , two PSUs Per PSU System, two PSUs Worstcase, single PSU Typical Per PSU Worstcase, single PSU Per PSU System, two PSUs 250 2.79 1.36 2.72 1.38 0.70 1.39 7.38 2.84 5.67 320 2.85 1.56 3.12 1.43 0.78 1.56 7.4 2.82 5.64 500 2.94 1.45 2.9 1.43 0.74 1.47 8.04 3.11 6.22 750 3.42 1.61 3.22 1.63 0.53 1.60 8.42 6.63 7.25 1-TB 3.15 1.55 3.10 1.55 0.78 1.56 8.33 3.24 6.48 250 279 136 271 271 132 264 295 114 227 Input Voltage DS14mk2 AT Input current measured, A Input power © Copyright 2012 NetApp, Inc. All rights reserved. www.netapp.com
  • 13. measured, W 320 284 155 310 283 152 304 296 113 226 500 293 144 288 286 142 283 322 125 249 750 341 161 321 323 155 309 337 145 290 1-TB 315 154 308 309 150 300 333 130 259 Table 12: Electrical requirements- DS14mk2 FC 15K speed Input Voltage 100 to 120V -40 to – 60V 200 to 240V Size GB WorstCase, single PSU Typical Per PSU System, two PSUs Worstcase, single PSU Typical System , two PSUs Worstcase, single PSU Typical Per PSU Per PSU System, two PSUs 72 3.41 1.82 3.63 1.67 0.89 1.78 10.04 3.98 7.95 144 3.96 1.88 3.75 1.93 0.94 1.88 10.40 4.13 8.25 288 4.43 2.16 4.32 2.23 1.07 2.13 11.98 4.36 8.72 450 4.43 2.16 4.32 2.23 1.07 2.13 N/A 72 340 181 362 331 173 345 402 159 318 144 395 187 373 383 183 365 416 165 330 288 443 216 431 443 208 415 479 175 349 450 443 216 431 443 208 415 N/A 450 1,512 735 1,470 1,512 707 1,414 N/A DS14mk2FC Input current measured, A Input power measured, W 1.2.9 DS2246 disk shelves Table 13: Electrical requirements—DS2246-SAS drives Input Voltage 100 to 120V Size (GB) 200 to 240V (200V actual) Worst-Case, single PSU Typical Worst-case, single PSU Typical Per PSU System, two PSUs Per PSU System, two PSUs DS2246 Input current measured, A 450 1.38 2.76 2.29 0.79 1.58 600 4.22 1.39 2.77 2.29 0.82 1.64 900 4.22 1.39 2.77 2.29 0.82 1.64 450 428 137 274 420 135 270 600 422 134 267 418 133 266 900 Input power measured, W 4.28 422 134 267 418 133 266 1.2.10 DS4243 disk shelves Table 14: Electrical requirements—DS4243-SAS drives Input Voltage 200 to 240V (200V actual) 200 to 240V (215V actual) Size (GB) DS4243SAS Total input current measured, A Total input power measured, 100 to 120V WorstCase, single PSU Worstcase, single PSU Worstcase, single PSU 300 450 600 300 450 600 5.5 6.00 5.98 550 600 595 Typical Per PSU System, two PSUs 3.0 3.15 2.86 300 315 284 6.0 6.30 5.71 600 630 567 2.8 3.00 2.99 560 600 584 Typical Per PSU System, two PSUs 1.5 1.60 1.44 300 320 274 3.0 3.20 2.87 600 640 547 © Copyright 2012 NetApp, Inc. All rights reserved. www.netapp.com 2.6 2.80 559 602 Typical Per PSU System, two PSUs 1.4 1.50 N/A 301 323 N/A 2.8 3.00 602 645
  • 14. W Table 15: Electrical requirements—DS4243-SATA drives Input Voltage 100 to 120V 200 to 240V (200V actual) 200 to 240V (215V actual) Size (GB) WorstCase, single PSU Typical Per PSU System, two PSUs Worstcase, single PSU Typical System, two PSUs Worstcase, single PSU Typical Per PSU Per PSU System, two PSUs 500 4.30 2.20 4.40 2.10 1.10 2.20 1.90 1.05 2.10 1-TB 4.41 2.21 4.42 2.21 1.14 2.27 1.90 1.05 2.10 2-TB 4.72 2.31 4.62 2.42 1.21 2.42 N/A 3-TB 4.95 2.30 4.60 2.43 1.19 2.38 100 (SSD) 1.96 0.82 1.63 1.0 0.45 0.9 0.95 0.42 0.84 500 430 220 440 420 220 440 409 226 452 1-TB 439 219 438 429 212 424 409 226 452 2-TB 469 229 458 470 228 456 N/A 3-TB 495 228 456 476 224 448 100 (SSD) 196 82 163 200 90 180 90 180 DS4243SATA Input current measured, A Input power measured, W 205 1.2.11 DS4246 disk shelves Table 16: Electrical requirements— DS4246 -SATA drives, 6-100GB SSD drives with 18-1TB or 183TB disk drives Input Voltage 100 to 120V 200 to 240V (200V actual) Size (GB) Worst-Case, single PSU Typical Per PSU System, two PSUs Worst-case, single PSU Typical Per PSU System, two PSUs Input current measured, A 1-TB 3.91 1.7 3-TB 4.11 1.9 3.41 2.11 0.9 1.84 3.72 2.25 1.1 2.14 Input power measured, W 1-TB 386 3-TB 406 168 335 388 166 331 123 368 418 199 397 DS4246 1.2.12 DS4486 disk shelves Table 17: Electrical requirements –DS4486 Input Voltage 100 to 120V 200 to 240V (200V actual) Size (GB) Worst-Case, single PSU Typical Per PSU pair System, two PSUs Worst-case, single PSU Typical Per PSU pair System, two PSUs Input current measured, A 3-TB 8.71 3.29 6.57 4.59 1.73 3.46 Input power measured, W 3-TB 870 329 657 919 346 692 DS4486 © Copyright 2012 NetApp, Inc. All rights reserved. www.netapp.com
• 15. 1.3 System Cabinet
Dimensions (42U cabinet, X870B-R6 / 42U Deep cabinet, X870C-R6):
Height: 78.7 in (200 cm) / 78.7 in (200 cm)
Depth: 37.4 in (95 cm) / 44.3 in (112.50 cm)
Width: 23.6 in (60 cm) / 23.6 in (60 cm)
Weight, empty: 287 lb (130.2 kg) / 307 lb (138 kg)
Weight, loaded: 1500 lb (680 kg) / 2307 lb (1046 kg)
Clearance, front: 30 in (76.3 cm) / 30 in (76.3 cm)
Clearance, rear: 30 in (76.3 cm) / 30 in (76.3 cm)
Clearance, top: 12 in (20 cm) / 12 in (30 cm)
Note: Consult your co-location facility manager or vendor documentation if installing into third-party cabinets.
1.4 System cabinet configurations
NEMA 30A Single Phase: 4 PDUs per cabinet, part X8712CR6, NEMA L6-30P plug, 30 A service, 2 cords per side, 48 A per side, 24 outlets per side, approx. 10 kW @ 208 V
NEMA 30A 3-Phase Delta: 2 PDUs per cabinet, part X8719AR6, NEMA L15-30P plug, 30 A service, 1 cord per side, 41.5 A per side, 24 outlets per side, approx. 8.6 kW @ 208 V
NEMA 30A 3-Phase Delta: 2 PDUs per cabinet, part X8720AR6, NEMA L21-30P plug, 30 A service, 1 cord per side, 41.5 A per side, 24 outlets per side, approx. 8.6 kW @ 208 V
IEC 32A Single Phase: 4 PDUs per cabinet, part X8713CR6, IEC 60309-32A P+N+E plug, 32 A service, 2 cords per side, 64 A per side, 24 outlets per side, approx. 14.7 kW @ 230 V
IEC 32A 3-Phase Wye: 2 PDUs per cabinet, part X8718AR6, IEC 60309-32A 3P+N+E plug, 32 A service, 1 cord per side, 96 A per side, 24 outlets per side, approx. 22.1 kW @ 230 V
Note: PDU count is per cabinet; cords, amps, and outlets are per side.
• 16. 1.5 Network cabling requirements
100Base-TX: Cat 5/5e/6 UTP cable with RJ-45 connector
Gigabit Ethernet (Optical): Multimode OM-1, OM-2, OM-3, or OM-4 fiber optic cable with LC connector
Gigabit Ethernet (Copper): Cat 5e/6 UTP cable with RJ-45 connector
10 Gigabit Ethernet (Optical): 10GBase-SR SFP+ transceiver with LC connector* and a multimode OM-3 or OM-4 fiber optic cable
10 Gigabit Ethernet (Copper): 10GBase copper SFP+ twin-ax cable*
Fibre Channel: Multimode OM-1, OM-2, OM-3, or OM-4 fiber optic cable with LC connector
* Must be provided by NetApp or be on the NetApp compatibility list.
Note: Refer to TR-3552, "Optical Network Installation Guide," for more information on optical networking requirements and distance limitations for a particular cable type and data rate.
1.5.1 Ethernet Configuration Recommendations
Switch ports connected to 100Base-TX storage controller ports should be configured manually for speed and duplex (100 Mb/s, full duplex) when possible. Flow control should be enabled on Gigabit and 10 Gigabit network ports: configure the storage controller with Send on and Receive off, and configure the switch with Send off and Receive on. PortFast can be enabled on all switch ports connected to the storage controller so that the port enters the forwarding state faster.
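The switch-side settings above (speed, duplex, flow control, and PortFast) are made on the Ethernet switch and vary by vendor. On the storage controller, the corresponding Data ONTAP 7-Mode commands are sketched below; the port names e0a and e0b are examples only, and the commands should also be added to /etc/rc so that they persist across reboots.
Force 100 Mb/s full duplex on a 100Base-TX port:
fas1> ifconfig e0a mediatype 100tx-fd
Set send-only flow control on a Gigabit or 10 Gigabit port:
fas1> ifconfig e0b flowcontrol send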
  • 17. 2 Data ONTAP® 7-Mode configuration details Please work with your Professional Services representative to complete this worksheet prior to the installation date. The requested information enables us to configure your equipment quickly and efficiently. Depending on the desired configuration, some fields may not be applicable. Note: This worksheet does NOT replace the requirement for reading and understanding the appropriate Data ONTAP manuals that describe the operations of Data ONTAP in 7-Mode. Data ONTAP manuals can be found at the NetApp Support Site under documentation. Customer checklist of site preparation requirements (check all that apply): Adequate rack space for the NetApp system and disk shelves has been provided. The power requirements for the NetApp system and disk shelves have been satisfied. The network patch cabling and switch port configuration is complete. Company Name: PHS NetApp Sales Order #: 600122473 Storage Controller Model: FAS2240-4 2.1 Data ONTAP® Version: 8.1.2 Basic configuration System information Controller 1 Controller 2 Host name (nas + the last 4 S/N) nasxxxx nasxxxx Aggregate Type (32-bit or 64-bit) 64-bit 64-bit Serial Number 2.1.1 IFGRPs Interface Groups (IFGRPs) bond multiple network ports together for increased bandwidth and/or fault tolerance. Note: For systems without an e0P port, leave one network port available for ACP connections to SAS disk shelves. Interface details Controller 1 Controller 2 Number of interface groups to configure Vif1 Vif1 Ifgrp1 Ifgrp1 Names of the interface groups For example, ifgrp1, iscsi_ifgrp2 IFGRP type (multi, single, LACP) Multi – all ports are active Single – one port active, other ports are on standby for failover LACP – network switch manages traffic ifgrp1: LACP ifgrp1: LACP ifgrp2: LACP ifgrp2: LACP ifgrp3: ifgrp3: ifgrp1: Multi-mode IFGRP load balancing style (IP, MAC, round-robin, or port based) IP ifgrp1: IP ifgrp2: IP ifgrp2: IP ifgrp3: ifgrp3: ifgrp1: Number of links (network ports) in each IFGRP 2 ifgrp1: 2 ifgrp2: 2 ifgrp2: 2 ifgrp3: ifgrp3: ifgrp1: Name of network ports in each IFGRP For example,ifgrp1= e0a, e1d ifgrp3=ifgrp1, ifgrp2 E0a,e0c ifgrp2: E0b, e0d ifgrp3: 2.1.2 Network interface configuration If you created IFGRPs, then use their names, otherwise use port names (for example, e0a). © Copyright 2012 NetApp, Inc. All rights reserved. www.netapp.com
  • 18. Some controllers have an e0M interface for environments with a subnet dedicated to managing servers. Include the e0M settings if you have a management subnet. Note: For systems without an e0P port, leave one network port available for ACP connections to SAS disk shelves. Media type Enable Jumbo frames? 255.255.255.0 Ethernet No 10.108.193.12 255.255.255.0 Ethernet No 10.108.193.11 255.255.255.0 Ethernet No e0P 10.108.193.13 255.255.255.0 Ethernet No nasxxxx (A) VIF 170.173.144.10 255.255.255.0 170.173.144.11 VIF Yes nasxxxx (B) VIF 170.173.144.11 255.255.255.0 170.173.144.10 VIF Yes Controller name IP address Network mask nasxxxx (A) e0M 10.108.193.10 nasxxxx (A) e0P nasxxxx (B) e0M nasxxxx (B) 2.1.3 Interface name Partner interface name or IP address Default gateway Gateway details Controller 2 Default Gateway IP address 2.1.4 Controller 1 170.173.144.1 170.173.144.1 Administration host (Optional) You can limit the systems or subnets authorized to mount the root volume. Host details Controller 1 Controller 2 Admin host/subnet IP 2.1.5 Time zone What time zone should the systems set their clocks to (for example, US/Pacific). Time zone Details Controller 2 Time zone Pacific pacific Physical Location (for example, Bldg 4, Dallas) 2.1.6 Controller 1 101 W. 8 Avenue Spokane WA, 99204 th th 101 W. 8 Avenue Spokane WA, 99204 Language encoding for multiprotocol files The default is POSIX and only needs to be changed for systems storing files using international alphabets. Encoding details Controller 2 Language for multiprotocol files 2.1.7 Controller 1 English English Domain Name Services (DNS) resolution DNS resolution DNS Domain Name wa.providence.org DNS Server IP addresses (up to 3) 2.1.8 Values 170.173.161.38; 170.173.113.228; 170.173.132.39 Network Information Services (NIS) resolution NIS resolution Values NIS Domain Name NIS Server IP addresses © Copyright 2012 NetApp, Inc. All rights reserved. www.netapp.com
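As an illustration of sections 2.1.1 through 2.1.7, the 7-Mode commands below create one of the LACP interface groups from the worksheet, assign its address, and set the default gateway, time zone, and DNS domain. The interface group name, member ports, addresses, and domain are the example values from the tables above; substitute your own, add the network commands to /etc/rc so they persist across reboots, and use an MTU of 9000 only if every device in the path supports jumbo frames.
fas1> ifgrp create lacp ifgrp1 -b ip e0a e0c
fas1> ifconfig ifgrp1 170.173.144.10 netmask 255.255.255.0 mtusize 9000 partner ifgrp1
fas1> route add default 170.173.144.1 1
fas1> timezone US/Pacific
fas1> options dns.domainname wa.providence.org
fas1> options dns.enable on
Name server addresses are listed in /etc/resolv.conf on the controller.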
  • 19. 2.1.9 Remote Management Settings (RLM/SP/BMC) All systems include Remote LAN Module (RLM), Baseboard Management Controller (BMC), or a Service Processor (SP) to provide out-of-band control of the storage system. NetApp recommends configuring these interfaces for easier, secure management and troubleshooting. RLM/BMC Controller 1 Controller 2 IP Address 10.108.193.12 10.108.193.13 Network Mask 255.255.255.0 255.255.255.0 Gateway 10.108.193.1 10.108.193.1 Mail server hostname smtplegacy.providence.org smtplegacy.providence.org Mail server IP 170.173.161.56 170.173.161.55 2.1.10 Alternate Control Path (ACP) management for SAS shelves For system models prior to the FAS3200 series, use an onboard NIC port to use ACP. New systems with dedicated e0P ports automatically assign IP addresses. Controller 1 Controller 2 Interface name (if not using e0P) Private subnet (default: 192.168.0.0/22) Network Mask 2.1.11 CIFS configuration Systems with a CIFS license run the CIFS setup wizard, immediately after the Setup wizard completes. NT4 domains will require a server account to be created before running CIFS setup. You can abort the wizard using Ctrl+C from the keyboard and run later if necessary. Note: The installation engineer will require someone with Domain Administrator privileges to help perform this section. When CIFS is configured, a domain administrator should move the controllers out of OU=Computers into an OU for servers. This will ensure Group Policy Objects can be applied to the controllers. CIFS configuration Controller 1 Controller 2 Authentication mode Choose one of: Active Directory domain NT 4 domain Workgroup /etc/passwd or NIS/LDAP Choose one of: Active Directory domain NT 4 domain Workgroup /etc/passwd or NIS/LDAP Domain name wa.providence.org wa.providence.org Multiprotocol Multiprotocol NetBios name Do you want the system visible via WINS (Y/N)? WINS IP addresses (up to 3) Multiprotocol or NTFS only? 2.1.12 Configure Virtual LANs (Optional) VLANs are used to segment network domains using 802.1Q protocol standards. Controller name Interface name VLAN IDs to activate nasxxxx (A) e0M 264 nasxxxx (B) e0M 264 nasxxxx (A) VIF 867 nasxxxx (B) VIF Enable GVRP? 867 © Copyright 2012 NetApp, Inc. All rights reserved. www.netapp.com
  • 20. Note: To trunk VLANs across an interface or IFGRP, you need to set "switchport mode trunk" on that interface or logical interface. This will allow 802.1q trunking, so that traffic across it is VLAN tagged. You must then create the relevant VLAN interfaces on the storage controller. If you want a port or EtherChannel interface to be the only access port for a particular VLAN you must set "switchport mode access" on that interface. Then give the storage controller interface an IP address on that VLAN. No other information is required to VLAN tag the frames. Reboot the controllers at this point for the settings to go into effect. 2.1.13 AutoSupport settings AutoSupport is an automated diagnostic reporting function designed to notify you and NetApp of any event triggered messages. In addition, it provides weekly logs, NetApp health triggers, and performance statistics. This ensures prompt support responsiveness and system wide proactive health checks. Note: System must remain on a support contract and the level of responsiveness is dependent on the level of service purchased. AutoSupport Settings Controller 1 Controller 2 Configure AutoSupport on: Yes Yes SMTP Server Name or IP smtplegacy.providence.org smtplegacy.providence.org AutoSupport Transport One of: HTTPS (default) HTTP SMTP One of: HTTPS (default) HTTP SMTP AutoSupport From E-Mail address nasxxxx@providence.org nasxxxx@providence.org AutoSupport To E-Mail address(es) James.Abella@providence.org; Henry.Pan@providence.org James.Abella@providence.org; Henry.Pan@providence.org 2.1.14 Customer/RMA details Verify this information by logging into the http://www.now.netapp.com website. This information is required to ensure that the Technical Support personnel can reach you and the replacement parts are sent to the correct address. Customer/RMA details Primary contact Secondary contact Contact Name James Abella Henry PAN Contact Address 1801 Lind Ave SW, Renton,WA 1801 Lind Ave SW Renton, WA Contact Phone (805) 218-3791 425-525-3328 Contact E-mail Address James.Abella@providence.org Henry.Pan@providence.org RMA Address RMA Attention to Name 2.1.15 Time synchronization Time synchronization details Values Time services protocol (ntp) ntp Time Servers (up to 3 internal or external hostnames or IP addresses) Time.providence.org Max time skew (<5 minutes for CIFS) < 5 minutes 2.1.16 SNMP management settings (Optional) Fill out if you have SNMP monitoring applications (for example, Operations Manager). Set by using the ‗snmp options‘ command. SNMP settings Controller 1 Controller 2 SNMP Trap Host © Copyright 2012 NetApp, Inc. All rights reserved. www.netapp.com
  • 21. SNMP settings Controller 1 Controller 2 Data Fabric Manager Protocol Choose one of: HTTP HTTPS Choose one of: HTTP HTTPS Data Fabric Manager Port 8080 8080 SNMP Community Data Fabric Manager Server Name or IP © Copyright 2012 NetApp, Inc. All rights reserved. www.netapp.com
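The following 7-Mode commands sketch how the VLAN, AutoSupport, time synchronization, and SNMP settings collected in sections 2.1.12 through 2.1.16 are typically applied. The VLAN ID, mail host, e-mail addresses, and time server are the example values from the worksheet; the SNMP community and trap host are placeholders.
fas1> vlan create ifgrp1 867
fas1> ifconfig ifgrp1-867 170.173.144.10 netmask 255.255.255.0 partner ifgrp1-867
fas1> options autosupport.mailhost smtplegacy.providence.org
fas1> options autosupport.support.transport https
fas1> options autosupport.to James.Abella@providence.org,Henry.Pan@providence.org
fas1> options timed.proto ntp
fas1> options timed.servers time.providence.org
fas1> options timed.enable on
fas1> options snmp.enable on
fas1> snmp community add ro public
fas1> snmp traphost add <DFM server hostname or IP>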
• 22. 3 Data ONTAP 7-Mode installation and verification checklists
The installer will perform the following checks to ensure that your new systems are configured correctly and are ready to turn over to you.
Physical installation (status):
- Check and verify all ordered components were delivered to the customer site.
- Confirm the NetApp controllers are properly installed in the cabinets.
- Confirm there is sufficient airflow and cooling in and around the NetApp system.
- Confirm all power connections are secured adequately.
- Confirm the racks are grounded (if not in NetApp cabinets).
- Confirm there is sufficient power distribution to NetApp controllers and disk shelves.
- Confirm power cables are properly arranged in the cabinet.
- Confirm that LEDs and LCDs are displaying the correct information.
- Confirm that cables from NetApp controllers to disk shelves and among disk shelves are not crimped or stretched (fiber cable service loops should be bigger than your fist).
- Confirm that fiber cables laid between cabinets are properly connected and are not prone to physical damage.
- Confirm disk shelf IDs are set correctly.
- Confirm that Fibre Channel 2 Gb/4 Gb loop speeds are set correctly on DS14 shelves and proper LC-LC cables are used.
- Confirm that Ethernet cables are arranged and labeled properly.
- Confirm all fiber cables are arranged and labeled properly.
- Confirm the cluster interconnect cables are connected (for HA pairs).
- Confirm there is sufficient space behind the cabinets to perform hardware maintenance.
Power on and diagnostics (status):
- Power up the disk shelves to ensure that the disks spin up and are initialized properly.
- Connect the console to the serial port cable and establish a console connection using a terminal emulator such as Tera Term, PuTTY, or HyperTerminal. Note: Log all console output to a text file.
- Power on the controllers.
- Boot the controller and press Ctrl+C at the second prompt for the Special Boot Menu options. Go to Maintenance mode by selecting option 5.
- Check the onboard Fibre Channel port status: *> fcadmin config
  Change the port mode from target to initiator if necessary (for SAN requirements).
- Verify the cable connections to all shelves: *> fcadmin device_map (*> sasadmin shelf for SAS shelves)
- Verify disk ownership assignments: *> disk show -a
  Assign disks to each node using the disk assign command if necessary.
- Verify the Multipath High Availability (MPHA) cabling. Each disk must have an A and B path: *> storage show disk -p
- Verify the system has one root aggregate assigned: *> aggr status
• 23. Power on and diagnostics (status), continued:
- Follow these steps for both cluster nodes; halt and then reboot each system into Data ONTAP:
  *> halt
  LOADER> boot_ontap
- Verify power and cooling are at acceptable levels: fas1> environment status
- Verify expansion cards are installed in the correct slots: fas1> sysconfig -c
- Verify all local and partner shelves are visible to the system: fas1> fcadmin device_map
- Verify that all disks are owned: fas1> disk show -n
- Use the WireGauge tool to verify that all the shelves are cabled correctly.
Installation and configuration (status):
- Confirm the correct version of Data ONTAP software, the Disk Qualification Package, and the disk, shelf, motherboard, and RLM/BMC firmware are installed on each controller: fas1> version -b ; fas1> sysconfig -a
- Confirm ALL controllers are named as per the customer naming standards.
- Confirm the root volume is sufficiently sized (250 GB minimum): fas1> vol size <root volume name>
- Confirm all the licenses are installed: fas1> license
- Check the /etc/rc and /etc/hosts files: fas1> rdfile /etc/rc ; fas1> rdfile /etc/hosts
- Verify all configured Ethernet network interfaces (individual and ifgrp) are configured correctly as per the customer requirements: IP address, media type, flow control, and speed.
- Confirm any interfaces not required to perform host name resolution are configured with the "wins" option.
- For clustered systems, verify they have partner interfaces for failover.
- Where necessary, confirm the network switches are configured to support dynamic or static multi-mode ifgrps (LACP or EtherChannel) as per customer requirement.
- Has the customer accessed the system console using the RLM/SP/BMC?
- Verify network connectivity and DNS resolution are configured properly: fas1> ping <hostname of mail server>
- Verify configured IFGRPs function properly by disconnecting one or more cables:
  fas1> ifgrp status
  Pull cables
  fas1> ping <hostname of mail server>
  fas1> ifgrp status
  Reinsert cables
- Confirm each controller is configured to synchronize time with a centralized source: fas1> options timed ; fas1> timezone ; fas1> date
- Confirm that AutoSupport is configured and functioning correctly: fas1> options autosupport.doit "Test"
- Confirm the default 'home' share is stopped on each controller (and vFiler).
- If necessary, confirm that Telnet and RSH are disabled and SSH is enabled.
- If required, confirm SNMP is configured on all controllers to the appropriate traphost.
• 24. Installation and configuration (status), continued:
- Download the documentation pack and upload it to the controller(s).
CIFS configuration (status):
- If necessary, run through CIFS setup and join the controllers to the customer's Active Directory (requires an AD account with suitable permissions).
- Confirm the NetApp controller's local administrator account was created while configuring the CIFS service (and the password is set appropriately).
- Confirm the permissions to the root volume (c$) and /etc folder (etc$) are configured appropriately (that is, NOT Everyone Full Control).
- Confirm that the appropriate Windows Domain Administrators group(s) is/are members of the NetApp controller's local administrators group.
- Create a share. Have the customer map the share to a host and write data to it.
- Create a Snapshot and confirm that Snapshot visibility is configured appropriately (for example, hidden to regular CIFS clients).
- Confirm that qtrees storing CIFS data have the appropriate security style specified: fas1> qtree status
- Confirm that qtrees storing CIFS data have the appropriate 'oplocks' setting.
NFS configuration (status):
- Create a qtree and confirm the appropriate security style is specified: fas1> qtree create <path> ; fas1> qtree status
- Export the qtree. Check the /etc/exports file and update it with the new mount entries and appropriate permissions.
- Have the customer mount the qtree from a host and write data to it.
- Take a Snapshot and confirm that Snapshot visibility is configured appropriately (for example, hidden to regular clients).
iSCSI configuration (status):
- Make sure the iSCSI service is started.
- Verify that an iSCSI host attach or support kit has been installed on the host.
- If appropriate, verify SnapDrive has been installed on the host.
- Create a qtree, igroup, and LUN on the system (using SnapDrive if necessary).
- Have the customer establish an iSCSI session from the host.
- Create a file system on the LUN, write some data to it, and confirm the data is on the LUN.
- Reboot the host and confirm that the LUN is still attached.
FCP configuration (status):
- Make sure the FCP service is started: fas1> fcp status
- Verify an FCP host attach or support kit has been installed on the host.
- If appropriate, verify that SnapDrive has been installed on the host.
- Create a qtree, igroup, and LUN on the system (using SnapDrive if necessary).
- Have the customer establish an FCP session from the host.
- Have the customer create a file system on the LUN and write some data to it.
• 25. FCP configuration (status), continued:
- Have the customer reboot the host and confirm the LUN is still attached.
Verification checklist (status):
- Where necessary, make sure the CLUSTER license is enabled.
- Verify the storage failover options on both systems in the HA pair are identical.
- Temporarily disable AutoSupport: fas1> options autosupport.enable off
- Test manual cluster failover (in both directions) and ensure success; rectify any errors and prove network connectivity continues to function correctly during failover:
  fas1> cf enable
  fas1> cf takeover
  fas1> partner
  fas2/fas1*> ifconfig -a
  fas2/fas1*> ifgrp status
  fas2/fas1*> partner
  fas1> cf giveback
- Test uncontrolled storage failover (in both directions) by disconnecting one controller from power. Rectify any errors.
- Test component failure of a PSU (check the status of the LEDs and console).
- Test component failure of a LAN cable (interface group test); include ifgrp favor.
- Test component failure of a fibre cable to a disk shelf (path test). For Multipath HA cabling, ensure all disks have an A and B channel: fas1> storage show disk -p
- Run the WireGauge tool to ensure the shelf cabling is correct.
- When installing a new system into a new NetApp cabinet, switch off one cabinet PDU and make sure all controllers and shelves remain powered on. Check the status of the LEDs and console.
- Insert an entry into the system log indicating installation is complete: fas1> logger * * * System Install complete <installer name> <date> * * *
- Back up the system configuration: fas1> config dump <date>.cfg
- Re-enable AutoSupport: fas1> options autosupport.enable on
Post-installation checklist (status):
- Give new customers a brief tour of FilerView or System Manager to explain the basic functions of managing their new system.
- Log onto the NOW website and give the customer a brief tour of the site. Show them how to access documentation, download software and firmware, search the Knowledge Base, and verify their RMA information.
- Discuss training available through NetApp University with new customers.
- Since they are the basis for most Data ONTAP functionality, have the customer explain how Snapshots work. Correct any misconceptions.
- Create and send a Trip Report within 24 hours to the customer, partner sales team, and NetApp sales team.
- When all tasks are completed, have the customer sign a Certificate of Completion.
  • 26. 4 Data ONTAP Cluster-Mode configuration details Please work with your professional services representative to complete this worksheet prior to the installation date. The requested information enables us to configure your equipment quickly and efficiently. Depending on the desired configuration, some fields may not be applicable. Note: This worksheet does not replace the requirement for reading and understanding the appropriate Data ONTAP manuals that describe the operations of Data ONTAP in Cluster-Mode. Data ONTAP manuals can be found at the NetApp Support site under documentation. Customer checklist of site preparation requirements (check all that apply): Adequate rack space for the NetApp system and disk shelves has been provided. The power requirements for the NetApp system and disk shelves have been satisfied. The network patch cabling and switch port configuration is complete. Company Name: NetApp Sales Order #: Data ONTAP® Version: 4.1 Cluster information It is assumed that the cluster will contain four nodes. If there are more than four nodes, replicate the appropriate section to add additional node information. Starting from Data ONTAP 8.1 the 'cluster create' and 'cluster join' commands have built-in wizards. The wizard generates hostnames, IP addresses for the cluster LIF and subnet masks for the cluster LIF. It is recommended to use the cluster setup wizard while creating a new cluster or attempting to join an existing cluster. The wizard has the following rules: The names for the nodes in the cluster are derived from the name of the cluster. If the cluster is named clust1, the nodes will be names as clust-01, clust-02 and so on. The node name can be changed later with the cluster::system>node>modify command. The cluster LIF will be assigned IP address in the 169.254.0.0 range with a Class B subnet (255.255.0.0) if the default is taken. The initial cluster creating and configuration will be performed on the first node that is booted. The initial setup script will ask if the operator wants to create a cluster or join a cluster. The first node will be ―create‖ and subsequent nodes will be ―join‖. 4.1.1 Cluster The cluster base aggregate will contain the root volume for the cluster Vserver. Cluster name 4.1.2 Cluster Base Aggregate Licensing A base license is required, but additional features also need licensing. License Values © Copyright 2012 NetApp, Inc. All rights reserved. www.netapp.com
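The cluster setup wizard described above runs on the console of the first node; each additional node then joins the cluster it created. A heavily abridged and paraphrased sketch of the interaction is shown below; the exact prompts differ between Data ONTAP releases, and the cluster name and base license key are supplied from this worksheet.
::> cluster setup
Do you want to create a new cluster or join an existing cluster? {create, join}: create
(The wizard then prompts for the cluster name, the cluster base license key, and the cluster interconnect ports, and assigns 169.254.x.x addresses to the cluster LIFs automatically.)
On each remaining node:
::> cluster setup
Do you want to create a new cluster or join an existing cluster? {create, join}: join
After all nodes have joined, confirm that every node is healthy and eligible:
cluster1::> cluster show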
  • 27. 4.1.3 Admin Vserver The Cluster Administration Vserver is used to manage the cluster activities. It is different from the node Vservers and is used by System Manager to access the cluster. Type of information Value Cluster administrator password The password for the ‗admin‘ account that the cluster requires before granting cluster administrator access at the console or through a secure protocol. The default rules for passwords are as follows: A password must be at least eight characters long. A password must contain at least one letter and one number. Cluster management LIF IP address A unique IP address for the cluster management LIF. The cluster administrator uses this address to access the cluster admin Vserver and manage the cluster. Typically, this address should be on the data network. Cluster management LIF netmask The subnet mask that defines the range of valid IP addresses on the cluster management network. Cluster management LIF default gateway The IP address for the router on the cluster management network. DNS domain name The name of your network's DNS domain. The domain name cannot contain an underscore (_) and must consist of alphanumeric characters. To enter multiple DNS domain names, separate each name with either a comma or a space. Name server IP addresses The IP addresses of the DNS name servers. Separate each address with either a comma or a space. 4.1.4 Time synchronization Time synchronization details Values Time services protocol (NTP) Time Servers (up to 3 internal or external hostnames or IP addresses) Max time skew (<5 minutes for CIFS) 4.1.5 Time zone What time zone should the systems set their clocks to (for example, US/Pacific)? Time Zone 4.2 Location Node information Individual controllers are called nodes. Each node has a unique name. Unlike the cluster name, the node name can be changed after it is initially defined. System information Node 1 Node 2 Node 3 Serial number Node name 4.2.1 Physical port identification Each port services a specific type of function or role. These roles are: © Copyright 2012 NetApp, Inc. All rights reserved. www.netapp.com Node 4
  • 28. Node Management Data Intercluster Cluster Node Management ports are required to maintain connection between the node to site services such as NTP and AutoSupport. Data ports are used to transfer data or communicate between the cluster and the applications. Intercluster LIFs are used to setup peer relations between clusters for replicating data between clusters. Cluster ports are specifically used to transfer data between nodes within a cluster. Note: Due to BURT 322675, NetApp recommends setting up an interface group for the node management LIF on each node of the cluster. The instructions below cover scenarios that have or do not have a fix for this BURT. Follow the section that is relevant to your case. Some of these instructions might diverge from the guidelines on the NetApp Support site. Check for updated versions of this document for latest information. For versions of Data ONTAP that do not have a fix for BURT 322675, create a single-mode interface group of the following ports. Use this interface group as the port for the node management LIF. The interface group should be created before using the ‗cluster setup‘ wizard on the node. For versions of Data ONTAP that have a fix for BURT 322675: System model Port grouping FAS3040 & FAS3070 e0a and e0c V3040 & V3070 e0a and e0c FAS3140, FAS3160 & FAS3170 e0a and e0b V3140, V3160 & V3170 e0a and e0b FAS3210, FAS3240 & FAS3270 e0a and e0b V3210, V3240 & V3270 e0a and e0b FAS6030, FAS6040, FAS6070 & FAS6080 e0a and e0c V6030, V6040, V6070 & V6080 e0a and e0c FAS6210, FAS6240 & FAS6280 e0a and e0b V6210, V6240 & V6280 e0a and e0b Some controllers have an e0M interface for environments with a subnet dedicated to managing servers. Include the e0M settings if you have a management subnet. Note: For systems without an e0P port, leave one network port available for ACP connections to SAS disk shelves. Note: The following table is used to define port roles. If BURT 322675 is not installed, the IFGRP column should be used and the associated ports noted. If BURT 322675 is installed, omit the IFGRP column. Node Name 4.2.2 IFGRP Ports MTU Port Role Node management LIF Each node has a management port that is used to communicate with it. Node Name Port or IFGRP LIF Name IP Address Netmask © Copyright 2012 NetApp, Inc. All rights reserved. www.netapp.com Gateway
  • 29. Node Name 4.3 Port or IFGRP LIF Name IP Address Netmask Gateway Cluster network information Starting from Data ONTAP 8.1 the 'cluster create' and 'cluster join' commands have built-in wizards to generate hostnames, IP addresses for the cluster LIF, and subnet masks for the cluster LIF. NetApp recommends using the cluster setup wizard whenever you create a new cluster or attempt to join an existing cluster. The wizard has the following rules: The names for the nodes in the cluster are derived from the name of the cluster. If the cluster is named cmode, the nodes will be names as cmode-01, cmode-02 and so on The cluster LIF is assigned IP address in the 169.254.0.0 range with a Class B subnet (255.255.0.0) Once the cluster has been defined and the nodes are joined to the cluster, other elements can be created. These elements can be created using System Manager, Element Manager, or CLI. 4.3.1 Interface groups (IFGRP) Interface groups bond multiple network ports together for increased bandwidth and/or fault tolerance. IFGRP name 4.3.2 Node Distribution function Mode Ports Configure Virtual LANs (VLANs) (Optional) VLANs are used to segment network domains. The VLAN has a specific name that is a combination of the associated network port and the switch VLAN ID. VLAN name 4.3.3 Node Associated Network Port Switch VLAN ID Logical Interfaces (LIFs) Logical Interfaces are the point at which the customer interfaces with the cluster. LIF name Home node Home port Netmask Routing group © Copyright 2012 NetApp, Inc. All rights reserved. www.netapp.com Failover group
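A minimal command-line sketch of the node and cluster network configuration collected in sections 4.1.3 through 4.3.3 is shown below. All node names, port names, VLAN IDs, and addresses are placeholders, and some parameters vary between Data ONTAP releases, so verify the syntax against the documentation for your release. Note that on these releases the Vserver for a node-management LIF is the node itself, and the Vserver for the cluster-management LIF is the cluster (admin) Vserver.
Create the single-mode interface group recommended for node management and add its ports:
cluster1::> network port ifgrp create -node cluster1-01 -ifgrp a0a -distr-func ip -mode singlemode
cluster1::> network port ifgrp add-port -node cluster1-01 -ifgrp a0a -port e0a
cluster1::> network port ifgrp add-port -node cluster1-01 -ifgrp a0a -port e0b
Create the node-management and cluster-management LIFs:
cluster1::> network interface create -vserver cluster1-01 -lif mgmt1 -role node-mgmt -home-node cluster1-01 -home-port a0a -address 10.108.193.21 -netmask 255.255.255.0
cluster1::> network interface create -vserver cluster1 -lif cluster_mgmt -role cluster-mgmt -home-node cluster1-01 -home-port e0c -address 10.108.193.50 -netmask 255.255.255.0
Create a tagged VLAN on a data port and point the cluster at a time server:
cluster1::> network port vlan create -node cluster1-01 -vlan-name e0d-867
cluster1::> system services ntp server create -server time.providence.org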
4.4 Intercluster network information
Intercluster ports are used for cross-cluster communication. An intercluster port should be routable to the following:
- Another intercluster port
- A data port of another cluster

Node name | Port | LIF name | IP address | Netmask | Gateway

4.5 Vserver information
Application access to data residing in the cluster must go through a Vserver. Vservers can be used to support single or multiple protocols, user groups, or any other delineation the customer chooses. Additionally, Vservers can restrict data allocation to specific aggregates. To create a Vserver, you can use any of the available administrative interfaces: System Manager, Element Manager, or the CLI. The Vserver Setup wizard has the following sub-wizards, which you can run after you create a Vserver:
- Network setup
- Storage setup
- Services setup
- Data access protocol setup
Use the following section as a guide to create Vservers. Replicate this section as many times as required.

4.5.1 Creating Vserver
Type of information | Value
- Vserver name: The name of a Vserver can contain alphanumeric characters and the following special characters: ".", "-", and "_". However, the name of a Vserver must not start with a number or a special character.
- Protocols: Protocols that you want to configure or allow on the Vserver.
- Name services: Name services that you want to configure on the Vserver.
- Aggregate name: Aggregate on which you want to create the Vserver's root volume. The default aggregate name is used if you do not specify one.
- Language setting: Language you want the volumes to use.

4.5.2 Creating Volumes on the Vserver
Volume name | Aggregate name | Volume size | Junction path (NAS only)
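A minimal command-line sketch of creating a Vserver and its first volume from the worksheets above; the names vs1, vs1_root, aggr1, and vol1 and the size are hypothetical placeholders, and the Vserver Setup wizard or System Manager can be used instead:

cluster::>vserver create -vserver vs1 -rootvolume vs1_root -aggregate aggr1 -ns-switch file -rootvolume-security-style unix
cluster::>volume create -vserver vs1 -volume vol1 -aggregate aggr1 -size 500g -junction-path /vol1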
4.5.3 IP Network Interface on the Vserver
End-user applications connect to the data in the cluster only through interfaces defined on Vservers. The following table models the first four LIFs. Replicate the 'Interface' columns, or the entire table, if more interfaces are required.

Type of Information | Interface 1 | Interface 2 | Interface 3 | Interface 4
- LIF name: The default LIF name is used if you do not specify one.
- IP address
- Subnet mask
- Home node: The node on which you want to create the logical interface. The default home node is used if you do not specify one.
- Home port: The port on which you want to create the logical interface. The default home port is used if you do not specify one.
- Routing group
- Protocols: Protocols that can use the LIF.
- Failover group
- DNS zone

4.5.4 FCP Network Interface on the Vserver
Type of information | Value
- LIF name: The default LIF name is used if you do not specify one.
- Home node: The node on which you want to create the logical interface. The default home node is used if you do not specify one.
- Home port: The port on which you want to create the logical interface. The default home port is used if you do not specify one.

4.5.5 LDAP services
Type of information | Value
- LDAP server IP address
- LDAP server port number: The default LDAP server port number is used if you do not specify one.
- LDAP server minimum bind authentication level
- Bind DN and password
- Base DN
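Referring back to the worksheets in 4.5.3 and 4.5.4, both data and FC LIFs are created with the network interface create command. The following is a minimal sketch with hypothetical names and addresses (vs1, node1, VLAN port a1a-100, and FC adapter 0c are placeholders):

cluster::>network interface create -vserver vs1 -lif vs1_nas1 -role data -data-protocol nfs,cifs -home-node node1 -home-port a1a-100 -address 192.168.100.51 -netmask 255.255.255.0
cluster::>network interface create -vserver vs1 -lif vs1_fcp1 -role data -data-protocol fcp -home-node node1 -home-port 0c

Note that FCP LIFs take no IP address or netmask; they are bound to an FC target port on the home node.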
4.5.6 CIFS protocol
Type of information | Value
- Domain name
- CIFS share name: The default CIFS share name is used if you do not specify one. Note: CIFS share names must not contain Unicode or other unsupported characters. You can use alphanumeric characters and the following special characters: ".", "!", "@", "#", "$", "%", "&", "(", ")", ",", "_", "'", "{", "}", "~", and "-".
- CIFS share path: The default CIFS share path is used if you do not specify one.
- CIFS access control list: The default CIFS access control list is used if you do not specify one.

4.5.7 iSCSI protocol
Type of information | Value
- igroup name: The default igroup name is used if you do not specify one.
- Names of the initiators
- Operating system of the initiators
- LUN names: The default LUN name is used if you do not specify one.
- Volume name: The volume that the LUN will reside on.
- LUN sizes

4.5.8 FCP protocol
Type of information | Value
- igroup name: The default igroup name is used if you do not specify one.
- WWPN: World wide port name (WWPN) of the initiators.
- Operating system of the initiators
- LUN names: The default LUN name is used if you do not specify one.
- Volume name: The volume that the LUN will reside on.
- LUN sizes
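As an illustration of how the protocol worksheets in 4.5.6 through 4.5.8 translate to commands, the following is a minimal sketch only; vs1, VS1, share1, vol1, lun1, ig_host1, example.com, and the initiator name are hypothetical placeholders:

cluster::>vserver cifs create -vserver vs1 -cifs-server VS1 -domain example.com
cluster::>vserver cifs share create -vserver vs1 -share-name share1 -path /vol1
cluster::>vserver iscsi create -vserver vs1
cluster::>lun create -vserver vs1 -path /vol/vol1/lun1 -size 100g -ostype windows_2008
cluster::>lun igroup create -vserver vs1 -igroup ig_host1 -protocol iscsi -ostype windows -initiator iqn.1991-05.com.microsoft:host1.example.com
cluster::>lun map -vserver vs1 -path /vol/vol1/lun1 -igroup ig_host1

For FCP, the service is started with vserver fcp create, and the igroup is created with -protocol fcp and the initiators' WWPNs from the 4.5.8 worksheet.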
4.6 Support information
The following sections describe the support features.

4.6.1 Remote Management Settings (RLM/BMC/SP)
You can access the cluster's system console remotely by using the system console redirection feature provided by the remote management device of a node. Depending on your storage system model, the remote management device can be the Service Processor (SP), the Remote LAN Module (RLM), or the Baseboard Management Controller (BMC). NetApp recommends configuring these interfaces for easier, more secure management and troubleshooting.

Node name | IP address | Netmask | Default gateway | Mail server hostname | Mail server IP address

4.6.2 AutoSupport settings
AutoSupport is a 'phone home' function that notifies you and NetApp of any hardware problems, so that replacement hardware can be dispatched automatically to resolve the issue. (The system must remain on a support contract; responsiveness depends on the level of the service contract, ranging from 2 hours to Next Business Day.)
- Enable AutoSupport? If not, provide justification.
- SMTP server name or IP
- AutoSupport transport: one of HTTPS (default), HTTP, or SMTP
- AutoSupport from e-mail address
- AutoSupport to e-mail address(es)

4.6.3 Customer/RMA details
Verify this information by logging into the NetApp support site: http://now.netapp.com. This information is required to ensure that Technical Support personnel can reach you and that replacement parts are sent to the correct address.

Customer/RMA details | Primary contact | Secondary contact
- Contact name
- Contact address
- Contact phone
- Contact e-mail address
- RMA address
- RMA attention to name
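A minimal sketch of how the AutoSupport worksheet in 4.6.2 might be applied from the CLI; the mail host and e-mail addresses are hypothetical placeholders:

cluster::>system node autosupport modify -node * -state enable -transport https -mail-hosts smtp.example.com -from storage-admin@example.com -to storage-team@example.com
cluster::>system node autosupport invoke -node node1 -type test

The invoke command sends a test message so that delivery can be confirmed end to end before handover.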
A. Data ONTAP Cluster-Mode installation and verification checklists
The installer will perform the following checks to ensure that your new systems are configured correctly and are ready to turn over to you.

Physical installation (record a Status for each item):
- Check and verify that all ordered components were delivered to the customer site.
- Confirm the NetApp controllers are properly installed in the cabinets.
- Confirm there is sufficient airflow and cooling in and around the NetApp system.
- Confirm all power connections are secured adequately.
- Confirm the racks are grounded (if not in NetApp cabinets).
- Confirm there is sufficient power distribution to the NetApp controllers and disk shelves.
- Confirm power cables are properly arranged in the cabinet.
- Confirm that LEDs and LCDs are displaying the correct information.
- Confirm that cables from NetApp controllers to disk shelves, and among disk shelves, are not crimped or stretched (fiber cable service loops should be larger than your fist).
- Confirm that fiber cables laid between cabinets are properly connected and are not prone to physical damage.
- Confirm disk shelf IDs are set correctly.
- Confirm that Fibre Channel 2 Gb/4 Gb loop speeds are set correctly on DS14 shelves and that the proper LC-LC cables are used.
- Confirm that Ethernet cables are arranged and labeled properly.
- Confirm all fiber cables are arranged and labeled properly.
- Confirm the cluster interconnect cables are connected (for HA pairs).
- Confirm there is sufficient space behind the cabinets to perform hardware maintenance.
- Confirm that the Cisco Nexus cluster interconnect switches are properly placed in the cabinet.
- Confirm that the Cisco IP switches are properly placed in the cabinet.
- Confirm that the Cisco FCP switches are properly placed in the cabinet.
- Confirm that the latest "Reference Configuration File" for the Cisco Nexus switches has been installed.
- Confirm that any required VLANs have been defined on the appropriate switches.
- Confirm that the Ethernet cables are properly connected to the Cisco IP switches.
- Confirm that the FCP cables are properly connected to the Cisco fabric switches.

Power On and Perform Cluster Creation, Node and Vserver configuration (record a Status for each item):
- Power up the disk shelves to ensure that the disks spin up and are initialized properly.
- Connect the console to the serial port cable and establish a console connection using a terminal emulator such as Tera Term, PuTTY, or HyperTerminal. Note: Log all console output to a text file.
- Power on the controllers.
- On the first controller console, reply to the initial Cluster Setup prompt with "create" to initialize the cluster and the first node.
- On the next controller console, reply to the initial Cluster Setup prompt with "join" to initialize the second node and join it to the cluster.
- On each subsequent controller, perform the same task as on the second controller to join them as nodes in the cluster.
- Install System Manager 2.0 on a Windows or Linux system.
- Use System Manager 2.0 to install the remaining licenses on the cluster. Note: If any of the nodes are V-Series, the V-Series license needs to be added at the node level for each node that is a V-Series controller. You have 72 hours from completion of the Cluster Setup script to install the license on the local nodes:
  cluster::>run -node node1
  node1>license add <V-Series license>
  node1>exit
- Use System Manager 2.0 to create the first Vservers.
- Use the WireGauge tool to verify that all the shelves are cabled correctly and the switches are properly connected.
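After the create and join steps, cluster membership and node health can be confirmed from any node; for example:

cluster::>cluster show
cluster::>system node show

Both commands should list every node in the cluster as healthy and eligible before configuration continues.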
Miscellaneous configuration (record a Status for each item):
- Where necessary, confirm the network switches are configured to support dynamic or static multimode IFGRPs (LACP or EtherChannel) as per the customer requirement.
- Has the customer accessed the system console using the RLM/BMC/SP?
- Verify that network connectivity and DNS resolution are configured properly:
  cluster::>network ping -node <node name> -destination <hostname of DNS server>
- Verify that configured IFGRPs with more than one port function properly by disconnecting one or more cables.
- Confirm each node's date and timezone are set correctly:
  cluster::>system node date show
  cluster::>timezone
- Display whether NTP is used in the cluster:
  cluster::>system services ntp config show
  cluster::>system services ntp server show
- Confirm that AutoSupport is configured and functioning correctly:
  cluster::>system node autosupport show
- Confirm that telnet and RSH are disabled and SSH is enabled.
- If required, confirm SNMP is configured on all controllers with the appropriate traphost.
- Download the documentation pack and provide it to the customer.

CIFS configuration (per Vserver servicing CIFS; record a Status for each item):
- Check the export policy rules to ensure that the CIFS access protocol will allow access (a sketch of a permissive rule follows this checklist):
  cluster::>vserver export-policy rule show
- If necessary, run through CIFS setup and join the controllers to the customer's Active Directory (requires an AD account with suitable permissions).
- Confirm the NetApp controller's local administrator account was created while configuring the CIFS service (and the password is set appropriately).
- Confirm the permissions on the root volume (c$) and /etc folder (etc$) are configured appropriately (that is, NOT Everyone Full Control).
- Confirm that the appropriate Windows Domain Administrators group(s) are members of the cluster's local administrators group.
- Create a share. Have the customer map the share to a host and write data to it.
- Create a Snapshot and confirm that Snapshot visibility is configured appropriately (for example, hidden from regular CIFS clients).
- Confirm that qtrees storing CIFS data have the appropriate security style specified:
  cluster::>volume qtree show -vserver <vserver> -volume <volume name> -qtree <qtree name>
- Confirm that qtrees storing CIFS data have the appropriate 'oplocks' setting.
- Take a Snapshot and confirm that Snapshot visibility is configured appropriately (for example, hidden from regular clients).
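Where a Vserver's export policy has no rules, clients are denied access, including over CIFS. The following is a minimal sketch of adding a permissive rule for testing; the Vserver vs1, policy name default, and client subnet are hypothetical placeholders, and the clientmatch and access rules should be tightened to the customer's requirements:

cluster::>vserver export-policy rule create -vserver vs1 -policyname default -clientmatch 192.168.100.0/24 -rorule any -rwrule any -protocol cifs,nfs
cluster::>vserver export-policy rule show -vserver vs1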
NFS configuration (per Vserver servicing NFS; record a Status for each item):
- Create a qtree and confirm the appropriate security style is specified:
  cluster::>volume qtree create -vserver <vserver> -volume <volume name> -qtree <qtree name> -security-style {unix|ntfs|mixed}
  cluster::>volume qtree show -vserver <vserver> -volume <volume name> -qtree <qtree name>
- Check the export policy rules to ensure that the NFS access protocol will allow access:
  cluster::>vserver export-policy rule show
- Have the customer mount the qtree from a host and write data to it.
- Take a Snapshot and confirm that Snapshot visibility is configured appropriately (for example, hidden from regular clients).

iSCSI configuration (per Vserver servicing iSCSI; record a Status for each item):
- Make sure the iSCSI service is started.
- Verify that an iSCSI host attach or support kit has been installed on the host.
- If appropriate, verify SnapDrive has been installed on the host.
- Create a qtree, igroup, and LUN on the system (using SnapDrive if necessary).
- Have the customer establish an iSCSI session from the host.
- Create a file system on the LUN, write some data to it, and confirm the data is on the LUN.
- Reboot the host and confirm that the LUN is still attached.

FCP configuration (per Vserver servicing FCP; record a Status for each item):
- Make sure the FCP service is started.
- Verify that an FCP host attach or support kit has been installed on the host.
- If appropriate, verify that SnapDrive has been installed on the host.
- Create a qtree, igroup, and LUN on the system (using SnapDrive if necessary).
- Have the customer establish an FCP session from the host.
- Have the customer create a file system on the LUN and write some data to it.
- Have the customer reboot the host and confirm the LUN is still attached.

Verification checklist (record a Status for each item):
- Make sure the CLUSTER license is enabled where necessary.
- Verify that the cluster options on all nodes in the cluster are identical.
- Temporarily disable AutoSupport on the nodes of the cluster:
  cluster::>system node autosupport modify -node <node name> -state disable
- Test manual node takeover (in both directions) and ensure success; rectify any errors and verify that network connectivity continues to function correctly during failover (verification commands are sketched after this checklist):
  cluster::>storage failover takeover {-ofnode <node> | -bynode <node>}
  cluster::>storage failover show-giveback
  cluster::>storage failover giveback {-ofnode <node> | -fromnode <node>}
- Test uncontrolled cluster failover (in both directions) by disconnecting one controller from power. Rectify any errors.
- Repeat the above test for all HA pairs in the cluster.
- Test component failure of a PSU (check the status of LEDs and console).
- Test component failure of a LAN cable.
- Run the WireGauge tool to ensure the shelf cabling is correct.
- When installing a new system into a new NetApp cabinet, switch off one cabinet PDU and make sure all controllers and shelves remain powered on. Check the status of LEDs and console.
- Re-enable AutoSupport:
  cluster::>system node autosupport modify -node <node name> -state enable
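After the takeover and giveback tests, the following commands give a quick health check. This is a sketch only, and the -is-home filter assumes the field is available in the installed Data ONTAP release:

cluster::>storage failover show
cluster::>cluster show
cluster::>network interface show -is-home false

The last command should return no entries once all LIFs have been reverted to their home ports after giveback.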
Post installation checklist (record a Status for each item):
- Give new customers a brief tour of System Manager and Element Manager to explain the basic functions of managing their new cluster.
- Log onto the NOW website and give the customer a brief tour of the site. Show them how to access documentation, download software and firmware, search the Knowledge Base, and verify their RMA information.
- Discuss training available through NetApp University with new customers.
- Since Snapshots are the basis for most Data ONTAP functionality, have the customer explain how they work. Correct any misconceptions.
- Create and send a Trip Report within 24 hours to the customer, the partner sales team, and the NetApp sales team.
- When all tasks are completed, have the customer sign a Certificate of Completion.

A.1 Definitions
This section contains the glossary of terms used throughout this document.

Term | Definition
CIFS | Common Internet File System
DNS | Domain Name System
DR | Disaster Recovery
DRC | Disaster Recovery Center (data center)
FAS | Fabric Attached Storage
FC | Fibre Channel
FlexVol | Flexible volume
IOPS | Input/Output Operations per Second
iSCSI | Internet Small Computer Systems Interface
MAN | Metropolitan Area Network
LUN | Logical Unit Number
NAS | Network Attached Storage
NFS | Network File System
NIS | Network Information Service
NTP | Network Time Protocol
PDU | Power Distribution Unit
PDC | Primary Data Center
RPM | Revolutions Per Minute
RAID | Redundant Array of Independent Disks
SAN | Storage Area Network
SNMP | Simple Network Management Protocol
SATA | Serial Advanced Technology Attachment
UPS | Uninterruptible Power Supply
VIF | Virtual Interface
VLAN | Virtual Local Area Network
WINS | Windows Internet Naming Service