Netfinity Tape Solutions

Wim Feyants, Steve Russell




                      International Technical Support Organization

                                www.redbooks.ibm.com




                                                                     SG24-5218-01
March 2000
Take Note!
  Before using this information and the product it supports, be sure to read the general information in Appendix E,
  “Special notices” on page 289.




Second Edition (March 2000)

This redbook applies to IBM’s current line of tape products for use with Netfinity servers. At the time of writing, these
were:

IBM 40/80 GB DLT tape drive
IBM 35/70 GB DLT tape drive
IBM 20/40 GB DLT tape drive
IBM 20/40 GB 8 mm tape drive
IBM 20/40 GB DDS-4 4 mm tape drive
IBM 12/24 GB DDS-3 4 mm tape drive
IBM 10/20 GB NS tape drive
IBM 490/980 GB DLT tape library
IBM 280/560 GB DLT tape autoloader
IBM 3447 DLT tape library
IBM 3449 8 mm tape library
IBM 3570 Magstar MP tape library
IBM 3575 Magstar MP tape library

Comments may be addressed to:
IBM Corporation, International Technical Support Organization
Dept. HZ8 Building 678
P.O. Box 12195
Research Triangle Park, NC 27709-2195

When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way
it believes appropriate without incurring any obligation to you.

© Copyright International Business Machines Corporation 1998, 2000. All rights reserved.
Note to U.S. Government Users - Documentation related to restricted rights - Use, duplication or disclosure is subject to restrictions
set forth in GSA ADP Schedule Contract with IBM Corp.
Contents

Preface . . . . . ix
The team that wrote this redbook . . . . . ix
Comments welcome . . . . . x

Chapter 1. Introduction . . . . . 1

Chapter 2. Strategy . . . . . 3
2.1 Why are backups necessary? . . . . . 3
2.2 Backup methodologies . . . . . 4
   2.2.1 When will a file be backed up? . . . . . 4
   2.2.2 Backup patterns . . . . . 5
2.3 System and storage topologies . . . . . 10
   2.3.1 Direct tape connection . . . . . 10
   2.3.2 Single server model . . . . . 10
   2.3.3 Two-tier model . . . . . 11
   2.3.4 Multi-tier model . . . . . 13
2.4 Storage area network implementations . . . . . 14
   2.4.1 Why use SAN for tape storage? . . . . . 15
   2.4.2 Fibre Channel attached tape storage . . . . . 17
   2.4.3 Tape pooling . . . . . 18
2.5 Performance considerations . . . . . 19
   2.5.1 Scheduling backups . . . . . 19
   2.5.2 Network bandwidth considerations . . . . . 20
   2.5.3 Compression . . . . . 21
   2.5.4 Hierarchical storage . . . . . 22
2.6 Database server backup . . . . . 23
2.7 Selecting a tape drive . . . . . 27
   2.7.1 Tape capacity . . . . . 27
   2.7.2 Single tape devices and libraries . . . . . 27
   2.7.3 Reliability . . . . . 28
2.8 Summary . . . . . 31

Chapter 3. Hardware . . . . . 33
3.1 Technology . . . . . 36
   3.1.1 Digital Linear Tape (DLT) . . . . . 36
   3.1.2 8 mm tape . . . . . 39
   3.1.3 4 mm Digital Audio Tape (DAT) . . . . . 40
   3.1.4 Travan Quarter-Inch Cartridge (QIC) . . . . . 42
   3.1.5 Magstar 3570 MP Fast Access Linear tape cartridge . . . . . 43
   3.1.6 Linear Tape Open (LTO) . . . . . 45
   3.1.7 Summary . . . . . 47
3.2 40/80 GB DLT tape drive . . . . . 47
   3.2.1 Installation . . . . . 49
3.3 35/70 GB DLT tape drive . . . . . 51
   3.3.1 Installation . . . . . 53
3.4 20/40 GB DLT tape drive . . . . . 54
   3.4.1 Installation . . . . . 56
3.5 20/40 GB 8 mm tape drive . . . . . 58
   3.5.1 Installation . . . . . 59
   3.5.2 Configuration . . . . . 61
3.6 20/40 GB DDS-4 4 mm tape drive . . . . . 62
   3.6.1 Installation . . . . . 63
3.7 12/24 GB DDS-3 4 mm tape drive . . . . . 64
   3.7.1 Installation . . . . . 65
3.8 10/20 GB NS tape drive . . . . . 66
   3.8.1 Installation . . . . . 67
3.9 490/980 GB DLT library . . . . . 69
   3.9.1 Operation . . . . . 71
   3.9.2 Installation . . . . . 71
   3.9.3 Configuration . . . . . 72
3.10 280/560 GB DLT autoloader . . . . . 74
   3.10.1 Operation . . . . . 75
   3.10.2 Installation . . . . . 75
   3.10.3 Configuration . . . . . 75
3.11 3447 DLT tape library . . . . . 77
   3.11.1 Operation . . . . . 79
   3.11.2 Installation . . . . . 79
   3.11.3 Configuration . . . . . 80
3.12 3449 8 mm tape library . . . . . 83
   3.12.1 Operation . . . . . 86
   3.12.2 Installation . . . . . 90
   3.12.3 Configuration . . . . . 90
3.13 3570 Magstar MP tape library . . . . . 93
   3.13.1 Configuration . . . . . 96
   3.13.2 SCSI configuration . . . . . 98
3.14 3575 Magstar MP tape library . . . . . 98
   3.14.1 Design highlights . . . . . 99
   3.14.2 The multi-path feature . . . . . 100
   3.14.3 Bulk I/O slots . . . . . 101
   3.14.4 High performance . . . . . 101
   3.14.5 High reliability . . . . . 102
   3.14.6 3575 models . . . . . 102
   3.14.7 Magstar MP tape drives . . . . . 103

Chapter 4. SAN equipment . . . . . 105
4.1 Netfinity Fibre Channel PCI adapter . . . . . 105
4.2 IBM SAN Fibre Channel switch . . . . . 105
4.3 IBM SAN Data Gateway Router . . . . . 107
4.4 Netfinity Fibre Channel hub . . . . . 109
4.5 Cabling . . . . . 111
4.6 Supported configurations . . . . . 112
   4.6.1 Fibre Channel attached tape storage . . . . . 113
   4.6.2 Netfinity server consolidation with tape pooling . . . . . 114
   4.6.3 Sample SAN configuration . . . . . 114

Chapter 5. Software . . . . . 123
5.1 Tivoli Storage Manager . . . . . 124
   5.1.1 Products and base components . . . . . 125
   5.1.2 Server data management . . . . . 127
   5.1.3 Automating client operations . . . . . 132
   5.1.4 Supported devices . . . . . 134
5.2 Tivoli Data Protection for Workgroups . . . . . 135
   5.2.1 Concepts . . . . . 135
   5.2.2 Components . . . . . 136
   5.2.3 Supported devices . . . . . 137
5.3 VERITAS NetBackup . . . . . 137
   5.3.1 Concepts . . . . . 138
   5.3.2 Supported devices . . . . . 140
5.4 Legato NetWorker . . . . . 141
   5.4.1 Concepts . . . . . 141
   5.4.2 Supported devices . . . . . 143
5.5 Computer Associates ARCserveIT for Windows NT . . . . . 143
   5.5.1 Concepts . . . . . 145
   5.5.2 Supported devices . . . . . 146
5.6 Computer Associates ARCserveIT for NetWare . . . . . 147
   5.6.1 Concepts . . . . . 148
   5.6.2 Supported devices . . . . . 149
5.7 VERITAS Backup Exec for Windows NT . . . . . 149
   5.7.1 Concepts . . . . . 151
   5.7.2 Supported devices . . . . . 155
5.8 VERITAS Backup Exec for Novell NetWare . . . . . 155
   5.8.1 Concepts . . . . . 156
   5.8.2 Job types . . . . . 157
   5.8.3 Supported devices . . . . . 158

Chapter 6. Installation and configuration . . . . . 159
6.1 Tivoli Storage Manager for Windows NT . . . . . 159
   6.1.1 Software installation . . . . . 159
   6.1.2 Configuration . . . . . 164
   6.1.3 Configuring the IBM tapes and libraries . . . . . 186
6.2 Tivoli Storage Manager Server V2.1 for OS/2 . . . . . 194
   6.2.1 Server configuration . . . . . 197
6.3 Tivoli Data Protection for Workgroups . . . . . 204
   6.3.1 Configuration and use . . . . . 206
   6.3.2 Configuring IBM tape devices . . . . . 210
6.4 Legato NetWorker . . . . . 210
   6.4.1 Configuration . . . . . 212
6.5 Computer Associates ARCserveIT for Windows NT . . . . . 217
   6.5.1 Preparing to install ARCserveIT . . . . . 217
   6.5.2 Installing ARCserveIT . . . . . 217
   6.5.3 Configuring ARCserveIT on Windows NT Server . . . . . 221
6.6 Computer Associates ARCserve Version 6.1 for NetWare . . . . . 224
   6.6.1 Installation . . . . . 224
   6.6.2 Configuration . . . . . 228
   6.6.3 Managing ARCserve for NetWare . . . . . 233
   6.6.4 The ARCserve changer option . . . . . 238
6.7 VERITAS Backup Exec for Windows NT . . . . . 242
   6.7.1 Software installation . . . . . 242
   6.7.2 Configuration . . . . . 244
   6.7.3 Configuring IBM tape drives . . . . . 252
6.8 VERITAS Backup Exec for Novell NetWare . . . . . 254
   6.8.1 Software installation . . . . . 254
   6.8.2 Software configuration . . . . . 260
6.9 Seagate Sytos Premium for OS/2 . . . . . 262
   6.9.1 Installing Sytos Premium Version 2.2 . . . . . 263

Appendix A. Sources of information . . . . . 267

Appendix B. Hardware part numbers . . . . . 269

Appendix C. Storage area networks and Fibre Channel . . . . . 275
C.1 Layers . . . . . 275
   C.1.1 Lower layers . . . . . 275
   C.1.2 Upper layers . . . . . 275
C.2 Topologies . . . . . 276
C.3 Classes of Service . . . . . 276
C.4 SAN components . . . . . 277
   C.4.1 SAN servers . . . . . 277
   C.4.2 SAN storage . . . . . 277
C.5 SAN interconnects . . . . . 277
   C.5.1 Cables and connectors . . . . . 278
   C.5.2 Gigabit link model (GLM) . . . . . 278
   C.5.3 Gigabit interface converters (GBIC) . . . . . 278
   C.5.4 Media interface adapters (MIA) . . . . . 278
   C.5.5 Adapters . . . . . 279
   C.5.6 Extenders . . . . . 279
   C.5.7 Multiplexors . . . . . 279
   C.5.8 Hubs . . . . . 279
   C.5.9 Routers . . . . . 279
   C.5.10 Bridges . . . . . 280
   C.5.11 Gateways . . . . . 280
   C.5.12 Switches . . . . . 280
   C.5.13 Directors . . . . . 280

Appendix D. TSM element addresses and worksheets . . . . . 283
D.1 Device names . . . . . 283
D.2 Single tape devices . . . . . 283
D.3 Tape libraries . . . . . 284
   D.3.1 IBM 3502-108 . . . . . 284
   D.3.2 IBM 3502-x14 . . . . . 284
   D.3.3 IBM 3447 . . . . . 285
   D.3.4 IBM 3449 . . . . . 285
   D.3.5 IBM 3570 C2x . . . . . 286
   D.3.6 IBM 3575 L06 . . . . . 286
   D.3.7 IBM 3575 L12 . . . . . 287
   D.3.8 IBM 3575 L18, L24, and L32 . . . . . 287

Appendix E. Special notices . . . . . 289

Appendix F. Related publications . . . . . 291
F.1 IBM Redbooks . . . . . 291
F.2 IBM Redbooks collections . . . . . 291
F.3 Other resources . . . . . 291
F.4 Referenced Web sites . . . . . 292

How to get IBM Redbooks . . . . . 295
IBM Redbooks fax order form . . . . . 296

Abbreviations and acronyms . . . . . 297

Index . . . . . 299

IBM Redbooks review . . . . . 305
Preface
                           This redbook discusses IBM’s range of tape drives currently available for Netfinity
                           servers. The book starts with a discussion of tape backup strategies and what
                           concepts you should consider when designing a backup configuration. Each of
                           the tape drives currently available from IBM is then described, listing its
                           specifications and connectivity options. It also includes Storage Area Network
                           implementations of tape devices. The redbook then examines the backup
                           software that is most commonly used by customers in the Intel processor
environment. Finally, the book explains how to configure the tape drives and
                           software so that they function correctly together.

                           This redbook gives a broad understanding of data backup and how important it is
                           to day-to-day operations of networked servers. It will help anyone who has to
                           select, configure or support servers and tape subsystems involving software from
                           IBM and other leading backup solution providers and IBM tape hardware.


The team that wrote this redbook
                           This redbook was produced by a team of specialists from around the world
                           working at the International Technical Support Organization, Raleigh Center.

                           Wim Feyants is a Support Engineer in Belgium. He has four years of experience
                           in supporting PCs and related software, and one year in OS/390 support. He
                           holds a degree in Electromechanical Engineering. His areas of expertise include
                           Tivoli Storage Manager on S/390 and Netfinity, Netfinity Servers, OS/2 and Novell
                           NetWare. His previous publications include the redbook IBM Netfinity and PC
                           Server Technology and Selection Reference and the first edition of this redbook.
                           Wim can be reached at wim_feyants@be.ibm.com.

                           Steve Russell is a Senior IT Specialist at the International Technical Support
                           Organization, Raleigh Center. Before joining the ITSO in January 1999, Steve
                           worked in a Technical Marketing role in IBM’s Netfinity organization in EMEA.
                           Prior to that, he spent nearly 15 years managing and developing PC-based
                           hardware and software projects. He holds a BSc in Electrical and Electronic
                           Engineering and is a member of the Institution of Electrical Engineers and a
                           Chartered Engineer.

                           This is the second edition of this redbook. The authors of the first edition were:

                           David Watts
                           Wim Feyants
                           Mike Sanchez
                           Dilbagh Singh

                           Thanks to the following people from the ITSO for their help:

                           David Watts, Raleigh
                           Matthias Werner, San Jose
                           Pat Randall, San Jose
                           Margaret Ticknor, Raleigh
                           Shawn Walsh, Raleigh
                           Gail Christensen, Raleigh



Linda Robinson, Raleigh
                               Thanks also to the following IBMers for their invaluable contributions to this
                               project:

                               John Gates, Tape Product Manager, Raleigh
                               Lee Pisarek, Netfinity Technology Lab, Raleigh
Dan Watanabe, Tape and Optics Business Development, Tucson


Comments welcome
                  Your comments are important to us!

                  We want our Redbooks to be as helpful as possible. Please send us your comments
                  about this or other Redbooks in one of the following ways:
                   • Fax the evaluation form found in “IBM Redbooks review” on page 305 to the fax
                     number shown on the form.
                   • Use the online evaluation form found at http://www.redbooks.ibm.com/
                   • Send your comments in an Internet note to redbook@us.ibm.com




Chapter 1. Introduction
                           IBM has a long heritage in the development and production of digital data
                           storage. As Netfinity servers take on more work in the enterprise, the need for
                           robust storage management solutions and support programs becomes a basic
                           requirement.

IBM provides industry-leading tape technology in 4 mm, 8 mm, Quarter-Inch
                           Cartridge (QIC), Digital Linear Tape (DLT), and Magstar. IBM’s tape offerings are
                           manufactured and tested to IBM’s standards and specifications and are backed
                           by its worldwide service and support. IBM can provide a total storage solution
                           end-to-end, from the hardware to financing.

                           Before selecting a tape solution, you first need to determine your own specific
                           requirements both in terms of the data to protect and the time it takes to back up
                           and recover those files. Once you have determined the strategy you wish to use,
                           you need to select the drive technology, then the hardware and software products
                           that best meet those strategic requirements.

                           This redbook leads you through the points you need to consider when
                           determining a backup strategy, describes the hardware and software available to
                           you and finally provides guidance about how to configure the hardware and
                           software so that they work well together.

                           This edition adds descriptions of hardware and software introduced since the first
                           edition was published. In addition, we have included a chapter about storage area
                           networks (Chapter 4, “SAN equipment” on page 105), which discusses tape
                           implementations using a SAN fabric in particular.

                           As well as providing an overview of newly announced SAN components, including
                           Fibre Channel hubs, gateways, and routers, we examine configurations of these
                           components supported in combination with tape hardware. Examples we explore
                           include remotely attached tapes and tape library sharing solutions. Finally, the
                           advantages of SAN attached tape devices in comparison with direct SCSI
                           attached tape devices are discussed.

This book only covers SAN solutions in a backup environment. Other
implementations, such as remotely attached direct access storage devices, are
not discussed.




Chapter 2. Strategy
                           When designing a backup solution, you will start by looking at your specific
                           needs, and then at the possibilities different products (hardware and software)
                           have to offer. This chapter is meant to help you determine those needs, by
                           explaining some common backup terminology. We won’t be referring to specific
                           hardware or software. For specific information, see Chapter 3, “Hardware” on
                           page 33 and Chapter 5, “Software” on page 123.


2.1 Why are backups necessary?
                           In today’s server environments, there is great emphasis on high availability
                           solutions. Examples include RAID disk subsystems, redundant power supplies,
                           ECC memory and clustering solutions. These new technologies reduce the risk of
server downtime and data loss. Some people might see this as a reason not to
implement a backup solution, since the data is already secured. Unfortunately,
                           hardware failures are only responsible for a small percentage of incidents
                           involving data loss.

Among other causes of data loss, one of the most common is operator error, that
is, user error. Users may inadvertently save a file that contains erroneous
                           information, or they may erase a critical file by mistake. Besides hardware and
                           user errors, software errors and virus attacks can also cause data loss or data
                           corruption.

                           When thinking about backups, you should consider that your backup is not only
                           necessary for disaster recovery. Being able to provide a stable storage
environment for keeping earlier versions of user files is just as important. You
should think of your backup environment as a storage management solution.

                           Storage management (when discussed in a backup/archive context) embodies
                           more than just disaster recovery. The possibility of keeping several versions of a
                           particular file, including ways to maintain these multiple versions, is just as
important. If a user or application corrupts data and saves it, this versioning
capability will allow you to roll back the changes and return to a previous version.
                           maintenance is also an important factor. It is fairly easy to create a backup of a
                           file each hour. However, if there is no way to set the period for which these
                           versions should be kept, or the number of different versions to be kept, your
storage utilization will be very high (and costly). Another important factor is
the degree of automation a backup product delivers. This is the differentiator
                           between simple backup/restore applications and storage management tools. If
                           the operator has to do everything manually, the chances of errors and the cost of
                           operation will go up.

                           Where backup data is meant to be used as a recovery resource in case of data
                           loss, another possible use of low-cost mass storage media is archiving. The
current trend of producing data in electronic form, rather than on paper, calls
for a valid archiving solution. Documents such as contracts,
                           payroll records, employee records, etc. will need to be stored in a permanent way,
                           without losing the advantages of the electronic form they exist in. A typical
                           difference between backup data and archive data is their lifetime and rate of
change. While backup data changes very fast and becomes obsolete in a short
period, archive data typically is static and stays current for a long time (up to


several years, depending on legal standards). As a result, backup products
                               should be able to differentiate between these two types of data, since storage
                               policies will differ.

                               Besides a difference in handling this data, the storage device and media will have
                               specific needs. Since data will be kept for a long time, media lifetime must be very
                               high, which means you might need tape devices that are backward compatible.
                               Physical storage is as important. It should be an environmentally controlled,
                               secured area.

                               Finally, availability of this data should be very high. That is why some sources
                               suggest keeping a second backup server, entirely identical to the production system,
                               on standby in a remote location, together with an extra copy of the media.


2.2 Backup methodologies
This section explains the different ways our data will be backed up, what will be
backed up, and where it will go. Different methods exist, each having its
                               advantages and disadvantages. We will discuss three common ways in which
                               data is approached by backup programs. When an approach is decided upon, the
                               next step is to set the backup pattern that will be used. The backup pattern can be
                               seen as the way the backup program determines how data will be handled over a
                               certain time period. This leads us to another important factor in backup
operations: continuity. There is a starting point, and from then on, reliable backups
                               must be maintained. This is why backup implementation should be very well
                               planned before starting.

2.2.1 When will a file be backed up?
                               2.2.1.1 Full backup
                               A full backup is simply that: a complete backup of every single file.

It is the starting point for every backup implementation. Every file that needs to be
backed up will have to be backed up at least once.

                               The advantage of such a backup is that files are easily found when needed. Since
                               full backups include all data on your hard drive, you do not have to search through
                               several tapes to find the files you need to restore. If you should need to restore
                               the entire system, all of the most current information can be found on the last
                               backup tape (or set of tapes).

                               The disadvantage is that doing nothing but full backups leads to redundancy
                               which wastes both media and time. A backup strategy would normally include a
                               combination of full, incremental and/or differential backups.

                               2.2.1.2 Incremental backup
                               Incremental backups include files that were created or changed since the last
                               backup (that is, the last full or incremental backup). To achieve this, the status of
                               each file must be recorded either within the backup software or through the use of
                               the archive attribute of the files. If no previous backup was made, an incremental
                               backup is equivalent to a full backup.




Incremental backups make better use of media compared to full backups. Only
                 files that were created or changed since the last backup are included, so less
                 backup space is used and less time is required.

                        Note

                  The definition of a file change can differ between backup applications. Some
                  criteria used for marking a file as changed include:
                    •   Data changes
                    •   Location changes
                    •   Attribute changes (last modification or access date, archive bit)
                    •   Security changes

                 The disadvantage is that multiple tapes are needed to restore a set of files. The
                 files can be spread over all the tapes in use since the last full backup. You may
                 have to search several tapes to find the file you wish to restore. The backup
software can minimize this by remembering where files are located; however, a
restoration may still require access to all incremental backups.

                 2.2.1.3 Differential backup
                 A differential backup includes all files that were created or modified since the last
                 full backup. Note the difference between incremental and differential: incremental
                 backups save files changed since the last (incremental or full) backup, whereas
                 differential backups save files changed since the last full backup. In some
                 publications, a differential backup is also called a cumulative incremental backup.

                 The advantages over full backups are that they are quicker and use less media.
                 The advantage over an incremental backup is that the restore process is more
                 efficient — at worst, the restore will require only the latest differential backup set
                 and the latest full backup set, whereas an incremental backup could require all
                 incremental backup sets and the full backup set.

The disadvantage of differential backups is that they take longer and longer to
perform as the amount of changed data grows. Compared to incremental
                 backups, differential backups use more time and media — each backup would
                 store much of the same information plus the latest information added or created
                 since the last full backup.
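
To make the selection rules concrete, here is a minimal sketch of the logic a
backup application might apply when deciding which files a given backup type
should copy. It is an illustration only, not the algorithm of any product
described in this book; the function and its parameters are hypothetical, and it
uses file modification time as the change criterion (as the note above points
out, real products may also track attribute or security changes).

    import os

    def select_files(all_files, backup_type, last_full, last_backup):
        """Return the files a given backup type would copy.

        all_files   -- iterable of file paths to consider
        backup_type -- "full", "incremental", or "differential"
        last_full   -- timestamp of the last full backup
        last_backup -- timestamp of the last backup of any kind
        """
        selected = []
        for path in all_files:
            mtime = os.path.getmtime(path)   # change criterion: modification time
            if backup_type == "full":
                selected.append(path)        # full: every file, unconditionally
            elif backup_type == "incremental" and mtime > last_backup:
                selected.append(path)        # changed since the last backup of any kind
            elif backup_type == "differential" and mtime > last_full:
                selected.append(path)        # changed since the last full backup
        return selected

Note that the only difference between incremental and differential is the
reference point: the last backup of any kind versus the last full backup.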

2.2.2 Backup patterns
                 A backup pattern is the way we will back up our data. Now that we have defined
                 the different types of backups, the question is how should we combine them?
Tape usage and reuse are important factors, because tape management will
get complicated when dealing with a large number of tapes, and media costs will
rise if we do not reuse tapes.

                 2.2.2.1 Full/Incremental pattern
                 The most common way of performing backups is to take full backups on a regular
                 basis, with incremental backups in between.

                 To avoid the management of too many tapes, the number of incremental backups
                 should be as few as possible. The average frequency is one full backup every
                 week, plus five or six incremental backups (one per day) in between. This is
                 shown graphically in Figure 1 on page 6.


This way of performing backups implies:
                                • One tape (or set of tapes) per day
                                • Very little data on each tape (except the full backup tapes)
                                • When performing the second full backup, you ignore all of the previous full
                                  backups, erase the tapes, and send them back to the scratch pool.

                               The administration of the tapes, inventory and tracking, tape labeling, and
                               archiving must be done manually in most cases. In addition, each time you do a
                               full backup, you send all of the data again.

                               When doing a full restore, you will need to start by restoring the full backup, then
                               restore the changes using every incremental backup.


                 Sun   Mon   Tue   Wed   Thu   Fri   Sat

   Week 1         F     I     I     I     I     I     I

   Week 2         F     I     I     I     I     I     I

                 F = Full backup    I = Incremental backup

Figure 1. Tape usage in full/incremental backup pattern

                               An important factor within each backup pattern is tape usage and reutilization. In
                               the example above (Figure 1), if in week 2, you need to restore a file that was
                               backed up in week 1, you will need to have these tapes still available. This means
                               that the number of tapes needed increases significantly. That is why rotation
                               schedules are a very important part of tape management. Tape rotation
                               schedules will provide you with different versions of files, without having a large
                               number of tapes.

                               A commonly used tape rotation strategy is the “grandfather-father-son” schedule.
                               This name reflects the use of three generations of backup tapes: grandfather
                               tapes, father tapes and son tapes. To explain, let us start our backups.

On Sunday, a full backup is taken to a tape labeled “Week_1”. From Monday to
Saturday, backups are taken to tapes labeled “Monday”, “Tuesday”, and so on.
The next Sunday, a full backup is taken to a tape labeled “Week_2”. On Monday,
we reuse the tapes labeled with the names of the days of the week (the same
tapes as used in week 1). These tapes are called the son tapes. For the next
two weeks, we take weekly full backups to separate tapes, and store daily
backups on the son tapes. At the end of the month, this leaves us with four
father tapes, labeled “Week_1” through “Week_4”. This gives us the possibility
of restoring a version of a file that is up to one month old. On the last day
of the month, a backup is taken to a grandfather tape, labeled “Month_1”. After
this, the “Week_1” through “Week_4” tapes can be reused for the weekly full
backups.



So, you will have a set of six son tapes reused weekly, a set of four father
tapes reused monthly, and a set of 4 or 12 grandfather tapes (depending on the
amount of time you want to cover).
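
To make the rotation concrete, here is a minimal sketch in Python (the
gfs_label function is our own illustration, not part of any backup product)
that maps a calendar date to the tape label this schedule would mount. It
assumes Sunday full backups and a month-end grandfather backup, as described
above:

    from datetime import date, timedelta

    def gfs_label(d: date) -> str:
        """Return the tape label the GFS schedule above would use on date d."""
        if (d + timedelta(days=1)).month != d.month:
            return "Month_%d" % d.month                 # grandfather: last day of the month
        if d.isoweekday() == 7:                         # Sunday: weekly full backup
            return "Week_%d" % ((d.day - 1) // 7 + 1)   # father tape
        return d.strftime("%A")                         # son tape: "Monday", "Tuesday", ...

    print(gfs_label(date(2000, 3, 5)))    # a Sunday          -> Week_1
    print(gfs_label(date(2000, 3, 31)))   # last day of March -> Month_3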


[Figure: four weeks of backups. Monday through Saturday backups go to the son
tapes (daily backup set); each Sunday full backup goes to a father tape,
“Week_1” through “Week_4” (weekly full backup set); the month-end full backup
goes to a grandfather tape, “Month_1” (monthly full backup set).]

Figure 2. Grandfather-Father-Son media rotation schedule


2.2.2.2 Full/differential pattern
Another way of performing backups is to take full backups and differential
backups, with incremental backups in between.

In this pattern:
 • A full backup saves every file.
 • A differential backup saves the files that have changed since the previous full
   backup.
 • An incremental backup saves the files that have changed since the previous
   incremental backup (or the previous differential backup if no previous
   incremental backups exist, or the previous full if no previous differentials
   exist).

This process reduces the number of tapes to manage because you can discard
your incremental tapes once you have done a differential. You still have to
manage the incremental tapes prior to the differential backup, however.
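
As a minimal sketch of these selection rules (assuming that a simple
modification-time test decides whether a file has “changed”; the function and
its arguments are illustrative, not any product’s API):

    def files_to_back_up(files, backup_type, last_full, last_backup):
        """Select files according to the pattern rules above.

        files:       dict mapping path -> last-modified timestamp
        last_full:   time of the previous full backup
        last_backup: time of the most recent backup of any type
        """
        if backup_type == "full":
            return list(files)                                   # every file
        if backup_type == "differential":
            return [p for p, m in files.items() if m > last_full]
        # incremental: changed since the most recent backup of any kind
        return [p for p, m in files.items() if m > last_backup]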

This way of performing backups implies:
 • One tape (or set of tapes) per day
 • Very little data on each tape (except the full backup tape)
 • More tapes to manage, because you have to keep the full backup tapes, the
   differential tapes, and the incremental tapes




             Sun   Mon   Tue   Wed   Thu   Fri   Sat

              F     I     I     D     I     I     D

   F = Full backup   I = Incremental backup   D = Differential backup
   (each differential covers the new and changed data from the previous
   days’ incremental backups)

Figure 3. Tape usage in full/differential backup patterns

The advantage of the full/differential pattern over the full/incremental
pattern is that a restore uses only the full backup, the latest differential,
and the incremental backups taken after it, which requires fewer tapes. (See
2.2.2.4, “Example” on page 9.)

As in the full/incremental pattern, tape rotation can be implemented to limit
the number of tapes used, while keeping a certain number of versions of each
file over a certain time period.

                               2.2.2.3 Incremental forever pattern
                               Since one of the critical factors in any backup is the amount of data that has to be
                               moved, a way of limiting this amount should be pursued. The best way to do this
                               is to back up changes only. Using the incremental forever pattern, only
                               incremental backups are performed. This means that there is no need for regular
                               full or differential backups. Though the first backup will be an incremental that will
                               back up everything (so, essentially the same as a full backup), only incremental
                               backups need to be taken afterwards.

                               It is clear that this pattern will limit the amount of backed up data, but turns tape
                               management and usage into a very complex process. That is why you will need a
                               backup application that is capable of managing these tapes.

                               A good example of this is tape reusage. Since there is no determined point in
                               time when tapes can be reused (as we had in the previous two patterns), the
                               number of tapes can increase dramatically. Therefore, the application should be
                               able to check tapes and clean them if necessary. This cleanup (or tape
                               reclamation) should occur when a tape holds backup data that will no longer be
                               used, since newer versions have been backed up.
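
As an illustration of the reclamation idea (the 25% threshold and the function
name are assumptions chosen for this sketch, not values from any particular
product):

    def tapes_to_reclaim(tapes, threshold=0.25):
        """Pick tapes whose still-valid data has dropped below the threshold.

        tapes: dict mapping tape_id -> (valid_bytes, total_bytes_written)
        The remaining valid data on these tapes can be copied to fresh media,
        after which the tapes return to the scratch pool.
        """
        return [tape_id for tape_id, (valid, total) in tapes.items()
                if total > 0 and valid / total < threshold]

    print(tapes_to_reclaim({"T001": (10, 1000), "T002": (900, 1000)}))  # ['T001']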

                               Another point is that, when backing up data from different machines, their data
                               can be dispersed over a multitude of different tapes. Since mounting a tape is a
                               slow process, this should be avoided. That is why some applications have a
                               mechanism that is called collocation. Collocation will try to maintain the data of
                               one machine on the fewest number of tapes possible. This should mean a
                               performance gain when restoring, but will slow down the backup in cases where
                               multiple machines need to back up their data to a single tape drive. Instead of
                               moving the backup data of both clients to the same tape, the backup program will
                               try to put the data of both clients on separate tapes. Therefore, the second client
will have to wait until the backup of the first one completes before it can
start its backup. Again, mechanisms have been provided to limit the impact of
this (see 2.5.4, “Hierarchical storage” on page 22).
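
A minimal sketch of the collocation decision itself (the data structures and
the function name are illustrative assumptions):

    def pick_tape(client, tape_owner, scratch_pool):
        """Collocation: prefer a tape already holding this client's data.

        tape_owner:   dict mapping tape_id -> client that owns its data
        scratch_pool: list of unused tape_ids
        """
        for tape_id, owner in tape_owner.items():
            if owner == client:
                return tape_id                    # keep the client's data together
        if scratch_pool:
            tape_id = scratch_pool.pop()
            tape_owner[tape_id] = client          # dedicate a fresh tape
            return tape_id
        raise RuntimeError("no tape available; client must wait")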

2.2.2.4 Example
To make things a bit clearer, let’s look at an example. We have a machine with
20 GB of data, and each day about 5% of this data changes. This means we will
have to back up about 1 GB of data for each incremental backup. The network
will be the determining factor for the data transfer rate (we will assume a
16 Mbps token-ring network), and we assume backup and restore throughput are
equal.

Table 1 shows the type of backup taken each day and the time needed (in
seconds) for each backup operation:

Table 1. Backup operation: time required using specific backup patterns

 Pattern                          Sun     Mon    Tue    Wed    Thu    Fri    Sat

 Full/incremental     Type        Full    Incr   Incr   Incr   Incr   Incr   Incr
 (Figure 1 on page 6) Time (sec)  10240   512    512    512    512    512    512

 Full/differential    Type        Full    Incr   Incr   Diff   Incr   Incr   Diff
 (Figure 3 on page 8) Time (sec)  10240   512    512    1536   512    512    1536

 Incremental forever  Type        Incr    Incr   Incr   Incr   Incr   Incr   Incr
                      Time (sec)  512(1)  512    512    512    512    512    512

 (1) The first incremental backup would take 10240 seconds; here we assume
     that Sunday’s backup is not the first backup.

If we look at the restore operation, we will need to determine the number of
tapes that are required and the time needed to restore the data. Let’s assume
that we have to do a full restore (that is, 20 GB) on Friday (restoring from
Thursday’s backups).
Table 2. Restore operation: total number of tapes and total amount of time required

 Type                 Number of tapes                Time (seconds)

 Full/incremental     5 (Sun, Mon, Tue, Wed, Thu)    12288 (10240 + 4 x 512)

 Full/differential    3 (Sun, Wed, Thu)              12288 (10240 + 1536 + 512)

 Incremental forever  Unknown                        10240
From this we conclude:
 • A full restore is faster when using the incremental forever pattern, but
   the number of tapes needed is hard to predict.
 • Among the patterns with a predictable tape count, the full/differential
   pattern requires the fewest tapes.
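
The arithmetic behind both tables can be reproduced in a few lines (a sketch
under the example’s assumptions: the full 16 Mbps is usable, 20 GB of data,
1 GB of daily changes):

    RATE_MB_PER_S = 16 / 8          # 16 Mbps token-ring = 2 MB/s

    def xfer_seconds(gigabytes):
        return gigabytes * 1024 / RATE_MB_PER_S

    full = xfer_seconds(20)         # 10240 s: the weekly full backup
    incr = xfer_seconds(1)          # 512 s:   one day of changes
    diff = xfer_seconds(3)          # 1536 s:  Wednesday differential (3 days of changes)

    # Full restore on Friday, from Thursday's backups:
    print(full + 4 * incr)          # full/incremental:    12288 s, 5 tapes
    print(full + diff + incr)       # full/differential:   12288 s, 3 tapes
    print(xfer_seconds(20))         # incremental forever: 10240 s, tapes unknown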




2.3 System and storage topologies
                           When implementing a backup solution, the first thing to look at is how you are
                           going to set up your site. Different possibilities exist, each giving some
                           advantages and disadvantages. For SAN implementations, refer to 2.4, “Storage
                           area network implementations” on page 14.

                           The following topology models will be discussed:
                                •   Direct connection
                                •   Single server site
                                •   Two-tier site
                                •   Multi-tier site or branch office model

                           There is no one “best” solution applicable to every situation. Factors to be
                           considered when deciding on a backup solution include:
                                •   The   network bandwidth available
                                •   The   period available for backup activity
                                •   The   capabilities of the backup software
                                •   The   size and number of machines to be backed up

2.3.1 Direct tape connection
The easiest topology to understand is the one where the tape device is
connected directly to the machine that is being backed up (see Figure 4). One
advantage of this setup is the speed of the link between the data and the
backup device (typically SCSI) compared with the network connection used in
the other models.

The disadvantages of this model are limited scalability and manageability, and
hardware cost (one tape device is needed for every machine that requires
backup).

                           This setup can be suited for sites with a limited number of machines that need to
                           be backed up, or for emergency restores.



[Figure: a server with a locally attached storage device]

Figure 4. Direct tape connection


2.3.2 Single server model
                           As opposed to the direct connection model, this type of setup is based on a
                           backup server, connected through a network to the machines that will need to
                           take a backup. These machines are often referred to as clients, nodes or agents.
                           The tape device (or other storage media) will be connected to this backup server
                           (see Figure 5). The advantages of this design are that centralized storage
                           administration is possible and the number of storage devices is reduced (and
                           probably the cost).




However, one of the problems here could be the network bandwidth. Since all
                  data that is backed up needs to go over the network, the throughput is smaller
                  than what we have using a direct tape connection. Every client that is added will
                  need some of this bandwidth (see 2.5.2, “Network bandwidth considerations” on
                  page 20). This bandwidth issue becomes even more important when dealing with
a distributed site. Let’s imagine that one of the machines that needs to be
backed up is located at a different site from the backup server, with only a
very slow link between the two sites. Throughput could diminish to the point
that it would take longer than 24 hours to back up the remote system. In this
case, a two-tier solution would be better (as discussed in 2.3.3, “Two-tier
model” on page 11).

Although not required, it is advisable to use a dedicated machine as the
backup server. The reason for this is that backup and restore operations have
an impact on the server’s performance. If you included it in your regular
server pool, acting as a file or application server, backup activity could
slow down all operations on that server.



[Figure: several machines that need to be backed up, connected over a network
to a single backup server with attached storage]

Figure 5. Single server model

                  This design is well suited for sites with a limited number of machines. There are
                  multiple reasons for this. For example, network bandwidth is not unlimited.
                  Another reason for the limit on clients that a single server will support is that each
                  session will use resources (processor, memory) on the backup server.

2.3.3 Two-tier model
                  As discussed in 2.3.2, “Single server model” on page 10, scalability and network
                  bandwidth are limited when working with a single server site. In a two-tier model,
                  an intermediate backup server is used as a staging platform (see Figure 6 on
                  page 12). The advantages are twofold:
1. The backup is done to the first backup server (or source server), which
   resides locally (and has a LAN connection), and only then forwarded to the
   central backup server (or target server). This can be done asynchronously,
   so that communication performance between the first- and second-level
   backup servers is not critical.
2. The backup completes in a much shorter time, because data transmission is
   not slowed down by tape drive write speeds. This leads to much shorter
   backup windows.

                           You could also load balance large sites by adding additional source servers.




[Figure: multiple source servers take local backups and forward them
asynchronously to a target or central server]

Figure 6. Two-tier model

                           Figure 7 shows what happens with the data that needs to be backed up. In the
                           first stage, data is moved to the source server. This happens during the period of
                           time that we have to take our backups (referred to as backup window; see 2.5.1,
                           “Scheduling backups” on page 19).




[Figure: stage 1 — data is backed up to the first backup server (Storage 1);
this operation should complete during the normal backup window. Stage 2 —
data is moved from Storage 1 to Storage 2 on the central server; this can
happen outside of the backup window.]

Figure 7. Data movement in a two-tier model

The specifications of the storage device connected to this source server
should be sufficient to store all the data that is backed up. Typically, it
will also be a fast device (probably a disk drive). In the second stage, data
on this storage device is moved across the network to a second backup server.
This normally happens after stage 1 completes (though not necessarily), and
can be done outside of the normal backup window. The only rule here is that
all data from the source servers must be moved to the target server before
the backup window restarts.
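
This rule is easy to check numerically. Here is a minimal sketch (all rates
and sizes are illustrative assumptions) that tests whether a two-tier design
can keep up on a 24-hour cycle:

    def two_tier_feasible(backup_gb, window_hours, lan_mb_s, wan_mb_s):
        """Stage 1 must fit inside the backup window; stage 2 must finish
        draining to the target server before the next window starts."""
        stage1_hours = backup_gb * 1024 / lan_mb_s / 3600
        stage2_hours = backup_gb * 1024 / wan_mb_s / 3600
        return stage1_hours <= window_hours and stage2_hours <= 24 - window_hours

    # 50 GB per night, a 6-hour window, 5 MB/s LAN and a 0.5 MB/s WAN link:
    print(two_tier_feasible(50, 6, 5.0, 0.5))   # False: ~28 h of WAN transfer > 18 h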

This setup gives advantages with regard to scalability, since you can add as
many source servers as you want. However, more intelligent software is
required to manage the transfer of backed-up data in both backup and restore
modes. In the case of a restore operation, the user should not need to know
on which backup server the data resides.

                   Another advantage of this server storage hierarchy is that in case of a site
                   disaster at the source server location, the backups still reside on the target
                   server. Of course, this advantage will only be true if the target and source servers
                   are geographically separated, and all backup data has been moved to the central
                   server.

2.3.4 Multi-tier model
                   The multi-tier or branch office model is an extension of the two-tier model, but
                   with another stage added (and you can add even more stages if you wish). The
                   same advantages and disadvantages can be observed. Scalability goes up, but
                   so does complexity.




[Figure: branches back up to regional offices, which in turn forward their
data to a central server]

Figure 8. Multi-tier model



2.4 Storage area network implementations
In this section, we will introduce some tape storage implementations using a
storage area network architecture. This is not intended to be an introduction
to SAN itself, and it is limited to currently supported and tested
configurations. For more information, please refer to the following redbooks:
Introduction to Storage Area Networks, SG24-5470, and Storage Area Networks:
Tape Future in Fabrics, SG24-5474.

                           The IBM definition of a storage area network, or SAN, is a dedicated, centrally
                           managed, secure information infrastructure, which enables any-to-any
                           interconnection of servers and storage systems.

                           A SAN is made up of the following components:
                                • A Fibre Channel topology
                                • One or more host nodes
                                • One or more storage nodes
                                • Management software




The SAN topology is a combination of components which can be compared to
those used in local area networks. Examples of such components are hubs,
gateways, switches and routers. The transport medium used is Fibre Channel,
which is defined in several ANSI standards. Although the name Fibre Channel
suggests the use of fiber-optic connections, copper wiring is also supported.
This topology is used to interconnect nodes. The two types of nodes are host
nodes, such as the FC adapter of a server, and storage nodes. Storage nodes
can be any storage devices that connect to a SAN.

                 When looking at the above description, it is clear that many configurations can be
                 created. However, only a limited number of implementations of the SAN
                 architecture are currently supported for Netfinity backup solutions. This number
                 will certainly rise in the future.

2.4.1 Why use SAN for tape storage?
There can be several reasons to use a SAN, and the importance of these reasons
will depend on your requirements, such as availability, cost and performance.
One thing that should be noted is that current Netfinity SAN implementations
are limited to tape libraries, not single tape drives. Besides the lack of
tested implementations using single drives, there is another reason for this:
the main motivation for implementing SAN solutions is tape drive and media
sharing. Both concepts are possible when a media pool is available and the
tape devices have enough intelligence to share this media between them.
Neither concept is applicable to single tape drives.

                 When talking about the availability of tape storage, two separate points can be
                 discussed. The first one is availability of the hardware, meaning the tape library
                 itself. The second one is the availability of the data backed up to tape. In current
                 high availability implementations, this data is backed up and stored off-site.
                 Although this way of working is generally accepted, it also might be a good thing
                 to automate this. By doing so, a copy of local tapes would be sent to a remote site
                 without human intervention. Retrieving these copies would also be transparent.
                 This technique, which is sometimes referred to as automatic vaulting, can be
                 achieved by using the SAN architecture.

                 Performance issues can also be addressed using SAN architectures. When using
                 a client/server backup model (the client backs up the data to the backup server),
                 all the backup data must pass through the network (LAN or WAN). In some cases,
                 for example, if the backup client is a big database server, the network can no
                 longer deliver the throughput that is needed to complete the backup in a certain
                 time frame. Current solutions would consist of putting a local tape device on the
                 backup client. Besides the extra cost of additional tape devices, decentralizing
                 backup management can be difficult to maintain. SAN provides a solution by a
                 technique called “LAN-free backup”. Here, only the meta data (labeled control
                 data in Figure 9 on page 16) flows over the LAN, while the actual backup data
                 moves directly from the client to the storage device connected to the SAN.




[Figure: the backup client and backup server exchange control data over the
LAN, while the backup data flows directly from the client across the SAN to
the tape storage node]

Figure 9. SAN-based LAN-free backup

Even though this solution is still in an early phase, the next step toward
performance improvement has already been architected. This will be called
“server-free backup”. Here, client-attached SAN storage moves data directly
to the tape storage. Besides the advantage that most of the data no longer
needs to be backed up through the network, you get an additional performance
gain by bypassing the SCSI interface. Both connections (SCSI and network)
have a lower throughput than the SAN interface, whose nominal throughput is
rated at 1 Gbps. Future implementations will allow this figure to extend to
4 Gbps. See 2.5.2, “Network bandwidth considerations” on page 20 for network
throughput figures. Compared to SCSI, operating at 40 MBps, FC-AL operates at
100 MBps. Since FC-AL supports full-duplex communications, the total
throughput can go up to 200 MBps.
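
For a feel of what these interface speeds mean in practice, here is a small
sketch comparing raw transfer times at the nominal rates quoted above (real
throughput will be lower than these figures):

    def transfer_seconds(gigabytes, mb_per_second):
        return gigabytes * 1024 / mb_per_second

    for name, rate in [("SCSI, 40 MBps", 40),
                       ("FC-AL, 100 MBps", 100),
                       ("FC-AL full duplex, 200 MBps", 200)]:
        print("%-28s %6.0f s for 100 GB" % (name, transfer_seconds(100, rate)))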


[Figure: control data flows over the LAN between the backup client and backup
server; the client’s SAN-attached data moves directly across the SAN to the
tape storage node, without passing through either server]

Figure 10. SAN server-free backup




Finally, cost reduction can be an important factor in deciding to move tape
                  storage from traditional SCSI attachments to SAN attachments. Here, using the
                  sharing capability of a SAN-connected tape library, two or more systems can use
                  one library. This limits the investment in expensive hardware, and enables the use
                  of cheaper storage media (as compared to disk). So, where a traditional
                  implementation of a tape library would probably cost more than the equivalent in
                  disk storage, sharing of the library increases the utilization factor and decreases
                  the cost per amount of storage.



[Figure: cost versus amount of data for disk and tape storage. The tape line
starts at the tape library cost but rises more slowly than the disk line; the
two lines cross at the point labeled x.]

Figure 11. Storage cost

Figure 11 is a graph of cost versus amount of data for both tape and disk
storage. As you can see, if the amount of data that needs to be stored is
lower than x, disk storage is cheaper than tape. As the amount of data grows
beyond x, however, the total cost of tape drops below that of disk storage.
To get past this break-even point, you should increase the volume of data
that is stored on tape. One way of doing this is by sharing the library
between different systems.
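
The break-even point x follows from simple algebra. The sketch below uses
purely illustrative prices, not IBM list prices:

    def break_even_gb(library_cost, tape_cost_per_gb, disk_cost_per_gb):
        """Solve library_cost + tape_cost_per_gb * x == disk_cost_per_gb * x
        for x, the amount of data at which tape becomes cheaper than disk."""
        return library_cost / (disk_cost_per_gb - tape_cost_per_gb)

    print(break_even_gb(20000, 1.0, 11.0))   # -> 2000.0 GB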

2.4.2 Fibre Channel attached tape storage
Probably the most straightforward SAN implementation of a tape device is where
a tape library is connected to one backup server using fiber. This is possible
because fiber connections, using long-wave technology, can span up to
10 kilometers. This means that you can physically separate your tape library
from your backup server, which might prove useful for disaster recovery or
automatic vaulting.

                  Figure 12 shows the logical setup of such a configuration:




[Figure: a host node connects through Fibre Channel to the SAN, which
connects through SCSI to the tape storage node]

Figure 12. Fibre Channel attached tape storage

                           The above diagram is only a representation of a logical configuration. For
                           information on the actual hardware and software that can be used to implement
                           this, see Chapter 4, “SAN equipment” on page 105.

                           This still leaves the question of how to implement the remote vaulting. Since this
                           is typically done by using tape copies, a second library should be added. Here, for
                           example, we could use a local SCSI-connected library.

2.4.3 Tape pooling
                           A configuration that comes closer to the general idea of storage area networks,
                           sharing storage across multiple machines, is the tape pooling configuration. Here,
                           one (or more) tape libraries are connected to several backup servers. This is
                           done by using a Fibre Channel SAN switch. The main advantage of this type of
                           installation is the ability to share a (costly) tape library between two or more
                           backup servers. Although this might look like something that could already be
                           accomplished in the past, using a library setup in split configuration (the library is
                           logically split in two, each part using one tape device connected to one backup
                           system), there are some differences.

                           The split configuration was a static setup. This means that you connected one
                           tape drive to one system, the other tape drive to another. If you had a library with
                           two tape devices, the split setup meant that you created two smaller, independent
                           libraries. Also the cartridges were assigned to one part of the split library.

                           In a tape pooling configuration, there is no physical or logical split of the tape
                           hardware. The entire library is available to both systems. This means that when
                           one server needs two tape drives for a certain configuration, it will be able to get
                           them (if they are not being used by another system). Also, the free tapes, or
                           scratch pool, can be accessed by both systems.

                           However, the physical media that are in use (meaning that they do not belong to
                           the scratch pool) cannot be shared. A tape used by one system cannot be read by
                           another.

                           Figure 13 shows a tape pooling configuration:




[Figure: two host nodes connect through Fibre Channel to the SAN, which
connects through SCSI to a shared tape storage node]

Figure 13. Tape pooling

                 Again, this configuration is just a logical layout. The exact physical layout, and the
                 necessary hardware and software will be discussed later.


2.5 Performance considerations
                 When talking to people who are using backup software intensively, one of their
                 major problems is performance. The reason for that is as follows: while the
                 amount of data increases steadily, the time that a machine is available for backup
                 (which has a performance impact on the machine, and sometimes requires
                 applications to be quiesced) often gets shorter. That is, more data has to be
                 moved in a shorter time period. Although hardware and software manufacturers
                 are continually improving their products to cope with this trend, some parameters
                 affecting performance are related to the way the backup solution is implemented.
                 The following topics discuss some of the techniques you can use, as well as
                 some considerations that might help you determine what performance issues
                 should be addressed.

2.5.1 Scheduling backups
                 When thinking about which machines you are going to back up, you will probably
                 think about file or application servers. Unfortunately, these machines get updates
                 during the day, and the only time it makes sense to back up these systems is after
                 hours. The reason for this is that backup products need to access files and back
                 up valid copies of them. If these files are in use and modified during a backup, the
                 backup version you have would not be very helpful when restored.

                 That is why you should determine a period of time in which operations on the
                 machine that you will back up are minimal, and use this period of time to run your
                 backup. This period is often referred to as the backup window. You will soon see
                 that this backup window usually starts sometime late at night, and ends early in
                 the morning, not exactly the time you or someone else wants to sit beside the
                 machine starting or stopping backup operations. Luckily, backup programs make
                 good use of scheduling mechanisms. These schedulers allow you to start a
                 backup at a certain point in time.

                 The following points are important when automating your backup processes using
                 schedulers:
                  • What will my backup application do in case of errors? Will it continue or stop?
                    The worst case would be if the application stops and asks for user
                    intervention. It would be better for the backup application to make every effort
                    to work around problems, backing up as much of your data as possible.


• Will operations and errors be logged somewhere, so I can check if the
                                  backups were successful?
                                • If the backup operation takes longer than the defined backup window, will it
                                  continue or stop?

                           There are different scheduling mechanisms, each with its own advantages and
                           disadvantages. For more details, please refer to Chapter 5, “Software” on page
                           123.
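
As a minimal sketch of the desired behavior (the backup-tool command, log
file name and timeout are illustrative assumptions, not any product’s
interface), the following scheduler-launched wrapper logs every operation and
continues past errors instead of waiting for an operator:

    import logging
    import subprocess

    logging.basicConfig(filename="backup.log", level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")

    def run_backup_jobs(commands, window_seconds=6 * 3600):
        """Run each backup command; log failures and continue with the rest."""
        for cmd in commands:
            try:
                subprocess.run(cmd, check=True, timeout=window_seconds)
                logging.info("completed: %s", " ".join(cmd))
            except Exception as exc:        # failed, or ran past the window
                logging.error("failed: %s (%s)", " ".join(cmd), exc)

    run_backup_jobs([["backup-tool", "--incremental", "/data"]])  # hypothetical CLI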

2.5.2 Network bandwidth considerations
                           When implementing a backup solution that backs up data to a backup server over
                           the network, an important factor is network bandwidth. The reason for this is that
                           all the data must go over the network. This becomes even more important when
                           different machines are trying to back up to one server at the same time, since the
                           amount of data increases. That is why network bandwidth will be one of the
                           factors when deciding how many machines will be backed up to one backup
                           server, and which backup window will be needed.

                           To calculate the time needed for a backup, the following points must be
                           considered:
                                • The amount of data that will be backed up
                                 Unfortunately, this number can differ from backup system to backup system.
                                 Let’s say you have a file server with 20 GB of data. When you do a full backup
                                 of this system, it will indeed send 20 GB. But most backup programs also work
                                 with incremental or differential backup algorithms, which only back up
                                 changed data. So, to figure out the amount of data that is backed up in such
                                 an operation, we will have to consider the following points:
                                   • How much data changes between two backups?
                                   • What does “changed” mean to my backup program?
                                 Backup programs will normally also compress data. Unfortunately, the
                                 compression rate is strongly dependent on the type of file you are backing up,
                                 and therefore hard to define. For initial calculations, you could take the worst
                                 case scenario, where no compression would take place.
                                • Nominal network speed (commonly expressed in Mbps)
                                 This is the published speed of your network. Token-ring for example will have a
                                 nominal network speed of 4 or 16 Mbps.
                                • The practical network speed
                                 Since a communication protocol typically adds headers, control data and
                                 acknowledgments to network frames, not all of it will be available for our
                                 backup data. As a rule of thumb, the practical capacity is 50-60% for
                                 token-ring, FDDI or ATM networks, and 35% for Ethernet networks.
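
Putting these factors together, here is a minimal sketch (the efficiency
figures are the rules of thumb above; the numbers in the example are
illustrative):

    def backup_hours(data_gb, nominal_mbps, efficiency):
        """Estimate backup time given the nominal network speed and the usable
        fraction of it (about 0.5-0.6 for token-ring/FDDI/ATM, 0.35 for Ethernet)."""
        usable_mb_per_s = nominal_mbps / 8 * efficiency
        return data_gb * 1024 / usable_mb_per_s / 3600

    # 20 GB full backup over 16 Mbps token-ring at 55% practical capacity:
    print("%.1f hours" % backup_hours(20, 16, 0.55))   # -> about 5.2 hours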




Ibm total storage tape selection and differentiation guide sg246946
Banking at Ho Chi Minh city
 
BOOK - IBM Z vse using db2 on linux for system z
BOOK - IBM Z vse using db2 on linux for system zBOOK - IBM Z vse using db2 on linux for system z
BOOK - IBM Z vse using db2 on linux for system z
Satya Harish
 
Ibm tivoli storage manager bare machine recovery for microsoft windows 2003 a...
Ibm tivoli storage manager bare machine recovery for microsoft windows 2003 a...Ibm tivoli storage manager bare machine recovery for microsoft windows 2003 a...
Ibm tivoli storage manager bare machine recovery for microsoft windows 2003 a...
Banking at Ho Chi Minh city
 
An introduction to storage provisioning with tivoli provisioning manager and ...
An introduction to storage provisioning with tivoli provisioning manager and ...An introduction to storage provisioning with tivoli provisioning manager and ...
An introduction to storage provisioning with tivoli provisioning manager and ...
Banking at Ho Chi Minh city
 
Windows nt backup and recovery with adsm sg242231
Windows nt backup and recovery with adsm sg242231Windows nt backup and recovery with adsm sg242231
Windows nt backup and recovery with adsm sg242231
Banking at Ho Chi Minh city
 

Similaire à Netfinity tape solutions sg245218 (20)

Backing up db2 using ibm tivoli storage management sg246247
Backing up db2 using ibm tivoli storage management sg246247Backing up db2 using ibm tivoli storage management sg246247
Backing up db2 using ibm tivoli storage management sg246247
 
Db2 partitioning
Db2 partitioningDb2 partitioning
Db2 partitioning
 
Ibm information archive architecture and deployment sg247843
Ibm information archive architecture and deployment sg247843Ibm information archive architecture and deployment sg247843
Ibm information archive architecture and deployment sg247843
 
DB2 10 for Linux on System z Using z/VM v6.2, Single System Image Clusters an...
DB2 10 for Linux on System z Using z/VM v6.2, Single System Image Clusters an...DB2 10 for Linux on System z Using z/VM v6.2, Single System Image Clusters an...
DB2 10 for Linux on System z Using z/VM v6.2, Single System Image Clusters an...
 
Disaster recovery solutions for ibm total storage san file system sg247157
Disaster recovery solutions for ibm total storage san file system sg247157Disaster recovery solutions for ibm total storage san file system sg247157
Disaster recovery solutions for ibm total storage san file system sg247157
 
Implementing the ibm system storage san32 b e4 encryption switch - sg247922
Implementing the ibm system storage san32 b e4 encryption switch - sg247922Implementing the ibm system storage san32 b e4 encryption switch - sg247922
Implementing the ibm system storage san32 b e4 encryption switch - sg247922
 
Implementing the ibm system storage san32 b e4 encryption switch - sg247922
Implementing the ibm system storage san32 b e4 encryption switch - sg247922Implementing the ibm system storage san32 b e4 encryption switch - sg247922
Implementing the ibm system storage san32 b e4 encryption switch - sg247922
 
Oracle
OracleOracle
Oracle
 
Batch Modernization on z/OS
Batch Modernization on z/OSBatch Modernization on z/OS
Batch Modernization on z/OS
 
Robust data synchronization with ibm tivoli directory integrator sg246164
Robust data synchronization with ibm tivoli directory integrator sg246164Robust data synchronization with ibm tivoli directory integrator sg246164
Robust data synchronization with ibm tivoli directory integrator sg246164
 
Robust data synchronization with ibm tivoli directory integrator sg246164
Robust data synchronization with ibm tivoli directory integrator sg246164Robust data synchronization with ibm tivoli directory integrator sg246164
Robust data synchronization with ibm tivoli directory integrator sg246164
 
Deployment guide series ibm total storage productivity center for data sg247140
Deployment guide series ibm total storage productivity center for data sg247140Deployment guide series ibm total storage productivity center for data sg247140
Deployment guide series ibm total storage productivity center for data sg247140
 
Ibm total storage tape selection and differentiation guide sg246946
Ibm total storage tape selection and differentiation guide sg246946Ibm total storage tape selection and differentiation guide sg246946
Ibm total storage tape selection and differentiation guide sg246946
 
Ibm total storage tape selection and differentiation guide sg246946
Ibm total storage tape selection and differentiation guide sg246946Ibm total storage tape selection and differentiation guide sg246946
Ibm total storage tape selection and differentiation guide sg246946
 
BOOK - IBM Z vse using db2 on linux for system z
BOOK - IBM Z vse using db2 on linux for system zBOOK - IBM Z vse using db2 on linux for system z
BOOK - IBM Z vse using db2 on linux for system z
 
Ibm tivoli storage manager bare machine recovery for microsoft windows 2003 a...
Ibm tivoli storage manager bare machine recovery for microsoft windows 2003 a...Ibm tivoli storage manager bare machine recovery for microsoft windows 2003 a...
Ibm tivoli storage manager bare machine recovery for microsoft windows 2003 a...
 
Designing an ibm storage area network sg245758
Designing an ibm storage area network sg245758Designing an ibm storage area network sg245758
Designing an ibm storage area network sg245758
 
An introduction to storage provisioning with tivoli provisioning manager and ...
An introduction to storage provisioning with tivoli provisioning manager and ...An introduction to storage provisioning with tivoli provisioning manager and ...
An introduction to storage provisioning with tivoli provisioning manager and ...
 
IBM Flex System Interoperability Guide
IBM Flex System Interoperability GuideIBM Flex System Interoperability Guide
IBM Flex System Interoperability Guide
 
Windows nt backup and recovery with adsm sg242231
Windows nt backup and recovery with adsm sg242231Windows nt backup and recovery with adsm sg242231
Windows nt backup and recovery with adsm sg242231
 

Plus de Banking at Ho Chi Minh city

Tme 10 cookbook for aix systems management and networking sg244867
Tme 10 cookbook for aix systems management and networking sg244867Tme 10 cookbook for aix systems management and networking sg244867
Tme 10 cookbook for aix systems management and networking sg244867
Banking at Ho Chi Minh city
 
Tivoli data warehouse version 1.3 planning and implementation sg246343
Tivoli data warehouse version 1.3 planning and implementation sg246343Tivoli data warehouse version 1.3 planning and implementation sg246343
Tivoli data warehouse version 1.3 planning and implementation sg246343
Banking at Ho Chi Minh city
 
Tivoli data warehouse 1.2 and business objects redp9116
Tivoli data warehouse 1.2 and business objects redp9116Tivoli data warehouse 1.2 and business objects redp9116
Tivoli data warehouse 1.2 and business objects redp9116
Banking at Ho Chi Minh city
 
Tivoli business systems manager v2.1 end to-end business impact management sg...
Tivoli business systems manager v2.1 end to-end business impact management sg...Tivoli business systems manager v2.1 end to-end business impact management sg...
Tivoli business systems manager v2.1 end to-end business impact management sg...
Banking at Ho Chi Minh city
 
Synchronizing data with ibm tivoli directory integrator 6.1 redp4317
Synchronizing data with ibm tivoli directory integrator 6.1 redp4317Synchronizing data with ibm tivoli directory integrator 6.1 redp4317
Synchronizing data with ibm tivoli directory integrator 6.1 redp4317
Banking at Ho Chi Minh city
 
Storage migration and consolidation with ibm total storage products redp3888
Storage migration and consolidation with ibm total storage products redp3888Storage migration and consolidation with ibm total storage products redp3888
Storage migration and consolidation with ibm total storage products redp3888
Banking at Ho Chi Minh city
 
Solution deployment guide for ibm tivoli composite application manager for we...
Solution deployment guide for ibm tivoli composite application manager for we...Solution deployment guide for ibm tivoli composite application manager for we...
Solution deployment guide for ibm tivoli composite application manager for we...
Banking at Ho Chi Minh city
 
Slr to tivoli performance reporter for os 390 migration cookbook sg245128
Slr to tivoli performance reporter for os 390 migration cookbook sg245128Slr to tivoli performance reporter for os 390 migration cookbook sg245128
Slr to tivoli performance reporter for os 390 migration cookbook sg245128
Banking at Ho Chi Minh city
 

Plus de Banking at Ho Chi Minh city (20)

Postgresql v15.1
Postgresql v15.1Postgresql v15.1
Postgresql v15.1
 
Postgresql v14.6 Document Guide
Postgresql v14.6 Document GuidePostgresql v14.6 Document Guide
Postgresql v14.6 Document Guide
 
IBM MobileFirst Platform v7.0 Pot Intro v0.1
IBM MobileFirst Platform v7.0 Pot Intro v0.1IBM MobileFirst Platform v7.0 Pot Intro v0.1
IBM MobileFirst Platform v7.0 Pot Intro v0.1
 
IBM MobileFirst Platform v7 Tech Overview
IBM MobileFirst Platform v7 Tech OverviewIBM MobileFirst Platform v7 Tech Overview
IBM MobileFirst Platform v7 Tech Overview
 
IBM MobileFirst Foundation Version Flyer v1.0
IBM MobileFirst Foundation Version Flyer v1.0IBM MobileFirst Foundation Version Flyer v1.0
IBM MobileFirst Foundation Version Flyer v1.0
 
IBM MobileFirst Platform v7.0 POT Offers Lab v1.0
IBM MobileFirst Platform v7.0 POT Offers Lab v1.0IBM MobileFirst Platform v7.0 POT Offers Lab v1.0
IBM MobileFirst Platform v7.0 POT Offers Lab v1.0
 
IBM MobileFirst Platform v7.0 pot intro v0.1
IBM MobileFirst Platform v7.0 pot intro v0.1IBM MobileFirst Platform v7.0 pot intro v0.1
IBM MobileFirst Platform v7.0 pot intro v0.1
 
IBM MobileFirst Platform v7.0 POT App Mgmt Lab v1.1
IBM MobileFirst Platform  v7.0 POT App Mgmt Lab v1.1IBM MobileFirst Platform  v7.0 POT App Mgmt Lab v1.1
IBM MobileFirst Platform v7.0 POT App Mgmt Lab v1.1
 
IBM MobileFirst Platform v7.0 POT Analytics v1.1
IBM MobileFirst Platform v7.0 POT Analytics v1.1IBM MobileFirst Platform v7.0 POT Analytics v1.1
IBM MobileFirst Platform v7.0 POT Analytics v1.1
 
IBM MobileFirst Platform Pot Sentiment Analysis v3
IBM MobileFirst Platform Pot Sentiment Analysis v3IBM MobileFirst Platform Pot Sentiment Analysis v3
IBM MobileFirst Platform Pot Sentiment Analysis v3
 
IBM MobileFirst Platform 7.0 POT InApp Feedback V0.1
IBM MobileFirst Platform 7.0 POT InApp Feedback V0.1IBM MobileFirst Platform 7.0 POT InApp Feedback V0.1
IBM MobileFirst Platform 7.0 POT InApp Feedback V0.1
 
Tme 10 cookbook for aix systems management and networking sg244867
Tme 10 cookbook for aix systems management and networking sg244867Tme 10 cookbook for aix systems management and networking sg244867
Tme 10 cookbook for aix systems management and networking sg244867
 
Tivoli firewall magic redp0227
Tivoli firewall magic redp0227Tivoli firewall magic redp0227
Tivoli firewall magic redp0227
 
Tivoli data warehouse version 1.3 planning and implementation sg246343
Tivoli data warehouse version 1.3 planning and implementation sg246343Tivoli data warehouse version 1.3 planning and implementation sg246343
Tivoli data warehouse version 1.3 planning and implementation sg246343
 
Tivoli data warehouse 1.2 and business objects redp9116
Tivoli data warehouse 1.2 and business objects redp9116Tivoli data warehouse 1.2 and business objects redp9116
Tivoli data warehouse 1.2 and business objects redp9116
 
Tivoli business systems manager v2.1 end to-end business impact management sg...
Tivoli business systems manager v2.1 end to-end business impact management sg...Tivoli business systems manager v2.1 end to-end business impact management sg...
Tivoli business systems manager v2.1 end to-end business impact management sg...
 
Synchronizing data with ibm tivoli directory integrator 6.1 redp4317
Synchronizing data with ibm tivoli directory integrator 6.1 redp4317Synchronizing data with ibm tivoli directory integrator 6.1 redp4317
Synchronizing data with ibm tivoli directory integrator 6.1 redp4317
 
Storage migration and consolidation with ibm total storage products redp3888
Storage migration and consolidation with ibm total storage products redp3888Storage migration and consolidation with ibm total storage products redp3888
Storage migration and consolidation with ibm total storage products redp3888
 
Solution deployment guide for ibm tivoli composite application manager for we...
Solution deployment guide for ibm tivoli composite application manager for we...Solution deployment guide for ibm tivoli composite application manager for we...
Solution deployment guide for ibm tivoli composite application manager for we...
 
Slr to tivoli performance reporter for os 390 migration cookbook sg245128
Slr to tivoli performance reporter for os 390 migration cookbook sg245128Slr to tivoli performance reporter for os 390 migration cookbook sg245128
Slr to tivoli performance reporter for os 390 migration cookbook sg245128
 

Dernier

Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
Joaquim Jorge
 

Dernier (20)

Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Developing An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilDeveloping An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of Brazil
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 

Netfinity tape solutions sg245218

  • 1. Netfinity Tape Solutions Wim Feyants, Steve Russell International Technical Support Organization www.redbooks.ibm.com SG24-5218-01
  • 2.
  • 3. SG24-5218-01 International Technical Support Organization Netfinity Tape Solutions March 2000
  • 4. Take Note! Before using this information and the product it supports, be sure to read the general information in Appendix E, “Special notices” on page 289. Second Edition (March 2000) This redbook applies to IBM’s current line of tape products for use with Netfinity servers. At the time of writing, these were: IBM 40/80 GB DLT tape drive IBM 35/70 GB DLT tape drive IBM 20/40 GB DLT tape drive IBM 20/40 GB 8 mm tape drive IBM 20/40 GB DDS-4 4 mm tape drive IBM 12/24 GB DDS-3 4 mm tape drive IBM 10/20 GB NS tape drive IBM 490/980 GB DLT tape library IBM 280/560 GB DLT tape autoloader IBM 3447 DLT tape library IBM 3449 8 mm tape library IBM 3570 Magstar MP tape library IBM 3575 Magstar MP tape library Comments may be addressed to: IBM Corporation, International Technical Support Organization Dept. HZ8 Building 678 P.O. Box 12195 Research Triangle Park, NC 27709-2195 When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you. © Copyright International Business Machines Corporation 1998 2000. All rights reserved. Note to U.S Government Users - Documentation related to restricted rights - Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Contents

Preface
The team that wrote this redbook
Comments welcome

Chapter 1. Introduction

Chapter 2. Strategy
2.1 Why are backups necessary?
2.2 Backup methodologies
2.2.1 When will a file be backed up?
2.2.2 Backup patterns
2.3 System and storage topologies
2.3.1 Direct tape connection
2.3.2 Single server model
2.3.3 Two-tier model
2.3.4 Multi-tier model
2.4 Storage area network implementations
2.4.1 Why use SAN for tape storage?
2.4.2 Fibre Channel attached tape storage
2.4.3 Tape pooling
2.5 Performance considerations
2.5.1 Scheduling backups
2.5.2 Network bandwidth considerations
2.5.3 Compression
2.5.4 Hierarchical storage
2.6 Database server backup
2.7 Selecting a tape drive
2.7.1 Tape capacity
2.7.2 Single tape devices and libraries
2.7.3 Reliability
2.8 Summary

Chapter 3. Hardware
3.1 Technology
3.1.1 Digital Linear Tape (DLT)
3.1.2 8 mm tape
3.1.3 4 mm Digital Audio Tape (DAT)
3.1.4 Travan Quarter-Inch Cartridge (QIC)
3.1.5 Magstar 3570 MP Fast Access Linear tape cartridge
3.1.6 Linear Tape Open (LTO)
3.1.7 Summary
3.2 40/80 GB DLT tape drive
3.2.1 Installation
3.3 35/70 GB DLT tape drive
3.3.1 Installation
3.4 20/40 GB DLT tape drive
3.4.1 Installation
3.5 20/40 GB 8 mm tape drive
3.5.1 Installation
3.5.2 Configuration
3.6 20/40 GB DDS-4 4 mm tape drive
3.6.1 Installation
3.7 12/24 GB DDS-3 4 mm tape drive
3.7.1 Installation
3.8 10/20 GB NS tape drive
3.8.1 Installation
3.9 490/980 GB DLT library
3.9.1 Operation
3.9.2 Installation
3.9.3 Configuration
3.10 280/560 GB DLT autoloader
3.10.1 Operation
3.10.2 Installation
3.10.3 Configuration
3.11 3447 DLT Tape Library
3.11.1 Operation
3.11.2 Installation
3.11.3 Configuration
3.12 3449 8 mm tape library
3.12.1 Operation
3.12.2 Installation
3.12.3 Configuration
3.13 3570 Magstar MP tape library
3.13.1 Configuration
3.13.2 SCSI configuration
3.14 3575 Magstar MP tape library
3.14.1 Design highlights
3.14.2 The multi-path feature
3.14.3 Bulk I/O slots
3.14.4 High performance
3.14.5 High reliability
3.14.6 3575 models
3.14.7 Magstar MP tape drives

Chapter 4. SAN equipment
4.1 Netfinity Fibre Channel PCI adapter
4.2 IBM SAN Fibre Channel switch
4.3 IBM SAN Data Gateway Router
4.4 Netfinity Fibre Channel hub
4.5 Cabling
4.6 Supported configurations
4.6.1 Fibre Channel attached tape storage
4.6.2 Netfinity server consolidation with tape pooling
4.6.3 Sample SAN configuration

Chapter 5. Software
5.1 Tivoli Storage Manager
5.1.1 Products and base components
5.1.2 Server data management
5.1.3 Automating client operations
5.1.4 Supported devices
5.2 Tivoli Data Protection for Workgroups
5.2.1 Concepts
5.2.2 Components
5.2.3 Supported devices
5.3 VERITAS NetBackup
5.3.1 Concepts
5.3.2 Supported devices
5.4 Legato NetWorker
5.4.1 Concepts
5.4.2 Supported devices
5.5 Computer Associates ARCserveIT for Windows NT
5.5.1 Concepts
5.5.2 Supported devices
5.6 Computer Associates ARCserveIT for NetWare
5.6.1 Concepts
5.6.2 Supported devices
5.7 VERITAS Backup Exec for Windows NT
5.7.1 Concepts
5.7.2 Supported devices
5.8 VERITAS Backup Exec for Novell NetWare
5.8.1 Concepts
5.8.2 Job types
5.8.3 Supported devices

Chapter 6. Installation and configuration
6.1 Tivoli Storage Manager for Windows NT
6.1.1 Software installation
6.1.2 Configuration
6.1.3 Configuring the IBM tapes and libraries
6.2 Tivoli Storage Manager Server V2.1 for OS/2
6.2.1 Server configuration
6.3 Tivoli Data Protection for Workgroups
6.3.1 Configuration and use
6.3.2 Configuring IBM tape devices
6.4 Legato NetWorker
6.4.1 Configuration
6.5 Computer Associates ARCserveIT for Windows NT
6.5.1 Preparing to install ARCserveIT
6.5.2 Installing ARCserveIT
6.5.3 Configuring ARCserveIT on Windows NT Server
6.6 Computer Associates ARCserve Version 6.1 for NetWare
6.6.1 Installation
6.6.2 Configuration
6.6.3 Managing ARCserve for NetWare
6.6.4 The ARCserve changer option
6.7 VERITAS Backup Exec for Windows NT
6.7.1 Software installation
6.7.2 Configuration
6.7.3 Configuring IBM tape drives
6.8 VERITAS Backup Exec for Novell NetWare
6.8.1 Software installation
6.8.2 Software configuration
6.9 Seagate Sytos Premium for OS/2
6.9.1 Installing Sytos Premium Version 2.2

Appendix A. Sources of information
Appendix B. Hardware part numbers
Appendix C. Storage area networks and Fibre Channel
C.1 Layers
C.1.1 Lower layers
C.1.2 Upper layers
C.2 Topologies
C.3 Classes of Service
C.4 SAN components
C.4.1 SAN servers
C.4.2 SAN storage
C.5 SAN interconnects
C.5.1 Cables and connectors
C.5.2 Gigabit link model (GLM)
C.5.3 Gigabit interface converters (GBIC)
C.5.4 Media interface adapters (MIA)
C.5.5 Adapters
C.5.6 Extenders
C.5.7 Multiplexors
C.5.8 Hubs
C.5.9 Routers
C.5.10 Bridges
C.5.11 Gateways
C.5.12 Switches
C.5.13 Directors
Appendix D. TSM element addresses and worksheets
D.1 Device names
D.2 Single tape devices
D.3 Tape libraries
D.3.1 IBM 3502-108
D.3.2 IBM 3502-x14
D.3.3 IBM 3447
D.3.4 IBM 3449
D.3.5 IBM 3570 C2x
D.3.6 IBM 3575 L06
D.3.7 IBM 3575 L12
D.3.8 IBM 3575 L18, L24, and L32
Appendix E. Special notices
Appendix F. Related publications
F.1 IBM Redbooks
F.2 IBM Redbooks collections
F.3 Other resources
F.4 Referenced Web sites
How to get IBM Redbooks
IBM Redbooks fax order form
Abbreviations and acronyms
Index
IBM Redbooks review
Preface

This redbook discusses IBM’s range of tape drives currently available for Netfinity servers. The book starts with a discussion of tape backup strategies and the concepts you should consider when designing a backup configuration. Each of the tape drives currently available from IBM is then described, listing its specifications and connectivity options. This includes storage area network implementations of tape devices. The redbook then examines the backup software that is most commonly used by customers in the Intel processor environment. Finally, the book explains how to configure the tape drives and software so that they function correctly together.

This redbook gives a broad understanding of data backup and how important it is to the day-to-day operations of networked servers. It will help anyone who has to select, configure or support servers and tape subsystems involving IBM tape hardware and backup software from IBM and other leading solution providers.

The team that wrote this redbook

This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization, Raleigh Center.

Wim Feyants is a Support Engineer in Belgium. He has four years of experience in supporting PCs and related software, and one year in OS/390 support. He holds a degree in Electromechanical Engineering. His areas of expertise include Tivoli Storage Manager on S/390 and Netfinity, Netfinity Servers, OS/2 and Novell NetWare. His previous publications include the redbook IBM Netfinity and PC Server Technology and Selection Reference and the first edition of this redbook. Wim can be reached at wim_feyants@be.ibm.com.

Steve Russell is a Senior IT Specialist at the International Technical Support Organization, Raleigh Center. Before joining the ITSO in January 1999, Steve worked in a technical marketing role in IBM’s Netfinity organization in EMEA. Prior to that, he spent nearly 15 years managing and developing PC-based hardware and software projects. He holds a BSc in Electrical and Electronic Engineering and is a member of the Institution of Electrical Engineers and a Chartered Engineer.

This is the second edition of this redbook. The authors of the first edition were:

David Watts
Wim Feyants
Mike Sanchez
Dilbagh Singh

Thanks to the following people from the ITSO for their help:

David Watts, Raleigh
Matthias Werner, San Jose
Pat Randall, San Jose
Margaret Ticknor, Raleigh
Shawn Walsh, Raleigh
Gail Christensen, Raleigh
Linda Robinson, Raleigh

Thanks also to the following IBMers for their invaluable contributions to this project:

John Gates, Tape Product Manager, Raleigh
Lee Pisarek, Netfinity Technology Lab, Raleigh
Dan Watanabe, Tape and Optics Business Development, Tucson

Comments welcome

Your comments are important to us! We want our Redbooks to be as helpful as possible. Please send us your comments about this or other Redbooks in one of the following ways:
• Fax the evaluation form found in “IBM Redbooks review” on page 305 to the fax number shown on the form.
• Use the online evaluation form found at http://www.redbooks.ibm.com/
• Send your comments in an Internet note to redbook@us.ibm.com
Chapter 1. Introduction

IBM has a long heritage in the development and production of digital data storage. As Netfinity servers take on more work in the enterprise, the need for robust storage management solutions and support programs becomes a basic requirement. IBM provides industry-leading tape technology in 4 mm, 8 mm, Quarter-Inch Cartridge (QIC), Digital Linear Tape (DLT), and Magstar. IBM’s tape offerings are manufactured and tested to IBM’s standards and specifications and are backed by its worldwide service and support. IBM can provide a total storage solution end-to-end, from the hardware to financing.

Before selecting a tape solution, you first need to determine your own specific requirements, both in terms of the data to protect and the time it takes to back up and recover those files. Once you have determined the strategy you wish to use, you need to select the drive technology, then the hardware and software products that best meet those strategic requirements. This redbook leads you through the points you need to consider when determining a backup strategy, describes the hardware and software available to you, and finally provides guidance about how to configure the hardware and software so that they work well together.

This edition adds descriptions of hardware and software introduced since the first edition was published. In addition, we have included a chapter about storage area networks (Chapter 4, “SAN equipment” on page 105), which discusses tape implementations using a SAN fabric in particular. As well as providing an overview of newly announced SAN components, including Fibre Channel hubs, gateways, and routers, we examine configurations of these components supported in combination with tape hardware. Examples we explore include remotely attached tapes and tape library sharing solutions. Finally, the advantages of SAN-attached tape devices in comparison with direct SCSI-attached tape devices are discussed.

This book only covers SAN solutions in a backup environment. Other implementations, such as remotely attached direct access storage devices, are not discussed.
Chapter 2. Strategy

When designing a backup solution, you start by looking at your specific needs, and then at what different products (hardware and software) have to offer. This chapter is meant to help you determine those needs by explaining some common backup terminology. We won’t be referring to specific hardware or software; for that information, see Chapter 3, “Hardware” on page 33 and Chapter 5, “Software” on page 123.

2.1 Why are backups necessary?

In today’s server environments, there is great emphasis on high availability solutions. Examples include RAID disk subsystems, redundant power supplies, ECC memory and clustering solutions. These technologies reduce the risk of server downtime and data loss. Some people see this as a reason not to implement a backup solution, since the data is already secured. Unfortunately, hardware failures are responsible for only a small percentage of incidents involving data loss. Among the other causes, one of the most common is operator error, that is, user error. Users may inadvertently save a file that contains erroneous information, or they may erase a critical file by mistake. Besides hardware and user errors, software errors and virus attacks can also cause data loss or data corruption.

When thinking about backups, you should consider that your backup is not only necessary for disaster recovery. Being able to provide a stable storage environment that keeps earlier versions of user files is just as important. You should think of your backup environment as a storage management solution. Storage management (in a backup/archive context) embodies more than disaster recovery. The ability to keep several versions of a particular file, including ways to maintain those multiple versions, matters just as much: if a user or application corrupts data and saves it, this function allows you to roll back the changes and return to a previous version.

Backup file maintenance is also an important factor. It is fairly easy to create a backup of a file each hour. However, if there is no way to set the period for which these versions should be kept, or the number of different versions to keep, your storage utilization will be very high (and costly). Another important factor is the degree of automation a backup product delivers. This is the differentiator between simple backup/restore applications and storage management tools: if the operator has to do everything manually, the chance of errors and the cost of operation both go up.

Where backup data is meant to be used as a recovery resource in case of data loss, another possible use of low-cost mass storage media is archiving. The current trend of producing data in electronic form, rather than on paper, calls for a valid archiving solution. Documents such as contracts, payroll records and employee records need to be stored permanently, without losing the advantages of the electronic form they exist in. A typical difference between backup data and archive data is their lifetime and rate of change. While backup data changes very fast and becomes obsolete in the short term, archive data is typically static and stays current for a long time (up to several years, depending on legal standards).
As a result, backup products should be able to differentiate between these two types of data, since the storage policies will differ. Besides a difference in handling the data, the storage device and media have specific needs. Since the data will be kept for a long time, media lifetime must be very high, which means you might need tape devices that are backward compatible. Physical storage is just as important: it should be an environmentally controlled, secured area. Finally, the availability of this data should be very high. That is why some sources suggest keeping a second backup server, entirely identical to the production system, on standby in a remote location, together with an extra copy of the media.

2.2 Backup methodologies

This section explains the different ways data can be backed up: what will be backed up, when, and where it will go. Different methods exist, each with its advantages and disadvantages. We will discuss three common ways in which backup programs approach data. Once an approach is decided upon, the next step is to set the backup pattern that will be used. The backup pattern can be seen as the way the backup program determines how data is handled over a certain time period. This leads us to another important factor in backup operations: continuity. There is a start point, and from then on, reliable backups must be maintained. This is why a backup implementation should be very well planned before starting.

2.2.1 When will a file be backed up?

2.2.1.1 Full backup
A full backup is simply that: a complete backup of every single file. It is the start point for every backup implementation; every file that needs to be backed up will have to be backed up at least once.

The advantage of such a backup is that files are easily found when needed. Since full backups include all data on your hard drive, you do not have to search through several tapes to find the files you need to restore. If you should need to restore the entire system, all of the most current information can be found on the last backup tape (or set of tapes).

The disadvantage is that doing nothing but full backups leads to redundancy, which wastes both media and time. A backup strategy would normally include a combination of full, incremental and/or differential backups.

2.2.1.2 Incremental backup
Incremental backups include files that were created or changed since the last backup (that is, the last full or incremental backup). To achieve this, the status of each file must be recorded, either within the backup software or through the archive attribute of the files. If no previous backup was made, an incremental backup is equivalent to a full backup.
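As a minimal sketch of the selection logic behind an incremental backup, the following Python fragment picks files modified since the previous backup, and degenerates to a full backup when no previous backup exists. It uses modification time as the change criterion (real products may apply other criteria, as the note below points out), and the function and parameter names are purely illustrative:

```python
import os

def select_for_incremental(root, last_backup_time=None):
    """Return the files an incremental backup would copy.

    last_backup_time is the timestamp of the previous full or
    incremental backup. When it is None (no previous backup),
    every file is selected, so the run becomes a full backup.
    """
    selected = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # Change detection by modification time only; products
            # may also track attribute, location or security changes.
            if last_backup_time is None or os.path.getmtime(path) > last_backup_time:
                selected.append(path)
    return selected
```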
Incremental backups make better use of media compared to full backups. Only files that were created or changed since the last backup are included, so less backup space is used and less time is required.

Note: The definition of a file change can differ between backup applications. Criteria used for marking a file as changed include:
• Data changes
• Location changes
• Attribute changes (last modification or access date, archive bit)
• Security changes

The disadvantage is that multiple tapes are needed to restore a set of files. The files can be spread over all the tapes in use since the last full backup, so you may have to search several tapes to find the file you wish to restore. The backup software can minimize this by remembering where files are located; however, a restoration may still require access to all incremental backups.

2.2.1.3 Differential backup
A differential backup includes all files that were created or modified since the last full backup. Note the difference between incremental and differential: incremental backups save files changed since the last (incremental or full) backup, whereas differential backups save files changed since the last full backup. In some publications, a differential backup is also called a cumulative incremental backup.

The advantages over full backups are that differential backups are quicker and use less media. The advantage over incremental backups is that the restore process is more efficient: at worst, the restore requires only the latest differential backup set and the latest full backup set, whereas an incremental scheme could require all incremental backup sets plus the full backup set.

The disadvantage of differential backups is that they take longer and longer to perform as the amount of changed data grows. Compared to incremental backups, differential backups use more time and media: each backup stores much of the same information plus the latest information added or created since the last full backup.
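The restore-time difference between the two approaches can be made concrete with a small sketch. The Python function below is illustrative only; it assumes the history contains at least one full backup and that each incremental captures changes since the immediately preceding backup of any type. It computes which tapes a complete restore needs: the latest full backup, the latest differential after it (if any), and every incremental after that.

```python
def restore_chain(history):
    """history is a list of (kind, tape_label) tuples, oldest first,
    where kind is "full", "differential" or "incremental".
    Return the tape labels needed for a complete restore."""
    # Start from the most recent full backup (assumed to exist).
    last_full = max(i for i, (kind, _) in enumerate(history) if kind == "full")
    chain = [history[last_full]]
    for entry in history[last_full + 1:]:
        kind, _ = entry
        if kind == "differential":
            # A newer differential supersedes everything since the full.
            chain = [history[last_full], entry]
        elif kind == "incremental":
            chain.append(entry)
    return [label for _, label in chain]

# A week of full/incremental backups needs every tape:
#   restore_chain([("full", "Sun"), ("incremental", "Mon"),
#                  ("incremental", "Tue")])  ->  ["Sun", "Mon", "Tue"]
# With differentials, only the latest one matters:
#   restore_chain([("full", "Sun"), ("differential", "Mon"),
#                  ("differential", "Tue")])  ->  ["Sun", "Tue"]
```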
This way of performing backups implies:
• One tape (or set of tapes) per day
• Very little data on each tape (except the full backup tapes)
• When performing the second full backup, you ignore all of the previous full backups, erase the tapes, and send them back to the scratch pool.

The administration of the tapes (inventory and tracking, tape labeling, and archiving) must be done manually in most cases. In addition, each time you do a full backup, you send all of the data again. When doing a full restore, you will need to start by restoring the full backup, and then restore the changes using every incremental backup.

           Sun  Mon  Tue  Wed  Thu  Fri  Sat
  Week 1    F    I    I    I    I    I    I
  Week 2    F    I    I    I    I    I    I

  F = Full backup   I = Incremental backup

Figure 1. Tape usage in full/incremental backup pattern

An important factor within each backup pattern is tape usage and reuse. In the example above (Figure 1), if in week 2 you need to restore a file that was backed up in week 1, you will need to have those tapes still available. This means that the number of tapes needed increases significantly. That is why rotation schedules are a very important part of tape management. Tape rotation schedules provide you with different versions of files, without requiring a large number of tapes.

A commonly used tape rotation strategy is the "grandfather-father-son" schedule. This name reflects the use of three generations of backup tapes: grandfather tapes, father tapes and son tapes. To explain, let us start our backups. On Sunday, a full backup is taken to a tape labeled "Week_1". From Monday to Saturday, daily backups are taken to tapes labeled "Monday", "Tuesday", and so on; these are called the son tapes. The next Sunday, a full backup is taken to a tape labeled "Week_2". On Monday, we start reusing the tapes labeled with the names of the days of the week (the same tapes as used in week 1). For the next two weeks, we take weekly full backups to separate tapes, and store daily backups on the son tapes. At the end of the month, this leaves us with four father tapes, labeled "Week_1" through "Week_4". This gives us the possibility to restore a version of a file up to one month old. On the last day of the month, a backup is taken to a grandfather tape, labeled "Month_1". After this, the "Week_1" through "Week_4" tapes can be reused for the weekly full backups.
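As a rough sketch of this rotation logic, the following Python function returns the tape label that the schedule calls for on a given date, assuming the cycle just described (daily son tapes, Sunday fulls to father tapes, and a grandfather tape on the last day of the month). The week-numbering rule and the label names are illustrative assumptions, not taken from any product.

  import calendar
  import datetime

  def gfs_tape_label(day):
      # Grandfather tape on the last day of the month (this takes
      # precedence if the last day is also a Sunday).
      month_end = calendar.monthrange(day.year, day.month)[1]
      if day.day == month_end:
          return "Month_%d" % day.month
      # Father tape for the weekly full backup on Sunday.
      if day.weekday() == 6:
          return "Week_%d" % ((day.day - 1) // 7 + 1)
      # Son tape, named after the weekday and reused every week.
      return day.strftime("%A")

  print(gfs_tape_label(datetime.date(2000, 3, 5)))    # a Sunday -> "Week_1"
  print(gfs_tape_label(datetime.date(2000, 3, 31)))   # month end -> "Month_3"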
So you will have a set of six son tapes reused weekly, a set of four father tapes reused monthly, and a set of 4 or 12 grandfather tapes (depending on the amount of time you want to cover).

Daily backups go to the son set ("Monday" through "Saturday"), weekly full backups to the father set ("Week_1" through "Week_4"), and monthly full backups to the grandfather set ("Month_1", ...).

Figure 2. Grandfather-Father-Son media rotation schedule

2.2.2.2 Full/differential pattern
Another way of performing backups is to take full backups and differential backups, with incremental backups in between. In this pattern:
• A full backup saves every file.
• A differential backup saves the files that have changed since the previous full backup.
• An incremental backup saves the files that have changed since the previous incremental backup (or the previous differential backup if no previous incremental backups exist, or the previous full backup if no previous differentials exist).

This process reduces the number of tapes you have to retain, because you can discard your incremental tapes once you have done a differential. You still have to manage the incremental tapes taken since the last differential backup, however.

This way of performing backups implies:
• One tape (or set of tapes) per day
• Very little data on each tape (except the full backup tape)
• More kinds of tapes to manage, because you have to keep the full backup tapes, the differential tapes, and the incremental tapes
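A restore under this pattern needs only a short chain of tapes: the last full backup, the last differential, and any incrementals taken after that differential. The following Python sketch (an illustration only, not taken from any backup product) works out that chain by walking backward from the most recent backup:

  def tapes_for_restore(history):
      # history: chronological list of (label, kind) tuples, where kind
      # is one of "full", "diff", "incr".  Returns the tapes needed for
      # a full restore to the latest backup.
      needed = []
      covered = False            # True once a differential has been found
      for label, kind in reversed(history):
          if kind == "full":
              needed.insert(0, (label, kind))
              break                        # the full backup anchors the chain
          if covered:
              continue                     # superseded by the differential
          needed.insert(0, (label, kind))
          if kind == "diff":
              covered = True               # earlier incrementals are not needed
      return needed

  week = [("Sun", "full"), ("Mon", "incr"), ("Tue", "incr"),
          ("Wed", "diff"), ("Thu", "incr"), ("Fri", "incr")]
  print(tapes_for_restore(week))
  # [('Sun', 'full'), ('Wed', 'diff'), ('Thu', 'incr'), ('Fri', 'incr')]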
        Sun  Mon  Tue  Wed  Thu  Fri  Sat
         F    I    I    D    I    I    D

  F = Full backup   I = Incremental backup   D = Differential backup
  (Each incremental tape holds the new or changed data since the last incremental backup; each differential tape also holds the data from the previous days' incremental backups.)

Figure 3. Tape usage in full/differential backup patterns

The advantage of the full/differential pattern over a full/incremental pattern is that a restore uses full, differential and incremental backups, which requires fewer tapes. (See 2.2.2.4, "Example" on page 9.) As in the full/incremental pattern, tape rotation can be implemented to limit the number of tapes used, while keeping a certain number of versions of each file over a certain time period.

2.2.2.3 Incremental forever pattern
Since one of the critical factors in any backup is the amount of data that has to be moved, a way of limiting this amount should be pursued. The best way to do this is to back up changes only. Using the incremental forever pattern, only incremental backups are performed, so there is no need for regular full or differential backups. Although the first backup will be an incremental that backs up everything (essentially the same as a full backup), only incremental backups need to be taken afterwards.

It is clear that this pattern limits the amount of backed-up data, but it turns tape management and usage into a very complex process. That is why you will need a backup application that is capable of managing these tapes. A good example of this is tape reuse. Since there is no predetermined point in time when tapes can be reused (as there was in the previous two patterns), the number of tapes can increase dramatically. Therefore, the application should be able to check tapes and clean them up if necessary. This cleanup (or tape reclamation) should occur when a tape holds backup data that will no longer be used, because newer versions have been backed up.

Another point is that, when backing up data from different machines, their data can be dispersed over a multitude of different tapes. Since mounting a tape is a slow process, this should be avoided. That is why some applications have a mechanism called collocation. Collocation tries to keep the data of one machine on the fewest number of tapes possible. This should mean a performance gain when restoring, but it will slow down the backup in cases where multiple machines need to back up their data to a single tape drive. Instead of moving the backup data of both clients to the same tape, the backup program will try to put the data of the two clients on separate tapes. Therefore, the second client will have to wait until the backup of the first one completes before it can start its
backup. Again, mechanisms are provided to limit the impact of this (see 2.5.4, "Hierarchical storage" on page 22).

2.2.2.4 Example
To make things a bit clearer, let's look at an example. We have a machine with 20 GB of data, and each day about 5% of this data changes. This means we will have to back up about 1 GB of data for each incremental backup. The network is the determining factor for the data transfer rate (we will assume a 16 Mbps token-ring network), and backup and restore throughput are assumed to be equal.

Table 1 shows the type of backup taken each day under each pattern, and the time each backup operation needs:

Table 1. Backup operation: time required using specific backup patterns

  Pattern                             Sun     Mon   Tue   Wed    Thu   Fri   Sat
  Full/incremental      Type          Full    Incr  Incr  Incr   Incr  Incr  Incr
  (Figure 1 on page 6)  Time (sec)    10240   512   512   512    512   512   512
  Full/differential     Type          Full    Incr  Incr  Diff   Incr  Incr  Diff
  (Figure 3 on page 8)  Time (sec)    10240   512   512   1536   512   512   1536
  Incremental           Type          Incr    Incr  Incr  Incr   Incr  Incr  Incr
                        Time (sec)    512(1)  512   512   512    512   512   512

  (1) The first incremental backup ever would take 10240 seconds, but here we assume that Sunday's backup is not the first backup.

If we look at the restore operation, we will need to determine the number of tapes that are required and the time needed to restore the data. Let's assume that we have to do a full restore (that is, 20 GB) on Friday (restoring from Thursday's backups).

Table 2. Restore operation: total number of tapes and total amount of time required

  Type               Number of tapes               Time (seconds)
  Full/incremental   5 (Sun, Mon, Tue, Wed, Thu)   12288 (10240 + 4 x 512)
  Full/differential  3 (Sun, Wed, Thu)             12288 (10240 + 1536 + 512)
  Incremental        Unknown                       10240

From this we conclude:
• A full restore is faster when using the incremental forever pattern, but the number of tapes needed is hard to predict.
• The number of tapes needed is lowest when using the differential pattern.
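The arithmetic behind these tables is easy to reproduce. Here is a rough Python sketch that assumes, as the example does, that the full nominal 16 Mbps (2 MBps) is usable for backup data:

  def transfer_seconds(data_gb, network_mbps):
      # Seconds to move data_gb gigabytes at network_mbps megabits per
      # second, assuming all the nominal bandwidth carries backup data.
      mbytes_per_sec = network_mbps / 8.0
      return data_gb * 1024 / mbytes_per_sec

  print(transfer_seconds(20, 16))   # full backup of 20 GB     -> 10240.0
  print(transfer_seconds(1, 16))    # daily 1 GB incremental   ->   512.0
  print(transfer_seconds(3, 16))    # Wednesday's differential ->  1536.0
                                    # (three days of changes)

For a more realistic estimate, the nominal speed should be reduced by the practical network efficiency discussed in 2.5.2, "Network bandwidth considerations" on page 20.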
2.3 System and storage topologies

When implementing a backup solution, the first thing to look at is how you are going to set up your site. Different possibilities exist, each with its own advantages and disadvantages. For SAN implementations, refer to 2.4, "Storage area network implementations" on page 14. The following topology models will be discussed:
• Direct connection
• Single server site
• Two-tier site
• Multi-tier site or branch office model

There is no one "best" solution applicable to every situation. Factors to be considered when deciding on a backup solution include:
• The network bandwidth available
• The period available for backup activity
• The capabilities of the backup software
• The size and number of machines to be backed up

2.3.1 Direct tape connection
The easiest topology to understand is the one where we connect the tape device directly to the machine we are going to back up (see Figure 4). One advantage of this setup is the speed of the link between the data and the backup device (typically SCSI) compared to the network connection used in other models. The disadvantages of this model are limited scalability and manageability, and higher hardware cost (one tape device is needed for every machine that requires backup). This setup can be suited for sites with a limited number of machines that need to be backed up, or for emergency restores.

Figure 4. Direct tape connection (a server with a directly attached storage device)

2.3.2 Single server model
As opposed to the direct connection model, this type of setup is based on a backup server, connected through a network to the machines that need to be backed up. These machines are often referred to as clients, nodes or agents. The tape device (or other storage media) is connected to this backup server (see Figure 5). The advantages of this design are that centralized storage administration is possible and the number of storage devices is reduced (and probably the cost as well).
However, one of the problems here could be network bandwidth. Since all data that is backed up needs to go over the network, the throughput is lower than with a direct tape connection. Every client that is added will need some of this bandwidth (see 2.5.2, "Network bandwidth considerations" on page 20).

This bandwidth issue becomes even more important when dealing with a distributed site. Imagine that one of the machines that needs to be backed up is located in a different location than the backup server, with only a very slow link between the two sites. Throughput could diminish in such a way that it would take longer than 24 hours to back up the remote system. In this case, a two-tier solution would be better (as discussed in 2.3.3, "Two-tier model" on page 11).

Although not required, it is advisable that the machine used as the backup server be a dedicated machine. The reason for this is that backup and restore operations have an impact on the server's performance. If you included it in your regular server pool, acting as a file or application server, backup activity could slow down all operations on that server.

Figure 5. Single server model (a backup server connected through the network to the machines that need to be backed up)

This design is well suited for sites with a limited number of machines. There are multiple reasons for this. For example, network bandwidth is not unlimited. Another reason for the limit on the number of clients that a single server will support is that each session uses resources (processor, memory) on the backup server.

2.3.3 Two-tier model
As discussed in 2.3.2, "Single server model" on page 10, scalability and network bandwidth are limited when working with a single server site. In a two-tier model, an intermediate backup server is used as a staging platform (see Figure 6 on page 12). The advantages are twofold:
1. The backup is done to the first backup server (or source server), which resides locally (and has a LAN connection), and only then forwarded to the central
backup server (or target server). This can be done asynchronously, so that communication performance between the first-level and second-level backup servers is not critical.
2. The backup completes in a much shorter time, as data transmission is not slowed down by tape drive write speeds. This leads to much shorter backup windows.

You could also load balance large sites by adding additional source servers.

Figure 6. Two-tier model (source servers with local backups forwarded asynchronously to a target or central server)

Figure 7 shows what happens with the data that needs to be backed up. In the first stage, data is moved to the source server. This happens during the period of time that is available for taking backups (referred to as the backup window; see 2.5.1, "Scheduling backups" on page 19).
  Stage 1: Data is backed up to the first backup server (Storage 1). This operation should complete during the normal backup window.
  Stage 2: Data is moved from Storage 1 to Storage 2. This can happen outside of the backup window.

Figure 7. Data movement in a two-tier model

The storage device connected to this source server should have enough capacity to store all the data that is backed up. Typically, it will also be a fast device (probably a disk drive). In the second stage, the data on this storage device is moved across the network to a second backup server. This normally happens after stage 1 completes (though not necessarily), and can be done outside of the normal backup window. The only rule here is that all data from the source servers must be moved to the target server before the backup window restarts.

This setup gives advantages with regard to scalability, since you can add as many source servers as you want. However, more intelligent software is required to manage the transfer of backed-up data, both in backup mode and in restore mode. In the case of a restore operation, the user should not need to know on which backup server the data resides.

Another advantage of this server storage hierarchy is that in case of a site disaster at the source server location, the backups still reside on the target server. Of course, this is only true if the target and source servers are geographically separated, and all backup data has been moved to the central server.

2.3.4 Multi-tier model
The multi-tier or branch office model is an extension of the two-tier model, with another stage added (and you can add even more stages if you wish). The same advantages and disadvantages can be observed. Scalability goes up, but so does complexity.
Figure 8. Multi-tier model (branches backing up to regional offices, which forward data to a central server)

2.4 Storage area network implementations

In this section, we introduce some tape storage implementations that use a storage area network architecture. This is not intended to be an introduction to SANs themselves, and it is limited to currently supported and tested configurations. For more information, please refer to the following redbooks: Introduction to Storage Area Networks, SG24-5470 and Storage Area Networks: Tape Future in Fabrics, SG24-5474.

The IBM definition of a storage area network, or SAN, is a dedicated, centrally managed, secure information infrastructure, which enables any-to-any interconnection of servers and storage systems. A SAN is made up of the following components:
• A Fibre Channel topology
• One or more host nodes
• One or more storage nodes
• Management software
The SAN topology is a combination of components which can be compared to those used in local area networks; examples of such components are hubs, gateways, switches and routers. The transport medium used is Fibre Channel, which is defined in several ANSI standards. Although the name Fibre Channel suggests the use of fiber-optic connections, copper wiring is also supported. This topology is used to interconnect nodes. The two types of nodes are host nodes, such as the Fibre Channel adapter of a server, and storage nodes. Storage nodes can be any storage devices that connect to a SAN.

Given this description, it is clear that many configurations can be created. However, only a limited number of implementations of the SAN architecture are currently supported for Netfinity backup solutions. This number will certainly rise in the future.

2.4.1 Why use SAN for tape storage?
There can be several reasons to use a SAN, and the importance of these reasons will depend on your requirements, such as availability, cost and performance. One thing that should be noted is that current Netfinity SAN implementations are limited to tape libraries, not single tape drives. Besides the lack of tested implementations using single drives, there is another important reason for this: the main driver for implementing SAN solutions is tape drive and media sharing. Both concepts require a media pool and tape devices with enough intelligence to share this media between them, and neither is applicable to single tape drives.

When talking about the availability of tape storage, two separate points can be discussed. The first one is the availability of the hardware, meaning the tape library itself. The second one is the availability of the data backed up to tape. In current high availability implementations, this data is backed up and stored off-site. Although this way of working is generally accepted, it might also be a good thing to automate it, so that a copy of local tapes is sent to a remote site without human intervention. Retrieving these copies would also be transparent. This technique, which is sometimes referred to as automatic vaulting, can be achieved by using the SAN architecture.

Performance issues can also be addressed using SAN architectures. When using a client/server backup model (the client backs up its data to the backup server), all the backup data must pass through the network (LAN or WAN). In some cases, for example if the backup client is a big database server, the network can no longer deliver the throughput that is needed to complete the backup in a certain time frame. The traditional solution would be to put a local tape device on the backup client; besides the extra cost of additional tape devices, the resulting decentralized backup management can be difficult to maintain. A SAN provides a solution through a technique called "LAN-free backup". Here, only the metadata (labeled control data in Figure 9 on page 16) flows over the LAN, while the actual backup data moves directly from the client to the storage device connected to the SAN.
Figure 9. SAN-based LAN-free backup (control data flows over the LAN between the backup client and the backup server; the backup data moves over the SAN from the client to the tape storage node)

Even though this solution is still in an early phase, the next step towards performance improvement has already been architected. It is called "server-free backup". Here, SAN-attached client storage moves data directly to the tape storage. Besides the advantage that most of the data no longer needs to be backed up through the network, you get an additional performance gain by bypassing the SCSI interface. Both connections (SCSI and network) have a lower throughput than the SAN interface, whose nominal throughput is rated at 1 Gbps; future implementations will extend this figure to 4 Gbps. See 2.5.2, "Network bandwidth considerations" on page 20 for network throughput figures. Compared to SCSI, operating at 40 MBps, FC-AL operates at 100 MBps, and since FC-AL supports full-duplex communications, the total throughput can go up to 200 MBps.

Figure 10. SAN server-free backup (control data flows over the LAN; client data moves over the SAN from the client's storage directly to the tape storage node)
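To put these interface speeds in perspective, here is a quick Python sketch comparing best-case transfer times. This ignores protocol overhead and, above all, the tape drive's own write speed, which in practice is often the real bottleneck:

  def transfer_seconds(data_gb, mbytes_per_sec):
      # Best-case seconds to move data_gb gigabytes over a link with
      # the given sustained throughput in MB per second.
      return data_gb * 1024 / float(mbytes_per_sec)

  for name, rate in (("SCSI, 40 MBps", 40),
                     ("FC-AL, 100 MBps", 100),
                     ("FC-AL full duplex, 200 MBps", 200)):
      print("%-28s %5.0f seconds for 100 GB" % (name, transfer_seconds(100, rate)))

  # SCSI, 40 MBps                 2560 seconds for 100 GB
  # FC-AL, 100 MBps               1024 seconds for 100 GB
  # FC-AL full duplex, 200 MBps    512 seconds for 100 GB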
Finally, cost reduction can be an important factor in deciding to move tape storage from traditional SCSI attachments to SAN attachments. Using the sharing capability of a SAN-connected tape library, two or more systems can use one library. This limits the investment in expensive hardware, and enables the use of cheaper storage media (as compared to disk). So, where a traditional implementation of a tape library would probably cost more than the equivalent in disk storage, sharing the library increases the utilization factor and decreases the cost per amount of storage.

Figure 11. Storage cost (a graph of cost versus amount of data, with curves for disk and for a tape library crossing at point x)

Figure 11 is a graph of cost versus amount of data for both tape and disk storage. As you can see, if the amount of data that needs to be stored is lower than x, disk storage is cheaper than tape. Beyond that point, however, the total cost of tape storage falls below that of disk storage. In order to get past this point, you should increase the volume of data that is stored on tape. One way of doing this is by sharing the library between different systems.

2.4.2 Fibre Channel attached tape storage
Probably the most straightforward SAN implementation of a tape device is one where a tape library is connected to a single backup server using fiber. This is possible because fiber connections, using long-wave technology, can span up to 10 kilometers. This means that you can physically separate your tape library from your backup server, which might prove useful for disaster recovery or automatic vaulting. Figure 12 shows the logical setup of such a configuration:
Figure 12. Fibre Channel attached tape storage (a host node connected through Fibre Channel to a SAN storage node, which attaches to the tape device through SCSI)

The above diagram is only a representation of a logical configuration. For information on the actual hardware and software that can be used to implement it, see Chapter 4, "SAN equipment" on page 105.

This still leaves the question of how to implement remote vaulting. Since this is typically done by using tape copies, a second library should be added; here, for example, we could use a local SCSI-connected library.

2.4.3 Tape pooling
A configuration that comes closer to the general idea of storage area networks, sharing storage across multiple machines, is the tape pooling configuration. Here, one or more tape libraries are connected to several backup servers by using a Fibre Channel SAN switch. The main advantage of this type of installation is the ability to share a (costly) tape library between two or more backup servers.

Although this might look like something that could already be accomplished in the past, using a library in a split configuration (the library is logically split in two, each part using one tape device connected to one backup system), there are some differences. The split configuration was a static setup: you connected one tape drive to one system, and the other tape drive to another. If you had a library with two tape devices, the split setup meant that you created two smaller, independent libraries. Also, the cartridges were assigned to one part of the split library.

In a tape pooling configuration, there is no physical or logical split of the tape hardware. The entire library is available to both systems. This means that when one server needs two tape drives for a certain operation, it will be able to get them (if they are not being used by another system). Also, the free tapes, or scratch pool, can be accessed by both systems. However, the physical media that are in use (meaning that they do not belong to the scratch pool) cannot be shared: a tape used by one system cannot be read by another. Figure 13 shows a tape pooling configuration:
Figure 13. Tape pooling (two host nodes connected through Fibre Channel to a SAN, sharing a tape storage node that attaches to the library through SCSI)

Again, this configuration is just a logical layout. The exact physical layout, and the necessary hardware and software, will be discussed later.

2.5 Performance considerations

When talking to people who use backup software intensively, one of their major problems is performance. The reason is as follows: while the amount of data increases steadily, the time that a machine is available for backup (which has a performance impact on the machine, and sometimes requires applications to be quiesced) often gets shorter. That is, more data has to be moved in a shorter time period. Although hardware and software manufacturers are continually improving their products to cope with this trend, some parameters affecting performance are related to the way the backup solution is implemented. The following topics discuss some of the techniques you can use, as well as some considerations that might help you determine which performance issues should be addressed.

2.5.1 Scheduling backups
When thinking about which machines you are going to back up, you will probably think of file or application servers. Unfortunately, these machines get updates during the day, and the only time it makes sense to back them up is after hours. The reason for this is that backup products need to access files and back up valid copies of them. If these files are in use and modified during a backup, the backup version you have would not be very helpful when restored. That is why you should determine a period of time in which operations on the machine you will back up are minimal, and use this period to run your backup. This period is often referred to as the backup window.

You will soon see that this backup window usually starts sometime late at night and ends early in the morning, not exactly the time you or someone else wants to sit beside the machine starting or stopping backup operations. Luckily, backup programs make good use of scheduling mechanisms. These schedulers allow you to start a backup at a certain point in time. The following points are important when automating your backup processes using schedulers:
• What will my backup application do in case of errors? Will it continue or stop? The worst case would be if the application stops and asks for user intervention. It would be better for the backup application to make every effort to work around problems, backing up as much of your data as possible.
• Will operations and errors be logged somewhere, so that I can check whether the backups were successful?
• If the backup operation takes longer than the defined backup window, will it continue or stop?

There are different scheduling mechanisms, each with its own advantages and disadvantages. For more details, please refer to Chapter 5, "Software" on page 123.

2.5.2 Network bandwidth considerations
When implementing a solution that backs up data to a backup server over the network, an important factor is network bandwidth, because all the data must go over the network. This becomes even more important when different machines are trying to back up to one server at the same time, since the amount of data increases. That is why network bandwidth will be one of the factors in deciding how many machines will be backed up to one backup server, and which backup window will be needed. To calculate the time needed for a backup, the following points must be considered (a worked calculation follows this list):
• The amount of data that will be backed up
  Unfortunately, this number can differ from backup system to backup system. Let's say you have a file server with 20 GB of data. When you do a full backup of this system, it will indeed send 20 GB. But most backup programs also work with incremental or differential backup algorithms, which only back up changed data. So, to figure out the amount of data that is backed up in such an operation, we have to consider the following points:
  • How much data changes between two backups?
  • What does "changed" mean to my backup program?
  Backup programs will normally also compress data. Unfortunately, the compression rate is strongly dependent on the type of file you are backing up, and is therefore hard to predict. For initial calculations, you could take the worst case scenario, where no compression takes place.
• Nominal network speed (commonly expressed in Mbps)
  This is the published speed of your network. Token-ring, for example, has a nominal network speed of 4 or 16 Mbps.
• The practical network speed
  Since a communication protocol typically adds headers, control data and acknowledgments to network frames, not all of the nominal bandwidth will be available for our backup data. As a rule of thumb, the practical capacity is 50-60% for token-ring, FDDI or ATM networks, and 35% for Ethernet networks.
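Putting these three factors together, here is a rough Python sketch of the estimate. The efficiency values are the rules of thumb above, the 20 GB server is the example used earlier, and no compression is assumed (worst case):

  def backup_hours(data_gb, nominal_mbps, efficiency):
      # Estimated hours to back up data_gb gigabytes over a network
      # with the given nominal speed (in Mbps) and practical
      # efficiency factor, assuming no compression (worst case).
      mbytes_per_sec = nominal_mbps * efficiency / 8.0
      return data_gb * 1024 / mbytes_per_sec / 3600

  print(backup_hours(20, 16, 0.55))   # 16 Mbps token-ring -> about 5.2 hours
  print(backup_hours(20, 10, 0.35))   # 10 Mbps Ethernet   -> about 13 hours

If the result does not fit in the available backup window, that is a sign that a different pattern (incremental forever, for example) or a different topology should be considered.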