HP-UX Dynamic Root Disk

Boot Disk Cloning
Benefits and Use Cases
Dusan Baljevic 2013
Acknowledgements
• These slides have been used in various presentations in Australia over the last four years. This is a work-in-progress.
• I bear full responsibility for any errors, even though they are purely unintentional.
• I cannot claim sole credit, nor can I claim that I know everything about Unix. I consider myself a Unix apprentice. For that reason I give special credit to our colleagues Nobuyuki Hirota (TCE&Q BCS ERT), Daniel Bambou (TC&Q BCS ERT), and Leon Strauss (GSE) for their continuous support, advice, comments, and guidance.
• The wisdom of many helped in creating this presentation (seminars at HP, HPWorld, ITRC/HPSC forums, HP Ambassadors and Unix Profession forums, HP Education courses, individual contributions on the Net).
2
What Kind of Use Cases?
• This presentation does not aim to present formal textual, structural, and visual modeling techniques for specifying use cases with HP-UX DRD.
• In software and systems engineering, a use case is a list of steps, typically defining interactions between a role (known in UML as an "actor") and a system, to achieve a goal. The actor can be a human or an external system.
• In systems engineering, use cases are used at a higher level than within software engineering, often representing missions or stakeholder goals. The detailed requirements may then be captured in SysML or as contractual statements.
• Rather, in our context, use cases are practical examples of HP-UX DRD usage.
3
Bootable System Images in Unix/Linux
Many tools are available. For the sake of brevity, to mention a few:
AIX: mksysb, Network Installation Manager (NIM)
HP-UX: make_tape_recovery/make_net_recovery, Dynamic Root Disk (DRD)*, VM mirroring
Linux: Mondo Rescue, Clonezilla
Solaris: ufsdump, fssnap+ufsdump, flash/JumpStart
Tru64: btcreate
4
Why Is Boot Disk Cloning Critical Today?
• Creates a "point-in-time" O/S image.
• Allows online patching and configuration changes of the inactive O/S.
• Makes change management approvals easier because the active O/S is not affected (risk to the running O/S is eliminated).
• Some tasks make dynamic changes to the O/S during the cloning, without affecting the active O/S.
• Boot disk mirroring does not prevent disasters caused by human error.
• If boot disks are on the same controller, mirroring is not perfect protection.
5
Dynamic Root Disk Mission *
• Significantly reduce the downtime needed to perform HP-UX software maintenance
• Reduce the downtime required for recovery from administrative errors
• Perform software update work during normal business hours, or whenever convenient
• Provision systems quickly and efficiently
• Simplify testing
6
Dynamic Root Disk Cycles
Provision: [Re-]Ignite, Recover, Clone
Bare Metal: unused HW
Software Management: Identify, Acquire, Organize, Deploy
Maintain: Monitor, Patches, Applications, Recovery
Upgrade or Recycle: update-ux, re-ignite
7
HP-UX Dynamic Root Disk Features 1 of 4
• Dynamic Root Disk (DRD) provides the ability to clone an HP-UX system image to an inactive disk.
• Supported on HP PA-RISC and Itanium-based systems.
• Supported on hard partitions (nPars), virtual partitions (vPars), and Integrity Virtual Machines (Integrity VMs), running the following operating systems with roots managed by the following volume managers (except as specifically noted for rehosting):
  o HP-UX 11i V2 (11.23) September 2004 or later
  o HP-UX 11i V3 (11.31)
  o LVM (all O/S releases supported by DRD)
  o VxVM 4.1
  o VxVM 5.x
8
HP-UX Dynamic Root Disk Features 2 of 4
• Product: DynRootDisk, version A.3.12.316 (DRD_1131_WEB1301.depot, DRD_1123_WEB1301.depot).
• The target disk must be a single physical disk or SAN LUN.
• The target disk must be large enough to hold all of the root volume file systems. DRD allows the cloning of the root volume group even if the master O/S is spread across multiple disks (it is a one-way, many-to-one operation).
• On Itanium servers, all partitions are created; the EFI and HP-UX partitions are copied. This release of DRD does not copy the HPSP partition.
• The copy of lvmtab on the cloned image is modified by the clone operation to contain information that reflects the desired volume groups when the clone is booted.
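A quick way to confirm which DRD release is actually installed before relying on the support statements above (a minimal check; DynRootDisk is the product tag shown on this slide):
# swlist -l product DynRootDisk
# swlist -a revision DynRootDisk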
9
HP-UX Dynamic Root Disk Features 3 of 4
• Only the contents of vg00 are copied.
• Due to system calls DRD depends on, DRD expects legacy Device Special Files (DSFs) to be present and the legacy naming model to be enabled on HP-UX 11i v3 servers. HP recommends that only partial migration to persistent DSFs be performed.
• If the disk is currently in use by another volume group that is visible on the system, the disk will not be used.
• If the disk contains LVM, VxVM, or boot records but is not in use, you must use the "-x overwrite" option to tell DRD to overwrite the disk. Already-created clones will contain boot records; the drd status command will show the disk that is currently in use as an inactive system image.
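A minimal sketch (not from the slides) for checking that legacy DSFs and the legacy naming model are still in place on an 11i v3 server before cloning; insf -L is assumed here as the usual way to reinstate the legacy model if it has been removed:
# ioscan -m dsf            # map persistent DSFs to their legacy equivalents
# ls /dev/dsk /dev/rdsk    # legacy block/raw DSFs should still be present
# insf -L                  # reinstate the legacy naming model if it was disabled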
10
HP-UX Dynamic Root Disk Features 4 of 4
• All DRD processes, including "drd clone" and "drd runcmd", can be safely interrupted by issuing Control-C (SIGINT) from the controlling terminal or by issuing kill -HUP <pid> (SIGHUP). This action causes DRD to abort processing. Do not interrupt DRD using the kill -9 <pid> command (SIGKILL), which fails to abort safely and does not perform cleanup. Refer to the "Known Issues" list on the DRD web page (http://www.hp.com/go/DRD) for cleanup instructions after drd runcmd is interrupted.
• The Ignite server will only be aware of the clone if it is mounted during a make_*_recovery operation.
• DRD revision A.3.12 supports the SoftReboot feature if a machine is installed with SoftReboot on a supported platform.
11
HP-UX Dynamic Root Disk versus Ignite-UX
• DRD has several advantages over Ignite-UX net and tape images:
  * No tape drive is needed,
  * No impact on network performance,
  * No security issues of transferring data across the network.
• Mirror Disk/UX keeps an "always up-to-date" image of the booted system. DRD provides a "point-in-time" image. The booted system and the clone may then diverge due to changes to either one. Keeping the clone unchanged is the recovery scenario. DRD is not available for HP-UX 11.11, which limits options on those systems.
12
DRD and update-ux Practices
HP-UX Patching Versus Update-UX 1 of 2
• The update-ux method is used not only to update from a lower to a higher version (for example, 11i v2 to v3), but also to update from an older to a newer release within the same version.
• For many reasons, we encourage the use of update-ux with Dynamic Root Disk (DRD).
• If the O/S is upgraded through the update-ux process, best practice recommends cold installs; incremental upgrades might leave obsolete software and libraries behind afterwards.

14
HP-UX Patching Versus Update-UX 2 of 2
We recommend customers develop a release "cycle" through DRD implementation:
• Run update-ux every year (18 months, or a maximum of two years, is acceptable in some circumstances). Only break this cycle if they must have some new functionality in a bi-annual release.
• Unless specifically requested otherwise, the patch/update level should be the latest release, if practicable, or LATEST-1.
15
DRD Minimizes Downtime
HP-UX DRD: Minimizing Planned Downtime
• DRD enables the administrator to create a point-in-time clone of the vg00 volume group:
  • The original vg00 image remains active;
  • The cloned vg00 image remains inactive until needed;
  • Unlike boot disk mirrors, DRD clones are unaffected by vg00 changes.
• DRD is an optional, free product on the 11i v2 and v3 application media.
[Diagram: the original vg00 (boot disk + boot mirror, active) keeps running while patches are installed on the cloned vg00 (clone disk + clone mirror, inactive/patched); activating the clone makes the changes take effect.]
17
DRD Clones Minimize Unplanned Downtime
• Without DRD: in case of O/S mis-configuration, it may be necessary to restore from tape.
• With DRD: in case of O/S mis-configuration, simply activate and boot the clone.
[Diagram: the original vg00 (boot disk + boot mirror) becomes corrupted and unusable; the cloned vg00 (clone disk + clone mirror), previously inactive, is activated and booted.]
18
DRD Clones Minimize Planned Downtime
• Without DRD: software and kernel management may require extended downtime.
• With DRD: install/remove software on the clone while applications continue running.
[Diagram: patches are installed and the kernel is tuned on the cloned vg00 (inactive/patched) while the original vg00 stays active and applications keep running; activating the clone makes the changes take effect.]
19
DRD – Pros and Cons
HP-UX DRD Pros 1 of 2
• Fully supported by HP.
• Full clone.
• Complements other HP solutions by reducing the system downtime required to install and update patches and software.
• The copy operation is currently done by fbackup and frecover.
• The kctune command can be used to modify kernel parameters in the clone.
• The ioconfig file and the entire /dev directory are copied by the DRD clone operation, so instance numbers will not change when the clone is booted.*
• Supports nPars, vPars, and Integrity VMs.
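To illustrate the kctune point above, a hedged sketch of tuning a kernel parameter on the inactive clone only; the tunable and value are examples, not recommendations:
# drd runcmd kctune nproc=4200    # change a tunable on the inactive image
# drd runcmd kctune nproc         # confirm the value recorded on the clone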
21
HP-UX DRD Pros 2 of 2
• No tape drive is needed.
• No impact on network performance.
• No security issues of transferring data across the network.
• All DRD processes, including drd clone and drd runcmd, can be safely interrupted by issuing Control-C (SIGINT) from the controlling terminal or by issuing kill -HUP <pid> (SIGHUP). This action causes DRD to abort processing and perform any necessary cleanup. Do not interrupt DRD using the kill -9 <pid> command (SIGKILL), which fails to abort safely and does not perform cleanup.
22
HP-UX DRD Cons 1 of 4
• The target disk must be a single disk or mirror group only.
• It is not easy to list all differences between the active and inactive images (drd sync * is the simplistic option).
• Cloning should be done when the server's activity is at a minimum.
• DRD can clone a root volume group that is spread across multiple disks, but the target must be a single disk or mirrored pair.
23
HP-UX DRD Cons 2 of 4
• Only the contents of the root volume group are copied. A system that has /opt (or any file system that is patched) outside the root volume group is not suitable for use with DRD.
• DRD does not provide a mechanism for resizing file systems during a drd clone operation. However, after the clone is created, you can manually change file system sizes on the inactive system without needing an immediate reboot. The whitepaper "Using the Dynamic Root Disk Toolset" describes resizing file systems other than /stand. The whitepaper "Using the DRD toolset to extend the /stand file system in an LVM environment" describes resizing the boot (/stand) file system on an inactive system image.
• The current release of DRD does not copy the Itanium Service Partition (s3 or _p3).
24
HP-UX DRD Cons 3 of 4
• The command /opt/drd/lbin/drd_scan_hw_host occasionally hangs. This is a hardware issue, as it tries to scan all connected hardware. Check the hardware before using DRD and, if necessary, remove stale devices with rmsf -x:
# ioscan -s
# lssf -s
• Too many tiny files on root disks can cause a significant performance problem when DRD is used. When there is a large number of files in the root VG (for example, two million), drd clone / drd sync might fail with the error "Out of memory". It is suggested to increase the maxdsiz kernel parameter, use the "-x exclude_list" option, or remove unnecessary user files.
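A hedged sketch of the maxdsiz adjustment mentioned above; the value is only an example, so size it for your own system before re-running drd clone:
# kctune maxdsiz                  # check the current value
# kctune maxdsiz=2147483648       # example: raise the limit, then retry the clone/sync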

25
HP-UX DRD Cons 4 of 4
• We might see the following error message during the execution of drd runcmd if the nsswitch.conf file contains the "hosts: nis" entry:
  Error: Could not contact host "myserver". Make sure the hostname is correct and an absolute pathname is specified (beginning with "/").
• We might see the following error message during the execution of drd runcmd if the nsswitch.conf file contains the "passwd: compat" or "group: compat" entries:
  Error: Permission is denied for the current operation. There is no entry for user id 0 in the user database. Check /etc/passwd and/or the NIS user database.
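A common workaround (an assumption, not stated on the slide) is to let local files be consulted first in /etc/nsswitch.conf while drd runcmd is being used, for example:
hosts:   files dns
passwd:  files
group:   files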

26
Supported Versions of DRD
• Versions of DRD are supported for at least two years.
• Versions not listed in the "Supported DRD Releases" section of the latest Release Notes are no longer supported.
• We always recommend having the latest DRD installed.

27
DRD – Installation and Commands
Installing DRD
• DRD is included in current 11i v2 and v3 operating environments, or ...
• Download and install DRD from http://software.hp.com
Install DRD with swinstall (no reboot required)
# swinstall -s /tmp/DynRootDisk*.depot DynRootDisk
29
DRD Commands
Most DRD tasks require a single command, drd, which supports multiple "modes".
Example
# drd clone -t /dev/disk/diskY -x overwrite=true
Other available modes
# drd                view available modes and options
# drd clone ...      create a DRD clone
# drd mount ...      mount the DRD clone's file systems
# drd umount ...     unmount the DRD clone's file systems
# drd runcmd ...     execute a command on the clone's file systems
# drd activate ...   make the DRD clone the default boot disk after next reboot
# drd deactivate     retain the current active image as the default boot disk
# drd status         display information about active/inactive DRD images

DRD offers several common options that are supported in all modes
# drd mode -?                        view available options
# drd mode -x ?                      view available extended options
# drd mode [-x verbosity=3] ...      specify stdout/stderr verbosity, 0-5
# drd mode [-x log_verbosity=4] ...  specify log file verbosity, 0-5
# drd mode [-qqq|qq|q|v|vv|vvv] ...  alternative to -x verbosity=n
# drd mode [-p] ...                  preview but don't execute the operation
30
DRD – Some Restrictions
HP-UX DRD Restrictions on update-ux and sw* Commands Invoked by drd runcmd
• Options on the Software Distributor commands that can be used with drd runcmd need to ensure that operations are DRD-safe:
• The -F and -x fix=true options are not supported for drd runcmd swverify operations. Use of these options could result in changes to the booted system.
• Double quotation marks and wild card symbols (*, ?) in the command line must be escaped with a backslash character (\), as in the following example:
# drd runcmd swinstall -s depot_server:/var/opt/patches \*
• Files referenced in the command line must both:
  o Reside in the inactive system image
  o Be referenced in the DRD-safe command by the path relative to the mount point of the inactive system image
• This applies to files referenced as arguments for the -C, -f, -S, -X, and -x logfile options for an sw command run by drd runcmd, and for the update-ux command -f option.
32
HP-UX Issue When DRD Versions Differ in the Booted and Cloned Environments 1 of 2
# drd runcmd swinstall -s /tmp/ignite/Ignite-UX-11ALL_C.7.7.98.depot
======= 11/28/12 00:42:22 IST BEGIN Executing Command On Inactive System Image
...
/opt/drd/wrappers/start_fsdaemon[22]: start_fsdaemon: not found.
* Stopping swagentd for drd runcmd
/opt/drd/wrappers/stop_fsdaemon[22]: stop_fsdaemon: not found.
ERROR: Command executed on inactive system image returned an error
- One or more postcommands for /usr/sbin/swinstall failed.
- One or more precommands for /usr/sbin/swinstall failed. /usr/sbin/swinstall will not be executed.
- The precommand "/opt/drd/wrappers/start_fsdaemon" fails with the return code "1".
- The postcommand "/opt/drd/wrappers/stop_fsdaemon" fails with the return code "1".
Executing Command On Inactive System Image failed with 1 error.
* Cleaning Up After Command Execution On Inactive System Image
33
HP-UX Issue When DRD Versions Differ in the Booted and Cloned Environments 2 of 2
This problem is triggered by having one version of DRD installed on the booted system, and a previous release on the inactive image.
If the clone is not very new, just re-run drd clone.
If you do not want to re-create the clone, the following workaround will help:
# drd mount
# cp /var/opt/drd/mnts/sysimage_001/opt/drd/wrappers/common_utils \
     /var/opt/drd/mnts/sysimage_001/opt/drd/wrappers/common_utils.orig
# cp /opt/drd/wrappers/common_utils \
     /var/opt/drd/mnts/sysimage_001/opt/drd/wrappers/common_utils
(If you are booted on the clone, replace "sysimage_001" with "sysimage_000".)
The steps above will enable drd runcmd to succeed. However, the file change would cause a swverify error on the version of DRD in the clone. To repair this, install the new version of DRD to the inactive image:
# drd runcmd swinstall -s <depot> DynRootDisk
34
HP-UX DRD Updates from Multiple-DVD Media
DRD updates directly from media require the September 2010 OE (or later) versions of the DRD, SWM, and SW-DIST products.
In order to use a media depot to do a DRD update, first install the September 2010 or later versions of the DRD, SWM, and SW-DIST products from the media. This must be done before the clone is created, so that the new DRD, SWM, and SW-DIST are on the active system and on the clone.
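A hedged sketch of that preparatory install; the mount point /dvdrom and the SWM and SW-DIST tags are assumptions, so confirm the exact bundle/product tags on your media first:
# swlist -s /dvdrom                              # confirm the tags present on the media depot
# swinstall -s /dvdrom DynRootDisk SW-DIST SWM   # install these before creating the clone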

35
DRD – Usage Scenarios
Creating and Updating DRD Clone
Use the drd clone command to create a DRD clone of the active boot disk:
• DRD identifies the current active boot disk
• DRD builds a similarly structured clone disk
• DRD copies the current disk's file system contents to the clone
• DRD builds a mirror of the clone, too, if requested
• DRD records log messages in /var/opt/drd/drd.log

Identify available disk(s)
# ioscan -funC disk                       list all disks on the system
# lvmadm -l   (or strings /etc/lvmtab*)   which disks are LVM disks?
# vxdisk list                             which disks are VxVM disks?
# diskinfo /dev/rdisk/disk3               verify the disk size

Clone the current active boot disk
# drd clone -t /dev/disk/disk3 \          specify a target disk (required!)
  [-x overwrite=true] \                   overwrite data on target
  [-x mirror_disk=/dev/disk/disk4]        create a mirror of the DRD clone

Update an existing clone (overwrite=true required!)
# drd clone -t /dev/disk/disk3 \          specify a target disk (required!)
  -x overwrite=true \                     overwrite data on target
  [-x mirror_disk=/dev/disk/disk4]        create a mirror of the DRD clone
37
Verifying DRD Clone Status
# drd status
======= 07/23/08 12:13:57 EDT BEGIN Displaying DRD Clone Image Information (user=root) (jobid=myhost)
 * Clone Disk:               /dev/disk/disk3
 * Clone EFI Partition:      Boot loader and AUTO file present
 * Clone Creation Date:      07/18/08 21:07:29 EDT
 * Clone Mirror Disk:        None
 * Mirror EFI Partition:     None
 * Original Disk:            /dev/disk/disk1
 * Original EFI Partition:   Boot loader and AUTO file present
 * Booted Disk:              Original Disk (/dev/disk/disk1)
 * Activated Disk:           Original Disk (/dev/disk/disk1)
======= 07/23/08 12:14:04 EDT END Displaying DRD Clone Image Information succeeded. (user=root) (jobid=myhost)
38
DRD-Safe Commands
• Files in the inactive system image are not accessible, by default, to HP-UX commands.
• "DRD-safe" commands can be executed on the inactive image via drd runcmd, which:
  – Temporarily imports and mounts the inactive image's volume group and file systems,
  – Executes the specified command using executables & files on the inactive image,
  – Ensures that the active image remains untouched,
  – Unmounts and exports the inactive image's file systems and volume group.
• DRD-safe commands currently include:
  swinstall, swremove, swlist, swmodify, swverify, swjob, kctune, update-ux, view
39
Managing Patches with DRD-Safe Commands
• Installing patches and software sometimes requires a reboot and downtime.
• Minimize downtime by installing software/patches/updates on an inactive image.
• Changes take effect when you activate and boot the inactive image.
• Only DRD-safe patches/products can be installed via DRD.

List software installed on the inactive image using the DRD-safe swlist command
# drd runcmd swlist
Check if a product or patch is DRD-safe
# swlist -l fileset -a is_drd_safe product_name|patch
Install software on the inactive image using the DRD-safe swinstall command
# drd runcmd swinstall -s server:/mydepot PHSS_NNNNN
Remove software from the inactive image using the DRD-safe swremove command
# drd runcmd swremove PHSS_NNNNN
View the inactive image SD-UX log file using the DRD-safe view command
# drd runcmd view /var/adm/sw/swagent.log
Update to a more recent 11i v3 media kit
# drd runcmd swinstall -s server:/mydepot Update-UX
# drd runcmd update-ux -s server:/mydepot
# drd runcmd view /var/adm/sw/update-ux.log
# drd runcmd view /var/opt/swm/sw.log
40
Accessing DRD Inactive Images
• The drd runcmd utility only executes DRD-safe executables on an inactive image.
• To access other files on the inactive image, mount the image via drd mount
– Imports the inactive image volume group, typically as drd00,
– Mounts the image file systems under /var/opt/drd/mnts/sysimage_001

• Warnings:
– Be careful not to unintentionally modify the active system image!
– Only use read-only commands like view and diff to access inactive images.

Mount the inactive image file systems
# drd mount
# mount -v
Access the inactive image file systems, being careful not to modify the active
image!
# diff /etc/passwd /var/opt/drd/mnts/sysimage_001/etc/passwd
Unmount the inactive image file systems
# drd umount
41
Activating and Deactivating Inactive DRD
Image
Use drd activate to make the inactive image the primary boot disk
• DRD updates the boot menu
• DRD can optionally reboot the system immediately
Promote the inactive system image to become primary boot disk (with preview)
# drd activate [-x reboot=false] -p
Check the bootpath
# setboot -v
If -x reboot=true wasn't specified, manually reboot
# shutdown -ry 0
If you change your mind before rebooting, use drd deactivate to undo the
activation
# drd deactivate
Use drd status to determine which disk is the currently active boot disk
# drd status
42
DRD Inactive Image Synchronization
• The drd sync command was introduced in release B.11.xx.A.3.5 of Dynamic Root Disk (DRD) to propagate root volume group file system changes from the booted original system to the inactive clone image. Running the drd sync command updates/creates files on the inactive image (clone disk) that were modified on the active image (boot disk) after the last successful execution of the drd clone command.
• A pax archive is used for drd sync, while fbackup/frestore is used for the clone.
• To preview differences between the active image and the DRD inactive image
# drd sync -p
• It creates the file /var/opt/drd/sync/files_to_be_copied_by_drd_sync
• Once the preview has been checked, a resync of the cloned image can be initiated
# drd sync
43
drd sync
Without DRD Sync
1. A system administrator creates a DRD clone on a Thursday.
2. The administrator applies a collection of software changes to the clone on Friday using the drd runcmd command.
3. On Friday, several log files are updated on the booted system.
4. On Saturday, the clone is booted; however, the log files are not up to date, so the administrator must copy over the log files and any other files from the original system that changed after the clone was created – for example, /etc/passwd.

With DRD Sync
1. A system administrator creates a DRD clone on a Thursday.
2. The administrator applies a collection of software changes to the clone on Friday using the drd runcmd command.
3. On Friday, several log files are updated on the booted system.
4. On Saturday, the clone is synced then booted – log files and other files that have changed on the original system have automatically been copied to the clone.
44
drd sync
• The list of files on the active system whose modification date is newer than or equal to the clone creation time provides the initial list of files to be synchronized.
• Trimming the list of files to be synchronized – the following locations are not synchronized: /var/adm/sw, /tmp, /var/tmp, /var/opt/drd/tmp, /stand, /dev/<clone_group>
• Files that have changed on the clone are not synchronized.
• Nonvolatile files in the Software Distributor Installed Products Database (IPD) are not synchronized.
• Volatile files in the Software Distributor Installed Products Database (IPD) are not synchronized.
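To see exactly what would be propagated before any further trimming, the preview file named on the previous slide can simply be inspected (a minimal sketch using only paths shown in these slides):
# drd sync -p
# more /var/opt/drd/sync/files_to_be_copied_by_drd_sync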
45
HP-UX DRD Examples for Different O/S Releases
HP-UX 11i v2:
# drd clone -t /dev/dsk/c2t1d0 -x \
  overwrite=true [-x mirror_disk=/dev/dsk/c3t0d1]
HP-UX 11i v3, use agile views:
# drd clone -t /dev/disk/disk32 -x \
  overwrite=true [-x mirror_disk=/dev/disk/disk4]
Note that all partitions on an Itanium disk are created, and s1 and s2 (_p1 and _p2) are copied.
46
HP-UX 11i v2 To v3 Upgrade with DRD 1 of 3
Original image:  /dev/dsk/c0t0d0 (HP-UX 11i v2)
Clone disk:      /dev/dsk/c1t0d0
What to apply:   HP-UX 11i v3 Update 9, Virtual Server OE depot with patches, depsvr:/var/depots/1131_VSE-OE
Version of DRD:  B.11.31.A.3.3 or later
Objective:       Utilize DRD to help adjust file system sizes when performing an HP-UX 11i v2 to v3 update
• Create the clone:
# drd clone -t /dev/dsk/c1t0d0
• Use drd status to view the clone:
# drd status
47
HP-UX 11i v2 To v3 Upgrade with DRD 2 of 3
• Run update-ux in preview mode on the active disk:
# update-ux -p -s depsvr:/var/depots/1131_VSE-OE HPUX11i-VSE-OE
• Adjust file system sizes on the clone as needed.
• Activate and boot the clone, setting the alternate bootpath to the HP-UX 11i v2 disk:
# drd activate -x alternate_bootdisk=/dev/dsk/c0t0d0 -x reboot=true
• Update the active image to HP-UX 11i v3, Virtual Server OE:
# update-ux -s depsvr:/var/depots/1131_VSE-OE HPUX11i-VSE-OE
There will be a reboot executed at this time.
48
HP-UX 11i v2 To v3 Upgrade with DRD 3 of 3
• Ensure that the software is installed properly:
# swverify \*
• Verify all software has been updated to HP-UX 11i v3:
# swlist
• Ensure the integrity of your updated system by checking the following log files: /var/adm/sw/update-ux.log and /var/opt/swm/swm.log

49
HP-UX How to Interrupt DRD processes
All DRD processes, including “drd clone” and “drd runcmd”,
can be safely interrupted issuing Control-C (SIGINT) from the
controlling terminal or by issuing kill –HUP <pid> (SIGHUP).
This action causes DRD to abort processing. Do not interrupt
DRD using the kill -9 <pid> command (SIGKILL), which fails
to abort safely and does not perform cleanup.
Refer to the “Known Issues” list on the DRD web page
(http://www.hp.com/go/DRD) for cleanup instructions after
drd runcmd is interrupted.
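A minimal sketch of such a safe interrupt (the ps filter is just an example):
# ps -ef | grep "drd "    # find the PID of the running drd clone or drd runcmd
# kill -HUP <pid>         # SIGHUP: DRD aborts processing and cleans up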

50
HP-UX DRD Examples: How to Select Software
• To exclude the single product T1458AA
# drd runcmd update-ux -p -s \
  svr:/var/opt/HPUX_1131_0903_DCOE HPUX11i-DC-OE \
  !T1458AA
• Use "-f software_file" * to read the list of sw_selections from software_file instead of (or in addition to) the command line
# drd runcmd update-ux -s source_location \
  -f software_file
51
HP-UX DRD Rehost Cookbook 1 of 2
• Clone the host1 system to a shared LUN
# drd clone -t /dev/disk/diskX
• Create a system information file for host2
# vi /tmp/sysinfo_host2
SYSINFO_HOSTNAME=host2
SYSINFO_DHCP_ENABLE[0]=0
SYSINFO_MAC_ADDRESS[0]=0x1edb3adea7ab
SYSINFO_IP_ADDRESS[0]=172.16.19.184
SYSINFO_SUBNET_MASK[0]=255.255.255.0
SYSINFO_ROUTE_GATEWAY[0]=172.16.19.1
SYSINFO_ROUTE_DESTINATION[0]=default
SYSINFO_ROUTE_COUNT[0]=1
52
HP-UX DRD Rehost Cookbook 2 of 2
• Execute the drd rehost command, specifying the system information file created in the previous step.
# drd rehost -f /tmp/sysinfo_host2
• Unpresent the LUN from host1, and present it to host2.
• Choose the new LUN from the boot screens and boot host2.
• On both hosts, reinitialize the DRD configuration by deleting the registry
# rm -f /var/opt/drd/registry/registry.xml
• Remove the device special file of the boot device on host2
# rmsf -H 64000/0xfa00/0x6
53
HP-UX Expand Root File System with DRD 1 of 3
For this example, we assume vg00 has only one disk (disk0) in LVM and the DRD clone will be held on disk5. Note, however, that the supported procedure for extending the root file system is to use Ignite-UX!
• Create a clone of the root file system
# drd clone -v -x overwrite=true -t /dev/disk/disk5
• Import and activate the DRD volume group as vgdrd
# mkdir /dev/vgdrd
# mknod /dev/vgdrd/group c 64 0x0a0000
# vgimport /dev/vgdrd /dev/disk/disk5
# vgchange -a y vgdrd
NOTE: The minor number must be unique on the server.
54
HP-UX Expand Root File System with DRD 2 of 3
• Create a new lvol to hold lvol4
# lvcreate -l <lvol4_size> -n lvtmp /dev/vgdrd
• Copy the data from lvol4 to lvtmp
# dd if=/dev/vgdrd/lvol4 of=/dev/vgdrd/lvtmp bs=1024
• Remove lvol4
# lvremove /dev/vgdrd/lvol4
• Assume that there is a need to get to 450 PE on root
# lvextend -l 450 /dev/vgdrd/lvol3
• Recreate lvol4 and move the data back:
# lvcreate -l <lvol4_size> -n lvol4 /dev/vgdrd
# dd if=/dev/vgdrd/lvtmp of=/dev/vgdrd/lvol4 bs=1024
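The slide stops here, but two follow-up steps are normally still needed and are not shown above (a hedged sketch; adjust the file system type to whatever lvol3 actually uses):
# extendfs -F vxfs /dev/vgdrd/rlvol3    # grow the file system into the extended lvol3
# lvremove /dev/vgdrd/lvtmp             # remove the temporary copy volume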

55
HP-UX Expand Root File System with DRD 3 of 3
• Check the size change
# vgdisplay -v vgdrd
• Remove the DRD volume group
# vgexport vgdrd
• Boot from the DRD volume
# /opt/drd/bin/drd activate -x reboot=true
56
HP-UX DRD Update-ux with Single Reboot
• Create a clone disk:
# drd clone -x overwrite=true -t <block_DSF_target_disk>
• Install the OE update on the clone:
# drd runcmd update-ux -v -s /hp/raj.depot/ HPUX11i-VSE-OE \
  !Ignite-UX-11-23 !Ignite-UX-11-31 !T1335DC !IGNITE \
  !Ignite-UX-11-11
• Install required patches from a depot. Install patches, HP products, and non-HP products from a single depot:
# drd runcmd swinstall -x patch_match_target=true \
  -s /hp/non-oe.depot \*
• Boot the clone when ready
# drd activate -x \
  alternate_bootdisk=<block_DSF_current_boot_disk> \
  -x reboot=true
57
HP-UX DRD Debug Session *
Clean the /var/opt/drd directory
# cd /var/opt/drd
# rm -rf tmp/* mapfiles/* drd.log inventory \
  mnts registry sync
Run the DRD session with the following environment set and duplicate the issue
# export INST_DEBUG=5
# export SMDDEBUG_SMDINIT=9
# drd ... -x overwrite=true -x verbosity=D \
  -x log_verbosity=D
Collect an archive of /var/opt/drd
# cd /var/opt
# tar cvf /var/tmp/drd.tar drd
# gzip /var/tmp/drd.tar
Make sure to collect the debug log when the problem can be duplicated, and obtain the archive of /var/opt/drd when opening an L3 case.
58
HP-UX DRD Serial Patch Installation
# swlist -l fileset -a is_drd_safe \
  <product_name|patch>
# swcopy -s /tmp/PHCO_38159.depot \* @ \
  /var/opt/mx/depot11/PHCO_38159.dir
# drd runcmd swinstall -s \
  /var/opt/mx/depot11/PHCO_38159.dir PHCO_38159

59
HP-UX DRD with SWA 1 of 3
• Use drd status to view the clone:
# drd status
• Determine what patches are needed:
  a. Mount the clone:
# drd mount
  b. Create an SWA report:
# swa report -s /var/opt/drd/mnts/sysimage_001
60
HP-UX DRD with SWA 2 of 3
• Download the patches identified by SWA into a depot:
# swa get -t /var/depots/myswa
Patch installation might require special attention. Review any special installation instructions documented in /var/depots/myswa/readBeforeInstall.txt.
61
HP-UX DRD with SWA 3 of 3
• Install everything in the SWA depot:
# drd runcmd swinstall \
  -s /var/depots/myswa -x patch_match_target=true
• Ensure the patches are installed:
# drd runcmd view /var/adm/sw/swagent.log
• Unmount the clone:
# drd umount
• Activate and boot the clone:
# drd activate -x reboot=true
62
HP-UX 11i V2 to V3 Upgrade via DRD
• DRD can be used to update from 11i v2 to 11i v3. Whether that is the best option is another question. Note that there is a difference between solutions working and vendors supporting them. There are many examples in the IT field where vendors refuse to "certify" solutions although they are known to work reasonably well.
• There were some issues with clone activations in certain releases of DRD, though.
• I have done this upgrade via DRD many times.
• You must ensure that the newer version (11i v3) DVDs (or ISO images) are not from a revision date earlier than the one the 11i v2 system was created with.
• Best practice recommends cold installs, as incremental upgrades might leave obsolete software and libraries behind afterwards.
• I enclose herewith another HP document that confirms it (I am sure there are many other documents around).
• Or, a customer's experience:
http://www.hpuxtips.es/?q=node/229
63
HP-UX Using DRD to Change Volume Manager
• Create a clone via DRD
• Boot the clone
• Migrate the LVM disk to VxVM using the vxcp_lvmroot command:
# /etc/vx/bin/vxcp_lvmroot -v disk1
64
HP-UX DRD Multiple Copies of Targets 1 of 11 *
# ioscan -funNC disk
Class  I  H/W Path          Driver  S/W State  H/W Type  Description
===================================================================
disk   5  64000/0xfa00/0x0  esdisk  CLAIMED    DEVICE    HP 73.4GMAU3073NC
          /dev/disk/disk5      /dev/rdisk/disk5
disk   6  64000/0xfa00/0x1  esdisk  CLAIMED    DEVICE    HP 146 GST3146707LC
          /dev/disk/disk6      /dev/rdisk/disk6
disk   7  64000/0xfa00/0x2  esdisk  CLAIMED    DEVICE    HP 73.4GMAU3073NC
          /dev/disk/disk7      /dev/rdisk/disk7
          /dev/disk/disk7_p1   /dev/rdisk/disk7_p1
          /dev/disk/disk7_p2   /dev/rdisk/disk7_p2
          /dev/disk/disk7_p3   /dev/rdisk/disk7_p3
disk   8  64000/0xfa00/0x3  esdisk  CLAIMED    DEVICE    TEAC DV-28E-N
          /dev/disk/disk8      /dev/rdisk/disk8
disk   9  64000/0xfa00/0x6  esdisk  CLAIMED    DEVICE    HP MSA VOLUME
          /dev/disk/disk9      /dev/rdisk/disk9
65
HP-UX DRD Multiple Copies of Targets 2 of 11
# drd clone -t /dev/disk/disk5
=======  03/04/13 11:04:31 EDT  BEGIN Clone System Image (user=root) (jobid=ia643)
 * Reading Current System Information
 * Selecting System Image To Clone
 * Selecting Target Disk
 * Selecting Volume Manager For New System Image
 * Analyzing For System Image Cloning
 * Creating New File Systems
 * Copying File Systems To New System Image
WARNING: The following files could not be copied to the clone.
WARNING: This may be caused by updating files during the copy.
WARNING: Uncopied file: /var/opt/hpvm/common/command.log
 * Copying File Systems To New System Image succeeded with 3 warnings.
 * Making New System Image Bootable
 * Unmounting New System Image Clone
=======  03/04/13 11:44:49 EDT  END Clone System Image succeeded with 3 warnings. (user=root) (jobid=ia643)
66
HP-UX DRD Multiple Copies of Targets 3 of 11
# drd status
=======  03/04/13 11:45:09 EDT  BEGIN Displaying DRD Clone Image Information (user=root) (jobid=ia643)
 * Clone Disk:               /dev/disk/disk5
 * Clone EFI Partition:      AUTO file present, Boot loader present
 * Clone Rehost Status:      SYSINFO.TXT not present
 * Clone Creation Date:      03/04/13 11:04:52 EDT
 * Last Sync Date:           None
 * Clone Mirror Disk:        None
 * Mirror EFI Partition:     None
 * Original Disk:            /dev/disk/disk7
 * Original EFI Partition:   AUTO file present, Boot loader present
 * Original Rehost Status:   SYSINFO.TXT not present
 * Booted Disk:              Original Disk (/dev/disk/disk7)
 * Activated Disk:           Original Disk (/dev/disk/disk7)
=======  03/04/13 11:45:28 EDT  END Displaying DRD Clone Image Information succeeded. (user=root) (jobid=ia643)
67
HP-UX DRD Multiple Copies of Targets 4 of 11
# drd clone -t /dev/disk/disk6
=======  03/04/13 11:46:17 EDT  BEGIN Clone System Image (user=root) (jobid=ia643)
 * Reading Current System Information
 * Selecting System Image To Clone
 * Selecting Target Disk
 * Selecting Volume Manager For New System Image
 * Analyzing For System Image Cloning
 * Creating New File Systems
 * Copying File Systems To New System Image
WARNING: The following files could not be copied to the clone.
WARNING: This may be caused by updating files during the copy.
WARNING: Uncopied file: /var/opt/hpvm/common/command.log
WARNING: Uncopied file: /var/opt/perf/datafiles/logdev
 * Copying File Systems To New System Image succeeded with 4 warnings.
 * Making New System Image Bootable
 * Unmounting New System Image Clone
=======  03/04/13 12:34:52 EDT  END Clone System Image succeeded with 4 warnings. (user=root) (jobid=ia643)
68
HP-UX DRD Multiple Copies of Targets 5 of 11
# drd status
=======  03/04/13 12:35:37 EDT  BEGIN Displaying DRD Clone Image Information (user=root) (jobid=ia643)
 * Clone Disk:               /dev/disk/disk6
 * Clone EFI Partition:      AUTO file present, Boot loader present
 * Clone Rehost Status:      SYSINFO.TXT not present
 * Clone Creation Date:      03/04/13 11:46:37 EDT
 * Last Sync Date:           None
 * Clone Mirror Disk:        None
 * Mirror EFI Partition:     None
 * Original Disk:            /dev/disk/disk7
 * Original EFI Partition:   AUTO file present, Boot loader present
 * Original Rehost Status:   SYSINFO.TXT not present
 * Booted Disk:              Original Disk (/dev/disk/disk7)
 * Activated Disk:           Original Disk (/dev/disk/disk7)
=======  03/04/13 12:35:56 EDT  END Displaying DRD Clone Image Information succeeded. (user=root) (jobid=ia643)
69
HP-UX DRD Multiple Copies of Targets 6 of 11
# ioscan -m dsf
Persistent DSF          Legacy DSF(s)
========================================
/dev/pt/pt3             /dev/rscsi/c5t0d0
                        /dev/rscsi/c4t0d0
/dev/pt/pt4             /dev/rscsi/c6t0d0
/dev/rdisk/disk5        /dev/rdsk/c2t1d0
/dev/rdisk/disk5_p1     /dev/rdsk/c2t1d0s1
/dev/rdisk/disk5_p2     /dev/rdsk/c2t1d0s2
/dev/rdisk/disk5_p3     /dev/rdsk/c2t1d0s3
/dev/rdisk/disk6        /dev/rdsk/c2t0d0
/dev/rdisk/disk6_p1     /dev/rdsk/c2t0d0s1
/dev/rdisk/disk6_p2     /dev/rdsk/c2t0d0s2
/dev/rdisk/disk6_p3     /dev/rdsk/c2t0d0s3
/dev/rdisk/disk7        /dev/rdsk/c3t2d0
/dev/rdisk/disk7_p1     /dev/rdsk/c3t2d0s1
/dev/rdisk/disk7_p2     /dev/rdsk/c3t2d0s2
/dev/rdisk/disk7_p3     /dev/rdsk/c3t2d0s3
/dev/rdisk/disk8        /dev/rdsk/c0t0d0
/dev/rdisk/disk9        /dev/rdsk/c7t0d2
70
HP-UX DRD Multiple Copies of Targets 7 of 11
# ioscan -funneC disk
Class  I  H/W Path             Driver  S/W State  H/W Type  Description
=======================================================================
disk   3  0/0/2/0.0.0.0        sdisk   CLAIMED    DEVICE    TEAC DV-28E-N
          /dev/dsk/c0t0d0      /dev/rdsk/c0t0d0
          Acpi(HWP0002,0)/Pci(2|0)/Ata(Primary,Master)
disk   1  0/1/1/0.0.0          sdisk   CLAIMED    DEVICE    HP 146 GST3146707LC
          /dev/dsk/c2t0d0      /dev/rdsk/c2t0d0
          /dev/dsk/c2t0d0s1    /dev/rdsk/c2t0d0s1
          /dev/dsk/c2t0d0s2    /dev/rdsk/c2t0d0s2
          /dev/dsk/c2t0d0s3    /dev/rdsk/c2t0d0s3
          Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(Part1,SigFB4D90C2-8464-11E2-8000D6217B60E588)/EFIHPUXHPUX.EFI
disk   0  0/1/1/0.1.0          sdisk   CLAIMED    DEVICE    HP 73.4GMAU3073NC
          /dev/dsk/c2t1d0      /dev/rdsk/c2t1d0
          /dev/dsk/c2t1d0s1    /dev/rdsk/c2t1d0s1
          /dev/dsk/c2t1d0s2    /dev/rdsk/c2t1d0s2
          /dev/dsk/c2t1d0s3    /dev/rdsk/c2t1d0s3
          Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0)/HD(Part1,Sig25DEF318-845F-11E2-8000D6217B60E588)/EFIHPUXHPUX.EFI
disk   2  0/1/1/1.2.0          sdisk   CLAIMED    DEVICE    HP 73.4GMAU3073NC
          /dev/dsk/c3t2d0      /dev/rdsk/c3t2d0
          /dev/dsk/c3t2d0s1    /dev/rdsk/c3t2d0s1
          /dev/dsk/c3t2d0s2    /dev/rdsk/c3t2d0s2
          /dev/dsk/c3t2d0s3    /dev/rdsk/c3t2d0s3
          Acpi(HWP0002,100)/Pci(1|1)/Scsi(Pun2,Lun0)/HD(Part1,SigB9B8E4CC-0CE1-11E2-8000D6217B60E588)/EFIHPUXHPUX.EFI
disk   4  0/4/1/1.4.0.0.0.0.2  sdisk   CLAIMED    DEVICE    HP MSA VOLUME
          /dev/dsk/c7t0d2      /dev/rdsk/c7t0d2
          Acpi(HWP0002,400)/Pci(1|1)/Fibre(WWN500805F300061081,Lun4002000000000000)
71
HP-UX DRD Multiple Copies of Targets 8 of 11
EFI Shell version 1.10 [14.62]
Device mapping table
fs0  : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(Part1,SigFB4D90C2-8464-11E2-8000-D6217B60E588)
fs1  : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(Part3,SigFB4D90EA-8464-11E2-8000-D6217B60E588)
fs2  : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0)/HD(Part1,Sig25DEF318-845F-11E2-8000-D6217B60E588)
fs3  : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0)/HD(Part3,Sig25DEF340-845F-11E2-8000-D6217B60E588)
fs4  : Acpi(HWP0002,100)/Pci(1|1)/Scsi(Pun2,Lun0)/HD(Part1,SigB9B8E4CC-0CE1-11E2-8000-D6217B60E588)
fs5  : Acpi(HWP0002,100)/Pci(1|1)/Scsi(Pun2,Lun0)/HD(Part3,SigB9B8E526-0CE1-11E2-8000-D6217B60E588)
blk0 : Acpi(HWP0002,0)/Pci(2|0)/Ata(Primary,Master)
blk1 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)
blk2 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(Part1,SigFB4D90C2-8464-11E2-8000-D6217B60E588)
blk3 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(Part2,SigFB4D90D6-8464-11E2-8000-D6217B60E588)
blk4 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(Part3,SigFB4D90EA-8464-11E2-8000-D6217B60E588)
blk5 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0)
blk6 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0)/HD(Part1,Sig25DEF318-845F-11E2-8000-D6217B60E588)
blk7 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0)/HD(Part2,Sig25DEF32C-845F-11E2-8000-D6217B60E588)
blk8 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0)/HD(Part3,Sig25DEF340-845F-11E2-8000-D6217B60E588)
blk9 : Acpi(HWP0002,100)/Pci(1|1)/Scsi(Pun2,Lun0)
blkA : Acpi(HWP0002,100)/Pci(1|1)/Scsi(Pun2,Lun0)/HD(Part1,SigB9B8E4CC-0CE1-11E2-8000-D6217B60E588)
blkB : Acpi(HWP0002,100)/Pci(1|1)/Scsi(Pun2,Lun0)/HD(Part2,SigB9B8E512-0CE1-11E2-8000-D6217B60E588)
blkC : Acpi(HWP0002,100)/Pci(1|1)/Scsi(Pun2,Lun0)/HD(Part3,SigB9B8E526-0CE1-11E2-8000-D6217B60E588)
blkD : Acpi(HWP0002,400)/Pci(1|1)/Fibre(WWN500805F300061081,Lun4002000000000000)
startup.nsh> echo -off
72
HP-UX DRD Multiple Copies of Targets 9 of 11
fs2:\EFI\HPUX> hpux.efi

After the reboot:
# /usr/lbin/bootpath
/dev/disk/disk5
# setboot -v
Primary bootpath :      0/1/1/1.0x2.0x0 (/dev/rdisk/disk7)
HA Alternate bootpath : 0/0/2/0.0.0x0.0x0 (/dev/rdisk/disk8)
Alternate bootpath :    0/1/1/0.0x1.0x0 (/dev/rdisk/disk5)
Autoboot is ON (enabled)

TEST            CURRENT   DEFAULT
----            -------   -------
all             on        on
SELFTESTS       on        on
  early_cpu     on        on
  late_cpu      on        on
FASTBOOT        on        on
  Platform      on        on
  Full_memory   on        on
  Memory_init   on        on
  IO_HW         on        on
  Chipset       on        on
73
HP-UX DRD Multiple Copies of Targets 10 of 11
# lvlnboot -v
Boot Definitions for Volume Group /dev/vg00:
Physical Volumes belonging in Root Volume Group:
        /dev/disk/disk5_p2 -- Boot Disk
Boot: lvol1     on:     /dev/disk/disk5_p2
Root: lvol3     on:     /dev/disk/disk5_p2
Swap: lvol2     on:     /dev/disk/disk5_p2
Dump: lvol2     on:     /dev/disk/disk5_p2, 0

# drd status
=======  03/04/13 15:03:21 EDT  BEGIN Displaying DRD Clone Image Information (user=root) (jobid=ia643)
 * Clone Disk:               /dev/disk/disk5
 * Clone EFI Partition:      AUTO file present, Boot loader present
 * Clone Rehost Status:      SYSINFO.TXT not present
 * Clone Creation Date:      03/04/13 11:04:52 EDT
 * Last Sync Date:           None
 * Clone Mirror Disk:        None
 * Mirror EFI Partition:     None
 * Original Disk:            /dev/disk/disk7
 * Original EFI Partition:   AUTO file present, Boot loader present
 * Original Rehost Status:   SYSINFO.TXT not present
 * Booted Disk:              Clone Disk (/dev/disk/disk5)
 * Activated Disk:           Original Disk (/dev/disk/disk7)
=======  03/04/13 15:03:40 EDT  END Displaying DRD Clone Image Information succeeded. (user=root) (jobid=ia643)
74
HP-UX DRD Multiple Copies of Targets 11 of 11
Final reboot to go back to the original disk:
# /usr/lbin/bootpath
/dev/disk/disk7
# drd status
=======  03/11/13 16:41:35 EDT  BEGIN Displaying DRD Clone Image Information (user=root) (jobid=ia643)
 * Clone Disk:               /dev/disk/disk6
 * Clone EFI Partition:      AUTO file present, Boot loader present
 * Clone Rehost Status:      SYSINFO.TXT not present
 * Clone Creation Date:      03/04/13 11:46:37 EDT
 * Last Sync Date:           None
 * Clone Mirror Disk:        None
 * Mirror EFI Partition:     None
 * Original Disk:            /dev/disk/disk7
 * Original EFI Partition:   AUTO file present, Boot loader present
 * Original Rehost Status:   SYSINFO.TXT not present
 * Booted Disk:              Original Disk (/dev/disk/disk7)
 * Activated Disk:           Original Disk (/dev/disk/disk7)
=======  03/11/13 16:41:54 EDT  END Displaying DRD Clone Image Information succeeded. (user=root) (jobid=ia643)
75
HP-UX DRD Wishlist
• Here are some other DRD features I would like to see in the future:
• A command to cancel a DRD cloning process (like Solaris lucancel).
• A more comprehensive comparison of files (like Solaris lucompare). Yes, we have "drd sync -p", but it is not as good.
• A command to delete a DRD clone (like Solaris ludelete).
• A command to set or display a useful description for clones and boot environments (like Solaris ludesc).
• A terminal user interface (like Solaris lu).
• Multiple target disks for cloning (currently DRD supports one target and its mirror only).
76
Thank You!

Scaling API-first – The story of a global engineering organization
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Manulife - Insurer Innovation Award 2024
Manulife - Insurer Innovation Award 2024Manulife - Insurer Innovation Award 2024
Manulife - Insurer Innovation Award 2024
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
 

HP-UX Dynamic Root Disk Boot Disk Cloning Benefits and Use Cases by Dusan Baljevic

  • 8. HP-UX Dynamic Root Disk Features 1 of 4 • Dynamic Root Disk (DRD) provides the ability to clone an HP-UX system image to an inactive disk. • Supported on HP PA-RISC and Itanium-based systems. • Supported on hard partitions (nPars), virtual partitions (vPars), and Integrity Virtual Machines (Integrity VMs), running the following operating systems with roots managed by the following Volume Managers (except as specifically noted for rehosting): o HP-UX 11i V2 (11.23) September 2004 or later o HP-UX 11i V3 (11.31) o LVM (all O/S releases supported by DRD) o VxVM 4.1 o VxVM 5.x 8
  • 9. HP-UX Dynamic Root Disk Features 2 of 4 • Product : DynRootDisk Version: A.3.12.316 (DRD_1131_WEB1301.depot) (DRD_1123_WEB1301.depot) • The target disk must be a single physical disk, or SAN LUN. • The target disk must be large enough to hold all of the root volume file systems. DRD allows the cloning of the root volume group even if the master O/S is spread across multiple disks (it is a one-way, many-to-one operation). • On Itanium servers, all partitions are created; EFI and HP-UX partitions are copied. This release of DRD does not copy the HPSP partition. • Copy of lvmtab on the cloned image is modified by the clone operation to contain information that will reflect the desired volume groups when the clone is booted. 9
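A quick pre-check for the sizing requirement above; the target name /dev/disk/diskY is illustrative, and the commands are standard HP-UX tools used elsewhere in this presentation:
# diskinfo /dev/rdisk/diskY          (size of the intended target disk)
# vgdisplay vg00                     (PE size and allocated extents of the root volume group)
# bdf | grep vg00                    (space currently used by the vg00 file systems that must fit on the target)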
  • 10. HP-UX Dynamic Root Disk Features 3 of 4 • Only the contents of vg00 are copied. • Due to system calls DRD depends on, DRD expects legacy Device Special Files (DSFs) to be present and the legacy naming model to be enabled on HP-UX 11i v3 servers. HP recommends only partial migration to persistent DSFs be performed. • If the disk is currently in use by another volume group that is visible on the system, the disk will not be used. • If the disk contains LVM, VxVM, or boot records but is not in use, one must use the “-x overwrite” option to tell DRD to overwrite the disk. Already-created clones will contain boot records; the drd status command will show the disk that is currently in use as an inactive system image. 10
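A quick way to confirm that the legacy naming model is available on an HP-UX 11i v3 host before cloning; insf -L is only relevant if legacy mode was previously disabled, so treat this as a hedged sketch rather than a required step:
# ioscan -m dsf                      (persistent-to-legacy DSF mapping; legacy names such as /dev/dsk/cXtYdZ should appear)
# insf -L                            (reinstates legacy device special files and the legacy naming model if they were removed)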
  • 11. HP-UX Dynamic Root Disk Features 4 of 4 • All DRD processes, including “drd clone” and “drd runcmd”, can be safely interrupted by issuing Control-C (SIGINT) from the controlling terminal or by issuing kill -HUP <pid> (SIGHUP). This action causes DRD to abort processing. Do not interrupt DRD using the kill -9 <pid> command (SIGKILL), which fails to abort safely and does not perform cleanup. Refer to the “Known Issues” list on the DRD web page (http://www.hp.com/go/DRD) for cleanup instructions after drd runcmd is interrupted. • The Ignite server will only be aware of the clone if it is mounted during a make_*_recovery operation. • DRD revision A.3.12 supports the SoftReboot feature if SoftReboot is installed on a supported platform. 11
  • 12. HP-UX Dynamic Root Disk versus Ignite-UX • DRD has several advantages over Ignite-UX net and tape images: * No tape drive is needed, * No impact on network performance will occur, * No security issues of transferring data across the network. • Mirror Disk/UX keeps an "always up-to-date" image of the booted system. DRD provides a "point-in-time" image. The booted system and the clone may then diverge due to changes to either one. Keeping the clone unchanged is the Recovery scenario. DRD is not available for HP-UX 11.11, which limits options on those systems. 12
  • 13. DRD and update-ux Practices
  • 14. HP-UX Patching Versus Update-UX 1 of 2 • The update-ux method is not only used to update from a lower to a higher version (for example, 11i v2 to v3), but also to update from an older to a newer release within the same version. • For many reasons, we encourage usage of update-ux with Dynamic Root Disk (DRD). • If the O/S is upgraded with update-ux, best practice still recommends cold installs; incremental upgrades can leave obsolete software and libraries behind. 14
  • 15. HP-UX Patching Versus Update-UX 2 of 2 We recommend that customers develop a release “cycle” around their DRD implementation: run update-ux every year (18 months, or at most two years, is acceptable in some circumstances). Only break this cycle if some new functionality in a bi-annual release is genuinely needed. Unless specifically requested differently, the patch/update level should be the latest release, if practicable, or LATEST-1. 15
  • 16. DRD is Minimizing Downtime
  • 17. HP-UX DRD: Minimizing Planned Downtime • DRD enables the administrator to create a point-in-time clone of the vg00 volume group: • Original vg00 image remains active; • Cloned vg00 image remains inactive until needed; • Unlike boot disk mirrors, DRD clones are unaffected by vg00 changes. • DRD is an optional, free product on the 11i v2 and v3 application media. (Diagram: patches are installed on the inactive clone disk/mirror while applications remain running on the active vg00; the clone is then activated to make the changes take effect.) 17
  • 18. DRD Clones Minimize Unplanned Downtime • Without DRD: In case of O/S mis-configuration, it may be necessary to restore from tape. • With DRD: In case of O/S mis-configuration, simply activate and boot the clone (see the sketch below). (Diagram: the original vg00 on the boot disk/mirror becomes unusable; the cloned vg00 on the clone disk/mirror is activated and booted.) 18
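A minimal recovery sketch, assuming the misconfigured system is still running and an up-to-date clone exists (disk names are whatever drd status reports on the system in question):
# drd status                         (confirm which disk holds the inactive clone)
# drd activate -x reboot=true        (make the clone the primary boot disk and reboot onto it)
If the original image can no longer boot at all, the clone disk can instead be selected interactively from the boot menu (EFI Boot Manager on Integrity, BCH on PA-RISC), as illustrated later in this presentation.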
  • 19. DRD Clones Minimize Planned Downtime • Without DRD: Software and kernel management may require extended downtime. • With DRD: Install/remove software on the clone while applications continue running; activate the clone to make changes take effect (see the sketch below). (Diagram: patches are installed and the kernel is tuned on the inactive clone disk/mirror while vg00 stays active; the patched clone is then activated.) 19
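A minimal planned-maintenance sketch, assuming a spare disk /dev/disk/diskY and a patch depot depot_server:/var/depots/patches (both names are illustrative):
# drd clone -t /dev/disk/diskY -x overwrite=true                   (build the inactive clone)
# drd runcmd swinstall -s depot_server:/var/depots/patches \*      (patch the clone while applications keep running)
# drd activate -x reboot=true                                      (boot the patched image during the agreed change window)
If the new image misbehaves, booting back to the original disk returns the system to its pre-change state.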
  • 20. DRD – Pros and Cons
  • 21. HP-UX DRD Pros 1 of 2 • Fully supported by HP. • Full clone. • Complements other HP solutions by reducing system downtime required to install and update patches and software. • Copy operation is currently done by fbackup and frecover. • kctune command can be used to modify kernel parameters in the clone. • The ioconfig file and the entire /dev directory are copied by the DRD clone operation, so instance numbers will not change when the clone is booted.* • Supports nPars, vPars, and Integrity VMs. 21
  • 22. HP-UX DRD Pros 2 of 2 • No tape drive is needed. • No impact on network performance. • No security issues of transferring data across the network. • All DRD processes, including drd clone and drd runcmd, can be safely interrupted by issuing Control-C (SIGINT) from the controlling terminal or by issuing kill -HUP <pid> (SIGHUP). This action causes DRD to abort processing and perform any necessary clean up. Do not interrupt DRD using the kill -9 <pid> command (SIGKILL), which fails to abort safely and does not perform cleanup. 22
  • 23. HP-UX DRD Cons 1 of 4 • Target disk must be a single disk or mirror group only. • Not easy to list all differences between Active and Inactive image (drd sync * is the simplistic option). • Cloning should be done when the server’s activity is at a minimum. • DRD can clone root volume group that is spread across multiple disks. The target must be a single disk or mirrored pair. 23
  • 24. HP-UX DRD Cons 2 of 4 • Contents of root volume group are copied. A system that has /opt (or any file system that is patched) not in root volume group is not suitable for use with DRD. • Does not provide a mechanism for resizing file systems during a DRD clone operation. However, after the clone is created, you can manually change file system sizes on the inactive system without needing an immediate reboot. The whitepaper, Using the Dynamic Root Disk Toolset describes resizing file systems other than /stand. The whitepaper Using the DRD toolset to extend the /stand file system in an LVM environment describes resizing the boot (/stand) file system on an inactive system image. • Current release of DRD does not copy the Itanium Service Partition (s3 or _p3). 24
  • 25. HP-UX DRD Cons 3 of 4 • Command /opt/drd/lbin/drd_scan_hw_host hangs occasionally. This is a hardware issue, as it tries to scan all connected hardware. Check the hardware before using DRD, and remove stale devices with rmsf -x if necessary: # ioscan -s # lssf -s • Too many tiny files on the root disks can cause a significant performance problem when DRD is used. When there is a large number of files in the root VG (for example, two million), drd clone / drd sync might fail with the error "Out of memory". It is suggested to increase the maxdsiz kernel parameter, use the "-x exclude_list" option, or remove unnecessary user files (see the sketch below). 25
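A minimal sketch of the memory-related mitigations mentioned above; the 2 GB value is only an example and maxdsiz should be sized to the environment:
# find / -xdev | wc -l               (rough file count for the root file system; repeat per vg00 file system such as /var, /opt, /usr)
# kctune maxdsiz                     (show the current per-process data segment limit)
# kctune maxdsiz=0x80000000          (raise the limit to 2 GB before retrying the clone or sync)
The "-x exclude_list" extended option noted above can also keep large, non-essential directory trees out of the copy; check drd clone -x ? for the exact syntax of the installed DRD release.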
  • 26. HP-UX DRD Cons 4 of 4 • We might see the following error message during the execution of drd runcmd if the nsswitch.conf file contains the "hosts: nis" entry: Error: Could not contact host "myserver". Make sure the hostname is correct and an absolute pathname is specified (beginning with "/"). • We might see the following error message during the execution of drd runcmd if the nsswitch.conf file contains the "passwd: compat" or "group: compat" entries: Error: Permission is denied for the current operation. There is no entry for user id 0 in the user database. Check /etc/passwd and/or the NIS user database. 26
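A quick check of whether the booted system matches the problem cases above (the file is the standard /etc/nsswitch.conf):
# grep -E '^(hosts|passwd|group):' /etc/nsswitch.conf
If the output contains "hosts: nis", "passwd: compat", or "group: compat", the drd runcmd errors shown above may be encountered; consult the DRD release notes for the workaround supported on the release in use.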
  • 27. Supported Versions of DRD • Versions of DRD are supported for at least two years. • Versions not listed in the "Supported DRD Releases" section of the latest Release Notes are no longer supported. • We always recommend to have the latest DRD installed. 27
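The installed DRD release can be checked against the "Supported DRD Releases" list with swlist; DynRootDisk is the product name used elsewhere in this presentation:
# swlist -l product DynRootDisk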
  • 28. DRD – Installation and Commands
  • 29. Installing DRD • DRD is included in current 11i v2 and v3 operating environments or ... • Download and install DRD from http://software.hp.com Install DRD with swinstall (no reboot required) # swinstall -s /tmp/DynRootDisk*.depot DynRootDisk 29
  • 30. DRD Commands Most DRD tasks require a single command, drd, which supports multiple “modes”. Example # drd clone -t /dev/disk/diskY -x overwrite=true Other available modes: # drd (view available modes and options) # drd clone ... (create a DRD clone) # drd mount ... (mount the DRD clone’s file systems) # drd umount ... (unmount the DRD clone’s file systems) # drd runcmd ... (execute a command on the clone’s file systems) # drd activate ... (make the DRD clone the default boot disk after next reboot) # drd deactivate (retain the current active image as the default boot disk) # drd status (display information about active/inactive DRD images) DRD offers several common options that are supported in all modes: # drd mode -? (view available options) # drd mode -x ? (view available extended options) # drd mode [-x verbosity=3] ... (specify stdout/stderr verbosity, 0-5) # drd mode [-x log_verbosity=4] ... (specify log file verbosity, 0-5) # drd mode [-qqq|qq|q|v|vv|vvv] ... (alternative to -x verbosity=n) # drd mode [-p] ... (preview but don’t execute the operation) 30
  • 31. DRD – Some Restrictions
  • 32. HP-UX DRD Restrictions on update-ux and sw* Commands Invoked by drd runcmd • Options on the Software Distributor commands that can be used with drd runcmd need to ensure that operations are DRD-safe: • The -F and -x fix=true options are not supported for drd runcmd swverify operations. Use of these options could result in changes to the booted system. • The use of double quotation marks and wild card symbols (*, ?) in the command line must be escaped with a backslash character (\), as in the following example: # drd runcmd swinstall -s depot_server:/var/opt/patches \* • Files referenced in the command line must both: o Reside in the inactive system image o Be referenced in the DRD-safe command by the path relative to the mount point of the inactive system image • This applies to files referenced as arguments for the -C, -f, -S, -X, and -x logfile options for an sw command run by drd runcmd and the update-ux command -f option. 32
  • 33. HP-UX Issue when DRD versions different in booted and cloned environment 1 of 2 # drd runcmd swinstall -s /tmp/ignite/Ignite-UX-11ALL_C.7.7.98.depot • ======= 11/28/12 00:42:22 IST BEGIN Executing Command On • … /opt/drd/wrappers/start_fsdaemon[22]: start_fsdaemon: not found. * Stopping swagentd for drd runcmd /opt/drd/wrappers/stop_fsdaemon[22]: stop_fsdaemon: not found. ERROR: Command executed on inactive system image returned an error - One or more postcommands for /usr/sbin/swinstall failed. - One or more precommands for /usr/sbin/swinstall failed. /usr/sbin/swinstall will not be executed. - The precommand "/opt/drd/wrappers/start_fsdaemon" fails with the return code "1". - The postcommand "/opt/drd/wrappers/stop_fsdaemon" fails with the return code "1". Executing Command On Inactive System Image failed with 1 error.* Cleaning Up After Command Execution On Inactive System Image 33
  • 34. HP-UX Issue when DRD versions different in booted and cloned environment 2 of 2 This problem is triggered by having one version of DRD installed on the booted system, and a previous release on the inactive image. If the clone is not very new, just re-run drd clone. If you do not want to re-create the clone, the following workaround will help: # drd mount # cp /var/opt/drd/mnts/sysimage_001/opt/drd/wrappers/common_utils /var/opt/drd/mnts/sysimage_001/opt/drd/wrappers/common_utils.orig # cp /opt/drd/wrappers/common_utils /var/opt/drd/mnts/sysimage_001/opt/drd/wrappers/common_utils (If you are booted on the clone, replace "sysimage_001" with "sysimage_000".) The steps above will enable drd runcmd to succeed. However, the file change would cause a swverify error on the version of DRD in the clone. To repair this, install the new version of DRD to the inactive image: # drd runcmd swinstall -s <depot> DynRootDisk 34
  • 35. HP-UX DRD Updates from multiple-DVD media DRD updates directly from media require the September 2010 OE (or later) versions of DRD, SWM and SW-DIST products. In order to use a media depot to do a DRD update, first install September 2010 or later versions of DRD, SWM, and SW-DIST products from the media. This must be done before the clone is created, so the new DRD, SWM, and SW-DIST are on the active system and on the clone. 35
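A hedged sketch of that preparation step, assuming the first OE DVD is mounted at /dvd; the exact bundle tags vary by media, so verify them with swlist before installing:
# swlist -s /dvd | grep -iE 'DynRootDisk|SWM|SW-DIST'    (confirm the names present on the media)
# swinstall -s /dvd DynRootDisk SW-DIST <SWM-bundle-as-listed>
# drd clone -t /dev/disk/diskY -x overwrite=true         (re-create the clone afterwards so both images carry the new tools)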
  • 36. DRD – Usage Scenarios
  • 37. Creating and Updating DRD Clone Use the drd clone command to create a DRD clone of the active boot disk: • DRD identifies the current active boot disk • DRD builds a similarly structured clone disk • DRD copies the current disk’s file system contents to the clone • DRD builds a mirror of the clone, too, if requested • DRD records log messages in /var/opt/drd/drd.log Identify available disk(s): # ioscan -funC disk (list all disks on the system) # lvmadm -l or strings /etc/lvmtab* (which disks are LVM disks?) # vxdisk list (which disks are VxVM disks?) # diskinfo /dev/rdisk/disk3 (verify the disk size) Clone the current active boot disk: # drd clone -t /dev/disk/disk3 [-x overwrite=true] [-x mirror_disk=/dev/disk/disk4] (-t specifies the target disk, required; -x overwrite=true overwrites data on the target; -x mirror_disk creates a mirror of the DRD clone) Update an existing clone (overwrite=true required!): # drd clone -t /dev/disk/disk3 -x overwrite=true [-x mirror_disk=/dev/disk/disk4] 37
  • 38. Verifying DRD Clone Status # drd status ======= 07/23/08 12:13:57 EDT BEGIN Displaying DRD Clone Image Information (user=root) (jobid=myhost) * Clone Disk: /dev/disk/disk3 * Clone EFI Partition: Boot loader and AUTO file present * Clone Creation Date: 07/18/08 21:07:29 EDT * Clone Mirror Disk: None * Mirror EFI Partition: None * Original Disk: /dev/disk/disk1 * Original EFI Partition: Boot loader and AUTO file present * Booted Disk: Original Disk (/dev/disk/disk1) * Activated Disk: Original Disk (/dev/disk/disk1) ======= 07/23/08 12:14:04 EDT END Displaying DRD Clone Image Information succeeded. (user=root) (jobid=myhost) 38
  • 39. DRD-Safe Commands • Files in the inactive system image are not accessible, by default, to HP-UX commands. • “DRD-Safe” commands can be executed on the inactive image via drd runcmd – Temporarily imports and mounts the inactive image’s volume group and file systems, – Executes the specified command using executables & files on the inactive image, – Ensures that the active image remains untouched, – Unmounts and exports the inactive image’s file systems and volume group. • DRD-safe commands currently include: swinstall swremove swlist swmodify swverify swjob kctune update-ux view 39
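Because kctune is on the DRD-safe list, kernel tunables can be staged on the inactive image just like software; a minimal sketch, where the tunable and value are only an example:
# drd runcmd kctune maxuprc=2048
The change is written to the clone's kernel configuration and takes effect when the clone is activated and booted, which is what allows kernel tuning without an immediate reboot of the running system.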
  • 40. Managing Patches with DRD-Safe Commands • Installing patches and software sometimes requires a reboot and downtime. • Minimize downtime by installing software/patches/updates on an inactive image. • Changes take effect when you activate and boot the inactive image. • Only DRD-Safe patches/products can be installed via DRD. List software installed on the inactive image using the DRD-Safe swlist command # drd runcmd swlist Check if product or patch is DRD-Safe # swlist -l fileset -a is_drd_safe product_name|patch Install software on the inactive image using the DRD-Safe swinstall command # drd runcmd swinstall -s server:/mydepot PHSS_NNNNN Remove software from the inactive image using the DRD-Safe swremove command # drd runcmd swremove PHSS_NNNNN View the inactive image SDUX log file using the DRD-Safe view command # drd runcmd view /var/adm/sw/swagent.log Update to a more recent 11i v3 media kit # drd runcmd swinstall -s server:/mydepot Update-UX # drd runcmd update-ux -s server:/mydepot # drd runcmd view /var/adm/sw/update-ux.log # drd runcmd view /var/opt/swm/sw.log 40
  • 41. Accessing DRD Inactive Images • The drd runcmd utility only executes DRD-safe executables on an inactive image. • To access other files on the inactive image, mount the image via drd mount – Imports the inactive image volume group, typically as drd00, – Mounts the image file systems under /var/opt/drd/mnts/sysimage_001 • Warnings: – Be careful not to unintentionally modify the active system image! – Only use read-only commands like view and diff to access inactive images. Mount the inactive image file systems # drd mount # mount -v Access the inactive image file systems, being careful not to modify the active image! # diff /etc/passwd /var/opt/drd/mnts/sysimage_001/etc/passwd Unmount the inactive image file systems # drd umount 41
  • 42. Activating and Deactivating Inactive DRD Image Use drd activate to make the inactive image the primary boot disk • DRD updates the boot menu • DRD can optionally reboot the system immediately Promote the inactive system image to become primary boot disk (with preview) # drd activate [-x reboot=false] -p Check the bootpath # setboot -v If -x reboot=true wasn’t specified, manually reboot # shutdown -ry 0 If you change your mind before rebooting, use drd deactivate to undo the activation # drd deactivate Use drd status to determine which disk is the currently active boot disk # drd status 42
  • 43. DRD Inactive Image Synchronization • The drd sync command was introduced in release B.11.xx.A.3.5 of Dynamic Root Disk (DRD) to propagate root volume group file system changes from the booted original system to the inactive clone image. Running the drd sync command updates/creates the files on the Inactive Image (Clone Disk) that were modified on the Active Image (Boot Disk) after the last successful execution of the drd clone command. A pax archive is used for drd sync, while fbackup/frecover is used for the clone. • To preview differences between the Active Image and the DRD Inactive Image # drd sync -p • It creates the file /var/opt/drd/sync/files_to_be_copied_by_drd_sync • Once the preview is checked, a resync of the cloned image can be initiated # drd sync 43
  • 44. drd sync Without DRD Sync 1. A system administrator creates a DRD clone on a Thursday. 2. The administrator applies a collection of software changes to the clone on Friday using the drd runcmd command. 3. On Friday, several log files are updated on the booted system. 4. On Saturday, the clone is booted, however the log files are not up to date, so the administrator must copy over the log files and any other files from the original system that changed after the clone was created – for example, /etc/passwd With DRD Sync 1. A system administrator creates a DRD clone on a Thursday. 2. The administrator applies a collection of software changes to the clone on Friday using the drd runcmd command. 3. On Friday, several log files are updated on the booted system. 4. On Saturday, the clone is synced then booted – log files and other files that have changed on the original system have automatically been copied to the clone. 44
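A minimal end-to-end sketch of the "With DRD Sync" flow above; every command appears elsewhere in this presentation, and the only assumption is that a patched clone already exists:
# drd sync -p                                              (preview; writes the candidate list to /var/opt/drd/sync/files_to_be_copied_by_drd_sync)
# view /var/opt/drd/sync/files_to_be_copied_by_drd_sync    (review what would be copied)
# drd sync                                                 (copy the changed files from the booted image to the clone)
# drd activate -x reboot=true                              (boot the up-to-date, already-patched clone)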
  • 45. drd sync • The list of files on the active system whose modification date is newer than or equal to the clone creation time provides the initial list of files to be synchronized. Trimming the list of files to be synchronized: the following locations are not synchronized: /var/adm/sw, /tmp, /var/tmp, /var/opt/drd/tmp, /stand, /dev/<clone_group>. Files that have changed on the clone are not synchronized. Nonvolatile files in the Software Distributor Installed Products Database (IPD) are not synchronized. Volatile files in the Software Distributor Installed Products Database (IPD) are not synchronized. 45
  • 46. HP-UX DRD Examples for Different O/S Releases HP-UX 11iv2: # drd clone -t /dev/dsk/c2t1d0 -x overwrite=true [-x mirror_disk=/dev/dsk/c3t0d1] HP-UX 11iv3, use agile views: # drd clone -t /dev/disk/disk32 -x overwrite=true [-x mirror_disk=/dev/disk/disk4] Note that all partitions on Itanium disk are created, and s1 and s2 (_p1 and _p2) are copied. 46
  • 47. HP-UX 11i v2 To v3 Upgrade with DRD 1 of 3 Original image: /dev/dsk/c0t0d0 (HP-UX 11i v2) Clone disk: /dev/dsk/c1t0d0 What to apply: HP-UX 11i v3 Update 9, Virtual Server OE Depot with patches depsvr:/var/depots/1131_VSE-OE Version of DRD: B.11.31.A.3.3 or later Objective: Utilize DRD to help adjust file systems sizes when performing an HP-UX 11i v2 to v3 update • Create the clone: # drd clone –t /dev/dsk/c1t0d0 • Use drd status to view the clone: # drd status 47
  • 48. HP-UX 11i v2 To v3 Upgrade with DRD 2 of 3 • Run update-ux in preview mode on the active disk: # update-ux -p -s depsvr:/var/depots/1131_VSE-OE HPUX11i-VSE-OE • Adjust file system sizes on the clone as needed • Activate and boot the clone, setting the alternate bootpath to the HP-UX 11i v2 disk: # drd activate -x alternate_bootdisk=/dev/dsk/c0t0d0 -x reboot=true • Update the active image to HP-UX 11i v3, Virtual Server OE: # update-ux -s depsvr:/var/depots/1131_VSE-OE HPUX11i-VSE-OE There will be a reboot executed at this time. 48
  • 49. HP-UX 11i v2 To v3 Upgrade with DRD 3 of 3 • Ensure that the software is installed properly: # swverify * • Verify all software has been updated to the HP-UX 11i v3: # swlist • Ensure the integrity of your updated system by checking the following log files /var/adm/sw/update-ux.log and /var/opt/swm/swm.log 49
  • 50. HP-UX How to Interrupt DRD processes All DRD processes, including “drd clone” and “drd runcmd”, can be safely interrupted by issuing Control-C (SIGINT) from the controlling terminal or by issuing kill -HUP <pid> (SIGHUP). This action causes DRD to abort processing. Do not interrupt DRD using the kill -9 <pid> command (SIGKILL), which fails to abort safely and does not perform cleanup. Refer to the “Known Issues” list on the DRD web page (http://www.hp.com/go/DRD) for cleanup instructions after drd runcmd is interrupted. 50
  • 51. HP-UX DRD Examples How to Select Software • To exclude single product T1458AA # drd runcmd update-ux -p -s svr:/var/opt/HPUX_1131_0903_DCOE HPUX11i-DC-OE !T1458AA • Use -f software_file * to read the list of sw_selections from software_file instead of (or in addition to) the command line # drd runcmd update-ux -s source_location -f software_file 51
  • 52. HP-UX DRD Rehost Cookbook 1 of 2 • Clone the host1 system to a shared LUN # drd clone -t /dev/disk/diskX • Create a system information file for host2 # vi /tmp/sysinfo_host2 SYSINFO_HOSTNAME=host2 SYSINFO_DHCP_ENABLE[0]=0 SYSINFO_MAC_ADDRESS[0]=0x1edb3adea7ab SYSINFO_IP_ADDRESS[0]=172.16.19.184 SYSINFO_SUBNET_MASK[0]=255.255.255.0 SYSINFO_ROUTE_GATEWAY[0]=172.16.19.1 SYSINFO_ROUTE_DESTINATION[0]=default SYSINFO_ROUTE_COUNT[0]=1 52
  • 53. HP-UX DRD Rehost Cookbook 2 of 2 • Execute the drd rehost command, specifying the system information file created in the previous step. # drd rehost -f /tmp/sysinfo_host2 • Unpresent the LUN from the host1, and present it to the host2. • Choose the new LUN from the boot screens and boot the host2. • On both hosts reinitialize the DRD configuration by deleting the registry # rm -f /var/opt/drd/registry/registry.xml • Remove the Device Special File of the boot device of the host2 # rmsf -H 64000/0xfa00/0x6 53
  • 54. HP-UX Expand Root File System with DRD 1 of 3 For this example, we assume vg00 has only one disk (disk0) in LVM L1, and the DRD clone will reside on disk5. Note, however, that the officially supported procedure for extending the root file system uses Ignite-UX! • Create a clone of the root filesystem # drd clone -v -x overwrite=true -t /dev/disk/disk5 • Import the clone’s volume group as vgdrd # mkdir /dev/vgdrd # mknod /dev/vgdrd/group c 64 0x0a0000 # vgimport /dev/vgdrd /dev/disk/disk5 # vgchange -a y vgdrd NOTE: The minor number must be unique on the server. 54
  • 55. HP-UX Expand Root File System with DRD 2 of 3 • Create a new lvol to hold lvol4 # lvcreate -l <lvol4_size> -n lvtmp /dev/vgdrd • Copy the data from lvol4 to lvtmp # dd if=/dev/vgdrd/lvol4 of=/dev/vgdrd/lvtmp bs=1024 • Remove lvol4 # lvremove /dev/vgdrd/lvol4 • Assume that there is a need to get to 450 PE on root # lvextend -l 450 /dev/vgdrd/lvol3 • Recreate lvol4 and move the data back: # lvcreate -l <lvol4_size> -n lvol4 /dev/vgdrd # dd if=/dev/vgdrd/lvtmp of=/dev/vgdrd/lvol4 bs=1024 55
  • 56. HP-UX Expand Root File System with DRD 3 of 3 • Check the size change # vgdisplay -v vgdrd • Deactivate and export the DRD volume group (vgexport requires the volume group to be inactive) # vgchange -a n vgdrd # vgexport vgdrd • Boot from the DRD volume # /opt/drd/bin/drd activate -x reboot=true 56
  • 57. HP-UX DRD Update-ux with Single Reboot • Create a clone disk: # drd clone -x overwrite=true -t <block_DSF_target_disk> • Run the OE update on the clone: # drd runcmd update-ux -v -s /hp/raj.depot/ HPUX11i-VSE-OE !Ignite-UX-11-23 !Ignite-UX-11-31 !T1335DC !IGNITE !Ignite-UX-11-11 • Install required patches from a depot. Install Patches, HP Products, non-HP Products from a single depot: # drd runcmd swinstall -x patch_match_target=true -s /hp/non-oe.depot \* • Boot the clone when ready # drd activate -x alternate_bootdisk=<block_DSF_current_boot_disk> -x reboot=true 57
  • 58. HP-UX DRD Debug Session * Clean /var/opt/drd directory # cd /var/opt/drd # rm -rf tmp/* mapfiles/* drd.log inventory mnts registry sync Run DRD session with following environment set and duplicate the issue # export INST_DEBUG=5 # export SMDDEBUG_SMDINIT=9 # drd … -x overwrite=true -x verbosity=D -x log_verbosity=D Collect the archive of /var/opt/drd # cd /var/opt # tar cvf /var/tmp/drd.tar drd # gzip /var/tmp/drd.tar Make sure to collect the debug log when the problem can be duplicated and obtain the archive of /var/opt/drd when opening an L3 case 58
  • 59. HP-UX DRD Serial Patch Installation # swlist -l fileset -a is_drd_safe <product_name|patch> # swcopy -s /tmp/PHCO_38159.depot * @ /var/opt/mx/depot11/PHCO_38159.dir # drd runcmd swinstall -s /var/opt/mx/depot11/PHCO_38159.dir PHCO_38159 59
  • 60. HP-UX DRD with SWA 1 of 3 • Use drd status to view the clone: # drd status • Determine what patches are needed: a. Mount the clone: # drd mount b. Create an SWA report: # swa report -s /var/opt/drd/mnts/sysimage_001 60
  • 61. HP-UX DRD with SWA 2 of 3 • Download the patches identified by SWA into a depot: # swa get -t /var/depots/myswa Patch installation might require special attention. Review any special installation instructions documented in /var/depots/myswa/readBeforeInstall.txt. 61
  • 62. HP-UX DRD with SWA 3 of 3 • Install everything in the 1131swa depot: # drd runcmd swinstall -s /var/depots/1131swa -x patch_match_target=true • Ensure the patches are installed: # drd runcmd view /var/adm/sw/swagent.log • Unmount the clone: # drd umount • Activate and boot the clone: # drd activate -x reboot=true 62
  • 63. HP-UX 11i V2 to V3 Upgrade via DRD • DRD can be used to update from 11i v2 to 11i v3. Whether that is the best option is another question. Note that there is a difference between solutions working and vendors supporting them. There are many examples in the IT field of vendors refusing to “certify” solutions although they are known to work reasonably well. • There were some issues with clone activations in certain releases of DRD, though. • I have done this upgrade via DRD many times. • You must ensure that the newer version (11i v3) DVDs (or ISO images) are not from a revision date earlier than the one the 11i v2 system was created with. • Best practice recommends cold installs, as incremental upgrades might leave obsolete software and libraries behind. • I enclose herewith another HP document that confirms it (I am sure there are many other documents around). • Or, a customer’s experience: http://www.hpuxtips.es/?q=node/229 63
  • 64. HP-UX Using DRD to Change Volume Manager • Create clone via DRD • Boot the clone • Migrate the LVM disk to VxVM using vxcp_lvmroot command: # /etc/vx/bin/vxcp_lvmroot -v disk1 64
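A short, hedged verification sketch after the vxcp_lvmroot run; the resulting disk group name depends on the options used, so list it rather than assuming it:
# vxdg list                          (the new VxVM root disk group should now be present)
# setboot -v                         (confirm the primary boot path points at the VxVM root disk before relying on it)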
  • 65. HP-UX DRD Multiple Copies of Targets 1 of 11 *
# ioscan -funNC disk
Class I  H/W Path          Driver  S/W State  H/W Type  Description
===================================================================
disk  5  64000/0xfa00/0x0  esdisk  CLAIMED    DEVICE    HP 73.4GMAU3073NC
         /dev/disk/disk5     /dev/rdisk/disk5
disk  6  64000/0xfa00/0x1  esdisk  CLAIMED    DEVICE    HP 146 GST3146707LC
         /dev/disk/disk6     /dev/rdisk/disk6
disk  7  64000/0xfa00/0x2  esdisk  CLAIMED    DEVICE    HP 73.4GMAU3073NC
         /dev/disk/disk7     /dev/rdisk/disk7
         /dev/disk/disk7_p1  /dev/rdisk/disk7_p1
         /dev/disk/disk7_p2  /dev/rdisk/disk7_p2
         /dev/disk/disk7_p3  /dev/rdisk/disk7_p3
disk  8  64000/0xfa00/0x3  esdisk  CLAIMED    DEVICE    TEAC DV-28E-N
         /dev/disk/disk8     /dev/rdisk/disk8
disk  9  64000/0xfa00/0x6  esdisk  CLAIMED    DEVICE    HP MSA VOLUME
         /dev/disk/disk9     /dev/rdisk/disk9
65
  • 66. HP-UX DRD Multiple Copies of Targets 2 of 11 # drd clone -t /dev/disk/disk5 ======= 03/04/13 11:04:31 EDT BEGIN Clone System Image (user=root) (jobid=ia643) * Reading Current System Information * Selecting System Image To Clone * Selecting Target Disk * Selecting Volume Manager For New System Image * Analyzing For System Image Cloning * Creating New File Systems * Copying File Systems To New System Image WARNING: The following files could not be copied to the clone. WARNING: This may be caused by updating files during the copy. WARNING: Uncopied file: /var/opt/hpvm/common/command.log * Copying File Systems To New System Image succeeded with 3 warnings. * Making New System Image Bootable * Unmounting New System Image Clone ======= 03/04/13 11:44:49 EDT END Clone System Image succeeded with 3 warnings. (user=root) (jobid=ia643) 66
  • 67. HP-UX DRD Multiple Copies of Targets 3 of 11 # drd status ======= 03/04/13 11:45:09 EDT (user=root) BEGIN Displaying DRD Clone Image Information (jobid=ia643) * Clone Disk: /dev/disk/disk5 * Clone EFI Partition: AUTO file present, Boot loader present * Clone Rehost Status: SYSINFO.TXT not present * Clone Creation Date: 03/04/13 11:04:52 EDT * Last Sync Date: None * Clone Mirror Disk: None * Mirror EFI Partition: None * Original Disk: /dev/disk/disk7 * Original EFI Partition: AUTO file present, Boot loader present * Original Rehost Status: SYSINFO.TXT not present * Booted Disk: Original Disk (/dev/disk/disk7) * Activated Disk: Original Disk (/dev/disk/disk7) ======= 03/04/13 11:45:28 EDT succeeded. (user=root) END Displaying DRD Clone Image Information (jobid=ia643) 67
  • 68. HP-UX DRD Multiple Copies of Targets 4 of 11 # drd clone -t /dev/disk/disk6 ======= 03/04/13 11:46:17 EDT BEGIN Clone System Image (user=root) (jobid=ia643) * Reading Current System Information * Selecting System Image To Clone * Selecting Target Disk * Selecting Volume Manager For New System Image * Analyzing For System Image Cloning * Creating New File Systems * Copying File Systems To New System Image WARNING: The following files could not be copied to the clone. WARNING: This may be caused by updating files during the copy. WARNING: Uncopied file: /var/opt/hpvm/common/command.log WARNING: Uncopied file: /var/opt/perf/datafiles/logdev * Copying File Systems To New System Image succeeded with 4 warnings. * Making New System Image Bootable * Unmounting New System Image Clone ======= 03/04/13 12:34:52 EDT END Clone System Image succeeded with 4 warnings. (user=root) (jobid=ia643) 68
  • 69. HP-UX DRD Multiple Copies of Targets 5 of 11 # drd status ======= 03/04/13 12:35:37 EDT (user=root) BEGIN Displaying DRD Clone Image Information (jobid=ia643) * Clone Disk: /dev/disk/disk6 * Clone EFI Partition: AUTO file present, Boot loader present * Clone Rehost Status: SYSINFO.TXT not present * Clone Creation Date: 03/04/13 11:46:37 EDT * Last Sync Date: None * Clone Mirror Disk: None * Mirror EFI Partition: None * Original Disk: /dev/disk/disk7 * Original EFI Partition: AUTO file present, Boot loader present * Original Rehost Status: SYSINFO.TXT not present * Booted Disk: Original Disk (/dev/disk/disk7) * Activated Disk: Original Disk (/dev/disk/disk7) ======= 03/04/13 12:35:56 EDT succeeded. (user=root) END Displaying DRD Clone Image Information (jobid=ia643) 69
  • 70. HP-UX DRD Multiple Copies of Targets 6 of 11 # ioscan -m dsf Persistent DSF Legacy DSF(s) ======================================== /dev/pt/pt3 /dev/rscsi/c5t0d0 /dev/rscsi/c4t0d0 /dev/pt/pt4 /dev/rscsi/c6t0d0 /dev/rdisk/disk5 /dev/rdsk/c2t1d0 /dev/rdisk/disk5_p1 /dev/rdsk/c2t1d0s1 /dev/rdisk/disk5_p3 /dev/rdsk/c2t1d0s3 /dev/rdisk/disk5_p2 /dev/rdsk/c2t1d0s2 /dev/rdisk/disk6 /dev/rdsk/c2t0d0 /dev/rdisk/disk6_p1 /dev/rdsk/c2t0d0s1 /dev/rdisk/disk6_p2 /dev/rdsk/c2t0d0s2 /dev/rdisk/disk6_p3 /dev/rdsk/c2t0d0s3 /dev/rdisk/disk7 /dev/rdsk/c3t2d0 /dev/rdisk/disk7_p1 /dev/rdsk/c3t2d0s1 /dev/rdisk/disk7_p2 /dev/rdsk/c3t2d0s2 /dev/rdisk/disk7_p3 /dev/rdsk/c3t2d0s3 /dev/rdisk/disk8 /dev/rdsk/c0t0d0 /dev/rdisk/disk9 /dev/rdsk/c7t0d2 70
  • 71. HP-UX DRD Multiple Copies of Targets 7 of 11
# ioscan -funneC disk
Class I  H/W Path             Driver  S/W State  H/W Type  Description
=======================================================================
disk  3  0/0/2/0.0.0.0        sdisk   CLAIMED    DEVICE    TEAC DV-28E-N
         /dev/dsk/c0t0d0   /dev/rdsk/c0t0d0
         Acpi(HWP0002,0)/Pci(2|0)/Ata(Primary,Master)
disk  1  0/1/1/0.0.0          sdisk   CLAIMED    DEVICE    HP 146 GST3146707LC
         /dev/dsk/c2t0d0     /dev/rdsk/c2t0d0
         /dev/dsk/c2t0d0s1   /dev/rdsk/c2t0d0s1
         /dev/dsk/c2t0d0s2   /dev/rdsk/c2t0d0s2
         /dev/dsk/c2t0d0s3   /dev/rdsk/c2t0d0s3
         Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(Part1,SigFB4D90C2-8464-11E2-8000-D6217B60E588)/\EFI\HPUX\HPUX.EFI
disk  0  0/1/1/0.1.0          sdisk   CLAIMED    DEVICE    HP 73.4GMAU3073NC
         /dev/dsk/c2t1d0     /dev/rdsk/c2t1d0
         /dev/dsk/c2t1d0s1   /dev/rdsk/c2t1d0s1
         /dev/dsk/c2t1d0s2   /dev/rdsk/c2t1d0s2
         /dev/dsk/c2t1d0s3   /dev/rdsk/c2t1d0s3
         Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0)/HD(Part1,Sig25DEF318-845F-11E2-8000-D6217B60E588)/\EFI\HPUX\HPUX.EFI
disk  2  0/1/1/1.2.0          sdisk   CLAIMED    DEVICE    HP 73.4GMAU3073NC
         /dev/dsk/c3t2d0     /dev/rdsk/c3t2d0
         /dev/dsk/c3t2d0s1   /dev/rdsk/c3t2d0s1
         /dev/dsk/c3t2d0s2   /dev/rdsk/c3t2d0s2
         /dev/dsk/c3t2d0s3   /dev/rdsk/c3t2d0s3
         Acpi(HWP0002,100)/Pci(1|1)/Scsi(Pun2,Lun0)/HD(Part1,SigB9B8E4CC-0CE1-11E2-8000-D6217B60E588)/\EFI\HPUX\HPUX.EFI
disk  4  0/4/1/1.4.0.0.0.0.2  sdisk   CLAIMED    DEVICE    HP MSA VOLUME
         /dev/dsk/c7t0d2     /dev/rdsk/c7t0d2
         Acpi(HWP0002,400)/Pci(1|1)/Fibre(WWN500805F300061081,Lun4002000000000000)
71
  • 72. HP-UX DRD Multiple Copies of Targets 8 of 11 EFI Shell version 1.10 [14.62] Device mapping table fs0 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(Part1,SigFB4D90C2-8464-11E2-8000-D6217B60E588) fs1 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(Part3,SigFB4D90EA-8464-11E2-8000-D6217B60E588) fs2 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0)/HD(Part1,Sig25DEF318-845F-11E2-8000-D6217B60E588) fs3 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0)/HD(Part3,Sig25DEF340-845F-11E2-8000-D6217B60E588) fs4 : Acpi(HWP0002,100)/Pci(1|1)/Scsi(Pun2,Lun0)/HD(Part1,SigB9B8E4CC-0CE1-11E2-8000-D6217B60E588) fs5 : Acpi(HWP0002,100)/Pci(1|1)/Scsi(Pun2,Lun0)/HD(Part3,SigB9B8E526-0CE1-11E2-8000-D6217B60E588) blk0 : Acpi(HWP0002,0)/Pci(2|0)/Ata(Primary,Master) blk1 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0) blk2 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(Part1,SigFB4D90C2-8464-11E2-8000-D6217B60E588) blk3 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(Part2,SigFB4D90D6-8464-11E2-8000-D6217B60E588) blk4 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(Part3,SigFB4D90EA-8464-11E2-8000-D6217B60E588) blk5 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0) blk6 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0)/HD(Part1,Sig25DEF318-845F-11E2-8000-D6217B60E588) blk7 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0)/HD(Part2,Sig25DEF32C-845F-11E2-8000-D6217B60E588) blk8 : Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun1,Lun0)/HD(Part3,Sig25DEF340-845F-11E2-8000-D6217B60E588) blk9 : Acpi(HWP0002,100)/Pci(1|1)/Scsi(Pun2,Lun0) blkA : Acpi(HWP0002,100)/Pci(1|1)/Scsi(Pun2,Lun0)/HD(Part1,SigB9B8E4CC-0CE1-11E2-8000-D6217B60E588) blkB : Acpi(HWP0002,100)/Pci(1|1)/Scsi(Pun2,Lun0)/HD(Part2,SigB9B8E512-0CE1-11E2-8000-D6217B60E588) blkC : Acpi(HWP0002,100)/Pci(1|1)/Scsi(Pun2,Lun0)/HD(Part3,SigB9B8E526-0CE1-11E2-8000-D6217B60E588) blkD : Acpi(HWP0002,400)/Pci(1|1)/Fibre(WWN500805F300061081,Lun4002000000000000) startup.nsh> echo -off 72
  • 73. HP-UX DRD Multiple Copies of Targets 9 of 11
fs2:\EFI\HPUX> hpux.efi
After the reboot:
# /usr/lbin/bootpath
/dev/disk/disk5
# setboot -v
Primary bootpath : 0/1/1/1.0x2.0x0 (/dev/rdisk/disk7)
HA Alternate bootpath : 0/0/2/0.0.0x0.0x0 (/dev/rdisk/disk8)
Alternate bootpath : 0/1/1/0.0x1.0x0 (/dev/rdisk/disk5)
Autoboot is ON (enabled)
TEST            CURRENT  DEFAULT
----            -------  -------
all             on       on
SELFTESTS       on       on
  early_cpu     on       on
  late_cpu      on       on
FASTBOOT        on       on
  Full_memory   on       on
  Platform      on       on
  Memory_init   on       on
  IO_HW         on       on
  Chipset       on       on
73
  • 74. HP-UX DRD Multiple Copies of Targets 10 of 11 # lvlnboot -v Boot Definitions for Volume Group /dev/vg00: Physical Volumes belonging in Root Volume Group: /dev/disk/disk5_p2 -- Boot Disk Boot: lvol1 on: /dev/disk/disk5_p2 Root: lvol3 on: /dev/disk/disk5_p2 Swap: lvol2 on: /dev/disk/disk5_p2 Dump: lvol2 on: /dev/disk/disk5_p2, 0 # drd status ======= 03/04/13 15:03:21 EDT (user=root) BEGIN Displaying DRD Clone Image Information (jobid=ia643) * Clone Disk: /dev/disk/disk5 * Clone EFI Partition: AUTO file present, Boot loader present * Clone Rehost Status: SYSINFO.TXT not present * Clone Creation Date: 03/04/13 11:04:52 EDT * Last Sync Date: None * Clone Mirror Disk: None * Mirror EFI Partition: None * Original Disk: /dev/disk/disk7 * Original EFI Partition: AUTO file present, Boot loader present * Original Rehost Status: SYSINFO.TXT not present * Booted Disk: Clone Disk (/dev/disk/disk5) * Activated Disk: Original Disk (/dev/disk/disk7) ======= 03/04/13 15:03:40 EDT succeeded. (user=root) END Displaying DRD Clone Image Information (jobid=ia643) 74
  • 75. HP-UX DRD Multiple Copies of Targets 11 of 11
Final reboot to go back to the original disk:
# /usr/lbin/bootpath
/dev/disk/disk7
# drd status
======= 03/11/13 16:41:35 EDT BEGIN Displaying DRD Clone Image Information (user=root) (jobid=ia643)
* Clone Disk: /dev/disk/disk6
* Clone EFI Partition: AUTO file present, Boot loader present
* Clone Rehost Status: SYSINFO.TXT not present
* Clone Creation Date: 03/04/13 11:46:37 EDT
* Last Sync Date: None
* Clone Mirror Disk: None
* Mirror EFI Partition: None
* Original Disk: /dev/disk/disk7
* Original EFI Partition: AUTO file present, Boot loader present
* Original Rehost Status: SYSINFO.TXT not present
* Booted Disk: Original Disk (/dev/disk/disk7)
* Activated Disk: Original Disk (/dev/disk/disk7)
======= 03/11/13 16:41:54 EDT END Displaying DRD Clone Image Information succeeded. (user=root) (jobid=ia643)
75
  • 76. HP-UX DRD Wishlist • Here are some other DRD features I would like to see in the future: • Command to cancel DRD cloning process (like Solaris lucancel). • More comprehensive comparison of files (like Solaris lucompare). Yes, we have "drd sync -p" but it is not as good. • Command to delete DRD clone (like Solaris ludelete). • Command to set or display useful description for clones and boot environments (like Solaris ludesc). • Terminal user interface (like Solaris lu). • Multiple target disks for cloning (currently DRD supports one target and its mirror only). 76

Editor's notes

  1. * Courtesy of Susan Benzel