HP-UX Dynamic Root Disk, Solaris Live Upgrade and AIX Multibos
Dusan Baljevic
Sydney, Australia
© 2009 Dusan Baljevic
Cloning in Major Unix and Linux Releases
• AIX: Alternate Root and Multibos (AIX 5.3 and above)
• HP-UX: Dynamic Root Disk (DRD)
• Linux: Mondo Rescue, Clonezilla
• Solaris: Live Upgrade

August 7, 2009
HP-UX Dynamic Root Disk Features
• Dynamic Root Disk (DRD) provides the ability to clone an HP-UX system image to an inactive disk.
• Supported on HP PA-RISC and Itanium-based systems.
• Supported on hard partitions (nPars), virtual partitions (vPars), and Integrity Virtual Machines (Integrity VMs), running the following operating systems with roots managed by the following volume managers (except as specifically noted for rehosting):
  o HP-UX 11i Version 2 (11.23) September 2004 or later
  o HP-UX 11i Version 3 (11.31)
  o LVM (all O/S releases supported by DRD)
  o VxVM 4.1
  o VxVM 5.0
HP-UX DRD Benefit: Minimizing Planned Downtime
• Without DRD: software management may require extended downtime.
• With DRD: install/remove software on the clone while applications continue running.
[Diagram: patches are installed on the clone while applications remain running. The original vg00 (boot disk and boot mirror, each with lvol1-lvol3) stays active while the cloned vg00 (clone disk and clone mirror) is patched; activating the clone makes the changes take effect, leaving the original vg00 inactive.]
HP-UX Dynamic Root Disk Features continued
• Product: DynRootDisk. Version: A.3.3.1.221 (B.11.xx.A.3.4.x will be the current version number as of September 2009).
• The target disk must be a single physical disk or SAN LUN.
• The target disk must be large enough to hold all of the root volume file systems. DRD allows cloning of the root volume group even if the master O/S is spread across multiple disks (it is a one-way, many-to-one operation).
• On Itanium servers, all partitions are created; EFI and HP-UX partitions are copied. This release of DRD does not copy the HPSP partition.
• The copy of lvmtab on the cloned image is modified by the clone operation to reflect the desired volume groups when the clone is booted.
HP-UX Dynamic Root Disk Features continued
• Only the contents of vg00 are copied.
• Due to the system calls DRD depends on, DRD expects legacy Device Special Files (DSFs) to be present and the legacy naming model to be enabled on HP-UX 11i v3 servers. HP recommends that only partial migration to persistent DSFs be performed.
• If the disk is currently in use by another volume group that is visible on the system, the disk will not be used.
• If the disk contains LVM, VxVM, or boot records but is not in use, one must use the "-x overwrite" option to tell DRD to overwrite the disk. Already-created clones contain boot records; the drd status command shows the disk that is currently in use as an inactive system image.
HP-UX Dynamic Root Disk Features continued
• All DRD processes, including "drd clone" and "drd runcmd", can be safely interrupted by issuing Control-C (SIGINT) from the controlling terminal or by issuing kill -HUP <pid> (SIGHUP). This action causes DRD to abort processing. Do not interrupt DRD using the kill -9 <pid> command (SIGKILL), which fails to abort safely and does not perform cleanup. Refer to the "Known Issues" list on the DRD web page (http://www.hp.com/go/DRD) for cleanup instructions after drd runcmd is interrupted.
• The Ignite server will only be aware of the clone if it is mounted during a make_*_recovery operation.
HP-UX Dynamic Root Disk Features continued
• DRD does not provide a mechanism for resizing file systems during a clone operation.
• After the clone is created, one can manually change file system sizes on the inactive system without an immediate reboot:
1. The whitepaper "Dynamic Root Disk: Quick Start & Best Practices" describes resizing file systems other than /stand. *
2. The same whitepaper describes resizing the boot (/stand) file system on an inactive system image.
• One can avoid multiple mounts and unmounts by using "drd mount" to mount the inactive system image before the first runcmd operation and "drd umount" to unmount the inactive system image after the last runcmd operation. **
• Supports root volume groups with any name (prior to version A.3.0, only vg00 was supported).
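The mount-once pattern described above can be sketched as follows; the depot path and patch name are illustrative:

```shell
# Mount the inactive system image once
drd mount
# Run several SD operations against the already-mounted image
drd runcmd swinstall -s /var/opt/mx/depot11/PHCO_38159.dir PHCO_38159
drd runcmd swlist -l product
# Unmount only after the last runcmd operation
drd umount
```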
HP-UX Dynamic Root Disk Commands
• The basic DRD commands are:
drd clone
drd runcmd
drd activate
drd deactivate
drd mount
drd umount
drd status
drd rehost
drd unrehost
HP-UX Dynamic Root Disk Commands continued
• "drd runcmd" can run specific Software Distributor (SD) commands on the inactive system image only:
swinstall
swremove
swlist
swmodify
swverify
swjob
• Three other commands can be executed by the drd runcmd command:
view: used to view logs produced by commands that were executed by drd runcmd.
kctune: used to modify kernel parameters.
update-ux: performs v3-to-v3 OE updates.
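As a hedged sketch of the last two items, a kernel tunable could be changed on the inactive image and the resulting log reviewed; the tunable name and value are illustrative:

```shell
# Modify a kernel tunable on the inactive system image
drd runcmd kctune maxuprc=2048
# View the agent log produced on the clone
drd runcmd view /var/adm/sw/swagent.log
```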
HP-UX Dynamic Root Disk Features – Dry Run
• A simple mechanism for determining whether a chosen target disk is sufficiently large is to run a preview:
# drd clone -p -v -t <blockDSF>
blockDSF is of the form:
* HP-UX 11i v2: /dev/dsk/cXtXdX
* HP-UX 11i v3: /dev/disk/diskX
• The preview operation includes the disk space analysis needed to see whether the target disk is sufficiently large.
HP-UX Dynamic Root Disk versus Ignite-UX
• DRD has several advantages over Ignite-UX net and tape images:
* No tape drive is needed.
* No impact on network performance.
* No security issues from transferring data across the network.
• MirrorDisk/UX keeps an "always up-to-date" image of the booted system, whereas DRD provides a "point-in-time" image. The booted system and the clone may then diverge due to changes to either one. Keeping the clone unchanged is the recovery scenario.
• DRD is not available for HP-UX 11.11, which limits options on those systems.
HP-UX Dynamic Root Disk Features continued
• Dynamic Root Disk (DRD) provides the ability to clone an HP-UX system image to an inactive disk, and then:
* Perform system maintenance on the clone while the HP-UX 11i system is online.
* Reboot during off-hours, significantly reducing system downtime.
* Utilize the clone for system recovery, if needed.
* Rehost the clone on another system for testing or provisioning purposes (on VMs or blades utilizing Virtual Connect with HP-UX 11i v3 LVM only; on VMs with HP-UX 11i v2 LVM only).
* Perform an OE Update on the clone from an older version of HP-UX 11i v3 to HP-UX 11i v3 update 4 or later.
HP-UX – Dynamic Root Disk and /stand/bootconf
• Errors in /stand/bootconf can cause the drd deactivate command to fail. * (This is no longer true in the current release.)
• The /stand/bootconf file on the booted system should contain device files for just the booted disk and any of its mirrors, not the clone target.
• The /stand/bootconf file that is created on the clone target WILL contain the device file of the target itself (or, on an IPF system, the device file of the HP-UX partition of the target).
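As an illustration of the file's format (the device file below is hypothetical), an entry in /stand/bootconf names a boot device, with a leading "l" marking a disk managed by a volume manager:

```
l /dev/disk/disk7
```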
HP-UX – Dynamic Root Disk – Rehosting
• The initial implementation of drd rehost only supports rehosting of an LVM-managed root volume group on an Integrity virtual machine to another Integrity virtual machine, or an LVM-managed root volume group on a blade with Virtual Connect I/O to another such blade.
• The rehost command does not enforce the restriction to blades and VMs, but other use of this command is not officially supported.
• As of version A.3.3, rehosting support for HP-UX 11i v2 has been added.
HP-UX – Dynamic Root Disk – Rehosting on HP-UX 11.31
• After the clone and system information file have been created, the "drd rehost" command can be used to check the syntax of the system information file and copy it to /EFI/HPUX/SYSINFO.TXT in preparation for processing by auto_parms(1M) during the boot of the image. The following example uses the /var/opt/drd/tmp/newhost.txt system information file:
SYSINFO_HOSTNAME=myhost
SYSINFO_MAC_ADDRESS[0]=0x0017A451E718
SYSINFO_DHCP_ENABLE[0]=0
SYSINFO_IP_ADDRESS[0]=192.2.3.4
SYSINFO_SUBNET_MASK[0]=255.255.255.0
SYSINFO_ROUTE_GATEWAY[0]=192.2.3.75
SYSINFO_ROUTE_DESTINATION[0]=default
SYSINFO_ROUTE_COUNT[0]=1
16
HP-UX – Dynamic Root Disk –
Rehosting on HP-UX 11.31 - continued
•

To check the syntax of the system information file, without
copying it to the /EFI/HPUX/SYSINFO.TXT file, use the
preview option of the drd rehost command:
# drd rehost –p –f 
/var/opt/drd/tmp/newhost.txt

•

To copy it to the /EFI/HPUX/SYSINFO.TXT file, use the
following command:
# drd rehost –f /var/opt/drd/tmp/newhost.txt
August 7, 2009

17
HP-UX – Dynamic Root Disk Examples
# drd clone -t /dev/disk/disk8 -x
overwrite=true
======= 07/02/08 13:09:41 EST BEGIN Clone System Image
(user=root) (jobid=syd59)
* Reading Current System Information
* Selecting System Image To Clone
* Selecting Target Disk
* Selecting Volume Manager For New System Image
* Analyzing For System Image Cloning
* Creating New File Systems
* Copying File Systems To New System Image
* Making New System Image Bootable
* Unmounting New System Image Clone
•

======= 07/02/08 13:42:57 EST END Clone System Image
succeeded. (user=root) (jobid=syd59)

August 7, 2009

18
HP-UX – Dynamic Root Disk Examples continued
# drd status
======= 07/02/08 13:45:42 EST BEGIN Displaying DRD Clone Image Information (user=root) (jobid=syd59)
* Clone Disk:             /dev/disk/disk8
* Clone EFI Partition:    Boot loader and AUTO file present
* Clone Creation Date:    07/02/08 13:09:46 EST
* Clone Mirror Disk:      None
* Mirror EFI Partition:   None
* Original Disk:          /dev/disk/disk7
* Original EFI Partition: Boot loader and AUTO file present
* Booted Disk:            Original Disk (/dev/disk/disk7)
* Activated Disk:         Original Disk (/dev/disk/disk7)
======= 07/02/08 13:45:51 EST END Displaying DRD Clone Image Information succeeded. (user=root) (jobid=syd59)
HP-UX – Dynamic Root Disk Examples continued
# drd activate
======= 07/02/08 13:48:03 EST BEGIN Activate Inactive System Image (user=root) (jobid=syd59)
* Checking for Valid Inactive System Image
* Reading Current System Information
* Locating Inactive System Image
* Determining Bootpath Status
* Primary bootpath : 0/1/1/0.0x1.0x0 before activate.
* Primary bootpath : 0/1/1/1.0x2.0x0 after activate.
* Alternate bootpath : 0/1/1/1.0x2.0x0 before activate.
* Alternate bootpath : 0/1/1/1.0x2.0x0 after activate.
* HA Alternate bootpath : 0/1/1/0.0x1.0x0 before activate.
* HA Alternate bootpath : 0/1/1/0.0x1.0x0 after activate.
* Activating Inactive System Image
======= 07/02/08 13:48:15 EST END Activate Inactive System Image succeeded. (user=root) (jobid=syd59)
HP-UX – Dynamic Root Disk Examples continued
# drd_register_mirror /dev/dsk/c1t2d0 *
# drd_unregister_mirror /dev/dsk/c2t3d0 **
# drd runcmd view /var/adm/sw/swagent.log
# diff /var/spool/crontab/crontab.root \
  /var/opt/drd/mnts/sysimage_001/var/spool/crontab/crontab.root
HP-UX – Dynamic Root Disk Examples continued
# /opt/drd/bin/drd mount
# /usr/bin/bdf
Filesystem          kbytes    used    avail %used Mounted on
/dev/vg00/lvol3    1048576  320456   722432   31% /
/dev/vg00/lvol1     505392   43560   411288   10% /stand
/dev/vg00/lvol8    3395584  797064  2580088   24% /var
/dev/vg00/lvol7    4636672 1990752  2625264   43% /usr
/dev/vg00/lvol4     204800    8656   194680    4% /tmp
/dev/vg00/lvol6    3067904 1961048  1098264   64% /opt
/dev/vg00/lvol5     262144    9320   250912    4% /home
/dev/drd00/lvol3   1048576  320504   722392   31% /var/opt/drd/mnts/sysimage_001
/dev/drd00/lvol1    505392   43560   411288   10% /var/opt/drd/mnts/sysimage_001/stand
/dev/drd00/lvol4    204800    8592   194680    4% /var/opt/drd/mnts/sysimage_001/tmp
/dev/drd00/lvol5    262144    9320   250912    4% /var/opt/drd/mnts/sysimage_001/home
/dev/drd00/lvol6   3067904 1962912  1096416   64% /var/opt/drd/mnts/sysimage_001/opt
/dev/drd00/lvol7   4636672 1991336  2624680   43% /var/opt/drd/mnts/sysimage_001/usr
/dev/drd00/lvol8   3395584  788256  2586968   23% /var/opt/drd/mnts/sysimage_001/var
HP-UX – Dynamic Root Disk – Serial Patch Installation Example
# swcopy -s /tmp/PHCO_38159.depot * @ /var/opt/mx/depot11/PHCO_38159.dir
# drd runcmd swinstall -s /var/opt/mx/depot11/PHCO_38159.dir PHCO_38159
HP-UX – Dynamic Root Disk update-ux Issue *
When executing "drd runcmd update-ux" on the inactive DRD system image, the command errors:
ERROR: The expected depot does not exist at "<depot_name>"
In order to use a directory depot on the active system image, you will need to create a loopback mount to access the depot.
HP-UX – Dynamic Root Disk update-ux Issue - continued
Issue Resolution
The following steps should be followed in order to update the clone from a directory depot that resides on the active system image. The steps must be executed as root, in this order:
1) Mount the clone using "drd mount".
2) Make the directory on the clone and loopback mount the depot. The directory on the clone and the source depot must have the same name, in this case "/var/depots/0909_DCOE"; however, the name can be whatever you choose:
# mkdir -p /var/opt/drd/mnts/sysimage_001/var/depots/0909_DCOE
# mount -F lofs /var/depots/0909_DCOE \
  /var/opt/drd/mnts/sysimage_001/var/depots/0909_DCOE
# drd runcmd update-ux -s /var/depots/0909_DCOE
HP-UX – Dynamic Root Disk update-ux Issue - continued
3) Once the update has completed, unmount the loopback mount and then unmount the clone:
# umount /var/opt/drd/mnts/sysimage_001/var/depots/0909_DCOE
# drd umount

Updates from multiple-DVD media
Updates directly from media are not supported for DRD updates. In order to update from media, you must copy the contents to a directory depot either on a remote server (the easiest method) or to a directory on the active system. If it must be on the active system image, you must first copy the media's contents to a directory depot and then create the clone. If you already have a clone, you can copy the depot and then loopback mount that depot to the clone (see the instructions above).
HP-UX – Dynamic Root Disk update-ux Issue - continued
To copy the software from the DVDs, make a directory on a remote system or the active system image, mount the DVD media, and swcopy its contents into the newly created directory. Unmount the first disk and insert the second DVD to copy its contents into the directory.
# mkdir -p /var/software_depot/DCOE-DVD
# mount /dev/disk/diskX /cdrom
# swcopy -s /cdrom -x enforce_dependencies=false * @ /var/software_depot/DCOE-DVD
# umount /cdrom
# mount /dev/disk/diskX /cdrom    (this is DVD 2)
# swcopy -s /cdrom -x enforce_dependencies=false * @ /var/software_depot/DCOE-DVD
HP-UX – Dynamic Root Disk update-ux Issue - continued
If the depot resides on a remote server (a system other than the one to be updated), proceed with the "drd runcmd update-ux" command and specify the location as the argument of the "-s" parameter:
# drd runcmd update-ux -s <server_name>:/var/software_depot/DCOE-DVD <OE>

If the depot resides in the root group of the system to be cloned, and the clone has not yet been created, create the clone and issue the "drd runcmd update-ux" command, specifying the location of the depot as it appears on the booted system:
# drd runcmd update-ux -s /var/software_depot/DCOE-DVD <OE>

If the depot resides on the system to be updated, in a location other than the root group, or if the clone has already been created, use the loopback mount.
Solaris Live Upgrade Features
• Live Upgrade is a feature of Solaris (since version 2.6) that allows the operating system to be cloned to an offline partition (or partitions), which can then be upgraded with new O/S patches, software, or even a new version of the operating system. The system administrator can then reboot the system on the newly upgraded partition. In case of problems, it is easy to revert to the original partition/version via a single Live Upgrade command followed by a reboot.
• Live Upgrade is especially useful because Sun does not officially support installing O/S patches to active partitions; patching is supported only in single-user mode or to a non-active Live Upgrade partition.
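The cycle described above can be sketched end to end; the BE names, slice, and image path are illustrative:

```shell
# Clone the running BE to a spare slice
lucreate -c solenv1 -m /:/dev/dsk/c0d0s3:ufs -n solenv2
# Upgrade the inactive BE from an O/S image
luupgrade -u -n solenv2 -s /net/Solaris_10/path/to/os_image
# Activate the upgraded BE, then reboot cleanly
luactivate solenv2
init 6
```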
Solaris Live Upgrade Features continued
• Live Upgrade requires multiple partitions on the boot drive (one set of partitions is "active" and the other is "inactive") or on separate drives. These sets of partitions are "boot environments" (BEs).
• A slice to which the root (/) file system is to be copied must be selected. Use the following guidelines when you select a slice for the root (/) file system. The slice must comply with the following:
* Must be a slice from which the system can boot.
* Must meet the recommended minimum size.
* Cannot be a Veritas VxVM volume or a Solstice DiskSuite metadevice.
* Can be on a different physical disk or the same disk as the active root file system.
* For sun4c and sun4m, the root file system must be less than 2 GB.
Solaris Live Upgrade Features continued
• The swap slice cannot be in use by any boot environment except the current boot environment or, if the "-s" option is used, the source boot environment. The boot environment creation fails if the swap slice is being used by any other boot environment, whether the slice contains a swap, UFS, or any other file system.
• Typically, each boot environment requires a minimum of 350 to 800 MB of disk space, depending on the system software configuration.
• When viewing the character interface remotely, such as over a tip line, set the TERM environment variable to VT220. Also, when using the Common Desktop Environment, set the value of the TERM variable to dtterm, rather than xterm.
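The terminal note above amounts to a one-line setting before running the lu commands (Bourne-shell form):

```shell
# Over a tip line, set the terminal type first
TERM=vt220; export TERM
# Under CDE, use dtterm instead of xterm:
# TERM=dtterm; export TERM
```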
Solaris Live Upgrade Features continued
• The lucreate command allows you to include or exclude specific files and directories when creating a new BE.
• Include files and directories with:
  -y include option
  -Y include_list_file option
  items with a leading + in the file used with the -z filter_list option
• Exclude files and directories with:
  -x exclude option
  -f exclude_list_file option
  items with a leading - in the file used with the -z filter_list option
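A hedged sketch combining these options; the BE name, target slice, and paths are illustrative:

```shell
# Create a BE while excluding a directory tree,
# but re-including one subdirectory beneath it
lucreate -n solenv2 -m /:/dev/dsk/c0t1d0s0:ufs \
  -x /export/scratch -y /export/scratch/keep
```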
Solaris Live Upgrade and Special Files
• Files can change in the original boot environment (BE) after the BE is created but NOT YET activated.
• On the first boot of a BE, data is copied from the source BE.
• The list of files to copy is in /etc/lu/synclist. Example:
/etc/default/passwd OVERWRITE
/etc/dfs OVERWRITE
/var/log/syslog APPEND
/var/adm/messages APPEND
Solaris Live Upgrade Examples
• The upgrade of the new BE can be done in several ways (local, net, CD-ROM, flash). All four are done the same way; only the path to the image, given via the -s flag, differs. Examples:
Local file:
# luupgrade -u -n solenv2 -s /Solaris_10/path/to/os_image
Net:
# luupgrade -u -n solenv2 -s /net/Solaris_10/path/to/os_image
CD-ROM:
# luupgrade -u -n solenv2 -s /cdrom/Solaris_10/path/to/os_image
Flash:
# luupgrade -u -n solenv2 -s /path/to/flash.flar
Solaris Live Upgrade Examples
# lucompare BE2
Determining the configuration of BE2 ...
< BE1
> BE2
Processing Global Zone
Comparing / ...
Links differ
01 < /:root:root:33:16877:DIR:
02 > /:root:root:30:16877:DIR:
Sizes differ
01 < /platform/sun4u/boot_archive:root:root:1:33188:REGFIL:76550144:
02 > /platform/sun4u/boot_archive:root:root:1:33188:REGFIL:76922880:
...
Solaris Live Upgrade Examples
# lucreate -c "solenv1" -m /:/dev/dsk/c0d0s3:ufs -n
"solenv2“
*
# lucreate -m /:/dev/md/dsk/d20:ufs,mirror 
-m /:/dev/dsk/c0t0d0s0:detach,attach,preserve 
-n nextBE

**

# lucreate -m /:/dev/md/dsk/d10:ufs,mirror 
-m /:/dev/dsk/c0t0d0s0,d1:attach 
-m /:/dev/dsk/c0t1d0s0,d2:attach -n myserv2

August 7, 2009

***

36
Solaris Live Upgrade Examples
# lucurr
BE1
# ludesc -n BE1 "Dusan BootEnvironment"
# ludesc -n BE1
Dusan BootEnvironment
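If the new BE misbehaves after activation, falling back follows the same activate-and-reboot pattern; the BE names are illustrative:

```shell
# Show all boot environments and which is active
lustatus
# Re-activate the original BE, then reboot cleanly
luactivate solenv1
init 6
```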
Solaris Live Upgrade Examples
# lufslist BE1
boot environment name: BE1
This boot environment is currently active
This boot environment will be active on next system
boot.
Filesystem
Options

fstype

device size Mounted on

Mount

----------------------- -------- ------------ -------------------------------/dev/zvol/dsk/rpool/swap swap
rpool/ROOT/s10s_u6wos_07b zfs
rpool/ROOT/s10s_u6wos_07b/var zfs
-

1073741824 -

-

5119809024 /

-

86450688 /var

rpool

zfs

rpool/export

zfs

95149568 /export

-

hppool

zfs

? /hppool

-

rpool/export/home

zfs

August 7, 2009

7493079552 /rpool

95129088 /export/home

-

38
Clone Commands Compared
Task                          HP-UX DRD                    Solaris Live Upgrade
Create BE                     drd clone                    lucreate
Activate BE                   drd activate                 luactivate
Check status                  drd status                   lustatus
Compare BEs                   Indirect method:             lucompare
                              diff, cmp
Cancel scheduled copy/create  Indirect method:             lucancel
                              remove from crontab
Clone Commands Compared
Task                            HP-UX DRD                  Solaris Live Upgrade
Display BE/system image         drd status                 lucurr
Delete BE                       N/A *                      ludelete
Add or resync data in BE        N/A **                     lumake
Set or display BE description   N/A                        ludesc
Mount BE file systems           drd mount                  lumount
Unmount BE file systems         drd umount                 luumount
Clone Commands Compared
Task                            HP-UX DRD                  Solaris Live Upgrade
Rename BE                       N/A                        lurename
Install software and patches    drd runcmd swinstall       luupgrade
into BE                         drd runcmd update-ux
List BE configuration           N/A                        lufslist
TUI                             N/A                        lu
Clone Commands Compared
Task                            HP-UX DRD                  Solaris Live Upgrade
Rehosting                       drd rehost                 N/A
Modify kernel tunables          drd runcmd kctune          N/A
AIX Alt_disk_install
• The AIX alt_disk_install command allows a root sysadmin to create an alternate rootvg on another set of disk drives. The alternate rootvg can be configured by restoring a mksysb image to it while AIX continues to run from the primary rootvg, or the primary rootvg can be "cloned" to the alternate rootvg and updates and fixes can then be installed on the alternate rootvg while AIX continues to run.
• When the system admin is ready, AIX can be rebooted from the alternate rootvg disks. Changes can be backed out by rebooting AIX from the original primary rootvg.
• In AIX 5.3, alt_disk_install has been replaced by:
alt_disk_copy
alt_disk_mksysb
alt_rootvg_op
The alt_disk_install command will continue to ship as a wrapper to the new commands, but it will not support any new functions, flags, or features.
AIX Alt_disk_install Examples
• Copy the current rootvg to an alternate disk. The following example shows how to clone the rootvg to hdisk1:
# alt_disk_copy -d hdisk1
• Copy rootvg (hdisk1) to hdisk0, and then apply the updates to hdisk0:
# alt_disk_copy -d hdisk0 -b update_all -l
AIX Alt_disk_install Examples
• Copy the current rootvg to two alternate disks, assuming that hdisk2 and hdisk3 are the targets on which the copy should be placed:
# alt_disk_copy -d hdisk2 hdisk3 -O
• Note that the -O flag is required when "cloning" (when planning to boot the rootvg copy on another LPAR or server), but can be detrimental when making a copy which will be booted on the same LPAR or server.
• Before taking the target disks away from the existing AIX image, run:
# alt_rootvg_op -X
• If a rootvg copy has been made for use on the same LPAR/server as the original rootvg (without the -O flag on alt_disk_copy), System Management Services can be used to switch between the primary and backup AIX rootvgs by shutting AIX down, booting to SMS mode, and selecting the disks from which to boot.
AIX Multibos Features
• The multibos command (AIX 5.3 ML3) provides dual AIX boot from the same rootvg. One can run production on one boot image while installing, customizing, or updating the other.
• This is similar to AIX alt_disk_install, with one major difference: with alt_disk_install the boot images must reside on separate disks and separate rootvgs. The multibos capability allows both O/S images to reside on the same disk/rootvg.
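The multibos cycle can be sketched end to end using the commands covered later in this deck; the disk name and images path are illustrative:

```shell
# Create the standby BOS instance within the same rootvg
multibos -Xs
# Update the standby instance from an images directory
multibos -Xac -l /images
# Point the boot list at the standby BLV and reboot into it
bootlist -m normal hdisk0 blv=bos_hd5
shutdown -Fr
```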
[Diagram: MultiBOS keeps two BOS instances within the same rootvg; a reboot switches between them.]
AIX Multibos Features - continued
• The multibos command allows the root-level administrator to create multiple instances of AIX on the same rootvg.
• The multibos setup operation creates a standby Base Operating System (BOS) that boots from a distinct boot logical volume (BLV). This creates two bootable sets of BOS on a given rootvg. The administrator can boot from either instance of BOS by specifying the respective BLV as an argument to the bootlist command or by using system firmware boot operations.
• Two bootable instances of BOS can be simultaneously maintained. The instance of BOS associated with the booted BLV is referred to as the active BOS. The instance of BOS associated with the BLV that has not been booted is referred to as the standby BOS. Currently, only two instances of BOS are supported per rootvg.
AIX Multibos Features - continued
• The multibos command allows the administrator to access, install maintenance and technology levels for, update, and customize the standby BOS either during setup or in subsequent customization operations.
• Installing maintenance and technology updates to the standby BOS does not change system files on the active BOS. This allows for concurrent update of the standby BOS while the active BOS remains in production.
AIX Multibos Features - continued
• The multibos command has the ability to copy or share logical volumes and file systems. By default, the BOS file systems (currently /, /usr, /var, and /opt) and the boot logical volume are copied. The administrator can make copies of additional BOS objects (using the -L flag).
• All other file systems and logical volumes are shared between instances of BOS. Separate log device logical volumes (for example, those that are not contained within the file system) are not supported for copy and will be shared.
• The current rootvg must have enough space for each BOS object copy. BOS object copies are placed on the same disk or disks as the original.
AIX Multibos Features - continued
• The total number of copied logical volumes cannot exceed 128.
• The total number of copied and shared logical volumes is subject to volume group limits.
• /etc/multibos contains multibos data and logs.
• The only supported method of backup and recovery with multibos is mksysb via CD, NIM, or tape. If the standby BOS was mounted during the creation of the mksysb, it is restored and synchronized on the first boot from the restored mksysb. However, if the standby BOS was not mounted during the creation of the mksysb backup, the synchronization on reboot will remove the unusable standby BOS.
AIX Multibos Examples
• Standby BOS setup operation preview:
# multibos -Xsp
• Set up standby BOS:
# multibos -Xs
• Set up standby BOS with the optional image.data file /tmp/image.dat and exclude list /tmp/exclude.lst:
# multibos -Xs -i /tmp/image.dat -e /tmp/exclude.lst
AIX Multibos Examples - continued
• To set up standby BOS and install additional software listed in the bundle file /tmp/bundle and located in the images source /images:
# multibos -Xs -b /tmp/bundle -l /images
• To execute a customization operation on standby BOS with the update_all install option:
# multibos -Xac -l /images
AIX Multibos Examples - continued
• To mount all standby BOS file systems:
# multibos -Xm
• To perform a standby BOS remove operation preview:
# multibos -RXp
• To remove standby BOS:
# multibos -RX
AIX Multibos Examples - continued
• Apply TL6 to the standby BOS. The TL6 lppsource is mounted from the Network Installation Manager (NIM) master. Perform a preview operation and then execute the actual update to the standby instance. Check the log file for any issues:
# mount nimsrv:/export/lpp_source/lpp_sourceaix530603 /mnt
# multibos -Xacp -l /mnt
# multibos -Xac -l /mnt
AIX Multibos Examples - continued
• Back out of the update and return to the previous TL. Set the bootlist and verify that the BLV is set to the previous BOS instance (hd5):
# bootlist -m normal hdisk0 blv=hd5 hdisk0 blv=bos_hd5
# bootlist -m normal -o
hdisk0 blv=hd5
hdisk0 blv=bos_hd5
• Now reboot the system and confirm that it is running at the previous TL.
AIX Multibos Examples – continued *
# multibos -S
MULTIBOS> df
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 1966080 1198800 40% 3364 1% /
/dev/hd2 3670016 299344 92% 42697 10% /usr
...
/dev/hd3 262144 250776 5% 64 1% /tmp
/dev/bos_hd4 1966080 1198800 40% 3364 1% /bos_inst
/dev/bos_hd2 3670016 299344 92% 42697 10% /bos_inst/usr
/dev/bos_hd9var 655360 594456 10% 674 1% /bos_inst/var
/dev/bos_hd10opt 393216 123592 69% 2545 6% /bos_inst/opt
# To exit from the multibos shell:
MULTIBOS> exit
AIX Multibos Examples – continued *
# cat /root/hosts.txt
host1
host2
host3
# export WCOLL=/root/hosts.txt
# dsh multibos -R
# dsh rm /etc/multibos/logs/op.alog
# dsh multibos -sXp
# dsh alog -of /etc/multibos/logs/op.alog
# dsh multibos -sX
# dsh mount nimmast:/export/lpp_source/lpp_sourceaix530603 /mnt
# dsh multibos -Xacp -l /mnt
# dsh multibos -Xac -l /mnt
# dsh alog -of /etc/multibos/logs/op.alog
# dsh umount /mnt
# dsh bootlist -m normal -o
# dsh shutdown -Fr
AIX Check Boot Environment
• After the reboot, confirm the TL level:
# oslevel -r
• Verify which BLV the system booted from:
# bootinfo -v
Features Compared
Feature             HP-UX DRD            Solaris Live Upgrade  AIX Multibos
Licensing           N/A                  N/A                   N/A
Supported           PA-RISC              SPARC                 32-bit POWER
platforms           IA-64                x86-32                64-bit POWER *
                                         x86-64                PowerPC
Supported O/S       HP-UX 11.23          Solaris 2.6           AIX 5L Version 5.3
                    HP-UX 11.31          Solaris 7             with the 5300-03
                                         Solaris 8             Recommended
                                         Solaris 9             Maintenance package
                                         Solaris 10            and later
Current product     DynRootDisk          Live Upgrade 2.0      Part of AIX 6.1
                    B.11.xx.A.3.4.y
                    where xx is 23 or 31
TUI                 Not supported        Supported             Not supported
GUI                 Not supported        Not supported         Not supported
CLI                 Supported            Supported             Supported
Features Compared - continued
Feature             HP-UX                       Solaris                     AIX Multibos
Add mirror disk     Supported directly via      Not supported directly!     N/A
to a clone          command:                    Supported via SVM, ZFS,
                    drd clone -x mirror_disk=   and VxVM RAID-1 setup only
Reboot commands     drd activate -x             Never use reboot(1) or      bootlist -m normal
                    reboot=true or              halt(1) commands.           hdisk0 blv=bos_hd5
                    standard Unix commands      Instead, "init 6" or        then shutdown -Fr or
                                                shutdown(1)                 reboot -q
Automated           Mostly manual process,      lucompare(1)                Mostly manual process,
comparison of       based on:                                               based on:
primary and         drd mount                                               multibos -S
alternate boot      cmp ...                                                 cmp ...
environments        diff ...                                                diff ...
Features Compared - continued
Feature             HP-UX                       Solaris                     AIX Multibos
Mounting            a) "drd mount" does not     a) lumount(1) supports      multibos -S
inactive images     support mounting on         mounting on different       It mounts file
                    different directories       directories                 systems as:
                    b) "drd mount" mounts       b) "lumount" mounts         /bos_inst/...
                    file systems as:            file systems as:
                    /var/opt/drd/mnts/          /.alt.configX
                    sysimage_00X
Change size of      Not supported               Supported                   Supported **
any file systems
during cloning
File system split   Supported *                 Not supported               Not supported
Features Compared - continued

Feature                              | HP-UX                                                              | Solaris                          | AIX Multibos
Simple listing of clone file systems | drd mount, then bdf                                                | Supported via lufslist(1) command | Not directly supported * *
Clone updates (re-sync)              | Supported via full clone recreation: drd clone -t= -x overwrite=true | Supported via command lumake(1)  | Supported via flag "-c" *
Merge file systems during cloning    | Not supported yet                                                  | Supported                        | Not supported

63
Features Compared - continued

Feature                                | HP-UX                                                                                                              | Solaris                                     | AIX Multibos
Change file system type during cloning | Not supported                                                                                                      | Supported; for example, SVM to ZFS migration | Not supported
Supported Volume Manager               | LVM, VxVM                                                                                                          | Solstice DiskSuite *, VxVM, ZFS * *         | AIX LVM
Virtualization support                 | nPar, vPar, Integrity VM                                                                                           | Solaris Zones * * *, Logical Domain         | LPAR, Dynamic LPAR, Live Partition Mobility on POWER6, WPAR
Full-disk copy during cloning          | On Itanium servers, all partitions are created and EFI and HPUX are copied; this release of DRD does not copy the HPSP | Supported                                   | Not supported

64
Features Compared - continued

Feature                           | HP-UX                                                    | Solaris                                                                                                               | AIX Multibos
Multiple target disks for cloning | Not supported                                            | Supported                                                                                                             | Not supported
Dry-run (preview) cloning         | Supported                                                | Supported                                                                                                             | Supported
Swap shared                       | Primary swap is not shared; secondary swap can be shared | Yes, by default                                                                                                       | Yes, by default
On-line cloning                   | Yes                                                      | Sun recommends halting all zones during lucreate or lumount operations, so cloning of Solaris zones is not a truly on-line process | Yes

65
Features Compared - continued

Feature                       | HP-UX                   | Solaris             | AIX Multibos
Exclude files from cloning    | Not supported yet *     | Supported * *       | Supported * * * * *
Include files during cloning  | Not supported yet       | Supported * *       | Supported * * * * *
Simple method to remove clone | Not supported yet * * * | Supported * * * *   | Supported * * * * * *
Clone on the same physical disk (multiple BEs on the same disk) | Not supported | Supported | Supported

66

Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdf
 
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdfHyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdf
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An Introduction
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 

HP-UX Dynamic Root Disk vs Solaris Live Upgrade vs AIX Multibos by Dusan Baljevic

  • 1. HP-UX Dynamic Root Disk, Solaris Live Upgrade and AIX Multibos. Dusan Baljevic, Sydney, Australia, 2009
  • 2. Cloning in Major Unix and Linux Releases. AIX: Alternate Root and Multibos (AIX 5.3 and above). HP-UX: Dynamic Root Disk (DRD). Linux: Mondo Rescue, Clonezilla. Solaris: Live Upgrade. August 7, 2009
  • 3. HP-UX Dynamic Root Disk Features • Dynamic Root Disk (DRD) provides the ability to clone an HP-UX system image to an inactive disk. • Supported on HP PA-RISC and Itanium-based systems. • Supported on hard partitions (nPars), virtual partitions (vPars), and Integrity Virtual Machines (Integrity VMs), running the following operating systems with roots managed by the following volume managers (except as specifically noted for rehosting): HP-UX 11i Version 2 (11.23) September 2004 or later; HP-UX 11i Version 3 (11.31); LVM (all O/S releases supported by DRD); VxVM 4.1; VxVM 5.0
  • 4. HP-UX DRD Benefit: Minimizing Planned Downtime. Without DRD, software management may require extended downtime. With DRD, install/remove software on the clone while applications continue running: install patches on the clone while applications remain running (original vg00 active, cloned vg00 inactive/patched), then activate the clone to make the changes take effect (original vg00 inactive, cloned vg00 active/patched). [Diagram: boot disk and boot mirror versus clone disk and clone mirror, each holding lvol1, lvol2, lvol3]
  • 5. HP-UX Dynamic Root Disk Features continued • Product: DynRootDisk, version A.3.3.1.221 (B.11.xx.A.3.4.x will be the current version number as of September 2009). • The target disk must be a single physical disk or SAN LUN. • The target disk must be large enough to hold all of the root volume file systems. DRD allows the cloning of the root volume group even if the master O/S is spread across multiple disks (it is a one-way, many-to-one operation). • On Itanium servers, all partitions are created; EFI and HP-UX partitions are copied. This release of DRD does not copy the HPSP partition. • The copy of lvmtab on the cloned image is modified by the clone operation to reflect the desired volume groups when the clone is booted.
  • 6. HP-UX Dynamic Root Disk Features continued • Only the contents of vg00 are copied. • Due to system calls DRD depends on, DRD expects legacy Device Special Files (DSFs) to be present and the legacy naming model to be enabled on HP-UX 11i v3 servers. HP recommends that only a partial migration to persistent DSFs be performed. • If the disk is currently in use by another volume group that is visible on the system, the disk will not be used. • If the disk contains LVM, VxVM, or boot records but is not in use, one must use the “-x overwrite” option to tell DRD to overwrite the disk. Already-created clones will contain boot records; the drd status command will show the disk that is currently in use as an inactive system image.
  • 7. HP-UX Dynamic Root Disk Features continued • All DRD processes, including “drd clone” and “drd runcmd”, can be safely interrupted by issuing Control-C (SIGINT) from the controlling terminal or by issuing kill -HUP <pid> (SIGHUP). This action causes DRD to abort processing. Do not interrupt DRD using the kill -9 <pid> command (SIGKILL), which fails to abort safely and does not perform cleanup. Refer to the “Known Issues” list on the DRD web page (http://www.hp.com/go/DRD) for cleanup instructions after drd runcmd is interrupted. • The Ignite server will only be aware of the clone if it is mounted during a make_*_recovery operation.
  • 8. HP-UX Dynamic Root Disk Features continued • DRD does not provide a mechanism for resizing file systems during a clone operation. After the clone is created, one can manually change file system sizes on the inactive system without an immediate reboot: the whitepaper “Dynamic Root Disk: Quick Start & Best Practices” describes resizing file systems other than /stand, and also resizing the boot (/stand) file system, on an inactive system image. • One can avoid multiple mounts and unmounts by using “drd mount” to mount the inactive system image before the first runcmd operation and “drd umount” to unmount the inactive system image after the last runcmd operation. • Supports root volume groups with any name (prior to version A.3.0, only vg00 was possible).
  • 9. HP-UX Dynamic Root Disk Commands • The basic DRD commands are: drd clone, drd runcmd, drd activate, drd deactivate, drd mount, drd umount, drd status, drd rehost, drd unrehost
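Taken together, these commands form the usual patch cycle: clone, patch the inactive image, activate, reboot. Below is a minimal sketch assuming a hypothetical target disk and patch depot (the paths are illustrative, not from the slides); the guard makes the script a harmless no-op on systems without DRD:

```shell
#!/bin/sh
# Hypothetical target disk and patch depot -- adjust for your system
TARGET=/dev/disk/disk8
DEPOT=/var/opt/mx/depot11/PHCO_38159.dir

if command -v drd >/dev/null 2>&1; then
    drd clone -v -t "$TARGET" -x overwrite=true   # clone vg00 to the target disk
    drd runcmd swinstall -s "$DEPOT" PHCO_38159   # patch the inactive image
    drd activate                                  # boot the clone at next reboot
else
    echo "drd not installed; nothing to do"
fi
```

On success the patched clone becomes the boot disk at the next scheduled reboot; drd deactivate reverses the choice before the reboot happens.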
  • 10. HP-UX Dynamic Root Disk Commands continued • “drd runcmd” can run specific Software Distributor (SD) commands on the inactive system image only: swinstall, swremove, swlist, swmodify, swverify, swjob • Three other commands can be executed by the drd runcmd command: view, used to view logs produced by commands that were executed by drd runcmd; kctune, used to modify kernel parameters; update-ux, which performs v3-to-v3 OE updates
  • 11. HP-UX Dynamic Root Disk Features – Dry Run • A simple mechanism for determining if a chosen target disk is sufficiently large is to run a preview: # drd clone -p -v -t <blockDSF> blockDSF is of the form: * HP-UX 11i v2: /dev/dsk/cXtXdX * HP-UX 11i v3: /dev/disk/diskX • The preview operation includes the disk space analysis needed to see if the target disk is sufficiently large.
  • 12. HP-UX Dynamic Root Disk versus Ignite-UX • DRD has several advantages over Ignite-UX net and tape images: * No tape drive is needed. * No impact on network performance will occur. * No security issues of transferring data across the network. • MirrorDisk/UX keeps an "always up-to-date" image of the booted system, whereas DRD provides a "point-in-time" image. The booted system and the clone may then diverge due to changes to either one; keeping the clone unchanged is the recovery scenario. • DRD is not available for HP-UX 11.11, which limits options on those systems.
  • 13. HP-UX Dynamic Root Disk Features continued • Dynamic Root Disk (DRD) provides the ability to clone an HP-UX system image to an inactive disk, and then: * Perform system maintenance on the clone while the HP-UX 11i system is online. * Reboot during off-hours, significantly reducing system downtime. * Utilize the clone for system recovery, if needed. * Rehost the clone on another system for testing or provisioning purposes (on VMs or blades utilizing Virtual Connect, HP-UX 11i v3 LVM only; VMs with HP-UX 11i v2, LVM only). * Perform an OE Update on the clone from an older version of HP-UX 11i v3 to HP-UX 11i v3 update 4 or later.
  • 14. HP-UX – Dynamic Root Disk and /stand/bootconf • Errors in /stand/bootconf can cause the drd deactivate command to fail. (This is no longer true in the current release.) • The /stand/bootconf file on the booted system should contain device files for just the booted disk and any of its mirrors, not the clone target. • The /stand/bootconf file that is created on the clone target WILL contain the device file of the target itself (or, on an IPF system, the device file of the HP-UX partition of the target).
  • 15. HP-UX – Dynamic Root Disk – Rehosting • The initial implementation of drd rehost only supports rehosting of an LVM-managed root volume group on an Integrity virtual machine to another Integrity virtual machine, or an LVM-managed root volume group on a Blade with Virtual Connect I/O to another such Blade. • The rehost command does not enforce the restriction to blades and VMs, but other use of this command is not officially supported. • As of version A.3.3, rehosting support for HP-UX 11i v2 has been added.
  • 16. HP-UX – Dynamic Root Disk – Rehosting on HP-UX 11.31 • After the clone and system information file have been created, the “drd rehost” command can be used to check the syntax of the system information file and copy it to /EFI/HPUX/SYSINFO.TXT in preparation for processing by auto_parms(1M) during the boot of the image. The following example uses the /var/opt/drd/tmp/newhost.txt system information file: SYSINFO_HOSTNAME=myhost SYSINFO_MAC_ADDRESS[0]=0x0017A451E718 SYSINFO_DHCP_ENABLE[0]=0 SYSINFO_IP_ADDRESS[0]=192.2.3.4 SYSINFO_SUBNET_MASK[0]=255.255.255.0 SYSINFO_ROUTE_GATEWAY[0]=192.2.3.75 SYSINFO_ROUTE_DESTINATION[0]=default SYSINFO_ROUTE_COUNT[0]=1
  • 17. HP-UX – Dynamic Root Disk – Rehosting on HP-UX 11.31 - continued • To check the syntax of the system information file, without copying it to the /EFI/HPUX/SYSINFO.TXT file, use the preview option of the drd rehost command: # drd rehost -p -f /var/opt/drd/tmp/newhost.txt • To copy it to the /EFI/HPUX/SYSINFO.TXT file, use the following command: # drd rehost -f /var/opt/drd/tmp/newhost.txt
  • 18. HP-UX – Dynamic Root Disk Examples # drd clone -t /dev/disk/disk8 -x overwrite=true ======= 07/02/08 13:09:41 EST BEGIN Clone System Image (user=root) (jobid=syd59) * Reading Current System Information * Selecting System Image To Clone * Selecting Target Disk * Selecting Volume Manager For New System Image * Analyzing For System Image Cloning * Creating New File Systems * Copying File Systems To New System Image * Making New System Image Bootable * Unmounting New System Image Clone ======= 07/02/08 13:42:57 EST END Clone System Image succeeded. (user=root) (jobid=syd59)
  • 19. HP-UX – Dynamic Root Disk Examples continued # drd status ======= 07/02/08 13:45:42 EST BEGIN Displaying DRD Clone Image Information (user=root) (jobid=syd59) * Clone Disk: /dev/disk/disk8 * Clone EFI Partition: Boot loader and AUTO file present * Clone Creation Date: 07/02/08 13:09:46 EST * Clone Mirror Disk: None * Mirror EFI Partition: None * Original Disk: /dev/disk/disk7 * Original EFI Partition: Boot loader and AUTO file present * Booted Disk: Original Disk (/dev/disk/disk7) * Activated Disk: Original Disk (/dev/disk/disk7) ======= 07/02/08 13:45:51 EST END Displaying DRD Clone Image Information succeeded. (user=root) (jobid=syd59)
  • 20. HP-UX – Dynamic Root Disk Examples continued # drd activate ======= 07/02/08 13:48:03 EST BEGIN Activate Inactive System Image (user=root) (jobid=syd59) * Checking for Valid Inactive System Image * Reading Current System Information * Locating Inactive System Image * Determining Bootpath Status * Primary bootpath : 0/1/1/0.0x1.0x0 before activate. * Primary bootpath : 0/1/1/1.0x2.0x0 after activate. * Alternate bootpath : 0/1/1/1.0x2.0x0 before activate. * Alternate bootpath : 0/1/1/1.0x2.0x0 after activate. * HA Alternate bootpath : 0/1/1/0.0x1.0x0 before activate. * HA Alternate bootpath : 0/1/1/0.0x1.0x0 after activate. * Activating Inactive System Image ======= 07/02/08 13:48:15 EST END Activate Inactive System Image succeeded. (user=root) (jobid=syd59)
  • 21. HP-UX – Dynamic Root Disk Examples continued # drd_register_mirror /dev/dsk/c1t2d0 # drd_unregister_mirror /dev/dsk/c2t3d0 # drd runcmd view /var/adm/sw/swagent.log # diff /var/spool/crontab/crontab.root /var/opt/drd/mnts/sysimage_001/var/spool/crontab/crontab.root
  • 22. HP-UX – Dynamic Root Disk Examples continued # /opt/drd/bin/drd mount # /usr/bin/bdf
Filesystem          kbytes    used    avail  %used  Mounted on
/dev/vg00/lvol3    1048576  320456   722432    31%  /
/dev/vg00/lvol1     505392   43560   411288    10%  /stand
/dev/vg00/lvol8    3395584  797064  2580088    24%  /var
/dev/vg00/lvol7    4636672 1990752  2625264    43%  /usr
/dev/vg00/lvol4     204800    8656   194680     4%  /tmp
/dev/vg00/lvol6    3067904 1961048  1098264    64%  /opt
/dev/vg00/lvol5     262144    9320   250912     4%  /home
/dev/drd00/lvol3   1048576  320504   722392    31%  /var/opt/drd/mnts/sysimage_001
/dev/drd00/lvol1    505392   43560   411288    10%  /var/opt/drd/mnts/sysimage_001/stand
/dev/drd00/lvol8   3395584  788256  2586968    23%  /var/opt/drd/mnts/sysimage_001/var
/dev/drd00/lvol7   4636672 1991336  2624680    43%  /var/opt/drd/mnts/sysimage_001/usr
/dev/drd00/lvol4    204800    8592   194680     4%  /var/opt/drd/mnts/sysimage_001/tmp
/dev/drd00/lvol6   3067904 1962912  1096416    64%  /var/opt/drd/mnts/sysimage_001/opt
/dev/drd00/lvol5    262144    9320   250912     4%  /var/opt/drd/mnts/sysimage_001/home
  • 23. HP-UX – Dynamic Root Disk – Serial Patch Installation Example # swcopy -s /tmp/PHCO_38159.depot * @ /var/opt/mx/depot11/PHCO_38159.dir # drd runcmd swinstall -s /var/opt/mx/depot11/PHCO_38159.dir PHCO_38159
  • 24. HP-UX – Dynamic Root Disk update-ux Issue • When executing “drd runcmd update-ux” on the inactive DRD system image, the command errors: ERROR: The expected depot does not exist at "<depot_name>" • In order to use a directory depot on the active system image, you will need to create a loopback mount to access the depot.
  • 25. HP-UX – Dynamic Root Disk update-ux Issue - continued • Issue Resolution: the following steps should be followed in order to update the clone from a directory depot that resides on the active system image. The steps must be executed as root, in this order: 1) Mount the clone using “drd mount”. 2) Make the directory on the clone and loopback mount the depot. The directory on the clone and the source depot must have the same name, in this case “/var/depots/0909_DCOE”; however, the name can be whatever you choose: # mkdir -p /var/opt/drd/mnts/sysimage_001/var/depots/0909_DCOE # mount -F lofs /var/depots/0909_DCOE /var/opt/drd/mnts/sysimage_001/var/depots/0909_DCOE # drd runcmd update-ux -s /var/depots/0909_DCOE
  • 26. HP-UX – Dynamic Root Disk update-ux Issue - continued • 3) Once the update has completed, unmount the loopback mount and then unmount the clone: # umount /var/opt/drd/mnts/sysimage_001/var/depots/0909_DCOE # drd umount • Updates from multiple-DVD media: updates directly from media are not supported for DRD updates. In order to update from media, you must copy the contents to a directory depot either on a remote server (easiest method) or to a directory on the active system. If it must be on the active system image, you must first copy the media’s contents to a directory depot and then create the clone. If you already have a clone, you can copy the depot and then loopback mount that depot to the clone (see instructions above).
  • 27. HP-UX – Dynamic Root Disk update-ux Issue - continued • To copy the software from the DVDs, make a directory on a remote system or the active system image; mount the DVD media and swcopy its contents into the newly created directory. Unmount the first disk and insert the second DVD to copy its contents into the directory. # mkdir -p /var/software_depot/DCOE-DVD # mount /dev/disk/diskX /cdrom # swcopy -s /cdrom -x enforce_dependencies=false * @ /var/software_depot/DCOE-DVD # umount /cdrom # mount /dev/disk/diskX /cdrom (this is DVD 2) # swcopy -s /cdrom -x enforce_dependencies=false * @ /var/software_depot/DCOE-DVD
  • 28. HP-UX – Dynamic Root Disk update-ux Issue - continued • If the depot resides on a remote server (a system other than the one to be updated), proceed with the “drd runcmd update-ux” command and specify the location as the argument of the “-s” parameter: # drd runcmd update-ux -s <server_name>:/var/software_depot/DCOE-DVD <OE> • If the depot resides in the root group of the system to be cloned, and the clone has not yet been created, create the clone and issue the “drd runcmd update-ux” command, specifying the location of the depot as it appears on the booted system: # drd runcmd update-ux -s /var/software_depot/DCOE-DVD <OE> • If the depot resides on the system to be updated, in a location other than the root group, or if the clone has already been created, use the loopback mount (see instructions above).
  • 29. Solaris Live Upgrade Features • Live Upgrade is a feature of Solaris (since version 2.6) that allows the operating system to be cloned to an offline partition (or partitions), which can then be upgraded with new O/S patches, software, or even a new version of the operating system. The system administrator can then reboot the system on the newly upgraded partition. In case of problems, it is easy to revert to the original partition/version via a single Live Upgrade command followed by a reboot. • Live Upgrade is especially useful because Sun does not officially support installing O/S patches to active partitions; patching is done while in single-user mode or to a non-active Live Upgrade partition.
  • 30. Solaris Live Upgrade Features continued • Live Upgrade requires multiple partitions, either on the boot drive (one set of partitions is "active" and the other is "inactive") or on separate drives. These sets of partitions are "boot environments" (BEs). • A slice where the root (/) file system is to be copied must be selected. Use the following guidelines when you select a slice for the root (/) file system. The slice must comply with the following: * Must be a slice from which the system can boot. * Must meet the recommended minimum size. * Cannot be a Veritas VxVM volume or a Solstice DiskSuite metadevice. * Can be on a different physical disk or the same disk as the active root file system. * For sun4c and sun4m, the root file system must be less than 2 GB.
  • 31. Solaris Live Upgrade Features continued • The swap slice cannot be in use by any boot environment except the current boot environment or, if the “-s” option is used, the source boot environment. The boot environment creation fails if the swap slice is being used by any other boot environment, whether the slice contains a swap, UFS, or any other file system. • Typically, each boot environment requires a minimum of 350 to 800 MB of disk space, depending on the system software configuration. • When viewing the character interface remotely, such as over a tip line, set the TERM environment variable to VT220. Also, when using the Common Desktop Environment, set the value of the TERM variable to dtterm, rather than xterm.
  • 32. Solaris Live Upgrade Features continued • The lucreate command allows you to include or exclude specific files and directories when creating a new BE. • Include files and directories with: the -y include option; the -Y include_list_file option; items with a leading + in the file used with the -z filter_list option. • Exclude files and directories with: the -x exclude option; the -f exclude_list_file option; items with a leading - in the file used with the -z filter_list option.
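As a minimal sketch of combining these flags when creating a BE: the BE names and target slice follow the later lucreate examples in this deck, while the excluded /scratch path is purely illustrative. The guard keeps the script a no-op off Solaris:

```shell
#!/bin/sh
# Hypothetical BE names and target slice (mirroring later slides)
CURRENT_BE=solenv1
NEW_BE=solenv2
TARGET_SLICE=/dev/dsk/c0d0s3

if command -v lucreate >/dev/null 2>&1; then
    # Create the new BE on the spare slice, leaving /scratch out of the copy
    lucreate -c "$CURRENT_BE" -n "$NEW_BE" \
             -m "/:${TARGET_SLICE}:ufs" \
             -x /scratch
else
    echo "lucreate not installed; nothing to do"
fi
```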
  • 33. Solaris Live Upgrade and Special Files • Files can change in the original boot environment (BE) after the BE is created but NOT YET activated. • On the first boot of a BE, data is copied from the source BE. • The list to copy is in /etc/lu/synclist. Example: /etc/default/passwd OVERWRITE /etc/dfs OVERWRITE /var/log/syslog APPEND /var/adm/messages APPEND
  • 34. Solaris Live Upgrade Examples • The upgrade process of the new BE can be done in several ways (local, net, CD-ROM, flash). All four are done the same way; in each case you specify a different path to the image through the -s flag. Examples: Local file: # luupgrade -u -n solenv2 -s /Solaris_10/path/to/os_image Net: # luupgrade -u -n solenv2 -s /net/Solaris_10/path/to/os_image CD-ROM: # luupgrade -u -n solenv2 -s /cdrom/Solaris_10/path/to/os_image Flash: # luupgrade -u -n solenv2 -s /path/to/flash.flar
  • 35. Solaris Live Upgrade Examples # lucompare BE2 Determining the configuration of BE2 ... < BE1 > BE2 Processing Global Zone Comparing / ... Links differ 01 < /:root:root:33:16877:DIR: 02 > /:root:root:30:16877:DIR: Sizes differ 01 < /platform/sun4u/boot_archive:root:root:1:33188:REGFIL:76550144: 02 > /platform/sun4u/boot_archive:root:root:1:33188:REGFIL:76922880: ...
  • 36. Solaris Live Upgrade Examples # lucreate -c "solenv1" -m /:/dev/dsk/c0d0s3:ufs -n "solenv2" # lucreate -m /:/dev/md/dsk/d20:ufs,mirror -m /:/dev/dsk/c0t0d0s0:detach,attach,preserve -n nextBE # lucreate -m /:/dev/md/dsk/d10:ufs,mirror -m /:/dev/dsk/c0t0d0s0,d1:attach -m /:/dev/dsk/c0t1d0s0,d2:attach -n myserv2
  • 37. Solaris Live Upgrade Examples # lucurr BE1 # ludesc -n BE1 "Dusan BootEnvironment" # ludesc -n BE1 Dusan BootEnvironment
  • 38. Solaris Live Upgrade Examples # lufslist BE1
boot environment name: BE1
This boot environment is currently active.
This boot environment will be active on next system boot.
Filesystem                     fstype  device size  Mounted on    Mount Options
/dev/zvol/dsk/rpool/swap       swap      1073741824 -             -
rpool/ROOT/s10s_u6wos_07b      zfs       5119809024 /             -
rpool/ROOT/s10s_u6wos_07b/var  zfs         86450688 /var          -
rpool                          zfs       7493079552 /rpool        -
rpool/export                   zfs         95149568 /export       -
rpool/export/home              zfs         95129088 /export/home  -
hppool                         zfs                ? /hppool       -
  • 39. Clone Commands Compared
Task                          HP-UX DRD                             Solaris Live Upgrade
Create BE                     drd clone                             lucreate
Activate BE                   drd activate                          luactivate
Check status                  drd status                            lustatus
Compare BEs                   indirect method: diff, cmp            lucompare
Cancel scheduled copy/create  indirect method: remove from crontab  lucancel
  • 40. Clone Commands Compared
Task                           HP-UX DRD    Solaris Live Upgrade
Display BE/system image        drd status   lucurr
Delete BE                      N/A          ludelete
Add or resync data in BE       N/A          lumake
Set or display BE description  N/A          ludesc
Mount BE file systems          drd mount    lumount
Unmount BE file systems        drd umount   luumount
  • 41. Clone Commands Compared
Task                              HP-UX DRD                                    Solaris Live Upgrade
Rename BE                         N/A                                          lurename
Install software/patches into BE  drd runcmd swinstall, drd runcmd update-ux   luupgrade
List BE file systems              N/A                                          lufslist
TUI configuration                 N/A                                          lu
  • 42. Clone Commands Compared
Task                    HP-UX DRD          Solaris Live Upgrade
Rehosting               drd rehost         N/A
Modify kernel tunables  drd runcmd kctune  N/A
  • 43. AIX Alt_disk_install • The AIX alt_disk_install command allows a root sysadmin to create an alternate rootvg on another set of disk drives. The alternate rootvg can be configured by restoring a mksysb image to it while AIX continues to run from the primary rootvg, or the primary rootvg can be "cloned" to the alternate rootvg and updates and fixes can then be installed on the alternate rootvg while AIX continues to run. When the system admin is ready, AIX can be rebooted from the alternate rootvg disks. Changes can be backed out by rebooting AIX from the original primary rootvg. • In AIX 5.3, alt_disk_install has been replaced by alt_disk_copy, alt_disk_mksysb, and alt_rootvg_op. • The alt_disk_install command will continue to ship as a wrapper to the new commands, but it will not support any new functions, flags, or features.
  • 44. AIX Alt_disk_install Examples • Copy the current rootvg to an alternate disk. The following example shows how to clone the rootvg to hdisk1: # alt_disk_copy -d hdisk1 • Copy rootvg (hdisk1) to hdisk0, and then apply the updates to hdisk0: # alt_disk_copy -d hdisk0 -b update_all -l
  • 45. AIX Alt_disk_install Examples • Copy the current rootvg to two alternate disks: # alt_disk_copy -d hdisk2 hdisk3 -O assuming that hdisk2 and hdisk3 are the targets on which the copy should be placed. Note that the -O flag is required when "cloning" (when planning to boot the rootvg copy on another LPAR or server), but can be detrimental when making a copy which will be booted on the same LPAR or server. • Before taking the target disks away from the existing AIX image, run the command: # alt_rootvg_op -X • If a rootvg copy has been made for use on the same LPAR/server as the original rootvg (without the -O flag on alt_disk_copy), System Management Services can be used to switch between the primary and backup AIX rootvgs by shutting AIX down, booting to SMS mode, and selecting the disks from which to boot.
  • 46. AIX Multibos Features • The multibos command (AIX 5.3 ML3) provides dual AIX boot from the same rootvg. One can run production on one boot image while installing, customizing or updating the other. • This is similar to AIX alt_disk_install, with one major difference: with alt_disk_install the boot images must reside on separate disks and separate rootvgs. The multibos capability allows both O/S images to reside on the same disk/rootvg.
  • 48. AIX Multibos Features - continued • The multibos command allows the root level administrator to create multiple instances of AIX on the same rootvg. • The multibos setup operation creates a standby Base Operating System (BOS) that boots from a distinct boot logical volume (BLV). This creates two bootable sets of BOS on a given rootvg. The administrator can boot from either instance of BOS by specifying the respective BLV as an argument to the bootlist command or using system firmware boot operations. • Two bootable instances of BOS can be simultaneously maintained. The instance of BOS associated with the booted BLV is referred to as the active BOS. The instance of BOS associated with the BLV that has not been booted is referred to as the standby BOS. Currently, only two instances of BOS are supported per rootvg.
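Switching between the two instances is therefore a bootlist change plus a reboot. A minimal sketch, assuming the standby BLV is named bos_hd5 on hdisk0 as in the backout example later in this deck; the guard makes it a no-op off AIX:

```shell
#!/bin/sh
# Assumed disk and standby BLV names (verify with "multibos" / "lsvg -l rootvg")
BOOT_DISK=hdisk0
STANDBY_BLV=bos_hd5

if command -v bootlist >/dev/null 2>&1; then
    # Point the normal-mode boot list at the standby BLV, then display it to verify
    bootlist -m normal "$BOOT_DISK" blv="$STANDBY_BLV"
    bootlist -m normal -o
else
    echo "bootlist not available; nothing to do"
fi
```

A reboot after this boots the standby BOS, which then becomes the active instance.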
  • 49. AIX Multibos Features - continued • The multibos command allows the administrator to access, install maintenance and technology levels for, update, and customize the standby BOS either during setup or in subsequent customization operations. • Installing maintenance and technology updates to the standby BOS does not change system files on the active BOS. This allows for concurrent update of the standby BOS, while the active BOS remains in production.
  • 50. AIX Multibos Features - continued • The multibos command has the ability to copy or share logical volumes and file systems. By default, the BOS file systems (currently /, /usr, /var, and /opt) and the boot logical volume are copied. The administrator can make copies of additional BOS objects (using the -L flag). • All other file systems and logical volumes are shared between instances of BOS. Separate log device logical volumes (for example, those that are not contained within the file system) are not supported for copy and will be shared. • The current rootvg must have enough space for each BOS object copy. BOS object copies are placed on the same disk or disks as the original.
  • 51. AIX Multibos Features - continued • The total number of copied logical volumes cannot exceed 128. • The total number of copied logical volumes and shared logical volumes are subject to volume group limits. • /etc/multibos contains multibos data and logs. • The only supported method of backup and recovery with multibos is mksysb via CD, NIM or tape. If the standby BOS was mounted during the creation of the mksysb, it is restored and synchronized on the first boot from the restored mksysb. However, if the standby BOS was not mounted during the creation of the mksysb backup, the synchronization on reboot will remove the unusable standby BOS.
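Following that rule, a backup sketch would mount the standby BOS before running mksysb so it is captured and restored in sync. The tape device name is an assumption, and the guard keeps the script inert off AIX:

```shell
#!/bin/sh
# Hypothetical tape device -- adjust for your system
TAPE=/dev/rmt0

if command -v multibos >/dev/null 2>&1; then
    multibos -Xm          # mount standby BOS file systems so mksysb captures them
    mksysb -i "$TAPE"     # create a bootable system backup to tape
else
    echo "multibos not available; nothing to do"
fi
```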
  • 52. AIX Multibos Examples • Standby BOS setup operation preview: # multibos -Xsp • Set up standby BOS: # multibos -Xs • Set up standby BOS with optional image.data file /tmp/image.dat and exclude list /tmp/exclude.lst: # multibos -Xs -i /tmp/image.dat -e /tmp/exclude.lst
  • 53. AIX Multibos Examples - continued • To set up standby BOS and install additional software listed as bundle file /tmp/bundle and located in the images source /images: # multibos -Xs -b /tmp/bundle -l /images • To execute a customization operation on standby BOS with the update_all install option: # multibos -Xac -l /images
  • 54. AIX Multibos Examples - continued • To mount all standby BOS file systems, type: # multibos -Xm • To perform a standby BOS remove operation preview: # multibos -RXp • To remove standby BOS: # multibos -RX
  • 55. AIX Multibos Examples - continued • Apply TL6 to the standby BOS. The TL6 lppsource is mounted from our Network Installation Manager (NIM) master. Perform a preview operation and then execute the actual update to the standby instance. Check the log file for any issues: # mount nimsrv:/export/lpp_source/lpp_sourceaix530603 /mnt # multibos -Xacp -l /mnt # multibos -Xac -l /mnt
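The preview-then-apply pattern above can be wrapped so the real update only runs when the preview succeeds. A sketch, where the multibos function is a stub standing in for the real AIX command (remove the stub on an actual AIX system); the /mnt path matches the NIM mount used above:

```shell
#!/bin/sh
# Run the multibos customization preview first; apply only if it passes.
multibos() { echo "multibos $*"; return 0; }   # stub for demonstration

apply_update() {
    lppsource=$1
    if multibos -Xacp -l "$lppsource"; then     # preview operation
        multibos -Xac -l "$lppsource"           # actual update
    else
        echo "preview failed; standby BOS left untouched" >&2
        return 1
    fi
}

apply_update /mnt
```

Gating on the preview exit status mirrors the manual two-step workflow on the slide and avoids half-applied updates to the standby instance.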
  • 56. AIX Multibos Examples - continued • Back out of the update and return to the previous TL. Set the bootlist to the previous BOS instance (hd5) and verify the setting: # bootlist -m normal hdisk0 blv=hd5 hdisk0 blv=bos_hd5 # bootlist -m normal -o hdisk0 blv=hd5 hdisk0 blv=bos_hd5 Now reboot the system and confirm that it is running at the previous TL.
  • 57. AIX Multibos Examples - continued # multibos -S MULTIBOS> df Filesystem 512-blocks Free %Used Iused %Iused Mounted on /dev/hd4 1966080 1198800 40% 3364 1% / /dev/hd2 3670016 299344 92% 42697 10% /usr ... /dev/hd3 262144 250776 5% 64 1% /tmp /dev/bos_hd4 1966080 1198800 40% 3364 1% /bos_inst /dev/bos_hd2 3670016 299344 92% 42697 10% /bos_inst/usr /dev/bos_hd9var 655360 594456 10% 674 1% /bos_inst/var /dev/bos_hd10opt 393216 123592 69% 2545 6% /bos_inst/opt MULTIBOS> exit # exit from the multibos shell
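As the df output above shows, the standby file systems appear under the /bos_inst prefix, so an active file and its standby copy can be compared by simple path prefixing. A sketch of that mapping (pure path logic, so it runs anywhere; the /bos_inst prefix is the one multibos uses):

```shell
#!/bin/sh
# Map an active-BOS path to its standby counterpart under /bos_inst,
# as mounted by "multibos -S" (or "multibos -Xm").
standby_path() {
    case $1 in
        /)  echo /bos_inst ;;
        /*) echo "/bos_inst$1" ;;
        *)  echo "error: absolute path required" >&2; return 1 ;;
    esac
}

# Usage on a real system, with the standby BOS mounted:
#   cmp /etc/motd "$(standby_path /etc/motd)"
standby_path /usr/lib/boot
```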
  • 58. AIX Multibos Examples - continued # cat /root/hosts.txt • host1 • host2 • host3 # export WCOLL=/root/hosts.txt # dsh multibos -R # dsh rm /etc/multibos/logs/op.alog # dsh multibos -sXp # dsh alog -of /etc/multibos/logs/op.alog # dsh multibos -sX # dsh mount nimmast:/export/lpp_source/lpp_sourceaix530603 /mnt # dsh multibos -Xacp -l /mnt # dsh multibos -Xac -l /mnt # dsh alog -of /etc/multibos/logs/op.alog # dsh umount /mnt # dsh bootlist -m normal -o # dsh shutdown -Fr
  • 59. AIX Check Boot Environment • After the reboot, confirm the TL level: # oslevel -r • Verify which BLV the system booted from with: # bootinfo -v
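The BLV name returned by the check above tells you which BOS instance is live. A minimal sketch of that decision, assuming the default multibos BLV names (hd5 original, bos_hd5 standby); the sample value stands in for real bootinfo output so the logic runs anywhere:

```shell
#!/bin/sh
# Decide which BOS instance is active from the booted BLV name.
active_bos() {
    case $1 in
        bos_hd5) echo "standby (multibos-created) BOS" ;;
        hd5)     echo "original BOS" ;;
        *)       echo "unknown BLV: $1" ;;
    esac
}

booted_blv="bos_hd5"            # sample; on AIX use: booted_blv=$(bootinfo -v)
echo "booted from: $(active_bos "$booted_blv")"
```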
  • 60. Features Compared (HP-UX DRD | Solaris Live Upgrade | AIX Multibos)
  Licensing: N/A | N/A | N/A
  Supported platforms: PA-RISC, IA-64 | SPARC, x86-32, x86-64 | 32-bit POWER, 64-bit POWER *, PowerPC
  Supported O/S: HP-UX 11.23, HP-UX 11.31 | Solaris 2.6, 7, 8, 9, 10 | AIX 5L Version 5.3 with the 5300-03 Recommended Maintenance package and later
  Current product: DynRootDisk B.11.xx.A.3.4.y, where xx is 23 or 31 | Live Upgrade 2.0 | Part of AIX 6.1
  TUI: Not supported | Supported | Not supported
  GUI: Not supported | Not supported | Not supported
  CLI: Supported | Supported | Supported
  • 61. Features Compared - continued (HP-UX | Solaris | AIX Multibos)
  Add mirror disk to a clone: Supported directly via command: drd clone -x mirror_disk= | Not supported directly! Supported via SVM, ZFS, and VxVM RAID-1 setup only | N/A
  Reboot commands: drd activate -x reboot=true, or standard Unix commands | Never use the reboot(1) or halt(1) commands; instead use "init 6" or shutdown(1) | bootlist -m normal hdisk0 blv=bos_hd5, then shutdown -Fr or reboot -q
  Automated comparison of primary and alternate boot environments: Mostly manual process, based on: drd mount, cmp ..., diff ... | lucompare(1) | Mostly manual process, based on: multibos -S, cmp ..., diff ...
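The "mostly manual" comparison noted for HP-UX and AIX can be scripted with standard tools. A minimal sketch that checksums two mounted boot-environment trees and counts differing entries, roughly what lucompare(1) automates on Solaris; throwaway temporary directories stand in for the two roots (on a real system they would be, e.g., / and /var/opt/drd/mnts/sysimage_001 for DRD, or / and /bos_inst for multibos):

```shell
#!/bin/sh
# Compare two boot-environment roots by per-file checksum.
be1=$(mktemp -d); be2=$(mktemp -d)
echo same     > "$be1/motd";  echo same     > "$be2/motd"    # identical file
echo old-conf > "$be1/inetd"; echo new-conf > "$be2/inetd"   # differing file

# Checksum every regular file under a root, in a stable order.
sums() { (cd "$1" && find . -type f | sort | xargs cksum); }

s1=$(mktemp); s2=$(mktemp)
sums "$be1" > "$s1"
sums "$be2" > "$s2"
ndiff=$(diff "$s1" "$s2" | grep -c '^[<>]')
echo "$ndiff differing entries"
rm -rf "$be1" "$be2" "$s1" "$s2"
```

This only flags content differences; a fuller comparison would also diff ownership and permissions (e.g., via ls -l or find -printf), which is part of what lucompare reports.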
  • 62. Features Compared - continued (HP-UX | Solaris | AIX Multibos)
  Mounting inactive images: a) "drd mount" does not support mounting on different directories; b) it mounts file systems as /var/opt/drd/mnts/sysimage_00X | a) lumount(1) supports mounting on different directories; b) "lumount" mounts file systems as /.alt.configX | multibos -S; it mounts file systems as /bos_inst/...
  Change size of any file systems during cloning: Not supported | Supported | Supported **
  File system split: Supported * | Not supported | Not supported
  • 63. Features Compared - continued (HP-UX | Solaris | AIX Multibos)
  Simple listing of clone file systems: drd mount, then bdf | Supported via the lufslist(1) command | Not directly supported **
  Clone updates (re-sync): Supported via full clone recreation: drd clone -t= -x overwrite=true | Supported via the lumake(1) command | Supported via the "-c" flag *
  Merge file systems during cloning: Not supported yet | Supported | Not supported
  • 64. Features Compared - continued (HP-UX | Solaris | AIX Multibos)
  Change file system type during cloning: Not supported | Supported; for example, SVM to ZFS migration | Not supported
  Supported Volume Manager: LVM, VxVM | Solstice DiskSuite *, VxVM, ZFS ** | AIX LVM
  Virtualization support: nPar, vPar, Integrity VM | Solaris Zones ***, Logical Domains | LPAR, Dynamic LPAR, Live Partition Mobility on POWER6, WPAR
  Full-disk copy during cloning: On Itanium servers, all partitions are created and EFI and HPUX are copied; this release of DRD does not copy the HPSP | Supported | Not supported
  • 65. Features Compared - continued (HP-UX | Solaris | AIX Multibos)
  Multiple target disks for cloning: Not supported | Supported | Not supported
  Dry-run (preview) cloning: Supported | Supported | Supported
  Swap shared: Primary swap is not shared; secondary swap can be shared | Yes, by default | Yes, by default
  On-line cloning: Yes | Sun recommends halting all zones during lucreate or lumount operations, so cloning of Solaris zones is not truly an on-line process | Yes
  • 66. Features Compared - continued (HP-UX | Solaris | AIX Multibos)
  Exclude files from cloning: Not supported yet * | Supported ** | Supported *****
  Include files during cloning: Not supported yet | Supported ** | Supported *****
  Simple method to remove clone: Not supported yet *** | Supported **** | Supported ******
  Clone on the same physical disk (multiple BEs on the same disk): Not supported | Supported | Supported

Editor's notes

  1. My humble attempt to summarise best-known features at the present time. Even after 23 years of Unix experience I cannot claim I know everything!
  2. Courtesy of HP Education Training Materials in HE776 course
  3. * The following steps work for file systems other than the boot (/stand) file system: 1. After creating the clone, execute the command: # /opt/drd/bin/drd mount 2. Choose the file system on the clone to expand. For this example, we are using /opt. The logical volume is /dev/drd00/lvol6, mounted at /var/opt/drd/mnts/sysimage_001/opt. The size of the vxfs file system is increased to 999 extents. Execute the following commands to expand /opt: # /usr/sbin/umount /dev/drd00/lvol6 # /usr/sbin/lvextend -l 999 /dev/drd00/lvol6 # /usr/sbin/extendfs -F vxfs /dev/drd00/rlvol6 # /usr/sbin/mount /dev/drd00/lvol6 /var/opt/drd/mnts/sysimage_001/opt 3. Run bdf to check that the /var/opt/drd/mnts/sysimage_001/opt file system now has the desired size. ** When drd runcmd finds the file systems in the clone already mounted, it does not unmount them (nor will it export the volume group) at the completion of the runcmd operation.
  4. * Refer to ITRC document mmr_na-197095-3.
  5. * To notify DRD that all logical volumes in the root group have been manually mirrored using LVM or VxVM commands to disk /dev/dsk/c1t2d0 ** To notify DRD that all logical volumes in the root group have been manually un-mirrored using LVM or VxVM commands
  6. * Here is the Known Issue that we are publishing to docs.hp.com/en/DRD in June 2009.
  7. Full example of lucompare(1) on xlsansun.cxo.hp.com on 12th of June 2009 (output abridged; the full run lists several hundred similar entries):
Determining the configuration of BE2 ...
< BE1 > BE2
Processing Global Zone
Comparing / ...
Links differ
01 < /:root:root:33:16877:DIR:
02 > /:root:root:30:16877:DIR:
Sizes differ
01 < /platform/sun4u/boot_archive:root:root:1:33188:REGFIL:76550144:
02 > /platform/sun4u/boot_archive:root:root:1:33188:REGFIL:76922880:
Sizes differ
01 < /kernel/drv/fp.conf:root:sys:1:33188:REGFIL:2848:
02 > /kernel/drv/fp.conf:root:sys:1:33188:REGFIL:2774:
02 > /BE2 does not exist
02 > /etc/lu/COPY_LOCK does not exist
Checksums differ
01 < /etc/shadow:root:sys:1:33024:REGFIL:384:161700006:
02 > /etc/shadow:root:sys:1:33024:REGFIL:384:1617970827:
Sizes differ
01 < /etc/path_to_inst:root:root:1:33060:REGFIL:7774:
02 > /etc/path_to_inst:root:root:1:33060:REGFIL:6447:
[... hundreds of further "Sizes differ", "Checksums differ", "Links differ", and "does not exist" entries for files under /opt/Hewlett-Packard, /devices, /dev/dsk, and /etc omitted ...]
Compare complete for /.
Comparing /var ...
Sizes differ
01 < /var/log/syslog:root:sys:1:33188:REGFIL:1025:
02 > /var/log/syslog:root:sys:1:33188:REGFIL:0:
Checksums differ
01 < /var/statmon/state:daemon:daemon:1:33188:REGFIL:10:3815787447:
02 > /var/statmon/state:daemon:daemon:1:33188:REGFIL:10:1588317337:
[... further /var entries omitted ...]
Compare complete for /var.
  8. * The -c option assigns the specified name to the current boot environment. The -m option specifies that the root (/) file system is to be copied to /dev/dsk/c0d0s3 (/altroot). The -n option names the new Live Upgrade boot environment.
** Detaches a concatenation (containing c0t0d0s0) from one mirror (d10) and attaches it to another (d20), preserving its contents.
*** Creates the mirror d10 and establishes it as the receptacle for the root file system. Attaches c0t0d0s0 and c0t1d0s0 to single-slice concatenations d1 and d2, respectively (specifying these volumes is optional), then attaches those concatenations to mirror d10. Copies the current BE's root file system to mirror d10, overwriting any existing d10 contents.
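The operations this note describes can be sketched as concrete commands. The following dry-run sketch only prints the Solaris commands rather than executing them; the BE names (current_be, altroot, mirrored_be) are illustrative assumptions, and the device names are taken from the note:

```shell
#!/bin/sh
# Dry-run sketch: print, rather than run, the Live Upgrade and SVM
# commands described in the note. BE names are placeholder assumptions.

lu_commands() {
  # Name the current BE and copy root (/) to c0d0s3 as a new BE:
  echo "lucreate -c current_be -m /:/dev/dsk/c0d0s3:ufs -n altroot"
  # Detach a submirror from mirror d10 and attach it to mirror d20:
  echo "metadetach d10 d12"
  echo "metattach d20 d12"
  # Create mirror d10 as the new BE's root, attaching c0t0d0s0 and
  # c0t1d0s0 through single-slice concatenations d1 and d2:
  echo "lucreate -n mirrored_be -m /:/dev/md/dsk/d10:ufs,mirror -m /:/dev/dsk/c0t0d0s0,d1:attach -m /:/dev/dsk/c0t1d0s0,d2:attach"
}

lu_commands
```

On a real system the echo wrappers would be dropped; printing first makes it easy to review the destructive mirror operations before committing to them.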
  9. * Indirect method:
# lvrmboot -s drd00
# lvremove -f /dev/drd00/lvol2
# lvrmboot -d lvol3 /dev/drd00
# lvremove -f /dev/drd00/lvol3
# lvrmboot -r drd00
# lvremove -f /dev/drd00/lvol4
# vgremove drd00
** Only a full copy of data from the primary BE is possible.
  10. * Full listing of file systems (opening a shell into the newly created alternate BOS image to explore it). Note: all files and tools (even SMIT) are available for use in /, /usr, /var, /opt, and /home; /proc is private; the /tmp file system is shared by default:
Filesystem        512-blocks    Free  %Used  Iused  %Iused  Mounted on
/dev/hd4             1966080 1198800    40%   3364      1%  /
/dev/hd2             3670016  299344    92%  42697     10%  /usr
/dev/hd9var           655360  594456    10%    674      1%  /var
/dev/hd3              262144  250776     5%     64      1%  /tmp
/dev/hd1             1966080 1198800    40%   3364      1%  /home
/proc                1966080 1198800    40%   3364      1%  /proc
/dev/hd10opt          393216  123592    69%   2545      6%  /opt
/dev/bos_hd4         1966080 1198800    40%   3364      1%  /bos_inst
/dev/bos_hd2         3670016  299344    92%  42697     10%  /bos_inst/usr
/dev/bos_hd9var       655360  594456    10%    674      1%  /bos_inst/var
/dev/bos_hd10opt      393216  123592    69%   2545      6%  /bos_inst/opt
/usr/lib             3670016  299384    92%  42701     10%  /bos_inst/usr/lib/multibos_chroot/usr/lib
/usr/ccs/lib         3670016  299384    92%  42701     10%  /bos_inst/usr/lib/multibos_chroot/usr/ccs/lib
/tmp                  262144  250776     5%     64      1%  /bos_inst/tmp
  11. * Perform multibos operations on several servers at once by combining multibos with the dsh (distributed shell) command.
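As a hedged sketch of that idea, the invocation below is only printed, not executed, and the hostnames are placeholders; dsh's -n flag takes a comma-separated node list:

```shell
#!/bin/sh
# Sketch: build (and print) a dsh invocation that would create a standby
# BOS on several AIX hosts at once. Hostnames are placeholder assumptions.

hosts="aix01,aix02,aix03"

build_dsh_cmd() {
  echo "dsh -n $hosts \"multibos -Xs\""
}

build_dsh_cmd
```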
  12. * The latest IBM CPU is POWER6.
  13. * For example, name /usr explicitly and it will be split from the root (/) file system:
# lucreate -n disk1 -m /:/dev/dsk/c0t8d0s0:ufs -m -:/dev/dsk/c0t8d0s1:swap -m /usr:/dev/dsk/c0t8d0s3:ufs
** The multibos "-X" auto-expansion flag allows automatic file system expansion if additional space is needed to perform multibos-related tasks. Execute all multibos operations with this flag.
  14. * The customization operation requires an image source (-l device or directory flag) and at least one installation option (installation by bundle, installation by fix, or update_all). The customization operation performs the following steps:
  1. The standby BOS file systems are mounted, if not already mounted.
  2. If you specify an installation bundle with the -b flag, the installation bundle is installed using the geninstall utility. The installation bundle syntax should follow geninstall conventions. If you specify the -p preview flag, geninstall performs a preview operation.
  3. If you specify a fix list with the -f flag, the fix list is installed using the instfix utility. The fix list syntax should follow instfix conventions. If you specify the -p preview flag, instfix performs a preview operation.
  4. If you specify the update_all function with the -a flag, it is performed using the install_all_updates utility. If you specify the -p preview flag, install_all_updates performs a preview operation.
** # lsvg -l rootvg | grep bos
bos_hd5      boot  1   1   1  closed/syncd  N/A
bos_hd4      jfs   4   4   1  closed/syncd  /bos_inst
bos_hd2      jfs   48  48  1  closed/syncd  /bos_inst/usr
bos_hd9var   jfs   21  21  1  closed/syncd  /bos_inst/var
bos_hd10opt  jfs   4   4   1  closed/syncd  /bos_inst/opt
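Putting those flags together, a customization run might look like the following sketch. The bundle, fix-list, and image-source paths are illustrative assumptions, and the commands are only printed here rather than executed:

```shell
#!/bin/sh
# Sketch: multibos customization invocations for the three installation
# options described above. Paths (/tmp/bundle, /tmp/fixlist, /images)
# are placeholder assumptions; -p previews without installing.

multibos_examples() {
  # Install a software bundle into the standby BOS (preview only):
  echo "multibos -X -c -p -b /tmp/bundle -l /images"
  # Install a list of fixes into the standby BOS:
  echo "multibos -X -c -f /tmp/fixlist -l /images"
  # Run update_all against the image source:
  echo "multibos -X -c -a -l /images"
}

multibos_examples
```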
  15. * Solstice DiskSuite is the older name of what is now called Solaris Volume Manager (SVM). ** ZFS is a "marriage" of a file system and a volume manager. *** Because a non-global zone can be administered by a non-global zone administrator as well as by the global zone administrator, Sun recommends halting all zones during lucreate or lumount operations. This means that cloning Solaris zones is not truly an online process.
  16. * The DRD option "-x ignore_unmounted_fs=true" can be used to exclude files from unmounted file systems, but that is a workaround.
** Live Upgrade options: "-f exclude_list_file", "-x exclude", "-z filter_list_file"
*** LVM can be used to remove a DRD clone, but it is a more complex process:
# lvrmboot -s drd00
Volume Group configuration for /dev/drd00 has been saved in /etc/lvmconf/drd00.conf
# lvremove -f /dev/drd00/lvol2
Logical volume "/dev/drd00/lvol2" has been successfully removed.
Volume Group configuration for /dev/drd00 has been saved in /etc/lvmconf/drd00.conf
# lvrmboot -d lvol3 /dev/drd00
Volume Group configuration for /dev/drd00 has been saved in /etc/lvmconf/drd00.conf
# lvremove -f /dev/drd00/lvol3
Logical volume "/dev/drd00/lvol3" has been successfully removed.
Volume Group configuration for /dev/drd00 has been saved in /etc/lvmconf/drd00.conf
# lvrmboot -r drd00
Volume Group configuration for /dev/drd00 has been saved in /etc/lvmconf/drd00.conf
# lvremove -f /dev/drd00/lvol4
Logical volume "/dev/drd00/lvol4" has been successfully removed.
Volume Group configuration for /dev/drd00 has been saved in /etc/lvmconf/drd00.conf
# vgremove drd00
Volume group "drd00" has been successfully removed.
**** To leave the Live Upgrade BE empty:
# lucreate -s -
***** To set up a standby BOS with the optional image.data file /tmp/image.data and exclude list /tmp/exclude.list, enter:
# multibos -Xs -i /tmp/image.data -e /tmp/exclude.list
To set up a standby BOS and install additional software listed in the bundle file /tmp/bundle and located in the image source /images, enter:
# multibos -Xs -b /tmp/bundle -l /images
****** To remove the standby BOS, enter:
# multibos -RX