Setting up 1-Touch for Solaris
Recovering the System to a Specific Level
Recovering Data in Case of Disaster
Global File System Recovery (Clusters)
1-Touch for Solaris allows you to recover systems with defective components (e.g., inaccessible root, crashed disk) quickly and efficiently. The feature can recover systems without requiring the reinstallation of operating system files that have been lost. 1-Touch uses file system restore capabilities. It restores data backed up by the default backup set that is configured for 1-Touch.
The feature also involves a 1-Touch server, which is used to boot the client. A 1-Touch server is required for each subnet in a network that contains 1-Touch clients. The 1-Touch server provides the Internet Protocol (IP) address, boot parameters, and files for the 1-Touch clients to be booted. In effect, the 1-Touch server secures the required information from the 1-Touch client's backup and then uses this information to recreate the client's environment.
This feature is not supported when bootable partitions are on the disk arrays (snapable disk).
If a Solaris 10 Update 6 or later JumpStart server is used as the 1-Touch server, make sure that /boot/solaris/bin/root_archive is present. It is required for unpacking and repacking the Solaris miniroot on the JumpStart server.
This feature requires a Feature License to be available in the CommServe® Server.
Review the general license requirements included in License Administration. For step-by-step instructions on viewing license information, see View All Licenses.
The 1-Touch feature requires the following licenses:
Configure the 1-Touch server per the Solaris Advanced Installation Guide at http://docs.sun.com/db/doc/816-2411?q=jumpstart.
The 1-Touch data is backed up by the default subclient of the default backup set. When you install the File System iDataAgent, you must configure the default subclient for 1-Touch.
Required Capability: See Capabilities and Permitted Actions
To configure the subclient for 1-Touch:
Start or schedule regular full backups of the default subclient. Unless you back up the system state, 1-Touch cannot restore the system configuration.
To start a full backup, see Start a Full Backup.
To schedule a full backup, see Schedule Backups.
If the 1-Touch client host ID (MAC address) has changed since the most recent 1-Touch backup, be sure to update the client's Ethernet address in the /etc/ethers file on the 1-Touch server. Otherwise, you will not be able to boot the 1-Touch client.
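For example, the update can be sketched as follows. This is a hypothetical illustration: the hostname arecibo and the MAC address 8:0:20:87:22:3 are placeholders taken from examples elsewhere in this document, and the sketch edits a copy of the file rather than the live /etc/ethers.

```shell
# Work on a copy of /etc/ethers for illustration; on the 1-Touch server,
# edit the real file once the new MAC address is confirmed.
ETHERS=/tmp/ethers.example
cp /etc/ethers "$ETHERS" 2>/dev/null || : > "$ETHERS"
# Drop any stale entry for the client...
grep -v ' arecibo$' "$ETHERS" > "$ETHERS.new" || true
mv "$ETHERS.new" "$ETHERS"
# ...then record the client's current MAC address (placeholder value).
echo '8:0:20:87:22:3 arecibo' >> "$ETHERS"
grep ' arecibo$' "$ETHERS"
```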
We also recommend that you set the primary mirror as the boot device. Otherwise (e.g., if the secondary mirror is set as the boot device), the 1-Touch restore may fail.
You can recover the system to a specific system state level based on an identified backup job or on data that was backed up before an identified date.
Use the following procedure to recover the system to a specific level.
Required Capability: See Capabilities and Permitted Actions
To recover the system to a specific level:
Use the following procedures to recover data in case of disaster.
Avoid an unconditional overwrite of the root directory on a live file system. This mechanism allows open files in the root directory (/) on a live file system to be overwritten unconditionally. Performing such a restore can leave the system inconsistent and possibly unable to boot. Use this option AT YOUR OWN RISK.
Required Capability: See Capabilities and Permitted Actions
To recover a system using 1-Touch:
Connect the client to the network and switch on the power.
From the CommCell Console on the 1-Touch server, right-click the appropriate backup set, click All Tasks, and then click 1-Touch Recovery to access the Submit 1-Touch Recovery Request wizard.
At the boot prompt of the affected client, enter one of the following commands, as appropriate:
where <user_id> is the CommCell Console user ID and <passwd> is the CommCell Console password.
This step starts the 1-Touch recovery. All logging output for the recovery is redirected to the console and to a file under the /tmp directory.
The 1-Touch client updates the sr.log log files on both the client and the 1-Touch server. If a 1-Touch client backup completed successfully but the client fails to boot, be sure to check the sr.log on the 1-Touch server for any warnings. If you are using a different 1-Touch server than the one used previously, remove the client's name from the database of the previous 1-Touch server. Otherwise, the client will not be able to boot off the network because both servers will respond to the client's RARP request. To do this, run the ./rm_install_client <clientname> command from the Tools directory (e.g., /database/jumpstart/Solaris_9/Tools).
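The cleanup described above amounts to the following two commands, using the example Tools directory from the text (substitute your own JumpStart directory and client name):

```shell
# On the previous 1-Touch server, remove the client from the JumpStart
# database so only the current server answers the client's RARP request.
cd /database/jumpstart/Solaris_9/Tools
./rm_install_client arecibo    # substitute your client's hostname
```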
Sometimes, the RARP, bootparams, and tftp daemons fail to re-read network information pertaining to the 1-Touch client. In such cases, you may have to restart the daemons.
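On Solaris 10 and later, these daemons are managed by SMF and can be restarted with svcadm. This is a sketch assuming the standard service names; on older releases, restart the daemons through their inetd/rc configuration instead.

```shell
# Restart the boot-support daemons so they re-read the 1-Touch client's
# network information. Service names assume a stock Solaris 10 setup.
svcadm restart network/rarp
svcadm restart network/rpc/bootparams
svcadm restart network/tftp/udp6    # tftp may instead be managed by inetd
```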
At this point, the recovery procedure should launch an automatic command line restore. If this is successful, go to the next step. If it is not, manually restore / to /a on the client.
Rebooting with command: net - w recover
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
.....
Phase 1 of 1-Touch starting
.....
.....
Starting services
Starting cvd service... done
Starting EvMgrC service... done
Starting cvfwd service, if installed... done
Launching automatic restore. Hit return to continue...
Pause while the system completes some internal boots and performs some processing.
No action is required on your part.
.....
.....
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Phase 2 of 1-Touch started
Mounting '/opt/calypso' if on a separate file system...done
.....
.....
.....
Phase 2 of 1-Touch is now complete
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Phase 3 of 1-Touch started
Preserve not specified
All file system will be created regardless of their state
Phase 3 of 1-Touch is now completed
Launching automatic restore. Hit return to continue...
At this point, you are advised that the system will reboot without any further
user input.
Job (28) is Completed ...
Stopping services
done
Phase 4 of 1-Touch is now complete
Any remaining 1-Touch actions will continue after the reboot!
.....
.....
Rebooting with command:
.....
.....
Phase 5 of 1-Touch started
.....
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Congratulations
1-Touch procedures have now been completed
The system will now reboot to continue normal operation
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
#### Normal boot :-)
rebooting...
Resetting...
SPARCstation 5, No Keyboard
ROM Rev. 2.28, 128 MB memory installed, Serial #8856067. Ethernet address 8:0:20:87:22:3,
Host ID: 80872203.
.....
.....
Rebooting with command:
Boot device: /iommu/sbus/espdma@5,8400000/esp@5,8800000/sd@3,0 File and args:
SunOS Release 5.7 Version Generic_106541-12 [UNIX(R) System V Release 4.0] Copyright
(c) 1983-1999, Sun Microsystems, Inc. configuring network interfaces: le0.
Hostname: pluto3
To debug boot problems, use the snoop and truss commands.
Be sure to note the file system type and then recreate the file system as it was previously created. The initial Global File System restore should be started from the node that served as the Global File System controller at the time of the last 1-Touch backup. The 1-Touch restore directory contains the Global File System disk recovery commands. For example:
/database/jumpstart/files/arecibo/galaxy/iDataAgent/systemrecovery/disks/vxvm.vxmake.oracle_dg
or
/database/jumpstart/files/arecibo/galaxy/iDataAgent/systemrecovery/disks/lvm.mk.commands
where:
arecibo is the client
/database/jumpstart/ is the system's jumpstart directory
vxvm.vxmake.oracle_dg is the Global File System volume group using VERITAS Volume Manager
lvm.mk.commands is the Global File System volume group using Solaris Volume Manager
The following error message may be displayed during a Sun Cluster 1-Touch recovery. It is harmless and can be ignored:
devfsadm: mkdir failed for /dev/md/shared/1/rdsk 0x1ed: No such file or directory
The difference between a normal restore and a full system restore is the severity of the problem. Normally, if data is lost or removed, it is recovered from the archives using the normal restore procedures. However, when a normal restore operation cannot correct a software and/or hardware corruption problem, additional steps may be required.
Also refer to Restore Data - Solaris File System iDataAgent - Full System Restore.
When the root file system is lost, a full system restore is required. Select the appropriate method to restore:
Before You Begin
On a Solaris system, you can use either the JumpStart server method or the miniroot method to perform a full system restore. The JumpStart server method is considered the easier of the two because you do not need an additional drive for rebooting. The later steps of both methods are identical.
The jump start server method includes the following general steps (detailed instructions are provided in the following section):
The miniroot method includes the following general steps (detailed instructions are provided in the sections below):
Avoid an unconditional overwrite of the root directory on a live file system. This mechanism allows open files in the root directory (/) on a live file system to be overwritten unconditionally. Performing such a restore can leave the system inconsistent and possibly unable to boot. Use this option AT YOUR OWN RISK.
Modify <mnt_point>/Solaris_8/Tools/Boot/.tmp_proto/root/etc/inet/hosts to add the hosts needed to communicate with the Solaris File System iDataAgent.
Modify <mnt_point>/Solaris_8/Tools/Boot/etc/system to add shared memory information. For example:
set shmsys:shminfo_shmmax=4199304
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=5824
set shmsys:shminfo_shmseg=5824
set semsys:seminfo_semmns=5824
set semsys:seminfo_semmni=5824
set semsys:seminfo_semmsl=5824
Modify <mnt_point>/Solaris_8/Tools/Boot/etc/services to add system services. For example:
cvd 8400/tcp
EvMgrS 8401/tcp
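For illustration, the entries can be appended and verified like this (the sketch works on a stand-in copy of the file; apply the same edit to the miniroot's etc/services):

```shell
# Append the 1-Touch service ports to a stand-in for the miniroot's
# etc/services file, then confirm they are present.
SVCS=/tmp/services.example
: > "$SVCS"    # stand-in for <mnt_point>/Solaris_8/Tools/Boot/etc/services
printf 'cvd\t\t8400/tcp\nEvMgrS\t\t8401/tcp\n' >> "$SVCS"
grep -E '^(cvd|EvMgrS)' "$SVCS"
```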
Create symbolic links for required writeable areas in <mnt_point>/Solaris_8/Tools/Boot/etc/. For example:
ln -s ../tmp/root/etc/inet/Galaxy.pkg Galaxy.pkg
ln -s /tmp/CommVaultRegistry CommVaultRegistry
Then let the system boot.
mount /dev/dsk/<DriveID> /mnt/<file_system_name>
where <file_system_name> is the name of the file system and <DriveID> is the Drive Identifier of the partition containing the file system.
Do not select Unconditional Overwrite from the Restore Options dialog box.
This procedure is now complete.
If you install the Core System Support on an external drive, it can be used for other systems. However, you will have to remove and reinstall the Solaris File System iDataAgent software for each client. In addition, you will have to reconfigure the TCP/IP, hostname, and domain name settings for each system.
Load a Core System Support partition on your system. (You can get installation
instructions from the Sun website.) Once the software is installed, boot your
system.
For x86 systems, be sure to use the fdisk command to partition the disk before you format it. This allows you to add a "Solaris2" partition to hold the Solaris "slices". For example: fdisk /dev/rdsk/c1d0p0
Do not select Unconditional Overwrite from the Restore Options dialog box.
For example:
installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t3d0s0
For x86 systems, use the installgrub command. For example:
installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0
This procedure is now complete.
cd <jumpstart_install_directory>
/boot/solaris/bin/root_archive unpack sparc.miniroot `pwd`/boot
cd <miniroot_mnt_dir>/etc/
ln -s /tmp/Calypso/CommVaultRegistry CommVaultRegistry
If Calypso was installed in a different directory, change the steps as follows:
cd <miniroot_mnt_dir>
ln -s /tmp/Calypso opt/Calypso
ln -s /tmp/Calypso var/log/Calypso
Pack the miniroot:
/boot/solaris/bin/root_archive pack sparc.miniroot `pwd`/boot
/export/<mydir>
where <mydir> is the name of the directory.
cd to /export/<mydir>
mkdir Log_Files
mv Base32 Base
mv iDataAgent32 iDataAgent
If you are using x64 Solaris, then:
boot net -s
route add default <172.x.x.x>
where <172.x.x.x> is the gateway address.
/etc/hosts
For x86 systems, be sure to use the fdisk command to partition the disk before you format it. This allows you to add a "Solaris2" partition to hold the Solaris "slices". For example: fdisk /dev/rdsk/c1d0p0
where <DriveID> is the Drive Identifier of the partition where you want to create the root file system.
where <DriveID> is the Drive Identifier of the partition containing the root file system.
zpool create -R /a <pool name>
where <pool name> is the zpool name created during the initial Solaris install.
zfs create <pool name>/FS1
where <pool name> is the zpool name created during the initial Solaris install and FS1 is the file system name
zfs set mountpoint=<first_mountpoint> <pool name>/FS1
where <first_mountpoint> refers to its original mountpoint and FS1 is the file system name
zfs create <pool name>/FS2
where <pool name> is the zpool name created during the initial Solaris install and FS2 is the file system name
zfs set mountpoint=<first_mountpoint> <pool name>/FS2
where <first_mountpoint> refers to its original mount point and <pool name> is the zpool name created during the initial Solaris install.
zfs create -V <space required> <fsname>
where <fsname> is the filesystem name. Example: /export/home
zpool export <rpool>
where <rpool> is the root pool name
zpool import -R /a <rpool>
where <rpool> is the root pool name
zpool set bootfs=<fs_mp> <rpool>
where <fs_mp> is the name of the file system that has the mount point '/' and <rpool> is the root pool name
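Taken together, the ZFS steps above can be sketched as follows. The pool name rpool, the file system rpool/export with mount point /export, the disk c0t0d0s0, and the boot file system rpool/ROOT/s10 are all hypothetical placeholders; substitute the names recorded from the original system.

```shell
# Recreate the pool relative to /a, recreate a file system with its
# original mount point, then re-import and mark the root file system
# bootable. All names below are illustrative placeholders.
zpool create -R /a rpool c0t0d0s0
zfs create rpool/export
zfs set mountpoint=/export rpool/export
zpool export rpool
zpool import -R /a rpool
zpool set bootfs=rpool/ROOT/s10 rpool   # the file system mounted at '/'
```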
mkdir /tmp/Calypso
mount bootserver:/NFS/exported/myfolder /tmp/Calypso
ln -s /tmp/Calypso /var/log/Calypso
mkdir /opt/Calypso/Base/Temp/
../galaxy_vm
../Base/cvprofile
svc_ctrl -focus $GALAXY_VM -start cvd
svc_ctrl -focus $GALAXY_VM -start EvMgrC
svc_ctrl -focus $GALAXY_VM -start cvfwd
Platform=`uname -i`
The following command is for Solaris 10 Update 6 or later:
installboot -F zfs /a/usr/platform/$Platform/lib/fs/zfs/bootblk /dev/rdsk/<boot_disk>
where <boot_disk> is the name of the disk to be booted
The following command is for Solaris 10 Update 5 or earlier:
installboot /a/usr/platform/$Platform/lib/fs/ufs/bootblk /dev/rdsk/<boot_disk>
where <boot_disk> is the name of the disk to be booted
For x86 systems, use the installgrub command. For example:
installgrub -fm /a/boot/grub/stage1 /a/boot/grub/stage2 /dev/rdsk/<boot_disk>
This procedure is now complete.