Bring the Standby Server online
In this scenario, the user has at least two servers: a Production Server, which runs the Oracle database, and a Standby Server.
The Quick Recovery Agent takes a snapshot of the running instance. All tablespaces of the database are put into backup mode before the snapshot (Point-in-Time Image) of the data partitions is taken. As soon as the snapshot is complete, all tablespaces are taken out of backup mode; then, after the online redo logs are archived, a snapshot of the archive log partition(s) is taken. This ensures that any transactions occurring while the tablespaces were in backup mode are archived. These archived redo logs can be used for later Quick Recovery.
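The sequence around the snapshot can be sketched in standard Oracle SQL. This is only an illustration of the calls the QR Agent issues internally; the tablespace name USERS is an example, and a real database would cycle through every tablespace:

```sql
-- Put each tablespace into backup mode before the data-partition snapshot
ALTER TABLESPACE users BEGIN BACKUP;
-- ... snapshot of the data partitions is taken here ...
-- Take each tablespace out of backup mode again
ALTER TABLESPACE users END BACKUP;
-- Archive the current online redo log before the archive-log snapshot
ALTER SYSTEM ARCHIVE LOG CURRENT;
```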
The Production Server contains the source (Primary) volumes, which are copied and updated incrementally via the LAN Copy Manager to QR Volumes on the Standby Server. The data is therefore physically ready to use (no lengthy recovery from tape or other media is necessary) on the Standby Server. The Standby Server should have the same version of the Operating system and the same version of Oracle installed on the same path as the Production Server.
In cases where the Production Server suffers a failure, or requires downtime, the system administrator can remove the Production Server from the network and then add the Standby Server to the network (with the same name and configuration as the disconnected Production Server).
The key to a successful QR Agent recovery is properly configuring the Production and Standby Servers. This ensures that the database(s) can quickly and easily be run from the QR volumes on the Standby Server.
This procedure is written for bringing up one instance on the Standby Server. If multiple instances are present on one client, we strongly recommend creating a separate subclient for each instance and following the same procedure for each instance individually. This procedure has been tested with an Oracle 9.2 database on Sun Solaris 8 (SPARC). The procedure for other versions of Oracle and the operating system may vary slightly.
The following sections discuss preparing the Production and Standby Servers as well as configuring the QR Agent. The basic workflow is described below. For detailed instructions on installation and configuration options, see Quick Recovery Agent.
The Quick Recovery Agent supports Oracle 9.2. The database must be in “archive log mode” for QR Volume Creation. We recommend that data files, archive logs, control files, and online redo logs each reside on different partitions (different mount points), and that none of these four types of files reside on a partition with any of the others.
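You can confirm that the database is in archive log mode before configuring QR Volume creation. The following is a minimal check, run from SQL*Plus as a user with SYSDBA privileges:

```sql
-- Show the current log mode (should report ARCHIVELOG)
SELECT log_mode FROM v$database;
-- If the database is in NOARCHIVELOG mode, enable archiving:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```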
Following is a sample configuration of an Oracle 9.2 installation on a Solaris 8 machine, which has an Oracle instance, "CV", running. Oracle requires that the standby database be at the same OS level as the Production Server.
Mount Point | Description |
/oracle901 | Contains the Oracle 9.2 installed binaries |
/oracle901 | Is the ORACLE_HOME of instance CV |
/cv_data | Contains all the data files of instance CV |
/cv_archlog | Contains all the archived logs of instance CV |
/cv_control1 | Contains current control file 1 of instance CV |
/cv_control2 | Contains current control file 2 of instance CV |
/cv_redologs | Contains online redo logs of instance CV |
The redo logs and control files are not backed up, since they are constantly changing. To take a consistent control file backup, Oracle provides specific commands for backing up online control files. It is strongly recommended to keep each database's data and archived logs on volumes dedicated exclusively to that instance. (Example: /cv_data and /cv_archlog are exclusively for instance CV; they should not hold data or archived logs of any other instance running on that machine.)
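The consistent control file backup mentioned above can be taken with Oracle's own command. The file name backup.ctl.galaxy matches the one copied during recovery later in this procedure; the target path shown here is only an example:

```sql
-- Write a consistent binary copy of the current control file
ALTER DATABASE BACKUP CONTROLFILE TO '/cv_archlog/backup.ctl.galaxy' REUSE;
-- Optionally also dump a human-readable CREATE CONTROLFILE script to trace
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
```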
Before installing any software on the Standby Server, consider the following:
Install the same version of Oracle as is installed on the Production Server. You must use the same Oracle user ID, Oracle group ID, and the same installation path of Oracle as the Production Server.
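A quick way to confirm the accounts and paths match is to run the same checks on both machines and compare the output. The user name oracle and the path /oracle901 below are taken from the sample layout above:

```shell
# Run on both Production and Standby Servers; the output must match
id oracle                 # numeric UID and GID must be identical
echo "$ORACLE_HOME"       # installation path must be identical (e.g. /oracle901)
```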
NOTE:
It is very important to follow Step 1 precisely. These files are used for bringing up the database on the Standby Server in the event the Production Server goes down. Failure to properly configure the Standby Server will result in not being able to bring up the Standby Server successfully.
From the QR Volume Creation Advanced Options dialog box, assign each volume on the Production Server to its corresponding destination volume on the Standby Server. The mount path of each standby volume should be the same as its counterpart production volume. For each source raw partition on the Production Server, the corresponding destination raw partition on the Standby Server should be selected in the QR Volume Creation Advanced Options dialog box.
If any raw partitions are used for data files or control files, you must create the necessary links for appropriate devices on the Standby Server before QR Volume creation. The soft link pointing to the destination raw partition should be the same as that of the source raw partition.
For example, if the production database has a data file linked to a raw volume (data file) as follows:
/ora_data/<ORACLE_SID>/raw.dbf -> /dev/cxbf/rdsk/c1t1d1s1 (source)
then the links to the destination volume should be created on the Standby Server as follows (after selecting the destination volume in the Advanced tab and before starting the QR Volume creation process):
/ora_data/<ORACLE_SID>/raw.dbf -> /dev/cxbf/rdsk/c2t1d1s1 (destination)
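On the Standby Server, that destination link can be created with ln -s. The SID directory CV below stands in for your own ORACLE_SID, and the device name mirrors the example above; substitute your actual paths:

```shell
# Create the soft link on the Standby Server so its path matches the
# source link but points at the destination raw partition
mkdir -p /ora_data/CV
ln -s /dev/cxbf/rdsk/c2t1d1s1 /ora_data/CV/raw.dbf
```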
In cases where the Production Server suffers a failure or requires downtime, the Standby Server can quickly be brought online to host the database from the QR Volumes you have created. The following steps must be taken to add the Standby Server to the network.
After bringing up the Standby Server, verify that the destination volumes are mounted. These are the volumes that were in the scratch volume pool when the QR Volumes were created. Also verify that the Oracle user owns all mount points, directories, and files. If necessary, change the ownership to the appropriate Oracle user and group; otherwise your instance might fail to access the files.
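Ownership can be checked and corrected from the shell. The user and group names oracle:dba and the mount points below come from the sample layout earlier in this document; adjust them to your configuration:

```shell
# Verify current ownership of the Oracle mount points
ls -ld /cv_data /cv_archlog /cv_control1 /cv_control2 /cv_redologs
# If anything is not owned by the Oracle user, fix it recursively
chown -R oracle:dba /cv_data /cv_archlog /cv_control1 /cv_control2 /cv_redologs
```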
NOTE:
Do not try to detect volumes in Volume Explorer on either the Production or the Standby Server. The Oracle recovery procedure is manual, and is not accomplished with Volume Explorer, the QR Agent, or the Oracle iDataAgent.
OPTIONAL:
Add the destination volumes and their mount points to /etc/vfstab on the Standby Server, to ensure the volumes will be mounted again in the event of a reboot.
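A corresponding /etc/vfstab entry might look like the following; the device names are placeholders for your own destination volumes, and the mount point /cv_data is from the sample layout:

```
#device             device              mount     FS    fsck  mount    mount
#to mount           to fsck             point     type  pass  at boot  options
/dev/dsk/c2t1d0s0   /dev/rdsk/c2t1d0s0  /cv_data  ufs   2     yes      -
```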
Copy the backup.ctl.galaxy file to all the control file locations as noted in Step 1 of "Prepare the Standby Server".
Example:
cp backup.ctl.galaxy /cv_control1/control01.ctl
cp backup.ctl.galaxy /cv_control2/control02.ctl
cp backup.ctl.galaxy /cv_control3/control03.ctl
If the control file is on a raw device, then the above example would change to:
dd if=backup.ctl.galaxy of=/cv_control1/controlraw.ctl
SQL> startup mount
After the successful completion of Step 5, the database is in OPEN mode and ready to use.
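The typical SQL*Plus sequence for opening a database from a restored backup control file looks like the following. This is a general Oracle recovery sketch, not a substitute for the exact numbered steps referenced above; archived log names are supplied interactively:

```sql
SQL> STARTUP MOUNT
-- Apply archived redo from the restored backup control file
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-- When prompted, supply archived log file names, then type CANCEL
SQL> ALTER DATABASE OPEN RESETLOGS;
```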
NOTE:
If you receive any messages that files cannot be accessed or access is denied during the startup process, check again that you have mounted all volumes, copied the files, and set the ownership to the Oracle user.
Solaris provides a utility called sys-unconfig for resetting the network configuration. After running it, reboot the machine and enter the new IP address, hostname, and so on. You can achieve the same effect by editing the files listed below and then rebooting.
/etc/hosts
This is where you specify the IP address of your hostname. Change the hostname here to match your host's new name. It should be the same as what you specify in /etc/nodename.
/etc/nodename
This is just like /etc/HOSTNAME in Linux. It defines the real name of the host. Simply change it to whatever you want to call your host.
/etc/hostname.hme0 (or other interface name)
Change these to line up with what you specified in /etc/hosts.
/etc/net/tic*/hosts
Change everything in here to line up with the files above.
/etc/resolv.conf
Just like Linux. This is where you specify your DNS servers and domain resolution information.
/etc/defaultrouter
Enter the IP address of the default router for your Solaris host. Note that Sun does not create this file by default, nor is there any other location where you may specify a default route.
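Putting the files above together, a sketch of the edits looks like this. The hostname prodsrv and the addresses are illustrative values, and the files are staged in a scratch directory here rather than under /etc:

```shell
# Stage example contents for the Solaris network files listed above.
# On the real host these live under /etc; values are illustrative.
ETC=$(mktemp -d)
echo "192.168.1.10  prodsrv" > "$ETC/hosts"          # IP-to-hostname mapping
echo "prodsrv"               > "$ETC/nodename"       # real name of the host
echo "prodsrv"               > "$ETC/hostname.hme0"  # name bound to interface hme0
printf "nameserver 192.168.1.1\ndomain example.com\n" > "$ETC/resolv.conf"
echo "192.168.1.1"           > "$ETC/defaultrouter"  # default route
cat "$ETC/nodename"   # prints: prodsrv
```

After editing the real files under /etc, reboot the machine for the new identity to take effect.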