Wednesday, June 8, 2016

Cloning Oracle E-Business Suite Release 12.2 RAC Enabled Systems with Rapid Clone


This document describes the process of using the Oracle E-Business Suite Rapid Clone utility to create a clone (copy) of an Oracle E-Business Suite Release 12.2 system that utilizes the Oracle Database Real Application Clusters (Oracle RAC) feature. The resulting duplicate Oracle E-Business Suite Release 12.2 Oracle RAC environment can then be used for purposes such as:
  • Patch testing
  • User Acceptance testing
  • Performance testing
  • Load testing
  • QA validation
  • Disaster recovery
The most current version of this document can be obtained in Document 1679270.1.
Note: For cloning procedures in Oracle E-Business Suite environments that do not use Oracle RAC, refer to My Oracle Support Knowledge Document 1383621.1, Cloning Oracle E-Business Suite Release 12.2 with Rapid Clone.
There is a Change Log at the end of this document.

Note: At present, the procedures described in this document apply to UNIX and Linux platforms only, and are not suitable for Oracle E-Business Suite Release 12.2 RAC-enabled systems running on Windows.
A number of conventions are used in describing the Oracle E-Business Suite architecture:
Application tier: Machines (nodes) running Forms, Web, and other services (servers). Also called the middle tier.
Database tier: Machines (nodes) running the Oracle E-Business Suite database.
oracle: User account that owns the database file system (database ORACLE_HOME and files).
CONTEXT_NAME: The name of the Oracle E-Business Suite context that is used by AutoConfig. The default is [SID]_[hostname].
CONTEXT_FILE: Full path to the context file on the application tier or database tier.
APPSpwd: Oracle E-Business Suite database user password.
Source System: Original Oracle E-Business Suite and database system, to be duplicated as the Target System.
Target System: New Oracle E-Business Suite system, created as a copy of the source system.
ORACLE_HOME: The top-level directory into which the database software has been installed.
CRS_ORACLE_HOME: The top-level directory into which the Cluster Ready Services (CRS) software has been installed.
ASM_ORACLE_HOME: The top-level directory into which the Automatic Storage Management (ASM) software has been installed.
RMAN: Oracle's Recovery Manager utility, which ships with the 11g and 12c Database.
Image: The RMAN proprietary-format files from the source system backup.
Monospace Text: Represents command line text. Type such a command exactly as shown.
[ ]: Text enclosed in square brackets represents a variable. Substitute a value for the variable text. Do not type the square brackets.
\: On UNIX, the backslash character is entered at the end of a command line to indicate continuation of the command on the next line.

Section 1: Overview, Prerequisites and Restrictions

1.1 Overview

Converting Oracle E-Business Suite Release 12.2 from a single instance database to a multi-node Oracle Real Application Clusters (Oracle RAC) enabled database (described in Document 1453213.1, Using Oracle 11g Release 2 Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12.2, and Document 1626606.1, Using Oracle 12c Release 1 (12.1) Real Application Clusters with Oracle E-Business Suite Release 12.2) is a complex and time-consuming process. It is therefore common for sites to maintain only a single Oracle RAC-enabled E-Business Suite environment. Typically, this will be the main production system.
In many large enterprises, however, there is often a need to maintain two or more Oracle RAC-enabled environments that are exact copies (or clones) of each other. This may be needed, for example, when undertaking specialized development, testing patches, working with Oracle Support, and other scenarios. It is not advisable to carry out such tasks on a live production system, even if it is the only environment enabled to use Oracle Real Application Clusters.
The goal of this document (and the patches mentioned herein) is to provide a rapid, clear-cut, and easily achievable method of cloning an Oracle RAC-enabled E-Business Suite Release 12.2 environment to a new set of machines on which a duplicate Oracle RAC-enabled Oracle E-Business Suite system is to be deployed.
This process will be referred to as RAC-To-RAC cloning from here on.

1.2 Cluster Terminology

You should understand the terminology used in a cluster environment. Key terms include the following.
  • Automatic Storage Management (ASM) is an Oracle database component that acts as an integrated file system and volume manager, providing the performance of raw devices with the ease of management of a file system. In an ASM environment, you specify a disk group rather than the traditional data file when creating or modifying a database structure such as a tablespace. ASM then creates and manages the underlying files automatically.
  • Oracle Cluster File System (OCFS2) is a general purpose cluster file system which can, for example, be used to store Oracle database files on a shared disk.
  • Certified Network File Systems are Oracle-certified network-attached storage (NAS) filers; such products are available from EMC, HP, NetApp, and other vendors. See the Oracle 11g Real Application Clusters installation and user guides for details on supported NAS devices and certified cluster file systems.
  • Cluster Ready Services (CRS) is the primary program that manages high availability operations in an Oracle RAC environment. The crs process manages designated cluster resources, such as databases, instances, services, and listeners.
  • Oracle Real Application Clusters (Oracle RAC) is a database feature that allows multiple machines to work on the same data in parallel, reducing processing time. Of equal or greater significance, depending on the specific need, an Oracle RAC environment also offers resilience if one or more machines become temporarily unavailable as a result of planned or unplanned downtime.

1.3 Prerequisites

  • This document is only for use in RAC-To-RAC cloning of a source Oracle E-Business Suite Release 12.2 Oracle RAC System to a target Oracle E-Business Suite Oracle RAC System.
  • The steps described in this note are for use by accomplished Oracle E-Business Suite and Database Administrators, who should be:
    • Familiar with the principles of cloning an Oracle E-Business Suite system, as described in My Oracle Support Knowledge Document 1383621.1.
    • Familiar with Oracle Database Server 11g or Oracle Database Server 12c, and have at least a basic knowledge of Oracle RAC.
    • Experienced in the use of Rapid Clone, AutoConfig, and AD utilities, as well as the steps required to convert from a single instance Oracle E-Business Suite installation to an Oracle RAC-enabled one.
  • The source system must remain in a running and active state during database Image creation.
  • The addition of database Oracle RAC nodes (beyond the assumed secondary node) is easily handled from the Rapid Clone perspective. However, the Clusterware software stack and cluster-specific configuration must be in place first, to allow Rapid Clone to configure the database technology stack properly. The CRS-specific steps required for the addition of database nodes are briefly covered in Appendix A; refer to the Oracle Clusterware product documentation for greater detail.
  • Details such as operating system configuration of mount points, installation and configuration of ASM, OCFS2, NFS or other forms of cluster file systems are not covered in this document.
  • Oracle Clusterware installation and component service registration are not covered in this document.
  • Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) and Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) are useful references when planning to set up Oracle Real Application Clusters and shared devices.

1.4 Restrictions

Before using RapidClone to create a clone of an Oracle E-Business Suite Release 12.2 Oracle RAC-enabled system, you should be aware of the following restrictions and limitations:
  • This RAC-To-RAC cloning procedure can be used on Oracle Database 11g and Oracle Database 12c Release 1 (12.1) Oracle RAC Systems.
  • The final cloned Oracle RAC environment will:
    • Use aliases for file names and the Oracle Managed Files option for data file names.
    • Contain the same number of redo log threads as the source system.
    • Have all data files located under a single "DATA_TOP" location.
    • Generate three control files in the target database.
  • During the cloning process, no allowance is made for the use of a Flash Recovery Area (FRA). If an FRA needs to be configured on the target system, it must be done manually.
  • At the conclusion of the cloning process, the final cloned Oracle RAC environment will use a pfile (parameter file) instead of an spfile. For proper CRS functionality, you should create an spfile and locate it in a shared storage location that is accessible from both Oracle RAC nodes.
  • Besides ASM and OCFS2, only NetApp-branded devices (certified NFS clustered file systems) have been confirmed to work at present. While other certified clustered file systems should work for RAC-to-RAC cloning, shared storage combinations not specifically mentioned in this document are not guaranteed to work, and will therefore only be supported on a best-efforts basis.

Section 2: Configuration Requirements for Source Oracle RAC System

2.1 Required AD/TXK Patches

Note: As a general principle, you are strongly encouraged always to be on the latest AD/TXK codelevel. Refer to My Oracle Support Knowledge Document 1583092.1, Oracle E-Business Suite Release 12.2: Suite-Wide Rollup and AD/TXK Delta Information, and apply the most recent delta patches.
To be able to use RAC-to-RAC cloning, you must first be on the codelevel arrived at by application of these patches:
  • R12.AD.C.Delta.5 (18283295)
  • R12.TXK.C.Delta.5 (18288881)
For instructions on applying these prerequisite patches, visit My Oracle Support and refer to the current version of Knowledge Document 1617461.1, Applying the Latest AD and TXK Release Update Packs to Oracle E-Business Suite Release 12.2.
Once all these patches have been applied, proceed to the next steps.

2.2 AutoConfig Setup

  1. On the application tier (as the applmgr user):

    1. Source the run edition file system
    2. Create the appsutil.zip file (under $INST_TOP/admin/out) by running this command:
      $ $ADPERLPRG $AD_TOP/bin/admkappsutil.pl
      or on Windows:
      C:\>%ADPERLPRG% %AD_TOP%\bin\admkappsutil.pl
  2. On the database tier (as the oracle user):

    1. Copy or ftp the appsutil.zip file from the Application tier to the <RDBMS ORACLE_HOME>
    2. Change directory to the RDBMS ORACLE_HOME
      $ cd <RDBMS_ORACLE_HOME>
    3. Unzip the appsutil.zip file:
      $ unzip -o appsutil.zip
    4. Run AutoConfig by executing:
      $ <RDBMS_ORACLE_HOME>/appsutil/scripts/<CONTEXT_NAME>/adautocfg.sh
      or on Windows:
      C:\><RDBMS_ORACLE_HOME>\appsutil\scripts\<CONTEXT_NAME>\adautocfg.cmd

2.3 Supported Oracle RAC Migration

The source Oracle E-Business Suite RAC environment must have been created in accordance with Document 1453213.1, Using Oracle 11g Release 2 Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12.2, or Document 1626606.1, Using Oracle 12c Release 1 (12.1) Real Application Clusters with Oracle E-Business Suite Release 12.2. The RAC-To-RAC cloning process described here has only been validated for use on Oracle E-Business Suite Release 12.2 systems that have been converted to use Oracle RAC as per these documents, or installed in Oracle RAC mode via Rapid Install.

2.4 AutoConfig Compliance on Oracle RAC Nodes

Also in accordance with Document 1453213.1 and Document 1626606.1, AutoConfig must have been used during Oracle RAC configuration of the source system (following conversion).

2.5 Supported Data File Storage Methods

The storage method used for the source system data files must be one of the following Oracle 11g/12c R1 Oracle RAC Certified types:
  • NFS Clustered File Systems (such as NetApp Filers)
  • ASM (Oracle Automatic Storage Management)
  • OCFS2 (Oracle Cluster File System V2)
  • Solaris ZFS (Default file system in Solaris 11)

2.6 Archive Log Mode

The source system database instances must be in archive log mode, and the archive log files must be located within the shared storage area where the data files are currently stored. This conforms to standard Oracle RAC best practices.
Note: If the source system was not previously in archive log mode and it has only recently been enabled, or if the source system parameter LOG_ARCHIVE_DEST was at some point set to a local disk directory location, you must ensure that RMAN has a properly maintained list of valid archive logs located exclusively in the shared storage area.
To confirm RMAN knows only of your archive logs located on the shared disk storage area, do the following.
First, use SQL*Plus or RMAN to show the locations of the archive logs. For example:
SQL>archive log list
If the output shows a local disk location, change this location appropriately, and back up or relocate any archive log files to the shared storage area. It will then be necessary to correct the RMAN archive log manifest, as follows:
RMAN>crosscheck archivelog all;
Review the archive log file locations in the output. Assuming you have relocated or removed any locally stored archive logs, correct the invalid or expired entries as follows:
RMAN>delete expired archivelog all;
It is essential to carry out the above steps (if applicable) before you continue with the Oracle E-Business Suite Release 12.2 Oracle RAC cloning procedure.
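If the archive destination does need to be relocated, a minimal sketch is shown below (it assumes the instance is running from an spfile, and the shared disk group path is illustrative only):
$ sqlplus "/ as sysdba"
SQL> alter system set log_archive_dest_1='LOCATION=+APPS_RAC_DISK/archivelogs' scope=both sid='*';
The shared directory must already exist. After relocating the destination, run the RMAN crosscheck and delete commands shown above to correct the manifest.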

2.7 Control File Location

The database instance control files must be located in the shared storage area as well.
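To verify, a quick check from SQL*Plus should show every control file path residing on the shared storage area:
$ sqlplus "/ as sysdba"
SQL> select name from v$controlfile;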

Section 3: Configuration Requirements for Target Oracle RAC System

3.1 User Equivalence between Oracle RAC Nodes

Set up ssh and rsh user equivalence (that is, without password prompting) between primary and secondary target Oracle RAC nodes. This is described in Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2), with the required steps being listed in Section E.1.2, "Configuring SSH on All Cluster Nodes". For Oracle Grid Infrastructure 12c Release 1 (12.1), the steps are described in Section F.1.2, "Configuring SSH on All Cluster Nodes" in Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1).
Note: If the Oracle RAC ORACLE_HOME was installed using Rapid Install, SSH may already have been configured as part of the installation.

Note: SSH connectivity can also be set up automatically during installation, as described in the Oracle Grid Infrastructure installation documentation.
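To confirm that user equivalence is in place, run a remote command in each direction as the oracle user and verify that no password prompt appears (kawasaki and suzuki are the sample target node names used later in this document):
$ ssh suzuki date
$ ssh kawasaki date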

3.2 Install Oracle Grid Infrastructure

Install Oracle Grid Infrastructure and update the version to match that of the source system database. For example, if the original source system database is 11.2.0.3, Oracle Grid Infrastructure must also be patched to the 11.2.0.3 level.
Note: For detailed instructions regarding the installation and usage of Oracle's Clusterware software as it relates to Oracle Real Application Clusters, see the Oracle Clusterware and Oracle Grid Infrastructure documentation for your release.

3.3 Verify Shared Mount Points or Disks

Ensure that all shared disk sub-systems are fully and properly configured: they need to have adequate space, be writable by the future oracle software owner, and be accessible from both primary and secondary nodes.
Note: For details on configuring ASM, OCFS2, and NFS with NetApp Filer, refer to the following documents as applicable:
  • My Oracle Support Knowledge Document 279393.1, Linux/NetApp RHEL/SUSE Setup Recommendations for NetApp Filer Storage, contains details specific to Linux NFS mount options.
  • My Oracle Support Knowledge Document 1066042.6, Configuring Network Appliance's NetApp To Work With Oracle, includes details on where to find NetApp co-authored articles related to using NetApp-branded devices with Oracle products.
Be aware that from Oracle 11gR2 onwards, ASM is installed with Oracle Grid Infrastructure - which is why a separate ORACLE_HOME is needed for that component.

Note: For ASM target deployments, you are strongly advised to install a separate $ORACLE_HOME for ASM management, regardless of where your ASM listener configuration resides. The default listener configuration must be changed via the netca executable.

3.4 Verify Network Layer Interconnects

Ensure that the network layer is properly defined for private, public, and VIP (Clusterware) interconnects. That is, runcluvfy.sh from the Oracle Clusterware software stage area should have executed without error prior to CRS installation.
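For example, a pre-installation verification run might look like the following (the stage directory is illustrative, and kawasaki/suzuki are the sample target node names used later in this document):
$ cd /stage/clusterware
$ ./runcluvfy.sh stage -pre crsinst -n kawasaki,suzuki -verbose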

Section 4: Preparing the Source Oracle RAC System for Cloning

4.1 Update the File System With Latest Oracle RAC Patches

As per Section 2 of this document, the latest AD and TXK consolidated update patches, any post-patch steps described in the READMEs, and all prerequisite patches should now have been applied.
After application of all these patches, adpreclone.pl must be executed on the primary database node you are going to archive. Use the following command:
$ cd $ORACLE_HOME/appsutil/scripts/[context_name]
$ perl adpreclone.pl dbTier
After executing adpreclone.pl, perform the following steps.

4.2 Create Database Image

Note: Do not shut down the source system database services to complete the steps in this section. The database must remain mounted and open for the imaging process to complete successfully. Rapid Clone for Oracle RAC-enabled Oracle E-Business Suite Release 12.2 systems operates differently from single instance cloning.
Log in to the primary Oracle RAC node, navigate to [ORACLE_HOME]/appsutil/clone/bin, and run the adclone.pl utility from a shell as follows:
$ perl adclone.pl \
java=[JDK Location] \
mode=stage \
stage=[Stage Directory] \
component=database \
method=RMAN \
dbctx=[RAC DB Context File] \
showProgress
Where:
stage: Any directory or mount point location outside the current ORACLE_HOME location, with enough space to hold the existing database data files in an uncompressed form.
dbctx: Full path to the existing Oracle RAC database context file.
The above command will create a series of directories under the specified stage location.
After the stage creation is completed, navigate to [stage]/data/stage. In this directory, you will find several 2GB RMAN backup/image files. These files will have names like "1jj9c44g_1_1". The number of files present will depend on the source system configuration. The files, along with the "backup_controlfile.ctl", will need to be transferred to the target system on which you wish to create your new primary Oracle RAC node.

These files should be placed into a temporary holding area, which can be removed later.
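For example, the files might be transferred with scp (the target hostname and holding-area path are illustrative):
$ scp [stage]/data/stage/* oracle@kawasaki:/u01/rman_stage/data/stage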

4.3 Archive the ORACLE_HOME

Note: The database can be left open during the ORACLE_HOME archive creation process.
Create an archive of the source system ORACLE_HOME on the primary node:
$ cd $ORACLE_HOME/..
$ tar -cvzf rac_db_oh.tgz [DATABASE TOP LEVEL DIRECTORY]
Note: Consider using data integrity utilities such as md5sum, sha1sum, or cksum to verify the checksum both before and after transfer to the target system.
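For example, run the same command on the source node before transfer and on the target node afterwards, and confirm that the checksums match:
$ md5sum rac_db_oh.tgz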
It is not required to place the archived home in the target ORACLE_HOME location. You can untar it to the ORACLE_HOME location from a shared location that is accessible to all Oracle RAC nodes.

Section 5: RAC-to-RAC Cloning

5.1 Target System Primary Node Configuration (Clone Initial Node)

Follow the steps below to clone the primary node (i.e. Node 1) to the new target system.

5.1.1 Uncompress ORACLE_HOME

Uncompress the ORACLE_HOME archive that was transferred from the source system. Choose a suitable location, and rename the extracted top-level directory to something meaningful on the new target system.
$ tar -xvzf rac_db_oh.tgz

5.1.2 Create pairsfile.txt File for Primary Node

Create a [NEW_ORACLE_HOME]/appsutil/clone/pairsfile.txt text file with contents as shown below:
s_undo_tablespace=[Source system primary instance undo tablespace name]
s_dbClusterInst=[Total number of Instances in a cluster e.g. 2]
s_db_oh=[Location of new ORACLE_HOME]
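For illustration, a completed pairsfile.txt for a two-instance cluster whose new ORACLE_HOME is /s1/atgrac/racdb (these values are examples only) might read:
s_undo_tablespace=UNDOTBS1
s_dbClusterInst=2
s_db_oh=/s1/atgrac/racdb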

5.1.3 Create Context File for Primary Node

You will now execute the adclonectx.pl utility to create a new context file, providing carefully determined answers to the prompts.
Navigate to [NEW_ORACLE_HOME]/appsutil/clone/bin and run adclonectx.pl with the following parameters:
$ perl adclonectx.pl \
contextfile=[Full PATH to OLD Source RAC]/<contextfile>.xml \
template=[NEW ORACLE_HOME]/appsutil/template/adxdbctx.tmp \
pairsfile=[NEW ORACLE_HOME]/appsutil/clone/pairsfile.txt initialnode
Where:
contextfile: Absolute path to the old source Oracle RAC database context file.
template: Absolute path to the existing database context file template.
pairsfile: Absolute path to the pairsfile created in the previous step.

Note: A new and unique global database name (DB name) must be selected when creating the new target system context file. Do not use the source system global database name or sid name during any of the context file interview prompts as shown below.

You will be presented with the following questions [sample answers provided]:
Target System Hostname (virtual or normal) [kawasaki] [Enter appropriate value if not defaulted]

Do you want the inputs to be validated (y/n) [n] ? : [Enter n]

Target Instance is RAC (y/n) [y] : [Enter y]

Target System Database Name : [Enter new desired global DB name, not a SID; motoGP global name was selected here]

Do you want to enable SCAN addresses (y/n) [y] ? : [if y, answer next 2 questions]

Specify value for SCAN Name : [target system SCAN name, if SCAN addresses enabled]

Specify value for SCAN Port : [target system SCAN port, if SCAN addresses enabled]

Do you want the target system to have the same port values as the source system (y/n) [y] ? : [Select y or n]

Target System Port Pool [0-99] : <provide the port_pool>

Provide information for the initial RAC node:

Host name [ducati] : [Always need to change this value to the current public machine name, for example kawasaki]

Virtual Host name [null] : [Enter the Clusterware VIP interconnect name, for example kawasaki-vip ]

Instance number [1] : 1 [Enter 1, as this will always be the instance number when you are on the primary target node]

Private interconnect name [kawasaki] [You need to change this value; enter the private interconnect name, such as kawasaki-priv]

Target System Base Directory : [Enter the base directory that contains the new_oh_loc dir]

Oracle OS User [oracle] : [Should default to correct current user; just press enter]

Oracle OS Group [dba] : [Should default to correct current group; just press enter]

Target System utl_file_dir Directory List : /usr/tmp [Specify an appropriate location for your requirements]

Number of DATA_TOP's on the Target System [2] : 1 [At present, you can only have one data_top with RAC-To-RAC cloning]

Target System DATA_TOP Directory 1 : +APPS_RAC_DISK [The shared storage location; ASM diskgroup/NetApps NFS mount point/OCFS2 mount point]

Do you want to preserve the Display [null] (y/n) ? : [Respond according to your requirements]

New context path and file name [/s1/atgrac/racdb/appsutil/motoGP1_kawasaki.xml] : [Double-check proposed location, and amend if needed]
Note: If cloning to ASM, a full path is needed. For example:
Target System DATA_TOP Directory 1 : +DATA/dbfile/VISION
This path must be created manually on the ASM target system.

 
Note: It is critical that the correct values are selected above. If you are uncertain, review the newly-written context file and compare it with the values selected during source system migration to RAC (as per My Oracle Support Knowledge Document 1453213.1 for Oracle 11g Release 2 (11.2) RAC, or My Oracle Support Knowledge Document 1626606.1 for Oracle 12c Release 1 (12.1) RAC).

When making comparisons, always ensure that any path differences between the source and target systems are understood and accounted for.

5.1.4 Restore Database on Target System Primary Node

Warning: It is not recommended to clone an E-Business Suite RAC-enabled environment to the same host. If the source and target systems must be the same host, ensure the source system is cleanly shut down and the data files are moved to a temporarily inaccessible location prior to restoring/recovering the new target system.
Failure to heed this warning could result in corrupt redo logs on the source system. Same-host RAC cloning requires the source system to be down.

Warning: In addition to same-host RAC cloning, it is also not recommended to clone E-Business Suite RAC-enabled environments to a target system that can directly access the source system dbf files (perhaps via an NFS shared mount). If the intended target file system has access to the source dbf files, corruption of redo log files can occur on the source system. Corruption is also possible if any dbf files exist on the intended target file system under a path matching the original source mount point (for example, /xyz/datafiles). If existing data files on the target are in a file system location that also exists on the source server, shut down the database that owns those data files. Failure to heed this warning could result in corrupt redo logs on the source system, or in any existing database on the target host that has a mount point matching the original (and perhaps unrelated) source system. If unsure, shut down any database that stores data files in a path that existed on the source system and in which data files were stored.

5.1.4.1 Run adclone.pl to Restore and Rename Database on New Target System

Navigate to [NEW_ORACLE_HOME]/appsutil/clone/bin and run Rapid Clone (adclone.pl utility) with the following parameters:
$ perl adclone.pl \
java=[JDK Location] \
component=dbTier \
mode=apply \
stage=[ORACLE_HOME]/appsutil/clone \
method=RMAN \
dbctxtg=[Full Path to the Target Context File]/<contextfile>.xml \
rmanstage=[Location of the Source RMAN dump files... i.e. RMAN_STAGE/data/stage] \
rmantgtloc=[Shared storage location for data files...ASM diskgroup / NetApps NFS mount / OCFS mount point] \
srcdbname=[Source RAC system GLOBAL name] \
showProgress
Where:
java: Full path to the directory where JDK 1.6 or 1.7 is installed.
stage: This parameter is static and refers to the newly-unzipped [ORACLE_HOME]/appsutil/clone directory.
dbctxtg: Absolute path to the new context file created by adclonectx.pl under [ORACLE_HOME]/appsutil.
rmanstage: Temporary location where you have placed the database "image" files transferred from the source system to the new target host.
rmantgtloc: Base directory or ASM diskgroup location into which you wish the database (dbf) files to be extracted. The recreation process will create subdirectories of [GLOBAL_DB_NAME]/data, into which the dbf files will be placed. Only the shared storage mount point top-level location needs to be supplied.
srcdbname: Source system GLOBAL_DB_NAME (not the SID of a specific node). Refer to the source system context file parameter s_global_database_name. Note that no domain suffix should be added.

Note: If cloning to ASM, a full PATH is needed. For example:
Target System DATA_TOP Directory 1 : +DATA/dbfile/VISION
This path must be created on the ASM target system manually.

Additionally, the rmantgtloc (RMAN target location) parameter in the adclone.pl command should also contain the full path if restoring to ASM.

Note: The directories and mount points selected for the rmanstage and rmantgtloc locations should not contain data files for any other databases. The presence of unrelated data files may result in very lengthy restore operations, and on some systems a potential hang of the adclone.pl restore command.
Running the adclone.pl command may take several hours. You can monitor progress by running the following command from a terminal window:
$ tail -f [ORACLE_HOME]/appsutil/log/$CONTEXT_NAME/ApplyDBTier_[time].log
This will display and periodically refresh the last few lines of the main log file (displayed when you run adclone.pl), in which you will see references to additional log files that can help show the current actions being executed.
Note: If the database version is 12c Release 1, be sure to add the following line to your sqlnet_ifile.ora after adclone.pl execution completes:
  • SQLNET.ALLOWED_LOGON_VERSION_SERVER = 8 (if the initialization parameter SEC_CASE_SENSITIVE_LOGON is set to FALSE)
  • SQLNET.ALLOWED_LOGON_VERSION_SERVER = 10 (if SEC_CASE_SENSITIVE_LOGON is set to TRUE)

5.1.4.2 Verify TNS Listener has been started

After the above process exits, and it has been confirmed that no errors were encountered, you will have a running database and TNS listener, with the new SID name chosen earlier.
Execute the following command to confirm that the TNS listener is running and has the appropriate service name format:
$ ps -ef | grep tns | awk '{ print $9}'
The output from the above command should return a string matching the value of the context variable s_db_listener. If it does not, verify the listener.ora file in the $TNS_ADMIN location before continuing with the next steps: the listener must be open and running before AutoConfig is executed.
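You can also query the listener directly (the listener name shown is illustrative; use the actual value of s_db_listener from your context file):
$ lsnrctl status LISTENER_kawasaki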

5.1.4.3 Run AutoConfig

At this point, the new database is fully functional. However, to complete the configuration you must navigate to [ORACLE_HOME]/appsutil/scripts/[CONTEXT_NAME] and run AutoConfig with the following command:
$ ./adautocfg.sh appspass=[APPS Password]

5.2 Target System Secondary Node Configuration (Clone Additional Nodes)

Follow the steps below to clone the secondary nodes (for example, Node 2) on to the target system.

5.2.1 Add Secondary RAC node information to sqlnet.ora

This is a two-node Oracle RAC example.
Go to the primary node and add the secondary node information to the sqlnet.ora file:
RAC node 1: host1.example.com
RAC node 1 vip: host1-vip.example.com
RAC node 2: host2.example.com
RAC node 2 vip: host2-vip.example.com
Open the $ORACLE_HOME/network/admin/<context>/sqlnet.ora file for editing, and add the Oracle RAC Node 2 information by changing the line shown.
FROM:

tcp.invited_nodes=(host1.example.com, host1-vip.example.com)

TO:

tcp.invited_nodes=(host1.example.com, host1-vip.example.com, host2.example.com, host2-vip.example.com)
Then reload the listener to reflect the change.
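For example (the listener name is illustrative; use the listener name defined by s_db_listener in your context file):
$ lsnrctl reload LISTENER_host1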
Note: The host1 entries should already be present after a successful clone of the primary node.

5.2.2 Uncompress archived ORACLE_HOME transferred from Source System

Uncompress the source system ORACLE_HOME archive to a location matching that on your target system primary node. The directory structure should match that present on the newly created target system primary node.
$ tar -xvzf rac_db_oh.tgz

5.2.3 Archive [ORACLE_HOME]/appsutil directory structure from New Primary Node

Log in to the new target system primary node, and execute the following commands:
$ cd [ORACLE_HOME]
$ zip -r appsutil_node1.zip appsutil

5.2.4 Copy appsutil_node1.zip to Secondary Target Node

Transfer appsutil_node1.zip to the secondary target RAC node, and then expand it into the [NEW ORACLE_HOME]:
$ cd [NEW ORACLE_HOME]
$ unzip -o appsutil_node1.zip

5.2.5 Update pairsfile.txt for Secondary Target Node

Alter the existing pairsfile.txt (from the first target node) and change the s_undo_tablespace parameter.
The [NEW_ORACLE_HOME]/appsutil/clone/pairsfile.txt will look like this example:
s_undo_tablespace=[Source system secondary node undo tablespace name]
s_dbClusterInst=[Total number of Instances in a cluster e.g. 2]
s_db_oh=[Location of new ORACLE_HOME]

5.2.6 Create Context File for Secondary Node

Navigate to [NEW_ORACLE_HOME]/appsutil/clone/bin and run the adclonectx.pl utility as follows:
$ perl adclonectx.pl \
contextfile=[Full Path to Existing Context File on First Node]/<contextfile>.xml \
template=[NEW ORACLE_HOME]/appsutil/template/adxdbctx.tmp \
pairsfile=[NEW ORACLE_HOME]/appsutil/clone/pairsfile.txt addnode
Where:
contextfile: Absolute path to the existing context file from the first (primary) node.
template: Absolute path to the existing database context file template.
pairsfile: Absolute path to the pairsfile updated in the previous step.
Several of the interview prompts are the same as on Node 1. However, there are some new questions specific to the "addnode" option used on the second node.
Note: When answering the questions below, review your responses carefully before entering them. The rest of the inputs (not shown) are the same as those encountered during the context file creation on the initial node (primary node).
Host name of the live RAC node : kawasaki [enter appropriate value if not defaulted]

Domain name of the live RAC node : yourdomain.com [enter appropriate value if not defaulted]

Database SID of the live RAC node : motoGP1 [enter the individual SID, NOT the Global DB name]

Listener port number of the live RAC node : 1548 [enter the port # of the Primary Target Node you just created]

Provide information for the new Node:

Host name : suzuki [enter appropriate value if not defaulted, like suzuki]

Virtual Host name : suzuki-vip [enter the Clusterware VIP interconnect name, like suzuki-vip.yourdomain.com]

Instance number : 2 [enter the instance # for this current node]

Private interconnect name : suzuki-priv [enter the private interconnect name, like suzuki-priv]

Current Node:

Host Name : suzuki

SID : motoGP2

Instance Name : motoGP2

Instance Number : 2

Instance Thread : 2

Undo Table Space: UNDOTBS2 [enter value earlier added to pairsfile.txt, if not defaulted]

Listener Port : 1548

Target System quorum disk location required for cluster manager and node monitor : [legacy parameter, enter /tmp]

Target System Base Directory : [Enter the base directory that contains the new_oh_loc dir]
Oracle OS User [oracle] :
Oracle OS Group [dba] :
Target System utl_file_dir Directory List : /usr/tmp [Specify an appropriate location for your requirements]
Number of DATA_TOP's on the Target System [2] : 1 [At present, you can only have one data_top with RAC-To-RAC cloning]
Target System DATA_TOP Directory 1 : +APPS_RAC_DISK [The shared storage location; ASM diskgroup/NetApps NFS mount point/OCFS2 mount point]
Do you want to preserve the Display [null] (y/n) ? : [Respond according to your requirements]
Target System Display [null] : [Respond according to your requirements]
New context path and file name [/s1/atgrac/racdb/appsutil/motoGP1_kawasaki.xml] : [Double-check proposed location, and amend if needed]
Note: At the conclusion of these interview questions related to context file creation, look carefully at the generated context file and ensure that the values it contains are consistent with the values entered during context file creation on Node 1. The values should be almost identical, a small but important exception being that the local instance name will end in 2 instead of 1.

5.2.7 Configure NEW ORACLE_HOME

Run the commands below to move to the correct directory and continue the cloning process:
$ cd [NEW ORACLE_HOME]/appsutil/clone/bin
$ perl adcfgclone.pl dbTechStack [Full path to the database context file created in previous step]/<contextfile>.xml
Note: If the database version is 12c Release 1, be sure to add the following line to your sqlnet_ifile.ora after adcfgclone.pl execution completes:
  • SQLNET.ALLOWED_LOGON_VERSION_SERVER = 8 (if the initialization parameter SEC_CASE_SENSITIVE_LOGON is set to FALSE)
  • SQLNET.ALLOWED_LOGON_VERSION_SERVER = 10 (if SEC_CASE_SENSITIVE_LOGON is set to TRUE)

5.2.8 Source the new environment file in the ORACLE_HOME

Run the commands below to move to the correct directory and source the environment:
$ cd [NEW ORACLE_HOME]
$ . ./[CONTEXT_NAME].env

5.2.9 Modify [SID]_APPS_BASE.ora

Edit the [SID]_APPS_BASE.ora file in <NEW ORACLE_HOME>/dbs and change the control file parameter to reflect the correct control file location on the shared storage. This will be the same value as in the [SID]_APPS_BASE.ora on the target system primary node which was just created.
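For example, the edited parameter might look like this (the disk group and paths are illustrative):
control_files = ('+APPS_RAC_DISK/motogp/controlfile/control01.ctl', '+APPS_RAC_DISK/motogp/controlfile/control02.ctl', '+APPS_RAC_DISK/motogp/controlfile/control03.ctl')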

5.2.10 Start Oracle RAC Database

Start the database using the following commands:
$ sqlplus /nolog
SQL>connect / as sysdba
SQL>startup

5.2.11 Execute AutoConfig

Run AutoConfig to generate the correct listener.ora and tnsnames.ora files:
$ cd $ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME
$ ./adautocfg.sh appspass=[APPS Password]
If AutoConfig fails, and you see any "TNS" errors in the AutoConfig log files, you should ensure the listeners are registered properly, and then re-run AutoConfig on the second node.

5.3 Carry Out Target System (Primary Node) Final Oracle RAC Configuration Tasks

5.3.1 Recreate TNSNAMES and LISTENER.ORA

Log in to the target primary node (Node 1) and run AutoConfig to perform the final Oracle RAC configuration and create new listener.ora and tnsnames.ora files. This is needed because the FND_NODES table did not contain the second node's hostname until AutoConfig was run on the secondary target RAC node.
$ cd $ORACLE_HOME/appsutil/scripts/[CONTEXT_NAME]
$ ./adautocfg.sh appspass=[APPS Password]
Note: This execution of AutoConfig on the primary target Oracle RAC Node 1 will add the second Oracle RAC node's connection information to the first node's tnsnames.ora, such that listener load balancing can occur. If you have more than two nodes in your new target system cluster, you must repeat Sections 5.2 and 5.3 for all subsequent nodes.

Section 6: RAC to Single Instance Cloning

It is now possible to clone from a RAC-enabled E-Business Suite (source) environment to a Single Instance E-Business Suite (target) environment following nearly the same process detailed above in Section 5.
To clone from a RAC source environment to a Single Instance target, the image creation process described in Section 4 remains unchanged. On the target host system, however, while working through Section 5, the context file creation (step 5.1.3 above) should be performed as for Single Instance cloning. All other primary target restore tasks from Section 5 remain the same for a Single Instance restore. Disregard any references to secondary node configuration (starting at step 5.2), as they do not apply here.
For example:
Target Instance is RAC (y/n) [y] : [Enter n]
Because you are cloning the context file from an Oracle RAC-enabled source system, the interview question above defaults to the instance being Oracle RAC. Ensure you answer "n" to the above question. By creating a context file without Oracle RAC attributes, Rapid Clone will configure and convert the RDBMS technology stack and its binaries on the target system such that a Single Instance restore can be performed.

The Rapid Clone command to restore the database on the target system (Step 5.1.4) remains the same whether the target is Oracle RAC or Single Instance.
The final step is to edit the init[sid].ora file and remove the duplicate entries for aq_tm_processes and job_queue_processes (which are set to 0). Ensure you restart the database after you make these changes.
Note: In the Oracle RAC to Single Instance cloning scenario, no changes are made to the database with regard to undo tablespaces, redo log groups, or members. These structures will therefore remain as they were in the source system Oracle RAC database. Optionally, you may decide to reduce this complexity carried over from the source Oracle RAC environment to the single instance.

Section 7: Application Tier Cloning for Oracle RAC

The target system application tier may be located in any one of the following locations:
  • Primary target database node
  • Secondary target database node
  • An independent machine, hosting neither of the target system RAC nodes
  • Shared between two or more machines
Because of the complexities which can arise, the application tier should initially be configured to connect to a single database instance. After proper configuration with one of the two target system Oracle RAC nodes has been achieved, suitable context variable changes can be made to enable JDBC and TNS Listener load balancing.

7.1 Clone the Application Tier

To clone the application tier, follow the standard steps for the application node listed in My Oracle Support Knowledge Document 1383621.1, Cloning Oracle E-Business Suite Release 12.2 with Rapid Clone. The procedure includes the adpreclone steps, copying the bits to the target, the configuration portion, and the finishing tasks.
Note: On the application tier, during adcfgclone.pl execution, you will be asked for a database to which the application tier services should connect. Enter the database name (DB_NAME). On successful completion of this step, the application tier services will be started, and you should be able to log in and use the new target system Oracle E-Business Suite system.

7.2 Configure Application Tier JDBC and Listener Load Balancing

You now need to configure the context variables on the application tier nodes to enable database listener and instance load balancing.
Implement load balancing for the database connections as follows:
  1. Run the context editor (through Oracle Applications Manager) and set the values of "Tools OH TWO_TASK" (s_tools_twotask), "iAS OH TWO_TASK" (s_weboh_twotask), and "Apps JDBC Connect Alias" (s_apps_jdbc_connect_alias).
  2. To load-balance the forms-based Oracle E-Business Suite database connections, set the value of "Tools OH TWO_TASK" to point to the [database_name]_BALANCE alias generated in the tnsnames.ora file.
  3. To load-balance the self-service Oracle E-Business Suite database connections, set the values of "iAS OH TWO_TASK" and "Apps JDBC Connect Alias" to point to the [database_name]_BALANCE alias generated in the tnsnames.ora file.
  4. Execute AutoConfig by running the commands:
    $ cd $ADMIN_SCRIPTS_HOME
    $ ./adautocfg.sh
  5. After successful completion of AutoConfig, restart the application tier processes via the scripts located in $ADMIN_SCRIPTS_HOME.
  6. Ensure that the value of the profile option "Application Database ID" is set to the dbc file name generated in $FND_SECURE.
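For reference, the [database_name]_BALANCE alias generated in tnsnames.ora has the general shape shown below (the hostnames, port, and database name are illustrative, following the sample names used earlier in this document):
motoGP_BALANCE=
    (DESCRIPTION=
        (ADDRESS_LIST=
            (LOAD_BALANCE=YES)
            (FAILOVER=YES)
            (ADDRESS=(PROTOCOL=tcp)(HOST=kawasaki.example.com)(PORT=1548))
            (ADDRESS=(PROTOCOL=tcp)(HOST=suzuki.example.com)(PORT=1548))
        )
        (CONNECT_DATA=
            (SERVICE_NAME=motoGP)
        )
    )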

Section 8: Advanced Cloning Scenarios

8.1 Cloning the Database Separately

In certain cases, customers may require the RAC database to be recreated separately, without using the full mechanism employed during a regular E-Business Suite RAC Rapid Clone scenario.
This section documents the steps needed to allow for manual creation of the target RAC database control files (or the reuse of existing control files) within the Rapid Clone process.
Unless otherwise noted, all commands are specific to the primary target database instance.
Follow only Step 1 and Step 2 from the "Standard Cloning Tasks" section of My Oracle Support Knowledge Document 1383621.1, Cloning Oracle E-Business Suite Release 12.2 with Rapid Clone, then continue with the steps below to complete cloning the database separately.
  1. Log on to the primary target system host as the oracle UNIX user.
  2. Configure the [RDBMS ORACLE_HOME] as noted above in Section 5: RAC-to-RAC Cloning; execute only steps 5.1.1, 5.1.2, and 5.1.3.
  3. Create the target database control files manually (if needed), or modify the existing control files as needed to define data file, redo, and archive log locations, along with any other relevant and required settings. In this step, you copy and recreate the database using your preferred method, such as RMAN restore, Flash Copy, Snap View, or Mirror View.
  4. Start the new target Oracle RAC database in open mode.
  5. Run the library update script against the Oracle RAC database.

    $ cd [RDBMS ORACLE_HOME]/appsutil/install/[CONTEXT_NAME]
    $ sqlplus "/ as sysdba" @adupdlib.sql [libext]
    Where [libext] should be set to sl for HP-UX, so for any other UNIX platform, or dll for Windows.
  6. Configure the primary target database
    The database must be running and open before performing this step.
    $ cd [RDBMS ORACLE_HOME]/appsutil/clone/bin
    $ perl adcfgclone.pl dbconfig [Database target context file]
    Where database target context file is:
    [RDBMS ORACLE_HOME]/appsutil/[Target CONTEXT_NAME].xml
    Note: The dbconfig option will configure the database with the required settings for the new target, but it will not recreate the control files.
     
  7. When the above tasks (steps 1 through 6) are completed on the primary target database instance, see "5.2 Target System Secondary Node Configuration (Clone Additional Nodes)" to configure any secondary database instances.

8.2 Additional Advanced RAC Cloning Scenarios

Rapid Clone is only certified for RAC-to-RAC and RAC-to-Single Instance Cloning at present. The addition or removal of Oracle RAC nodes during the cloning process is not currently supported.

Appendix A: Configuring Oracle Clusterware on the Target System Database Nodes

Associating Target System Oracle RAC Database instances and listeners with Clusterware (CRS)


As the oracle user, run the following commands to add the target system database, instances, and listeners to CRS:
$ srvctl add database -d [database_name] -o [oracle_home]
$ srvctl add instance -d [database_name] -i [instance_name] -n [host_name]
$ srvctl add service -d [database_name] -s [service_name]
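For illustration, with the sample names used earlier in this document (database motoGP, instances motoGP1 and motoGP2 on hosts kawasaki and suzuki; the ORACLE_HOME path is an example only):
$ srvctl add database -d motoGP -o /s1/atgrac/racdb
$ srvctl add instance -d motoGP -i motoGP1 -n kawasaki
$ srvctl add instance -d motoGP -i motoGP2 -n suzuki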
Note: For detailed instructions regarding the installation and usage of Oracle Clusterware software as it relates to Oracle Real Application Clusters, refer to the Oracle Clusterware and Oracle Real Application Clusters documentation for your release.