Oracle VM -- How to Quickly Deploy a 4-Node RAC with the DeployCluster Tool

Before reading this article, you should have the following background:

A. An understanding of Oracle VM virtualization

B. Prior experience deploying Oracle VM Manager and Oracle VM Server

C. Experience configuring networks, repositories, server pools, and so on through Oracle VM Manager


Oracle VM 3 users can benefit from the DeployCluster tool, which now fully supports Single Instance, Oracle Restart (SIHA), and RAC deployments. The tool leverages the Oracle VM 3 API: given a set of VMs, it quickly boots them up and sends them the needed configuration details, and an automated Single Instance or cluster build is initiated, without requiring the user to log in to Dom0, any of the involved VMs, or Oracle VM Manager.
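
As a preview, a typical invocation is run from the Oracle VM Manager host, like the command used later in this article; the manager username, VM name pattern, and .ini file paths shown here are specific to that example environment:

  ./deploycluster.py -u admin -M ohs.? -N utils/netconfig12cRAC4node.ini -P utils/12cdb.ini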


Download DeployCluster Tool (for Oracle VM 3.3 or higher)
Download DeployCluster Tool (for Oracle VM 3.2 or lower)
Documentation for Oracle VM 3 users: DeployCluster Tool
Note: For DeployCluster to function properly, an OVMAPI-enabled OS disk (released since 2012) must be used in an Oracle VM 3 Manager and Server framework.
Use DeployCluster version 3.x for deployments on Oracle VM 3.3 or higher; use DeployCluster version 2.x for deployments on Oracle VM 3.2 or lower.
Single Instance and Oracle 12c support come with DeployCluster version 2.0 or above.

See the DeployCluster documentation for additional details.


Note:

Oracle VM 3.3 and later: deploy with DeployCluster tool version 3.x

Oracle VM 3.2 and earlier: deploy with DeployCluster tool version 2.x
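
A minimal sketch of staging the tool on the Oracle VM Manager host; the archive name below is an assumption, so substitute the actual file name downloaded from the links above:

  $unzip deploycluster3.zip -d /root/deploycluster3    # hypothetical archive name
  $cd /root/deploycluster3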


Importing the Oracle VM Template
In Oracle VM 3.x, if no FTP or HTTP server is available, you can start a minimal HTTP server with Python and then import the VM template from it:
1. Change to the directory that contains the template files
2. Start the HTTP server
  $python -m SimpleHTTPServer 8080
3. The template files can then be accessed over HTTP, for example from a browser or as the import URL in Oracle VM Manager
  http://<Ip Address>:8080
  http://10.0.2.15:8080/OVM_OL6U4_X86_64_12101DBRAC_PVM-1of2.tbz
  http://10.0.2.15:8080/OVM_OL6U4_X86_64_12101DBRAC_PVM-2of2.tbz
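
On newer systems where only Python 3 is installed, the SimpleHTTPServer module no longer exists; the equivalent built-in server is:

  $python3 -m http.server 8080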

After the VM template has been imported, the DeployCluster tool can build a cluster for you very quickly. The deployment is driven mainly by two files: a network configuration file (netconfig12cRAC4node.ini) and a database parameters file (params12c.ini).
Note: you can name these two files whatever you like; just pick names that are easy to remember.


Architecture diagram: a 4-node RAC deployed with the DeployCluster tool


Network configuration file (netconfig12cRAC4node.ini)
[root@ovm utils]# cat netconfig12cRAC4node.ini 
# Node specific information
NODE1=ohs0
NODE1IP=192.168.56.10
NODE1PRIV=ohs0-priv
NODE1PRIVIP=10.10.10.230
NODE1VIP=ohs0-vip
NODE1VIPIP=192.168.56.230
NODE1ROLE=HUB

NODE2=ohs1
NODE2IP=192.168.56.11
NODE2PRIV=ohs1-priv
NODE2PRIVIP=10.10.10.231
NODE2VIP=ohs1-vip
NODE2VIPIP=192.168.56.231
NODE2ROLE=HUB

NODE3=ohs2
NODE3IP=192.168.56.12
NODE3PRIV=ohs2-priv
NODE3PRIVIP=10.10.10.232
NODE3VIP=ohs2-vip
NODE3VIPIP=192.168.56.232
NODE3ROLE=HUB

NODE4=ohs3
NODE4IP=192.168.56.13
NODE4PRIV=ohs3-priv
NODE4PRIVIP=10.10.10.233
#NODE4VIP=ohs3-vip
#NODE4VIPIP=192.168.56.233
NODE4ROLE=LEAF

# Common data
PUBADAP=eth0
PUBMASK=255.255.255.0
PUBGW=192.168.56.1
PRIVADAP=eth1
PRIVMASK=255.255.255.0
RACCLUSTERNAME=pgold
DOMAINNAME=localdomain  # May be blank
DNSIP=  # Starting from 2013 Templates allows multi value
# Device used to transfer network information to second node
# in interview mode
NETCONFIG_DEV=/dev/xvdc
# 11gR2 specific data
SCANNAME=pgold-scan
SCANIP=192.168.56.235
GNS_ADDRESS=192.168.56.236

# 12c Flex parameters (uncomment to take effect)
FLEX_CLUSTER=yes  # If 'yes' implies Flex ASM as well
FLEX_ASM=yes
ASMADAP=eth2  # Must be different than private/public
ASMMASK=255.255.255.0
NODE1ASMIP=10.11.0.230
NODE2ASMIP=10.11.0.231
NODE3ASMIP=10.11.0.232
NODE4ASMIP=10.11.0.233

# Single Instance (description in params.ini) 
# CLONE_SINGLEINSTANCE=yes  # Setup Single Instance
# CLONE_SINGLEINSTANCE_HA=yes  # Setup Single Instance/HA (Oracle Restart)
[root@ovm utils]# 

Database parameters file (params12c.ini)
#
#/* Copyright 2013,  Oracle. All rights reserved. */
#
#
# WRITTEN BY: Oracle.
#  v1.0: Jul-2013 Creation
#
#
# Oracle DB/RAC 12c OneCommand for Oracle VM - Generic configuration file
# For Single Instance, Single Instance HA (Oracle Restart) and Oracle RAC
#
#############################################
#
# Generic Parameters
#
# NOTE: The first section holds more advanced parameters that
#       should be modified by advanced users or if instructed by Oracle.
#
# See further down this file for the basic user modifiable parameters.
#
##############################################
#
# Temp directory (for OUI), optional
# Default: /tmp
TMPDIR="/tmp"
#
# Progress logfile location
# Default: $TMPDIR/progress-racovm.out
LOGFILE="$TMPDIR/progress-racovm.out"
#
# Must begin with a "+", see "man 1 date" for valid date formats, optional.
# Default: "+%Y-%m-%d %T"
LOGFILE_DATE_FORMAT=""
#
# Should 'clone.pl' be used (default no) or direct 'attach home' (default yes)
# to activate the Grid and RAC homes.
# Attach is possible in the VM since all relinking was done already
# Certain changes may still trigger a clone/relink operation such as switching
# from role to non-role separation.
# Default: yes
CLONE_ATTACH_DBHOME=yes
CLONE_ATTACH_GIHOME=yes
#
# Should a re-link be done on the Grid and RAC homes. Default is no,
# since the software was relinked in VM already. Setting it to yes
# forces a relink on both homes, and overrides the clone/attach option
# above by forcing clone operation (clone.pl)
# Default: no
CLONE_RELINK=no
#
# Should a re-link be done on the Grid and RAC homes in case of a major
# OS change; Default is yes.  In case the homes are attached to a different
# major OS than they were linked against, a relink will be automatically
# performed.  For example, if the homes were linked on OL5 and then used
# with an OL6 OS, or vice versa, a relink will be performed. To disable
# this automated relinking during install (cloning step), set this
# value to no (not recommended)
# Default: yes
CLONE_RELINK_ON_MAJOR_OS_CHANGE=yes
#
# The root of the oracle install must be an absolute path starting with a /
# Default: /u01/app
RACROOT="/u01/app"
#
# The location of the Oracle Inventory
# Default: $RACROOT/oraInventory
RACINVENTORYLOC="${RACROOT}/oraInventory"
#
# The location of the SOFTWARE base
# In role separated configuration GIBASE may be defined to set the location
# of the Grid home which defaults to $RACROOT/$GRIDOWNER.
# Default: $RACROOT/$RACOWNER
RACBASE="${RACROOT}/oracle"
#
# The location of the Grid home, must be set in RAC or Single Instance HA deployments
# Default: $RACROOT/12.1.0/grid
GIHOME="${RACROOT}/12.1.0/grid"
#
# The location of the DB RAC home, must be set in non-Clusterware only deployments
# Default: ${RACBASE}/product/12.1.0/dbhome_1
DBHOME="${RACBASE}/product/12.1.0/dbhome_1"
#
# The disk string used to discover ASM disks, it should cover all disks
# on all nodes, even if their physical names differ. It can also hold
# ASMLib syntax, e.g. ORCL:VOL*, and have as many elements as needed
# separated by space, tab or comma.
# Do not remove the "set -/+o noglob" options below, they are required
# so that discovery string don't expand on assignment.
set -o noglob
RACASMDISKSTRING="/dev/xvdc1"
set +o noglob
#
# Provide list of devices or actual partitions to use. If actual
# partition number is specified no partitioning will be done, otherwise specify
# top level device name and the disk will automatically be partitioned with
# one partition using 'parted'. For example, if /dev/xvdh4 is listed
# below it will be used as is, if it does not exist an error will be raised.
# However, if /dev/xvdh is listed it will be automatically partitioned
# and /dev/xvdh1 will be used.
# Minimum of 5 devices or partitions are recommended (see ASM_MIN_DISKS).
ALLDISKS="/dev/xvdc"
#
# Provide list of ASMLib disks to use.  Can be either "diskname" or
# "ORCL:diskname".  They must be manually configured in ASMLib by
# mapping them to correct block device (this part is not yet automated).
# If you include any disks here they should also be included
# in RACASMDISKSTRING setting above (discovery string).
ALLDISKS_ASMLIB=""
#
# By default 5 disks for ASM are recommended to provide higher redundancy
# for OCR/Voting files. If for some reason you want to use less
# disks, then uncomment ASM_MIN_DISKS below and set to the new minimum.
# Make needed adjustments in ALLDISKS and/or ALLDISKS_ASMLIB above.
# Default: 5
ASM_MIN_DISKS=1
#
# By default, whole disks specified in ALLDISKS will be partitioned with
# one partition. If you prefer not to partition and use whole disk, set
# PARTITION_WHOLE_DISKS to no. Keep in mind that if at a later time
# someone will repartition the disk, data may be lost. Probably better
# to leave it as "yes" and signal it's used by having a partition created.
# Default: yes
PARTITION_WHOLE_DISKS=yes
#
# By default, disk *names* are assumed to exist with same name on all nodes, i.e
# all nodes will have /dev/xvdc, /dev/xvdd, etc.  It doesn't mean that the *ordering*
# is also identical, i.e. xvdc can really be xvdd on the other node.
# If such persistent naming (not ordering) is not the case, i.e node1 has
# xvdc,xvdd but node2 calls them: xvdn,xvdm then PERSISTENT_DISKNAMES should be
# set to NO.  In the case where disks are named differently on each node, a
# stamping operation should take place (writing to second sector on disk)
# to verify if all nodes see all disks.
# Stamping only happens on the node the build is running from, and backup
# is taken to $TMPDIR/StampDisk-backup-diskname.dd. Remote nodes read the stamped
# data and if all disks are discovered on all nodes the disk configuration continues.
# Default: yes
PERSISTENT_DISKNAMES=yes
#
# This parameter decides whether disk stamping takes place or not to discover and verify
# that all nodes see all disks.  Stamping is the only way to know 100% that the disks
# are actually the same ones on all nodes before installation begins.
# The master node writes a unique uuid to each disk on the second sector of the disk,
# then remote nodes read and discover all disks.
# If you prefer not to stamp the disks, set DISCOVER_VERIFY_REMOTE_DISKS_BY_STAMPING to
# no. However, in that case, PERSISTENT_DISKNAMES must be set to "yes", otherwise, with
# both parameters set to "no" there is no way to calculate the remote disk names.
# The default for stamping is "yes" since in Virtual machine environments, scsi_id(8)
# doesn't return data for disks.
# Default: yes
DISCOVER_VERIFY_REMOTE_DISKS_BY_STAMPING=yes
#
# Permissions and ownership files, EL4 uses PERMISSIONFILE, EL5 uses UDEVFILE
UDEVFILE="/etc/udev/rules.d/99-oracle.rules"
PERMISSIONFILE="/etc/udev/permissions.d/10-oracle.permissions"
#
# Disk permissions to be set on ASM disks use if want to override the below default
# Default: "660" (owner+group: read+write)
#  It may be possible in Non-role separation to use "640" (owner: read+write, group: read)
#  however, that is not recommended since if a new database OS user
#  is added at a later time in the future, it will not be able to write to the disks.
#DISKPERMISSIONS="660"
#
# ASM's minimum allocation unit (au_size) for objects/files/segments/extents of the first
# diskgroup, in some cases increasing to higher values may help performance (at the
# potential of a bit of space wasting). Legal values are 1,2,4,8,16,32 and 64 MB.
# Not recommended to go over 8MB. Currently if initial diskgroup holds OCR/Voting then it's
# maximum possible au_size is 16MB. Do not change unless you understand the topic.
# Most releases default to 1MB (Exadata's default: 4MB)
#RACASM_AU_SIZE=1
#
# Should we align the ASM disks to a 1MB boundary.
# Default: yes
ALIGN_PARTITIONS=yes
#
# Should partitioned disks use the GPT partition table
# which supported devices larger than 2TB.
# Default: msdos
PARTITION_TABLE_GPT=no
#
# These are internal functions that check if a disk/partition is held
# by any component. They are run in parallel on all nodes, but in sequence
# within a node. Do not modify these unless explicitly instructed to by Oracle.
HELDBY_FUNCTIONS=(HeldByRaid HeldByAsmlib HeldByPowerpath HeldByDeviceMapper HeldByUser HeldByFilesystem HeldBySwap)
#
##### STORAGE: Filesystem: DB/RAC: (shared) filesystem
#
# NOTE1: To not configure ASM unset RACASMGROUPNAME
# NOTE2: Not all operations/verification take place in a
#        FS configuration.
#  For example:
#   - The mount points are not automatically created/mounted
#   - Best effort verification is done that the correct
#     mount options are used.
#
# The filesystem directory to hold Database files (control, logfile, etc.)
# For RAC it must be a shared location (NFS, OCFS or in 12c ACFS),
# otherwise it may be a local filesystem (e.g. ext4).
# For NFS make sure mount options are correct as per docs
# such as Note:359515.1
# Default: None (Single Instance: $RACBASE/oradata)
#FS_DATAFILE_LOCATION=/nfs/160
#
# Should the database be created in the FS location mentioned above.
# If value is unset or set to no, the database is created in ASM.
# Default: no (Single Instance: yes)
#DATABASE_ON_FS=no
#
# Should the above directory be cleared from Clusterware and Database
# files during a 'clean' or 'cleanlocal' operation.
# Default: no
#CLONE_CLEAN_FS_LOCATIONS=no
#
# Names of OCR/VOTE disks, could be in above FS Datafile location
# or a different properly mounted (shared) filesystem location
# Default: None
#CLONE_OCR_DISKS=/nfs/160/ocr1,/nfs/160/ocr2,/nfs/160/ocr3
#CLONE_VOTING_DISKS=/nfs/160/vote1,/nfs/160/vote2,/nfs/160/vote3
#
# Location of OCR/VOTE disks. Value of "yes" means inside ASM
# whereas any other value means the OCR/Voting reside in CFS
# (above locations must be supplied)
# Default: yes
#CLONE_OCRVOTE_IN_ASM=yes
#
# Should addnodes operation COPY the entire Oracle Homes to newly added
# nodes. By default no copy is done to speed up the process, however
# if existing cluster members have changed (patches applied) compared
# to the newly created nodes (using the template), then a copy
# of the Oracle Homes might be desired so that the newly added node will
# get all the latest modifications from the current members.
# Default: no
CLONE_ADDNODES_COPY=no
#
# Should an add node operation fully clean the new node before adding
# it to the cluster. Setting to yes means that any lingering running
# Oracle processes on the new node are killed before the add node is
# started as well as all logs/traces are cleared from that node.
# Default: no
CLONE_CLEAN_ON_ADDNODES=no
#
# Should a remove node operation fully clean the removed node after removing
# it from the cluster. Setting to yes means that any lingering running
# Oracle processes on the removed node are killed after the remove node is
# completed as well as all logs/traces are cleared from that node.
# Default: no
CLONE_CLEAN_ON_REMNODES=no
#
# Should 'cleanlocal' request prompt for confirmation if processes are running
# Note that a global 'clean' will fail if this is set to 'yes' and processes are running
# this is a designed safeguard to protect environment from accidental removal.
# Default: yes
CLONE_CLEAN_CONFIRM_WHEN_RUNNING=yes
#
# Should the recommended oracle-validated or oracle-rdbms-server-*-preinstall
# be checked for existence and dependencies during check step. If any missing
# rpms are found user will need to use up2date or other methods to resolve dependencies
# The RPM may be obtained from Unbreakable Linux Network or http://oss.oracle.com
# Default: yes
CLONE_ORACLE_PREREQ_RPM_REQD=yes
#
# Should the "verify" actions of the above RPM be run during buildcluster.
# These adjust kernel parameters. In the VM everything is pre-configured hence
# default is not to run.
# Default: no
CLONE_ORACLE_PREREQ_RPM_RUN=no
#
# By default after clusterware installation CVU (Cluster Verification Utility)
# is executed to make sure all is well. Setting to 'yes' will skip this step.
# Set CLONE_SKIP_CVU_POSTHAS for SIHA (Oracle Restart) environments
# Default: no
#CLONE_SKIP_CVU_POSTCRS=no
#
# Allows to skip minimum disk space checks on the
# Oracle Homes (recommended not to skip)
# Default: no
CLONE_SKIP_DISKSPACE_CHECKS=no
#
# Allows to skip minimum memory checks (recommended not to skip)
# Default: no
CLONE_SKIP_MEMORYCHECKS=yes
#
# On systems with extreme memory limitations, e.g. VirtualBox, it may be needed
# to disable some Clusterware components to release some memory. Workload
# Management, Cluster Health Monitor and Cluster Verification Utility are
# disabled if this option is set to yes.
# This is only supported for production usage with Clusterware only installation.
# Default: no
CLONE_LOW_MEMORY_CONFIG=yes
#
# By default on systems with less than 4GB of RAM the /dev/shm will
# automatically resize to fit the specified configuration (ASM, DB).
# This is done because the default of 50% of RAM may not be enough. To
# disable this functionality set CLONE_TMPFS_SHM_RESIZE_NEVER=yes.
# Default: no
CLONE_TMPFS_SHM_RESIZE_NEVER=no
#
# To disable the modification of /etc/fstab with the calculated size of
# /dev/shm, set CLONE_TMPFS_SHM_RESIZE_MODIFY_FSTAB=no. This may mean that
# some instances may not properly start following a system reboot.
# Default: yes
CLONE_TMPFS_SHM_RESIZE_MODIFY_FSTAB=yes
#
# Configures the Cluster Management DB (aka Cluster Health Monitor or CHM/OS)
# Default: no
CLONE_GRID_MANAGEMENT_DB=no
#
# Setting CLONE_CLUSTERWARE_ONLY to yes allows Clusterware only installation
# any operation to create a database or reference the DB home are ignored.
# Default: no
#CLONE_CLUSTERWARE_ONLY=no
#
# As described in the 11.2.0.2 README as well as Note:1212703.1 multicasting
# is required to run Oracle RAC starting with 11.2.0.2. If this check fails
# review the note, and remove any firewall rules from Dom0, or re-configure
# the switch servicing the private network to allow multicasting from all
# nodes to all nodes.
# Default: yes
CLONE_MULTICAST_CHECK=yes
#
# Should a multicast check failure cause the build to stop. It's possible to
# perform the multicast check, but not stop on failures.
# Default: yes
CLONE_MULTICAST_STOP_ON_FAILURE=yes
#
# List of multicast addresses to check. By default 11.2.0.2 supports
# only 230.0.1.0, however with fix for bug 9974223 or bundle 1 and higher
# the software also supports multicast address 224.0.0.251. If future
# software releases will support more addresses, modify this list as needed.
# Default: "230.0.1.0 224.0.0.251"
CLONE_MULTICAST_ADDRESSLIST="230.0.1.0 224.0.0.251"
#
# The text specified in the NETCONFIG_RESOLVCONF_OPTIONS variable is written to
# the "options" field in the /etc/resolv.conf file during initial network setup.
# This variable can be set here in params.ini, or in netconfig.ini having the same
# effect. It should be a space separated options as described in "man 5 resolv.conf"
# under the "options" heading. Some useful options are:
# "single-request-reopen attempts:x timeout:x"  x being a digit value.
# The 'single-request-reopen' option may be helpful in some environments if
# in-bound ssh slowness occur.
# Note that minimal validation takes place to verify the options are correct.
# Default: ""
#NETCONFIG_RESOLVCONF_OPTIONS=""
#
##################################################
#
# The second section below holds basic parameters
#
##################################################
#
# Configures a Single Instance environment, including a database as
# specified in BUILD_SI_DATABASE. In this mode, no Clusterware or ASM will be
# configured, hence all related parameters (e.g. ALLDISKS) are not relevant.
# The database must reside on a filesystem.
# This parameter may be placed in netconfig.ini for simpler deployment.
# Default: no
#CLONE_SINGLEINSTANCE=no
#
# Configures a Single Instance/HA environment, aka Oracle Restart, including
# a database as specified in BUILD_SI_DATABASE. The database may reside in
# ASM (if RACASMGROUPNAME is defined), or on a filesystem.
# This parameter may be placed in netconfig.ini for simpler deployment.
# Default: no
#CLONE_SINGLEINSTANCE_HA=no
#
# OS USERS AND GROUPS FOR ORACLE SOFTWARE
#
# SYNTAX for user/group are either (VAR denotes the variable names below):
#   VAR=username:uid   OR:  VAR=username
#                           VARID=uid
#   VAR=groupname:gid  OR:  VAR=groupname
#                           VARID=gid
#
#   If uid/gid are omitted no checks are made nor users created if need be.
#   If uid/gid are supplied they should be numeric and not clash
#   with existing uid/gids defined on the system already.
#   NOTE: In RAC usernames and uid/gid must match on all cluster nodes,
#         the verification process enforces that only if uid/gid's
#         are given below.
#
# If incorrect configuration is detected, changes to users and groups are made to
# correct them. If this is set to "no" then errors are reported
# without an attempt to fix them.
# (Users/groups are never dropped, only added or modified.)
# Default: yes
CREATE_MODIFY_USERS_GROUPS=yes
#
# NON-ROLE SEPARATED:
#    No Grid user is defined and all roles are set to 'dba'
RACOWNER=oracle:1101
OINSTALLGROUP=oinstall:1000
GIOSASM=dba:1031
GIOSDBA=dba:1031
#GIOSOPER=   # optional in 12c
DBOSDBA=dba:1031
#DBOSOPER=   # optional in 12c
#
# ROLE SEPARATION: (uncomment lines below)
#    See Note:1092213.1
#    (Numeric changes made to uid/gid to reduce the footprint and possible clashes
#     with existing users/groups)
#
##GRIDOWNER=grid:1100
##RACOWNER=oracle:1101
##OINSTALLGROUP=oinstall:1000
##GIOSASM=asmadmin:1020
##GIOSDBA=asmdba:1021
##GIOSOPER=   # optional in 12c
##DBOSDBA=dba:1031
##DBOSOPER=   # optional in 12c
## New in 12c are these 3 roles, if unset, they default to "DBOSDBA"
##DBOSBACKUPDBA=dba:1031
##DBOSDGDBA=dba:1031
##DBOSKMDBA=dba:1031
#
# The name for the Grid home in the inventory
# Default: OraGrid12c
#GIHOMENAME="OraGrid12c"
#
# The name for the DB/RAC home in the inventory
# Default: OraRAC12c (Single Instance: OraDB12c)
#DBHOMENAME="OraRAC12c"
#
# The name of the ASM diskgroup, default "DATA"
# If unset ASM will not be configured (see filesystem section above)
# Default: DATA
RACASMGROUPNAME="DATA"
#
# The ASM Redundancy for the diskgroup above
# Valid values are EXTERNAL, NORMAL or HIGH
# Default: NORMAL (if unset)
RACASMREDUNDANCY="EXTERNAL"
#
# Allows running the Clusterware with a different timezone than the system's timezone.
# If CLONE_CLUSTERWARE_TIMEZONE is not set, the Clusterware Timezone will
# be set to the system's timezone of the node running the build.  System timezone is
# defined in /etc/sysconfig/clock (ZONE variable), if not defined or file missing
# comparison of /etc/localtime file is made against the system's timezone database in
# /usr/share/zoneinfo, if no match or /etc/localtime is missing GMT is used. If you
# want to override the above logic, simply set CLONE_CLUSTERWARE_TIMEZONE to desired
# timezone. Note that a complete timezone is needed, e.g. "PST" or "EDT" is not enough
# needs to be full timezone spec, e.g. "PST8PDT" or "America/New_York".
# This variable is only honored in 11.2.0.2 or above
# Default: OS
#CLONE_CLUSTERWARE_TIMEZONE="America/Los_Angeles"
#
# Create an ACFS volume?
# Default: no
ACFS_CREATE_FILESYSTEM=no
#
# If ACFS volume is to be created, this is the mount point.
# It will automatically get created on all nodes.
# Default: /myacfs
ACFS_MOUNTPOINT="/myacfs"
#
# Name of ACFS volume to optionally create.
# Default: MYACFS
ACFS_VOLNAME="MYACFS"
#
# Size of ACFS volume in GigaBytes.
# Default: 3
ACFS_VOLSIZE_GB="3"
#
# NOTE: In the OVM3 enhanced RAC Templates when using deploycluster
# tool (outside of the VMs). The correct and secure way to transfer/set the
# passwords is to remove them from this file and use the -P (--params)
# flag to transfer this params.ini during deploy operation, in which
# case the passwords will be prompted, and sent to all VMs in a secure way.
# The password that will be set for the ASM and RAC databases
# as well as EM DB Console and the oracle OS user.
# If not defined here they will be prompted for (only once)
# at the start of the build. Required to be set here or environment
# for silent mode.
# Use single quote to prevent shell parsing of special characters.
RACPASSWORD='oracle'
GRIDPASSWORD='oracle'
#
# Password for 'root' user. If not defined here it will be prompted
# for (only once) at the start of the build.
# Assumed to be same on both nodes and required to be set here or
# environment for silent mode.
# Use single quote to prevent shell parsing of special characters.
ROOTUSERPASSWORD='ovsroot'
#
# Build Database? The BUILD_RAC_DATABASE will build a RAC database and
# BUILD_SI_DATABASE a single instance database (also in a RAC environment)
# Default: yes
BUILD_RAC_DATABASE=yes
#BUILD_SI_DATABASE=yes
#
# Allows for database and listener to be started automatically at next
# system boot. This option is only applicable in Single Instance mode.
# In Single Instance/HA or RAC mode, the Clusterware starts up all
# resources (listener, ASM, databases).
# Default: yes
CLONE_SI_DATABASE_AUTOSTART=yes
#
# Comma separated list of name value pairs for database initialization parameters
# Use with care, no validation takes place.
# For example: "sort_area_size=99999,control_file_record_keep_time=99"
# Default: none
#DBCA_INITORA_PARAMETERS=""
#
# Create a 12c Container Database allowing pluggable databases to be added
# using options below, or at a later time.
# Default: no
DBCA_CONTAINER_DB=no
#
# Pluggable Database name. In 'createdb' operation a number is appended at the end
# based on count (below). In 'deletepdb' exact name must be specified here or in
# an environment variable.
# Default: mypdb
DBCA_PLUGGABLE_DB_NAME=mypdb
#
# Number of Pluggable Databases to create during a 'createdb' operation. A value
# of zero (default) disables pluggable database creation.
# Default: 0
DBCA_PLUGGABLE_DB_COUNT=0
#
# Should a Policy Managed database be created taking into account the
# options below. If set to 'no' an Admin Managed database is created.
# Default: no
DBCA_DATABASE_POLICY=no
#
# Create Server Pools (Policy Managed database).
# Default: yes
CLONE_CREATE_SERVERPOOLS=yes
#
# Recreate Server Pools; if already exist (Policy Managed database).
# Default: no
CLONE_RECREATE_SERVERPOOLS=no
#
# List of server pools to create (Policy Managed database).
# Syntax is poolname:category:min:max
# All except name can be omitted. Category can be Hub or Leaf.
# Default: mypool
CLONE_SERVERPOOLS="mypool"
#
# List of Server Pools to be used by the created database (Policy Managed database).
# The server pools listed in DBCA_SERVERPOOLS must appear in CLONE_SERVERPOOLS
# (and CLONE_CREATE_SERVERPOOLS set to yes), OR must be manually pre-created for
# the create database to succeed.
# Default: mypool
DBCA_SERVERPOOLS="mypool"
#
# Database character set.
# Default: WE8MSWIN1252 (previous default was AL32UTF8)
# DATABASE_CHARACTERSET="WE8MSWIN1252"
#
# Use this DBCA template name, file must exist under $DBHOME/assistants/dbca/templates
# Default: "General_Purpose.dbc"
DBCA_TEMPLATE_NAME="General_Purpose.dbc"
#
# Should the database include the sample schema
# Default: no
DBCA_SAMPLE_SCHEMA=no
#
# Registers newly created database to be periodically monitored by Cluster Verification
# Utility (CVU) on a continuous basis.
# Default: no
DBCA_RUN_CVU_PERIODICALLY=no
#
# Certain patches applied to the Oracle home require execution of some SQL post
# database creation for the fix to be applied completely. These files are located
# under patches/postsql subdirectory. It is possible to run them serially (adds
# to overall build time), or in the background which is the default.
# Note that when running in background these scripts may run a little longer after
# the RAC Cluster + Database are finished building, however that should not cause
# any issues. If overall build time is not a concern change this to NO and have
# the scripts run as part of the actual build in serial.
# Default: yes
DBCA_POST_SQL_BG=yes
#
# An optional user custom SQL may be executed post database creation, default name of
# script is user_custom_postsql.sql, it is located under patches/postsql subdirectory.
# Default: user_custom_postsql.sql
DBCA_POST_SQL_CUSTOM=user_custom_postsql.sql
#
# The Database Name
# Default: ORCL
DBNAME="ORCL"
#
# The Instance name, may be different than database name. Limited in length of
# 1 to 8 for a RAC DB and 1 to 12 for Single Instance DB of alphanumeric characters.
# Ignored for Policy Managed DB.
# Default: ORCL
SIDNAME="ORCL"
#
# Configure EM DB Express
# Default: no
CONFIGURE_DBEXPRESS=no
#
# DB Express port number. If left at the default, a free port will be assigned at
# runtime, otherwise the port should be unused on all network adapters.
# Default: 5500
#DBEXPRESS_HTTPS_PORT=5500
#
# SCAN (Single Client Access Name) port number
# Default: 1521
SCANPORT=1521
#
# Local Listener port number
# Default: 1521
LISTENERPORT=1521
#
# Allows color coding of log messages, errors (red), warning (yellow),
# info (green). By default no colors are used.
# Default: NO
CLONE_LOGWITH_COLORS=no
#
# END OF FILE
#

deploycluster2 help output (for Oracle VM 3.2 and earlier)
[root@ovm deploycluster2]# ./deploycluster.py -h
Oracle DB/RAC OneCommand (v2.1.2) for Oracle VM - deploy cluster - (c) 2011-2015 Oracle Corporation
 (com: 29100:v2.0.4, lib: 182513:v2.0.6, var: 1600:v2.0.6) - v2.6.6 - ovm.ohsdba.cn (x86_64)
Invoked as root at Sat Oct  8 04:35:02 2016  (size: 47000, mtime: Thu Sep 17 18:03:35 2015)
Usage: deploycluster.py <Oracle VM Manager login> <DB/RAC Templates Options>

deploycluster.py provides fully automated backend Oracle DB/Single Instance or
RAC cluster deployment. It assumes all VMs are pre-created from the base
Oracle DB/RAC OVM Template and assigned correct disks & network to allow a
successful cluster deployment. It then starts the VMs (if needed), verifies
they are configured correctly (disk, network, etc.) then sends the network &
build parameters to all VMs. This configures the VMs network and optionally
launches a buildsingle/buildcluster on the single VM or N-nodes cluster to
obtain a fully configured single instance or Oracle RAC environment. See
deploycluster.ini options file for more details. *** NOTE: This version of
deploycluster.py only supports Oracle VM version 3.2 and below. See My Oracle
Support Note#1185244.1 and OTN for deployment options on higher versions of
Oracle VM Manager, use the newer version 3 of the deploycluster.py tool. ***

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  Oracle VM Manager Login:
    Credentials to login to Oracle VM Manager (SSL supported)
    -u <username>, --username=<username>
                        Username to connect to Oracle VM Manager
    -p <password>, --password=<password>
                        Password to connect to Oracle VM Manager
    -H <host>, --host=<host>
                        Manager hostname (use either -H or -U or omit for
                        local host)
    -U <url>, --url=<url>
                        Login URL to Manager (default: tcp://localhost:54321
                        or tcps://host:54322 when -H used to remote node)
  Oracle DB/RAC OVM Template Options:
    Identify which VMs to deploy as Single Instance or a cluster - pass
    any special build attributes. Commonly used flags: -M, -N & -P.
    -L, --list_vms_only
                        List VMs seen via Oracle VM Manager; Honors -M flag
    -M <List of VMs>, --vms=<List of VMs>
                        List of existing VM names or IDs to deploy cluster on.
                        Supports "*" & "?" wildcard characters
    -P <params.ini>, --params=<params.ini>
                        Location of params.ini file (sent to VMs)
    -N <netconfig.ini>, --netconfig=<netconfig.ini>
                        Location of netconfig.ini file (sent to VMs)
    -B <yes|no>, --buildcluster=<yes|no>
                        Start a buildcluster/buildsingle post-network setup
                        (default: yes. [If netconfig_args passed then default:
                        no])
    -G <args>, --netconfig_args=<args>
                        Advanced: Arguments to netconfig; In short they are:
                        -n<#>: Node number for this node or starting node
                        number. -b[#]: Initiate a buildcluster/buildsingle
                        (optional node count). -R/W: Read or Write
                        netconfig.ini data from disk. -c <d>: Disk that holds
                        netconfig.ini details. The special keyword "yes"
                        indicates the 2-node interview. The "out" keyword
                        skips any network setup.
    -K <zip file>, --kitfile=<zip file>
                        Advanced: Unzip new (partial) kitfile inside the VMs
    -X <file>, --extrakeys=<file>
                        Advanced: File containing extra keys to send all VMs
    -D, --dryrun        Show what will be done (do not start VMs or send msgs)
[root@ovm deploycluster2]# 

deploycluster3 help output (for Oracle VM 3.3 and later)

[root@ovm media]# cd deploycluster3/
[root@ovm deploycluster3]# ./deploycluster.py -h
Oracle DB/RAC OneCommand (v3.0.4) for Oracle VM - deploy cluster - (c) 2011-2016 Oracle Corporation
 (com: 29100:v3.0.4, lib: 231044:v3.0.3, var: 1800:v3.0.4) - v2.6.6 - ovm.ohsdba.cn (x86_64)
Invoked as root at Sat Oct  8 04:36:27 2016  (size: 43800, mtime: Mon Feb  1 10:12:03 2016)
Usage: deploycluster.py <Oracle VM Manager login> <DB/RAC Templates Options>


deploycluster.py provides fully automated backend Oracle DB/Single Instance or
RAC cluster deployment. It assumes all VMs are pre-created from the base
Oracle DB/RAC OVM Template and assigned correct disks & network to allow a
successful cluster deployment. It then starts the VMs (if needed), verifies
they are configured correctly (disk, network, etc.) then sends the network &
build parameters to all VMs. This configures the VMs network and optionally
launches a buildsingle/buildcluster on the single VM or N-nodes cluster to
obtain a fully configured single instance or Oracle RAC environment. See
deploycluster.ini options file for more details. *** NOTE: This version of
deploycluster.py only supports Oracle VM version 3.3 and above. See My Oracle
Support Note#1185244.1 and OTN for deployment options on lower versions of
Oracle VM Manager, use the older version 2 of the deploycluster.py tool. ***

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  Oracle VM Manager Login:
    Credentials to login to Oracle VM Manager


    -u <username>, --username=<username>
                        Username to connect to Oracle VM Manager
    -p <password>, --password=<password>
                        Password to connect to Oracle VM Manager
    -H <host>, --host=<host>
                        Manager hostname, IP address or omit for local host
    -r <port>, --port=<port>
                        Manager WebServices API port number (default: 7002)
  Oracle VM Manager Login - CERTIFICATE:
    Certificate related login credentials to access Oracle VM Manager
    -I, --insecure      Ignore SSL certificate and host name checks - not for
                        production environments
    --keystore-file=<KEYSTOREFILE>
                        The keystore used to store your login key and
                        certificates (default: ~/.ovmkeystore)
    --keystore-password=<KEYSTOREPASS>
                        The password for the keystore
    --key-password=<KEYPASS>
                        The password for the login key (can be omitted if same
                        as keystore password)
  Oracle DB/RAC OVM Template Options:
    Identify which VMs to deploy as Single Instance or a cluster - pass
    any special build attributes. Commonly used flags: -M, -N & -P.
    -L, --list_vms_only
                        List VMs seen via Oracle VM Manager; Honors -M flag
    -M <List of VMs>, --vms=<List of VMs>
                        List of existing VM names or IDs to deploy cluster on.
                        Supports "*" & "?" wildcard characters
    -P <params.ini>, --params=<params.ini>
                        Location of params.ini file (sent to VMs)
    -N <netconfig.ini>, --netconfig=<netconfig.ini>
                        Location of netconfig.ini file (sent to VMs)
    -B <yes|no>, --buildcluster=<yes|no>
                        Start a buildcluster/buildsingle post-network setup
                        (default: yes. [If netconfig_args passed then default:
                        no])
    -G <args>, --netconfig_args=<args>
                        Advanced: Arguments to netconfig; In short they are:
                        -n<#>: Node number for this node or starting node
                        number. -b[#]: Initiate a buildcluster/buildsingle
                        (optional node count). -R/W: Read or Write
                        netconfig.ini data from disk. -c <d>: Disk that holds
                        netconfig.ini details. The special keyword "yes"
                        indicates the 2-node interview. The "out" keyword
                        skips any network setup.
    -K <zip file>, --kitfile=<zip file>
                        Advanced: Unzip new (partial) kitfile inside the VMs
    -X <file>, --extrakeys=<file>
                        Advanced: File containing extra keys to send all VMs
    -D, --dryrun        Show what will be done (do not start VMs or send msgs)
[root@ovm deploycluster3]# 
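
With either version of the tool, it can be helpful to first confirm which VMs the tool can see. A small sketch using the -L (--list_vms_only) flag documented above; the username and VM name pattern follow the example environment used below:

  ./deploycluster.py -u admin -L -M ohs.?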

Checking for potential errors with dry-run mode

[root@ovm-mgr deploycluster2]# ./deploycluster.py -u admin -M ohs.? -N utils/netconfig12cRAC4node.ini -P utils/12cdb.ini -D
-u   specifies the Oracle VM Manager username
-M   the names of the VMs that will make up the RAC cluster
-N   specifies the network configuration file (netconfig12cRAC4node.ini)
-P   specifies the database parameters file (12cdb.ini)
-D   dry-run mode; omit -D to perform the actual deployment
Oracle DB/RAC OneCommand (v2.1.2) for Oracle VM - deploy cluster - (c) 2011-2015 Oracle Corporation
 (com: 29100:v2.0.4, lib: 182513:v2.0.6, var: 1600:v2.0.6) - v2.4.3 - ovm-mgr.ohsdba.cn (x86_64)
Invoked as root at Thu Oct  6 10:06:08 2016  (size: 47000, mtime: Thu Sep 17 15:03:35 2015)
Using: ./deploycluster.py -u admin -M ohs.? -N utils/netconfig12cRAC4node.ini -P utils/12cdb.ini -D

INFO: Running in dryrun mode, not starting VMs or sending any messages to them...
INFO: Login password to Oracle VM Manager not supplied on command line or environment (DEPLOYCLUSTER_MGR_PASSWORD), prompting...
Password: 
INFO: Attempting to connect to Oracle VM Manager...
INFO: Oracle VM Client  (3.2.9.746) protocol (1.9) CONNECTED (tcp) to
      Oracle VM Manager (3.2.4.524) protocol (1.9) IP (192.168.56.3) UUID (0004fb0000010000285d60b0071f42ae)
INFO: Inspecting /var/www/html/files/deploycluster2/utils/netconfig12cRAC4node.ini for number of nodes defined....
INFO: Detected 4 nodes in: /var/www/html/files/deploycluster2/utils/netconfig12cRAC4node.ini
INFO: Located a total of (4) VMs; 
      4 VMs with a simple name of: ['ohs.0', 'ohs.1', 'ohs.2', 'ohs.3']
INFO: Detected (3) Hub nodes and (1) Leaf node in the Flex Cluster
INFO: Detected a RAC deployment...
INFO: Starting all (4) VMs -- "dryrun" mode
INFO: VM with a simple name of "ohs.0" (Hub node) is in a Stopped state, however, not starting it due to "dryrun" option passed on command line.
INFO: VM with a simple name of "ohs.1" (Hub node) is in a Stopped state, however, not starting it due to "dryrun" option passed on command line.
INFO: VM with a simple name of "ohs.2" (Hub node) is in a Stopped state, however, not starting it due to "dryrun" option passed on command line.
INFO: VM with a simple name of "ohs.3" (Leaf node) is in a Stopped state, however, not starting it due to "dryrun" option passed on command line.

INFO: Verifying that all (4) VMs are in Running state and pass prerequisite checks -- "dryrun" mode...
INFO: Detected Flex ASM enabled with a dedicated network adapter (eth2), all VMs will require a minimum of (3) Vnics...
....
INFO: Skipped checking memory of VMs due to DEPLOYCLUSTER_SKIP_VM_MEMORY_CHECK=yes
INFO: Detected that all (3) Hub node VMs specified on command line have (1) common shared disk between them (ASM_MIN_DISKS=1)
INFO: The (4) VMs passed basic sanity checks (dry-run mode), not sending cluster details as follows:
      netconfig.ini (Network setup): /var/www/html/files/deploycluster2/utils/netconfig12cRAC4node.ini
      params.ini (Overall build options): /var/www/html/files/deploycluster2/utils/12cdb.ini
      buildcluster: yes
INFO: Exiting without sending above parameters due to "dryrun" option passed on command line.
INFO: deploycluster.py completed successfully at 10:06:30 in 22.5 seconds (0h:00m:22s)
Logfile at: /var/www/html/files/deploycluster2/deploycluster1.log
[root@ovm-mgr deploycluster2]# 

Starting the actual 4-node RAC deployment
[root@ovm-mgr deploycluster2]# ./deploycluster.py -u admin -M ohs.? -N utils/netconfig12cRAC4node.ini -P utils/12cdb.ini
Oracle DB/RAC OneCommand (v2.1.2) for Oracle VM - deploy cluster - (c) 2011-2015 Oracle Corporation
 (com: 29100:v2.0.4, lib: 182513:v2.0.6, var: 1600:v2.0.6) - v2.4.3 - ovm-mgr.ohsdba.cn (x86_64)
Invoked as root at Thu Oct  6 10:40:07 2016  (size: 47000, mtime: Thu Sep 17 15:03:35 2015)
Using: ./deploycluster.py -u admin -M ohs.? -N utils/netconfig12cRAC4node.ini -P utils/12cdb.ini

INFO: Login password to Oracle VM Manager not supplied on command line or environment (DEPLOYCLUSTER_MGR_PASSWORD), prompting...
Password: 
INFO: Attempting to connect to Oracle VM Manager...
INFO: Oracle VM Client  (3.2.9.746) protocol (1.9) CONNECTED (tcp) to
      Oracle VM Manager (3.2.4.524) protocol (1.9) IP (192.168.56.3) UUID (0004fb0000010000285d60b0071f42ae)
INFO: Inspecting /var/www/html/files/deploycluster2/utils/netconfig12cRAC4node.ini for number of nodes defined....
INFO: Detected 4 nodes in: /var/www/html/files/deploycluster2/utils/netconfig12cRAC4node.ini
INFO: Located a total of (4) VMs; 
      4 VMs with a simple name of: ['ohs.0', 'ohs.1', 'ohs.2', 'ohs.3']
INFO: Detected (3) Hub nodes and (1) Leaf node in the Flex Cluster
INFO: Detected a RAC deployment...
INFO: Starting all (4) VMs...
INFO: VM with a simple name of "ohs.0" (Hub node) is in a Stopped state, attempting to start it....OK.
INFO: VM with a simple name of "ohs.1" (Hub node) is in a Stopped state, attempting to start it....OK.
INFO: VM with a simple name of "ohs.2" (Hub node) is in a Stopped state, attempting to start it....OK.
INFO: VM with a simple name of "ohs.3" (Leaf node) is in a Stopped state, attempting to start it....OK.
INFO: Verifying that all (4) VMs are in Running state and pass prerequisite checks...
INFO: Detected Flex ASM enabled with a dedicated network adapter (eth2), all VMs will require a minimum of (3) Vnics...
....
INFO: Skipped checking memory of VMs due to DEPLOYCLUSTER_SKIP_VM_MEMORY_CHECK=yes
INFO: Detected that all (3) Hub node VMs specified on command line have (1) common shared disk between them (ASM_MIN_DISKS=1)
INFO: The (4) VMs passed basic sanity checks and in Running state, sending cluster details as follows:
      netconfig.ini (Network setup): /var/www/html/files/deploycluster2/utils/netconfig12cRAC4node.ini
      params.ini (Overall build options): /var/www/html/files/deploycluster2/utils/12cdb.ini
      buildcluster: yes

INFO: Starting to send configuration details to all (4) VM(s).......
INFO: Sending to VM with a simple name of "ohs.0" (Hub node).........
INFO: Sending to VM with a simple name of "ohs.1" (Hub node)........
INFO: Sending to VM with a simple name of "ohs.2" (Hub node)........
INFO: Sending to VM with a simple name of "ohs.3" (Leaf node).........

INFO: Configuration details sent to (4) VMs...
      Check log (default location /u01/racovm/buildcluster.log) on build VM (ohs.0)...

INFO: deploycluster.py completed successfully at 10:40:58 in 50.6 seconds (0h:00m:50s)
Logfile at: /var/www/html/files/deploycluster2/deploycluster2.log
[root@ovm-mgr deploycluster2]# 

Then simply watch the /u01/racovm/buildcluster.log file on node one:
[oracle@ohs0 racovm]$ tail -100 buildcluster1.log 

INFO (node:ohs3): Disabling passwordless ssh access for root user (from remote nodes)
2016-10-06 15:01:31:[rmsshrootlocal:Time :ohs3] Completed successfully in 1 seconds (0h:00m:01s)
INFO (node:ohs1): Disabling passwordless ssh access for root user (from remote nodes)
INFO (node:ohs2): Disabling passwordless ssh access for root user (from remote nodes)
2016-10-06 15:01:31:[rmsshrootlocal:Time :ohs2] Completed successfully in 0 seconds (0h:00m:00s)
2016-10-06 15:01:32:[rmsshrootlocal:Time :ohs1] Completed successfully in 2 seconds (0h:00m:02s)

INFO (node:ohs0): Disabling passwordless ssh access for root user (from remote nodes)
2016-10-06 15:01:34:[rmsshrootlocal:Time :ohs0] Completed successfully in 0 seconds (0h:00m:00s)
2016-10-06 15:01:34:[rmsshroot:Time :ohs0] Completed successfully in 9 seconds (0h:00m:09s)
INFO (node:ohs0): Current cluster state (15:01:34)...
INFO (node:ohs0): Running on: ohs0 as root: /u01/app/12.1.0/grid/bin/olsnodes -n -s -t -a
ohs0    1       Active  Hub     Unpinned
ohs1    2       Active  Hub     Unpinned
ohs2    3       Active  Hub     Unpinned
ohs3    100     Active  Leaf    Unpinned
Oracle Clusterware active version on the cluster is [12.1.0.1.0]
Oracle Clusterware version on node [ohs0] is [12.1.0.1.0]
CRS Administrator List: oracle root
Cluster is running in "flex" mode
ASM Flex mode enabled: ASM instance count: 3
ASM is running on ohs2,ohs1,ohs0
INFO (node:ohs0): Running on: ohs0 as oracle: export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1; /u01/app/oracle/product/12.1.0/dbhome_1/bin/srvctl status database -d ORCL
Instance ORCL1 is running on node ohs0
Instance ORCL2 is running on node ohs1
Instance ORCL3 is running on node ohs2

INFO (node:ohs0): Running on: ohs0 as root: /u01/app/12.1.0/grid/bin/crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       ohs0                     STABLE
               ONLINE  ONLINE       ohs1                     STABLE
               ONLINE  ONLINE       ohs2                     STABLE
ora.DATA.dg
               ONLINE  ONLINE       ohs0                     STABLE
               ONLINE  ONLINE       ohs1                     STABLE
               ONLINE  ONLINE       ohs2                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       ohs0                     STABLE
               ONLINE  ONLINE       ohs1                     STABLE
               ONLINE  ONLINE       ohs2                     STABLE
ora.LISTENER_LEAF.lsnr
               OFFLINE OFFLINE      ohs3                     STABLE
ora.net1.network
               ONLINE  ONLINE       ohs0                     STABLE
               ONLINE  ONLINE       ohs1                     STABLE
               ONLINE  ONLINE       ohs2                     STABLE
ora.ons
               ONLINE  ONLINE       ohs0                     STABLE
               ONLINE  ONLINE       ohs1                     STABLE
               ONLINE  ONLINE       ohs2                     STABLE
ora.proxy_advm
               ONLINE  ONLINE       ohs0                     STABLE
               ONLINE  ONLINE       ohs1                     STABLE
               ONLINE  ONLINE       ohs2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       ohs0                     STABLE
ora.asm
      1        ONLINE  ONLINE       ohs0                     STABLE
      2        ONLINE  ONLINE       ohs1                     STABLE
      3        ONLINE  ONLINE       ohs2                     STABLE
ora.cvu
      1        OFFLINE OFFLINE                               STABLE
ora.gns
      1        ONLINE  ONLINE       ohs0                     STABLE
ora.gns.vip
      1        ONLINE  ONLINE       ohs0                     STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.ohs0.vip
      1        ONLINE  ONLINE       ohs0                     STABLE
ora.ohs1.vip
      1        ONLINE  ONLINE       ohs1                     STABLE
ora.ohs2.vip
      1        ONLINE  ONLINE       ohs2                     STABLE
ora.orcl.db
      1        ONLINE  ONLINE       ohs0                     Open,STABLE
      2        ONLINE  ONLINE       ohs1                     Open,STABLE
      3        ONLINE  ONLINE       ohs2                     Open,STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       ohs0                     STABLE
--------------------------------------------------------------------------------

INFO (node:ohs0): For an explanation on resources in OFFLINE state, see Note:1068835.1
2016-10-06 15:01:42:[clusterstate:Time :ohs0] Completed successfully in 8 seconds (0h:00m:08s)
2016-10-06 15:01:42:[buildcluster:Done :ohs0] Building 12c RAC Cluster
2016-10-06 15:01:42:[buildcluster:Time :ohs0] Completed successfully in 3010 seconds (0h:50m:10s)
[oracle@ohs0 racovm]$ 


As the log shows, the entire deployment took 50 minutes and 10 seconds, with no manual intervention required.
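
Once the build completes, client connectivity can be verified through the SCAN listener. A minimal sketch, assuming the SCANNAME (pgold-scan), SCANPORT (1521), DBNAME (ORCL), and password ('oracle') set in the configuration files above; the exact service name may differ if a database domain is in effect:

  $sqlplus system/oracle@//pgold-scan:1521/ORCL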


Reference

Oracle VM

http://www.oracle.com/technetwork/server-storage/vm/overview/index.html


How to Use Oracle VM Templates

http://www.oracle.com/technetwork/articles/servers-storage-dev/configure-vm-templates-1656261.html


Oracle VM Templates for Oracle Database

http://www.oracle.com/technetwork/server-storage/vm/database-templates-12c-11gr2-1972804.html


Oracle VM Templates

http://www.oracle.com/technetwork/server-storage/vm/overview/templates-101937.html


Virtual Appliances Download for Oracle VM Hands-on Labs

http://www.oracle.com/technetwork/server-storage/vm/downloads/hol-oraclevm-2368799.html


Hands-On Labs for Oracle VM

http://www.oracle.com/technetwork/systems/hands-on-labs/ovm-1908457.html


How to Deploy a Four-Node Oracle RAC 12c Cluster in Minutes Using Oracle VM Templates

http://www.oracle.com/technetwork/systems/hands-on-labs/deploy-rac-ovm-cluster-2101019.html



