Focus On Oracle

Installing, Backup & Recovery, Performance Tuning,
Troubleshooting, Upgrading, Patching

Oracle Engineered System



aodu (At Oracle Database Utility): rac (Part 2)


AODU> rac diagrac  -- gather RAC information for diagnosis
        ****Data Gathering for All Real Application Cluster Issues****
        From all nodes:
        # $GRID_HOME/bin/diagcollection.sh
            Provide the instance alert_${ORACLE_SID}.log, plus lmon, lmd*, lms*, ckpt, lgwr, lck*, dia*, lmhb (11g only), and all other traces
            that were modified around the incident time. A quick way to identify all such traces and tar them up is to grep for the incident time, as in the following example:
            $ grep "2010-09-02 03" *.trc | awk -F: '{print $1}' | sort -u |xargs tar cvf trace.`hostname`.`date +%Y%m%d%H%M%S`.tar
            $ gzip trace*.tar
            For pre-11g, execute the command in bdump and udump to identify the list of files.
            For 11g+, execute the command in ${ORACLE_BASE}/diag/rdbms/$DBNAME/${ORACLE_SID}/trace to identify the list of files.
            Provide incident files/packages reported in alert.log at the time of the incident
            If ASM is involved, provide the same set of files for ASM
            OS logs - refer to Appendix B
        Appendix B. OS logs
        OS logs are in the following directory depending on platform:
        Linux: /var/log/messages
        AIX: /bin/errpt -a (redirect this to a file called messages.out)
        Solaris: /var/adm/messages
        HP-UX: /var/adm/syslog/syslog.log
        Tru64: /var/adm/messages
        Windows: save Application Log and System Log as .TXT files using Event Viewer
        Note: From 11gR2, OS logs are part of diagcollection on Linux, Solaris, HP-UX.
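        The grep-and-tar step above can be wrapped in a small helper. This is a sketch, not part of aodu: the function name and argument handling are illustrative, and `grep -l` is used in place of the `grep | awk -F: '{print $1}'` pipeline (it produces the same file list).

```shell
#!/bin/sh
# Sketch of the incident-time trace collection described above.
collect_traces() {    # usage: collect_traces <trace_dir> "<incident hour>"
    dir=$1; incident=$2
    cd "$dir" || return 1
    # grep -l lists matching files directly (same as grep | awk -F: '{print $1}')
    files=$(grep -l "$incident" *.trc 2>/dev/null | sort -u)
    [ -n "$files" ] || { echo "no traces mention $incident" >&2; return 1; }
    tarball="trace.$(hostname).$(date +%Y%m%d%H%M%S).tar"
    tar cvf "$tarball" $files && gzip "$tarball"   # word-splitting of $files is intentional
    echo "$tarball.gz"
}

# On a real system (11g+ path shown; use bdump/udump for pre-11g):
# collect_traces "$ORACLE_BASE/diag/rdbms/$DBNAME/$ORACLE_SID/trace" "2010-09-02 03"
```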
AODU>

AODU> rac perf   -- how to gather information for performance problems, and how to connect to the database when the system hangs
        ****Data Gathering for Real Application Cluster Performance/Hang Issues****
        Provide files in Section "Data Gathering for All Real Application Cluster Issues" and the following:
            systemstate and hanganalyze - refer to Appendix C
            AWR, ADDM and ASH reports, each covering a period of no more than 60 minutes
            OSWatcher archives covering the hang time
            Note 301137.1 - OS Watcher User Guide
            Note.433472.1 - OS Watcher For Windows (OSWFW) User Guide
            CHM/OS data that covers the hang time, for platforms where it is available; refer to Note 1328466.1,
            section "How do I collect the Cluster Health Monitor data"
        Appendix C. systemstate and hanganalyze in RAC
        To collect hanganalyze and systemstate dumps in RAC, execute the following on one instance to generate cluster-wide dumps:
        a - Connect to sqlplus as sysdba: "sqlplus / as sysdba";
            if this does not work, use "sqlplus -prelim / as sysdba"
        b - Execute the following commands:
            For 11g+
            SQL> oradebug setospid <ospid of diag process>
            SQL> oradebug unlimit
            SQL> oradebug -g all hanganalyze 3
            ##..Wait about 2 minutes
            SQL> oradebug -g all hanganalyze 3
            SQL> oradebug -g all dump systemstate 258
            If possible, take another one at level 266 instead of 258
            If the SGA is large or the fix for bug 11800959 (fixed in 11.2.0.2 DB PSU5, 11.2.0.3 and above) is not applied,
        level 266 can take a very long time, generate a huge trace file, and may not finish for hours.
            For 10g
            SQL> oradebug setospid <ospid of diag process>
            SQL> oradebug unlimit
            SQL> oradebug -g all dump systemstate 266
            ##..Wait about 2 minutes
            SQL> oradebug -g all dump systemstate 266
            Please upload *diag* trace from either bdump or trace directory.
            If the diag trace is huge or the "oradebug -g all ..." command hangs, please collect a system state
        dump from each instance individually at around the same time:
            SQL> oradebug setmypid
            SQL> oradebug unlimit
            SQL> oradebug hanganalyze 3
            ##..Wait about 2 minutes
            SQL> oradebug hanganalyze 3
            SQL> oradebug dump systemstate 258
            SQL> oradebug tracefile_name
            Please upload the trace file listed above.
        Script to Collect RAC Diagnostic Information (racdiag.sql) (Doc ID 135714.1)
        If "sqlplus -prelim / as sysdba" does not work, refer to note 359536.1 Step "1.)  Using
        OS debuggers like dbx or gdb" to take on all nodes.
        If ASM is involved, collect hanganalyze and systemstate from ASM with the instruction above.
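        The dump sequence above lends itself to a small driver. This is a sketch under assumptions: the `sqlplus -prelim` attach, the diag-process pid, and the 2-minute spacing come from the procedure above; the function name and argument handling are illustrative.

```shell
#!/bin/sh
# Sketch of the 11g+ cluster-wide dump sequence. DIAG_PID is the OS pid of
# the ora_diag_$ORACLE_SID background process (look it up with ps -ef).
DIAG_PID=$1

run_oradebug() {    # feed one oradebug command to a preliminary connection
    sqlplus -prelim / as sysdba <<EOF
oradebug setospid $DIAG_PID
oradebug unlimit
$1
exit
EOF
}

# On a real system, uncomment:
# run_oradebug "oradebug -g all hanganalyze 3"
# sleep 120                       # wait ~2 minutes between the two passes
# run_oradebug "oradebug -g all hanganalyze 3"
# run_oradebug "oradebug -g all dump systemstate 258"
```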
AODU>

AODU> rac srvctl    -- a handy srvctl reference for when no environment or white paper is at hand


        ****Srvctl Command Reference****
        $ srvctl -h
        Usage: srvctl [-V]
        Usage: srvctl add database -d <db_unique_name> -o <oracle_home> [-m <domain_name>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY |
        SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-n <db_name>] [-y {AUTOMATIC | MANUAL}]
        [-g "<serverpool_list>"] [-x <node_name>] [-a "<diskgroup_list>"]
        Usage: srvctl config database [-d <db_unique_name> [-a] ]
        Usage: srvctl start database -d <db_unique_name> [-o <start_options>]
        Usage: srvctl stop database -d <db_unique_name> [-o <stop_options>] [-f]
        Usage: srvctl status database -d <db_unique_name> [-f] [-v]
        Usage: srvctl enable database -d <db_unique_name> [-n <node_name>]
        Usage: srvctl disable database -d <db_unique_name> [-n <node_name>]
        Usage: srvctl modify database -d <db_unique_name> [-n <db_name>] [-o <oracle_home>] [-u <oracle_user>] [-m <domain>] [-p <spfile>] [-r {PRIMARY |
        PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-y {AUTOMATIC | MANUAL}]
        [-g "<serverpool_list>" [-x <node_name>]] [-a "<diskgroup_list>"|-z]
        Usage: srvctl remove database -d <db_unique_name> [-f] [-y]
        Usage: srvctl getenv database -d <db_unique_name> [-t "<name_list>"]
        Usage: srvctl setenv database -d <db_unique_name> {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}
        Usage: srvctl unsetenv database -d <db_unique_name> -t "<name_list>"
        Usage: srvctl add instance -d <db_unique_name> -i <inst_name> -n <node_name> [-f]
        Usage: srvctl start instance -d <db_unique_name> {-n <node_name> [-i <inst_name>] | -i <inst_name_list>} [-o <start_options>]
        Usage: srvctl stop instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>}  [-o <stop_options>] [-f]
        Usage: srvctl status instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>} [-f] [-v]
        Usage: srvctl enable instance -d <db_unique_name> -i "<inst_name_list>"
        Usage: srvctl disable instance -d <db_unique_name> -i "<inst_name_list>"
        Usage: srvctl modify instance -d <db_unique_name> -i <inst_name> { -n <node_name> | -z }
        Usage: srvctl remove instance -d <db_unique_name> [-i <inst_name>] [-f] [-y]
        Usage: srvctl add service -d <db_unique_name> -s <service_name> {-r "<preferred_list>" [-a "<available_list>"] [-P {BASIC | NONE | PRECONNECT}] |
         -g <server_pool> [-c {UNIFORM | SINGLETON}] } [-k   <net_num>] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]]
        [-y {AUTOMATIC | MANUAL}] [-q {TRUE|FALSE}] [-x {TRUE|FALSE}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}]
        [-m {NONE|BASIC}] [-z <failover_retries>] [-w <failover_delay>]
        Usage: srvctl add service -d <db_unique_name> -s <service_name> -u {-r "<new_pref_inst>" | -a "<new_avail_inst>"}
        Usage: srvctl config service -d <db_unique_name> [-s <service_name>] [-a]
        Usage: srvctl enable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
        Usage: srvctl disable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
        Usage: srvctl status service -d <db_unique_name> [-s "<service_name_list>"] [-f] [-v]
        Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
        Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <avail_inst_name> -r [-f]
        Usage: srvctl modify service -d <db_unique_name> -s <service_name> -n -i "<preferred_list>" [-a "<available_list>"] [-f]
        Usage: srvctl modify service -d <db_unique_name> -s <service_name> [-c {UNIFORM | SINGLETON}] [-P {BASIC|PRECONNECT|NONE}]
        [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}][-q {true|false}]
        [-x {true|false}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}]
         [-z <integer>] [-w <integer>]
        Usage: srvctl relocate service -d <db_unique_name> -s <service_name> {-i <old_inst_name> -t <new_inst_name>|-c <current_node> -n <target_node>} [-f]
               Specify instances for an administrator-managed database, or nodes for a policy managed database
        Usage: srvctl remove service -d <db_unique_name> -s <service_name> [-i <inst_name>] [-f]
        Usage: srvctl start service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-o <start_options>]
        Usage: srvctl stop service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-f]
        Usage: srvctl add nodeapps { { -n <node_name> -A <name|ip>/<netmask>/[if1[|if2...]] } | { -S <subnet>/<netmask>/[if1[|if2...]] } }
        [-p <portnum>] [-m <multicast-ip-address>] [-e <eons-listen-port>] [-l <ons-local-port>]  [-r <ons-remote-port>]
        [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
        Usage: srvctl config nodeapps [-a] [-g] [-s] [-e]
        Usage: srvctl modify nodeapps {[-n <node_name> -A <new_vip_address>/<netmask>[/if1[|if2|...]]] | [-S <subnet>/<netmask>[/if1[|if2|...]]]}
        [-m <multicast-ip-address>] [-p <multicast-portnum>] [-e <eons-listen-port>] [ -l <ons-local-port> ] [-r <ons-remote-port> ]
        [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
        Usage: srvctl start nodeapps [-n <node_name>] [-v]
        Usage: srvctl stop nodeapps [-n <node_name>] [-f] [-r] [-v]
        Usage: srvctl status nodeapps
        Usage: srvctl enable nodeapps [-v]
        Usage: srvctl disable nodeapps [-v]
        Usage: srvctl remove nodeapps [-f] [-y] [-v]
        Usage: srvctl getenv nodeapps [-a] [-g] [-s] [-e] [-t "<name_list>"]
        Usage: srvctl setenv nodeapps {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
        Usage: srvctl unsetenv nodeapps -t "<name_list>" [-v]
        Usage: srvctl add vip -n <node_name> -k <network_number> -A <name|ip>/<netmask>/[if1[|if2...]] [-v]
        Usage: srvctl config vip { -n <node_name> | -i <vip_name> }
        Usage: srvctl disable vip -i <vip_name> [-v]
        Usage: srvctl enable vip -i <vip_name> [-v]
        Usage: srvctl remove vip -i "<vip_name_list>" [-f] [-y] [-v]
        Usage: srvctl getenv vip -i <vip_name> [-t "<name_list>"]
        Usage: srvctl start vip { -n <node_name> | -i <vip_name> } [-v]
        Usage: srvctl stop vip { -n <node_name>  | -i <vip_name> } [-f] [-r] [-v]
        Usage: srvctl status vip { -n <node_name> | -i <vip_name> }
        Usage: srvctl setenv vip -i <vip_name> {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
        Usage: srvctl unsetenv vip -i <vip_name> -t "<name_list>" [-v]
        Usage: srvctl add asm [-l <lsnr_name>]
        Usage: srvctl start asm [-n <node_name>] [-o <start_options>]
        Usage: srvctl stop asm [-n <node_name>] [-o <stop_options>] [-f]
        Usage: srvctl config asm [-a]
        Usage: srvctl status asm [-n <node_name>] [-a]
        Usage: srvctl enable asm [-n <node_name>]
        Usage: srvctl disable asm [-n <node_name>]
        Usage: srvctl modify asm [-l <lsnr_name>]
        Usage: srvctl remove asm [-f]
        Usage: srvctl getenv asm [-t <name>[, ...]]
        Usage: srvctl setenv asm -t "<name>=<val> [,...]" | -T "<name>=<value>"
        Usage: srvctl unsetenv asm -t "<name>[, ...]"
        Usage: srvctl start diskgroup -g <dg_name> [-n "<node_list>"]
        Usage: srvctl stop diskgroup -g <dg_name> [-n "<node_list>"] [-f]
        Usage: srvctl status diskgroup -g <dg_name> [-n "<node_list>"] [-a]
        Usage: srvctl enable diskgroup -g <dg_name> [-n "<node_list>"]
        Usage: srvctl disable diskgroup -g <dg_name> [-n "<node_list>"]
        Usage: srvctl remove diskgroup -g <dg_name> [-f]
        Usage: srvctl add listener [-l <lsnr_name>] [-s] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"]
         [-o <oracle_home>] [-k <net_num>]
        Usage: srvctl config listener [-l <lsnr_name>] [-a]
        Usage: srvctl start listener [-l <lsnr_name>] [-n <node_name>]
        Usage: srvctl stop listener [-l <lsnr_name>] [-n <node_name>] [-f]
        Usage: srvctl status listener [-l <lsnr_name>] [-n <node_name>]
        Usage: srvctl enable listener [-l <lsnr_name>] [-n <node_name>]
        Usage: srvctl disable listener [-l <lsnr_name>] [-n <node_name>]
        Usage: srvctl modify listener [-l <lsnr_name>] [-o <oracle_home>] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>]
         [/SDP:<port>]"] [-u <oracle_user>] [-k <net_num>]
        Usage: srvctl remove listener [-l <lsnr_name> | -a] [-f]
        Usage: srvctl getenv listener [-l <lsnr_name>] [-t <name>[, ...]]
        Usage: srvctl setenv listener [-l <lsnr_name>] -t "<name>=<val> [,...]" | -T "<name>=<value>"
        Usage: srvctl unsetenv listener [-l <lsnr_name>] -t "<name>[, ...]"
        Usage: srvctl add scan -n <scan_name> [-k <network_number> [-S <subnet>/<netmask>[/if1[|if2|...]]]]
        Usage: srvctl config scan [-i <ordinal_number>]
        Usage: srvctl start scan [-i <ordinal_number>] [-n <node_name>]
        Usage: srvctl stop scan [-i <ordinal_number>] [-f]
        Usage: srvctl relocate scan -i <ordinal_number> [-n <node_name>]
        Usage: srvctl status scan [-i <ordinal_number>]
        Usage: srvctl enable scan [-i <ordinal_number>]
        Usage: srvctl disable scan [-i <ordinal_number>]
        Usage: srvctl modify scan -n <scan_name>
        Usage: srvctl remove scan [-f] [-y]
        Usage: srvctl add scan_listener [-l <lsnr_name_prefix>] [-s] [-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]]
        Usage: srvctl config scan_listener [-i <ordinal_number>]
        Usage: srvctl start scan_listener [-n <node_name>] [-i <ordinal_number>]
        Usage: srvctl stop scan_listener [-i <ordinal_number>] [-f]
        Usage: srvctl relocate scan_listener -i <ordinal_number> [-n <node_name>]
        Usage: srvctl status scan_listener [-i <ordinal_number>]
        Usage: srvctl enable scan_listener [-i <ordinal_number>]
        Usage: srvctl disable scan_listener [-i <ordinal_number>]
        Usage: srvctl modify scan_listener {-u|-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]}
        Usage: srvctl remove scan_listener [-f] [-y]
        Usage: srvctl add srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"]
        Usage: srvctl config srvpool [-g <pool_name>]
        Usage: srvctl status srvpool [-g <pool_name>] [-a]
        Usage: srvctl status server -n "<server_list>" [-a]
        Usage: srvctl relocate server -n "<server_list>" -g <pool_name> [-f]
        Usage: srvctl modify srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"]
        Usage: srvctl remove srvpool -g <pool_name>
        Usage: srvctl add oc4j [-v]
        Usage: srvctl config oc4j
        Usage: srvctl start oc4j [-v]
        Usage: srvctl stop oc4j [-f] [-v]
        Usage: srvctl relocate oc4j [-n <node_name>] [-v]
        Usage: srvctl status oc4j [-n <node_name>]
        Usage: srvctl enable oc4j [-n <node_name>] [-v]
        Usage: srvctl disable oc4j [-n <node_name>] [-v]
        Usage: srvctl modify oc4j -p <oc4j_rmi_port> [-v]
        Usage: srvctl remove oc4j [-f] [-v]
        Usage: srvctl start home -o <oracle_home> -s <state_file> -n <node_name>
        Usage: srvctl stop home -o <oracle_home> -s <state_file> -n <node_name> [-t <stop_options>] [-f]
        Usage: srvctl status home -o <oracle_home> -s <state_file> -n <node_name>
        Usage: srvctl add filesystem -d <volume_device> -v <volume_name> -g <dg_name> [-m <mountpoint_path>] [-u <user>]
        Usage: srvctl config filesystem -d <volume_device>
        Usage: srvctl start filesystem -d <volume_device> [-n <node_name>]
        Usage: srvctl stop filesystem -d <volume_device> [-n <node_name>] [-f]
        Usage: srvctl status filesystem -d <volume_device>
        Usage: srvctl enable filesystem -d <volume_device>
        Usage: srvctl disable filesystem -d <volume_device>
        Usage: srvctl modify filesystem -d <volume_device> -u <user>
        Usage: srvctl remove filesystem -d <volume_device> [-f]
        Usage: srvctl start gns [-v] [-l <log_level>] [-n <node_name>]
        Usage: srvctl stop gns [-v] [-n <node_name>] [-f]
        Usage: srvctl config gns [-v] [-a] [-d] [-k] [-m] [-n <node_name>] [-p] [-s] [-V]
        Usage: srvctl status gns -n <node_name>
        Usage: srvctl enable gns [-v] [-n <node_name>]
        Usage: srvctl disable gns [-v] [-n <node_name>]
        Usage: srvctl relocate gns [-v] [-n <node_name>] [-f]
        Usage: srvctl add gns [-v] -d <domain> -i <vip_name|ip> [-k <network_number> [-S <subnet>/<netmask>[/<interface>]]]
        Usage: srvctl modify gns [-v] [-f] [-l <log_level>] [-d <domain>] [-i <ip_address>] [-N <name> -A <address>] [-D <name> -A <address>]
        [-c <name> -a <alias>] [-u <alias>] [-r <address>] [-V <name>] [-F <forwarded_domains>] [-R <refused_domains>] [-X <excluded_interfaces>]
        Usage: srvctl remove gns [-f] [-d <domain_name>]
        srvctl add service -d <db_unique_name> -s <service_name>
        -r "<preferred_list>" [-a "<available_list>"] [-P {BASIC | NONE | PRECONNECT}]
        -g <server_pool> [-c {UNIFORM | SINGLETON}]
        [-k <net_num>]
        [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]]
        [-y {AUTOMATIC | MANUAL}]
        [-q {TRUE|FALSE}]
        [-x {TRUE|FALSE}]
        [-j {SHORT|LONG}]
        [-B {NONE|SERVICE_TIME|THROUGHPUT}]
        [-e {NONE|SESSION|SELECT}]
        [-m {NONE|BASIC}]
        [-z <failover_retries>]
        [-w <failover_delay>]
        Here is a description of the options used to configure TAF (Transparent Application Failover).
        -r "<preferred_list>" [-a "<available_list>"] [-P {BASIC | NONE | PRECONNECT}]
                 This clause is valid only for administrator-managed databases, which support the PRECONNECT method.
        -g <server_pool> [-c {UNIFORM | SINGLETON}]
                 This clause is valid only for policy-managed databases, where the PRECONNECT method is not available.
        The above two options tell Clusterware how to manage the service.
        [-e {NONE|SESSION|SELECT}]
                This defines the TAF type: SESSION or SELECT.
        [-m {NONE|BASIC}]
                This defines the method of TAF.
        [-z <failover_retries>]
                This defines the number of times to attempt to connect after a failover.
        [-w <failover_delay>]
               This defines the amount of time in seconds to wait between connect attempts.
        The above four options are passed to the database when the service starts and are visible in the DBA_SERVICES view.
        Basic method
        1. Create Service
        Syntax:
                  srvctl add service -d <db_unique_name> -s <service_name>
                  -r "<preferred_list>" [-a "<available_list>"] [-P {BASIC | NONE | PRECONNECT}]
                 [-e {NONE|SESSION|SELECT}]
                 [-m {NONE|BASIC}]
                 [-z <failover_retries>]
                 [-w <failover_delay>]
        Example:
                 $ srvctl add service -d db112 -s mysrvb -r db1121,db1122 -P basic -e select -m basic -z 10 -w 2
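        To keep the option soup manageable, the BASIC-method creation can be scripted. The helper below is purely illustrative (its name and defaults are assumptions, not part of aodu or srvctl); it only prints the command, so it can be reviewed before running against a real cluster.

```shell
# Hypothetical helper: print the "srvctl add service" command for a
# BASIC-method TAF service. Defaults (-z 10 -w 2) match the example above.
make_taf_service() {    # usage: make_taf_service <db> <service> <preferred_list> [retries] [delay]
    db=$1; svc=$2; pref=$3; retries=${4:-10}; delay=${5:-2}
    echo "srvctl add service -d $db -s $svc -r $pref -P basic -e select -m basic -z $retries -w $delay"
}

make_taf_service db112 mysrvb db1121,db1122
# -> srvctl add service -d db112 -s mysrvb -r db1121,db1122 -P basic -e select -m basic -z 10 -w 2
```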

        Preconnect Method
        1. Create service
        Syntax:
                   srvctl add service -d <db_unique_name> -s <service_name>
                  -r "<preferred_list>" [-a "<available_list>"] [-P {BASIC | NONE | PRECONNECT}]
                  [-e {NONE|SESSION|SELECT}]
                  [-m {NONE|BASIC}]
        Example:
                  $ srvctl add service -d db112 -s mysrv -r db1121 -a db1122 -P preconnect
AODU>


AODU> rac crsctl    -- commonly used crsctl commands


        ****Crsctl Command Reference****
        $ ./crsctl -h
        Usage: crsctl add       - add a resource, type or other entity
               crsctl check     - check a service, resource or other entity
               crsctl config    - output autostart configuration
               crsctl debug     - obtain or modify debug state
               crsctl delete    - delete a resource, type or other entity
               crsctl disable   - disable autostart
               crsctl enable    - enable autostart
               crsctl get       - get an entity value
               crsctl getperm   - get entity permissions
               crsctl lsmodules - list debug modules
               crsctl modify    - modify a resource, type or other entity
               crsctl query     - query service state
               crsctl pin       - Pin the nodes in the nodelist
               crsctl relocate  - relocate a resource, server or other entity
               crsctl replace   - replaces the location of voting files
               crsctl setperm   - set entity permissions
               crsctl set       - set an entity value
               crsctl start     - start a resource, server or other entity
               crsctl status    - get status of a resource or other entity
               crsctl stop      - stop a resource, server or other entity
               crsctl unpin     - unpin the nodes in the nodelist
               crsctl unset     - unset a entity value, restoring its default
        With 11gR2, the CSS misscount, reboottime and disktimeout settings can be changed online without taking any node down:
        1) Execute crsctl as root to modify the misscount:
             $CRS_HOME/bin/crsctl set css misscount <n>    #### where <n> is the maximum private network latency in seconds
             $CRS_HOME/bin/crsctl set css reboottime <r> [-force]  #### (<r> is seconds)
             $CRS_HOME/bin/crsctl set css disktimeout <d> [-force] #### (<d> is seconds)
        2) Execute crsctl as root to confirm the change:
             $CRS_HOME/bin/crsctl get css misscount
             $CRS_HOME/bin/crsctl get css reboottime
             $CRS_HOME/bin/crsctl get css disktimeout
        NOTE: You can use "-init" to check Local Resources

        A one-liner to format "crsctl status res" output as a table (one row per instance):

        crsctl status res | grep -v "^$" | \
            awk -F "=" 'BEGIN {print " "} {printf("%s", NR%4 ? $2"|" : $2"\n")}' | \
            sed -e 's/  *, /,/g' -e 's/, /,/g' | \
            awk -F "|" 'BEGIN { printf "%-40s%-35s%-20s%-50s\n","Resource Name","Resource Type","Target","State" }
            { split($3,trg,","); split($4,st,",") } { for (i in trg) printf "%-40s%-35s%-20s%-50s\n",$1,$2,trg[i],st[i] }'
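        The same transformation reads more clearly as a single awk script. This sketch (the function name `crs_table` is an assumption) parses the NAME=/TYPE=/TARGET=/STATE= blocks that "crsctl status res" emits and prints one row per TARGET/STATE pair:

```shell
# Readable equivalent of the one-liner above; pipe "crsctl status res" into it.
crs_table() {
    awk -F'=' '
    BEGIN { printf "%-40s%-35s%-20s%-50s\n", "Resource Name", "Resource Type", "Target", "State" }
    /^NAME=/   { name  = $2 }
    /^TYPE=/   { rtype = $2 }
    /^TARGET=/ { tgt   = $2 }
    /^STATE=/  {
        st = $2
        gsub(/ *, */, ",", tgt); gsub(/ *, */, ",", st)   # collapse ", " separators
        n = split(tgt, t, ","); split(st, s, ",")
        for (i = 1; i <= n; i++)                          # one row per instance
            printf "%-40s%-35s%-20s%-50s\n", name, rtype, t[i], s[i]
    }'
}

# Usage: crsctl status res | crs_table
```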
AODU>


AODU> rac ocrconfig  -- OCR configuration commands

        ****Ocrconfig Command Reference****
        $ ./ocrconfig -help
        Name:
                ocrconfig - Configuration tool for Oracle Cluster/Local Registry.
        Synopsis:
                ocrconfig [option]
                option:
                        [-local] -export <filename>
                                                            - Export OCR/OLR contents to a file
                        [-local] -import <filename>         - Import OCR/OLR contents from a file
                        [-local] -upgrade [<user> [<group>]]
                                                            - Upgrade OCR from previous version
                        -downgrade [-version <version string>]
                                                            - Downgrade OCR to the specified version
                        [-local] -backuploc <dirname>       - Configure OCR/OLR backup location
                        [-local] -showbackup [auto|manual]  - Show OCR/OLR backup information
                        [-local] -manualbackup              - Perform OCR/OLR backup
                        [-local] -restore <filename>        - Restore OCR/OLR from physical backup
                        -replace <current filename> -replacement <new filename>
                                                            - Replace a OCR device/file <filename1> with <filename2>
                        -add <filename>                     - Add a new OCR device/file
                        -delete <filename>                  - Remove a OCR device/file
                        -overwrite                          - Overwrite OCR configuration on disk
                        -repair -add <filename> | -delete <filename> | -replace <current filename> -replacement <new filename>
                                                            - Repair OCR configuration on the local node
                        -help                               - Print out this help information
        Note:
                * A log file will be created in
                $ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
                you have file creation privileges in the above directory before
                running this tool.
                * Only -local -showbackup [manual] is supported.
                * Use option '-local' to indicate that the operation is to be performed on the Oracle Local Registry
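        The backup-related options above combine into a short routine. The wrapper below is illustrative (the function name and the example directory are assumptions); it only prints the commands, since ocrconfig must be run as root on a real cluster.

```shell
# Hypothetical wrapper: print the OCR manual-backup sequence using the
# options documented above. Run the printed commands as root.
ocr_backup_plan() {    # usage: ocr_backup_plan <backup_dir>
    dir=$1
    echo "ocrconfig -backuploc $dir"      # point OCR backups at $dir
    echo "ocrconfig -manualbackup"        # take an on-demand OCR backup now
    echo "ocrconfig -showbackup manual"   # confirm the new backup is listed
}

ocr_backup_plan /u01/app/grid/ocrbackup
```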
AODU>


AODU> rac note   -- commonly used RAC-related documents

        11gR2 Clusterware and Grid Home - What You Need to Know (Doc ID 1053147.1)
        NOTE:888828.1 - Exadata Database Machine and Exadata Storage Server Supported Versions
        NOTE:1070954.1 - Oracle Exadata Database Machine exachk or HealthCheck
        Master Note for Real Application Clusters (RAC) Oracle Clusterware and Oracle Grid Infrastructure (Doc ID 1096952.1)
        Get Proactive with Oracle Database - RAC, Scalability (Doc ID 1626755.1)
        SRDC - Data Collection for RAC Instance Eviction Issues (Doc ID 1675148.1)
        RACGuide: 12.1.0.1 RAC Installation on Linux [Video] (Doc ID 1600316.1)
        Oracle Grid Infrastructure: Understanding Split-Brain Node Eviction (Doc ID 1546004.1)
        Document 811306.1 RAC and Oracle Clusterware Best Practices and Starter Kit (Linux)
        Document 811280.1 RAC and Oracle Clusterware Best Practices and Starter Kit (Solaris)
        Document 811271.1 RAC and Oracle Clusterware Best Practices and Starter Kit (Windows)
        Document 811293.1 RAC and Oracle Clusterware Best Practices and Starter Kit (AIX)
        Document 811303.1 RAC and Oracle Clusterware Best Practices and Starter Kit (HP-UX)
        Oracle Clusterware 10gR2/ 11gR1/ 11gR2/ 12cR1 Diagnostic Collection Guide (Doc ID 330358.1)
        Reference: Document 1513912.1 TFA Collector - Tool for Enhanced Diagnostic Gathering
        Master Note For Oracle Database 12c Release 1 (12.1)Database/Client Installation/Upgrade/Migration Standalone Environment(Non-RAC)(Doc ID 1520299.1)
        RAC and Oracle Clusterware Best Practices and Starter Kit (Platform Independent) (Doc ID 810394.1)
        Note that it is also a good idea to follow the RAC Assurance best practices in Note: 810394.1
AODU>




