Focus On Oracle

Installing, Backup & Recovery, Performance Tuning,
Troubleshooting, Upgrading, Patching

Oracle Engineered System



aodu (At Oracle Database Utility): optim

The optim feature of aodu covers: how to write hints correctly, commonly used hints, how to view execution plans, how to gather optimizer statistics correctly on 10g and 11g, how to gather system statistics and data dictionary statistics, how to set statistics manually, and how to migrate statistics.

How do you use the optim feature?

[oracle@db1 ~]$ ./aodu

AT Oracle Database Utility,Release 1.1.0 on Tue Jun 14 16:18:00 2016
Copyright (c) 2014, 2015, Robin.Han.  All rights reserved.
http://ohsdba.cn
E-Mail:375349564@qq.com

AODU> optim ohsdba

         optim general|join|access|dep|mis|parallel|mode|subqv|display|11g|10g|gather|gds|gss|sss|st
AODU> optim oracle

         Currently it's for internal use only
AODU>

Note: the help text above is only listed when you run optim ohsdba.

AODU> optim general

        Syntax: /*+ HINT HINT ... */
        Note: In PL/SQL the space between the '+' and the first letter of the hint is vital, as otherwise the hint may be ignored.
        So /*+ ALL_ROWS */ is fine but /*+ALL_ROWS */ will cause problems. See Bug:697121
        Hints always force the use of the cost based optimizer (Exception: the RULE hint).
        Hints must reference the table alias, if aliases are in use. For example:
        Incorrect:
        SELECT /*+ FULL ( emp ) */ empno FROM emp myalias WHERE empno > 10;
        Correct:
        SELECT /*+ FULL ( myalias ) */ empno FROM emp myalias WHERE empno > 10;
        Hints should not reference the schema name. For example:
        SELECT /*+ index(scott.emp emp1) */ ...
        should not be used. Alias the table name instead and use the alias in the hint.
        Invalid hints cause the hint to be ignored without warning
        Invalid hints may not be immediately obvious. For example: FIRST_ROWS as a hint when the statement has an ORDER BY clause
        (since the data has to be ordered prior to the first row being returned the first_rows hint may not have the desired effect).
        The access path to be hinted must be an available access path. For example, an index hint referencing a non-existent index will fail silently.
        If third party tools do not contain the logic to support hints, then a potential workaround is to embed hints in a view and then reference that view.
        Refer to the Oracle Database Performance Tuning Guide for more on hints.
AODU>
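
If a third-party tool strips or mangles hints, the workaround mentioned above is to embed the hint in a view and query the view instead. A minimal sketch, assuming the standard SCOTT.EMP demo table; the view name emp_full_v is made up for illustration:

        create or replace view emp_full_v as
        select /*+ FULL(e) */ empno, ename, deptno
        from   emp e;

        -- The application now queries the view; the FULL hint travels with it.
        select empno, ename from emp_full_v where deptno = 10;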


AODU> optim join   ---hints related to joins

        ****Joining Hints:****
        USE_NL(tab)
         Desc=Use table 'tab' as the inner table in a Nested Loops join.
        May not work unless accompanied by a hint that forces the correct join order (e.g. the ORDERED hint).
        NO_USE_NL(tab)
         Desc=Excludes the use of table 'tab' as the inner table in a Nested Loops join
        USE_NL_WITH_INDEX(tab)
         Desc=The USE_NL_WITH_INDEX hint will cause the optimizer to join the specified table to another
        row source with a nested loops join using the specified table as the inner table but only under
        the following condition:
        If no index is specified, the optimizer must be able to use some index with at least one join predicate as the index key.
        If an index is specified, the optimizer must be able to  use that index with at least one join predicate as the index key.
        USE_MERGE(tab..)
         Desc=Join each specified table with another row source using a sort-merge join.
        NO_USE_MERGE(tab..)
         Desc=Excludes the option of joining each specified table with another row source using a sort-merge join.
        USE_HASH(tab1 tab2)
         Desc=Join each specified table with another row source using a hash join. 'tab1' is joined to the previous row source with a hash join. (>=7.3)
        NO_USE_HASH(tab1 tab2)
         Desc=Excludes the option of joining each specified table with another row source with a hash join.
        STAR_TRANSFORMATION
         Desc=Use best plan containing a STAR transformation (if there is one)
        NO_STAR_TRANSFORMATION
         Desc=Exclude the option of using the best plan containing a STAR transformation
        ORDERED
         Desc=Access tables in the order of the FROM clause
        LEADING
         Desc=This hint specifies only the driving table. From there CBO is free to investigate multiple join orders
AODU>
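
As a quick illustration of combining an order hint with a join-method hint, the sketch below (assuming the standard SCOTT schema) asks the optimizer to drive from DEPT and probe EMP as the inner table of a nested loops join:

        select /*+ LEADING(d) USE_NL(e) */
               d.dname, e.ename
        from   dept d, emp e
        where  e.deptno = d.deptno
        and    d.loc = 'DALLAS';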


AODU> optim access   ---hints related to access paths

        ****Access Hints:****
        FULL(tab)
         Desc=Use Full Table Scan FTS on table 'tab'
        CACHE(tab)
         Desc=If table within <Parameter:CACHE_SIZE_THRESHOLD> treat as if it had the CACHE option set.
        See <Parameter:CACHE_SIZE_THRESHOLD>.  Only applies if FTS used.
        NOCACHE(tab)
         Desc=Do not cache table even if it has CACHE option set. Only relevant for FTS.
        CLUSTER(tab)
         Desc=Use cluster scan to access 'tab'
        HASH(tab)
         Desc=Use hash scan to access 'tab'
        INDEX(tab [ind])
         Desc=Use 'ind' scan to access 'tab' - Disables index_ffs. See Document 50607.1 How to specify an INDEX Hint
        NO_INDEX (tab [ind])
         Desc=Do not use 'ind' to access 'tab'
        INDEX_ASC(tab [ind])
         Desc=Use 'ind' to access 'tab' for range scan.
        INDEX_DESC(tab [ind])
         Desc=Use descending index range scan (Join problems pre 7.3)
        INDEX_FFS(tab [ind])
         Desc=Index fast full scan - rather than FTS.
        NO_INDEX_FFS(tab [ind])
         Desc=Exclude the option of using Index fast full scan - rather than FTS.
        INDEX_RRS(tab [ind])
         Desc=Index Rowid Range scan
        INDEX_COMBINE( tab i1.. i5 )
         Desc=Try to use some boolean combination of bitmap index/s i1,i2 etc
        INDEX_SS(tab [ind])
         Desc=Use 'ind' to access 'tab' with an index skip scan
        INDEX_SS_ASC(tab [ind])
         Desc=Use 'ind' to access 'tab' with an index skip scan in Ascending order
        INDEX_SS_DESC(tab [ind])
         Desc=Use 'ind' to access 'tab' with an index skip scan in Descending order
        NO_INDEX_SS(tab [ind])
         Desc=Exclude the option of using 'ind' to access 'tab' with an index skip scan
        USE_CONCAT
         Desc=Use concatenation (Union All) for OR (or IN) statements. (>=7.2). See Document 17214.1 (7.2 requires <Event:10078>, 7.3 no hint req)
        NO_EXPAND
         Desc=Do not perform OR-expansion (Ie: Do not use Concatenation).
        DRIVING_SITE(table)
         Desc=Forces query execution to be done at the site where "table" resides
AODU>
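
Note again that the INDEX hint must name the table alias, not the schema-qualified table. A small sketch assuming SCOTT.EMP; the index name emp_deptno_idx is hypothetical and must match an index that really exists, otherwise the hint fails silently:

        -- Hypothetical index; create it only if it does not already exist.
        create index emp_deptno_idx on emp (deptno);

        select /*+ INDEX(e emp_deptno_idx) */ empno, ename
        from   emp e
        where  e.deptno = 20;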



AODU> optim dep  ---deprecated hints

        ****Deprecated Hints:****
        AND_EQUAL(tab i1.. i5 )
         Desc=Merge scans of 2 to 5 single column indexes.
        MERGE_AJ(v),HASH_AJ(v),NL_AJ(v)
         Desc=Put hint in a NOT IN subquery to perform sort-merge anti-join or hash anti-join or nested loops antijoin (>=7.3).For example:SELECT .. WHERE deptno is not null AND deptno NOT IN (SELECT /*+ HASH_AJ */ deptno ...)
        HASH_SJ(v),MERGE_SJ(v),NL_SJ(v)
         Desc=Transform EXISTS subquery into HASH or MERGE or nested loops semi-join to access v
        ORDERED_PREDICATES
         Desc=Forces optimizer to apply predicates as they appear in the WHERE clause, except for predicates used as index keys
        ROWID(tab)
         Desc=Access tab by ROWID directly. For example:SELECT /*+ ROWID( table ) */ ... FROM tab WHERE ROWID between '&1' and '&2';
        STAR
         Desc=Force a star query plan if possible. A star plan has the largest table in the query last in the join order and joins it with a nested loops join on a  concatenated index. The STAR hint applies when there are at least 3 tables and the large table's concatenated index has at least 3 columns and there are no conflicting access or join method hints.(>=7.3)
        NOPARALLEL(table)
         Desc=No parallel on table
        NOPARALLEL_INDEX(table [,index])
         Desc=Opposite to PARALLEL_INDEX
        NOREWRITE
         Desc=8.1+ Do not rewrite the query
AODU>
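
For reference, this is how the (deprecated) anti-join hints were typically written inside a NOT IN subquery; on current releases the optimizer normally performs this transformation on its own. A sketch assuming SCOTT.DEPT and SCOTT.EMP:

        select d.deptno, d.dname
        from   dept d
        where  d.deptno is not null
        and    d.deptno not in (select /*+ HASH_AJ */ e.deptno
                                from   emp e
                                where  e.deptno is not null);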


AODU> optim mis  ---other commonly used hints

        ****Miscellaneous:****
        APPEND
         Desc=Only valid for INSERT .. SELECT. Allows INSERT to work like direct load or to perform parallel insert. See Document 50592.1
        NOAPPEND
         Desc=Do not use INSERT APPEND functionality
        REWRITE(v1[,v2])
         Desc=8.1+ With a view list, use an eligible materialized view from the list; without a view list, use any eligible MV
        NO_REWRITE
         Desc=Starting from 10g this syntax should be used
        NO_UNNEST
         Desc=Add to a subquery to prevent it from being unnested
        UNNEST
         Desc=Unnests specified subquery block if possible
        SWAP_JOIN_INPUTS
         Desc=Allows the user to switch the inputs of a join. See Document 171940.1
        CARDINALITY(t1 [,..],n)
         Desc=Makes the CBO use different assumptions about cardinality at the table level
AODU>
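
A short sketch of the APPEND hint for a direct-path INSERT ... SELECT; the history table emp_hist is hypothetical and must already exist with a compatible structure:

        -- Direct-path insert: data is written above the high-water mark and the
        -- table cannot be queried in the same transaction until commit.
        insert /*+ APPEND */ into emp_hist
        select * from emp where hiredate < date '2000-01-01';
        commit;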
AODU> optim parallel   ---hints related to parallel execution
        PARALLEL ( table, <degree> [, <instances>] )
         Desc=Use parallel degree / instances as specified
        PARALLEL_INDEX(table, [ index, [ degree[,instances] ] ]  )
         Desc=Parallel range scan for partitioned index
        PQ_DISTRIBUTE(tab,out,in)
         Desc=How to distribute rows from tab in a PQ(out/in may be HASH/NONE/BROADCAST/PARTITION)
        NO_PARALLEL(table)
         Desc=Starting from 10g this syntax should be used
        NO_PARALLEL_INDEX(table [,index])
         Desc=Starting from 10g this syntax should be used
AODU>
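
A minimal sketch of a parallel hint (10g+ naming); the table name sales and the degree of 4 are only illustrative:

        select /*+ FULL(s) PARALLEL(s 4) */ count(*)
        from   sales s;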


AODU> optim mode

        ****Optimizer Mode hints:****
        FIRST_ROWS
         Desc=Force CBO first rows
        ALL_ROWS
         Desc=Force CBO all rows
        RULE
         Desc=Force RBO if possible
AODU>


AODU> optim subqv   ---hints related to subqueries and views

        ****Hints referring to Sub-Queries/views:****
        PUSH_SUBQ
         Desc=Causes all subqueries in a query block to be executed at the earliest possible time.
        Normally subqueries are executed as the last step. The hint has no effect if the table the subquery is applied to is outer-joined, remote, or joined with a merge join. (>=7.2)

        NO_MERGE(v)
         Desc=Use this hint in a VIEW to PREVENT it being merged into the parent query. (>=7.2) or use NO_MERGE(v) in parent query block to prevent view V being merged
        MERGE(v)
         Desc=Do merge view V
        PUSH_JOIN_PRED(v)
         Desc=Push join predicates into view V
        NO_PUSH_JOIN_PRED(v)
         Desc=Do NOT push join predicates
AODU>
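
To show NO_MERGE used in the parent query block, the sketch below (assuming SCOTT.EMP) keeps the inline aggregate view as its own query block instead of letting it be merged into the outer query:

        select /*+ NO_MERGE(v) */ v.deptno, v.avg_sal
        from   (select deptno, avg(sal) avg_sal
                from   emp
                group  by deptno) v
        where  v.avg_sal > 2000;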


AODU> optim display   ---how to display execution plans: from plan_table, from memory, from AWR, etc.

        ****Create plan table:****
        @@?/rdbms/admin/catplan (10g and above)
        @@?/rdbms/admin/utlxplan
        SQL> set lines 130 long 2000 head off
        SQL> explain plan for >> your query goes here <<
        SQL> @@?/rdbms/admin/utlxplp
        SQL> @@?/rdbms/admin/utlxpls
        SQL> spool
        SQL> alter session set cursor_sharing=EXACT;
        SQL> select plan_table_output from table(dbms_xplan.display('PLAN_TABLE',null,'ALL'));
        SQL> spool off
        SQL> set head off
        SQL> spool >>spool file <<
        SQL> set autotrace trace explain
        SQL> @@ >> your query <<
        SQL> spool off

        ****Gather plan statistics:****
        select /*+ gather_plan_statistics */ col1, col2 etc.....
        SELECT sql_id, hash_value, child_number, SUBSTR(sql_text,1,40) Text
        FROM v$sql where sql_text LIKE '%&Some_Identifiable_String%';
        select * from table(dbms_xplan.display_cursor(null,null,'advanced ALLSTATS LAST'));
        select * from table(dbms_xplan.display_cursor(null,null, 'ALL'));
        select * from TABLE(dbms_xplan.display_cursor('&SQL_ID', &CHILD,'ALL IOSTATS LAST'))

        ****Plan from Memory:****
        sqlid can be found in V$SQL.SQL_ID,V$SESSION.SQL_ID,V$SESSION.PREV_SQL_ID
        For SQL ID :
        select * from table(dbms_xplan.display_cursor('&sql_id'));
        select * from table(dbms_xplan.display_cursor('&sql_id', NULL, 'ALL'));

        For SQL ID, Child Cursor :
        select * from table(dbms_xplan.display_cursor('&sql_id', &child_number, 'ALL'));

        For SQL Text :
        select t.*
        from v$sql s, table(dbms_xplan.display_cursor(s.sql_id, s.child_number, 'ALL')) t
        where s.sql_text like '%&querystring%';

        ****Plan From AWR:****
        For SQL ID :
        select * from table(dbms_xplan.display_awr('&sql_id')) ;
        select * from table(dbms_xplan.display_awr('&sql_id', NULL, NULL, 'ALL')) ;

        For SQL ID, Plan Hash Value in the current database :
        select * from table(dbms_xplan.display_awr('&sql_id', '&plan_hash_value', NULL, 'ALL')) ;

        For SQL ID, Plan Hash Value in a different database ID :
        select * from table(dbms_xplan.display_awr('&sql_id', '&plan_hash_value', &dbid, 'ALL')) ;

        For SQL ID that was executed during a certain period :
        select t.*
        from (select distinct sql_id, plan_hash_value, dbid
        from dba_hist_sqlstat
        where sql_id = '&sql_id'
        and snap_id between &begin_snap and &end_snap) s,
        table(dbms_xplan.display_awr(s.sql_id, s.plan_hash_value, s.dbid, 'ALL')) t;

        For SQL Text :
        select t.*
        from (select distinct r.sql_id sql_id, r.plan_hash_value plan_hash_value, r.dbid dbid
        from dba_hist_sqltext q, dba_hist_sqlstat r
        where q.sql_id = r.sql_id
        and q.sql_text like '%&querystring%') s,
        table(dbms_xplan.display_awr(s.sql_id, s.plan_hash_value, s.dbid, 'ALL')) t;

        ****Plan From SQL Tuning Set (STS):****
        Note : STS owner is the current user by default.
        For SQL ID in a STS :
        select * from table (dbms_xplan.display_sqlset( '&sts_name','&sql_id'));

        For All Statements in a STS :
        select t.*
        from dba_sqlset_statements s, table(dbms_xplan.display_sqlset(s.sqlset_name, s.sql_id, s.plan_hash_value, 'ALL', s.sqlset_owner)) t
        where s.sqlset_name = '&sts_name';

        For SQL ID, Plan Hash Value in a STS :
        select * from table (dbms_xplan.display_sqlset( '&sts_name','&sql_id', '&plan_hash_value', 'ALL'));

        For SQL ID, Plan Hash Value, STS Owner :
        select * from table (dbms_xplan.display_sqlset( '&sts_name','&sql_id', '&plan_hash_value', 'ALL', '&sts_owner'));

        For SQL Text in a STS :
        select t.*
        from dba_sqlset_statements s, table(dbms_xplan.display_sqlset(s.sqlset_name, s.sql_id, s.plan_hash_value, 'ALL', s.sqlset_owner)) t
        where s.sqlset_name = '&sts_name'
        and s.sql_text like '%&querystring%';

        ****Plan From SQL Plan Baseline:****
        For SQL Handle :
        select * from table(dbms_xplan.display_sql_plan_baseline('&sql_handle'));

        For SQL Handle, Plan Name :
        select * from table(dbms_xplan.display_sql_plan_baseline('&sql_handle', '&plan_name', 'ALL'));

        For SQL Text :
        select t.*
        from (select distinct sql_handle, plan_name from dba_sql_plan_baselines where sql_text like '%&querystring%') s,
        table(dbms_xplan.display_sql_plan_baseline(s.sql_handle, s.plan_name, 'ALL')) t;
AODU>
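
Putting the pieces together, a minimal end-to-end sketch (assuming SCOTT.EMP): explain a statement into PLAN_TABLE and display it, then run the statement with GATHER_PLAN_STATISTICS and pull the actual plan from memory:

        SQL> explain plan for
             select empno, ename from emp where deptno = 10;
        SQL> select * from table(dbms_xplan.display('PLAN_TABLE', null, 'TYPICAL'));

        SQL> select /*+ gather_plan_statistics */ empno, ename from emp where deptno = 10;
        SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));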


AODU> optim 11g   ---how to gather statistics in 11g

        ****gather stats for 11g****
        exec dbms_auto_task_admin.disable('auto optimizer stats collection', NULL, NULL);
        exec dbms_auto_task_immediate.gather_optimizer_stats
        select job_name,state from dba_scheduler_jobs where program_name='GATHER_STATS_PROG';
        variable jobid varchar2(32)
        exec select job_name into :jobid from dba_scheduler_jobs where program_name='GATHER_STATS_PROG';
        print :jobid
        exec dbms_scheduler.stop_job(:jobid,false)
        exec dbms_stats.delete_table_stats(ownname=>'user_name',tabname=>'table_name',cascade_indexes=>true);
        exec dbms_stats.gather_table_stats(ownname=>'user_name',tabname=>'table_name',estimate_percent=>dbms_stats.auto_sample_size,
        cascade=>true,method_opt=>'for all columns size AUTO');
        exec dbms_stats.gather_table_stats(ownname=>'Schema_name',tabname=>'Table_name',estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade=>TRUE,method_opt=>'FOR ALL COLUMNS SIZE AUTO');
        NOTE: For a more cautious approach, as outlined in the text above, where column statistics are known not to be beneficial, replace:
            method_opt => 'FOR ALL COLUMNS SIZE AUTO'
        with
            method_opt => 'FOR ALL COLUMNS SIZE 1'
        exec dbms_stats.gather_schema_stats(ownname=>'Schema_name',cascade=>TRUE,method_opt => 'FOR ALL COLUMNS SIZE AUTO');
        exec dbms_stats.gather_database_stats(cascade => TRUE,method_opt => 'FOR ALL COLUMNS SIZE AUTO');

        In 11g, using AUTO_SAMPLE_SIZE for ESTIMATE_PERCENT defaults to a 100% sample and is therefore as accurate as possible for the table itself.
        In prior versions a 100% sample may have been impossible due to time and resource constraints, but 11g implements a new
        hashing algorithm to compute the statistics rather than sorting (in 9i and 10g the slow part was typically the sorting),
        which significantly improves collection time and resource usage. Note that the column statistics sample sizes are decided automatically,
        so a more variable sample may apply there.
        In other words, even though a 100% sample is collected, the gathering process is fast because the new hashing algorithm
        is used to compute the statistics rather than sorting.
        In 10g, support experience has shown that the default ESTIMATE_PERCENT
        sample size was extremely small, which often resulted in poor statistics, and it is therefore not recommended.
AODU>
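
A worked sketch of the 11g recommendation (AUTO_SAMPLE_SIZE plus SIZE AUTO) against a single table, followed by a quick check that the statistics were refreshed; SCOTT.EMP is just an example:

        begin
          dbms_stats.gather_table_stats(
            ownname          => 'SCOTT',
            tabname          => 'EMP',
            estimate_percent => dbms_stats.auto_sample_size,
            cascade          => true,
            method_opt       => 'FOR ALL COLUMNS SIZE AUTO');
        end;
        /

        select table_name, num_rows, sample_size, last_analyzed
        from   dba_tables
        where  owner = 'SCOTT' and table_name = 'EMP';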


AODU> optim 10g   ---how to gather statistics in 10g

        ****gather stats for 10g****
        exec dbms_stats.delete_table_stats(ownname=>'user_name',tabname=>'table_name',cascade_indexes=>true);
        exec dbms_stats.gather_table_stats(ownname=>'user_name',tabname=>'table_name',estimate_percent=> 100,cascade=>true,
        method_opt=>'for all columns size skewonly');
        exec dbms_stats.gather_table_stats(ownname=> 'Schema_name',tabname=> 'Table_name',estimate_percent=>100,
        cascade =>TRUE,method_opt => 'FOR ALL COLUMNS SIZE 1' );
        exec dbms_stats.gather_schema_stats(ownname =>'Schema_name ',cascade => TRUE,method_opt=>'FOR ALL COLUMNS SIZE 1');
        exec dbms_stats.gather_database_stats(cascade => TRUE,method_opt => 'FOR ALL COLUMNS SIZE 1' );
            ESTIMATE_PERCENT: defaults:
                 9i : 100
                10g : DBMS_STATS.AUTO_SAMPLE_SIZE (using very small estimate percentage)
                11g : DBMS_STATS.AUTO_SAMPLE_SIZE (using larger estimate percentage - 100%)
            METHOD_OPT: defaults:
                9i : "FOR ALL COLUMNS SIZE 1" effectively no detailed column statistics.
                10g and 11g : "FOR ALL COLUMNS SIZE AUTO" - This setting means that DBMS_STATS decides which columns to add histogram to
                where it believes that they may help to produce a better plan.
        Note that on 11g, although using auto size for ESTIMATE_PERCENT tends to default to 100%,
        because this is an auto sample, the engine may still decide
        to use a different sample size for tables and columns.  This means that Column statistics
        could still be gathered with a small sample size and create
        a histogram that is missing key values. When ESTIMATE_PERCENT is set to a specific numeric value,
        that value will be used for both the table and columns.
        'GATHER AUTO': Oracle implicitly determines which objects need new statistics, and determines how to gather those statistics.
        Oracle makes up its own mind based on its own criteria.
        'GATHER STALE': Oracle gathers statistics on objects considered to be STALE. By default 10% of the rows need to change to trigger this.
        NOTE: For cases where column data is known to be skewed and column statistics are known to be beneficial, replace:
            method_opt => 'FOR ALL COLUMNS SIZE 1'
        with
            method_opt => 'FOR ALL COLUMNS SIZE AUTO'
        or with
            method_opt => 'FOR ALL COLUMNS SIZE SKEWONLY'
AODU>
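
To see which columns DBMS_STATS actually built histograms on after a SIZE AUTO or SKEWONLY gather, the column statistics view can be checked afterwards (a sketch assuming SCOTT.EMP):

        select column_name, num_distinct, num_buckets, histogram
        from   dba_tab_col_statistics
        where  owner = 'SCOTT' and table_name = 'EMP'
        order  by column_name;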


AODU> optim gather   ---performance-related system statistics

        Workload statistics were introduced in Oracle 9i. In release 9.0, the following system statistics are gathered:
        sreadtim - single block read time
        mreadtim - multiblock read time
        mbrc - multi-block read count
        cpuspeed - CPU speed
        In release 9.2 this was extended to include the following in order to set a lower limit for a full table scan (FTS).
        maxthr - maximum I/O throughput
        slavethr - average slave throughput
        In release 10g and 11g, there are three new parameters available:
        cpuspeedNW - Represents noworkload CPU speed
        ioseektim - I/O seek time equals seek time + latency time + operating system overhead time.
        iotfrspeed - I/O transfer speed is the rate at which an Oracle database can read data in a single read request.
        select * from sys.aux_stats$;

        Statistics can be gathered for four levels of objects:
        schema objects - in use since the introduction of the CBO
        data dictionary - introduced in 9i and considered optional to gather
        fixed objects - introduced in 10g
        system (performance) statistics - introduced in 9i but rarely implemented.
AODU>
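
A minimal sketch of the two common ways to gather system statistics, followed by the same sys.aux_stats$ check as above; the 60-minute interval is only an example:

        -- Noworkload statistics (cpuspeednw, ioseektim, iotfrspeed):
        exec dbms_stats.gather_system_stats('NOWORKLOAD');

        -- Workload statistics over a representative 60-minute window:
        exec dbms_stats.gather_system_stats(gathering_mode => 'INTERVAL', interval => 60);

        select sname, pname, pval1 from sys.aux_stats$ where pval1 is not null;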


AODU> optim gds   ---how to gather data dictionary statistics

        ****Gather dictionary stats****
        exec dbms_stats.gather_schema_stats ('sys');
        exec dbms_stats.gather_database_stats (gather_sys=>true);
        exec dbms_stats.gather_dictionary_stats;
        exec dbms_stats.gather_fixed_objects_stats;
        exec dbms_stats.export_fixed_objects_stats(stattab=>'STATS_TABLE', statown=>'MY_USER');
        exec dbms_stats.delete_fixed_objects_stats();
        exec dbms_scheduler.run_job('GATHER_STATS_JOB');
        exec dbms_stats.gather_fixed_objects_stats();
AODU>


AODU> optim gss   ---how to gather system statistics, and how to back up and restore statistics

        ****Gather system Stats****
        execute DBMS_STATS.CREATE_STAT_TABLE ('SYS','OLTP_stats','STATS_TBS');
        execute DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'INTERVAL', interval => 2, stattab => 'OLTP_stats', statid => 'OLTP');
        execute DBMS_STATS.CREATE_STAT_TABLE('SYS','OLAP_stats','STATS_TBS');
        execute DBMS_STATS.GATHER_SYSTEM_STATS (gathering_mode => 'INTERVAL', interval => 2, stattab => 'OLAP_stats', statid => 'OLAP');
        execute DBMS_STATS.CREATE_STAT_TABLE ('SYS','INTERVAL_STATS','USERS');
        execute DBMS_STATS.GATHER_SYSTEM_STATS (gathering_mode => 'START',stattab => 'INTERVAL_STATS', statid => 'INTERVAL_STATS');
        Execute the activity for which the statistics are required.
        execute DBMS_STATS.GATHER_SYSTEM_STATS (gathering_mode => 'STOP',stattab => 'INTERVAL_STATS', statid => 'INTERVAL_STATS');
        column statid format a7
        column c1 format a13
        column c2 format a16
        column c3 format a16
        select statid, c1, c2, c3 from oltp_stats;

        ****Backup user & system stats****
        exec dbms_stats.create_stat_table(user,'STAT_TIMESTAMP');
        exec dbms_stats.export_table_stats(user,'<TABLE_NAME>',NULL,'STAT_TIMESTAMP');
        exec dbms_stats.export_schema_stats(user,'STAT_TIMESTAMP');
        exec dbms_stats.import_table_stats(NULL,'<TABLE_NAME>', NULL,'STAT_TIMESTAMP');
        exec dbms_stats.import_schema_stats(user,'STAT_TIMESTAMP');
        select sname,pname,pval1 from sys.aux_stats$ where pval1 is not null
        exec dbms_stats.create_stat_table(user,'STAT_TIMESTAMP');
        exec dbms_stats.export_system_stats('STAT_TIMESTAMP');
        exec dbms_stats.import_system_stats('STAT_TIMESTAMP');

        ****Restore table,database & schema stats****
        execute DBMS_STATS.RESTORE_TABLE_STATS ('owner','table',date);
        execute DBMS_STATS.RESTORE_DATABASE_STATS(date);
        execute DBMS_STATS.RESTORE_DICTIONARY_STATS(date);
        execute DBMS_STATS.RESTORE_FIXED_OBJECTS_STATS(date);
        execute DBMS_STATS.RESTORE_SCHEMA_STATS('owner',date);
        execute DBMS_STATS.RESTORE_SYSTEM_STATS(date);
        execute dbms_stats.restore_table_stats ('SCOTT','EMP','25-JUL-07 12.01.20.766591 PM +02:00');
AODU>
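
A small sketch of rolling table statistics back within the retention window: list the historic versions first, then restore to a point in time (SCOTT.EMP and the one-day offset are illustrative):

        select stats_update_time
        from   dba_tab_stats_history
        where  owner = 'SCOTT' and table_name = 'EMP'
        order  by stats_update_time;

        -- Restore the statistics as of 24 hours ago (must fall within the retention period).
        exec dbms_stats.restore_table_stats('SCOTT', 'EMP', systimestamp - interval '1' day);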


AODU> optim sss   ---how to set statistics, e.g. system statistics or a table's block count, row count, and minimum/maximum column values

        ****Set System Stats****
        execute DBMS_STATS.DELETE_SYSTEM_STATS;
        execute DBMS_STATS.IMPORT_SYSTEM_STATS(stattab => 'OLTP_stats', statid => 'OLTP', statown => 'SYS');
        execute DBMS_STATS.SET_SYSTEM_STATS (pname => 'cpuspeed', pvalue => 400);
        execute DBMS_STATS.SET_SYSTEM_STATS (pname => 'sreadtim', pvalue => 100);
        execute DBMS_STATS.SET_SYSTEM_STATS (pname => 'mbrc', pvalue => 9);
        execute DBMS_STATS.SET_SYSTEM_STATS (pname => 'mreadtim', pvalue => 100);

        ****Stats Retention****
        execute DBMS_STATS.ALTER_STATS_HISTORY_RETENTION (xx);
        select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;
        select DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY from dual;
        select TABLE_NAME, STATS_UPDATE_TIME from dba_tab_stats_history

        SET SERVEROUTPUT ON
        DECLARE
          STATUS VARCHAR2(20);
          DSTART DATE;
          DSTOP DATE;
          PVALUE NUMBER;
          PNAME VARCHAR2(30);
        BEGIN
          PNAME := 'cpuspeed';
          DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue, stattab => 'mystats', statid => 'DAY', statown => 'SYSTEM');
          DBMS_OUTPUT.PUT_LINE('status                      : '||status);
          DBMS_OUTPUT.PUT_LINE('cpu in mhz                  : '||pvalue);
          PNAME := 'sreadtim';
          DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue, stattab => 'mystats', statid => 'DAY', statown => 'SYSTEM');
          DBMS_OUTPUT.PUT_LINE('single block readtime in ms : '||pvalue);
          PNAME := 'mreadtim';
          DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue, stattab => 'mystats', statid => 'DAY', statown => 'SYSTEM');
          DBMS_OUTPUT.PUT_LINE('multiblock readtime in ms   : '||pvalue);
          PNAME := 'mbrc';
          DBMS_STATS.GET_SYSTEM_STATS(status, dstart, dstop, pname, pvalue, stattab => 'mystats', statid => 'DAY', statown => 'SYSTEM');
          DBMS_OUTPUT.PUT_LINE('average multiblock readcount: '||pvalue);
        END;
        /
        DECLARE
           I NUMBER;
        BEGIN
           DBMS_JOB.SUBMIT (I, 'DBMS_STATS.IMPORT_SYSTEM_STATS(stattab => ''mystats'', statown => ''SYSTEM'',
        statid => ''DAY'');', trunc(sysdate) + 1 + 7/24, 'sysdate+ 1');
        END;
        /
AODU>
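
Besides system statistics, table-level statistics can also be set directly, which is handy for seeding a freshly created or empty table; all values below are made up for illustration:

        exec dbms_stats.set_table_stats(ownname=>'SCOTT', tabname=>'EMP', numrows=>100000, numblks=>2000, avgrlen=>120);

        select num_rows, blocks, avg_row_len
        from   dba_tables
        where  owner = 'SCOTT' and table_name = 'EMP';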


AODU> optim st   ---how to migrate statistics to another database

        ****Stats transfer****
        SQL> connect user/pwd
        SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname=> 'user',stattab=>'STATS');
        EXEC DBMS_STATS.EXPORT_DATABASE_STATS(stattab => 'STATS');
        EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'user', stattab => 'STATS');
        EXEC DBMS_STATS.EXPORT_TABLE_STATS(<'username' or NULL>,'TAB1',NULL,'STATS');
        exp system/manager tables=STATS file=stats.dmp owner=<user> log=stats.log
        imp system/manager tables=STATS file=stats.dmp full=Y
        EXEC DBMS_STATS.IMPORT_DATABASE_STATS(stattab => 'STATS');
        EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => user, stattab => 'STATS');
        EXEC DBMS_STATS.IMPORT_TABLE_STATS(ownname => user, tabname => 'TAB1', stattab => 'STATS');
        update STATS set c5 = '<target schema>'
        where c5 = '<source schema>'
        and statid = '<statid used while exporting these stats>';
AODU>


