Oracle FAQ

While cleaning up my hard drive I came across a FAQ I put together years ago:

 

1. Q: Is Oracle complicated to install? Can I install it the same way I install other software, by running a setup program?

A: On Windows, Oracle can indeed be installed quite smoothly by clicking through the setup program, but the configuration that follows installation requires some understanding of how Oracle works if the database is to run sensibly and efficiently. Installing Oracle on Unix is more involved: although Oracle provides a Java-based installer that keeps the interface consistent across platforms, completing a Unix installation also requires knowing how to install the required operating system patches, adjust kernel parameters, set up the X display, and so on, which is difficult to do alone without prior installation experience.

2. Q: How do I determine which Oracle version is currently in use?

A: When several Oracle versions are installed on the same system, the current version can be determined as follows:
  1. Compare "/etc/oratab" with the $ORACLE_HOME environment variable;
  2. Connect to the database and run SELECT * FROM V$INSTANCE;
  3. Run SELECT * FROM V$VERSION;
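For example, from SQL*Plus (the banner text and version number will of course depend on your installation):

SQL> SELECT banner FROM v$version;
SQL> SELECT instance_name, version, host_name FROM v$instance;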

3. Q: What are Oracle's hidden parameters, and what are they for?

A: Oracle has a large number of parameters whose names begin with "_". They should only be set under the guidance of an Oracle specialist; using them without fully understanding their meaning and their risks can cause unpredictable side effects. A well-designed system should not need them, so do not treat these hidden settings as magic.
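For reference only, and connected as SYS, the hidden parameters and their current values can be listed with a query against the X$ views such as the one below; look, but do not change anything without guidance:

SELECT i.ksppinm  AS parameter,
       v.ksppstvl AS value,
       i.ksppdesc AS description
FROM   x$ksppi i, x$ksppcv v
WHERE  i.indx = v.indx
AND    i.ksppinm LIKE '\_%' ESCAPE '\';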

4. Q: What data types does Oracle provide?

A: The main built-in types are listed below (a small example table follows the list):
  CHAR Fixed-length character string, maximum 2000 bytes
  VARCHAR2 Variable-length character string, maximum 4000 bytes (maximum indexable length 749)
  NCHAR Fixed-length string in the national character set, maximum 2000 bytes
  NVARCHAR2 Variable-length string in the national character set, maximum 4000 bytes
  DATE Date and time, stored from century down to seconds
  LONG Long character data, maximum 2 GB (2^31 - 1 bytes), enough for a very large document
  RAW Fixed-length binary data, maximum 2000 bytes; can hold small multimedia content such as images and sound
  LONG RAW Variable-length binary data, maximum 2 GB
  BLOB Binary large object, maximum 4 GB
  CLOB Character large object, maximum 4 GB
  NCLOB National character set large object, maximum 4 GB
  BFILE Binary data stored in a file outside the database, maximum 4 GB
  ROWID Physical address of a row in a table, 10 bytes
  UROWID Universal rowid (covers index-organized tables as well), maximum 4000 bytes
  NUMBER(P,S) Numeric, P = precision (total number of digits), S = scale (digits after the decimal point)
  DECIMAL(P,S) Numeric, equivalent to NUMBER(P,S)
  INTEGER Integer (ANSI type, mapped to NUMBER(38))
  FLOAT Floating point, FLOAT(126), roughly double precision
  REAL Floating point, FLOAT(63), roughly single precision
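A small, purely illustrative table using several of these types (all names are made up):

CREATE TABLE type_demo (
  id       NUMBER(10),
  price    NUMBER(10,2),
  code     CHAR(8),
  name     VARCHAR2(100),
  created  DATE,
  remarks  CLOB,
  photo    BLOB,
  checksum RAW(16)
);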

5. Q: How can I check whether any users are still using default passwords?
  A: Accounts left with their default passwords are a security risk. The following query lists users whose password hash matches a well-known default value:
  select username "User(s) with Default Password!"
  from dba_users
  where password in
  ('E066D214D5421CCC', -- dbsnmp
  '24ABAB8B06281B4C', -- ctxsys
  '72979A94BAD2AF80', -- mdsys
  'C252E8FA117AF049', -- odm
  'A7A32CD03D3CE8D5', -- odm_mtr
  '88A2B2C183431F00', -- ordplugins
  '7EFA02EC7EA6B86F', -- ordsys
  '4A3BA55E08595C81', -- outln
  'F894844C34402B67', -- scott
  '3F9FBD883D787341', -- wk_proxy
  '79DF7A1BD138CF11', -- wk_sys
  '7C9BA362F8314299', -- wmsys
  '88D8364765FCE6AF', -- xdb
  'F9DA8977092B7B81', -- tracesvr
  '9300C0977D7DC75E', -- oas_public
  'A97282CE3D94E29E', -- websys
  'AC9700FD3F1410EB', -- lbacsys
  'E7B5D92911C831E1', -- rman
  'AC98877DE1297365', -- perfstat
  '66F4EF5650C20355', -- exfsys
  '84B8CBCA4D477FA3', -- si_informtn_schema
  'D4C5016086B2DC6A', -- sys
  'D4DF7931AB130E37') -- system
  /
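From Oracle 11g onward there is a simpler check: the data dictionary view DBA_USERS_WITH_DEFPWD lists accounts whose password is still the default (you need the DBA role or SELECT privilege on the view):

  select username from dba_users_with_defpwd order by username;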

6. Q: Is the ZHS16GBK character set a superset of ZHS16CGB231280?

A: A reminder: you have to verify for yourself whether something you found through Google is actually correct. These two character sets are not compatible. Oracle's official site publishes a detailed character set compatibility list, and we have not found any two of the common Chinese character sets that are fully compatible with each other.
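To see which character sets your own database actually uses, query NLS_DATABASE_PARAMETERS:

SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');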

7. Q: Which is better, Oracle or DB2? Which is more efficient?

A: Oracle and DB2 are both excellent products; the load either of them can take is far beyond what our nerves can take.

8. Q: Is using exp-drop-imp a workable way to deal with table fragmentation?

A: It is not recommended. Oracle data should be managed by Oracle itself; an exp dump is just an operating system file, and once that file is damaged Oracle can do nothing to help.

9. Q: How can I insert a new column into an Oracle table at a specified position?

A: Oracle does not support this; you can achieve exactly the same effect with a view, as sketched below.
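A hedged sketch of the view approach, using a hypothetical EMP table to which a NICKNAME column is added; physically the new column is appended at the end of the table, and the view simply presents the columns in the order you want:

ALTER TABLE emp ADD (nickname VARCHAR2(30));

CREATE OR REPLACE VIEW emp_v AS
SELECT empno, nickname, ename, job, sal
FROM   emp;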

 

Backup & Recovery

  1. Q: What are the common ways of backing up an Oracle database?

A: Logical backups (exp) and physical backups (RMAN, manual copies of the datafiles, and so on).

  2. Q: What is the difference between a logical backup taken with exp and an RMAN backup?

A: An exp logical backup can only restore the database to the point in time at which the export was taken; with RMAN the database can easily be recovered to any point in time.


 

Oracle Tuning:

 

Oracle Database Performance Tuning FAQ

Why and when should one tune?

One of the biggest responsibilities of a DBA is to ensure that the Oracle database is tuned properly. The Oracle RDBMS is highly tunable and allows the database to be monitored and adjusted to increase its performance.

One should do performance tuning for the following reasons:

  • The speed of computing might be wasting valuable human time (users waiting for a response);
  • Enable your system to keep up with the speed at which business is conducted; and
  • Optimize hardware usage to save money (companies spend millions on hardware).

Although this FAQ is not overly concerned with hardware issues, one needs to remember that you cannot tune a Buick into a Ferrari.

Where should the tuning effort be directed?

Consider the following areas for tuning. The order in which steps are listed needs to be maintained to prevent tuning side effects. For example, it is no good increasing the buffer cache if you can reduce I/O by rewriting a SQL statement.

  • Database Design (if it's not too late):

Poor system performance usually results from a poor database design. One should generally normalize to the 3NF. Selective denormalization can provide valuable performance improvements. When designing, always keep the "data access path" in mind. Also look at proper data partitioning, data replication, aggregation tables for decision support systems, etc.

  • Application Tuning:

Experience shows that approximately 80% of all Oracle system performance problems are resolved by writing optimal SQL. Also consider proper scheduling of batch tasks after peak working hours.

  • Memory Tuning:

Properly size your database buffers (shared_pool, buffer cache, log buffer, etc) by looking at your wait events, buffer hit ratios, system swapping and paging, etc. You may also want to pin large objects into memory to prevent frequent reloads.

  • Disk I/O Tuning:

Database files need to be properly sized and placed to provide maximum disk subsystem throughput. Also look for frequent disk sorts, full table scans, missing indexes, row chaining, data fragmentation, etc.

  • Eliminate Database Contention:

Study database locks, latches and wait events carefully and eliminate where possible.

  • Tune the Operating System:

Monitor and tune operating system CPU, I/O and memory utilization. For more information, read the related Oracle FAQ dealing with your specific operating system.
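As a starting point for the memory and contention areas above, the cumulative wait interface shows where the instance is spending its time. A minimal sketch (the idle-event filter is deliberately crude; extend it for your environment):

SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event NOT IN ('SQL*Net message from client', 'rdbms ipc message',
                     'pmon timer', 'smon timer')
ORDER  BY time_waited DESC;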

What tools/utilities does Oracle provide to assist with performance tuning?

Oracle provides the following tools and utilities to assist with performance monitoring and tuning:

  • The SQL trace facility and TKPROF;
  • EXPLAIN PLAN and the SQL*Plus AUTOTRACE command;
  • Statspack (and the older UTLBSTAT/UTLESTAT scripts);
  • The V$ dynamic performance views;
  • Oracle Enterprise Manager (and, from Oracle 10g, AWR and ADDM).

When is cost based optimization triggered?

It is important to have statistics on all tables for the CBO (Cost Based Optimizer) to work correctly. If one table involved in a statement does not have statistics, and optimizer dynamic sampling isn't performed, Oracle has to revert to rule-based optimization for that statement. So you really want all tables to have statistics right away; it does not help much to analyze only the larger tables.

Generally, the CBO can change the execution plan when you:

  • Change statistics of objects by doing an ANALYZE;
  • Change some initialization parameters (for example: hash_join_enabled, sort_area_size, db_file_multiblock_read_count).
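Statistics are normally gathered with the DBMS_STATS package (the older ANALYZE command also works); the table name here is hypothetical:

EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'EMP', cascade => TRUE);

-- or, for a whole schema:
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => USER);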

How can one optimize %XYZ% queries?

It is possible to improve %XYZ% (wildcard search) queries by forcing the optimizer to scan all the entries from the index instead of the table. This can be done by specifying hints.

If the index is physically smaller than the table (which is usually the case) it will take less time to scan the entire index than to scan the entire table.
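A hedged sketch, assuming a table EMP with an index EMP_ENAME_IDX on ENAME (both names invented): the INDEX_FFS hint asks for a fast full scan of the index, which only helps if the query can be answered from the index alone and the indexed column is NOT NULL (rows whose index columns are all NULL are not stored in the index):

SELECT /*+ INDEX_FFS(e emp_ename_idx) */ ename
FROM   emp e
WHERE  ename LIKE '%XYZ%';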

Where can one find I/O statistics per table?

The STATSPACK and UTLESTAT reports show I/O per tablespace. However, they do not show which tables in the tablespace have the most I/O operations.

The $ORACLE_HOME/rdbms/admin/catio.sql script creates a sample_io procedure and table to gather the required information. After executing the procedure, one can do a simple SELECT * FROM io_per_object; to extract the required information.

For more details, look at the header comments in the catio.sql script.

My query was fine last week and now it is slow. Why?

The likely cause is that the execution plan has changed. Generate a current explain plan of the offending query and compare it to a previous one that was taken when the query was performing well. Usually the previous plan is not available.

Some factors that can cause a plan to change are:

  • Which tables are currently analyzed? Were they previously analyzed? (i.e. was the query using the RBO before and the CBO now?)
  • Has OPTIMIZER_MODE been changed in INIT<SID>.ORA?
  • Has the DEGREE of parallelism been defined/changed on any table?
  • Have the tables been re-analyzed? Were the tables analyzed using estimate or compute? If estimate, what percentage was used?
  • Have the statistics changed?
  • Has the SPFILE/ INIT<SID>.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT been changed?
  • Has the INIT<SID>.ORA parameter SORT_AREA_SIZE been changed?
  • Have any other INIT<SID>.ORA parameters been changed?

What do you think the plan should be? Run the query with hints to see if this produces the required performance.

It can also happen because of a very high high-water mark, typically when a table used to be big but now contains only a couple of records: Oracle still has to scan all the blocks below the high-water mark to see if they contain data.
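If the high-water mark turns out to be the problem, it can be lowered. A sketch, assuming a hypothetical EMP table, Oracle 10g or later for the SHRINK syntax, and an ASSM tablespace:

-- 10g+: shrink the segment in place
ALTER TABLE emp ENABLE ROW MOVEMENT;
ALTER TABLE emp SHRINK SPACE;

-- any release: rebuild the segment (dependent indexes must be rebuilt afterwards)
ALTER TABLE emp MOVE;
ALTER INDEX emp_pk REBUILD;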

Does Oracle use my index or not?

One can use the index monitoring feature to check whether indexes are used by an application. When the MONITORING USAGE property is set for an index, one can query the v$object_usage view to see if the index is being used. Here is an example:

SQL> CREATE TABLE t1 (c1 NUMBER);

Table created.

 

SQL> CREATE INDEX t1_idx ON t1(c1);

Index created.

 

SQL> ALTER INDEX t1_idx MONITORING USAGE;

Index altered.

 

SQL>

SQL> SELECT table_name, index_name, monitoring, used FROM v$object_usage;

TABLE_NAME INDEX_NAME MON USE

------------------------------ ------------------------------ --- ---

T1 T1_IDX YES NO

 

SQL> SELECT * FROM t1 WHERE c1 = 1;

no rows selected

 

SQL> SELECT table_name, index_name, monitoring, used FROM v$object_usage;

TABLE_NAME INDEX_NAME MON USE

------------------------------ ------------------------------ --- ---

T1 T1_IDX YES YES

To reset the values in the v$object_usage view, disable index monitoring and re-enable it:

ALTER INDEX indexname NOMONITORING USAGE;

ALTER INDEX indexname MONITORING USAGE;

Why is Oracle not using the damn index?

This problem normally only arises when the query plan is being generated by the Cost Based Optimizer (CBO). The usual cause is because the CBO calculates that executing a Full Table Scan would be faster than accessing the table via the index. Fundamental things that can be checked are:

  • USER_TAB_COLUMNS.NUM_DISTINCT - This column defines the number of distinct values the column holds.
  • USER_TABLES.NUM_ROWS - If NUM_DISTINCT = NUM_ROWS then using an index would be preferable to doing a FULL TABLE SCAN. As NUM_DISTINCT decreases, the cost of using an index increases, thereby making the index less desirable.
  • USER_INDEXES.CLUSTERING_FACTOR - This defines how ordered the rows are in the index. If CLUSTERING_FACTOR approaches the number of blocks in the table, the rows are ordered. If it approaches the number of rows in the table, the rows are randomly ordered. In such a case, it is unlikely that index entries in the same leaf block will point to rows in the same data blocks.
  • Decrease the INIT<SID>.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT - A higher value will make the cost of a FULL TABLE SCAN cheaper.

Remember that you MUST supply the leading column of an index for the index to be used (unless a FAST FULL SCAN or INDEX SKIP SCAN is used).

There are many other factors that affect the cost, but sometimes the above can help to show why an index is not being used by the CBO. If from checking the above you still feel that the query should be using an index, try specifying an index hint. Obtain an explain plan of the query either using TKPROF with TIMED_STATISTICS, so that one can see the CPU utilization, or with AUTOTRACE to see the statistics. Compare this to the explain plan when not using an index.
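For example (table, column and index names are hypothetical), AUTOTRACE makes it easy to compare the plan and statistics with and without an index hint:

SQL> SET AUTOTRACE TRACEONLY EXPLAIN STATISTICS
SQL> SELECT * FROM emp WHERE deptno = 10;
SQL> SELECT /*+ INDEX(emp emp_deptno_idx) */ * FROM emp WHERE deptno = 10;
SQL> SET AUTOTRACE OFF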

When should one rebuild an index?

You can run the ANALYZE INDEX <index> VALIDATE STRUCTURE command on the affected indexes - each invocation of this command creates a single row in the INDEX_STATS view. This row is overwritten by the next ANALYZE INDEX command, so copy the contents of the view into a local table after each ANALYZE. The 'badness' of the index can then be judged by the ratio of 'DEL_LF_ROWS' to 'LF_ROWS'.

For example, you may decide that an index should be rebuilt if more than 20% of its rows are deleted:

select del_lf_rows * 100 / decode(lf_rows, 0, 1, lf_rows) as pct_deleted

from index_stats

where name = 'INDEX_NAME';
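A worked example, assuming an index called EMP_ENAME_IDX and a 20% threshold:

ANALYZE INDEX emp_ename_idx VALIDATE STRUCTURE;

SELECT name, lf_rows, del_lf_rows,
       del_lf_rows * 100 / DECODE(lf_rows, 0, 1, lf_rows) AS pct_deleted
FROM   index_stats;

-- if pct_deleted exceeds your threshold:
ALTER INDEX emp_ename_idx REBUILD;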

How does one tune Oracle Wait event XYZ?

Here are some of the wait events from V$SESSION_WAIT and V$SYSTEM_EVENT views:

  • db file sequential read: Tune SQL to do less I/O. Make sure all objects are analyzed. Redistribute I/O across disks.
  • buffer busy waits: Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i); analyze contention from SYS.V$BH
  • log buffer space: Increase LOG_BUFFER parameter or move log files to faster disks
  • log file sync: If this event is in the top 5, you are committing too often (talk to your developers)
  • log file parallel write: deals with flushing out the redo log buffer to disk. Your disks may be too slow or you have an I/O bottleneck.
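To see what sessions are waiting on right now, as opposed to the cumulative figures in V$SYSTEM_EVENT, query V$SESSION_WAIT (the idle-event filter below is deliberately minimal):

SELECT sid, event, p1, p2, p3, wait_time, seconds_in_wait
FROM   v$session_wait
WHERE  event NOT LIKE 'SQL*Net%'
AND    event NOT LIKE '%timer%'
ORDER  BY seconds_in_wait DESC;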

Oracle's Database Performance Tuning Guide has useful sections on instance tuning and on interpreting wait event statistics.

What is the difference between DBFile Sequential and Scattered Reads?

Both "db file sequential read" and "db file scattered read" events signify time waited for I/O read requests to complete. Time is reported in 100's of a second for Oracle 8i releases and below, and 1000's of a second for Oracle 9i and above. Most people confuse these events with each other as they think of how data is read from disk. Instead they should think of how data is read into the SGA buffer cache.

db file sequential read:

A sequential read operation reads data into contiguous memory (usually a single-block read with p3=1, but can be multiple blocks). Single block I/Os are usually the result of using indexes. This event is also used for rebuilding the controlfile and reading datafile headers (P2=1). In general, this event is indicative of disk contention on index reads.

db file scattered read:

Similar to db file sequential reads, except that the session is reading multiple data blocks and scatters them into different, discontiguous buffers in the SGA. This statistic NORMALLY indicates disk contention on full table scans. Rarely, data from a full table scan fits into a contiguous buffer area; those waits would then show up as sequential reads instead of scattered reads.

The following query shows average wait time for sequential versus scattered reads:

prompt "AVERAGE WAIT TIME FOR READ REQUESTS"

select a.average_wait "SEQ READ", b.average_wait "SCAT READ"

from sys.v_$system_event a, sys.v_$system_event b

where a.event = 'db file sequential read'

and b.event = 'db file scattered read';

How does one tune the Redo Log Buffer?

The size of the Redo log buffer is determined by the LOG_BUFFER parameter in your SPFILE/INIT.ORA file. The default setting is normally 512 KB or (128 KB * CPU_COUNT), whichever is greater. This is a static parameter and its size cannot be modified after instance startup.

SQL> show parameters log_buffer

NAME TYPE value

------------------------------------ ----------- ------------------------------

log_buffer integer 262144

When a transaction is committed, the information in the redo log buffer is written to a redo log file. In addition, the following conditions will trigger LGWR to write the contents of the log buffer to disk:

  • When the log buffer is one-third full or holds 1 MB of redo, whichever comes first; or
  • Every 3 seconds; or
  • When a DBWn process writes modified buffers to disk (checkpoint).

Larger LOG_BUFFER values reduce log file I/O, but may increase the time OLTP users have to wait for write operations to complete. In general, values from the default up to about 1-3 MB are optimal. However, you may want to make it bigger to accommodate bulk data loading, or a system with fast CPUs and slow disks. Nevertheless, if you set this parameter to a value beyond 10 MB, you should think twice about what you are doing.

SQL> SELECT name, value

2 FROM SYS.v_$sysstat

3 WHERE NAME in ('redo buffer allocation retries',

4 'redo log space wait time');

NAME value

---------------------------------------------------------------- ----------

redo buffer allocation retries 3

redo log space wait time 0

Statistic "REDO BUFFER ALLOCATION RETRIES" shows the number of times a user process waited for space in the redo log buffer. This value is cumulative, so monitor it over a period of time while your application is running. If this value is continuously increasing, consider increasing your LOG_BUFFER (but only if you do not see checkpointing and archiving problems).

"REDO LOG SPACE WAIT TIME" shows cumulative time (in 10s of milliseconds) waited by all processes waiting for space in the log buffer. If this value is low, your log buffer size is most likely adequate.

 

 

 

RAC FAQ

What is RAC and how is it different from non RAC databases?

RAC stands for Real Application Clusters. It allows multiple nodes in a clustered system to mount and open a single database that resides on shared disk storage. Should a single system fail (node), the database service will still be available on the remaining nodes.

A non-RAC database is only available on a single system. If that system fails, the database service will be down (single point of failure).

Can any application be deployed on RAC?

Most applications can be deployed on RAC without any modifications and still scale linearly (well, almost).

However, applications with 'hot' rows (the same row being accessed by processes on different nodes) will not work well. This is because data blocks will constantly be moved from one Oracle Instance to another. In such cases the application needs to be partitioned based on function or data to eliminate contention.

Do you need special hardware to run RAC?

RAC requires the following hardware components:

  • A dedicated network interconnect - might be as simple as a fast network connection between nodes; and
  • A shared disk subsystem.

Example systems that can be used with RAC:

  • Windows Clusters
  • Linux Clusters
  • Unix Clusters like SUN PDB (Parallel DB).
  • IBM z/OS in SYSPLEX

How many OCR and voting disks should one have?

For redundancy, one should have at least two OCR disks and three voting disks (raw disk partitions). These disk partitions should be spread across different physical disks.

How does one convert a single instance database to RAC?

Oracle 10gR2 introduces a utility called rconfig (located in $ORACLE_HOME/bin) that will convert a single instance database to a RAC database.

$ cp $ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC.xml racconv.xml

$ vi racconv.xml

$ rconfig racconv.xml

One can also use DBCA or Enterprise Manager to convert the database to RAC mode.

For prior releases, follow these steps:

  • Shut Down your Database:

SQL> CONNECT SYS AS SYSDBA

SQL> SHUTDOWN NORMAL

  • Enable RAC - On Unix this is done by relinking the Oracle software.
  • Make the software available on all computer systems that will run RAC. This can be done by copying the software to all systems or to a shared clustered file system.
  • Each instance requires its own set of Redo Log Files (called a thread). Create additional log files:

SQL> CONNECT SYS AS SYSDBA

SQL> STARTUP EXCLUSIVE

 

SQL> ALTER DATABASE ADD LOGFILE THREAD 2

  2  GROUP 4 ('RAW_FILE1') SIZE 500K,

  3  GROUP 5 ('RAW_FILE2') SIZE 500K,

  4  GROUP 6 ('RAW_FILE3') SIZE 500K;

 

SQL> ALTER DATABASE ENABLE PUBLIC THREAD 2;

  • Each instance requires its own set of undo segments (rollback segments). With automatic undo management, create an undo tablespace for each new instance and set the following parameters for that instance:

UNDO_MANAGEMENT = auto

UNDO_TABLESPACE = undots2

  • Edit the SPFILE/INIT.ORA files and number the instances 1, 2,...:

CLUSTER_DATABASE = TRUE (PARALLEL_SERVER = TRUE prior to Oracle9i).

INSTANCE_NUMBER = 1

THREAD = 1

UNDO_TABLESPACE = undots1 (or ROLLBACK_SEGMENTS if you use UNDO_MANAGEMENT=manual)

# Include %T for the thread in the LOG_ARCHIVE_FORMAT string.

# Set LM_PROCS to the number of nodes * PROCESSES

# etc....

  • Create the dictionary views needed for RAC by running catclust.sql (previously called catparr.sql):

SQL> START ?/rdbms/admin/catclust.sql

  • On all the computer systems, start up the instances:

SQL> CONNECT / as SYSDBA

SQL> STARTUP;

How does one stop and start RAC instances?

There is no difference between the way you start a normal database and a RAC database, except that a RAC database needs to be started on multiple nodes. The CLUSTER_DATABASE=TRUE (PARALLEL_SERVER=TRUE prior to 9i) parameter needs to be set before a database can be started in cluster mode.

In Oracle 10g one can use the srvctl utility to start instances and listeners across the cluster from a single node. Here are some examples:

$ srvctl status database -d RACDB

$ srvctl start database -d RACDB

$ srvctl start instance -d RACDB -i RACDB1

$ srvctl start instance -d RACDB -i RACDB2

$ srvctl stop database -d RACDB

$ srvctl start asm -n node2

Before Oracle 8.0, use the following command sequence from each node (using the old server manager):

SVRMGR> connect INTERNAL

SVRMGR> set retries 5

SVRMGR> startup parallel retry .. or SVRMGR> startup shared

You can also use the SET INSTANCE instanceN command to switch between instances (if defined in TNSNAMES.ORA).

Can I test if a database is running in RAC mode?

Use the DBMS_UTILITY package to determine if a database is running in RAC mode or not. Example:

BEGIN

IF dbms_utility.is_cluster_database THEN

dbms_output.put_line('Running in SHARED/RAC mode.');

ELSE

dbms_output.put_line('Running in EXCLUSIVE mode.');

END IF;

END;

/

For Oracle 8i and prior releases:

BEGIN

IF dbms_utility.is_parallel_server THEN

dbms_output.put_line('Running in SHARED/PARALLEL mode.');

ELSE

dbms_output.put_line('Running in EXCLUSIVE mode.');

END IF;

END;

/

Another method is to look at the database parameters. For example, from SQL*Plus:

SQL> show parameter CLUSTER_DATABASE

If the value of CLUSTER_DATABASE is FALSE, the database is not running in RAC mode.

How can I keep track of active instances?

You can keep track of active RAC instances by executing one of the following queries:

SELECT * FROM SYS.V_$ACTIVE_INSTANCES;

SELECT * FROM SYS.V_$THREAD;

To list the active instances from PL/SQL, use DBMS_UTILITY.ACTIVE_INSTANCES().
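A minimal PL/SQL sketch of that call (run with SERVEROUTPUT enabled):

SET SERVEROUTPUT ON
DECLARE
  l_instances DBMS_UTILITY.INSTANCE_TABLE;
  l_count     NUMBER;
  i           BINARY_INTEGER;
BEGIN
  DBMS_UTILITY.ACTIVE_INSTANCES(l_instances, l_count);
  DBMS_OUTPUT.PUT_LINE('Active instances: ' || l_count);
  i := l_instances.FIRST;
  WHILE i IS NOT NULL LOOP
    DBMS_OUTPUT.PUT_LINE(l_instances(i).inst_number || ': ' || l_instances(i).inst_name);
    i := l_instances.NEXT(i);
  END LOOP;
END;
/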

Can one see how connections are distributed across the nodes?

Select from gv$session. Some examples:

SELECT inst_id, count(*) "DB Sessions" FROM gv$session

WHERE type = 'USER' GROUP BY inst_id;

With login time (hour):

SELECT inst_id, TO_CHAR(logon_time, 'DD-MON-YYYY HH24') "Hour when connected", count(*) "DB Sessions"

FROM gv$session

WHERE type = 'USER'

GROUP BY inst_id, TO_CHAR(logon_time, 'DD-MON-YYYY HH24')

ORDER BY inst_id, TO_CHAR(logon_time, 'DD-MON-YYYY HH24');

What is pinging and why is it so bad?

Starting with Oracle 9i, RAC can transfer blocks from one instance to another across the interconnect (cache fusion). This method is much faster than the old "pinging" method, where one instance had to write the block to disk before another instance could read it.

Oracle 8i and below:

Pinging is the process whereby one instance requests another to write a set of blocks from its SGA to disk so that it can obtain them in exclusive mode. This method of moving data blocks from one instance's SGA to another is extremely slow. The challenge of tuning RAC/OPS is to minimize pinging activity.
