2009/03/31

Tuning 'log file sync' Event

What is a 'log file sync' Wait

At commit time, a process creates a redo record (containing commit opcodes) and copies it into the log buffer. It then posts LGWR to write the contents of the log buffer. LGWR writes from the log buffer to the log file and posts the user process back, completing the commit. A commit is considered successful only after the LGWR write succeeds. In a nutshell, after posting LGWR to write, user or background processes wait for LGWR to post back, with a 1-second timeout. The user process charges this wait time to the 'log file sync' event.
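A quick way to see how much time the instance has spent on this event (a sketch; counters in v$system_event are cumulative since instance startup):

```sql
SELECT event, total_waits, time_waited_micro
FROM   v$system_event
WHERE  event = 'log file sync';
```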

The Root Causes of 'log file sync' Waits

1. LGWR is unable to complete writes fast enough for one of the following reasons:
a. Disk I/O performance to the log files is not good enough.
b. LGWR is starved for CPU.
c. Due to memory starvation, LGWR can be paged out.
d. File system or Unix buffer cache limitations slow down LGWR writes.
2. LGWR is unable to post the waiting processes fast enough, due to excessive commits.
3. IMU undo/redo threads. With private strands, a process can generate a few megabytes of redo before committing. LGWR must write all of the redo generated so far, so processes wait on 'log file sync' even when the redo they themselves generated is small.
4. LGWR suffers from other database contention, such as enqueue waits or latch contention.
5. Various bugs.

Identify the Root Cause

1. First make sure the 'log file sync' event is indeed a major wait event by comparing its wait time with CPU time.
2. Identify and break down what LGWR itself is waiting on by querying LGWR's wait events.
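For step 1, a rough comparison can be made from v$system_event and v$sys_time_model (a sketch; v$sys_time_model exists in 10g and later, and both counters are cumulative since instance startup):

```sql
SELECT 'log file sync' AS metric, time_waited_micro AS microseconds
FROM   v$system_event
WHERE  event = 'log file sync'
UNION ALL
SELECT stat_name, value
FROM   v$sys_time_model
WHERE  stat_name = 'DB CPU';
```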

Find the SID for the LGWR process.

SQL> SELECT sid 
SQL> FROM v$session
SQL> WHERE type='BACKGROUND' and program like '%LGWR%';
Find wait events for LGWR.

SQL> SELECT event, time_waited, time_waited_micro
SQL> FROM v$session_event
SQL> WHERE sid = <sid>
SQL> ORDER BY time_waited;
When LGWR is waiting for posts from user sessions, that wait time is accounted to the 'rdbms ipc message' event. Normally, this event can be ignored.
It is worth noting that v$session_event holds cumulative counters since the session started (for LGWR, effectively since instance startup), and hence can be misleading; take two snapshots and diff them to see current activity.
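The two steps above can be combined into a single query (a sketch):

```sql
SELECT e.event, e.time_waited, e.time_waited_micro
FROM   v$session_event e
JOIN   v$session s ON s.sid = e.sid
WHERE  s.type = 'BACKGROUND'
AND    s.program LIKE '%LGWR%'
ORDER  BY e.time_waited;
```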

2009/03/25

Get Table/View/Index Definition and Indexed Columns

1. Get View Definition

SQL> SELECT text
SQL> FROM dba_views
SQL> WHERE owner = '<OWNER>' AND view_name = '<VIEW_NAME>';
2. Get Table Definition

SQL> SET PAGESIZE 0
SQL> SET LONG 100000
SQL> EXEC DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);
SQL> SELECT DBMS_METADATA.GET_DDL('TABLE', '<TABLE_NAME>', '<OWNER>') FROM dual;
3. Get Index Definition

SQL> SET PAGESIZE 0
SQL> SET LONG 100000
SQL> SELECT DBMS_METADATA.GET_DDL('INDEX', '<INDEX_NAME>', '<OWNER>') FROM dual;
If you are only interested in the indexed column names, simply query COLUMN_NAME of the DBA_IND_COLUMNS view.

SQL> SELECT column_name
SQL> FROM dba_ind_columns
SQL> WHERE index_owner = '<OWNER>' AND index_name = '<INDEX_NAME>'
SQL> ORDER BY column_position;
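Putting the pieces together, a complete session for a hypothetical SCOTT.EMP table (the schema and table names are placeholders, not from the original notes):

```sql
SET PAGESIZE 0
SET LONG 100000
EXEC DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);

-- Table DDL
SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP', 'SCOTT') FROM dual;

-- DDL of every index on the table
SELECT DBMS_METADATA.GET_DDL('INDEX', index_name, owner)
FROM   dba_indexes
WHERE  owner = 'SCOTT' AND table_name = 'EMP';
```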

2009/03/16

Password Verification in Oracle

The default settings for Oracle password security are weak, and special measures must be taken to strengthen them. You can get an idea of how to script password security profiles by examining Oracle's utlpwdmg.sql script, located at $ORACLE_HOME/rdbms/admin/utlpwdmg.sql.

To make passwords as complex as you like, follow this procedure:
1) Create a password verification function.
2) Assign this verification function to your desired profile, or to the system DEFAULT profile.
3) Assign this profile to the users on whom you want to impose the rules.

The utlpwdmg.sql script contains a sample PL/SQL function that imposes password complexity.
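A minimal sketch of the idea (the function name and the specific checks are illustrative, not taken from utlpwdmg.sql):

```sql
-- Create as SYS; a verify function must take this exact signature.
CREATE OR REPLACE FUNCTION my_verify_function (
  username     VARCHAR2,
  password     VARCHAR2,
  old_password VARCHAR2
) RETURN BOOLEAN IS
BEGIN
  -- Reject passwords shorter than 8 characters
  IF LENGTH(password) < 8 THEN
    raise_application_error(-20001, 'Password must be at least 8 characters');
  END IF;
  -- Reject passwords equal to the username
  IF UPPER(password) = UPPER(username) THEN
    raise_application_error(-20002, 'Password must differ from username');
  END IF;
  RETURN TRUE;
END;
/

-- Attach it to the DEFAULT profile
ALTER PROFILE default LIMIT PASSWORD_VERIFY_FUNCTION my_verify_function;
```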

Securing Listener through a Password

Oracle 9i

When the listener is first created, it has no password by default. The password can be set by two methods: the LSNRCTL command and the LISTENER.ORA parameter file.

1. LSNRCTL command

LSNRCTL> change_password

2. LISTENER.ORA parameter file

passwords_listener = <password>


If you save the change using the save_config command, the first method leaves an encrypted password in the parameter file. The second method always leaves the password unencrypted in the parameter file. Once a password is set, you have to supply it before doing any administrative work. The password must be supplied differently depending on how it was set. If the password was set through the LSNRCTL command method, supply it like this:

LSNRCTL> set password
Password:
The command completed successfully

If the password was set through the LISTENER.ORA parameter file, you must supply it inline:

LSNRCTL> set password <password>

Otherwise, you will get this error:
TNS-01169: The listener has not recognized the password

However, changing the password through the LSNRCTL command will never succeed while the password is already explicitly specified (unencrypted) in the LISTENER.ORA parameter file. To change it, first remove the passwords_listener line from the parameter file, then change the password.

Use the LSNRCTL STATUS command to check whether a password is set. If the 'Security' setting is shown as ON, a password is set for the listener.

Reference:
http://www.rampant-books.com/art_nanda_dbazine_securing_listener.htm

2009/03/12

Data Pump Export Data Consistency

How Data Pump Export Parameters Map to Those of the Original Export Utility

http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm#sthref181

The CONSISTENT parameter is obsolete in Data Pump. By default, the export is not consistent, but you can use FLASHBACK_SCN or FLASHBACK_TIME to make it consistent. Check the Oracle documentation on these parameters.

But Data Pump has bug #5472500: even when these parameters are used, consistency cannot be guaranteed in some rare cases. It happens when a table has rows at the SCN picked for the consistent export but has no rows when Data Pump is actually invoked; that table's dump will not be consistent, because it gets exported with zero rows. This is a test post for it:

http://forums.oracle.com/forums/thread.jspa?threadID=623510

Since these are rare cases, we can record each table's row count before the export and again after the export and compare them. If they are the same, the export is consistent.
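A rough way to take such row-count snapshots (a sketch): run this block in the source schema once before the export and once after, and diff the output.

```sql
-- Print a row count for every table in the current schema
SET SERVEROUTPUT ON
DECLARE
  n NUMBER;
BEGIN
  FOR t IN (SELECT table_name FROM user_tables ORDER BY table_name) LOOP
    EXECUTE IMMEDIATE
      'SELECT COUNT(*) FROM "' || t.table_name || '"' INTO n;
    DBMS_OUTPUT.PUT_LINE(t.table_name || ': ' || n);
  END LOOP;
END;
/
```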

FTP: 426 Connection closed; transfer aborted

Referenced from here:
http://www.chilkatsoft.com/p/p_182.asp

If you get this error when doing FTP transfers in passive mode, the problem could be that there are multiple IP addresses on the external interface. (You may alternatively see the error "425 Can't open data connection.")

A Microsoft knowledge base article explains it: http://support.microsoft.com/kb/817829

"When the client establishes a passive mode FTP connection to an FTP server, the client must establish both connections. The FTP server returns the TCP port used for the FTP data connection. The FTP server expects this FTP data connection to come from the same IP address as the FTP control channel. If the connection comes from a different IP address, the FTP server may reject or close the connection because this may be a passive mode port theft attempt.

When ISA has more than one IP address bound to the external interface, the source IP address for the FTP data connection may not be the primary IP address. Instead, the source IP address may be one of the secondary IP addresses bound to the external interface. With the hotfix installed, ISA always uses the primary IP address for both types of connections."

The solution: "You must install ISA Server Service Pack 1 (SP1) before you install the following hotfix."

2009/03/10

Prepare for EXP/IMP Using Data Pump

1. Create a directory object in the source and target databases.

SQL> CREATE DIRECTORY directory_name as 'directory_path';
2. Create a user account in the source and target databases.

SQL> CREATE USER user IDENTIFIED BY pwd
SQL> DEFAULT TABLESPACE users QUOTA unlimited ON users
SQL> TEMPORARY TABLESPACE temp;
3. Grant the user the privileges needed to export/import.

For source database:

SQL> GRANT READ, WRITE ON DIRECTORY directory_name TO user;
SQL> GRANT create session TO user;
SQL> GRANT create table TO user;
SQL> GRANT exp_full_database TO user;
For target database:

SQL> GRANT READ, WRITE ON DIRECTORY directory_name TO user;
SQL> GRANT create session TO user;
SQL> GRANT create table TO user;
SQL> GRANT imp_full_database TO user;
Note: If a user with the EXP_FULL_DATABASE role is exporting a table that is owned by a different schema and that table has a SYS.XMLTYPE column, then the Export Data Pump job will fail with ORA-31693 and ORA-01031. The solution is to explicitly grant the FLASHBACK privilege to the schema that runs the Export Data Pump job. The privilege can be granted on specific objects (privilege: FLASHBACK) or on all tables (privilege: FLASHBACK ANY TABLE).
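With the setup above in place, a consistent schema-level export and import might look like this (a sketch; the schema, directory, dump file names, and SCN are placeholders):

```shell
# Get the SCN to export as of, e.g.: SELECT current_scn FROM v$database;
expdp scott/pwd DIRECTORY=dp_dir DUMPFILE=scott.dmp LOGFILE=scott_exp.log \
      SCHEMAS=scott FLASHBACK_SCN=1234567

# Import into the target database
impdp scott/pwd DIRECTORY=dp_dir DUMPFILE=scott.dmp LOGFILE=scott_imp.log \
      SCHEMAS=scott
```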

2009/03/09

How to Check Character Set in Oracle


SQL> SELECT value$ FROM sys.props$ WHERE name = 'NLS_CHARACTERSET';
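The supported NLS_DATABASE_PARAMETERS view returns the same information without querying SYS internals:

```sql
SELECT value
FROM   nls_database_parameters
WHERE  parameter = 'NLS_CHARACTERSET';
```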

2009/03/06

Kill Session in RAC Database

You may get ORA-01940 (cannot drop a user that is currently connected) even after checking v$session to make sure no user is connected.

If you are operating on a RAC database, you should probably check gv$session and kill the session on another instance. Oracle only checks sessions on the instance you are connected to, so you will find nothing active there.

After you find, through gv$session, the user connected on another instance, you can kill that session by connecting to that instance (in versions before 11g).
In Oracle 11g, you can kill a session on any instance with this command:

SQL> ALTER SYSTEM KILL SESSION 'sid, serial#, @instance_id';
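To locate the session across all instances first (a sketch; the username is a placeholder):

```sql
SELECT inst_id, sid, serial#, username, status
FROM   gv$session
WHERE  username = 'SCOTT';

-- Then, on 11g, from any instance:
-- ALTER SYSTEM KILL SESSION '<sid>,<serial#>,@<inst_id>';
```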

2009/03/05

Recover Database Using Backup Controlfile Until Cancel

Referenced from here:
http://searchoracle.techtarget.com/expert/KnowledgebaseAnswer/0,289625,sid41_gci1248488,00.html

If you were to recover the database with the current control file, Oracle would know the SCN of the last checkpoint, so it could use the information in the archived and online redo logs to apply recovery up to that SCN. This is called a "complete" recovery.

If you do not have the current control file, your option is to use a backup control file, which means Oracle does not know at which SCN to stop applying recovery. So you tell Oracle that you are using a "backup controlfile" and that you will tell it when to stop applying redo by replying CANCEL. When Oracle starts recovery, it looks at each datafile header to find the last SCN at which a checkpoint was performed on that datafile, and applies recovery to the datafile for all SCNs after that. Oracle rolls transactions forward, but it does not know when to stop; eventually it applies the redo in all of your archived redo logs, after which you can tell it to use the redo in the online redo logs. Oracle will then ask where to find more redo. At that point, you tell it to quit applying redo by replying CANCEL.

Hopefully, you never have to use this command. This is because you multiplex your control files and have three copies of the control file, all on different disk units. So if you lose a disk unit, you use the control files on the other disk units. The only time you need this command is when you lose all of your control files. In this case, either use a binary backup of the control file or you re-create the control file with the CREATE CONTROLFILE command and then recover with the backup control file.
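The command sequence described above looks like this (a sketch):

```sql
STARTUP MOUNT
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-- Apply the suggested archived logs; when Oracle asks for redo
-- you no longer have, reply CANCEL.
ALTER DATABASE OPEN RESETLOGS;
```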

Oracle Text

An introduction to Oracle Text can be found in the 'Text Reference' book on the Oracle documentation website, available for each version.

Oracle Text installation should be completed during the Oracle installation using the OUI.

Use the following procedure to confirm the Oracle Text installation:

Log in to an account with DBA privileges and confirm that the CTXSYS user exists.

SELECT username FROM dba_users WHERE username='CTXSYS';

If there is no CTXSYS user, then it is not installed. It is best to use OUI to install it.

Doc 275689.1 on Metalink is also a good reference for manual installation of Oracle Text 9.2.

Installing Oracle Text for 10gR2:
@$ORACLE_HOME/ctx/admin/catctx.sql ctxsys DRSYS TEMP NOLOCK
where ctxsys is the install schema, DRSYS is its default tablespace, TEMP is its temporary tablespace, and NOLOCK instructs the script not to lock the account when the install is complete (you should lock it afterward, by the way).

If DRG-10700 comes up, check doc 107626.1 on Metalink.

A good post for Oracle Text Indexing is here:
http://www.oracle-base.com/articles/9i/FullTextIndexingUsingOracleText9i.php
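A minimal example of a CONTEXT index and a CONTAINS query (a sketch; the table and column names are placeholders):

```sql
CREATE TABLE docs (id NUMBER PRIMARY KEY, text VARCHAR2(4000));

INSERT INTO docs VALUES (1, 'Oracle Text makes full text search easy');
COMMIT;

-- CONTEXT index maintained by Oracle Text
CREATE INDEX docs_text_idx ON docs (text)
  INDEXTYPE IS CTXSYS.CONTEXT;

-- Score-ranked full-text query
SELECT id, SCORE(1)
FROM   docs
WHERE  CONTAINS(text, 'search', 1) > 0;
```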

2009/03/04

Check Which Port is Listening or Open on Linux

1. cat /etc/services | grep port#
2. netstat -an | grep port#
3. lsof -i -n -P | grep port#
4. nmap -sS -O ip
5. telnet ip port#

2009/03/02

Designated PFILE in Registry on Windows

On the Windows platform, there is sometimes a registry entry specifying which pfile to use to start Oracle. The entry is located at HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HOME0\ORA_<SID>_PFILE. If a pfile is specified there, Oracle uses that file to start the instance.

If you want to start the Oracle instance using an spfile instead, there are two workarounds:
1. Remove the ORA_<SID>_PFILE entry from the registry.
2. In your init.ora, add an entry SPFILE=<spfile_path>.

Automating Database Startup and Shutdown on Linux

http://www.oracle-base.com/articles/linux/AutomatingDatabaseStartupAndShutdownOnLinux.php