Most Common Oracle Wait Events

Every DBA knows how to read the Top 5 Timed Events section of an AWR report. The common ones — db file sequential read, log file sync, buffer busy waits — are covered in every Oracle performance tuning course. What separates senior DBAs from intermediate ones is recognizing the less common wait events that indicate specific, often non-obvious problems. This post covers the wait events that I see causing significant production issues in 2025 environments — the ones that aren’t in every tutorial, and the ones DBAs waste hours on because they don’t understand what’s actually being waited on.


enq: TX - row lock contention — Not Always an Application Problem

This wait event appears when a session is waiting to lock a row that another session has already locked. The immediate reaction is almost always “find the blocking session and kill it.” That’s the right immediate action. The right diagnostic action is understanding why the contention is happening systematically.

-- Find blocking sessions right now
SELECT w.sid waiter_sid, w.serial# waiter_serial,
       w.blocking_session blocker_sid,
       b.serial# blocker_serial,
       b.sql_id blocker_sql_id  -- often NULL: blockers are usually idle mid-transaction
  FROM v$session w
  JOIN v$session b ON b.sid = w.blocking_session
 WHERE w.event = 'enq: TX - row lock contention';

-- Historical pattern from ASH
SELECT blocking_session, blocking_session_serial#,
       event, COUNT(*) waits,
       sql_id
  FROM v$active_session_history
 WHERE event = 'enq: TX - row lock contention'
   AND sample_time > SYSDATE - 1/24  -- last hour
 GROUP BY blocking_session, blocking_session_serial#, event, sql_id
 ORDER BY waits DESC;

The unindexed foreign key trap causes this contention constantly and is missed just as constantly. When you delete a parent row (or update its key) and there is no index on the foreign key column in the child table, Oracle cannot locate the affected child rows through an index, so it takes a share-mode table lock (TM) on the entire child table. Under concurrent OLTP load this serializes DML on the child table, typically surfacing as enq: TM - contention alongside the TX waits, and it looks like application-level row contention when it is actually a structural schema problem.

-- Find unindexed foreign keys causing lock contention
-- (simplified: checks that the FK's leading column leads some index)
SELECT c.owner, c.table_name, c.constraint_name,
       cc.column_name, c.r_owner, c.r_constraint_name
  FROM dba_constraints c
  JOIN dba_cons_columns cc ON cc.constraint_name = c.constraint_name
   AND cc.owner = c.owner
 WHERE c.constraint_type = 'R'
   AND cc.position = 1
   AND NOT EXISTS (
         SELECT 1
           FROM dba_ind_columns ic
          WHERE ic.table_owner = c.owner
            AND ic.table_name = c.table_name
            AND ic.column_name = cc.column_name
            AND ic.column_position = 1
       );

Add indexes on every foreign key column returned by this query. The lock contention often disappears immediately.
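The fix itself is a plain B-tree index on each child table’s foreign key column. A sketch, with hypothetical table and column names:

-- Hypothetical example: ORDERS.CUSTOMER_ID references CUSTOMERS(CUSTOMER_ID)
CREATE INDEX orders_customer_id_ix
  ON orders (customer_id)
  ONLINE;  -- ONLINE avoids blocking DML during the build (Enterprise Edition feature)

For composite foreign keys, index the FK columns as the leading columns of the index, in any order.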


cursor: pin S wait on X — Library Cache Contention Under Load

This wait event indicates that sessions are waiting for a shared pin on a cursor that another session holds an exclusive pin on (during hard parse or cursor invalidation). Under high concurrency, this appears as a cascade: one hard parse causes dozens of sessions to queue waiting for the cursor to become available.

-- Identify the hot cursors causing pin contention
SELECT s.sql_id, s.sql_text,
       s.invalidations,
       s.parse_calls,
       s.executions,
       s.loads
  FROM v$sql s
 WHERE s.sql_id IN (
         SELECT sql_id
           FROM v$active_session_history
          WHERE event = 'cursor: pin S wait on X'
            AND sample_time > SYSDATE - 1/24)
 ORDER BY s.invalidations DESC, s.parse_calls DESC;

High invalidations on a specific SQL_ID while cursor: pin S wait on X is in the top wait events means that cursor is being repeatedly invalidated — forcing hard parses under concurrent load. Common causes:

  • Statistics gathering on the referenced tables while they’re actively queried
  • DDL operations on referenced objects (ALTER TABLE, index rebuilds)
  • Excessive cursor sharing mismatches causing frequent cursor replacement

The fix depends on the cause. For statistics gathering causing invalidations, schedule DBMS_STATS jobs during low-activity windows and use the NO_INVALIDATE => DBMS_STATS.AUTO_INVALIDATE parameter to stagger invalidations:

-- Gather stats without immediate invalidation
-- (SQL*Plus EXEC only works on a single line, so use an anonymous block)
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname       => 'MY_SCHEMA',
    tabname       => 'HOT_TABLE',
    no_invalidate => DBMS_STATS.AUTO_INVALIDATE,
    degree        => 4);
END;
/

gc buffer busy acquire — RAC-Specific

In RAC environments, gc buffer busy acquire means a session is waiting to access a buffer for which another session on the same instance has already started a Global Cache (GC) operation. It’s distinct from buffer busy waits (single-instance contention) and gc cr request (inter-node block transfer).

This specific wait means: “I want to request a buffer from another node, but the local buffer manager is already processing a GC operation for this buffer.”

-- Find hot objects causing gc buffer busy waits in RAC
-- (gv$segment_statistics already carries owner/object name, so no join is needed)
SELECT s.owner, s.object_name, s.object_type,
       s.statistic_name,
       SUM(s.value) total_value
  FROM gv$segment_statistics s
 WHERE s.statistic_name IN (
         'gc buffer busy acquire',
         'gc buffer busy release',
         'gc cr blocks received',
         'gc current blocks received')
   AND s.value > 1000
 GROUP BY s.owner, s.object_name, s.object_type, s.statistic_name
 ORDER BY total_value DESC;

High gc buffer busy acquire on specific segments in a RAC environment indicates hot block contention across nodes — the same blocks are being accessed from multiple instances simultaneously.

Solutions depend on the object type:

  • Sequence hot blocks: increase CACHE and use NOORDER if ordering isn’t required
  • Index hot blocks (right-hand inserts): consider reverse key index if the index is used only for equality lookups
  • Table hot blocks: application-level partitioning or row distribution changes
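For the sequence and index cases, the changes are single statements. The object names here are hypothetical, and NOORDER is only safe when you don’t depend on cross-instance ordering:

-- Cache 1000 values per instance; drop cross-instance ordering
ALTER SEQUENCE my_schema.order_id_seq CACHE 1000 NOORDER;

-- Reverse key rebuild for a right-hand-insert hot index
-- (only if the index serves equality lookups; reverse key indexes cannot do range scans)
ALTER INDEX my_schema.orders_pk REBUILD REVERSE;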

log file switch (checkpoint incomplete) — The One That Stops Commits Dead

This wait event means: the current redo log filled, Oracle tried to switch to the next log in the ring, but the database writer hasn’t finished checkpointing the blocks protected by that log, so it can’t be reused yet. Every session generating redo — including every committing session — waits, potentially for seconds, until the checkpoint completes.

Under heavy write load, this can cascade into severe application response time degradation because every commit is delayed.

-- Check log switch frequency and size
SELECT l.group#, l.members, l.bytes/1024/1024 size_mb,
       l.status, l.archived
  FROM v$log l
 ORDER BY l.group#;

-- Historical log switch rate
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') hour,
       COUNT(*) switches_per_hour
  FROM v$log_history
 WHERE first_time > SYSDATE - 7
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY 1 DESC;

If you’re seeing more than 3-4 log switches per hour under sustained normal load, your redo logs are undersized. Each log should take roughly 15-20 minutes to fill.
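One way to pick a size, as a sketch: measure how much redo you generate per hour and divide by the target switch rate. This query assumes the database runs in ARCHIVELOG mode:

-- Redo generated per hour over the last day (requires ARCHIVELOG mode)
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') hour,
       ROUND(SUM(blocks * block_size)/1024/1024) redo_mb
  FROM v$archived_log
 WHERE first_time > SYSDATE - 1
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY 1;

Divide the peak hourly figure by 3-4 to get a per-log size that produces a switch every 15-20 minutes.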

The fix is adding redo log groups and increasing log size:

-- Add larger redo log groups
ALTER DATABASE ADD LOGFILE GROUP 5
  '/oradata/redo05a.log' SIZE 500M;
ALTER DATABASE ADD LOGFILE GROUP 6
  '/oradata/redo06a.log' SIZE 500M;

-- Once the new groups are current and the old small ones are inactive,
-- drop the old groups:
ALTER DATABASE DROP LOGFILE GROUP 1;  -- only if status is INACTIVE

log file switch (checkpoint incomplete) combined with undersized redo logs is one of the most impactful and most easily fixed performance problems in Oracle. I’ve seen commit response times drop from 8 seconds to under 100ms just by sizing redo logs correctly.


SQL*Net message from client — When It’s Actually a Problem

This wait event is normally benign — it means the server process is waiting for the next client request. In a healthy system, it appears in the wait event list because idle sessions spend most of their time in this state.

But when SQL*Net message from client appears as a top wait event in AWR during a period of reported poor performance, it’s telling you something different: the application is not sending work to the database efficiently. The database is waiting for the application.

Patterns that cause this legitimately:

  • Application processing large result sets row-by-row in the application tier (fetch array size too small)
  • Application performing computation between database calls
  • Network latency between application and database tier
  • Application connection pool exhausted — sessions sitting idle in the pool holding database connections

-- Find sessions sitting idle in SQL*Net message from client
-- (note: this event belongs to the Idle wait class, not Network)
SELECT s.sid, s.username, s.program,
       s.sql_id, s.last_call_et elapsed_seconds,
       s.row_wait_obj#
  FROM v$session s
 WHERE s.event = 'SQL*Net message from client'
   AND s.last_call_et > 30  -- idle for more than 30 seconds
 ORDER BY s.last_call_et DESC;

If you see sessions waiting in SQL*Net message from client for extended periods while the application is “busy,” the bottleneck is the application tier, not Oracle. The database is ready and waiting.


direct path read temp — Sort and Hash Join Spills

This wait indicates that Oracle is reading temporary tablespace data for a sort or hash join that didn’t fit in PGA memory. It’s a direct read (bypasses buffer cache) to/from the TEMP tablespace.

-- Find SQL statements causing direct path read temp
SELECT s.sql_id, s.sql_text,
       s.sorts, s.executions,
       ROUND(s.elapsed_time/1e6/NULLIF(s.executions,0), 2) avg_ela_sec,
       ROUND(s.disk_reads/NULLIF(s.executions,0), 0) disk_reads_per_exec
  FROM v$sql s
 WHERE s.sql_id IN (
         SELECT sql_id
           FROM v$active_session_history
          WHERE event = 'direct path read temp'
            AND sample_time > SYSDATE - 1/24)
 ORDER BY s.elapsed_time DESC
 FETCH FIRST 10 ROWS ONLY;

-- Check current temp tablespace usage (assumes an 8K block size)
SELECT u.tablespace, u.segtype,
       ROUND(u.blocks * 8192/1024/1024, 1) used_mb,
       s.sid, s.username, s.sql_id
  FROM v$tempseg_usage u
  JOIN v$session s ON s.saddr = u.session_addr
 ORDER BY used_mb DESC;

Repeated direct path read temp on specific SQL_IDs indicates either:

  • PGA_AGGREGATE_TARGET is too small — operations spill to disk because they can’t fit in memory
  • A specific query doing an unnecessary sort or hash join that could be avoided with better index design
  • A query doing a cartesian join accidentally (missing join condition)

-- Check if PGA is causing excessive spills
SELECT name, value
  FROM v$pgastat
 WHERE name IN (
         'cache hit percentage',
         'aggregate PGA target parameter',
         'aggregate PGA auto target',
         'total PGA inuse',
         'maximum PGA allocated',
         'PGA memory freed back to OS');

cache hit percentage below 80% means PGA is consistently too small for your workload. Increase PGA_AGGREGATE_TARGET. For individual queries with excessive temp reads, check the execution plan for unexpected sort operations and cartesian products.
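Raising the target is a single parameter change. The 8G figure below is illustrative, not a recommendation; size it from your own v$pgastat numbers:

-- Illustrative value; SCOPE = BOTH requires an spfile
ALTER SYSTEM SET pga_aggregate_target = 8G SCOPE = BOTH;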


library cache lock — Different From Library Cache Pin, Different Fix

library cache lock occurs when a session needs an exclusive lock on a library cache object — typically to compile or invalidate it — and another session holds a shared lock on the same object (executing it).

-- Find library cache lock waiters and their blockers
SELECT w.sid waiter_sid, w.username waiter_user,
       w.blocking_session holder_sid,
       w.sql_id waiter_sql,
       w.p1raw handle_address  -- P1 is the library cache handle being waited on
  FROM v$session w
 WHERE w.event = 'library cache lock';

-- Resolve the handle address to an object name (x$ views require SYSDBA)
SELECT kglnaown owner, kglnaobj object_name
  FROM x$kglob
 WHERE kglhdadr = HEXTORAW('&handle_address');

The most common cause in production: a DBA or automated process is trying to recompile a package (ALTER PACKAGE... COMPILE) while sessions are actively executing that package. Oracle needs an exclusive library cache lock to recompile, but can’t get it while executions hold shared locks.

The solution during a production incident: wait for active executions to complete, or schedule recompilations during low-traffic periods. During upgrades or deployments where you must recompile objects, coordinate with the application team to quiesce traffic to that schema first.
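Before a planned recompile, you can check whether anyone is currently executing the package. The schema and package names below are hypothetical, and note that querying v$access can itself be expensive on systems with very large shared pools:

-- Sessions currently holding library cache access to the package
SELECT sid, owner, object, type
  FROM v$access
 WHERE owner = 'MY_SCHEMA'
   AND object = 'MY_PACKAGE';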
