
Oracle Performance Tuning: Practical Techniques Every DBA Should Master


As database professionals, we've all been there – that dreaded moment when users start complaining about slow queries, and suddenly everyone's looking at you with that "fix it now" expression. Oracle performance tuning can feel overwhelming, especially when you're under pressure, but the good news is that most performance issues stem from a handful of common culprits. Let me walk you through some battle-tested techniques that have saved my bacon more times than I can count.

Start with the Low-Hanging Fruit: Statistics and Execution Plans

Before diving into complex tuning strategies, always check if your statistics are current. Oracle's cost-based optimizer relies heavily on accurate statistics to make smart decisions about query execution paths. I've seen queries run 10x slower simply because someone forgot to gather stats after a large data load.

Here's the thing about statistics gathering – SIZE AUTO might seem convenient, but it can be a performance killer on large databases. Oracle decides whether histograms are needed and how many buckets to create, which can lead to extremely long statistics collection times. For production environments, I prefer SIZE REPEAT to maintain consistent performance while keeping the histogram settings that already work well.

-- Check when statistics were last gathered
SELECT table_name, last_analyzed, num_rows, blocks, avg_row_len
FROM user_tables
WHERE table_name IN ('CUSTOMERS', 'ORDERS', 'ORDER_ITEMS')
ORDER BY last_analyzed DESC;

-- Gather fresh statistics - performance-conscious approach
BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(
        ownname => USER,
        tabname => 'CUSTOMERS',
        estimate_percent => 10,  -- Fixed percentage for predictable performance
        method_opt => 'FOR ALL COLUMNS SIZE REPEAT',  -- Maintains existing histogram settings
        cascade => TRUE,
        degree => 4  -- Parallel execution for faster collection
    );
END;
/

-- For initial setup or when you need to establish histograms
BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(
        ownname => USER,
        tabname => 'CUSTOMERS',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        method_opt => 'FOR ALL COLUMNS SIZE 254',  -- Explicit bucket count
        cascade => TRUE
    );
END;
/

Once your stats are current, examine execution plans for problematic queries. The EXPLAIN PLAN statement is your best friend here, but don't just run it and walk away – actually analyze what Oracle is telling you.

-- Get execution plan with cost information
EXPLAIN PLAN FOR
SELECT c.customer_name, COUNT(o.order_id) as order_count,
       SUM(oi.quantity * oi.unit_price) as total_spent
FROM customers c
JOIN orders o ON c.customer_id = o.customer_id
JOIN order_items oi ON o.order_id = oi.order_id
WHERE o.order_date >= DATE '2023-01-01'
GROUP BY c.customer_id, c.customer_name;

-- Display the plan
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Look for red flags like full table scans on large tables, nested loop joins with high cardinalities, or operations with suspiciously high costs. These usually point you toward your performance bottlenecks.
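
If you can actually execute the statement, one quick way to compare the optimizer's estimates against reality is the GATHER_PLAN_STATISTICS hint combined with DBMS_XPLAN.DISPLAY_CURSOR (both standard features). Seeing estimated versus actual row counts side by side is often the fastest way to spot a stale-statistics or cardinality problem:

-- Run the statement with runtime statistics collection enabled
SELECT /*+ GATHER_PLAN_STATISTICS */
       c.customer_name, COUNT(o.order_id) as order_count
FROM customers c
JOIN orders o ON c.customer_id = o.customer_id
WHERE o.order_date >= DATE '2023-01-01'
GROUP BY c.customer_id, c.customer_name;

-- Show the last plan executed in this session, with E-Rows vs A-Rows
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));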

Indexing: The Art of Strategic Access Paths

Proper indexing can make or break your database performance. But here's the thing – more indexes aren't always better. Each index comes with maintenance overhead, so you want to be strategic about what you create.

Before rushing to create new indexes, dig deeper into how your queries have actually performed over time. Check the AWR repository to see if Oracle has found better execution paths using hash joins or other methods:

-- Review historical execution plans for a specific query
SELECT snap_id, sql_id, plan_hash_value, executions_total,
       elapsed_time_total/1000000 as elapsed_seconds_total,
       cpu_time_total/1000000 as cpu_seconds_total,
       buffer_gets_total, disk_reads_total
FROM dba_hist_sqlstat
WHERE sql_id = '&sql_id'  -- Replace with your SQL_ID
ORDER BY snap_id DESC;

-- Get the actual execution plan details
SELECT operation, options, object_name, cost, cardinality, bytes
FROM dba_hist_sql_plan 
WHERE sql_id = '&sql_id' 
  AND plan_hash_value = &plan_hash_value  -- Use the best performing plan
ORDER BY id;

This historical data reveals crucial insights – maybe Oracle is already using hash joins effectively, or perhaps a different execution plan performed much better in the past. Sometimes a full table scan with a hash join outperforms an index-based nested loop, especially when dealing with larger result sets.
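
One cheap way to sanity-check that before touching the schema is to force the alternative plan for a test run. The hints below are purely illustrative; use them to compare plans and timings on your own data, not as a permanent fix:

-- Force a full scan of ORDERS and a hash join for comparison
SELECT /*+ FULL(o) USE_HASH(o) */
       c.customer_name, COUNT(o.order_id) as order_count
FROM customers c
JOIN orders o ON c.customer_id = o.customer_id
WHERE o.order_date >= DATE '2023-01-01'
GROUP BY c.customer_id, c.customer_name;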

Returning to the aggregation query from the EXPLAIN PLAN example, let's say you notice consistent full table scans on the orders table and the historical data shows this is indeed a bottleneck. A composite index might be exactly what you need:

-- Create a composite index to support the WHERE clause and JOIN
CREATE INDEX idx_orders_date_customer 
ON orders (order_date, customer_id);

-- For queries that frequently filter by customer and date range
CREATE INDEX idx_orders_customer_date 
ON orders (customer_id, order_date);

The order of columns in composite indexes matters enormously. Oracle can use an index even if you don't specify all columns, but only if you include the leading columns. Think of it like a phone book – you can find "Smith, John" easily, but finding all the "Johns" regardless of last name requires scanning the entire book.
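
As a quick illustration of the leading-column rule, assuming the idx_orders_date_customer (order_date, customer_id) index from above:

-- Leading column in the predicate: an index range scan is possible
SELECT order_id FROM orders
WHERE order_date >= DATE '2023-06-01';

-- Only the non-leading column: expect a full scan (or, at best, an index skip scan)
SELECT order_id FROM orders
WHERE customer_id = 12345;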

Don't forget about function-based indexes for queries that use expressions:

-- If you frequently search by uppercase customer names
CREATE INDEX idx_customers_upper_name 
ON customers (UPPER(customer_name));

-- Now this query can use the index
SELECT * FROM customers 
WHERE UPPER(customer_name) = 'JOHN SMITH';

SQL Tuning: Writing Smarter Queries

Sometimes the issue isn't with indexes or statistics – it's with how we write our SQL. Here are some patterns I've learned to watch for:

Avoid functions in WHERE clauses on indexed columns:

-- This prevents index usage
SELECT * FROM orders WHERE TRUNC(order_date) = DATE '2023-06-15';

-- This allows index usage
SELECT * FROM orders 
WHERE order_date >= DATE '2023-06-15' 
  AND order_date < DATE '2023-06-16';

Use EXISTS instead of IN with subqueries:

-- Less efficient for large datasets
SELECT * FROM customers c
WHERE c.customer_id IN (
    SELECT o.customer_id FROM orders o 
    WHERE o.order_date >= DATE '2023-01-01'
);

-- Often more efficient - can stop at the first match
-- (verify with the plan; the optimizer may rewrite both forms into the same semi-join)
SELECT * FROM customers c
WHERE EXISTS (
    SELECT 1 FROM orders o 
    WHERE o.customer_id = c.customer_id 
      AND o.order_date >= DATE '2023-01-01'
);

Memory Configuration: Getting the SGA Right

Oracle's System Global Area (SGA) configuration can dramatically impact performance. The key memory areas to focus on are the buffer cache and the shared pool within the SGA, plus the PGA, which sits outside the SGA but matters just as much for sorts and hash joins.

-- Check current SGA configuration
SELECT component, current_size/1024/1024 as size_mb
FROM v$sga_dynamic_components
WHERE current_size > 0
ORDER BY current_size DESC;

-- Monitor buffer cache hit ratio (should be > 90%)
SELECT name, value
FROM v$sysstat
WHERE name IN ('db block gets from cache',
               'consistent gets from cache',
               'physical reads cache');
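
Those are the raw counters; a minimal hit-ratio calculation from them looks like this (treat 90% as a rough rule of thumb, not a target to chase for its own sake):

-- Buffer cache hit ratio = 1 - (physical reads / logical reads)
SELECT ROUND((1 - phy.value / NULLIF(cur.value + con.value, 0)) * 100, 2) as hit_ratio_pct
FROM v$sysstat cur, v$sysstat con, v$sysstat phy
WHERE cur.name = 'db block gets from cache'
  AND con.name = 'consistent gets from cache'
  AND phy.name = 'physical reads cache';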

If your buffer cache hit ratio is consistently below 90%, consider increasing the DB_CACHE_SIZE parameter. However, don't just throw memory at the problem without understanding what's causing the cache misses.
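
If you do decide to grow the cache and you size SGA components manually (that is, you're not relying on automatic memory management), the change itself is simple. The 8G value below is purely illustrative; size it for your own workload and available memory:

-- Requires an spfile; SCOPE=BOTH changes the running instance and persists the setting
ALTER SYSTEM SET db_cache_size = 8G SCOPE=BOTH;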

Query Optimization with Hints (Use Sparingly!)

The old advice of treating hints strictly as a last resort isn't quite the whole story anymore. Hints remain a legitimate tool for addressing optimizer challenges, especially with complex nested views, poor database designs, or cases where the cost-based optimizer simply can't make optimal decisions with the available statistics. Since Oracle 11g, they've also been complemented by a family of plan-management features.

Modern Oracle provides several approaches to guide query execution, each building on the previous:

The Performance Management Evolution:

Manual Hints → SQL Profiles → SQL Baselines → SQL Plan Management → SQL Plan Directives
     ↓              ↓              ↓                ↓                     ↓
Direct control   Automatic      Stable plans    Comprehensive        Adaptive
in SQL code      optimization   across changes   plan evolution       feedback

Here's how to use hints strategically in modern Oracle:

-- Force a specific join order when the optimizer chooses poorly
SELECT /*+ ORDERED USE_NL(o oi) */ 
       c.customer_name, o.order_date, oi.product_name
FROM customers c, orders o, order_items oi
WHERE c.customer_id = o.customer_id
  AND o.order_id = oi.order_id
  AND c.customer_id = 12345;

-- Force index usage when you're certain it's better
SELECT /*+ INDEX(orders idx_orders_date_customer) */ *
FROM orders
WHERE order_date >= DATE '2023-01-01';

Beyond Manual Hints - Modern Oracle Options:

When hints become unwieldy or you need more sophisticated control, Oracle offers several automated alternatives:

SQL Profiles - auxiliary information that the SQL Tuning Advisor generates to correct poor optimizer estimates, without touching the SQL text:

-- Check existing SQL profiles
SELECT name, sql_text, status, created
FROM dba_sql_profiles
ORDER BY created DESC;
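
Profiles are normally produced by the SQL Tuning Advisor (which needs the Tuning Pack license). A minimal sketch of that workflow, assuming you already know the SQL_ID, looks like this:

-- Create and run a tuning task for one statement, then review the report;
-- if a SQL Profile is recommended, the report includes the ACCEPT_SQL_PROFILE call to run
DECLARE
    task_name VARCHAR2(128);
BEGIN
    task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => '&your_sql_id');
    DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name);
    DBMS_OUTPUT.PUT_LINE('Task: ' || task_name);
END;
/

SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('&task_name') FROM dual;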

SQL Baselines - Ensure consistent performance across database changes:

-- Create a baseline from a well-performing plan in the cursor cache
DECLARE
    plans_loaded PLS_INTEGER;
BEGIN
    -- Returns the number of plans captured as accepted baselines for this SQL_ID
    plans_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
        sql_id => '&your_sql_id'
    );
    DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || plans_loaded);
END;
/
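
The function returns the number of plans it captured. You can confirm what was loaded, and whether the plans are enabled and accepted, with a quick look at the baseline view:

-- Verify the captured baselines
SELECT sql_handle, plan_name, enabled, accepted, fixed, created
FROM dba_sql_plan_baselines
ORDER BY created DESC;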

SQL Plan Management - The most comprehensive approach for complex environments where you need to evolve plans safely while maintaining performance stability.

These tools work behind the scenes using hint-like mechanisms but provide much more sophisticated control. They're particularly valuable when dealing with application code you can't modify or complex reporting queries where manual hint management becomes impractical.

The key is choosing the right tool for your scenario: use direct hints for quick fixes and testing, profiles for automatic optimization, baselines for stability, and full plan management for comprehensive performance governance in large, complex environments.

Monitoring and Continuous Improvement

Performance tuning isn't a one-time activity. Set up monitoring to catch issues before your users do. However, be mindful of the performance impact your monitoring queries can have on production systems.

Smart Monitoring with v$sqlstats (Avoiding v$sql Whenever Possible)

-- Find your most expensive queries using v$sqlstats (better for production)
SELECT sql_id, plan_hash_value, executions,
       elapsed_time/1000000 as elapsed_seconds_total,
       cpu_time/1000000 as cpu_seconds_total,
       buffer_gets, physical_read_bytes
FROM v$sqlstats
WHERE elapsed_time > 10000000  -- More than 10 seconds total
ORDER BY elapsed_time DESC
FETCH FIRST 10 ROWS ONLY;

AWR-Based Performance Analysis (Requires Diagnostic Pack License):

For production environments with proper licensing, AWR provides much more accurate and less intrusive monitoring:

-- Top SQL by DB Time from AWR (last 7 days; the 7*24 arithmetic assumes hourly snapshots)
SELECT s.sql_id, s.plan_hash_value,
       SUM(s.executions_delta) as executions,
       SUM(s.elapsed_time_delta)/1000000 as elapsed_seconds,
       ROUND(SUM(s.elapsed_time_delta) /
             (SELECT MAX(value) - MIN(value)  -- DB time over the window (assumes no instance restart)
              FROM dba_hist_sys_time_model
              WHERE stat_name = 'DB time'
                AND snap_id > (SELECT MAX(snap_id) - 7*24 FROM dba_hist_snapshot)) * 100, 2) as pct_db_time
FROM dba_hist_sqlstat s
WHERE s.snap_id > (SELECT MAX(snap_id) - 7*24 FROM dba_hist_snapshot)
GROUP BY s.sql_id, s.plan_hash_value
ORDER BY elapsed_seconds DESC
FETCH FIRST 10 ROWS ONLY;

-- Average Active Sessions (AAS) analysis - a more meaningful load picture than blocking-session counts
-- DBA_HIST_ACTIVE_SESS_HISTORY retains roughly one sample every 10 seconds,
-- so each row represents about 10 seconds of active session time
SELECT to_char(sample_time, 'YYYY-MM-DD HH24') as hour,
       event,
       COUNT(*) as samples,
       ROUND(COUNT(*) * 10 / 3600, 2) as avg_active_sessions
FROM dba_hist_active_sess_history
WHERE sample_time >= SYSTIMESTAMP - INTERVAL '24' HOUR
GROUP BY to_char(sample_time, 'YYYY-MM-DD HH24'), event
HAVING COUNT(*) > 30  -- Focus on significant wait events (event is NULL for time on CPU)
ORDER BY hour DESC, samples DESC;

Performance Trend Monitoring:

-- Compare average elapsed time per execution: last 7 days vs the 7 days before
-- (the snap_id arithmetic assumes hourly AWR snapshots)
WITH baseline AS (
    SELECT sql_id,
           SUM(elapsed_time_delta) / SUM(executions_delta) as avg_elapsed_baseline
    FROM dba_hist_sqlstat
    WHERE snap_id BETWEEN (SELECT MAX(snap_id) - 14*24 FROM dba_hist_snapshot)
                      AND (SELECT MAX(snap_id) - 7*24 FROM dba_hist_snapshot)
    GROUP BY sql_id
    HAVING SUM(executions_delta) > 100
),
current_period AS (
    SELECT sql_id,
           SUM(elapsed_time_delta) / SUM(executions_delta) as avg_elapsed_current
    FROM dba_hist_sqlstat
    WHERE snap_id > (SELECT MAX(snap_id) - 7*24 FROM dba_hist_snapshot)
    GROUP BY sql_id
    HAVING SUM(executions_delta) > 100
)
SELECT b.sql_id,
       ROUND(b.avg_elapsed_baseline/1000, 2) as baseline_ms,
       ROUND(c.avg_elapsed_current/1000, 2) as current_ms,
       ROUND((c.avg_elapsed_current - b.avg_elapsed_baseline) / b.avg_elapsed_baseline * 100, 1) as pct_change
FROM baseline b
JOIN current_period c ON b.sql_id = c.sql_id
WHERE ABS((c.avg_elapsed_current - b.avg_elapsed_baseline) / b.avg_elapsed_baseline) > 0.2  -- 20% or more change
ORDER BY ABS(pct_change) DESC;

Important Licensing Note: The AWR-based queries above require Oracle's Diagnostics Pack license (SQL Profiles and the SQL Tuning Advisor additionally require the Tuning Pack). If you don't have these licenses, stick with the v$sqlstats approach or use Statspack as a free alternative.
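
If you're unsure what your instance is configured to allow, the CONTROL_MANAGEMENT_PACK_ACCESS parameter is a quick check (note that it reflects configuration, not your actual license entitlement):

-- Returns NONE, DIAGNOSTIC, or DIAGNOSTIC+TUNING
SELECT value FROM v$parameter WHERE name = 'control_management_pack_access';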

Why This Approach is Better:

  • v$sqlstats provides aggregated cursor statistics without the library cache latch contention of querying v$sql, and it retains rows for statements that have already aged out of the cursor cache
  • AAS analysis gives you a more meaningful picture of database load than simple blocking session counts
  • Trend analysis helps you catch performance degradation before it becomes critical
  • Focus on DB Time percentage helps prioritize which SQL statements truly impact your users

The key is establishing baseline performance metrics and monitoring for significant deviations, rather than just looking for blocking sessions or arbitrary thresholds.

Wrapping Up

Oracle performance tuning is both an art and a science. Start with the fundamentals – current statistics, proper indexing, and well-written SQL. Monitor your system regularly and don't be afraid to question the optimizer's decisions when something doesn't make sense.

Remember, the best performance improvement is often the simplest one. Before implementing complex solutions, make sure you've covered the basics. Your future self (and your users) will thank you for taking a methodical approach to performance tuning.

The key is to develop a systematic approach: identify the problem, understand the root cause, implement a targeted solution, and measure the results. With these techniques in your toolkit, you'll be well-equipped to handle whatever performance challenges Oracle throws your way.
