Oracle IO wait events: db file sequential read

(the details were investigated on, and are specific to, Oracle’s database implementation on Linux x86_64)

Exadata IO: This event is not used with Exadata storage; ‘cell single block physical read’ is used instead.
Parameters:
p1: file#
p2: block#
p3: blocks

Despite p3 listing the number of blocks, I have never seen a db file sequential read event that read more than one block. Of course this could change in a newer release.
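
As a hedged illustration of how these parameters can be used (V$SESSION exposes the wait parameters as P1/P2/P3; the query itself is just a sketch, and DBA_EXTENTS can be slow to scan on databases with many extents), the file# and block# of a session currently waiting on this event can be translated into the segment being read:

-- Map the file#/block# of a current 'db file sequential read' wait
-- to the segment being read (one row per waiting session).
select s.sid, e.owner, e.segment_name, e.segment_type
from   v$session   s,
       dba_extents e
where  s.event   = 'db file sequential read'
and    e.file_id = s.p1
and    s.p2 between e.block_id and e.block_id + e.blocks - 1;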

Implementation:
One of the important things (more...)

Capturing long-running SQL in Statspack

Oracle 11.2.0.1.0

I have been investigating “enq: TM – contention” wait events, which are almost certainly due to unindexed foreign keys. One such extreme example that showed up in a Statspack report is as follows:

                                                             Avg          (more...)
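
The report excerpt is cut off above. For reference, a minimal sketch of a query to hunt for the usual culprit - single-column foreign keys with no index on the referencing column (the dictionary views are real, but the query is deliberately simplified and ignores multi-column keys):

-- Foreign key constraints whose referencing column is not the leading
-- column of any index on the child table.
select c.table_name, cc.column_name, c.constraint_name
from   user_constraints  c,
       user_cons_columns cc
where  cc.constraint_name = c.constraint_name
and    c.constraint_type  = 'R'
and    not exists (
         select 1
         from   user_ind_columns ic
         where  ic.table_name      = c.table_name
         and    ic.column_name     = cc.column_name
         and    ic.column_position = 1
       );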

Oracle Cloud Friendly – Red Samurai ADF Performance Audit

I have deployed our tool for ADF performance monitoring to the Oracle Java Cloud service. It runs perfectly in the cloud, monitors slow performance and lets you analyse the collected performance data. All data is stored in the Oracle Database Cloud.

You can access the performance monitoring dashboard using this address. Access will be available for a month or so, until my trial account expires. You will need to use the following login credentials - user: redsam, password (more...)

Supplement: Analyzing performance over the long term

Scaling up Cardinality Estimates in 12.1.0.2

Topic: Counting the number of distinct values (NDV) for a table column has important applications in the database domain, ranging from query optimization to optimizing reports for large data warehouses. However, the legacy SQL method of using SELECT COUNT (DISTINCT <COL>) can be very slow. This is a well-known problem, and Oracle 12.1.0.2 provides a new function, APPROX_COUNT_DISTINCT, implemented with a new-generation algorithm to address this issue by providing (more...)
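
As a minimal sketch of the new function next to the legacy approach (the table and column names below are made up for illustration):

-- Exact NDV (can be very slow on large tables) vs. the 12.1.0.2 approximation
select count(distinct cust_id)        as exact_ndv,
       approx_count_distinct(cust_id) as approx_ndv
from   sales_fact;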

Being Right and Proving It

One of the great things about working in IT is that you can often win an argument simply by being right. Not because of who you are or because you are more eloquent than others, but because the facts support your position. Almost every IT person I have ever met respects facts.

In order to win arguments this way, you of course need some facts to work with. And that’s where too many people fail. (more...)

That’s … Huge!

Recently I’ve noticed the occasional thread in Oracle newsgroups and lists asking about hugepages support in Linux, including ‘best practices’ for hugepages configuration. This information is out on that ‘world-wide web’ in various places; I’d rather put a lot of that information in this article to provide an easier way to get to it. I’ll cover what hugepages are, what they do, what they can’t do and how best to allocate them for your particular (more...)

Exadata storage indexes and DML

Last week I got a question about how storage indexes (SIs) behave when the table for which the SI holds data is changed. Based on logical reasoning, it can be one of two things: either the SI is invalidated because the data it holds has changed, or the SI is updated to reflect the change. Think about this for yourself and pick one. I would love to hear whether you picked the correct one.

First (more...)
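
The post is cut off above; as one hedged way to check the behaviour yourself, you can compare the storage index savings statistic for the same smart-scan query run before and after the DML (the statistic name below is the one exposed on Exadata; everything else is a sketch):

-- Bytes the cells skipped thanks to storage indexes, for the current session.
-- Run the query of interest, check this value, perform the DML, then repeat.
select n.name, s.value
from   v$statname n,
       v$mystat   s
where  s.statistic# = n.statistic#
and    n.name = 'cell physical IO bytes saved by storage index';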

New Version Of XPLAN_ASH Utility

A new version 4.1 of the XPLAN_ASH utility is available for download.

As usual the latest version can be downloaded here.

In particular, this version now supports the new 12c "Adaptive" plan feature - previous versions don't cope very well with those unless you add the "ADAPTIVE" formatting option manually.
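
For context, that "ADAPTIVE" option is a DBMS_XPLAN format switch; a minimal sketch of adding it manually when displaying the last statement of the current session (ALLSTATS LAST additionally assumes runtime statistics were gathered, e.g. via the gather_plan_statistics hint):

-- Show the last plan of this session including the adaptive plan details
select *
from   table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST +ADAPTIVE'));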

Here are the notes from the change log:

- GV$SQL_MONITOR and GV$SQL_PLAN_MONITOR can now be customized in the
settings as table names in (more...)

B-tree and nulls

Today, while scrolling my “WordPress reader” page, I saw a post that reminded me of something I had fun with before (and find useful in many situations): if we have an index on a nullable column followed by a NOT NULL column, we can use it to filter for rows where the nullable column is NULL. […]
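
A minimal sketch of the idea with a made-up table: because the second column is NOT NULL, every row gets an index entry, so rows with a NULL in the leading column can still be found through the index (whether the optimizer actually picks it depends on statistics, of course):

create table t (col1 number, col2 number not null);
create index t_ix on t (col1, col2);

-- This predicate can be answered from T_IX even though COL1 is nullable,
-- because no index entry is entirely NULL (COL2 is always present).
select * from t where col1 is null;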

FIRST_ROWS_n Optimizer Mode – What is Wrong with this Statement?

June 8, 2014 It has been nearly two years since I last wrote a review of an Oracle Database related book, although in that time I have written reviews of two Microsoft Exchange Server 2013 books and a handful of security cameras. My copy of the second edition of the “Troubleshooting Oracle […]

Simora: Alpha Testers Confirmed

It's been a while since I provided any public updates regarding Simora, our Oracle workload simulation product. It's finally time to unveil the status of Simora and our steps moving forwards. We have been working extensively on the Simora engine and infrastructure over the last several months, with a view to transforming it into a […]

Flame Graphs for Oracle

Topic: This post is a hands-on introduction to using on-CPU Flame Graphs for investigating Oracle workloads. The technique involves collecting and analyzing sampled stack trace data to troubleshoot Oracle processes at the OS level (applied here in particular to Linux).

Motivations: The techniques and tools described here can be used in performance investigations to complement the wait-event-based information gathered from the Oracle engine, such as the information available with ASH and SQL Monitoring. They (more...)

Oracle can now use function-based indexes in queries without functions!

There’s a neat optimization in Oracle I found while tinkering around (look closely at the predicate section):

select oracle_can_now_use,
       a_function_based_index
from   queries_with_no_functions
where  a_date_col_with_an_fbi = :a_date
and    oracle_version >= '11.2.0.2';

-------------------------------------------------------------------------
| Id  | Operation                   | Name                      | Rows  |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                           |     1 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| QUERIES_WITH_NO_FUNCTIONS |     1 |
|*  2 |   INDEX RANGE SCAN          | QUWF_DATE_FBI             |      (more...)

A Closer Look at CALIBRATE_IO

Topic: This blog entry is about investigating Oracle's DBMS_RESOURCE_MANAGER.CALIBRATE_IO

Spoiler: If you have reached this article in search of a tool for quantitative analysis of storage performance, and in particular for measuring random-read I/O in Oracle, I'd advise you instead to use tools that generate test workloads in a controlled manner, in a way that can be understood and measured, and that in particular report latency details together with IOPS measurements. For example (more...)
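
For reference, a minimal sketch of invoking the procedure under discussion (the disk count and latency values are illustrative only; the call also requires asynchronous I/O to be enabled for data files):

set serveroutput on
declare
  l_max_iops       pls_integer;
  l_max_mbps       pls_integer;
  l_actual_latency pls_integer;
begin
  dbms_resource_manager.calibrate_io(
    num_physical_disks => 4,    -- illustrative value
    max_latency        => 10,   -- illustrative value, in milliseconds
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_actual_latency);
  dbms_output.put_line('max_iops       = ' || l_max_iops);
  dbms_output.put_line('max_mbps       = ' || l_max_mbps);
  dbms_output.put_line('actual_latency = ' || l_actual_latency);
end;
/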

Solaris 11 – IPS Local Repository

A fast and straightforward approach for creating a local IPS repository.

CREATING THE REPOSITORY

The steps are basic:
1. Download the repository;
2. Concatenate the iso files;
3. Mount the iso;
4. Create the zfs dataset;
5. Create the repository;
6. Copy the repository;
7. Build the search index;
8. Set the repository;
9. Unmount the iso;

Code example: Note – pkg set-publisher: -G ‘*’ […]

When Joins Go Bad

So far in the joins series we’ve looked at the effect removing joins (via denormalization) has on performance. We’ve seen that joins can cause primary key lookups to do more work. Lowering the normalization level to remove these can negatively impact “search” style queries though. More importantly, we’ve seen the real cost of denormalizing to remove joins comes when updating records, potentially leading to concurrency waits and application bugs.
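
A minimal sketch of that update cost with made-up tables: in the denormalized design the customer name is repeated on every order row, so a simple rename touches (and locks) many rows, while in the normalized design it is a single-row change:

-- Denormalized: the name is copied onto every order, so one rename updates
-- many rows and can block concurrent sessions working on those orders.
update orders_denorm
set    customer_name = 'New Name'
where  customer_id   = 42;

-- Normalized: the same change is a single-row update on the parent table.
update customers
set    customer_name = 'New Name'
where  customer_id   = 42;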

So are joins always “good”?

The (more...)

Denormalizing for Performance is a Bad Idea – Your Updates are Killing Me

In the previous article in the joins series we compared query performance between a third normal form schema and the same schema denormalized to second normal form. We then extended the example so our denormalized schema was in just first normal form.

The normalized approach performed better overall. The differences were small though – generally just a few consistent gets and all the queries executed in under a second. As Jeff Atwood points out (more...)

Denormalizing for Performance Is a Bad Idea

Continuing the series on joins, I’m going to look at denormalization. This process reduces the number of joins necessary to return results for a schema.

One of the big arguments against normalizing data is “for performance”. The process of normalization creates new tables as relations are decomposed according to their functional dependencies. This means (more) joins are necessary to return the same results.

A google of “database normalization performance” turns up several articles like this (more...)

New Version Of XPLAN_ASH Utility

A minor update 4.01 to the XPLAN_ASH utility is available for download.

As usual the latest version can be downloaded here.

These are the notes from the change log:

- More info for RAC Cross Instance Parallel Execution: Many sections now show GLOBAL aggregate info in addition to instance-specific data

- The Parallel Execution Server Set detection and ASSUMED_DEGREE info now makes use of the undocumented PX_STEP_ID and PX_STEPS_ARG info (bit mask part (more...)