One of the ways Zone Maps can be refreshed when the underlying table data is changed is fast on commit. This is similar to how materialized views can be refreshed, with the exception that a Zone Map does not need a materialized view log to do so. It can also lead to some peculiar side effects.
Test setup
Let's begin by creating a test table with an on commit refresh materialized zone map:
SQL> create (more...)
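The statement is cut off above; for illustration, a minimal setup along those lines might look like the following. This is only a sketch: the table, column names and zone map name are assumptions, not the post's actual ones.
SQL> create table t (n number, v varchar2(30));

SQL> create materialized zonemap zm$t
  2  refresh fast on commit
  3  as select sys_op_zone_id(rowid), min(n), max(n)
  4  from t
  5  group by sys_op_zone_id(rowid);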
Just a quick follow-up to my previous post on how Zone IDs are calculated.
Let's take the following example:
SQL> select rid, sys_op_zone_id(rid) zone_id
  2  from
  3  (
  4  select chartorowid('AAAS5KAAHAAABYDAAA') rid
  5  from dual
  6  );
Recall that an extended ROWID has the following format (there is a nice picture of it in the Oracle Documentation).
In the binary format that would correspond to:
- Data Object Number -- 32 bits
- Relative File (more...)
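As a side note, the individual ROWID components can also be pulled apart with DBMS_ROWID. This is a generic sketch using the same example ROWID, not part of the original post:
SQL> select dbms_rowid.rowid_object(rid)       data_object_id,
  2         dbms_rowid.rowid_relative_fno(rid) relative_fno,
  3         dbms_rowid.rowid_block_number(rid) block_number,
  4         dbms_rowid.rowid_row_number(rid)   row_number
  5  from (select chartorowid('AAAS5KAAHAAABYDAAA') rid from dual);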
Zone Maps are a new feature that got officially introduced in 12.1.0.2.0, so I've decided to take a closer look.
From the Oracle Documentation:
For full table access, zone maps allow I/O pruning of data based on the physical location of the data on disk, acting like an anti-index.
Let's start by creating a test table:
SQL> create table t pctfree 95 clustering by linear order (n) as
2 select level (more...)
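The statement is truncated above; a complete statement in the same spirit might look like this (the column list, padding and row count are assumptions for illustration), followed by a basic zone map on the clustering column:
SQL> create table t pctfree 95 clustering by linear order (n) as
  2  select level n, rpad('x', 200, 'x') v
  3  from dual
  4  connect by level <= 100000;

SQL> create materialized zonemap zm$t on t (n);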
I've noticed that people post from time to time how to get the AWR Top 5 Timed Foreground Events over a range of snapshots using a SQL query. Since this is something I've done for years, here is the version of the SQL I use in case somebody finds it useful:
select case wait_rank when 1 then inst_id end "Inst Num",
case wait_rank when 1 then snap_id end "Snap Id",
case wait_rank when 1 then begin_snap (more...)
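The full query is cut off above. For reference, a simplified query in the same spirit (not the author's version, which also folds in DB CPU and snapshot times) could rank wait events per snapshot from DBA_HIST_SYSTEM_EVENT like this:
with deltas as (
  select snap_id, instance_number, event_name,
         time_waited_micro
         - lag(time_waited_micro) over
             (partition by instance_number, event_name
              order by snap_id) time_delta
    from dba_hist_system_event
   where wait_class <> 'Idle'
),
ranked as (
  select snap_id, instance_number, event_name, time_delta,
         rank() over (partition by snap_id, instance_number
                      order by time_delta desc) wait_rank
    from deltas
   where time_delta > 0
)
select instance_number, snap_id, event_name,
       round(time_delta / 1e6, 2) seconds_waited, wait_rank
  from ranked
 where wait_rank <= 5
 order by instance_number, snap_id, wait_rank;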
In part I I've shown some interesting side effects that happen when you're trying to select from a table block which has an outstanding active transaction in it. In this post we're going to make things a little bit more interesting by introducing indexes into the picture.
Test Setup
I'll create a table with two rows and an index:
SQL> create table test as
2 select level n
3 from dual
4 connect by level <= 2;
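The index creation itself is not shown in this excerpt; a statement along these lines would do it (the index name is an assumption):
SQL> create index test_idx on test (n);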
I was going to write a blog post about some peculiar side effects you can get into with delayed block cleanout when running parallel DML, but soon discovered that the entry grew so big that I've decided to split it up into a series of more manageable posts.
For a good background on various themes of block cleanout check out Clean it up by Jonathan Lewis.
Active transactions, consistent reads and table scans
Core dump issues can sometimes be notoriously difficult to troubleshoot. I got a call this morning from one of my customers saying that after a power outage Grid Infrastructure was not able to fully come up on some nodes of their Exadata cluster. After further examining the situation it turned out that the crsd.bin binary was simply core dumping upon startup.
Troubleshooting Grid Infrastructure startup issues when nothing is core dumping can sometimes be (more...)
One interesting enhancement in 12cR1 PL/SQL is the UDF pragma, which has the following description:
The UDF pragma tells the compiler that the PL/SQL unit is a user defined function that is used primarily in SQL statements, which might improve its performance.
I thought it would be very cool to try (more...)
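For reference, the pragma itself is trivial to apply. Below is a minimal sketch with a made-up function, just to show the syntax:
SQL> create or replace function add_one(p_n number)
  2    return number
  3  is
  4    pragma udf;  -- tell the compiler this function is meant to be called from SQL
  5  begin
  6    return p_n + 1;
  7  end;
  8  /

SQL> select add_one(41) from dual;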
One interesting aspect of the Oracle 12cR1 database when it comes to PDBs is how latching is done. For example, if all PDBs have to work under the same latch, then contention in one PDB can easily affect users in other PDBs too.
Continuing my series of posts on the Result (more...)
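As a starting point for that kind of investigation, one can look at how latch statistics are exposed per container. This is a hedged sketch, not the post's actual test: the latch name matches the Result Cache latch in recent releases, and whether its activity shows up under individual PDBs or only instance-wide (CON_ID 0) is exactly the sort of thing the post digs into.
SQL> select con_id, gets, misses, sleeps
  2  from v$latch
  3  where name = 'Result Cache: RC Latch';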
In this post I'll show how extremely easy it is to maintain the number of distinct values when using HyperLogLog. Please refer to my previous post for a description of how HyperLogLog works.
Let's assume we have a table with some existing data:
SQL> create table existing_data as
2 select round(dbms_random. (more...)
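The statement is truncated above; a setup in the same spirit might look like this (the exact expression and row count are assumptions):
SQL> create table existing_data as
  2  select round(dbms_random.value(1, 1000000)) n
  3  from dual
  4  connect by level <= 1000000;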
Calculating the number of unique values using Oracle's distinct for big data sets has two major problems:
- It may require lots of memory for sort/hash group by.
- It is very difficult to refresh distinct numbers incrementally, meaning every time you append some new data you generally have to perform distinct calculations (more...)
There has been some information written on the subject already (see this post by Randolf Geist).
In a nutshell, the way the optimizer costs full table scans when using flashback query makes them look much more expensive than without. What further complicates the problem is the fact that index access costs (more...)
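A quick way to see the effect for yourself (a generic sketch, not from the post; the table name t is assumed) is to compare the optimizer's cost for the same full scan with and without a flashback clause:
SQL> explain plan for select * from t;
SQL> select * from table(dbms_xplan.display);

SQL> explain plan for
  2  select * from t as of scn dbms_flashback.get_system_change_number;
SQL> select * from table(dbms_xplan.display);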
Just a quick note that I'll be presenting at this year's Enkitec E4 conference. You can find the schedule here. I did some under-the-hood investigation regarding how the whole DBFS stack works from the performance perspective and, needless to say, some findings simply left me startled. If you (more...)
I have already written some words on the subject before. However, since then some interesting things have happened.
To recap (or to save you some time if you don't want to read the original article), GoldenGate Integrated Capture is nothing but Oracle Streams Capture in disguise. When running in the (more...)
Take a look at the following Parallel section of a SQL Monitor report:
Any query that produces such a report won't care how much parallelism you're running with, because virtually all of the time is spent by the query coordinator (which is a serial process) being busy.
In this case the query in question is quite simple:
select /*+ parallel(t,8) */ median(basket_amount) from whs.fact_sale t
The reason it behaves the way it does has everything to do with how Oracle executes it:
Plan hash value: 712547042
| Id | Operation | Name | Rows | Bytes (more...)
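As a side note, if you want to reproduce this kind of analysis yourself, the SQL Monitor report for a statement can be pulled after the fact with DBMS_SQLTUNE (a generic sketch; substitute the actual SQL_ID):
SQL> select dbms_sqltune.report_sql_monitor(
  2           sql_id => '&sql_id',
  3           type   => 'TEXT') report
  4  from dual;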
In my previous post I've demonstrated how in-memory PQ can access the table directly from the buffer cache even when you're using manual DOP.
One interesting question, however, is what happens when a PQ slave needs to read some blocks from disk given that the object has been qualified for in-memory (cached) access? Would the slave do it using direct or buffered I/O?
The answer becomes somewhat clear once you realize that in-memory PQ is enabled by simply utilizing buffered reads. Since direct path reads cannot take advantage of any blocks stored in the buffer cache (local or remote), trying to (more...)
In-memory parallel execution seems to be gaining popularity, especially among people running X2-8 and X3-8 Exadata systems or any other system that has large amounts of memory capable of caching lots of data.
The Oracle documentation suggests that in order to utilize in-memory PQ, parallel_degree_policy needs to be set to auto. One of the parameters influenced by parallel_degree_policy is _parallel_cluster_cache_policy: when using Auto DOP, _parallel_cluster_cache_policy will be set to cached. The question then becomes: what happens if we set _parallel_cluster_cache_policy=cached while still keeping Manual DOP? Will the system use in-memory PQ?
Below is a test table setup:
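The setup itself is not included in this excerpt; something generic along these lines would do (the table name, column list and size are assumptions):
SQL> create table t as
  2  select level n, rpad('x', 200, 'x') pad
  3  from dual
  4  connect by level <= 1000000;

SQL> exec dbms_stats.gather_table_stats(user, 'T')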
The problem of transient PK updates is well known and comes from the fact that pretty much every replication solution on the market applies changes using a row-by-row approach.
To quickly recap the problem, if you have a table like:
SQL> create table tpk (n number primary key);
SQL> insert into tpk values (1);
1 row inserted
SQL> insert into tpk values (2);
1 row inserted
Then executing the following statement...
update tpk set n=n+1
...will result in a transient PK problem since the replication solution will have to decompose it into the following two (more...)
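The continuation is cut off above, but the general idea is easy to sketch: applied row by row on the target, the single statement turns into two independent updates, and depending on the order they can collide with the existing primary key values. The statements below illustrate the idea rather than the post's exact output:
-- applied in the "wrong" order on the target:
update tpk set n = 2 where n = 1;  -- fails with ORA-00001: the row with n = 2 still exists
update tpk set n = 3 where n = 2;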
When using Oracle GoldenGate sequence replication there are a number of issues you need to be aware of, especially if you replicate quite a lot of busy sequences.
The first issue is that GoldenGate sequence replication does not use bind variables. Let's execute the following statements on the source system:
SQL> create sequence rep1.s1 nocache;
SQL> select rep1.s1.nextval from dual;
SQL> select rep1.s1.nextval from dual;
GoldenGate uses a PL/SQL procedure called replicateSequence each time it needs to sync sequence values. The following calls will be made on the (more...)
About a year ago I decided to put all my data on a home-built ZFS storage server. The growing number of devices around my household called for an easier and much faster way to share the data. Since then the box has been happily serving both CIFS and iSCSI over a 1GbE network without any issues.
I had been keen on upgrading to 10GbE for quite some time, as both my server and my clients could easily saturate a 1GbE link when ZFS had all the required data in ARC. 32GB of RAM in my storage server usually left me with (more...)