Data Guard Unexpected Lag

When configuring a physical standby database for Oracle using Data Guard, you need to create standby redo logs to allow redo to be applied in (near) real time on the standby. Without standby redo logs, Oracle waits for an entire archive log to be filled and copied across to the standby before it applies changes, which can take quite a while.
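For reference, a minimal sketch of adding a standby redo log group (the group number, path and size here are illustrative; standby redo logs should be the same size as the online redo logs, with one more group per thread):

alter database add standby logfile group 11
  ('/u01/oradata/ORCL/standby_redo11.log') size 512m;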

Which leads me to the problem I encountered a while ago, (more...)

connection manager – what happens at startup & shutdown

Oracle Connection Manager (cman) is a great tool for creating a gateway between networks that cannot be connected directly, or for filtering who can access a service.
Most of the time it's a very robust implementation and doesn't need a lot of attention. But if you ever have to dig into it, a basic knowledge of its components can be useful.


In this post I'll show what happens in a simple startup (& (more...)
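As a taste of what the post covers, cman is normally driven through the cmctl utility; a minimal sketch, assuming an instance called cman_test is defined in cman.ora:

$ cmctl
CMCTL> administer cman_test
CMCTL> startup
CMCTL> shutdown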

ORA-01422: exact fetch returns more than requested number of rows

When cloning EBS 12.2, you may encounter the issue below:

INFO : Rapid Clone completed successfully , but the AutoConfig run recorded some errors.
Please review the AutoConfig section in the logfile. and Re-Run the entire cloning cycle , after fixing the problem.

Checked the logfile:

[APPLY PHASE]
AutoConfig could not successfully execute the following scripts:
Directory: /u01/LEBEBSDEV/fs2/FMW_Home/webtier/perl/bin/perl -I /u01/LEBEBSDEV/fs2/FMW_Home/webtier/perl/lib/5.10.0 -I /u01/LEBEBSDEV/fs2/FMW_Home/webtier/perl/lib/site_perl/5.10.0 -I /u01/LEBEBSDEV/fs2/EBSapps/appl/au/12.0.0/perl -I (more...)

Oracle Tidbits – December 2018 #oratidbit

“Oracle *daily* TidBits” (#oratidbit) published on Facebook, Twitter, and Google+ during weekdays in December 2018. The list is short, as there were no #oratidbit posts during the Christmas holidays. I hope these help you learn something new, or remind you that a feature exists and how to use it. HAPPY NEW YEAR 2019! #oratidbit #orcldb When you move the online data file using ALTER …
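The truncated tidbit above presumably refers to the online datafile move introduced in Oracle 12c; for reference, the command has this general shape (the paths are illustrative):

alter database move datafile '/u01/oradata/ORCL/users01.dbf'
  to '/u02/oradata/ORCL/users01.dbf';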

2018 : A Year in Review

What a strange year this has been for me from a technology perspective!

The Good

Lots of good things have happened over the year.

  • I did presentations at 15 separate tech events, as well as a talk to students at a local university. I do some talks at work too, but you can’t really count that. Even though I had some drama at some of the conferences, the presentations went well for the most part.
  • (more...)

Happy New Year 2019

Wish you all a very Happy, Healthy and Successful New Year 2019. Have a blessed year ahead!

Cheers!!

Your New Year’s Resolution

Aligning roughly with the calendar year, based on the Chinese zodiac we’re about to go from the year of the dog to the year of the pig. But for me, in the “Information Technology Zodiac”, 2018 was the year of the hack, just as 2017 was, and just as 2019 will be.

I’ve not dedicated much time to keeping a record of all of the high profile breaches this year, (more...)

How to manage DB and Cell servers remotely on Exadata with ExaCLI utility – (Part 1)

Anyone who works with Exadata, or knows how to administer its major components such as the DB and cell nodes, will certainly know the CellCLI, dcli and DBMCLI utilities. This blog post focuses on managing database and cell servers remotely using the ExaCLI and exadcli utilities.

Simply put, ExaCLI is a command-line tool, available by default on database and cell nodes, that provides remote management capabilities for database and cell (more...)
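To give a flavour of the syntax (the host and user names below are made up for illustration), a remote CellCLI command run through ExaCLI looks something like this:

$ exacli -c celladmin@exacel01 -e "list cell detail"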

Top-N again: fetch first N rows only vs rownum

Three interesting myths about the row-limiting clause vs rownum were recently posted on our Russian forum:

  1. TopN query with rownum<=N is always faster than “fetch first N rows only” (i.e. row_number() over (order by ...) <= N)
  2. “fetch first N rows only” is always faster than rownum<=N
  3. “SORT ORDER BY STOPKEY” stores just N top records during sorting, while “WINDOW SORT PUSHED RANK” sorts all input and stores all records sorted in memory.

Interestingly, after Vyacheslav posted the first statement as an axiom, someone dug up old tests (from 2009) and a few people ran their own, showing that “fetch first N rows” is about 2-3 times faster than the query with rownum; the final conclusion was that “fetch first” is always faster.

First of all I want to show that statement #3 is wrong and that “WINDOW SORT PUSHED RANK” with row_number works similarly to “SORT ORDER BY STOPKEY”.
It’s pretty easy to show using a sort trace.
Let’s create a simple small table test1 with 1000 rows, where A is in the range 1-1000 (just 1 block):

create table test1(a not null, b) as
  select level, level from dual connect by level<=1000;

alter session set max_dump_file_size=unlimited;
ALTER SESSION SET EVENTS '10032 trace name context forever, level 10';

ALTER SESSION SET tracefile_identifier = 'rownum';
select * from (select * from test1 order by a) where rownum<=10;

ALTER SESSION SET tracefile_identifier = 'rownumber';
select * from test1 order by a fetch first 10 rows only;

And we can see from the trace files that both queries did the same number of comparisons:

rownum:
----- Current SQL Statement for this session (sql_id=bbg66rcbt76zt) -----
select * from (select * from test1 order by a) where rownum<=10

---- Sort Statistics ------------------------------
Input records                             1000
Output records                            10
Total number of comparisons performed     999
  Comparisons performed by in-memory sort 999
Total amount of memory used               2048
Uses version 1 sort
---- End of Sort Statistics -----------------------

row_number

----- Current SQL Statement for this session (sql_id=duuy4bvaz3d0q) -----
select * from test1 order by a fetch first 10 rows only

---- Sort Statistics ------------------------------
Input records                             1000
Output records                            10
Total number of comparisons performed     999
  Comparisons performed by in-memory sort 999
Total amount of memory used               2048
Uses version 1 sort
---- End of Sort Statistics -----------------------


I.e. each row (except the first one) was compared with the biggest of the current top 10 values, and since it was bigger than that value, Oracle didn’t compare it with the other Top-N values.

And if we change the order of the rows in the table, both of these queries again do the same number of comparisons:

Test with values descending from 999 to 0:

create table test1(a not null, b) as
  select 1000-level, level from dual connect by level<=1000;

alter session set max_dump_file_size=unlimited;
ALTER SESSION SET EVENTS '10032 trace name context forever, level 10';

ALTER SESSION SET tracefile_identifier = 'rownum';
select * from (select * from test1 order by a) where rownum<=10;


ALTER SESSION SET tracefile_identifier = 'rownumber';
select * from test1 order by a fetch first 10 rows only;

rownum

----- Current SQL Statement for this session (sql_id=bbg66rcbt76zt) -----
select * from (select * from test1 order by a) where rownum<=10

---- Sort Statistics ------------------------------
Input records                             1000
Output records                            1000
Total number of comparisons performed     4976
  Comparisons performed by in-memory sort 4976
Total amount of memory used               2048
Uses version 1 sort
---- End of Sort Statistics -----------------------

row_number

----- Current SQL Statement for this session (sql_id=duuy4bvaz3d0q) -----
select * from test1 order by a fetch first 10 rows only

---- Sort Statistics ------------------------------
Input records                             1000
Output records                            1000
Total number of comparisons performed     4976
  Comparisons performed by in-memory sort 4976
Total amount of memory used               2048
Uses version 1 sort
---- End of Sort Statistics -----------------------


We can see that both queries required many more comparisons (4976) here. That’s because each new value is smaller than the biggest value in the top N, and even smaller than the lowest, so Oracle has to find the right position for it, and that takes 5 comparisons (it compares with the 10th value, then with the 6th, 3rd, 2nd and 1st values from the top 10). Obviously it takes fewer comparisons for the first 10 rows, while the top-10 list is still being built.

Now let’s talk about statements #1 and #2.
We know that rownum forces the optimizer to switch to “first K rows” mode, because of the parameter “_optimizer_rownum_pred_based_fkr”:

SQL> @param_ rownum

NAME                               VALUE  DEFLT  TYPE      DESCRIPTION
---------------------------------- ------ ------ --------- ------------------------------------------------------
_optimizer_rownum_bind_default     10     TRUE   number    Default value to use for rownum bind
_optimizer_rownum_pred_based_fkr   TRUE   TRUE   boolean   enable the use of first K rows due to rownum predicate
_px_rownum_pd                      TRUE   TRUE   boolean   turn off/on parallel rownum pushdown optimization

while fetch first/row_number does not (this will change after patch #22174392), and that leads to the following consequences:
1. first_rows disables the serial direct reads optimization, which is why the tests with big tables showed “fetch first” running much faster than the query with rownum.
So if we set “_serial_direct_read”=always, we get the same performance in both tests (within the margin of error).
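For example (a minimal sketch; “_serial_direct_read” is an unsupported underscore parameter, so treat this as a test-system setting only):

alter session set "_serial_direct_read" = always;

After that, both the rownum and the “fetch first” variants read the table the same way, and their timings can be compared fairly.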

2. In cases where index access (index full scan/index range scan) is better, the CBO calculates the cardinality of the underlying INDEX FULL (range) SCAN differently:
the query with rownum is optimized for first K rows and the cardinality of the index access is equal to K rows, but the CBO doesn’t reduce the cardinality for “fetch first”, so the cost of the index access is much higher. Compare:

rownum
SQL> explain plan for
  2  select *
  3  from (select * from test order by a,b)
  4  where rownum<=10;

--------------------------------------------------------------------------------------------
| Id  | Operation                     | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |            |    10 |   390 |     4   (0)| 00:00:01 |
|*  1 |  COUNT STOPKEY                |            |       |       |            |          |
|   2 |   VIEW                        |            |    10 |   390 |     4   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| TEST       |  1000K|    12M|     4   (0)| 00:00:01 |
|   4 |     INDEX FULL SCAN           | IX_TEST_AB |    10 |       |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=10)

fetch first

SQL> explain plan for
  2  select *
  3  from test
  4  order by a,b
  5  fetch first 10 rows only;

-----------------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |      |    10 |   780 |       |  5438   (1)| 00:00:01 |
|*  1 |  VIEW                    |      |    10 |   780 |       |  5438   (1)| 00:00:01 |
|*  2 |   WINDOW SORT PUSHED RANK|      |  1000K|    12M|    22M|  5438   (1)| 00:00:01 |
|   3 |    TABLE ACCESS FULL     | TEST |  1000K|    12M|       |   690   (1)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=10)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "TEST"."A","TEST"."B")<=10)

fetch first + first_rows

SQL> explain plan for
  2  select/*+ first_rows */ *
  3  from test
  4  order by a,b
  5  fetch first 10 rows only;

--------------------------------------------------------------------------------------------
| Id  | Operation                     | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |            |    10 |   780 | 27376   (1)| 00:00:02 |
|*  1 |  VIEW                         |            |    10 |   780 | 27376   (1)| 00:00:02 |
|*  2 |   WINDOW NOSORT STOPKEY       |            |  1000K|    12M| 27376   (1)| 00:00:02 |
|   3 |    TABLE ACCESS BY INDEX ROWID| TEST       |  1000K|    12M| 27376   (1)| 00:00:02 |
|   4 |     INDEX FULL SCAN           | IX_TEST_AB |  1000K|       |  2637   (1)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=10)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "TEST"."A","TEST"."B")<=10)

fetch first + index

SQL> explain plan for
  2  select/*+ index(test (a,b)) */ *
  3  from test
  4  order by a,b
  5  fetch first 10 rows only;

--------------------------------------------------------------------------------------------
| Id  | Operation                     | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |            |    10 |   780 | 27376   (1)| 00:00:02 |
|*  1 |  VIEW                         |            |    10 |   780 | 27376   (1)| 00:00:02 |
|*  2 |   WINDOW NOSORT STOPKEY       |            |  1000K|    12M| 27376   (1)| 00:00:02 |
|   3 |    TABLE ACCESS BY INDEX ROWID| TEST       |  1000K|    12M| 27376   (1)| 00:00:02 |
|   4 |     INDEX FULL SCAN           | IX_TEST_AB |  1000K|       |  2637   (1)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=10)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "TEST"."A","TEST"."B")<=10)


So in this case we can add the “first_rows” or “index” hint, or install patch #22174392.

PS: I had planned to post this note later, since I didn’t have enough time to add other interesting details about the different Top-N variants, including “with ties”, rank(), etc., so I’ll publish another note with more details later.

It’s not my job to do your job for you!

I’m currently drowning in a sea of requests for a number of things, including people wanting me to help them with specific issues, which feels a bit like they want me to do their job for them. I feel a little bit uncharitable saying that, but that’s how it feels.

Let me start by saying I understand what it’s like to be asked to do stuff in your job that you don’t know anything about, (more...)

Another day…another “use the right datatype” post

Here’s an interesting little oddity (aka bug) with scalar queries.

We’ll start with a simple working example


SQL> create table t1 ( c1 number );

Table created.

SQL> insert into t1 values (1);

1 row created.

SQL> create table t2 ( c1 int, c2 varchar2(10));

Table created.

SQL>
SQL> insert into t2 values(1,'t1');

1 row created.

SQL> insert into t2 values(1,'t01');

1 row created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('','T1')

PL/SQL procedure successfully  (more...)

Cloning Around


"We never choose which words to use, for as long as they mean what
they mean to mean, we don’t care if they make sense or nonsense." 
― Norton Juster, The Phantom Tollbooth

Sometimes it’s necessary to clone an existing ORACLE_HOME; one case for this is when the business requires a new ORACLE_HOME when CPU patchsets are applied. With some operating systems, cloning a home is simply a matter of creating a suitable archive (more...)
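The general pattern on Linux (the paths and names below are illustrative, and the details vary by version and OS) is to archive the source home, unpack it at the new location, and run clone.pl to register the copy:

$ tar -czf /tmp/db_home.tar.gz -C /u01/app/oracle/product 12.1.0.2
$ mkdir -p /u01/app/oracle/product/12.1.0.2_clone
$ tar -xzf /tmp/db_home.tar.gz -C /u01/app/oracle/product/12.1.0.2_clone --strip-components=1
$ perl /u01/app/oracle/product/12.1.0.2_clone/clone/bin/clone.pl \
    ORACLE_BASE=/u01/app/oracle \
    ORACLE_HOME=/u01/app/oracle/product/12.1.0.2_clone \
    ORACLE_HOME_NAME=OraDB12Home_clone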

dbca doesn’t list diskgroups

This is an Exadata machine running GI version 18.3.0.0.180717 and DB version 12.1.0.2.180717. On one of the DB nodes, dbca doesn’t list the diskgroups; it works fine on the other node.

I checked the dbca trace and found that the kfod command was failing. I tried to run it manually and got the same error:

[oracle@exadb01 ~]$ /u01/app/18.0.0.0/grid/bin/kfod op=groups verbose=true
KFOD-00300:  (more...)

A Christmas Carol

You better watch out,
Let me tell you a fact.
If your SQL has literals,
You’re gonna be hacked.

SQL Injection is comin’ to town

We’ve got a library cache,
It’s memory all linked twice.
But it only works well,
When you’re binding all nice.

Santa Claus is comin’ to town

We know when you’ve been lazy,
And concatenated simple strings.
So just make sure you bind your stuff,
And don’t let the bad guys (more...)

bsu.sh Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded

Error:

[appluat@ebs-xxx–xxx bsu]$ bsu.sh -install -patch_download_dir=/u01/EBSUPGR/fs1/FMW_Home/utils/bsu/cache_dir -patchlist=EQDE -prod_dir=/u01/EBSUPGR/fs1/FMW_Home/wlserver_10.3
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.apache.xmlbeans.impl.schema.SchemaTypeImpl.getProperties(SchemaTypeImpl.java:707)
at com.bea.cie.common.dao.xbean.XBeanDataHandler.loadPropertyMap(XBeanDataHandler.java:775)
at com.bea.cie.common.dao.xbean.XBeanDataHandler.<init>(XBeanDataHandler.java:99)
at com.bea.cie.common.dao.xbean.XBeanDataHandler.createDataHandler(XBeanDataHandler.java:559)
at com.bea.cie.common.dao.xbean.XBeanDataHandler.getComplexValue(XBeanDataHandler.java:455)
at com.bea.plateng. (more...)
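The usual remedy (verify the exact variable name in your own copy of bsu.sh before editing; the values below are illustrative) is to raise the JVM heap settings hard-coded in the script and re-run the installation:

MEM_ARGS="-Xms1024m -Xmx2048m"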

Visibility vs Results

If you speak to my colleagues you will know one of my regular topics of conversation at meetings is the “judgement of worth” within the company. I get quite annoyed when I see people who I believe are adding value, but always seem to get ignored, while others who talk a great talk, but ultimately can’t walk the walk, seem to catch all the breaks. It seems visibility is more important than getting results these (more...)

Ode to Azure Cloud Shell on Christmas

When I arrived at Microsoft, I knew I would hopefully get to use my Linux skills for more than teaching SQL Server DBAs about Linux and was pleasantly surprised as I began working in Azure to find that, of course, it’s ALL LINUX.

After almost six months at the company and coming into the Christmas week, I’m thankful for all the technology I’m working with and for what many assume Microsoft won’t be about - the (more...)

Find Database Growth Using OEM Repository

Typically, what has been done is to schedule a job for each database to collect database growth.

This can be problematic, as the job may be forgotten when new databases are created, whereas a new database is far less likely to be left out of OEM monitoring.

EM12c, EM13c : Querying the Repository Database for Building Reports using Metric Information (Doc ID 2347253.1)

The raw data is inserted into various tables, EM_METRIC_VALUES for example.
EM aggregates those management (more...)
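A sketch of the kind of repository query involved (the metric_name and metric_column values here are assumptions to verify against your EM release):

select target_name,
       rollup_timestamp,
       metric_column,
       average
from   sysman.mgmt$metric_daily
where  target_type = 'oracle_database'
and    metric_name = 'DATABASE_SIZE'
and    metric_column in ('ALLOCATED_GB','USED_GB')
order  by target_name, rollup_timestamp;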

QC vs. PX

One last post before closing down for the Christmas break.

Here’s a little puzzle with a remarkably easy and obvious solution that Ivica Arsov presented at the UKOUG Tech2018 conference. It’s a brilliant little puzzle that makes a very important point, because it reminded me that most problems are easy and obvious only after you’ve seen them at least once. If you’ve done a load of testing and investigation into something, it’s easy to (more...)

Your AskTOM Top 10 for 2018

Here we go folks…here are the top 10 most viewed questions in AskTOM this year!

We’ll count them down from 10 to 1

10) Inserting values into a table with ‘&’

Viewed 80,000 times.

First asked in 2012, this is actually not a database issue but a client issue. Many client tools treat the ampersand as a substitution variable, and thus they intercept the execution before the command is sent to the database. Full details here.
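For example, in SQL*Plus and SQLcl the usual workaround is to disable substitution variables before running the insert (table t here is illustrative):

SQL> set define off
SQL> insert into t values ('Fish & Chips');

1 row created.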

(more...)