Just been tearing my hair out on some unexpected SQL behaviour in SQL Developer.
Eventually I managed to get to the root cause of the problem as illustrated below.
SQL> var ps number
SQL> exec :ps := 4001644945;
PL/SQL procedure successfully completed.
SQL> select :ps from dual;
SQL Developer v3.2 and v4.0 via “Run As Script (F5)”:
var ps number
exec :ps := 4001644945;
select :ps from dual;
I shouldn’t be using an old version of SQL Developer like v3 (3.2.10.09) anyway but, to cut a long story short, I am.
I’ve also got v4, which doesn’t display the same problem.
Just pointing out an oddity whilst investigating something for someone:
with x as
(select to_date('28-MAR-2015 01:30','DD-MON-YYYY HH24:MI') dt1
, to_date('29-MAR-2015 01:30','DD-MON-YYYY HH24:MI') dt2
, dt2 "date"
, cast(dt2 as timestamp) "timestamp"
, to_char(cast(dt2 as timestamp),'DD-MON-YYYY (more...)
Just a quick plug for looking at raw sql trace files via SQL Developer.
There is a nice Tree View:
Which can be expanded:
Also summary view of statistics, filterable:
And a list view, filterable and orderable:
Some sort of right click summary for binds/waits might be a nice addition.
Moving through some observations of an ongoing investigation tackling the easiest ones first.
Previously I observed the slow execution of a recursive delete of statistics history but coming not from MMON but from a number of application processes which should have only been calculating stats for their own partitions.
Statement was sql id 9v9n97qj8z1dg:
delete /*+ dynamic_sampling(4) */
from sys.wri$_optstat_histhead_history
where savtime < :1
and rownum <= NVL(:2, rownum);
From the real time sql (more...)
Yesterday I mentioned issues with a recursive delete operation on statistics history.
This is a quick illustration of the last points I made in that post regarding the lack of a COUNT STOPKEY optimisation because of the use of NVL.
COUNT STOPKEY is an optimisation which allows processing to stop once the target number of rows has been reached.
create table t1 as select * from dba_objects;
alter session set statistics_level = all;
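From memory of typical plans, the gist of the demonstration runs along these lines (a sketch; the bind name and plan lines are illustrative, not from the original post):

var n number
exec :n := 10;

select count(*) from (select * from t1 where rownum <= :n);
-- plan typically includes COUNT STOPKEY: fetching stops once 10 rows are returned

select count(*) from (select * from t1 where rownum <= nvl(:n, rownum));
-- plan typically shows a FILTER over a plain COUNT: if :n is null the
-- predicate becomes rownum <= rownum (always true), so the optimiser
-- cannot stop early and every row is processed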
Today I helped resolve a problem quickly but to the detriment of my understanding of exactly what was going on and why.
And as part of the quicker resolution, I had to destroy the evidence which would have helped understand better.
So… now I need to go back and figure it out, if I have time to model it properly, etc.
Here’s what little I know so far.
What happened was that there were a number (more...)
In older versions, DBMS_STATS.PURGE_STATS does a slow delete of stats before the parameterised input timestamp.
Why might you be purging? Here’s one such illustration:
This delete can be slow if these tables are large, and there are a number of reasons why they might be, notably if MMON cannot complete the purge within its permitted timeframe.
But note that if you’re happy to purge all history, there (more...)
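The excerpt hints at the faster route; a minimal sketch, assuming a version where the DBMS_STATS.PURGE_ALL constant is available:

exec dbms_stats.purge_stats(dbms_stats.purge_all);
-- purge_all truncates the stats history tables rather than
-- deleting row by row, so it is not limited to a timestamp cutoff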
Just a reference to a really simple difference between CALL and EXEC.
I thought I had mentioned this before but couldn’t find it so…
EXEC/EXECUTE is a SQL*Plus command which wraps the proc call in an anonymous BEGIN … END; block.
CALL is a SQL command hence it is limited to SQL data types and there are other restrictions which the documentation sums up pretty well.
Because CALL is SQL, there is one key behavioural (more...)
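A sketch of the two forms (procedure name hypothetical):

SQL> exec my_proc(:x)
-- SQL*Plus expands this client-side into an anonymous block:
-- BEGIN my_proc(:x); END;

SQL> call my_proc(:x);
-- a SQL statement parsed by the server; parentheses are required
-- even when there are no arguments: call my_proc();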
Two minor partition gotchas: one on partition extent sizing and one on the number of partitions in an interval partitioned table.
First – old news – note that the initial extent size for partitioned tables changed in 11.2.0.2.
Somehow this passed me by and I was late to the large extent party – duh!
Influenced by the parameter _partition_large_extents, the default changed from 64K to 8MB for autoallocated ASSM extent sizing.
Pre 11.2. (more...)
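To see the default in action, a sketch (assuming 11.2 with ASSM autoallocate; note that with deferred segment creation the segment only materialises on first insert):

create table tp (x number)
partition by range (x)
(partition p1 values less than (10));

insert into tp values (1);
commit;

select partition_name, bytes
from   user_segments
where  segment_name = 'TP';
-- with _partition_large_extents at its default of TRUE,
-- expect an 8MB initial extent rather than 64K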
Learning, relearning or unforgetting…
What value is there in a DISABLEd constraint?
This was a question on the OTN forums this week and a) my first reply was wrong and b) I couldn’t find a clear demonstration elsewhere.
The documentation is clear.
The ETL process commonly verifies that certain constraints are true. For example, it can validate all of the foreign keys in the data coming into the fact table. This means that (more...)
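The pattern the documentation is describing, as a sketch (table and column names hypothetical):

alter table fact add constraint fact_dim_fk
foreign key (dim_id) references dim (dim_id)
rely disable novalidate;

-- RELY DISABLE NOVALIDATE: the constraint is neither enforced nor
-- checked against existing data, but the optimizer may trust it
-- (e.g. for query rewrite against materialized views) when:
alter session set query_rewrite_integrity = trusted;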