Note to self, because it’s one of those date/timezone-related topics which just doesn’t seem to stick…
Epoch/Unix time – See https://en.wikipedia.org/wiki/Unix_time
Unix time (also known as POSIX time or, erroneously, as Epoch time) is a system for describing instants in time, defined as the number of seconds that have elapsed since 00:00:00 Coordinated Universal Time (UTC), Thursday, 1 January 1970.
Firstly, when converting from an Oracle DATE or TIMESTAMP, we need to (more...)
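As a sketch of the sort of conversion involved (assuming the DATE holds UTC, since a DATE carries no time zone of its own):

```sql
-- Unix time for "now": take the current instant in UTC and count
-- the seconds since 1970-01-01 (a DATE difference is in days).
select (cast(sys_extract_utc(systimestamp) as date)
        - date '1970-01-01') * 86400 as unix_now
from   dual;

-- And back again: Unix seconds to a UTC DATE.
-- 1234567890 corresponds to 2009-02-13 23:31:30 UTC.
select date '1970-01-01' + (1234567890 / 86400) as utc_date
from   dual;
```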
I’ve never really spent time on evolution because most of the time I use baselines for emergency SQL plan management, not for the added value of controlled evolution with which the feature was conceived.
But here are some observations on SQL plan baseline evolution, prompted by these questions:
- What does evolution actually do?
- What information does it use?
- What happens with DML?
Starting point – one baselined plan
1. FTS plan in memory from SQL which (more...)
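For reference, evolution is driven through DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE. A minimal sketch (the sql_handle here is hypothetical), with VERIFY and COMMIT controlling whether unaccepted plans are test-executed and whether they are actually accepted:

```sql
declare
  l_report clob;
begin
  l_report := dbms_spm.evolve_sql_plan_baseline(
                sql_handle => 'SQL_abcdef1234567890', -- hypothetical handle
                verify     => 'YES',  -- test-execute the unaccepted plans
                commit     => 'NO');  -- report only, do not accept
  dbms_output.put_line(l_report);
end;
/
```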
Execution Plan Instability – What is the problem?
The Oracle Optimizer is a complex piece of software and with every release it becomes more complex.
In the beginning, the Optimizer was rule-based.
The Optimizer had a ranked list of heuristics used to optimize a query, picking the applicable rule with the lowest rank (rank 1, single row access by ROWID, being considered the best).
This rule-based mode, whilst still in use with some internal Oracle dictionary queries, has been unsupported since version 10.1.
This means that (more...)
Just been tearing my hair out on some unexpected SQL behaviour in SQL Developer.
Eventually I managed to get to the root cause of the problem as illustrated below.
SQL> var ps number
SQL> exec :ps := 4001644945;
PL/SQL procedure successfully completed.
SQL> select :ps from dual;
SQL Developer v3.2 and v4.0 via “Run As Script (F5)”:
var ps number
exec :ps := 4001644945;
select :ps from dual;
I shouldn’t be using an old version of SQL Developer like v3 (3.2.10.09) anyway but… cut a long story short … I am.
I’ve also got v4, which doesn’t display the same problem.
Just pointing out an oddity whilst investigating something for someone:
with x as
(select to_date('28-MAR-2015 01:30','DD-MON-YYYY HH24:MI') dt1
, to_date('29-MAR-2015 01:30','DD-MON-YYYY HH24:MI') dt2
, dt2 "date"
, cast(dt2 as timestamp) "timestamp"
, to_char(cast(dt2 as timestamp),'DD-MON-YYYY (more...)
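The dates above straddle the UK spring DST change: clocks went forward at 01:00 on 29 March 2015, so 01:30 local time did not exist. A minimal sketch of where this sort of thing bites, assuming the Europe/London time zone (not stated in the truncated excerpt):

```sql
-- TO_DATE happily accepts a local time that never existed,
-- but attaching a region time zone to it can raise ORA-01878
-- ("specified field not found in datetime or interval"):
select from_tz(timestamp '2015-03-29 01:30:00', 'Europe/London')
from   dual;
```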
Just a quick plug for looking at raw sql trace files via SQL Developer.
There is a nice Tree View:
Which can be expanded:
Also summary view of statistics, filterable:
And a list view, filterable and orderable:
Some sort of right click summary for binds/waits might be a nice addition.
Moving through some observations of an ongoing investigation tackling the easiest ones first.
Previously I observed the slow execution of a recursive delete of statistics history, coming not from MMON but from a number of application processes which should have been calculating stats only for their own partitions.
The statement was SQL ID 9v9n97qj8z1dg:
delete /*+ dynamic_sampling(4) */
from sys.wri$_optstat_histhead_history
where savtime < :1
and rownum <= NVL(:2, rownum);
From the real time sql (more...)
Yesterday I mentioned issues with a recursive delete operation on statistics history.
This is a quick illustration of the last points I made on that post regarding the lack of a COUNT STOPKEY optimisation because of the use of NVL.
COUNT STOPKEY is an optimisation which allows processing to stop once the target number of rows has been reached.
create table t1 as select * from dba_objects;
alter session set statistics_level = all;
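With that setup, the difference shows in the two predicate shapes below (an illustrative sketch; the actual plans can be checked afterwards with DBMS_XPLAN.DISPLAY_CURSOR):

```sql
-- Plain bind: the plan gets a COUNT STOPKEY step, so processing
-- stops as soon as :n rows have been produced.
select * from t1 where rownum <= :n;

-- NVL-wrapped bind (the shape used by the statistics-history delete):
-- the optimiser cannot apply COUNT STOPKEY, so processing does not
-- stop early even when :n is non-null.
select * from t1 where rownum <= nvl(:n, rownum);
```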
Today I helped resolve a problem quickly but to the detriment of my understanding of exactly what was going on and why.
And as part of the quicker resolution, I had to destroy the evidence which would have helped understand better.
So… if I have time, I now need to go back, model it properly and figure it out.
Here’s what little I know so far.
What happened was that there were a number (more...)
In versions before DBMS_STATS.PURGE_ALL was available, DBMS_STATS.PURGE_STATS does a slow delete of stats before the parameterised input timestamp.
Why might you be purging? Here’s one such illustration:
This delete can be slow if these tables are large, and there are a number of reasons why they might be, notably if MMON cannot complete its own purge within its permitted timeframe.
But note that if you’re happy to purge all history, there (more...)
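A sketch of the two purge styles (DBMS_STATS.PURGE_ALL is a magic timestamp value available in later versions; it truncates the underlying history tables rather than deleting row by row):

```sql
-- Timestamped purge: row-by-row delete of history older than 14 days.
exec dbms_stats.purge_stats(systimestamp - interval '14' day);

-- Purge everything: in versions with DBMS_STATS.PURGE_ALL, this
-- truncates the WRI$_OPTSTAT% history tables instead of deleting.
exec dbms_stats.purge_stats(dbms_stats.purge_all);
```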