Since writing about my pet project about a month ago, I have made some major changes to its functionality.
If you haven't read that blog post about my pet project, here's the synopsis:
I love using Logger to instrument my code; I just don't like typing in all the bits and pieces to register all the input arguments whenever I write a new procedure or function. To solve this problem I have (more...)
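For context, the repetitive part looks roughly like this with the OraOpenSource Logger API (the procedure and parameter names here are made up for illustration):

```sql
create or replace procedure process_order (
  p_order_id in number,
  p_status   in varchar2
)
as
  l_scope  logger_logs.scope%type := 'process_order';
  l_params logger.tab_param;
begin
  -- one append_param call per input argument: this is the
  -- boilerplate that has to be retyped for every new procedure
  logger.append_param(l_params, 'p_order_id', p_order_id);
  logger.append_param(l_params, 'p_status',   p_status);
  logger.log('START', l_scope, null, l_params);

  -- ... actual business logic ...

  logger.log('END', l_scope);
end;
/
```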
Execution Plan Instability – What is the problem?
The Oracle Optimizer is a complex piece of software and with every release it becomes more complex.
In the beginning, the Optimizer was rule-based.
The Optimizer worked through a ranked list of heuristics to optimize a query, choosing the applicable rule with the lowest rank number.
This rule-based mode, whilst still in use with some internal Oracle dictionary queries, has been unsupported since version 10.1.
This means that (more...)
When you are working with and developing Decision Trees, by far the easiest way to visualise them is by using the Oracle Data Miner (ODMr) tool that is part of SQL Developer.
Developing your Decision Tree models using ODMr allows you to explore the decision tree produced, to drill in on each of the nodes of the tree, and to see all the statistics, etc., that relate to each node and branch of the (more...)
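For reference, the models that ODMr visualises can also be built directly through the DBMS_DATA_MINING API; the model, table, and column names below are illustrative assumptions, not from the post:

```sql
-- settings table selecting the decision-tree algorithm
create table dt_settings (
  setting_name  varchar2(30),
  setting_value varchar2(4000)
);

insert into dt_settings values
  (dbms_data_mining.algo_name, dbms_data_mining.algo_decision_tree);
commit;

begin
  dbms_data_mining.create_model(
    model_name          => 'DT_DEMO',        -- hypothetical model name
    mining_function     => dbms_data_mining.classification,
    data_table_name     => 'CUSTOMERS',      -- assumed training table
    case_id_column_name => 'CUST_ID',
    target_column_name  => 'AFFINITY_CARD',
    settings_table_name => 'DT_SETTINGS');
end;
/
```

Once the model exists, ODMr can pick it up for the node-by-node exploration described above.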
How many times have you guessed the right answer?
select * from dual order by -1;
select * from dual order by 0;
select * from dual order by -(0.1+0/1) desc;
select 1 n,0 n,2 n,0 n,1 n from dual group by grouping sets(1,2,3,2,1,0) order by -(0.1+0/1) desc;
select 1 n,0 n,2 n,0 n,1 n from dual group by grouping sets(1,2,3,2,1,0) order by 0;
select 1 n,0 n,2 (more...)
P.O.I.S.E.D. is the acronym I invented for the performance tuning method described by Chris Lawson in The Art and Science of Oracle Performance Tuning; it stands for Problem, Observation, Interpretation, Solution, Execution, and Documentation. The Observation and Interpretation phases feed each other. The Interpretation phase concludes with a determination of root cause. If we are investigating poor performance of a single SQL statement, the tool of choice for the observation phase (more...)
Inspired by Laurent Schneider’s remark concerning special time changes on Lord Howe Island
with dates as
trunc(sysdate, 'year') + level - 1 as day
extract(year from trunc(sysdate, 'year') + level - 1) = extract(year from trunc(sysdate, 'year'))
vtn.TZNAME, listagg(vtn.TZABBREV, ', ') within group(order by vtn.tzabbrev) tzabbrevs
v.*, d. (more...)
For some time now I have been adding
WHERE 1=1 to all my queries.
This gives me queries like:
SELECT *
FROM emp e
WHERE 1=1
AND e.ename LIKE 'A%'
AND e.deptno = 20
Lots of people ask me what the use of this is.
You know I like to type as little as possible, yet here I am typing a lot of extra characters. And still, it makes my development life a (more...)
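One concrete benefit: with 1=1 carrying the WHERE clause, every real predicate starts with AND, so any of them can be commented out, or a new one pasted in, without the statement breaking. A small illustration using the query above:

```sql
SELECT *
FROM   emp e
WHERE  1=1
--AND  e.ename  LIKE 'A%'   -- toggled off: the query still parses
AND    e.deptno = 20
AND    e.sal    > 1000      -- new conditions paste in the same way
```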
A recent question on the OTN SQL forum
asked how best to join two tables related by ID and date range, in order to insert one row per date into a data warehouse. One solution was to expand the data from each table, creating one row per date, then join on date. I think it's more efficient to join on date range, then expand.
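A sketch of the join-first approach, with an assumed schema of t1(id, start_date, end_date, val1), t2(id, start_date, end_date, val2), and a one-row-per-day calendar table — none of these names are from the original thread:

```sql
select a.id,
       d.day,
       a.val1,
       b.val2
from   t1 a
join   t2 b
on     b.id = a.id
and    b.start_date <= a.end_date          -- keep only overlapping ranges
and    b.end_date   >= a.start_date
join   calendar d                          -- assumed date-dimension table
on     d.day between greatest(a.start_date, b.start_date)
                 and least(a.end_date, b.end_date)
order  by a.id, d.day;
```

Joining on the ranges first keeps the intermediate row count small; the expansion to one row per date happens only for rows that actually match.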
If you run Oracle Standard Edition or haven’t licenced Diagnostics Pack for Enterprise Edition, then you don’t have AWR and ASH Data available. This is when Statspack, the predecessor of AWR, comes in handy to keep a history of database performance metrics. But although Oracle still deliver Statspack with their recent DB releases (yes, even in […]
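For reference, the basic Statspack workflow once it is installed under the PERFSTAT user is to take snapshots and then report on the interval between two of them (snapshot scheduling details vary by site):

```sql
-- take a snapshot (typically scheduled via dbms_scheduler)
exec statspack.snap;

-- later, generate a report between two snapshot ids
@?/rdbms/admin/spreport
```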
The mother of all SQL “anti-patterns” is the ad-hoc query. Here, the end-user of the application can specify any combination of parameters—for example, the job_id, the dept_id, or a combination of both—and the data is retrieved using a catchall query. This antipattern has two sisters who are almost as evil. These two sisters arise from the requirement to paginate through the records retrieved by a query. (read more...)
If a column or expression may be NULL, it is a pain to compare it to something else. Why? Because comparisons involving a NULL do not result in TRUE or FALSE: they result in NULL. I use the DECODE function to work around this problem: here's why - and how.
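A quick illustration of the NULL-safe behaviour: DECODE considers two NULLs equal, so the comparison below takes the match branch even though a plain a = b would evaluate to NULL (the column names are made up):

```sql
select decode(x.a, x.b, 'match', 'no match') as result
from  (select cast(null as number) a,
              cast(null as number) b
       from   dual) x;
-- returns 'match': DECODE treats the two NULLs as equivalent
```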
Moving through some observations of an ongoing investigation, tackling the easiest ones first.
Previously I observed the slow execution of a recursive delete of statistics history, coming not from MMON but from a number of application processes that should only have been calculating stats for their own partitions.
Statement was sql id 9v9n97qj8z1dg:
delete /*+ dynamic_sampling(4) */
where savtime < :1
and rownum <= NVL(:2, rownum);
From the real time sql (more...)
Yesterday I mentioned issues with a recursive delete operation on statistics history.
This is a quick illustration of the last points I made in that post regarding the lack of a COUNT STOPKEY optimisation because of the use of NVL.
COUNT STOPKEY is an optimisation which allows processing to stop once the target number of rows has been reached.
create table t1 as select * from dba_objects;
alter session set statistics_level = all;
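The comparison this demo is building toward can be sketched as follows (the bind name is illustrative): with a plain literal, the rownum predicate allows COUNT STOPKEY; with the NVL form, the optimiser must also plan for a null bind, where the predicate degenerates to rownum <= rownum and every row qualifies.

```sql
-- stops fetching as soon as 10 rows have been produced (COUNT STOPKEY)
select count(*) from (
  select * from t1 where rownum <= 10
);

-- the NVL form: if :n is null, the predicate becomes
-- rownum <= rownum, i.e. true for every row, so the
-- straightforward stopkey optimisation no longer applies
select count(*) from (
  select * from t1 where rownum <= nvl(:n, rownum)
);
```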
@thatjeffsmith recently recommended an article about making your blog more popular. The article said "lists of 10 things" were great ways to get more readers. Hey, if that's all it takes...
The package in question generates SQL to compare data or synchronize tables. Here are 10 good reasons to use it.
As you know, NaN means “Not a Number”.
What do you think the result of the following query would be? (0f/0 == NaN)
select count(*) cnt from dual where rownum < 0f/0;
Ok, once you know the result, try to guess what this query will return:
select count(*) cnt (more...)
February 8, 2015: I was recently contacted about an odd performance issue with a software product named TDM, where certain SQL statements submitted to an Oracle 9.2.x database required roughly 30 seconds to execute when a nearly instantaneous response was expected. The person who reported the issue to me provided a sample SQL statement that […]
A subset of all possible execution plans can be described as robust. While such solutions are not always quite optimum, they are almost always close to optimum in real-world queries, and they have desirable characteristics, such as predictability and low likelihood of errors during execution. (read more...)
The biggest problem with SQL query optimization is that semantically equivalent SQL queries are not guaranteed to perform equally well. When I encounter problems like this, I sympathize with the folks who got frustrated with RDBMS performance and created NoSQL. (read more...)