Learning, relearning or unforgetting…
What value is there in a DISABLEd constraint?
This was a question on the OTN forums this week; a) my first reply was wrong, and b) I couldn’t find a clear demonstration elsewhere.
The documentation is clear.
The ETL process commonly verifies that certain constraints are true. For example, it can validate all of the foreign keys in the data coming into the fact table. This means that (more...)
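The documentation passage above is about constraints declared with RELY. As a sketch of what that looks like (table and constraint names here are my illustration, not from the original post), a foreign key can be declared unenforced but still trusted by the optimizer:

```sql
-- Hypothetical fact/dimension tables. RELY DISABLE NOVALIDATE means
-- Oracle neither enforces nor validates the relationship, but the
-- optimizer may still trust it.
alter table fact_sales
  add constraint fk_sales_prod
  foreign key (prod_id) references dim_product (prod_id)
  rely disable novalidate;

-- Rewrites and join elimination can then use the unenforced
-- constraint when integrity is set to TRUSTED:
alter session set query_rewrite_integrity = trusted;
```

The trade-off is exactly the one the docs describe: the ETL process, not the database, becomes responsible for the data actually being clean.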
Every now and then you come across a feature or a combination of features which has turned into such a dog’s dinner that you wonder how many people can possibly be using it.
This week – fast fresh materialized views.
I would have thought that this was a very commonly used feature.
This is quite a long article so I will do a top-level TOC first
- Why am I looking at a fast-refresh, on-commit, join-only (more...)
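For orientation, a minimal fast-refresh, on-commit, join-only materialized view looks roughly like this (table and column names are illustrative; join-only fast refresh requires materialized view logs with rowid and the rowids of each base table in the select list):

```sql
-- MV logs with rowid are required for fast refresh of join-only MVs:
create materialized view log on t1 with rowid;
create materialized view log on t2 with rowid;

create materialized view mv_t1_t2
refresh fast on commit
as
select t1.rowid as t1_rid   -- base-table rowids must be selected
,      t2.rowid as t2_rid
,      t1.col1
,      t2.col2
from   t1
,      t2
where  t1.id = t2.t1_id;
```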
With Basic Compression, you cannot drop a column.
create table t1 (col1 number, col2 number) compress;

Table T1 created.
alter table t1 drop column col2;
SQL Error: ORA-39726: unsupported add/drop column operation on compressed tables
39726. 00000 - "unsupported add/drop column operation on compressed tables"
*Cause: An unsupported add/drop column operation for compressed table
*Action: When adding a column, do not specify a default value.
DROP column is only supported in the form (more...)
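A sketch of the workaround path (assuming the SET UNUSED form the documentation points at): the column can be marked unused even though it cannot be physically dropped.

```sql
-- On a basic-compressed table, SET UNUSED succeeds where
-- DROP COLUMN raises ORA-39726:
alter table t1 set unused column col2;
```

The column then disappears from the table's visible description, but its data remains in the segment.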
A couple of very, very basic observations on getting going with 12c Inmemory in a multitenant database.
1. When trying to set inmemory_size within a PDB while inmemory_size is 0 in the CDB:
ORA-02096: specified initialization parameter is not modifiable with this option
SQL> alter session set container = cdb$root;
SQL> select value from v$parameter where name = 'inmemory_size';
SQL> alter session set container = orcl;
SQL> alter (more...)
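The order of operations implied above, as a sketch (sizes are illustrative): the root container must have a non-zero inmemory_size first, and because it is a static parameter that means an spfile change and a restart; only then can a PDB carve out its own share.

```sql
-- In the root container first (static parameter; needs a restart):
alter session set container = cdb$root;
alter system set inmemory_size = 1G scope=spfile;
-- ... restart the instance ...

-- Then a PDB can set its own (smaller) value:
alter session set container = orcl;
alter system set inmemory_size = 500M;
```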
A comparison of sucking data into a table over a db link using DBMS_PARALLEL_EXECUTE.
This particular example is based on something I needed to do in the real world, copying data from one database into another over a db link. Datapump is not available to me. Tables in question happen to be partitioned by a date-like number (boo!) hence some of the specific actions in the detail.
I think it’s a good example of (more...)
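As a rough sketch of the shape of such a copy (all names here are hypothetical: the task, the db link, the date-like number column), DBMS_PARALLEL_EXECUTE chunks the work on a number column and runs an insert-select over the link per chunk:

```sql
declare
  l_task varchar2(30) := 'COPY_T1';
begin
  dbms_parallel_execute.create_task(task_name => l_task);

  -- Chunk on the date-like number column (boo!) mentioned above:
  dbms_parallel_execute.create_chunks_by_number_col(
    task_name    => l_task,
    table_owner  => user,
    table_name   => 'T1',
    table_column => 'DATE_NUM',
    chunk_size   => 1);

  -- Each chunk gets :start_id / :end_id bound to its range:
  dbms_parallel_execute.run_task(
    task_name      => l_task,
    sql_stmt       => 'insert /*+ append */ into t1_local
                       select * from t1@remote_db
                       where  date_num between :start_id and :end_id',
    language_flag  => dbms_sql.native,
    parallel_level => 4);
end;
/
```

This is only the skeleton; the real-world version would also check DBA_PARALLEL_EXECUTE_CHUNKS for failed chunks and resume them.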
There seems to me to be a relatively simple choice.
Either you accept that the Oracle Optimizer has a wealth of complicated strategies and that, in this complex effort to get the best executions it can, it will inevitably get it wrong sometimes (or take some extra executions to realise it’s wrong).
Or you stick your head in the sand and raise a bug for every unexpected poor execution or plan flip.
But let’s say that (more...)
A quick fly through the options for conditional uniqueness.
Requirement #1: I want uniqueness on a column but only under certain conditions.
For example, I have an active flag and I want to make sure there is only one active record for a particular attribute but there can be many inactive rows.
create table t1
(col1 number not null
,col2 varchar2(24) not null
,is_active number(1) not null
,constraint pk_t1 primary key (col1));
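One common option for this requirement (my illustration of the general technique, not necessarily the article's first choice) is a unique function-based index that maps inactive rows to NULL, so only active rows compete for uniqueness:

```sql
-- Entirely-NULL index entries are not stored in a B-tree index,
-- so inactive rows (is_active <> 1) never collide; only rows with
-- is_active = 1 must have a unique col2:
create unique index uix_t1_active
on t1 (case when is_active = 1 then col2 end);
```

Inserting two active rows with the same col2 then fails with ORA-00001, while any number of inactive duplicates are allowed.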
According to http://en.wikipedia.org/wiki/Perfect_storm:
A “perfect storm” is an expression that describes an event where a rare combination of circumstances will aggravate a situation drastically.
The other day we had a bit of a panic when AUD$ space grew by 45GB in 15 minutes and set off some alerts.
The audit_trail parameter was set to DB, EXTENDED.
Behaves the same as AUDIT_TRAIL=DB, but also populates the SQL bind and SQL text (more...)
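When this sort of alert fires, a quick way to see how big the audit trail has become (a sketch using the standard dictionary views):

```sql
-- Current size of the audit trail segment:
select bytes/1024/1024/1024 as gb
from   dba_segments
where  owner = 'SYS'
and    segment_name = 'AUD$';
```

With DB, EXTENDED in play, the bind and SQL text columns are what make each audit row so much fatter, which is how 45GB in 15 minutes becomes possible.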
Is this an appropriate response?
Others have raised this before – in fact I think I’ve raised the question before – but my initial search on Oracle Support did not turn up an answer so I thought I would raise a question again.
Question only: Null predicates in DBA_HIST_SQL_PLAN
Everyone knows that the FILTER_PREDICATES and ACCESS_PREDICATES columns in DBA_HIST_SQL_PLAN are not populated.
If I remember correctly, this was originally (more...)
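The observation is easy to check for yourself with something like this (sketch):

```sql
-- Typically returns 0 because AWR does not capture the
-- predicate columns when it snapshots plans:
select count(*)
from   dba_hist_sql_plan
where  access_predicates is not null
or     filter_predicates is not null;
```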
Another annoying thing about unusable indexes
I’m surprised that I can’t remember coming across this before.
I want to archive some data from a bunch of partitions.
Unfortunately, I can’t follow the virtuous circle of data lifecycling and partition the table in such a way that I can archive all the data from a particular partition (or subpartition) via exchange partition.
Without going into too much detail and skipping a bunch of other steps, (more...)
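For context, the "virtuous circle" step that isn't available here would look roughly like this (names are illustrative): swap the partition's segment with an empty staging table and archive from there.

```sql
-- The clean lifecycle path: the partition's data moves into the
-- staging table, the (empty) staging segment moves into the table:
alter table t1
  exchange partition p_201401
  with table t1_archive_stage
  including indexes
  without validation;
```

Because the table in question can't be partitioned along the archiving boundary, the post has to take a messier route, which is where the unusable-index annoyance comes in.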