Using the Data Guard broker to show apply lag and throughput

To determine how much lag there is, I normally run a script similar to this:

select sequence#, applied, to_date(to_char(first_time,'DD-MON-YY:HH24:MI:SS'),
'DD-MON-YY:HH24:MI:SS') "First Time",
to_char(next_time,'DD-MON-YY:HH24:MI:SS') "Next Time"
from v$archived_log
UNION
select NULL, database_role, NULL, db_unique_name from v$database
order by "First Time";

However, there is another way which I sometimes use and which actually gives a lot more information: the Data Guard broker command line (DGMGRL). Use the show configuration command to determine the database name if you are not sure.

 (more...)
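As a hedged illustration of what that looks like (the standby name below is made up, not taken from the post), the broker session runs along these lines:

dgmgrl /
DGMGRL> show configuration;
DGMGRL> show database 'STANDBY_DB';

For a physical standby the show database output includes the transport lag and apply lag, and on more recent versions an average apply rate as well, which is where the extra detail over the v$archived_log query comes from.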

Downgrading a RAC database from 11.2.0.4 to 11.2.0.3

It is not often that I see a database downgrade activity performed and so I thought it would be worthwhile just noting how it was done.
This was a 2-node RAC 11.2.0.4 database downgraded to 11.2.0.3; only the database was downgraded, not the grid home.
The downgrade took place on HP-UX; any downgrades taking place on Windows have several additional steps and won't be covered in this post.
This database does (more...)
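The excerpt is cut short, so purely as a hedged outline (the database name and home path below are placeholders, and these are the generic 11.2 downgrade steps rather than necessarily the exact ones the post goes on to describe), the database-side flow is roughly:

-- from the 11.2.0.4 home (for RAC, set cluster_database=false first and work on one instance)
SQL> startup downgrade
SQL> @?/rdbms/admin/catdwgrd.sql
SQL> shutdown immediate

-- then from the 11.2.0.3 home
SQL> startup upgrade
SQL> @?/rdbms/admin/catrelod.sql
SQL> shutdown immediate
SQL> startup
SQL> @?/rdbms/admin/utlrp.sql

-- finally repoint the cluster registration at the old home
srvctl downgrade database -d MYDB -o /u01/app/oracle/product/11.2.0.3/dbhome_1 -t 11.2.0.3.0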

What is the future for an Oracle DBA?

I have worked with Oracle databases for around 25 years now, and during that time I have been very fortunate in that there has always been work for DBAs and it has been one of the higher-paying disciplines within IT.

I am not prophesying the end of the Oracle database engine but I do see the writing on the wall for some of the large corporate solutions sitting on physical equipment in a datacentre. (more...)

GoldenGate – Restarting a replicat with the command filterduptransactions

If a GoldenGate replicat process fails, then occasionally on restart it skips the correct RBA and 'loses its position'. The relative byte address (RBA) is the location within the trail file that indicates the current transaction.

The old school method was to calculate which RBA was the correct one and then restart the replicat. However, there is a new command on the block now (pun intended) and I will demonstrate how the two methods (more...)
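As a hedged sketch of the two approaches (the replicat name and the sequence/RBA values below are invented for illustration):

-- old school: reposition the replicat at the RBA you calculated, then restart it
GGSCI> alter replicat REP1, extseqno 123, extrba 456789
GGSCI> start replicat REP1

-- newer approach: let GoldenGate filter out transactions it has already applied
GGSCI> start replicat REP1 filterduptransactions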

Problem with V$RECOVERY_AREA_USAGE view and FRA space not being reclaimed

We received the following alert from our alerting system
 
Flash Recovery Area for FOLPRD1A has 9.97 percent of usable space left.
 
This is a standby database:
 
HOST       INSTANCE   STATUS     VERSION      STARTED                   UPTIME
---------- ---------- ---------- ------------ ------------------------- --------------------------------------------------
xxxxxxxx   INSTANCE   MOUNTED    11.2.0.3.0   18-JAN-2017 18:58:18      (more...)
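The excerpt stops before the investigation, but for reference the usual starting point is to compare what V$RECOVERY_AREA_USAGE thinks is reclaimable with the overall figures in V$RECOVERY_FILE_DEST, along the lines of:

select * from v$recovery_area_usage;

select name,
       space_limit/1024/1024/1024       limit_gb,
       space_used/1024/1024/1024        used_gb,
       space_reclaimable/1024/1024/1024 reclaimable_gb,
       number_of_files
from   v$recovery_file_dest;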

Xmas day – 150 hits. What is wrong with the world?

Yes, very tongue in cheek. I know everyone does not celebrate Xmas.

I was still surprised though. This is what was viewed

Best wishes for 2017 to all my readers.

[image: xmasday]


RMAN checksyntax function

I was looking at the RMAN DEBUG options and came across the CHECKSYNTAX function which I had not used before.

Firstly a quick recap on the DEBUG option.

This can be called using the following syntax:

rman target / catalog rman12g1/xxxx@rmancat debug trace=rmantrace.log cmdfile=backup.rcv

or

rman target / catalog rman12g1/xxxx@rmancat debug trace=rmantrace.log and then run RMAN> @backup.rcv (or just type in your run block of commands)

There are a number of options (more...)
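For completeness, CHECKSYNTAX itself can be run directly against the same command file used above; it only parses the commands, so no target connection is needed:

rman checksyntax @backup.rcv

If the file parses cleanly RMAN should report that the cmdfile has no syntax errors; otherwise it flags the offending line.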

Performance problems with OEM AWR warehouse

The Enterprise Manager AWR Warehouse is designed to hold performance data from multiple databases for long-term analysis. It promises to save storage and improve performance on your production systems, and in that it is indeed correct. However, the warehouse itself does not seem to be performant when taking in multiple sources and retaining them long-term – 400 days in our case. Why 400 days is an obvious question that might be asked. Primarily because (more...)

Reorganising data within tablespaces

I am currently interested in managing storage on our Exadata platform which is primarily used for running a data warehouse.

The ongoing challenge is that most of the large tablespaces use a bigfile datafile and we insert data using append. That causes data gaps within the datafile, which is constantly growing. A recent example showed a 7TB datafile containing only 2TB of data, and yet the ASM diskgroups were reporting we were running out (more...)
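A quick way to spot that sort of gap between allocated and used space is a query along these lines (a generic sketch, not taken from the post):

select df.tablespace_name,
       round(sum(df.bytes)/1024/1024/1024)             allocated_gb,
       round(nvl(max(sg.used_bytes),0)/1024/1024/1024) used_gb
from   dba_data_files df,
       (select tablespace_name, sum(bytes) used_bytes
        from   dba_segments
        group by tablespace_name) sg
where  df.tablespace_name = sg.tablespace_name (+)
group by df.tablespace_name
order by 1;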

Resolving two errors on Datapump export – parameter cluster=N/Y

I attempt to export a table with 76,000 rows using a relatively standard Data Pump command:

expdp schemas=ODI_TEMP cluster=y directory=MCB content=ALL dumpfile=OEC_%U.dmp include=table:"IN('ORDM_ERROR_COUNTS')" parallel=8 logfile=OEC.log

It fails with a fatal error and also produces a warning:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management and OLAP options
Starting "ODI_TEMP"."SYS_EXPORT_SCHEMA_03":  odi_temp/******** schemas=ODI_TEMP cluster=y directory=MCB content=ALL dumpfile=OEC_%U.dmp (more...)
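The excerpt ends before the resolution, but given the title a hedged note is worth adding: in similar cases the simplest workaround is to rerun the export with cluster=n, so that all the workers run on the instance you are connected to and the MCB directory path only has to be visible from that node, for example:

expdp schemas=ODI_TEMP cluster=n directory=MCB content=ALL dumpfile=OEC_%U.dmp include=table:"IN('ORDM_ERROR_COUNTS')" parallel=8 logfile=OEC.log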