Solving DBFS UnMounting Issue

Often, I am quite baffled by Oracle's implementation and documentation.

RAC GoldenGate DBFS implementation has been a nightmare, and here is one example: DBFS Nightmare

I am about to show you another.

In general, I find any implementation using ACTION_SCRIPT is good in theory, bad in practice, but I digress.

I was getting ready to shut down CRS for system patching, only to find that CRS failed to shut down.

# crsctl stop crs
CRS-2675: Stop of 'dbfs_mount' on 'host02'  (more...)
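One workaround is to stop the DBFS resource explicitly before stopping the stack. A minimal sketch, assuming the resource name and node from the CRS-2675 error above:

```shell
# Stop the DBFS resource on this node first; -f forces dependent
# resources (and busy mount sessions) to release it.
crsctl stop resource dbfs_mount -n host02 -f

# Once the mount point is released, stop the stack as usual.
crsctl stop crs
```

If the mount point is still busy, checking for processes holding it open (e.g. with fuser on the mount path) before forcing the stop may help.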

Final Conclusion for 18c Cluster upgrade state is [NORMAL]

Finally, I have reached a point that I can live with for the Grid 18c upgrade, because the process runs to completion without any errors or intervention.

Note that the ACFS volume is created in the CRS disk group, which may not be ideal for production.

Rapid Home Provisioning Server is configured but is not running.

The outcome is different depending on whether the upgrade is performed via the GUI or in silent mode, as demonstrated in 18c Upgrade Getting to Results – (more...)

18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL]

There has been a lot of discussion in Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL]
about how cluvfy stage -post crsinst -allnodes -collect cluster -gi_upgrade could have changed the cluster upgrade state to [NORMAL].

When gridSetup.sh -executeConfigTools is run in silent mode, the next step, cluvfy, is not run.

[oracle@racnode-dc1-1 ~]$ /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs  (more...)
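Since silent mode skips that step, the post-upgrade check can presumably be run by hand afterwards. A sketch, reusing the cluvfy invocation quoted earlier (the grid home path matches the one above, but verify it for your environment):

```shell
# Run the post-upgrade verification manually from the 18c grid home.
# This is the step the silent executeConfigTools run does not perform.
/u01/18.3.0.0/grid/bin/cluvfy stage -post crsinst -allnodes -collect cluster -gi_upgrade
```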

Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL]

After upgrading Grid to 18c and applying the RU, the cluster upgrade state was not NORMAL.

The cluster upgrade state is [UPGRADE FINAL], which I had never seen before.

Searching Oracle Support was useless as I was only able to find the following states:

The cluster upgrade state is [NORMAL]
The cluster upgrade state is [FORCED]
The cluster upgrade state is [ROLLING PATCH]

The following checks were performed after upgrade:

[oracle@racnode-dc1-1 ~]$ crsctl query crs releaseversion
 (more...)
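For reference, the upgrade state itself is reported by crsctl when the active version is queried with the -f flag (available in 12c and later):

```shell
# Reports the active CRS version and, with -f, the cluster upgrade
# state line, e.g. "The cluster upgrade state is [NORMAL]".
crsctl query crs activeversion -f
```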

Data Guard Unexpected Lag

When configuring a physical standby database for Oracle using Data Guard, you need to create standby redo logs to allow redo to be applied in (near) real time on the standby. Without standby redo logs, Oracle will wait for an entire archive log to be filled and copied across to the standby before it will apply changes, which could take quite a while.
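Creating them is a one-liner per group. A minimal sketch (group numbers, thread, and size below are illustrative placeholders; the usual rule of thumb is one more standby redo log group per thread than online redo log groups, sized the same as the online logs):

```shell
# Add standby redo logs on the standby (and ideally on the primary too,
# so they exist after a switchover). All identifiers are placeholders.
sqlplus -s / as sysdba <<'EOF'
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 10 SIZE 1G;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 11 SIZE 1G;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 12 SIZE 1G;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 13 SIZE 1G;
EOF
```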

Which leads me to the problem I encountered a while ago, (more...)

enabling Database Vault on e-business RAC database

Right now I'm in the process of setting up Database Vault for an E-Business Suite database. This is a 2-node RAC cluster.
The DB is 12.1.0.2 with the April 2017 BP.

As the DB already exists, I followed How To Enable Database Vault in a 12c database? (Doc ID 2112167.1).
Everything looked smooth, but unfortunately, at the configuration of DV,
exec dvsys.configure_dv('DVOWNER','DVMANAGER');
fails with
ERROR at line 1: 
ORA-47500: Database Vault (more...)
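For context, the documented enablement flow looks roughly like the sketch below (account names are taken from the call above; everything else is illustrative, and ORA-47500 at the first step usually points to a problem with the DVSYS installation itself rather than the commands):

```shell
# Step 1: configure DV as SYSDBA, then recompile invalid objects.
sqlplus -s / as sysdba <<'EOF'
EXEC dvsys.configure_dv('DVOWNER','DVMANAGER');
@?/rdbms/admin/utlrp.sql
EOF

# Step 2: as the DV owner, enable DV.
sqlplus -s DVOWNER <<'EOF'
EXEC dbms_macadm.enable_dv;
EOF

# Step 3: restart every instance of the RAC database.
srvctl stop database -d <db_name>
srvctl start database -d <db_name>
```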

asmcmd “connected to an idle instance” – not

This is more a note to myself in case I encounter a similar environment again. But maybe it helps others – at least my search results weren’t suitable for Windows in the first place.

Issue:

C:\> set ORACLE_HOME=C:\path\to\grid\home
C:\> set ORACLE_SID=+ASM1
C:\> asmcmd
connected to an idle instance.

Environment: Windows 2012R2, Oracle Grid Infrastructure 12.1.0.1, 2 […]

root.sh fails with CRS-2101:The OLR was formatted using version 3

I got this while trying to install 11.2.0.4 RAC on Red Hat Linux 7.2. root.sh fails with a message like

ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2017-11-09 15:43:37.883:
[client(37246)]CRS-2101:The OLR was formatted using version 3.

This is bug 18370031. You need to apply the patch before running root.sh.
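A sketch of the workaround (the patch staging directory and grid home path are assumptions; adjust for your environment):

```shell
# Apply the fix for bug 18370031 to the grid home on every node
# before (re)running root.sh. Paths below are illustrative.
export GRID_HOME=/u01/app/11.2.0.4/grid
cd /tmp/18370031
$GRID_HOME/OPatch/opatch napply -oh $GRID_HOME -local

# Then rerun root.sh:
$GRID_HOME/root.sh
```

If root.sh has already failed once, the node may need to be deconfigured first (rootcrs.pl -deconfig -force) before rerunning it.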

I am speaking at Mumbai, OTNYatra 2017! Come and say hi!

I am speaking at the Mumbai leg of OTNYatra 2017. I shall be presenting on high availability in RAC and the different options and configurations possible with it, including the 12c-specific Transaction Guard and Application Continuity. If you are attending, do come to my session at 4pm and say hello!

Vagrant? Again? …Really?

(Yes. And Ansible. And Oracle…) TL;DR. This is the repo I’ll be talking about. It can use Ansible to provision Oracle (SI/RAC). Like I’ve said before, I use Vagrant quite a lot and I basically have 2 configs that I use every time. One that uses an external ‘hosts.yml’ to define the hosts (ip, ram etc) […]

Exadata Upgrade to 12.2.0.1 – The Missing Step

I decided this week to be a little brave and upgrade one of the Enkitec Exadata racks to 12.2.0.1.  I installed the 12.2.1.0.0 Exadata image a few weeks ago, and have been waiting for a chance to upgrade clusterware to 12.2.  Thankfully, Oracle provides a very good note for this, but I did hit one large snag that should be documented.

The process for upgrading GI to 12.2 (more...)

Creating a RAC cluster using Ansible (part 2)

In my previous 2 blog posts I explained how to set up a Vagrantfile that deploys 2 VirtualBox machines that can be used as a basis for a RAC cluster, and how you could use Ansible to deploy RAC on those VMs. In this post I want to dive a bit deeper into how I set up Ansible and why; keep in mind that this is just one way of doing it.

The Github repository containing all the files (more...)

Grid Infrastructure 12c installation fails because of 255 in the subnet ID

I was doing another GI 12.1.0.2 cluster installation last month when I got a really weird error.

While root.sh was running on the first node I got the following error:

2016/07/01 15:02:10 CLSRSC-343: Successfully started Oracle Clusterware stack
2016/07/01 15:02:23 CLSRSC-180: An error occurred while executing the command '/ocw/grid/bin/oifcfg setif -global eth0/10.118.144.0:public eth1/10.118.255.0:cluster_interconnect' (error code 1)
2016/07/01 15:02:24 CLSRSC-287: FirstNode configuration failed
Died at /ocw/grid/crs/install/crsinstall.pm  (more...)
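To see which subnet IDs the installer derived for each interface, oifcfg itself can be queried. A sketch (run from the grid home as the grid owner; the home path matches the error output above):

```shell
# Show interfaces with the subnets as oifcfg computes them; a .255.
# octet in the derived subnet ID is what triggered the failure here.
/ocw/grid/bin/oifcfg iflist -p -n

# Show what is currently registered globally, if anything:
/ocw/grid/bin/oifcfg getif
```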

SCAN Listener Crash After Applying July 2016 PSU

I ran into a small issue while applying the July 2016 quarterly patch to a couple of Exadata racks last week.  The systems were running GI 12.1.0.2, previously with the January 2016 PSU.  The patches applied successfully, and we were beginning the process of running the post-patch scripts on the databases in the cluster.  This process involves manually starting the database in upgrade mode, and we saw a message in SQL*Plus that the (more...)

When should you use Oracle RAC?

Over the past years we've worked on many Oracle projects: some with RAC, some without, some which intended to implement RAC and failed, and some which implemented it poorly and ripped it out at the last minute.

For those new to Oracle, RAC is the Real Application Clusters option for the database, which lets you cluster the database servers. The main reason for using RAC is that it brings resilience to a system; it (more...)

Review: Oracle RAC Performance Tuning

Some time ago, I received a free review copy of Brian Peasland‘s recent book, Oracle RAC Performance Tuning.

First, a note on my RAC background: I spent 7 years on Oracle’s RAC Support team. When customers had an intractable RAC performance issue, I was on the other end of the “HELP!” line until it was resolved.

I made Brian’s acquaintance through the MOS RAC Support forum, where Brian stood out as a frequent (more...)

racattack, meet ansible-oracle!

A while back I was approached by Jeremy Schneider, one of the original contributors to the racattack project, who wanted to know if I was interested in integrating ansible-oracle with the RAC Attack automation project – and of course I was! The idea was to provide a completely hands-off installation of an […]

Creating a RAC Flex Cluster using ansible-oracle

As of version 1.3 it is possible to create a RAC Flex Cluster using ansible-oracle. From an ansible-oracle configuration perspective there is not a huge difference from a normal ‘standard’ cluster – basically a few new parameters. There are other differences, though, specifically in how you have to run the playbook and deal with the inventory configuration. In […]

ansible-oracle, the RAC edition

First off, I’ll be setting up a page where I intend to have a complete list of the parameters used in all roles and a description of what they actually do. Up until now I’ve kept it in a file in the GitHub repo, but I think it will be easier to maintain this way. The page is […]

Rolling Out-of-Place Patching for Oracle Grid Infrastructure 12c and 11gR2: Cloning on Steroids

My most recent article for ToadWorld, part of a series dedicated to applying patches with minimal service interruption. This one details how to apply patches to Grid Infrastructure using a Golden Image.