Result! I have finally been able to gather a complete RDA (Oracle Remote Diagnostic Agent) output on my 2 node RAC system. After consulting the relevant documentation on MOS, which is spread over at least 42 Doc IDs, I found it not to be very helpful, to the degree that some of what I read is actually wrong or contradictory. I put together a short note, primarily to myself, to remind me of the process. I hope you find it useful. (more...)
Mark Rittman will be giving us a keynote talk, with an emphasis on kettles, or eating dinner in the dark, or the Hadoop cluster in his garage, or something.
There are some other really cool speakers too, from Oracle, leading consultancies and the (more...)
In my previous two blog posts I explained how to set up a Vagrantfile that deploys two VirtualBox machines that can be used as the basis for a RAC cluster, and how you can use Ansible to deploy RAC on those VMs. In this post I want to dive a bit deeper into how I set up Ansible and why; keep in mind that this is just one way of doing it.
The Github repository containing all the files (more...)
I have just realised that the number of posts about RAC 12c Release 1 on this blog is rather too small. And since I’m a great fan of RAC this has to change :) In this mini-series I am going to share my notes about creating a Data Guard setup on my 2 node 220.127.116.11.161018 RAC primary + identical 2 node RAC standby system in the lab.
NOTE: As always, (more...)
The time has come for me to plan the upcoming 2017 UKOUG Special Interest Groups.
I am chairman of the RAC, Cloud, Infrastructure and Availability (RAC-CIA) SIG and I’m after presentations for the 3 joint SIGs we will be putting on with the RDBMS SIG, plus the stand-alone SIG we will be having in the autumn.
A SIG is a single-day, one- or two-stream conference which we take around the UK to make it (more...)
Recently I finished a Grid upgrade from 18.104.22.168 to 22.214.171.124 plus the JUL 2016 PSU. So far so good. During a check I saw that the old tfactl tool from software release 126.96.36.199 was still up and running.
That could not be right, so I started an uninstall and a fresh setup for release 188.8.131.52.
What steps have to be done?
Check the actual tfactl installation
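The check itself can be sketched as a TFA command with a guard. This is a minimal sketch, assuming tfactl is on the PATH of the Grid home owner; the fallback message only covers hosts without TFA:

```shell
#!/bin/sh
# Sketch: check whether an old TFA (tfactl) installation is still running.
# The guard lets the script degrade gracefully on hosts without TFA.
if command -v tfactl >/dev/null 2>&1; then
    # Shows the TFA version and whether the daemon runs on each cluster node
    tfa_status=$(tfactl print status)
else
    tfa_status="tfactl not installed on this host"
fi
echo "$tfa_status"
```

On a real cluster the old installation can then be removed with `tfactl uninstall` (run as root from the old TFA home) and the TFA shipped with the new release installed in its place; exact installer script names vary by version, so verify them against the TFA documentation for your release.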
So this customer has an Exadata quarter rack and they have an IB listener configured on both DB nodes (for DB connections from a multi-racked Exalogic system). We were adding a new DB node to this rack, so we just followed the standard procedure of creating users, directories etc. on the new node, setting up ssh equivalence and running addNode.sh. All went fine, but root.sh failed. A little digging into the logs revealed that it (more...)
I was doing another GI 184.108.40.206 cluster installation last month when I got a really weird error.
While root.sh was running on the first node I got the following error:
2016/07/01 15:02:10 CLSRSC-343: Successfully started Oracle Clusterware stack
2016/07/01 15:02:23 CLSRSC-180: An error occurred while executing the command '/ocw/grid/bin/oifcfg setif -global eth0/10.118.144.0:public eth1/10.118.255.0:cluster_interconnect' (error code 1)
2016/07/01 15:02:24 CLSRSC-287: FirstNode configuration failed
Died at /ocw/grid/crs/install/crsinstall.pm (more...)
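When root.sh dies on the oifcfg step, the stored network interface classification is the first thing worth inspecting. A minimal guarded sketch, assuming oifcfg from the Grid home is on the PATH:

```shell
#!/bin/sh
# Sketch: inspect the stored public/interconnect interface classification.
# Guarded so the script is a harmless no-op on hosts without Grid Infrastructure.
if command -v oifcfg >/dev/null 2>&1; then
    # Lists each interface with its subnet and role (public / cluster_interconnect)
    ifinfo=$(oifcfg getif)
else
    ifinfo="oifcfg not available; would run: oifcfg getif"
fi
echo "$ifinfo"
```

`oifcfg getif` shows what is already registered; `oifcfg iflist` shows what the OS actually presents, which is useful for spotting a mismatch with the subnets in the failed `oifcfg setif` command above.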
I ran into a small issue while applying the July 2016 quarterly patch to a couple of Exadata racks last week. The systems were running GI 220.127.116.11, previously with the January 2016 PSU. The patches applied successfully, and we were beginning the process of running the post-patch scripts on the databases in the cluster. This process involves manually starting the database in upgrade mode, and we saw a message in SQL*Plus that the (more...)
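The post-patch startup mentioned in the excerpt can be sketched with srvctl. The database name dbm01 is a placeholder; the `-o upgrade` start option is what passes STARTUP UPGRADE to the instances, and the guard keeps the sketch harmless on hosts without Grid Infrastructure:

```shell
#!/bin/sh
# Sketch: restart a RAC database in upgrade mode for post-patch scripts.
db_name=dbm01   # assumption: placeholder database name
if command -v srvctl >/dev/null 2>&1; then
    srvctl stop database -d "$db_name"
    # -o upgrade starts the instances with STARTUP UPGRADE
    srvctl start database -d "$db_name" -o upgrade
    result="started $db_name in upgrade mode"
else
    result="srvctl not available; would start $db_name in upgrade mode"
fi
echo "$result"
```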
To build an Oracle Clusterware database at home, I believe RAC ATTACK is the best place to learn. It is a free curriculum and platform for hands-on learning labs related to Oracle RAC. While reviewing the article, I thought I would perform a 12cR1 RAC installation on OEL 7.2.
Attached is the document :- 12c_RAC_on_OEL7
The attached article is inspired by
Deploying Oracle RAC (more...)
Over the past years we've worked on many Oracle projects, some with RAC, some without, some which intended to implement RAC and failed, and some which implemented it poorly and ripped it out at the last minute. For those new to Oracle, RAC is the Real Application Clusters option for the database, which lets you cluster the database servers. The main reason for using RAC is that it brings resilience to a system; it (more...)
By default on an Oracle RAC installation, the listeners are configured to allow any database to register with them. There is no security out of the box to determine which databases may register. While this makes it easy to create new databases without worrying about listener registration, this can cause potential problems in a real environment.
This can be dangerous working with RAC environments where the database registers with both a local and remote listener. The (more...)
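One common way to lock registration down (a sketch of a general technique, not necessarily the fix this post goes on to describe) is the VALID_NODE_CHECKING_REGISTRATION and REGISTRATION_INVITED_NODES listener.ora parameters. The listener name LISTENER and the node names below are assumptions for illustration; the script only prints the fragment:

```shell
#!/bin/sh
# Sketch: a listener.ora fragment restricting which hosts may register services.
frag=$(cat <<'EOF'
# Only nodes on the listener's subnet may register, and only the invited ones
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
REGISTRATION_INVITED_NODES_LISTENER = (racnode1, racnode2)
EOF
)
echo "$frag"
```

After adding such a fragment to listener.ora, the listener picks it up with `lsnrctl reload`. The parameter names are suffixed with the listener name, so adjust them if your listener is not called LISTENER.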
Some time ago, I received a free review copy of Brian Peasland‘s recent book, Oracle RAC Performance Tuning.
First, a note on my RAC background: I spent 7 years on Oracle’s RAC Support team. When customers had an intractable RAC performance issue, I was on the other end of the “HELP!” line until it was resolved.
I made Brian’s acquaintance through the MOS RAC Support forum, where Brian stood out as a frequent (more...)
If you’ve run an exachk report, you may have seen the following message with regard to your databases:
|FAIL||Database Check||Database parameter CLUSTER_INTERCONNECTS is NOT set to the recommended value||db01:dbm011, db02:dbm012||View|
This check is commonly seen when a database is created on Exadata without using the custom “Exadata” templates included with the database creation assistant. These customized templates include a multitude of recommended parameter settings found in (more...)
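Clearing the check means setting CLUSTER_INTERCONNECTS per instance to the private interconnect address (the InfiniBand IP on Exadata). A sketch that only generates the ALTER SYSTEM statements; the SIDs and IP addresses are placeholders, not values from the post:

```shell
#!/bin/sh
# Sketch: generate per-instance CLUSTER_INTERCONNECTS settings.
# SIDs (dbm011/dbm012) and IB IPs below are illustrative placeholders.
sql=$(cat <<'EOF'
ALTER SYSTEM SET cluster_interconnects='192.168.10.1' SCOPE=spfile SID='dbm011';
ALTER SYSTEM SET cluster_interconnects='192.168.10.2' SCOPE=spfile SID='dbm012';
EOF
)
echo "$sql"
```

Because the parameter can only be changed with SCOPE=spfile, the database needs a restart before exachk will see the new values.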
There are several ways to convert a non-RAC database to a RAC database:
- Database Configuration Assistant (DBCA)
- Oracle Enterprise Manager
- the rconfig command-line utility
In this post I will describe how to perform the conversion using the rconfig utility.
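The rconfig workflow starts from a sample XML file shipped with the database home. A guarded sketch, assuming an admin-managed target; the sample path is the usual default but should be verified in your own home:

```shell
#!/bin/sh
# Sketch: the rconfig conversion workflow. Paths are assumptions.
sample="${ORACLE_HOME:-/u01/app/oracle}/assistants/rconfig/sampleXMLs/ConvertToRAC_AdminManaged.xml"
work=convert_orcl.xml
if [ -f "$sample" ]; then
    cp "$sample" "$work"   # then edit SourceDBHome, TargetDBHome, SID and node list
    "$ORACLE_HOME/bin/rconfig" "$work"
    step="rconfig run against $work"
else
    step="sample XML not found; would copy it and run rconfig against $work"
fi
echo "$step"
```

A useful design point of rconfig: setting the Convert element's verify attribute to "ONLY" in the XML performs a validation-only dry run, so you can check prerequisites before the real conversion.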
Removing a service via srvctl has not historically resulted in the service being fully removed from the database; it would still be visible in DBA_SERVICES, as shown below in an 18.104.22.168 database:
Create the service:
$ srvctl add service -d orcl -s demo -r "ORCL1,ORCL2"
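To get rid of such a service everywhere, the clusterware resource and the dictionary entry have to be removed separately. A guarded sketch using the service and database names above; the DBMS_SERVICE.DELETE_SERVICE call is what clears the lingering DBA_SERVICES row:

```shell
#!/bin/sh
# Sketch: fully remove the "demo" service created above.
svc=demo
db=orcl
if command -v srvctl >/dev/null 2>&1; then
    srvctl stop service -d "$db" -s "$svc"
    srvctl remove service -d "$db" -s "$svc"
    removed="clusterware entry for $svc removed"
else
    removed="srvctl not available; would stop and remove service $svc"
fi
# The dictionary entry may linger in DBA_SERVICES; delete it inside the database:
cleanup_sql="exec dbms_service.delete_service('$svc');"
echo "$removed"
echo "$cleanup_sql"
```

The generated PL/SQL call would be run in SQL*Plus as a privileged user on the database itself.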
Original Post can be viewed at MGMTDB: Grid Infrastructure Management Repository
MGMTDB is a new database instance used for storing Cluster Health Monitor (CHM) data. In 11g this data was stored in a Berkeley DB database, but starting with Oracle Database 12c it is configured as an Oracle database instance. In 11g, .bdb files (more...)
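A couple of commands for locating MGMTDB and the CHM repository, guarded so the sketch is harmless off-cluster. Both `srvctl status mgmtdb` and `oclumon manage -get reppath` are standard 12c tools, but verify the exact syntax against your release:

```shell
#!/bin/sh
# Sketch: locate the Grid Infrastructure Management Repository (MGMTDB).
if command -v srvctl >/dev/null 2>&1; then
    # Reports which cluster node currently hosts the -MGMTDB instance
    mgmt=$(srvctl status mgmtdb)
elif command -v oclumon >/dev/null 2>&1; then
    # Reports where the CHM repository lives
    mgmt=$(oclumon manage -get reppath)
else
    mgmt="srvctl/oclumon not available on this host"
fi
echo "$mgmt"
```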