A lot of my customers who are starting with Oracle GoldenGate have issues when it comes to troubleshooting it. It is the most common question I get: “I have set up GoldenGate, it was working fine, but now process X abended… now what do I do?”. The OGG error log is a good starting point, but it does not always show the real issue. There are a lot of different ways to obtain (more...)
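When a process abends, the error log and the process report file are the first places I look. As a trivial sketch (the log lines below are made up purely for illustration), grepping the error log for the most recent ERROR entry often points you in the right direction:

```shell
# Create a made-up ggserr.log excerpt for illustration (messages are hypothetical)
cat > /tmp/ggserr.log <<'EOF'
2015-01-05 10:00:00  INFO    OGG-00987  GGSCI command (oracle): start extract EXT1.
2015-01-05 10:00:05  ERROR   OGG-01031  There is a problem in network communication.
EOF
# Show the most recent ERROR entry
grep ERROR /tmp/ggserr.log | tail -1
```

On a real system, the report files under the dirrpt directory of the GoldenGate home usually contain more detail about the abend than ggserr.log itself.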
In my previous 2 blog posts I explained how to set up a Vagrantfile that deploys 2 VirtualBox machines that can be used as the basis for a RAC cluster, and how you could use Ansible to deploy RAC on those VMs. In this post I want to dive a bit deeper into how I set up Ansible and why; keep in mind that this is just one way of doing this.
The GitHub repository containing all the files (more...)
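To give an idea of the moving parts, a minimal Ansible inventory for a 2-node setup could look like the sketch below; the host names and addresses are assumptions for illustration, not the ones from the repository:

```shell
# Write a hypothetical two-node inventory; names and IPs are illustrative only
cat > /tmp/rac_inventory <<'EOF'
[rac_nodes]
rac1 ansible_host=192.168.56.101
rac2 ansible_host=192.168.56.102
EOF
cat /tmp/rac_inventory
```

With an inventory like this, the playbooks can address both cluster nodes through the single `rac_nodes` group.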
In my previous blog post I explained how Vagrant can be used to create a 2-node VM environment for deploying a RAC cluster on it. In this post I want to dive into how to actually do the RAC install using Ansible, so you can easily create a RAC test environment on e.g. your laptop.
I have created a Git repo with the files that I am using for this blog post; this Git repo (more...)
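As a rough sketch of the shape of such a playbook (the role names here are invented placeholders; the actual roles live in the Git repo mentioned above):

```shell
# Hypothetical top-level playbook stub; role names are placeholders
cat > /tmp/rac.yml <<'EOF'
---
- hosts: rac_nodes
  become: true
  roles:
    - rac_preinstall   # OS prerequisites, kernel parameters, users/groups
    - rac_gi           # Grid Infrastructure install
    - rac_rdbms        # RDBMS install and database creation
EOF
cat /tmp/rac.yml
```

Splitting the work into roles like this keeps the GI and RDBMS steps independently rerunnable.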
Creating your own test VM environment with RAC is a fun exercise; however, after you have rebuilt your environment a couple of times it becomes very tiresome, and you will want to start automating your deployments. People have already written several blog posts about GI and RDBMS orchestration and the several tools that are available for this. Within the Oracle community, Ansible seems to be a very popular choice for (more...)
I have been patching engineered systems since the launch of the Exadata V2, and recently I had the opportunity to patch the BDA we have in house. As far as comparisons go, that is where the similarities between Exadata and Big Data Appliance (BDA) patching stop.
Our BDA is a so-called starter rack consisting of 6 nodes running a Hadoop cluster; for more information about this, read my First Impressions blog post. On (more...)
For the last 2 weeks we have been lucky enough to have the Big Data Appliance (BDA) from Oracle in our lab/demo environment at VX Company and iRent. In this blog post I am trying to share my first experiences and some general observations. I am coming from an Oracle (Exadata) RDBMS background, so I will probably reflect some of that experience onto the BDA. The BDA we have here is a starter rack, which consists (more...)
Most of the talk about Oracle’s release of 12.1.0.2 is about the In-Memory feature, but more things have changed; for example, some essential things about logging in the Grid Infrastructure have changed. Previously, logging for Grid Infrastructure components was done in $GI_HOME:
oracle@dm01db01(*gridinfra):/home/oracle> cd $ORACLE_HOME/log/`hostname -s`/
There we have the main alert log for GI and several subdirectories for the GI binaries where they write (more...)
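From 12.1.0.2 onwards, the GI logs live in the ADR under ORACLE_BASE instead. The trace directory is derived roughly as below; the ORACLE_BASE value here is an assumption, not taken from the system above:

```shell
# Assumed typical ORACLE_BASE; from 12.1.0.2 GI trace files sit in the ADR
ORACLE_BASE=/u01/app/oracle
HOST=$(hostname -s)
echo "$ORACLE_BASE/diag/crs/$HOST/crs/trace"
```

So instead of one alert log plus per-binary subdirectories under the GI home, you now look in a single ADR trace directory per host.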
There are already a lot of blog posts and presentations about Hybrid Columnar Compression, and I am adding one more to that list. Recently I was doing some small tests on HCC and noticed that inserts into an HCC table did not get compressed, and yes, I was using direct path loads:
DBA@TEST1> create table hcc_me (text1 varchar2(4000)) compress for archive high;

Table created.

KJJ@TEST1> insert /*+ append */ into hcc_me select dbms_random.string('x',100) (more...)
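To verify what compression level a row actually ended up with, the compression type can be queried per rowid with DBMS_COMPRESSION. A sketch, assuming the hcc_me table above sits in the connected user’s own schema:

```sql
-- Check the actual compression type of one row in hcc_me (schema is an assumption)
SELECT DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'HCC_ME', rowid) AS comp_type
FROM   hcc_me
WHERE  rownum = 1;
-- a return value of 1 means COMP_NOCOMPRESS; the HCC levels map to the other
-- DBMS_COMPRESSION constants (e.g. COMP_FOR_ARCHIVE_HIGH)
```

This makes the symptom measurable: if the function reports no compression for rows you loaded with direct path, the data really did bypass HCC.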
Yes, that is true, I said it: cellcli can lie to you. There are some special cases where the output of cellcli is not the reality, and you should double-check its output with your standard OS tools. So this is the output of dcli calling cellcli on an Exadata rack from a client:
[root@dm01db01 ~]# dcli -g cell_group -l root cellcli -e list cell attributes cellsrvStatus,msStatus,rsStatus
dm01cel01: running running running
dm01cel02: running running running
dm01cel03: (more...)
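Because of this, it pays to sanity-check the dcli output mechanically and then verify on the cell itself with OS tools. A small sketch against made-up output (the stopped service below is invented for illustration):

```shell
# Made-up dcli output for illustration; flag any cell reporting a non-running service
cat > /tmp/cellstat.txt <<'EOF'
dm01cel01: running running running
dm01cel02: running stopped running
dm01cel03: running running running
EOF
awk '{ for (i = 2; i <= NF; i++) if ($i != "running") print $1, "reports", $i }' /tmp/cellstat.txt
```

On a real cell I would still cross-check with something like `ps -ef | grep cellsrv` on the storage server before trusting the reported status.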
Recently I was upgrading a half rack Exadata to Grid Infrastructure 12.1.0.2 for a customer who had 1 node removed from the cluster, or at least so we thought. During the upgrade we ran rootupgrade.sh on the first 2 nodes without issues. But when running the script on what was supposed to be the 3rd and final node in the cluster, rootupgrade.sh failed with the following error:
CRS-1119: Unable to (more...)