With the announcement of Exadata X3, Oracle has introduced a new feature called “FlashCache WriteBack” to allow writes to the cell Flash Cache (aka Exadata Smart Flash Cache) in WriteBack mode. Previously, in WriteThrough mode, writes were not written to the Flash Cache; instead they went directly to the cell disks, and the Exadata software decided whether to cache those blocks back into the Flash Cache afterwards. In WriteBack mode, writes go to the cell Flash Cache and an acknowledgement is returned to the calling process as soon as the data is written to flash. The Exadata storage server software later de-stages the dirty writes in the Flash Cache to the spinning disks in the (more...)
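As a rough sketch (assuming a cell image that supports the feature, and accepting that dropping the flash cache discards its current contents), switching a cell to WriteBack mode is done from CellCLI along these lines; the first command shows the current mode, the rest perform the non-rolling change. Check the documentation for your exact version before running anything like this:

```
CellCLI> LIST CELL ATTRIBUTES flashcachemode
CellCLI> DROP FLASHCACHE
CellCLI> ALTER CELL SHUTDOWN SERVICES CELLSRV
CellCLI> ALTER CELL flashCacheMode = WriteBack
CellCLI> ALTER CELL STARTUP SERVICES CELLSRV
CellCLI> CREATE FLASHCACHE ALL
```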
In my last post, I looked at the effect of Exadata Smart Flash Logging. Overall, there seemed to be a slight negative effect on median redo log sync times. This chart (slightly different from the last post because of a different load and configuration of the system) shows how there’s a “hump” of redo log syncs that take slightly longer when flash logging is enabled:
But of course, the flash logging feature was designed to improve performance not of the “average” redo log sync, but of the “outliers”.
In my tests, I had 40 concurrent (more...)
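One simple way to see the shape of that distribution on your own system (a hedged sketch: the view is instance-wide rather than per-test, and the buckets are fixed power-of-two boundaries) is to query `v$event_histogram`:

```sql
-- Distribution of log file sync times; a "hump" shows up as extra
-- counts in the mid-range buckets. WAIT_TIME_MILLI is the bucket's
-- upper bound in milliseconds.
SELECT wait_time_milli, wait_count
FROM   v$event_histogram
WHERE  event = 'log file sync'
ORDER  BY wait_time_milli;
```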
Just a quick note about a change in the way the compute nodes are patched starting from version 11.2.3.1.0. For earlier versions, Oracle provided the minimal pack for patching the compute nodes. Starting with version 11.2.3.1.0, Oracle has discontinued the minimal pack, and updates to the compute nodes are done via the Unbreakable Linux Network (ULN).
Now there are three ways to update the compute nodes:
1) You have internet access on the compute nodes. In this case you can download patch 13741363, complete the one-time setup and start the update.
Exadata storage software 11.2.2.4 introduced the Smart Flash Logging feature. Its intent is to reduce overall redo log sync times - especially outliers - by allowing the Exadata flash storage to serve as a secondary destination for redo log writes. During a redo log sync, Oracle writes to the disk and flash simultaneously and allows the redo log sync operation to complete as soon as the first device completes.
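That “first device wins” behaviour can be sketched in a few lines of Python (purely illustrative; the sleep times standing in for disk and flash latencies are made-up numbers, not measurements):

```python
import concurrent.futures
import time

def write_disk():
    time.sleep(0.05)   # stand-in for a slower spinning-disk redo write
    return "disk"

def write_flash():
    time.sleep(0.01)   # stand-in for a faster flash redo write
    return "flash"

def redo_log_sync():
    """Issue both writes in parallel and complete the sync as soon as
    the first device acknowledges, as Smart Flash Logging does."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(write_disk), pool.submit(write_flash)]
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        return next(iter(done)).result()

print(redo_log_sync())  # the faster device ("flash" here) wins
```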
I’ve reported in the past on using SSD for (more...)
I’ll be co-speaking with Randy Johnson (one of the authors of Expert Oracle Exadata) at E4, sharing war stories and technical details from a PeopleSoft and OBIEE consolidation project we did for one of our clients. See the abstract below:
Randy Johnson & Karl Arao
A PeopleSoft & OBIEE Consolidation Success Story
In today’s competitive business climate, companies are under constant pressure to reduce costs without sacrificing quality. Many companies see database and server consolidation as the key to meeting this goal. Since its introduction, Exadata has become the obvious choice for (more...)
The effect of ASM redundancy/parity on read/write IOPS – SLOB test case! for Exadata and non-Exa environments
Last week I had a lengthy post on oracle-l tackling Calibrate IO, short stroking, stripe size, the UEK kernel, and the effect of ASM redundancy on IOPS on Exadata, which you can read here
followed by an interesting exchange of tweets with Kevin Closson here (see the 06/21-22 tweets), which I was replying to in between games at the Underwater Hockey US Nationals 2012, where we won the championship for the B division. I have my awesome photo with the medal here
This post will detail the ASM redundancy/parity effect on IOPS… specifically, whether changing the ASM redundancy (external, normal, or high) will decrease the workload (more...)
Tuning has always been good fun and something of a challenge for me.
From time to time we are asked to find out why something ran slow while we were sleeping; answering this question is, in most cases, a challenge.
“My batch ran slow last night, can you let us know why?” or “Why did this query run slow?” are questions we, as DBAs, have to answer from time to time.
Oracle has provided us with many tools to dig out information about past operations. We have EM, AWR, ASH, the dba_hist_* tables, scripts (more...)
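For instance, a first pass at the “why was last night slow?” question often starts with the AWR copy of ASH (a hedged example: the time window is made up, ASH is sampled so the counts are approximate, and querying the dba_hist_* views requires the Diagnostics Pack licence):

```sql
-- Top wait events during last night's batch window (example window).
SELECT event, COUNT(*) AS samples
FROM   dba_hist_active_sess_history
WHERE  sample_time BETWEEN TIMESTAMP '2012-06-20 01:00:00'
                       AND TIMESTAMP '2012-06-20 05:00:00'
GROUP  BY event
ORDER  BY samples DESC;
```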
I have received a fair number of responses to my previous post on this subject (some via comments and some via email). I thought the discussion worthwhile enough to punch it up a bit more here.
As I pointed out in the previous post, EMC can easily match NetApp's play to back up Exadata with the following:
As Geoff Rosser so correctly pointed out, this answer is incomplete. Yes, Data Domain is an awesome Oracle backup solution. Yes, it provides incredible deduplication rates for Oracle database environments. (Thanks, dynamox.) However, it is not the only viable solution from (more...)
There has been a lot of material on the web recently concerning NetApp being able to back up Exadata. The purpose of this blog is to respond to that content and state why NetApp's offering is rather lame and actually offers nothing new.
The items on the web produced by NetApp are easy to find. I will not increase their Google hit rate by linking to them here. Suffice it to say, Neil Gerren's blog contains the principal content to which I will respond here. There is also NetApp technical report TR 4022, a 34-page tome, which I have read thoroughly. (more...)
Hybrid Columnar Compression (HCC) is an awesome new feature in Exadata that helps save a lot of storage space in your environment. This whitepaper on the Oracle website explains the feature in detail, and Uwe Hesse has an excellent how-to post on his blog, where you can see the compression levels one can achieve by making use of HCC. It is a very simple feature to use, but one needs to be aware of a few things before using HCC extensively, as otherwise all your storage calculations may go awry. Here are a few of the things to keep in (more...)
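As a quick illustration (a sketch only; the SALES table name is hypothetical and actual savings depend heavily on your data), creating an HCC-compressed copy of a table and comparing segment sizes looks like this:

```sql
-- Create an HCC copy at one of the warehouse compression levels.
CREATE TABLE sales_hcc
  COMPRESS FOR QUERY HIGH
  AS SELECT * FROM sales;

-- Compare the space used by the original and the compressed copy.
SELECT segment_name, ROUND(bytes/1024/1024) AS mb
FROM   user_segments
WHERE  segment_name IN ('SALES', 'SALES_HCC');
```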
Appliances such as microwave ovens, refrigerators, iPods, iPads and TVs are excellent examples of the ease-of-use approach. Bringing the inherently complex world of Oracle databases together with the ease-of-use approach of appliances is challenging. By definition, if Oracle Exadata is an appliance, then its use should be simple, it should require relatively little maintenance and, like a refrigerator, it should simply do its job, which in this case is to run databases at extreme performance levels. If Oracle Exadata isn't an appliance, then what is it?
I found this question (more...)
I’ve seen some posts in the blogosphere where people attempt to explain (or should I say guess) how Exadata Smart Flash Logging works, and most of them are wrong. Hopefully this post will help clear up some of the misconceptions out there.
The following is an excerpt from the paper entitled “Exadata Smart Flash Cache Features and the Oracle Exadata Database Machine” that goes into technical detail on the Exadata Smart Flash Logging feature.
Smart Flash Logging works as follows. When receiving a redo log write request, Exadata will do
parallel writes to the on-disk redo logs as well (more...)
From time to time, I have to run scripts or single commands on all nodes of an Exadata. This can take some time.
We had a request from our developers to flush the shared pool on all nodes of our UAT Exadata. This is due to a bug we are still experiencing.
This is a typical request for my team, where we have to run something on all our nodes. Flushing the shared pool can be one of them.
Connecting and executing the same command 8 times, if you have a full rack, can be time-consuming and it (more...)
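One way to avoid that on Exadata (a sketch; the `~/dbs_group` file listing the database nodes and the `oracle` user are assumptions about your setup) is the dcli utility shipped with the Exadata software, which runs the same command on every node named in a group file:

```
dcli -g ~/dbs_group -l oracle \
  "echo 'alter system flush shared_pool;' | sqlplus -s / as sysdba"
```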
A couple of days ago I had an interesting request: “How can I see the contents of nfs_dir?”
We were using DBFS to store our exports. This was the perfect solution, as the business could “see” the files in the destination folder, but it did not meet our performance requirements on our Exadata.
We decided to mount NFS, and performance did improve, but we had a different problem. NFS is mounted on the database server, and the business does not have access to it for security reasons and segregation of duties.
Since then, the export jobs run, but the business could (more...)
This is a quick post regarding the error in the subject. This is the second time it has happened to me, so I thought I would write a bit about it.
I am refreshing one of my UAT environments (which happens to be a Full Rack Exadata) using the Oracle RMAN duplicate command. Then the following happens (on both occasions):
1.- The duplicate command fails (lack of space for restoring archivelogs, or some other error). This can be fixed quite easily.
2.- The following error appears while trying to open the database after the restore and recover have finished:
SQL> alter database (more...)
I had a requirement to transfer files from our PROD ASM to our UAT ASM, as DBFS is proving to be slow.
We are currently refreshing UAT schemas using Oracle Datapump to DBFS and then transferring those files to UAT using SCP.
DBFS does not provide us with the performance we need, as the Datapump files are quite big. The same export onto ASM or NFS proves to be much, much faster.
We are currently testing exports to ASM, but how do we move the dump files from PROD ASM to UAT ASM?
The answer for us is to use DBMS_FILE_TRANSFER. (more...)
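A minimal sketch of what that looks like (the directory objects PROD_DMP and UAT_DMP, the file name, and the UATDB database link are all hypothetical names; note that DBMS_FILE_TRANSFER requires file sizes to be a multiple of 512 bytes):

```sql
-- Run on PROD. PROD_DMP points at the local ASM location, UAT_DMP at
-- the corresponding location on UAT, and UATDB is a database link to
-- the UAT instance. All names are made up for illustration.
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'PROD_DMP',
    source_file_name             => 'schemas.dmp',
    destination_directory_object => 'UAT_DMP',
    destination_file_name        => 'schemas.dmp',
    destination_database         => 'UATDB');
END;
/
```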