I’m sharing this in the hope of saving someone from an unwelcome surprise.
I recently upgraded an Exadata system from 188.8.131.52.1 to 184.108.40.206.1. Apart from what turned out to be a known bug that caused the patching of the InfiniBand switches to “fail”, it all seemed to go without a snag. That is, until I decided to do some node failure testing…
Having forced a node (more...)
Just a quick post on a new Exadata feature called Zone Maps. They’re similar to storage indexes on Exadata, but with more control (you can define the columns and how the data is refreshed, for example). People have complained for years that storage indexes provided no control mechanisms, but now we have a way to exert our God-given rights as DBAs to control yet another aspect of the database. Here’s a link to the (more...)
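As a hedged sketch of what that control looks like (the table and column names below are hypothetical; zone maps require a 12.1.0.2-era database on Exadata-class storage), the basic DDL is a single statement:

```sql
-- A basic zone map on a hypothetical fact table: Oracle maintains
-- min/max metadata per zone of the table, so scans can skip zones
-- that cannot contain matching rows -- much like a storage index,
-- but explicitly defined on columns you choose.
CREATE MATERIALIZED ZONEMAP sales_zmap
  ON sales (order_date, customer_id);
```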
Oracle is announcing today what it’s calling “Oracle Big Data SQL”. As usual, I haven’t been briefed, but highlights seem to include:
- Oracle Big Data SQL is basically data federation using the External Tables capability of the Oracle DBMS.
- Unlike independent products — e.g. Cirro — Oracle Big Data SQL federates SQL queries only across Oracle offerings, such as the Oracle DBMS, the Oracle NoSQL offering, or Oracle’s Cloudera-based Hadoop appliance.
- Also unlike independent (more...)
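To make the External Tables angle concrete, here is a hedged sketch of what such a federated table might look like (all names are hypothetical; ORACLE_HIVE is the Big Data SQL access driver for Hive-defined data):

```sql
-- An external table mapped onto a Hive table via the ORACLE_HIVE
-- access driver; queries against it are federated out to Hadoop.
CREATE TABLE web_logs_ext (
  log_ts   TIMESTAMP,
  user_id  NUMBER,
  url      VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HIVE
  DEFAULT DIRECTORY default_dir
  ACCESS PARAMETERS (com.oracle.bigdata.tablename=logs.web_logs)
)
REJECT LIMIT UNLIMITED;
```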
As part of my series on the keys to and likelihood of success, I outlined some examples from the DBMS industry. The list turned out too long for a single post, so I split it up by millennia. The part on 20th Century DBMS success and failure went up Friday; in this one I’ll cover more recent events, organized in line with the original overview post. Categories addressed will include analytic RDBMS (including data (more...)
Last week I got a question about how storage indexes (SIs) behave when the table for which the SI holds data is changed. Logically, one of two things can happen: either the SI is invalidated because the data it holds has changed, or the SI is updated to reflect the change. Think about this for yourself and pick one. I’d love to hear whether you chose correctly.
Here’s a little-known feature of Exadata – you can use a Bloom filter computed from a join column of a table to skip disk I/Os against another table it is joined to. This is not the same as the Bloom filtering of data block contents in the Exadata storage cells; rather, it avoids reading some storage regions from the disks completely.
So, you can use storage indexes to skip I/Os against your large fact table, based on a (more...)
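As a hedged illustration (table names are hypothetical), the typical shape where this kicks in is a dimension-to-fact hash join: the Bloom filter built from the dimension’s join keys can be pushed down as a storage predicate on the fact table scan, skipping regions of the fact table that cannot contain matching keys:

```sql
-- A hash join where a Bloom filter built from dim_customer's join
-- keys can be offloaded to the storage scan of fact_sales; in the
-- plan this shows up as a SYS_OP_BLOOM_FILTER storage() predicate.
SELECT /*+ USE_HASH(f) */ d.region, SUM(f.amount)
FROM   dim_customer d
JOIN   fact_sales   f ON f.cust_id = d.cust_id
WHERE  d.region = 'EMEA'
GROUP  BY d.region;
```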
Enkitec is the best consulting firm for hands-on implementation, running, and troubleshooting of your Oracle-based systems, especially engineered systems like Exadata. We have a truly awesome group of people here; many are the best in their field (just look at the list!!!).
This is why I am here.
This is also why Accenture approached us some time ago – and you may already have seen today’s announcement that Enkitec got bought!
There are already a lot of blog posts and presentations about Hybrid Columnar Compression, and I am adding one more to that list. Recently I was doing some small tests on HCC and noticed that inserts into an HCC table didn’t get compressed, even though I was using direct path loads:
DBA@TEST1> create table hcc_me (text1 varchar2(4000)) compress for archive high;
KJJ@TEST1> insert /*+ append */ into hcc_me select dbms_random.string('x',100) (more...)
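One way to verify whether rows actually ended up HCC-compressed is DBMS_COMPRESSION.GET_COMPRESSION_TYPE, which reports the compression type of an individual row (the owner and table names below follow the hypothetical test table above):

```sql
-- Returns a constant such as DBMS_COMPRESSION.COMP_FOR_ARCHIVE_HIGH
-- for an HCC archive-high row, or COMP_NOCOMPRESS if the row was
-- not compressed at all.
SELECT DBMS_COMPRESSION.GET_COMPRESSION_TYPE('DBA', 'HCC_ME', rowid)
         AS comp_type
FROM   hcc_me
WHERE  ROWNUM = 1;
```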
We have been migrating our databases from non-Exadata servers to the Exadata Database Machine using the RMAN 11g “duplicate standby from active database” command to create the standby databases on the Exadata machine. Below are the steps which were performed for these successful migrations. Assumptions: here we assume that the following tasks have been […]
The post Migrating a Database to an Exadata Machine appeared first on VitalSoftTech.
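The core of that approach, sketched here with hypothetical names and with channel allocation omitted, is an RMAN duplicate run against the live primary:

```sql
-- Run from RMAN connected to the primary (TARGET) and to the
-- started-nomount standby instance (AUXILIARY); datafiles are
-- copied over the network, no intermediate backup needed.
DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  DORECOVER
  SPFILE SET db_unique_name='STBY' COMMENT 'standby on Exadata'
  NOFILENAMECHECK;
```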
I noticed something for the first time tonight when I was playing around in the Enkitec lab – something that I have been doing wrong for a while. When working in the lab, I often rely on the crsctl command to shut down the entire cluster stack for me. It’s really easy to use “crsctl stop cluster -all” followed by “dcli -l root -g ~/dbs_group /u01/app/220.127.116.11/grid/bin/crsctl stop crs” to get everything down (more...)
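For reference, a hedged sketch of that shutdown sequence (the group file and Grid home path are environment-specific placeholders):

```shell
# GRID_HOME is environment-specific, e.g. /u01/app/<version>/grid.
# First stop the clusterware stack on all nodes, then stop the
# remaining OHASD stack on each node via dcli.
crsctl stop cluster -all
dcli -l root -g ~/dbs_group $GRID_HOME/bin/crsctl stop crs
```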
Yes, that is true, I said it: cellcli can lie to you. There are some special cases where the output of cellcli does not reflect reality, and you should double-check its output with your standard OS tools. Here is the output of dcli calling cellcli on an Exadata rack from a client:
[root@dm01db01 ~]# dcli -g cell_group -l root cellcli -e list cell attributes cellsrvStatus,msStatus,rsStatus
dm01cel01: running running running
dm01cel02: running running running
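A hedged way to cross-check what cellcli claims is to look for the actual processes at the OS level on each cell (the process-name patterns below are an assumption and may vary by Exadata software version):

```shell
# Verify that the cellsrv, RS, and MS (java) processes really exist;
# cellcli may still report "running" after a process has died.
dcli -g cell_group -l root \
  "ps -eo comm | egrep 'cellsrv|cellrssrm|java' | sort | uniq -c"
```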
We had a client that was running into a strange issue on their Exadata where new connections coming in through the SCAN were failing. After doing some troubleshooting, it was discovered that it was related to one of the SCAN listeners not properly accepting requests from new sessions. The VIP and listener were running, and everything looked normal.
We had the following SCAN setup:
On Exadata the local drives on the compute nodes are not big enough to allow larger exports, so DBFS is often configured. In my case I had a 1.2 TB DBFS file system mounted under /dbfs_direct/.
While I was doing some exports yesterday I found that my DBFS wasn’t mounted, and running a quick crsctl command to bring it online failed:
[oracle@exadb01 ~]$ crsctl start resource dbfs_mount -n exadb01
CRS-2672: Attempting to start 'dbfs_mount' on 'exadb01'
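When a resource start fails like this, a reasonable first diagnostic step (sketched here; output formats vary by Grid Infrastructure version) is to inspect the resource’s state and details:

```shell
# Tabular view of the dbfs_mount resource's current state per node
crsctl stat res dbfs_mount -t
# Full attribute dump, including STATE_DETAILS and the last error
crsctl stat res dbfs_mount -f
```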
On Exadata (or when setting cell_offload_plan_display = always on non-Exadata) you may see the storage() predicate in addition to the usual access() and filter() predicates in an execution plan:
SQL> SELECT * FROM dual WHERE dummy = 'X';
Check the plan:
Display execution plan for last statement for this session from library cache...
SQL_ID dtjs9v7q7zj1g, child number 0
SELECT * FROM dual WHERE dummy = 'X'
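Since the plan output above is truncated, here is a sketch of what the predicate section typically looks like when cell offload is in play – the same condition appears both as a storage() predicate (evaluated in the storage cells) and as a filter() predicate (re-evaluated in the database):

```
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - storage("DUMMY"='X')
       filter("DUMMY"='X')
```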
Most people associate direct path reads with an algorithm that merely controls how the read is performed. In an Exadata environment, however, this is actually the algorithm that balances the load between the compute and the storage nodes. Something really important. As usual, the algorithm is not perfect, and for some situations […]
If you try to find out what is HCC and how it works you could start reading the documentation, then some books, blog posts and at the end you will have to put all together. In this post I’ll do exactly this. Put all together. Starting with the basic and going through the internals with […]
I’ve previously written about a problem I encountered when kdump is configured to write to an NFS location with UEK (in Exadata software version 18.104.22.168.1). I’m pleased to report that the root cause of the problem has been identified and there is a very simple workaround.
There were some frustrating times working this particular SR, the most notable being a response that was effectively, “It works for me (and so I’ll (more...)
I seem to be getting a lot of surprising performance results lately on our X-2 quarter rack Exadata system, which is good – the result you don’t expect is the one that teaches you something new.
This time, I was looking at using a temporary tablespace based on flash disks (more...)
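A hedged sketch of how such a tablespace might be created, assuming a flash-based ASM disk group named +FLASH (the disk group name and sizes are hypothetical):

```sql
-- A temporary tablespace whose tempfile lives in a flash-backed
-- ASM disk group, so sort/hash spills hit flash instead of disk.
CREATE TEMPORARY TABLESPACE temp_flash
  TEMPFILE '+FLASH' SIZE 64G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
```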
Oracle has released the much-anticipated version of cellsrv compatible with Oracle Database 22.214.171.124 (patch #16980054). Before thinking about upgrading, read MOS note #1571789.1 carefully. Unless you are planning to run database 12c on your Exadata, it would be advisable to continue down the 11.2 branch (more...)
Today, Oracle announced the Exadata X4-2 model. The X4 has some considerable improvements, namely:
- 12-core Intel Xeon E5-2697 CPUs, up from the 8-core models found in the X3-2 (hello, database licenses!)
- 256GB RAM per database server, upgradeable to 512GB
- 96GB RAM per storage server
- 800GB Sun Flash F80 cards (more...)