I’m sharing this in the hope of saving someone from an unwelcome surprise.
I recently upgraded an Exadata system from 188.8.131.52.1 to 184.108.40.206.1. Apart from what turns out to be a known bug that resulted in the patching of the InfiniBand switches “failing”, it all seemed to go without a snag. That was until I decided to do some node failure testing…
Having forced a node (more...)
Just a quick post on a new Exadata feature called Zone Maps. They’re similar to storage indexes on Exadata, but with more control (you can define the columns and how the data is refreshed, for example). People have complained for years that storage indexes provided no control mechanisms, but now we have a way to exert our God-given rights as DBAs to control yet another aspect of the database. Here’s a link to the (more...)
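For illustration, the basic DDL looks something like this (a minimal sketch; the table and column names are invented):

SQL> create materialized zonemap sales_zmap
  2  on sales (cust_id, promo_id);

Refresh behaviour can then be controlled through the zone map’s refresh clause on the same statement.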
Recently I’ve noticed the occasional thread in Oracle newsgroups and lists asking about hugepages support in Linux, including ‘best practices’ for hugepages configuration. This information is out there on the ‘world-wide web’ in various places; I’d rather gather a lot of it in this article to provide an easier way to get to it. I’ll cover what hugepages are, what they do, what they can’t do and how best to allocate them for your particular (more...)
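As a taste of what’s ahead: on Linux the allocation itself boils down to a kernel parameter plus memlock limits, along these lines (the numbers are an arbitrary example; size them from your SGA):

# /etc/sysctl.conf: 2048 x 2MB pages = 4GB of hugepages
vm.nr_hugepages = 2048

# /etc/security/limits.conf: let oracle lock that memory (values in kB)
oracle soft memlock 4194304
oracle hard memlock 4194304

Run sysctl -p (or reboot) afterwards and check HugePages_Total in /proc/meminfo.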
Oracle is announcing today what it’s calling “Oracle Big Data SQL”. As usual, I haven’t been briefed, but highlights seem to include:
- Oracle Big Data SQL is basically data federation using the External Tables capability of the Oracle DBMS (see the sketch after this list).
- Unlike independent products — e.g. Cirro — Oracle Big Data SQL federates SQL queries only across Oracle offerings, such as the Oracle DBMS, the Oracle NoSQL offering, or Oracle’s Cloudera-based Hadoop appliance.
- Also unlike independent (more...)
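If that federation point is right, the DDL presumably looks something like this external-table sketch (unverified, since I haven’t been briefed; the ORACLE_HIVE driver and all the names here are my assumptions):

SQL> create table web_logs_ext (ip varchar2(15), url varchar2(4000))
  2  organization external (type oracle_hive)
  3  reject limit unlimited;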
As part of my series on the keys to and likelihood of success, I outlined some examples from the DBMS industry. The list turned out too long for a single post, so I split it up by millennia. The part on 20th Century DBMS success and failure went up Friday; in this one I’ll cover more recent events, organized in line with the original overview post. Categories addressed will include analytic RDBMS (including data (more...)
Last week I got a question about how storage indexes (SIs) behave when the table whose data the SI covers is changed. Based on logical reasoning, it can be one of two things: the SI is invalidated because the data it holds has changed, or the SI is updated to reflect the change. Think about this for yourself and pick one. I’d love to hear whether you picked the correct answer.
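One way to find out for yourself is to bracket the DML with a look at the storage index savings statistic; a sketch:

SQL> select n.name, s.value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and n.name = 'cell physical IO bytes saved by storage index';

Run the same smart scan before and after changing the table and compare the values.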
I know, posts about upcoming user group meetings are not exactly exciting, but it’s good to be reminded. You can’t beat a bit of free training, can you?
On Monday 14th I am doing a lightning talk at the 4th Oracle Midlands event. The main reason to come along is to see Jonathan Lewis talk about designing efficient SQL, and then he will also do a 10-minute session on Breaking Exadata (to achieve that (more...)
Anyone who has looked at Exadata might ask this question, and I did too. After all, cell smart table scan is in the wait class User I/O, so there should be more parameters, right? This is what you find for a smart scan:
NAME                           PARAMETER1           PARAMETER2           PARAMETER3                     WAIT_CLASS
------------------------------ -------------------- -------------------- ------------------------------ ---------------
cell smart table scan          cellhash#                                                                 User I/O
cell smart index scan          cellhash#                                                                 User I/O
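This listing is simply what the event catalogue reports; it can be reproduced with:

SQL> select name, parameter1, parameter2, parameter3, wait_class
  2  from v$event_name
  3  where name like 'cell smart%scan';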
Compare this to the traditional IO request:
NAME PARAMETER1 PARAMETER2 (more...)
I was recently involved in an investigation into a slow-running report on an Exadata system. This was rather interesting: the raw text file with the query was 33kb in size, and SQL Developer formatted the query to > 1000 lines. There were lots of interesting constructs in the query, and the optimiser did its best to make sense of the inline views and various joins.
This almost led to a different post about the importance (more...)
At the Accenture Enkitec Group we have a couple of Exadata racks for proofs of concept (PoCs), performance validation, research and experimentation. This means the databases on the racks appear and vanish more often than they (should) do on an average customer Exadata rack (to be honest, most people use a fixed few existing databases rather than creating and removing a database for every test).
Nevertheless, we got into a situation where the /etc/oratab file was not (more...)
One of the cool new things in 12.1 is that you can set the Session Data Unit to 2MB. This might not sound like a big deal, but getting it to work required me to dig deeper into the TNS layer than I intended… Then I somehow got stuck on the wrong track; thankfully the team at Enkitec helped out here with pointers.
This post is rather boring if you just look at it but (more...)
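For reference, the settings boil down to something like this (a sketch; the host and service names are made up, and the listener end must allow the same size):

# sqlnet.ora on client and server
DEFAULT_SDU_SIZE = 2097152

# or per connection in tnsnames.ora
ORCL =
  (DESCRIPTION =
    (SDU = 2097152)
    (ADDRESS = (PROTOCOL = TCP)(HOST = exadb01)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl)))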
Here’s a little-known feature of Exadata – you can use a Bloom filter computed from a join column of a table to skip disk I/Os against another table it is joined to. This is not the same as the Bloom filtering of the data block contents in Exadata storage cells, but rather about avoiding reading some storage regions from the disks completely.
So, you can use storage indexes to skip I/Os against your large fact table, based on a (more...)
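The typical shape is a hash join from a small, selective dimension into a big fact table, something like this sketch (the table names are invented; the PX_JOIN_FILTER hint merely encourages the join filter):

SQL> select /*+ leading(d) use_hash(f) px_join_filter(f) parallel(4) */ count(*)
  2  from dim_products d, fact_sales f
  3  where f.prod_id = d.prod_id
  4  and d.category = 'GADGETS';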
I guess everybody who works with Oracle databases and has been involved with Oracle Exadata in any way knows about smart scans. It is the smart scan that makes the magic happen: full segment scans with sometimes enormously reduced scan times. The Oracle database performs smart scans via something that is referred to as ‘offloading’. This is all generally known information.
But how does that work? I assume more people are like me and are anxious (more...)
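As a starting point, whether a given statement was offloaded at all shows up in v$sql; a sketch (the sql_id is a placeholder):

SQL> select sql_id, io_cell_offload_eligible_bytes, io_interconnect_bytes
  2  from v$sql
  3  where sql_id = '&sql_id';

A zero in io_cell_offload_eligible_bytes generally means no smart scan took place.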
Enkitec is the best consulting firm for hands-on implementation, running and troubleshooting of your Oracle-based systems, especially engineered systems like Exadata. We have a truly awesome group of people here; many are the best in their field (just look at the list!!!).
This is why I am here.
This is also why Accenture approached us some time ago – and you may already have seen today’s announcement that Enkitec got bought!
There are already a lot of blog posts and presentations about Hybrid Columnar Compression, and I am adding one more to that list. Recently I was doing some small tests on HCC and noticed that inserts into an HCC table didn’t get compressed, and yes, I was using direct-path loads:
DBA@TEST1> create table hcc_me (text1 varchar2(4000)) compress for archive high;
KJJ@TEST1> insert /*+ append */ into hcc_me select dbms_random.string('x',100) (more...)
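The way to check a claim like that is to ask for the compression type of an actual row, for example (a sketch; DBMS_COMPRESSION returns 1, COMP_NOCOMPRESS, for uncompressed rows):

KJJ@TEST1> select dbms_compression.get_compression_type(user, 'HCC_ME', rowid) ctype
  2  from hcc_me where rownum = 1;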
We have been migrating our databases from non-Exadata servers to the Exadata Database Machine using the RMAN 11g “Duplicate standby from Active database” command to create the standby databases on the Exadata machine. Below are the steps that were performed for these successful migrations. Assumptions: here we assume that the following tasks have been […]
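The core of that approach is a single RMAN run, roughly like this (a sketch; the connect strings and DB_UNIQUE_NAME are assumptions):

$ rman target sys@proddb auxiliary sys@exastby
RMAN> duplicate target database for standby
2> from active database
3> spfile set db_unique_name='exastby'
4> nofilenamecheck;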
The post Migrating a Database to an Exadata Machine appeared first on VitalSoftTech.
I noticed something for the first time tonight when I was playing around in the Enkitec lab – something that I have been doing wrong for a while. When working in the lab, I often rely on the crsctl command to shut down the entire cluster stack for me. It’s really easy to use “crsctl stop cluster -all” followed by “dcli -l root -g ~/dbs_group /u01/app/220.127.116.11/grid/bin/crsctl stop crs” to get everything down (more...)
Yes, that is true, I said it: cellcli can lie to you. There are some special cases where the output of cellcli is not the reality, and you should double-check its output with your standard OS tools. So this is the output of dcli calling cellcli on an Exadata rack from a client:
[root@dm01db01 ~]# dcli -g cell_group -l root cellcli -e list cell attributes cellsrvStatus,msStatus,rsStatus
dm01cel01: running running running
dm01cel02: running running running
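When in doubt, the same question can be put to the OS directly, for instance (a sketch that checks for the cellsrv processes themselves; the [c] keeps grep from matching itself):

[root@dm01db01 ~]# dcli -g cell_group -l root "ps -ef | grep [c]ellsrv"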
We had a client that was running into a strange issue on their Exadata where new connections coming in through the SCAN were failing. After some troubleshooting, we discovered that it was related to one of the SCAN listeners not properly accepting requests from new sessions. The VIP and listener were running, and everything looked normal.
We had the following SCAN setup:
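This kind of configuration can be pulled up with the standard srvctl commands (a generic sketch, not the actual output):

$ srvctl config scan
$ srvctl status scan_listener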
On Exadata, the local drives on the compute nodes are not big enough for larger exports, so DBFS is often configured. In my case I had a 1.2 TB DBFS file system mounted under /dbfs_direct/.
While doing some exports yesterday I found that my DBFS wasn’t mounted, and running a quick crsctl command to bring it online failed:
[oracle@exadb01 ~]$ crsctl start resource dbfs_mount -n exadb01
CRS-2672: Attempting to start 'dbfs_mount' on 'exadb01'
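An OS-level check shows whether anything is actually mounted at the mount point (a sketch using /dbfs_direct from above):

[oracle@exadb01 ~]$ mount | grep dbfs
[oracle@exadb01 ~]$ df -h /dbfs_direct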