I think 2 years is long enough to wait between posts!
Today I delivered a session about Oracle Exadata Database Machine Best Practices and promised to post the slides for it (though no one asked about them :). I’ve also posted them to the Tech14 agenda.
Direct download: UKOUG Tech14 Exadata Security slides
With the INMEMORY clause you can specify 4 sub-clauses:
- The MEMCOMPRESS clause specifies whether and how compression is used
- The PRIORITY clause specifies the priority (“order”) in which the segments are loaded when the IMCS is populated
- The DISTRIBUTE clause specifies how data is distributed across RAC instances
- The DUPLICATE clause specifies whether and how data is duplicated across RAC instances
The aim of this post is not to describe these attributes in detail. Instead, (more...)
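To make the four sub-clauses concrete, here is a small sketch that composes them into an `ALTER TABLE ... INMEMORY` statement. The table name `sales` and the chosen values are illustrative assumptions; the keyword spellings follow the Oracle 12c In-Memory syntax.

```python
# Hypothetical sketch: composing the four INMEMORY sub-clauses into one DDL
# statement. Table name and option values are illustrative only.
def inmemory_ddl(table, memcompress="FOR QUERY LOW", priority="HIGH",
                 distribute="AUTO", duplicate="DUPLICATE ALL"):
    return (f"ALTER TABLE {table} INMEMORY "
            f"MEMCOMPRESS {memcompress} "   # whether/how to compress
            f"PRIORITY {priority} "         # population order in the IMCS
            f"DISTRIBUTE {distribute} "     # spread across RAC instances
            f"{duplicate}")                 # mirror across RAC instances

ddl = inmemory_ddl("sales")
print(ddl)
```

Running this prints a single statement combining all four sub-clauses, which you could then paste into SQL*Plus against a test table.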
In Part I, I discussed how Zone Maps are new index-like structures, similar to Exadata Storage Indexes, that enable the “pruning” of disk blocks during table accesses by storing the min and max values of selected columns for each “zone” of a table, a zone being a range of contiguous (8M) blocks. I […]
Andy Colvin has the lowdown on the Oracle response and fixes for the bash shellshock vulnerability.
However, when I last looked it seemed Oracle had not discussed anything regarding the IB switches being vulnerable.
The IB switches have bash running on them and Oracle have verified the IB switches are indeed vulnerable.
[root@dm01dbadm01 ~]# ssh 10.200.131.22
Last login: Tue Sep 30 22:46:41 2014 from dm01dbadm01.e-dba.com
There has recently been a lot of news about the exploit revealed in the bash shell. While the fix is very quick to implement, there are a couple of tricks that are required to install this update on an Exadata environment. According to Oracle support note #1405320.1, Exadata storage server versions 11.2.3.x.x and 12.1.1.x.x are susceptible to the exploit. On a typical Oracle Enterprise Linux, a simple (more...)
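Before patching, it is worth confirming whether a given bash binary is actually affected. The widely circulated CVE-2014-6271 probe exports a crafted function definition in an environment variable and checks whether the trailing command executes. Here is a small Python wrapper around that probe; the `/bin/bash` path is an assumption about the host.

```python
import subprocess

# CVE-2014-6271 ("Shellshock") probe: a vulnerable bash imports the
# environment variable as a function definition and executes the trailing
# "echo vulnerable"; a patched bash does not. /bin/bash is an assumed path.
def shellshock_vulnerable(bash="/bin/bash"):
    result = subprocess.run(
        [bash, "-c", "echo probe"],
        env={"x": "() { :;}; echo vulnerable"},
        capture_output=True, text=True,
    )
    return "vulnerable" in result.stdout
```

On an unpatched node this returns `True`; after applying the fix it should return `False`.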
This is based on the presentation Juan Loaiza gave regarding What’s new with Exadata. While a large part of the presentation focussed on what was already available, there are quite a few interesting new features that are coming down the road.
First off was a brief mention of the hardware. I’m less excited about this. The X4 has plenty of the hardware that you could want: CPU, memory and flash. You’d expect some or all (more...)
Today while working on an ASM diskgroup I noticed a negative value for USABLE_FILE_MB. I was a little surprised, as it had been pretty long since I last worked on ASM. So I started looking around for blogs and MOS docs and found a few really nice ones. A negative value for USABLE_FILE_MB means that you do not have […]
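The arithmetic behind the negative value is worth spelling out. Per the Oracle ASM documentation, USABLE_FILE_MB is derived from FREE_MB minus the space reserved to re-mirror after a disk failure (REQUIRED_MIRROR_FREE_MB), divided by the redundancy factor. The figures below are invented to show how the result goes negative.

```python
# Sketch of the USABLE_FILE_MB derivation: when FREE_MB drops below
# REQUIRED_MIRROR_FREE_MB, the numerator (and hence the result) is negative.
# The sample figures are illustrative, not from a real diskgroup.
def usable_file_mb(free_mb, required_mirror_free_mb, redundancy="NORMAL"):
    factor = {"EXTERNAL": 1, "NORMAL": 2, "HIGH": 3}[redundancy]
    return (free_mb - required_mirror_free_mb) / factor

# A NORMAL-redundancy diskgroup with less free space than is reserved
# for re-mirroring after a disk failure:
value = usable_file_mb(free_mb=50_000, required_mirror_free_mb=102_400)
print(value)  # negative: -26200.0
```

A negative result means that a disk failure could leave ASM unable to restore full redundancy for all files.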
Despite the title, this is actually a technical post about Oracle, disk I/O and Exadata & Oracle In-Memory Database Option performance. Read on :)
If a car dealer tells you that this fancy new car on display goes 10 times (or 100 or 1000) faster than any of your previous ones, then either the salesman is lying or this new car is doing something radically different from all the old ones. You don’t just get orders of magnitude (more...)
In Oracle Database 12c we can find many new and shiny things… So many that we can miss the little good things really easily. I think that this is one of them. Previously I made a post “All About HCC“, describing how HCC works and some of the issues that we can hit […]
I’m sharing this in the hope of saving someone from an unwelcome surprise.
I recently upgraded an Exadata system from 220.127.116.11.1 to 18.104.22.168.1. Apart from what turns out to be a known bug that resulted in the patching of the InfiniBand switches “failing”, it all seemed to go without a snag. That’s until I decided to do some node failure testing…
Having forced a node (more...)
Just a quick post on a new Exadata feature called Zone Maps. They’re similar to storage indexes on Exadata, but with more control (you can define the columns and how the data is refreshed, for example). People have complained for years that storage indexes provided no control mechanisms, but now we have a way to exert our God-given rights as DBAs to control yet another aspect of the database. Here’s a link to the (more...)
Oracle is announcing today what it’s calling “Oracle Big Data SQL”. As usual, I haven’t been briefed, but highlights seem to include:
- Oracle Big Data SQL is basically data federation using the External Tables capability of the Oracle DBMS.
- Unlike independent products — e.g. Cirro — Oracle Big Data SQL federates SQL queries only across Oracle offerings, such as the Oracle DBMS, the Oracle NoSQL offering, or Oracle’s Cloudera-based Hadoop appliance.
- Also unlike independent (more...)
As part of my series on the keys to and likelihood of success, I outlined some examples from the DBMS industry. The list turned out too long for a single post, so I split it up by millennia. The part on 20th Century DBMS success and failure went up Friday; in this one I’ll cover more recent events, organized in line with the original overview post. Categories addressed will include analytic RDBMS (including data (more...)
Here’s a little known feature of Exadata – you can use a Bloom filter computed from a join column of a table to skip disk I/Os against another table it is joined to. This is not the same as the Bloom filtering of the data block contents in Exadata storage cells, but rather avoiding reading some storage regions from the disks completely.
So, you can use storage indexes to skip I/Os against your large fact table, based on a (more...)
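The mechanism is easier to see with a toy Bloom filter: build it from the join column of the small (dimension) table, then probe it for the values stored in each region of the fact table; a region that cannot pass the filter can be skipped entirely. The sizes, hash choice, and data below are illustrative, not what Exadata uses internally.

```python
# Minimal Bloom-filter sketch of join-based I/O pruning. False positives
# are possible (a region may be read unnecessarily); false negatives are
# not (a matching region is never skipped).
class Bloom:
    def __init__(self, nbits=1024, nhashes=3):
        self.nbits, self.nhashes = nbits, nhashes
        self.bits = 0

    def _positions(self, key):
        # Derive nhashes bit positions per key (toy hashing scheme).
        return [hash((i, key)) % self.nbits for i in range(self.nhashes)]

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        return all(self.bits >> p & 1 for p in self._positions(key))

# Build the filter from the dimension table's join keys...
dim_keys = {10, 20, 30}
bf = Bloom()
for k in dim_keys:
    bf.add(k)

# ...then probe it per fact-table storage region: a region containing no
# key that passes the filter need not be read from disk at all.
fact_region = [10, 20]
skip_region = not any(bf.might_contain(k) for k in fact_region)
```

Here `skip_region` is `False` because the region holds matching keys; a region holding only non-joining keys would usually be skipped.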
Enkitec is the best consulting firm for hands on implementation, running and troubleshooting your Oracle based systems, especially the engineered systems like Exadata. We have a truly awesome group of people here; many are the best in their field (just look at the list!!!).
This is why I am here.
This is also why Accenture approached us some time ago – and you may already have seen today’s announcement that Enkitec got bought!
I noticed something for the first time tonight when I was playing around in the Enkitec lab – something that I have been doing wrong for a while. When working in the lab, I often rely on the crsctl command to shut down the entire cluster stack for me. It’s really easy to use “crsctl stop cluster -all” followed by “dcli -l root -g ~/dbs_group /u01/app/22.214.171.124/grid/bin/crsctl stop crs” to get everything down (more...)
We had a client that was running into a strange issue on their Exadata where new connections coming in through the SCAN were failing. After doing some troubleshooting, it was discovered that it was related to one of the SCAN listeners not properly accepting requests from new sessions. The VIP and listener were running, and everything looked normal.
We had the following SCAN setup:
On Exadata the local drives on the compute nodes are not big enough to allow larger exports, so DBFS is often configured. In my case I had a 1.2 TB DBFS file system mounted under /dbfs_direct/.
While I was doing some exports yesterday I found that my DBFS wasn’t mounted, and running a quick crsctl command to bring it online failed:
[oracle@exadb01 ~]$ crsctl start resource dbfs_mount -n exadb01
CRS-2672: Attempting to start 'dbfs_mount' on 'exadb01'
I’ve previously written about a problem I encountered when kdump is configured to write to an NFS location with UEK (in Exadata software version 126.96.36.199.1). I’m pleased to report that the root cause of the problem has been identified and there is a very simple workaround.
There were some frustrating times working this particular SR, the most notable being a response that was effectively, “It works for me (and so I’ll (more...)
I seem to be getting a lot of surprising performance results lately on our X-2 quarter rack Exadata system, which is good – the result you don’t expect is the one that teaches you something new.
This time, I was looking at using a temporary tablespace based on flash disks (more...)