That’s … Huge!

Recently I’ve noticed the occasional thread in Oracle newsgroups and lists asking about hugepages support in Linux, including ‘best practices’ for hugepages configuration. This information is out on that ‘world-wide web’ in various places; I’d rather put a lot of that information in this article to provide an easier way to get to it. I’ll cover what hugepages are, what they do, what they can’t do and how best to allocate them for your particular (more...)
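
As a rough aside that is not taken from the post itself: a common starting point is to estimate how many 2 MB hugepages the instance needs from its SGA size. The query below is a minimal sketch of that idea; it assumes 2 MB hugepages and a single instance on the host.

-- Rough estimate only: total SGA bytes divided by the 2 MB hugepage size.
-- Add headroom and the SGAs of any other instances before setting vm.nr_hugepages.
select ceil(sum(value) / 1024 / 1024 / 2) as hugepages_2mb
  from v$sga;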

The point of predicate pushdown

Oracle is announcing today what it’s calling “Oracle Big Data SQL”. As usual, I haven’t been briefed, but highlights seem to include:

  • Oracle Big Data SQL is basically data federation using the External Tables capability of the Oracle DBMS.
  • Unlike independent products — e.g. Cirro — Oracle Big Data SQL federates SQL queries only across Oracle offerings, such as the Oracle DBMS, the Oracle NoSQL offering, or Oracle’s Cloudera-based Hadoop appliance.
  • Also unlike independent (more...)

21st Century DBMS success and failure

As part of my series on the keys to and likelihood of success, I outlined some examples from the DBMS industry. The list turned out too long for a single post, so I split it up by millennia. The part on 20th Century DBMS success and failure went up Friday; in this one I’ll cover more recent events, organized in line with the original overview post. Categories addressed will include analytic RDBMS (including data (more...)

Exadata storage indexes and DML

Last week I got a question about how storage indexes (SI) behave when the table for which the SI is holding data is changed. Based on logical reasoning, it can be one of two things: either the SI is invalidated because the data it is holding has changed, or the SI is updated to reflect the change. Think about this for yourself and pick one. I would love to hear whether you chose the correct one.

First (more...)
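
As a hedged aside that is not part of the post: one way to observe what a storage index did for your own session is the statistic below; comparing its value before and after DML against the table shows whether I/O is still being skipped.

-- Bytes of disk I/O avoided by storage indexes for the current session.
select n.name, s.value
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'cell physical IO bytes saved by storage index';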

Combining Bloom Filter Offloading and Storage Indexes on Exadata

Here’s a little-known feature of Exadata – you can use a Bloom filter computed from a join column of a table to skip disk I/Os against another table it is joined to. This is not the same as the Bloom filtering of datablock contents in the Exadata storage cells; rather, it means avoiding reading some storage regions from the disks completely.

So, you can use storage indexes to skip I/Os against your large fact table, based on a (more...)
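
An illustrative sketch, not taken from the post, using hypothetical DIM and FACT tables: a hash join like the one below can show up with a storage() predicate on the fact table containing SYS_OP_BLOOM_FILTER, meaning the Bloom filter built from the dimension's join column is also used by the storage cells to decide which storage regions of the fact table can be skipped.

-- DIM and FACT are hypothetical tables used only for illustration.
select /*+ use_hash(f) */ count(*)
  from dim d, fact f
 where f.dim_id = d.dim_id
   and d.attr = 'some selective value';

-- In the execution plan, the storage() predicate on FACT may then include:
--   storage(SYS_OP_BLOOM_FILTER(:BF0000, "F"."DIM_ID"))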

Enkitec + Accenture = Even More Awesomeness!

Enkitec is the best consulting firm for hands-on implementation, running, and troubleshooting of your Oracle-based systems, especially the engineered systems like Exadata. We have a truly awesome group of people here; many are the best in their field (just look at the list!!!).

This is why I am here.

This is also why Accenture approached us some time ago – and you may already have seen today’s announcement that Enkitec got bought!

(more...)

Inserts on HCC tables

There are already a lot of blog posts and presentations about Hybrid Columnar Compression, and I am adding one more blog post to that list. Recently I was doing some small tests on HCC and noticed that inserts into an HCC table didn't get compressed, even though I was using direct path loads:

DBA@TEST1> create table hcc_me (text1 varchar2(4000)) compress for archive high;

Table created.

KJJ@TEST1> insert /*+ append */ into hcc_me select dbms_random.string('x',100)  (more...)
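
Not part of the excerpt above, but a handy way to verify the outcome of such a test: DBMS_COMPRESSION can report the compression type of an individual row, which shows whether the insert was actually HCC-compressed.

-- Returns a DBMS_COMPRESSION constant (e.g. COMP_NOCOMPRESS vs. COMP_FOR_ARCHIVE_HIGH)
-- for a sample row of the test table.
select dbms_compression.get_compression_type(user, 'HCC_ME', rowid) as comp_type
  from hcc_me
 where rownum = 1;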

Migrating a Database to an Exadata Machine

We have been migrating our databases from non-Exadata servers to the Exadata Database Machine using the “RMAN 11g Duplicate standby from Active database” command to create the standby databases on the Exadata machine. Below are the steps that were performed for these successful migrations. Assumptions: here we assume that the following tasks have been […]
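
A minimal sketch of the RMAN command the post refers to, run while connected to the source as TARGET and to the Exadata instance as AUXILIARY; the DB_UNIQUE_NAME below is an illustrative placeholder, not the post's actual value.

duplicate target database for standby from active database
  spfile
    set db_unique_name='exa_stby'
  nofilenamecheck
  dorecover;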

The post Migrating a Database to an Exadata Machine appeared first on VitalSoftTech.

Database Shutdown With crsctl

I noticed something for the first time tonight when I was playing around in the Enkitec lab – something that I have been doing wrong for a while.  When working in the lab, I often rely on the crsctl command to shut down the entire cluster stack for me.  It’s really easy to use “crsctl stop cluster -all” followed by “dcli -l root -g ~/dbs_group /u01/app/11.2.0.4/grid/bin/crsctl stop crs” to get everything down (more...)

Cellcli can lie to you…

Yes, that is true, I said it: cellcli can lie to you. There are some special cases where the output of cellcli does not reflect reality, and you should double-check its output with your standard OS tools. So this is the output of dcli calling cellcli on an Exadata rack from a client:

[root@dm01db01 ~]# dcli -g cell_group -l root cellcli -e list cell attributes cellsrvStatus,msStatus,rsStatus
dm01cel01: running       running         running
dm01cel02: running       running         running
dm01cel03:  (more...)

SCAN VIP Troubleshooting

We had a client that was running into a strange issue on their Exadata where new connections coming in through the SCAN were failing.  After some troubleshooting, we discovered that the problem was related to one of the SCAN listeners not properly accepting requests from new sessions.  The VIP and listener were running, and everything looked normal.

We had the following SCAN setup:

SCAN VIP #   VIP IP
1            172.25.2.70
2            (more...)

Troubleshooting Oracle DBFS mount issues

On Exadata, the local drives on the compute nodes are not big enough for larger exports, so DBFS is often configured. In my case I had a 1.2 TB DBFS file system mounted under /dbfs_direct/.

While I was doing some exports yesterday, I found that my DBFS wasn't mounted, and running a quick crsctl command to bring it online failed:

[oracle@exadb01 ~]$ crsctl start resource dbfs_mount -n exadb01
 CRS-2672: Attempting to start 'dbfs_mount' on 'exadb01'
  (more...)

Where does the Exadata storage() predicate come from?

On Exadata (or when setting cell_offload_plan_display = always on non-Exadata) you may see the storage() predicate in addition to the usual access() and filter() predicates in an execution plan:

SQL> SELECT * FROM dual WHERE dummy = 'X';

D
-
X

Check the plan:

SQL> @x
Display execution plan for last statement for this session from library cache...

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID  dtjs9v7q7zj1g, child number 0
-------------------------------------
SELECT * FROM dual WHERE dummy = 'X'

 (more...)
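
A small usage note based on the parameter the post mentions: on a non-Exadata test database you can make the plan display the storage() predicates anyway.

-- Makes the plan output show storage() predicates even without Exadata storage.
alter session set cell_offload_plan_display = always;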

Direct Path Reads and Cell Offloading

Most people associate direct path reads with an algorithm that merely controls the way a read is performed. But in an Exadata environment this is actually the algorithm that balances the load between the compute and the storage nodes. Something really important. As usual, the algorithm is not perfect and for some situations […]
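
An aside that is not from the post: since smart scans require direct path reads, a quick way to check whether a statement was even eligible for offloading, and how much data actually came back over the interconnect, is V$SQL. The SQL_ID below is a placeholder.

-- Compare how much of the I/O was eligible for offload with what actually
-- travelled over the interconnect for a given statement.
select sql_id,
       io_cell_offload_eligible_bytes,
       io_interconnect_bytes
  from v$sql
 where sql_id = '&sql_id';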

ALL about HCC

If you try to find out what HCC is and how it works, you could start by reading the documentation, then some books and blog posts, and at the end you will have to put it all together. In this post I'll do exactly that: put it all together, starting with the basics and going through the internals with […]
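
For reference, and not taken from the excerpt: HCC is enabled per table (or partition) with one of four compression levels in the DDL, for example:

-- The four HCC levels, roughly from lightest to heaviest compression.
create table t_ql (c1 number) compress for query low;
create table t_qh (c1 number) compress for query high;
create table t_al (c1 number) compress for archive low;
create table t_ah (c1 number) compress for archive high;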

Kdump to NFS in UEK (Solution)

I’ve previously written about a problem I encountered when kdump is configured to write to an NFS location with UEK (in Exadata software version 11.2.3.2.1). I’m pleased to report that the root cause of the problem has been identified and there is a very simple workaround.

There were some frustrating times working this particular SR, the most notable being a response that was effectively, “It works for me (and so I’ll (more...)

Using SSD for a temp tablespace on Exadata

I seem to be getting a lot of surprising performance results lately on our X-2 quarter rack Exadata system, which is good – the result you don’t expect is the one that teaches you something new.

This time, I was looking at using a temporary tablespace based on flash disks (more...)
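
A minimal sketch of the idea, assuming a hypothetical ASM disk group named +FLASH carved from flash-based grid disks; the post's actual setup and results are behind the cut.

-- +FLASH is a hypothetical flash-based ASM disk group used for illustration.
create temporary tablespace temp_ssd tempfile '+FLASH' size 100g;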

Exadata Storage Server Version 12.1.1.1.0 Released

Oracle has released the much-anticipated version of cellsrv compatible with Oracle Database 12.1.0.1 (patch #16980054). Before thinking about upgrading, read MOS note #1571789.1 carefully.  Unless you are planning to run database 12c on your Exadata, it would be advisable to continue down the 11.2 branch (more...)

Oracle Announces Exadata X4-2

Today, Oracle announced the Exadata X4-2 model.  The X4 has some considerable improvements, namely:

  • 12-core Intel Xeon E5-2697 CPUs, up from the 8-core models found in the X3-2 (hello, database licenses!)
  • 256GB RAM per database server, upgradeable to 512GB
  • 96GB RAM per storage server
  • 800GB Sun Flash F80 cards (more...)

Exadata MAA Presentation – Oracle Day 2013 Istanbul

Exadata Maximum Availability Architecture from Yunus Emre Baransel