The Unix utilities ps and top report memory differently with HugePages than without.
Without HugePages, ps seems to include the SGA memory under the SZ column:
UID PID PPID C SZ RSS PSR STIME TTY TIME CMD
oracle 1822 1 0 846155 16232 0 07:19 ? 00:00:00 ora_d000_orcl
oracle 1824 1 0 846155 16228 0 07:19 ? 00:00:00 ora_d001_orcl
oracle 1826 1 0 846155 16236 0 07:19 ? 00:00:00 ora_d002_orcl
oracle 1828 1 0 846155 (more...)
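As a quick sanity check on where that memory actually sits, the HugePages counters in /proc/meminfo tell you how many huge pages are allocated and free. A minimal sketch of the arithmetic (the sample values below are assumed for illustration, not taken from the system above):

```python
# Parse a /proc/meminfo excerpt and compute the HugePages memory in use.
# The sample values are hypothetical; on a live system you would read
# the real file with open("/proc/meminfo").
sample = """HugePages_Total:    2048
HugePages_Free:      512
Hugepagesize:       2048 kB"""

fields = {}
for line in sample.splitlines():
    key, rest = line.split(":")
    fields[key.strip()] = int(rest.split()[0])

# Memory committed to HugePages = (total - free) pages * page size (kB)
in_use_mb = (fields["HugePages_Total"] - fields["HugePages_Free"]) \
    * fields["Hugepagesize"] // 1024
print(in_use_mb, "MB of HugePages in use")  # 3072 MB of HugePages in use
```

On a real box the Total/Free/Rsvd counters move as the SGA is allocated, which is why ps and top stop telling the whole story once HugePages are in play.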
Join me and Tim Gorman as we host a live webinar and Q&A on September 18th at 10am PST with Jonathan Lewis. Jonathan will explain from his own experience how Delphix works and what industry problems it solves.
Click here to register for our webinar.
Jonathan explains the issues and obstacles to creating “thin clones” on typical industry hardware:
- how point-in-time snapshots work
- how clones can rapidly be made from those snapshots
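For a rough mental model of what a thin clone built on a point-in-time snapshot looks like, here is a toy copy-on-write sketch in Python. It is purely illustrative, not a description of how Delphix actually implements cloning:

```python
# Toy copy-on-write "thin clone": a clone starts with no data of its own
# and falls back to its parent snapshot, storing a block only when written.
class Snapshot:
    def __init__(self, blocks):
        self.blocks = dict(blocks)  # frozen point-in-time image

class ThinClone:
    def __init__(self, snapshot):
        self.snapshot = snapshot
        self.delta = {}  # only blocks written after cloning

    def read(self, block_no):
        return self.delta.get(block_no, self.snapshot.blocks.get(block_no))

    def write(self, block_no, data):
        self.delta[block_no] = data  # copy-on-write: parent stays untouched

base = Snapshot({0: "hdr", 1: "emp rows", 2: "dept rows"})
clone = ThinClone(base)   # created instantly, shares all storage
clone.write(1, "emp rows (modified)")
print(clone.read(1))      # prints "emp rows (modified)"
print(base.blocks[1])     # prints "emp rows" -- snapshot unchanged
```

The point of the sketch: creating the clone copies nothing, so it is nearly instant and nearly free in storage; only subsequently changed blocks consume new space.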
Good blogs keep popping up while others fade into the background, so it’s hard to keep track of the good stuff out there. The following is a list of blogs I have gotten a lot out of, either in the past or currently. It would be great to get comments on what the best current blogs are, and then iterate on this list and keep it updated.
On Monday we had some performance problems on a system that includes a database which uses shared servers. The top wait was “virtual circuit wait”. Here are the top 5 events for a 52 minute time frame:
Top 5 Timed Foreground Events (event, avg wait in ms, % DB time):
- virtual circuit wait
- db file sequential read
In my recent post I showed how log file sync (LFS) and log file parallel write (LFPW) look for normal systems. I think it would also be interesting to compare that to the situation when LGWR does not have enough CPU.
I happen to have collected LGWR and database-level trace files for a 220.127.116.11 database on a Solaris 10 server which was under serious pressure (50 threads mostly inserting and committing data, only 32 (more...)
I’m proud to be one of the Oracle ACE Directors. Watch the video for some viewpoints on the Oracle ACE program from both Oracle and other Oracle ACEs.
There is a very common mistake in troubleshooting log file sync (LFS) waits: comparing its average time to average log file parallel write (LFPW) and trying to deduce from that whether the root cause of the wait is slow I/O or something else. The fact that this approach is recommended by Oracle itself (e.g. MOS 1376916.1) and many independent experts unfortunately doesn’t make it any less wrong.
It is well known that averages and ratios can distort reality (Millsap and (more...)
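A tiny numeric sketch of the distortion (the wait samples are made up for illustration): a handful of huge LFS waits can drag the average far away from what a typical commit actually experiences, which is exactly why comparing averages to averages can point you at the wrong root cause:

```python
# Hypothetical log file sync wait samples (ms): most commits are fast,
# but a few waits are huge (e.g. LGWR starved of CPU).
waits_ms = [1] * 95 + [200] * 5   # assumed distribution for illustration

avg = sum(waits_ms) / len(waits_ms)
median = sorted(waits_ms)[len(waits_ms) // 2]
print(f"average = {avg} ms, median = {median} ms")
# The average is roughly 10x the wait a typical commit actually sees,
# so "avg LFS vs avg LFPW" comparisons can badly mislead.
```

Looking at the full distribution (histograms, percentiles, or raw trace data) rather than a single average is what separates the slow-I/O case from the CPU-starvation case.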
I’ll be at OpenWorld again this year, and have a couple of speaking slots…
I’ll talk about new features for developers in 12c on Sunday, Sep 28, at 2:30, Moscone South 303,
and of course I’ll be at the awesome OakTable World…
Drop in, learn some cool things, or just pop up and say Hello!
Just got the invitation to the first AZORA (Arizona Oracle user group) meeting on October 23. Here is the link: url
It’s 2 pm at Oracle’s office, 2355 E Camelback Rd Ste 950, Phoenix, AZ.
I’m looking forward to it!
Catchy title, don’t you think? My session has been moved to Monday 4 P.M., in direct conflict with Tom Kyte – and Keith Laker, who asked me to present in the first place. Avoid the lines: come see the MATCH_RECOGNIZE clause push great pre-12c solutions into retirement. As a bonus, be the first person on your block able […]
If you followed our recent postings on the updated Oracle Information Management Reference Architecture, one of the key concepts we talk about is the “data reservoir”. This is a pool of additional data that you can add to your data warehouse, typically stored on Hadoop or NoSQL databases, where you store unstructured, semi-structured or unprocessed structured data in large volume and at low cost. Adding a data reservoir gives you the ability to leverage (more...)
I have an example of paging in some Exadata OS Watcher log files. We got an error in the alert log about high load, but reviewing AWR reports it did not seem like we had an unusual number of active sessions. Also, CPU use seemed low while the system load was high. Reviewing the OS Watcher logs and finding a similar case on Oracle’s support site convinced me that our (more...)
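The signature I was looking for in the OS Watcher data can be sketched as a simple filter: flag samples where pages are moving to or from swap while CPU utilization stays low. The vmstat-style numbers below are hypothetical, just to show the shape of the check:

```python
# Hypothetical vmstat-style samples:
# (run queue, swap-in kB/s, swap-out kB/s, %cpu user)
# High load + low CPU + nonzero si/so is the classic paging signature.
samples = [
    (2,  0,    0,   35),
    (18, 1200, 900, 8),   # load is high, CPU is idle, pages are moving
    (3,  0,    0,   40),
]

def paging_suspects(rows):
    # keep only intervals where the system was actually swapping
    return [r for r in rows if r[1] > 0 or r[2] > 0]

for runq, si, so, cpu in paging_suspects(samples):
    print(f"runq={runq} si={si} so={so} cpu={cpu}% -> likely paging")
```

Processes stuck waiting on paging inflate the load average without burning CPU, which matches the "high load, low CPU" picture in the AWR and OS Watcher data.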
Overview only:
Step 1: complete the prerequisites on the target server (see my step-by-step document)
Step 2: run the adpreclone procedure on the database tier and the application tier
Step 3: copy db, apps, and inst to the target machine, and also change ownership:
- db: database user
- apps, inst: application user
Step 4: take an RMAN backup and copy it to the target machine
Restore the database on the target machine using RMAN
Change the database name
One response to my series on reading execution plans was an email request asking me to clarify what I meant by the “order of operation” of the lines of an execution plan. Looking through the set of articles I’d written I realised that I hadn’t made any sort of formal declaration of what I meant, all I had was a passing reference in the introduction to part 4; so here’s the explanation.
By “order of operation” (more...)
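As a crude approximation (Jonathan’s series covers the exceptions), the order of operation can be pictured as a post-order, first-child-first walk of the plan tree: each operation’s children are reached before the operation itself. The two-table plan below is made up for illustration:

```python
# Toy execution-plan tree: each node is (operation, [children]).
# A depth-first, first-child-first walk reaches leaves before parents,
# which is the informal "order of operation" of the plan lines.
plan = ("HASH JOIN", [
    ("TABLE ACCESS FULL EMP", []),
    ("TABLE ACCESS FULL DEPT", []),
])

def order_of_operation(node, out=None):
    if out is None:
        out = []
    op, children = node
    for child in children:        # visit children first (post-order)
        order_of_operation(child, out)
    out.append(op)                # parent is listed after its children
    return out

print(order_of_operation(plan))
# ['TABLE ACCESS FULL EMP', 'TABLE ACCESS FULL DEPT', 'HASH JOIN']
```

Real plans have plenty of cases where "order started" and "order completed" differ, which is precisely why the term needed a formal definition in the first place.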
Zone Maps are new index-like structures that enable the “pruning” of disk blocks during table accesses by storing the min and max values of selected columns for each “zone” of a table. A zone is simply a range of contiguous blocks within a table. Zone Maps are similar in concept to Exadata storage […]
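A toy sketch of the pruning idea, with an assumed zone size and made-up data (not the actual Oracle implementation):

```python
# Toy zone map: for each zone (a range of contiguous blocks) keep the
# min/max of a column; a predicate can then skip zones that cannot
# possibly contain matching rows.
ZONE_SIZE = 4
data = [5, 7, 6, 8,   40, 42, 41, 39,   90, 95, 91, 99]  # column values by block

zone_map = []
for i in range(0, len(data), ZONE_SIZE):
    zone = data[i:i + ZONE_SIZE]
    zone_map.append((i, min(zone), max(zone)))  # (start block, min, max)

def zones_to_scan(value):
    # keep only zones whose [min, max] range could contain `value`
    return [start for start, lo, hi in zone_map if lo <= value <= hi]

print(zones_to_scan(41))   # [4] -- only the middle zone survives pruning
```

With well-clustered data, a single equality or range predicate can eliminate most zones up front, which is where the big I/O savings come from.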
One piece of advice often given on hardening a database is to run scripts without broadcasting your login data at the same time. According to Arup Nanda in his famous articles on “Project Lockdown”, you have three options for running your scripts without letting everybody in on your password secrets:
- Start your scripts under /nolog and add your login to the SQL script you’re running
- Start SQL*Plus under /nolog and add the login at the beginning (more...)
Not all documentation is created equal. Too much time is spent on formal design documents that are immediately outdated, and too little is spent on writing code comments.
Make sure your process requires and rewards good code comments. And make sure your architecture diagrams are kept up-to-date.
This illustration is from my weekly “Technology That Fits” newsletter – sign up here.
An IDC study found that Delphix:
- Pays for itself in 4.3 months
- Delivers a 461% ROI over 5 years
- Saves $1 million in storage and hardware
- Saves $50 million annually for companies with over 75,000 employees
- Saves $78,500 per year per 85 employees
- Delivers $85,000 in annual IT efficiency per 100 employees
- Achieves a 98.6% storage reduction on a 2.42 TB database footprint