I’ve been doing a series on Analytics here but if you need to group data into ranges, you don’t necessarily need a fancy analytic. The function WIDTH_BUCKET can easily determine histogram groupings within a table.
SQL> SELECT CUST_ID, CUST_LAST_NAME, CUST_CREDIT_LIMIT,
2 WIDTH_BUCKET(CUST_CREDIT_LIMIT, 100, 5000, 10) credit_bucket
3 FROM sh.customers
4 WHERE country_id = 52787
5 ORDER BY CUST_CREDIT_LIMIT;
CUST_ID CUST_LAST_NAME CUST_CREDIT_LIMIT CREDIT_BUCKET
---------- -------------------- ----------------- -------------
44282 Justice 1500 3
50671 Sandoval 1500 3
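One detail worth remembering with WIDTH_BUCKET: values below the low bound land in bucket 0, and values at or above the high bound land in bucket num_buckets + 1. A quick sketch against DUAL (the literal values are just illustrations):

```sql
-- With 10 buckets spanning 100..5000, WIDTH_BUCKET adds two overflow buckets:
-- bucket 0 for values below 100, bucket 11 for values at or above 5000.
SELECT WIDTH_BUCKET(50,   100, 5000, 10) AS below_range,  -- 0
       WIDTH_BUCKET(100,  100, 5000, 10) AS low_edge,     -- 1
       WIDTH_BUCKET(5000, 100, 5000, 10) AS high_edge     -- 11
FROM   dual;
```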
Notice in all of the examples below that the date string does not match the format mask.
When a date conversion fails, Oracle tries some similar formats before giving up. This is actually documented as well here:
Original Format Element   Additional Format Elements to Try if Original Fails
'MM'                      'MON' and 'MONTH'
SQL> select to_date('01JAN2000','ddmmyyyy') from dual;
SQL> select to_date('01JAN1999','ddmmyyyy') from dual;
SQL*Plus does not appear to be bothered by the direction of slashes, i.e. "/" or "\", in its scripts.
Hence as long as you have a "C:\tmp" folder on your Windows drive (or wherever your SQL*Plus binary is installed/running from), you can do:
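For instance, a spool using forward slashes might look like this (the path and file name are just an illustration):

```sql
-- Forward slashes, even for a Windows path: SQL*Plus resolves
-- c:/tmp the same way as C:\tmp on Windows.
spool c:/tmp/report.txt
select sysdate from dual;
spool off
```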
and it will work on Unix and Windows without alteration.
(This works for SQLcl as well, by the way.)
People often think that, because the SAMPLE clause deals with only a subset of the data, it should immediately be a much faster operation. And whilst sometimes this is indeed the case, it is not a guarantee. The optimizer probably gives us a hint (no pun intended) as to how we should interpret the SAMPLE clause.
Let's compare a standard scan of a table with a sampled scan.
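A minimal comparison might look like this (reusing sh.customers from the earlier example; the sample percentage is arbitrary):

```sql
-- Full scan: every row is visited.
SELECT COUNT(*) FROM sh.customers;

-- Sampled scan: roughly 10% of the rows, but note in the execution
-- plan that the optimizer may still cost a scan of the whole segment.
SELECT COUNT(*) FROM sh.customers SAMPLE (10);
```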
A question which may have been asked:
For a one-time backup used to migrate/clone a database, a KEEP backup affects neither the retention policy nor the 'not backed up 1 times' clause.
Imagine the consequences of deleting a one-off backup which includes an archivelog backup, when the main scripts use a 'not backed up 1-2 times' clause.
RMAN> list archivelog all;
using target database control file instead of recovery catalog
List of Archived Log Copies for database with db_unique_name HAWKLAS
Key Thrd (more...)
geekin geekin geekin… it’s always good while it’s fun :) But too much of anything is not good. Lately I was having so much fun that I was hardly finding time to sleep, but now the balance is almost back in place and I think it’s time to finish that post… or maybe it will […]
Here are the two BIWA Summit 2016 presentations I delivered today. The first one is a collection of high level thoughts (and opinions) of mine and the 2nd one is more technical:
NB! If you want to move to the "New World" - and benefit from the awesomeness of Hadoop, without having to re-engineer your existing applications - (more...)
The project was to migrate database from one DC to another.
The decision we made was to perform RMAN KEEP backup so it does not interfere with existing retention policy.
Backup also resides in its own separate directory for easier checksum and transfer.
This is for a 4-node RAC environment, and the backup was taken from node1 at 2016-JAN-21 14:12:10.
RMAN backup script:
ALLOCATE CHANNEL C1 DEVICE TYPE DISK FORMAT '/oracle/FRA/migration_backup/%d_%I_%T_%U_MIGRATION_%s' MAXPIECESIZE 4G MAXOPENFILES 1;
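The surrounding run block might look something like this (the tag and KEEP duration are illustrative, not the original script):

```sql
RUN {
  ALLOCATE CHANNEL C1 DEVICE TYPE DISK
    FORMAT '/oracle/FRA/migration_backup/%d_%I_%T_%U_MIGRATION_%s'
    MAXPIECESIZE 4G MAXOPENFILES 1;
  # KEEP UNTIL TIME exempts this backup from the retention policy
  # and includes the archived logs needed to make it consistent.
  BACKUP DATABASE KEEP UNTIL TIME 'SYSDATE+30' TAG 'MIGRATION';
  RELEASE CHANNEL C1;
}
```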
OK. So the original quote from Spawn is exactly the opposite, but let’s go with it…
A few times in the past I’ve been asked questions and started to give a direct answer, then someone smarter has jumped in and asked the killer question. Why? Quite often it’s easy to answer the initial question, so rather than understand the reason for the question, you just respond and pat yourself on the back. That’s great, but without knowing the context (more...)
About 18 months ago I wrote a post describing the different types of NAND flash known as SLC, MLC and TLC. However, 18 months is a lifetime in the world of technology so now I need to clarify it based on the widespread adoption of a new type of NAND flash. Let me explain…
Recap: 2D Planar NAND
Until recently, most of the flash memory used for data storage was of a form known as 2D Planar NAND (more...)
"But I suppose there's a lot to see everywhere, if only you keep your eyes open."
-- Norton Juster, The Phantom Tollbooth
Oracle Support recently updated what I call a ‘blast from the past’; it involves a subpool memory leak in Oracle 184.108.40.206 and shared cursors. Originally submitted in 2010, this issue was updated at the end of last year to reflect the status of ‘Fixed in Product Version 12.1’. There (more...)
Here’s a convenient enhancement for tracing that came up on Twitter a few days ago – first in a tweet that I retweeted, then in a question from Christian Antognini based on this bit of the 12c Oracle documentation (opens in separate tab). The question was – does it work for you?
The new description for max_dump_file_size says that for large enough values Oracle will split the file into multiple chunks of a few megabytes, (more...)
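The parameter in question can be set at session level to try this out (the size value here is arbitrary):

```sql
-- With a large enough numeric limit, 12c may split the trace output
-- into multiple chunks rather than one huge file.
ALTER SESSION SET max_dump_file_size = '100M';
ALTER SESSION SET sql_trace = TRUE;
```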
After upgrading to SQL Developer 220.127.116.11, a number of users noticed that some connections in the left hand pane were greyed out. I had the same experience myself. There did not seem to be any pattern as to why specific connections were chosen and not others. Once greyed out, they remained greyed out permanently – the selection did not change.
I have not been able to find out an answer as (more...)
Tomorrow is Oracle Midlands Event #13.
Franck is a super-smart guy, so please show your support and start the year as you mean to go on!
Oracle Midlands : Event #13 was first posted on January 25, 2016 at 11:41 pm.
©2012 "The ORACLE-BASE Blog
". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty (more...)
This is the second blogpost on using PL/SQL inside SQL. If you landed on this page and have not read the first part, click this link and read that first. I got some reactions to the first article, one of which was: what does this look like with ‘pragma udf’ in the function?
Pragma udf is a way to speed up PL/SQL functions called from SQL (UDF stands for user-defined function), available starting from version 12. If you want (more...)
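For reference, a pragma udf function is declared like this (the function body is a trivial stand-in, not the one from the article):

```sql
-- PRAGMA UDF tells the compiler this function will mainly be called
-- from SQL, allowing a cheaper SQL<->PL/SQL context switch (12c+).
CREATE OR REPLACE FUNCTION add_one(p_in NUMBER)
RETURN NUMBER IS
  PRAGMA UDF;
BEGIN
  RETURN p_in + 1;
END;
/

SELECT add_one(41) FROM dual;
```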
How do you find out what is happening right now inside an Oracle database instance when you only have SQL-level access to it, and no nice GUI management tool? I'll cover how to start doing this with some SQL query examples.
A user is complaining that the application is running slowly, or a report someone scheduled is taking too long to finish - where do you look to find out what is going (more...)
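A natural starting point, assuming you have access to the V$ views, is to look at the active sessions and what they are currently waiting on:

```sql
-- Who is active right now, what SQL are they running,
-- and what are they waiting for?
SELECT sid, serial#, username, sql_id, event, seconds_in_wait
FROM   v$session
WHERE  status = 'ACTIVE'
AND    username IS NOT NULL;
```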
I was going to do a follow-up post to my post on USERENV(‘COMMITSCN’) just to describe the slightly better known but still overlooked (and possibly more useful) ORA_ROWSCN – but I don’t need to!
Neil Chandler has written this excellent post on it.
Go and look at that post, it has all the information and detail you were actually looking for.
However, for any of you search-engine-seekers who can’t face the pain of following (more...)
This post is about Linux perf and uprobes for tracing and profiling Oracle workloads for advanced troubleshooting.
Context
The recent progress and maturity of some of the Linux dynamic tracing tools have raised interest in applying these techniques to Oracle troubleshooting and performance investigations. See Brendan Gregg's web pages for a summary and future developments on dynamic tracing for Linux. Some recent work on applying these tools and techniques to Oracle can be found (more...)
I had a requirement the other day to understand when some data had been changed, but there was no column on the table which showed this.
So how do I find out? Well, I could go off mining redo and lots of other time-consuming and exotic things, but you can use the Oracle pseudocolumn ORA_ROWSCN. This gives the SCN associated with each row. Well, actually it usually doesn’t. It does not show when the (more...)
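The basic usage looks like this (the table names are illustrative; note that without ROWDEPENDENCIES the SCN is tracked per block rather than per row, which is the catch the post goes on to describe):

```sql
-- ORA_ROWSCN exposes the (approximate) SCN of the last change;
-- SCN_TO_TIMESTAMP converts it to a time, if the SCN is recent enough.
SELECT t.*, ORA_ROWSCN,
       SCN_TO_TIMESTAMP(ORA_ROWSCN) AS approx_change_time
FROM   my_table t;

-- Per-row (rather than per-block) SCN tracking must be chosen
-- when the table is created:
CREATE TABLE my_table2 (id NUMBER) ROWDEPENDENCIES;
```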
Here’s one of those odd little tricks that (a) may help in a couple of very special cases and (b) may show up at some future date – or maybe it already does – in the optimizer if it is recognised as a solution to a more popular problem. It’s about an apparent restriction on how the optimizer uses the BITMAP MERGE operation, and to demonstrate a very simple case I’ll start with a data (more...)