The Big Data presentation I gave yesterday is now available for download. In it I define some common features of Big Data use cases, explain why Big Data is such a big deal, and explore its impact on the traditional data warehouse framework.
When most people think about analytics and Hadoop, they tend to think of technologies such as Hive, Pig and Impala as the main tools a data analyst uses. Talk to data analysts and data scientists, though, and they’ll usually tell you that their primary tool for working with Hadoop and big data sources is in fact “R”, the open-source statistical and modelling language “inspired” by SAS but now with its own rich ecosystem, and (more...)
Recently I have been working with Oracle Enterprise Data Quality (EDQ) combined with Oracle Data Integrator (ODI). We will tell you more about exactly what we have been doing in another blog, but for now I would like to revisit our earlier EDQ blogs. I am going to keep to the batch ETL paradigm and not really look at the real-time aspects of Oracle EDQ.
My colleague, Koen Vantomme, blogged about EDQ back in August (more...)
It’s possible to configure either a Full or an Incremental Load in Oracle BIA. If you look at the Informatica version of Oracle BIA, there are a few areas you will have to configure. First you start with the Informatica Mapping. This will be one Mapping. It does not matter whether you run this Mapping […]
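The idea behind the Full-versus-Incremental distinction can be sketched outside of Informatica: an incremental run extracts only rows changed since the last successful ETL date, while a full run takes everything. This is a minimal generic sketch of that pattern; the row shape and the `LAST_UPD` field name are illustrative assumptions, not BIA specifics.

```python
from datetime import datetime

def extract(rows, mode, last_etl_date=None):
    """Generic full-vs-incremental extract pattern (illustrative only)."""
    if mode == "full":
        return list(rows)                          # full load: every row
    # incremental load: only rows touched after the previous successful run
    return [r for r in rows if r["LAST_UPD"] > last_etl_date]

rows = [
    {"ID": 1, "LAST_UPD": datetime(2014, 1, 10)},
    {"ID": 2, "LAST_UPD": datetime(2014, 3, 1)},
]

full = extract(rows, "full")
incr = extract(rows, "incremental", last_etl_date=datetime(2014, 2, 1))
print(len(full), [r["ID"] for r in incr])   # 2 [2]
```

In BIA the same switch is driven by the `$$LAST_EXTRACT_DATE`-style parameters the DAC/load plan maintains, rather than by code like this.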
With the recent release of SQL Developer 4.0.1 there have been some very minor bug fixes for Oracle Data Miner. But there has been one particular enhancement that I wanted to have a look at. This blog post will look at this new feature and how you can use it too. In the previously released version of the Oracle Data Miner tool we had a Graph Node. This is really a new feature (more...)
I recently read a very interesting blog post from Cary Millsap. Everybody who wants to do a good job should find some time to read it. I think his recommendations do not necessarily apply only to the Oracle database. If you want to succeed in the field of Oracle BI, you could follow up the […]
Stewart’s too modest to mention it on the blog himself, but I just wanted to congratulate Stewart Bryson on being awarded Oracle ACE Director status by the Oracle OTN ACE program. Stewart was given the Oracle ACE award a few years ago to recognise past work he’d done for the Oracle BI, DW and ETL community, but this higher ACE Director award recognises the ongoing work he’s since been doing to share his knowledge and (more...)
On April 2nd Cloudera is hosting a webinar featuring Ralph Kimball, who will "describe how Apache Hadoop complements and integrates effectively with the existing enterprise data warehouse. The Hadoop environment's revolutionary architectural advantages open the door to more data and more kinds of data than are possible to analyze with conventional RDBMSs, and additionally offer a whole series of new forms of integrated analysis."
I’m very pleased to announce that the Rittman Mead BI Forum 2014 running in Brighton and Atlanta, May 2014, is now open for registration. Keeping the format as before – a single stream at each event, world-class speakers and expert-level presentations, and a strictly-limited number of attendees – this is the premier Oracle BI tech conference for developers looking for something beyond marketing and beginner-level content.
This year we have a fantastic line-up of speakers (more...)
I recently ran into the situation where the primary mount for a Linux tech account running an OBI install was just way too small to get OBIEE 11.1.1.7.140114 through.
Prerequisite check "CheckSystemSpace" failed.
The details are:
Required amount of space(17499.766MB) is not available.
So with a bit of hacking I got around it by displacing the ./patch_storage directory and forcing opatch to stop doing a file system check (basically no (more...)
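The displacement trick can be sketched as follows: move `.patch_storage` onto a filesystem with room to spare and symlink it back, so OPatch still finds it at the expected path. The sketch below stages the whole thing in a throwaway sandbox; in real life the paths would be your actual `$ORACLE_HOME` and a larger mount, and the directory names here are assumptions for illustration.

```shell
# Throwaway sandbox standing in for a cramped Oracle home (paths are
# illustrative assumptions, not the author's actual layout).
DEMO=$(mktemp -d)
ORACLE_HOME="$DEMO/oracle_home"
BIG_MOUNT="$DEMO/bigdisk"
mkdir -p "$ORACLE_HOME/.patch_storage" "$BIG_MOUNT"

# The displacement itself: move .patch_storage onto the roomy filesystem
# and symlink it back so OPatch still resolves the expected path.
mv "$ORACLE_HOME/.patch_storage" "$BIG_MOUNT/patch_storage"
ln -s "$BIG_MOUNT/patch_storage" "$ORACLE_HOME/.patch_storage"

ls -ld "$ORACLE_HOME/.patch_storage"   # now a symlink into $BIG_MOUNT

# Skipping OPatch's space pre-check is typically done with an extra
# command-line argument, e.g. `opatch apply OPatch.SKIP_VERIFY_SPACE=true`
# -- an undocumented switch, so verify it against your OPatch version first.
```

Note that OPatch writes rollback copies of patched files into `.patch_storage`, which is why it dominates the space requirement during large bundle patches.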
I recently did a podcast with Stewart Bryson (Chief Innovation Officer, Rittman Mead), Kevin McGinley, and Alex Shlepakov (both Oracle Analytics at Accenture).
In the first part of this two part series we cover the following areas:
ODI 12c. What are the advantages? When should you upgrade?
Migration from OWB to ODI 12c. Should you migrate? How and when?
Comparison of ODI to Informatica and other ETL tools.
ETL style vs. ELT style data integration tools.
Recently, my colleague, Pete Carpenter, described a proof of concept we carried out using Amazon Redshift as the data warehouse storage layer in a system capturing data from Oracle E-Business Suite (EBS) using Attunity CloudBeam in conjunction with Oracle Data Integrator (ODI) for specialised ETL processing and Oracle Business Intelligence (OBI) as the reporting tool.
In this blog I will look at Amazon Redshift and how it compares with a more traditional DW approach using, (more...)
In my post yesterday we stepped through the initial set up and staging load for a data warehouse using Amazon Redshift and Attunity, for eventual use with OBIEE and ODI. Now that we have our source data in Redshift, let’s look at transforming it into a star schema using ODI, by initially looking how we set up the connection to Redshift in ODI’s Topology Navigator.
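The heart of the star-schema transform mentioned above is the surrogate-key lookup: each staged fact row's natural key is swapped for the dimension's surrogate key, with a default "unmatched" member when no dimension row exists. This is a generic sketch of that step under invented data and key names; it is not the actual ODI mapping, which would push this join down into Redshift as SQL.

```python
# Hedged sketch of the fact-load surrogate-key lookup (data is invented).
CUSTOMER_DIM = {"C100": 1, "C200": 2}   # natural key -> surrogate key

def load_fact(stage_rows, dim_lookup, unmatched=-1):
    """Swap natural keys for surrogate keys; -1 marks unmatched members."""
    return [
        {"CUSTOMER_SK": dim_lookup.get(r["CUSTOMER_ID"], unmatched),
         "AMOUNT": r["AMOUNT"]}
        for r in stage_rows
    ]

stage = [{"CUSTOMER_ID": "C100", "AMOUNT": 50},
         {"CUSTOMER_ID": "C999", "AMOUNT": 10}]
print(load_fact(stage, CUSTOMER_DIM))
# [{'CUSTOMER_SK': 1, 'AMOUNT': 50}, {'CUSTOMER_SK': -1, 'AMOUNT': 10}]
```

Reserving a default member for unmatched keys (rather than dropping the row) keeps fact totals correct and lets late-arriving dimension rows be reconciled afterwards.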
As I mentioned in yesterday’s post and on a blog (more...)
One of our longstanding Oracle customers recently asked us to put together a proof-of-concept DW system using Amazon Redshift as the data warehouse database, rather than Oracle Database. The main driver for this was the economics of running Redshift in the cloud vs. Oracle on-premise, or using Amazon RDS, and they were also interested in the potential performance benefits of running their data warehouse on a column-store database. They were also interested in trying (more...)
I remember some years back when Rittman Mead first received the OBIEE 11g beta and I tried installing it for the first time. It was a nightmare. If you think the 11.1.1.3 release was challenging, you should have tried working with any one of the betas. It was with these first few releases that Weblogic was injected into my world. Some time around the first month of starting the beta, I recall (more...)
The thing about OBIEE 11g is that there is no single way to get things “up and running”. Over the past year I have performed countless OBIEE installs, and almost every one has been different from the last, due to infrastructure challenges, scale requirements or requirements for high availability and server redundancy.
There is little we can do about infrastructure challenges, typically around policies which are in place across the IT estate and must be adhered to. Scale is (more...)