This is a simple trick that shows how to embed a parameterized BIP report in OBIEE without the need to create new prompts in OBIEE.
1. Edit the dashboard and drag an Embedded Content object into an OBIEE dashboard section.
2. Put the following URL in the Embedded Content Properties
http://<OBIEE presentation server>:<OBIEE presentation server port>/xmlpserver/<BIP report xdo file path starting from shared folders>?&_xmode=2
http://<OBIEE presentation server>:<OBIEE presentation server port>/xmlpserver/Supply+
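The example above is cut off in this excerpt. Purely as a hypothetical illustration of the pattern (the server name, port, and report path below are made up, not taken from the original post), a fully resolved URL might look like:
http://obiee-host.example.com:9704/xmlpserver/Sales/Sales+Summary/Sales+Summary.xdo?&_xmode=2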
Permission issues are one of the key errors when setting up a Hadoop cluster. While debugging one such error, I found the table below on http://hadoop.apache.org/. It's a good scorecard to keep handy.
Permissions for both HDFS and local fileSystem paths
The following table lists various paths on HDFS and local filesystems (on all nodes) and recommended permissions:
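The table itself is not reproduced in this excerpt. As a hedged aside, the commands below show how ownership and mode of the relevant paths can be checked and corrected on a running cluster; the paths, owners, and modes here are placeholders, not the recommended values from the Hadoop table:

# Check ownership and permissions of HDFS paths
hadoop fs -ls /
hadoop fs -ls /tmp /user

# Correct an HDFS path (owner, group, and mode are placeholders)
hadoop fs -chown hdfs:hadoop /some/hdfs/path
hadoop fs -chmod 755 /some/hdfs/path

# Check and correct a local filesystem path on a node (path, owner, and mode are placeholders)
ls -ld /data/1/dfs/dn
chown -R hdfs:hadoop /data/1/dfs/dn
chmod 700 /data/1/dfs/dn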
For a few years now I (and I'm sure you have too) have heard about and followed the various Oracle User Group tours that OTN arranges/facilitates. A tour consists of a number of Oracle User Groups in a region coordinating together to have their conferences organised so that they can get speakers from across the world to come and present.
For most presenters it involves lots of travel. So instead of them doing all that (more...)
You may have wondered why we were quiet over the last couple of weeks? Well, we locked ourselves into the basement and did some research and a couple of projects and PoCs on Hadoop, Big Data, and distributed processing frameworks in general. We were also looking at Clickstream data and Web Analytics solutions. Over the next couple of weeks we will update our website with our new offerings, products, and services. The article below summarises (more...)
The headline articles of Oracle Magazine for September/October 2000 were on e-Business Integration, including a healthy prescription for online retailing, streamlining the pulp and fiber industries, and the health care industry. Plus there were lots and lots of articles and news items all on businesses delivering solutions via the internet.
As this was the Oracle Open World edition (and you see the label on the cover saying Biggest Ever) you can imagine there was a (more...)
After you have installed ORE on your client and server (see my previous blog posts on these), you are now ready to start getting your hands dirty with ORE. During the installation of ORE you set up a test schema in your database to test that you can make a connection. But you will not be using this schema for your ORE work. Typically you will want to use one of your existing schemas that (more...)
I'm back a few days now after an eventful OUG Finland Conference. It was a great 2 days of Oracle techie stuff in one of the best conference locations in the world.
The conference kind of started on the Wednesday. It seemed like most of the speakers from across Europe and USA were getting into Helsinki around lunch time. Heli had arranged to be a tour guide for the afternoon and took us to see (more...)
In ORE there are a number of ways to get your R scripts to run in parallel in the database. One way is to enable the Parallel option in ORE, and this is what will be shown in this post. There are other methods of running various ORE commands/scripts in parallel. With these, the scripts are divided out and several parallel R processes are started on the server.
But what if you want to use the database (more...)
In previous posts I gave the steps required to install Oracle R Enterprise on your Database server and your client machine.
One of the steps that I gave was the initial set of database privileges that need to be granted to the RQUSER. The RQUSER is a little bit like the SCOTT/TIGER schema in the Oracle Database. Setting up the RQUSER as part of the installation process allows you to test that you can (more...)
There are many different ways for you to connect to a database using R. You can set up an RODBC connection, use RJDBC, use Oracle R Enterprise (ORE), etc. But if you are an Oracle user you will want to be able to connect to your Oracle databases and access the data as quickly as possible.
The problem with RODBC and RJDBC connections is that they are really designed to process small amounts (more...)
Hi Fellow Big Data Admirers,
With big data and analytics playing an influential role in helping organizations achieve a competitive advantage, IT managers are advised not to deploy big data in silos but instead to take a holistic approach toward it and define a base reference architecture even before contemplating positioning the necessary tools.
My latest print media article (5th in the series) for CIO magazine (ITNEXT) talks extensively about the need for a reference architecture in (more...)
The headline articles of Oracle Magazine for July/August 2000 were on business intelligence, architectures for BI, and how companies like Netflix, drugstore.com, and health insurance companies are using BI to better understand their customers.
Other articles included:
- Tom Kyte has an article on Back to Basics for DBAs to ensure robust performance and scalability. He looks at sizing and some of the different aspects involved in this, and some of the hot backup methods (more...)
As the Hive metastore is becoming the center of the nervous system for different SQL engines such as Shark and Impala, it is getting equally difficult to distinguish the type of table created in the Hive metastore. For example, if we create an Impala table using the Impala shell, you will see the same table at the Hive prompt, and vice versa. See the example below.
Step 1: “Create Table” in the Impala Shell and “Show Table” (more...)
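The post is truncated at this point, so the following is only a rough sketch of what the two steps might look like (the impalad host name and table name are invented):

# Step 1: create a table from the Impala shell
impala-shell -i impalad-host -q "CREATE TABLE demo_tbl (id INT, name STRING)"

# Step 2: the same table shows up at the Hive prompt, because both engines share the metastore
hive -e "SHOW TABLES"
hive -e "DESCRIBE FORMATTED demo_tbl"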
While building a data flow to replace one of the EDW's workflows using the Big Data technology stack, I came across some interesting findings and issues. Due to the UPSERT nature of the data (INSERT new records or UPDATE existing ones, depending), we had to use HBase, but to expose the outbound feed we needed to do some calculation on HBase and publish it to Hive as an external table. Even though conceptually it's easy to create an (more...)
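As a hedged sketch of the "publish to Hive as an external table" step (the table name, column family, and columns below are all invented), the standard Hive HBase storage handler approach looks roughly like this:

# Map an existing HBase table into Hive as an external table (all names are placeholders)
hive -e "
CREATE EXTERNAL TABLE outbound_feed (
  rowkey  STRING,
  amount  DOUBLE,
  status  STRING
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf:amount,cf:status')
TBLPROPERTIES ('hbase.table.name' = 'outbound_feed');
"

Queries against outbound_feed in Hive then read the current contents of the HBase table, so UPSERT-style changes applied in HBase are visible to the downstream feed without reloading.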
Last week, we released Dodeca version 6.7.1, which focuses on some new relational functionality. The major new features in 6.7.1 are:
- Concurrent SQL Query Execution
- Detailed SQL Timed Logging
- Query and Display PDF format
- Ability to Launch Local External Processes from Within Dodeca
Concurrent SQL Query Execution
Dodeca has a built-in SQLPassthroughDataSet object that supports queries to a relational database. The SQLPassthroughDataSet functionality was engineered such that a SQLPassthroughDataSet object (more...)
While looking into an HBase performance issue, one of the suggestions was to have more regions for a larger table. There was some confusion around “Region” vs “RegionServer”. While doing some digging, I found the simple explanation below.
The basic unit of scalability and load balancing in HBase is called a region. Regions are essentially contiguous ranges of rows stored together. They are dynamically split by the system when they become too large. Alternatively, they may (more...)
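The quote is truncated above. As a small, hedged illustration of the "more regions for a larger table" advice (the table name, column family, and split keys are made up), a table can be pre-split into several regions at creation time from the HBase shell:

# Pre-split a new table into four regions at creation time
echo "create 'big_table', 'cf', SPLITS => ['row_25000', 'row_50000', 'row_75000']" | hbase shell

Each split key becomes a region boundary, so the table starts life with four regions spread across the available RegionServers instead of a single region that has to split as it grows.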
Sometimes it can happen that user profiles within a web catalog become corrupted for any number of reasons. In order for these user profiles to be correctly re-initialized, there's more to be done than just dropping /users/JohnDoe from the web catalog.
All in all there are three distinct places which need to be cleaned:
This is really important, since the third place in particular contains the translation between the userid and the effective (more...)
With increasing data volumes, space in HDFS can be a continuing challenge. While running into a space-related issue, the following command came in very handy, hence I thought of sharing it with the extended virtual community.
hadoop dfsadmin -report
After running the command, the result below is produced; it covers all the nodes in the cluster and gives a detailed break-up of space available and space used.
Configured Capacity: 13965170479105 (12.70 TB)
Present Capacity: 4208469598208 (more...)
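The output above is truncated. Once the report shows how much capacity is left, a couple of companion commands (the paths below are only examples) help pin down which directories are actually consuming the space:

# Per-directory space usage, human-readable
hadoop fs -du -h /

# Quota and usage summary for a particular area
hadoop fs -count -q /user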