Oracle Big Data Cloud Service CE: Working with Hive, Spark and Zeppelin 0.7

In my previous post, I mentioned that Oracle Big Data Cloud Service – Compute Edition now comes with Zeppelin 0.7, and that version 0.7 does not have a Hive interpreter. This means we can no longer use “%hive” blocks to run queries against Apache Hive. Instead of “%hive” blocks, we can use the JDBC interpreter (“%jdbc” blocks) or Spark SQL (“%sql” blocks).

The JDBC interpreter lets you create a JDBC connection to any (more...)
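
A minimal sketch of the Spark SQL route, assuming Spark is built with Hive support (as it is on HDP) and using a hypothetical table named sample_logs; this is roughly what a former “%hive” paragraph can do through the Spark interpreters instead:

```python
# Minimal sketch: querying a Hive-backed table through Spark SQL.
# The table name 'sample_logs' is hypothetical; in a Zeppelin note the
# SparkSession is normally pre-created for you as 'spark'.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-via-spark-sql")
         .enableHiveSupport()      # read tables registered in the Hive metastore
         .getOrCreate())

# Roughly what a former %hive paragraph would have done, now via Spark SQL
df = spark.sql("SELECT status, COUNT(*) AS hits FROM sample_logs GROUP BY status")
df.show()
```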

The Key To The Kingdom

I spend quite a bit of my time these days advising Oracle customers on implementing and using HCM Cloud Applications.  Makes sense, as that is the primary role of my group 😉  And I’m often asked about where I get my own information.  While I get some of it through hands-on experimenting and experience, I also have a treasure trove of online information.  And it’s all centralized.  And I’m going to share it…this is "the (more...)

Oracle BDCSCE Upgraded: Zeppelin 0.7 and Spark 2.1

Last week, Oracle Big Data Cloud Service – Compute Edition was upgraded from 17.2.5 to 17.3.1-20. I do not know if the new version is still in the testing phase and available only to trial users, but sooner or later the new version will be available to all Oracle Cloud users.

The new version is still based on HDP 2.4.2, but it contains upgrades to two key components: Zeppelin and (more...)

Introduction to Oracle Big Data Cloud Service – Compute Edition (Part VI) – Hive

I thought I would stop writing about “Oracle Big Data Cloud Service – Compute Edition” after my fifth blog post, but then I noticed that I hadn’t mentioned Apache Hive, another important component of the big data ecosystem. Hive is a data warehouse infrastructure built on top of Hadoop, designed to work with large datasets. Why is it so important? Because it includes support for SQL (SQL:2003 and SQL:2011), and helps users to utilize (more...)
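
As a hedged illustration of that SQL support (the PyHive client, the host name, and the page_views table are assumptions for the example, not details from the cluster), a plain SQL query can be sent to HiveServer2 from Python:

```python
# Hypothetical example: querying Hive from Python via PyHive.
# Host, port, username, and the 'page_views' table are placeholders.
from pyhive import hive

conn = hive.connect(host="bdcsce-master", port=10000, username="zeppelin")
cursor = conn.cursor()

# Standard SQL; Hive runs it as distributed jobs over data stored in HDFS
cursor.execute("""
    SELECT country, COUNT(*) AS views
    FROM page_views
    GROUP BY country
    ORDER BY views DESC
    LIMIT 10
""")
for row in cursor.fetchall():
    print(row)
```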

OAC: Essbase – Incremental Loads / Automation

I recently detailed data load possibilities with the tools provided with Essbase under OAC here. Whilst all very usable, my thoughts turned to systems that I have worked on and how the loads currently work, which led to how you might perform incremental and / or automated loads for OAC Essbase.

A few background points:

  • The OAC front end and EssCS command line tools contain a ‘clear’ option for data, but both are full data (more...)

More Articles Published Elsewhere

So it has been a busy week in terms of seeing articles published that I’ve at least contributed to. It’s funny how the gap between writing and publishing can be several weeks: while we’re already thinking about new things, we see the Twitter pickup for work that is several weeks old.

Anyway, first up was a contribution to Leon Smiers’ blog on integrating chatbots. The latest in a series of excellent blog posts looking at the (more...)

OAC: Essbase – Loading Data

After my initial quick pass through Essbase under OAC here, this post looks at the data loading options available in more detail. As a working example I used the provided sample database ASOSamp.Basic, which first had to be created.

Creating ASOSamp

Under the time-honoured on-prem install of Essbase, the sample applications were available as an install option – supplied data has to be loaded separately, but the applications / cubes themselves are (more...)

Web Services: REST vs SOAP

I was in the middle of a discussion earlier today about service-based integration for Oracle HCM Cloud.  In that conversation, someone asked me why anyone would ever use SOAP over REST.  My answer was pretty lengthy but, when I was done, someone said I ought to share it on my blog. 

Well, here's the thing:  my thinking on the subject is not original.  I simply communicate fundamentals I learned elsewhere.  And I'm big on (more...)

Introduction to Oracle Big Data Cloud Service – Compute Edition (Part V) – Pig

This is the last blog post of my introduction series for Oracle Big Data Cloud Service – Compute Edition. In this blog post, I’ll cover “Apache Pig”. It’s a tool/platform created by “Yahoo!” to analyze large data sets without the complexities of writing a traditional MapReduce program. It’s designed to process any kind of data (structured or unstructured), so it’s a great tool for ETL jobs. Pig comes installed and ready to use with (more...)

clckwrk Design Blueprint: Oracle e-Business Suite on AWS

Moving to the Cloud Made Simple: A Framework for Oracle e-Business Suite in AWS 

We wouldn’t blame you if you’re not 100% ready to jump into the Cloud, just because that’s what everyone else seems to be doing. Making the right decision for your business means taking a careful look at what exactly the benefits are going to be. How will your systems work in the Cloud? Can Cloud providers like AWS actually meet your (more...)

OGh Tech Experience 2017 – recap

On June 15th and 16th, 2017, the very first OGh Tech Experience was held. This 2-day conference was a new combination of the DBA Days and the Fusion Middleware Tech Experience that were held in previous years. To summarize: OGh hit the bullseye. It was two days packed with excellent in-depth technical sessions, good customer experiences and great networking opportunities.

The venue was well chosen. De Rijtuigenloods in Amersfoort is a former maintenance building of the Dutch (more...)

Introduction to Oracle Big Data Cloud Service (Part IV) – Zeppelin

This is my fourth blog post about Oracle Big Data Cloud Service. In my previous blog posts, I showed how we can create a big data cloud service on Oracle Cloud, which services are installed by default, and the Ambari management service; now it’s time to write about how we can work with data using Apache Zeppelin. Apache Zeppelin is a web-based notebook that enables interactive data analytics. Zeppelin is not the only way to work (more...)

Introduction to Oracle Big Data Cloud Service (Part III) – Ambari

This is the third blog post about Oracle Big Data Cloud Service. I continue to guide you through the Big Data Cloud Service and its components. In this blog post, I will introduce Ambari – the management service of our Hadoop cluster.

Apache Ambari simplifies provisioning, managing, and monitoring Apache Hadoop clusters. It’s the default management tool of the Hortonworks Data Platform, but it can be used independently of Hortonworks. After you create your big (more...)
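
As a rough sketch of what that management layer exposes (the host, port, and credentials below are placeholders), Ambari also has a REST API that the web UI itself uses; listing the clusters it manages looks roughly like this:

```python
# Hypothetical example: listing clusters through Ambari's REST API.
# Host, port, and credentials are placeholders; adjust for your deployment.
import requests

AMBARI_URL = "http://ambari-host:8080/api/v1/clusters"

resp = requests.get(
    AMBARI_URL,
    auth=("admin", "admin"),               # default credentials, usually changed
    headers={"X-Requested-By": "ambari"},
)
resp.raise_for_status()

for item in resp.json().get("items", []):
    print(item["Clusters"]["cluster_name"], item["Clusters"].get("version"))
```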

Notes From Orlando

I thought y'all would appreciate some notes from last week's OHUG conference in Orlando, Florida.  So, in no particular order, my observations...
  • The sessions were pretty evenly divided between Oracle E-Business, PeopleSoft and HCM Cloud.  Right around 1/3 of total sessions for each track.
  • The mix of attendees, from what I could tell, was about half HCM Cloud users and half on either EBS or PeopleSoft.  And out of those on EBS or PeopleSoft, about (more...)

Introduction to Oracle Big Data Cloud Service – Compute Edition (Part II)

In my previous post, I gave a list of the services installed on an Oracle Big Data Cloud Service instance when you select “full” as the deployment profile. In this post, I’ll explain these services and software.

HDFS: HDFS is a distributed, scalable, and portable file system written in Java for Hadoop. It stores the data, so it is the main component of our cluster. A Hadoop (big data) cluster nominally has a single namenode plus a cluster (more...)
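
To make the namenode's role concrete, here is a small sketch (host, port, and path are placeholders) that lists a directory through WebHDFS, the HTTP interface served by the namenode:

```python
# Hypothetical example: listing an HDFS directory via WebHDFS.
# The namenode host/port and the path are placeholders for illustration.
import requests

NAMENODE = "http://bdcsce-master:50070"   # WebHDFS runs on the namenode
path = "/user/zeppelin"

resp = requests.get(f"{NAMENODE}/webhdfs/v1{path}", params={"op": "LISTSTATUS"})
resp.raise_for_status()

# The namenode serves only metadata; the file blocks themselves live on datanodes.
for entry in resp.json()["FileStatuses"]["FileStatus"]:
    print(entry["type"], entry["pathSuffix"], entry["length"])
```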

Introduction to Oracle Big Data Cloud Service – Compute Edition (Part I)

Over the last few years, Oracle has dedicated itself to cloud computing, and it is in a very tough race with its competitors. In order to stand out in this race, Oracle provides more services day by day. One of the services Oracle offers to the end user is “Oracle Big Data Cloud Service”. I examined this service by creating a trial account, and I decided to write a series of blog posts for those who (more...)

Oracle Database on Hyper-scale Public Cloud – License Questions!

We all know that on January 23, 2017, Oracle updated the Licensing Oracle Software in the Cloud Computing Environment document. The gist of the announcement was “When counting Oracle Processor license requirements in Authorized Cloud Environments, the Oracle Processor Core Factor Table is not applicable”. So, if you are running Oracle Database on an x86 Intel platform on-premises, […]
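
As a rough worked example of why that sentence matters (the 0.5 Intel core factor and the two-vCPUs-per-license cloud counting rule are my reading of the published documents, so verify against the current versions), compare the two ways of counting the same 8-vCPU cloud shape:

```python
# Rough illustration only; verify the current licensing documents before relying on it.
vcpus = 8                       # hyper-threaded vCPUs on a hypothetical cloud shape
physical_cores = vcpus // 2     # two vCPUs usually map onto one physical core

# New policy: the Core Factor Table is not applicable; two vCPUs count as one license
cloud_licenses = vcpus / 2                     # 4.0 processor licenses

# Old-style counting: 0.5 Intel core factor applied to the underlying cores
core_factor_licenses = physical_cores * 0.5    # 2.0 processor licenses

print(cloud_licenses, core_factor_licenses)
```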

OHUG 2017 – What Looks Good To Me

So I’m headed to the OHUG 2017 conference next week.  As it is one of the few conferences I attend anymore, I’m pretty excited about going.  I’m particularly interested in information related to the implementation of Oracle HCM Cloud.  So, in  preparation for the conference, I thought I’d share some events and sessions that look good to me.

First, a few caveats about the following list.   I’m presenting twice myself, so I’m breaking my (more...)

How to install Oracle Database Cloud Backup Module

I had the joy of using the Oracle Database Cloud backup module a month ago. Needless to say, things have changed since last year and some of the blog posts I found were no longer relevant.

I used the database cloud backup module to take a backup of a local database and restore it in the cloud.

A good start would be this link to the documentation.

The installation is really simple: download the installation (more...)

Oracle EBS User Experience Makeover Contest

As is our tradition, when Spring begins and rebirth is in the air, we like to offer our customers a renewal of their own. This year, we thought it was time to give some love to our Oracle E-Business Suite (EBS) customers, as we’ve had some incredible success this year modernizing EBS applications. You can read about our digital transformations in Forbes Magazine here.

We want your pain points!

We are putting it out there to (more...)