We’re running a set of three webcasts together with Oracle on three popular use-cases for big data within an Oracle context – with the first one running tomorrow, November 3rd 2015 15:00 – 16:00 GMT / 16:00 – 17:00 CET on extending the data warehouse using Hadoop and NoSQL technologies.
The sessions are running over three weeks this month and look at ways we’re seeing Rittman Mead use big data technologies to extend the capabilities of their (more...)
In yesterday’s part one of our three-part Oracle Openworld 2015 round-up, we looked at the launch of OBIEE12c just before Openworld itself, and the new Data Visualisation Cloud Service that Thomas Kurian demo’d in his mid-week keynote. In part two we’ll look at what happened around data integration both on-premise and in the cloud, along with big data – and as you’ll see, they’re two topics that are very much linked this year.
One of the defining features of “Big Data” from a technologist’s point of view is the sheer number of tools and permutations at one’s disposal. Do you go Flume or Logstash? Avro or Thrift? Pig or Spark? Foo or Bar? (I made that last one up). This wealth of choice is wonderful because it means we can choose the right tool for the right job each time.
Of course, we need to establish that we have (more...)
Since I wrote my whitebook about Oracle NoSQL, some things have changed. I also found out some things were not quite clear. One of those confusing things is that in the examples I call my host “weblogic” (the host I used in my demo), but when kvstore runs locally, it can be replaced by “localhost”.
I went through the examples with NoSQL 3.4 and Oracle JDK 1.8, and that resulted in the following steps for a (more...)
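To illustrate the “localhost” point above, here is a minimal sketch of starting a local KVLite instance and checking it is up, using the standard utilities shipped in kvstore.jar. The paths, port and store name are assumptions for illustration; adjust them to your own installation.

```shell
# Start a single-node KVLite store bound to localhost (assumed KVHOME and defaults)
java -jar $KVHOME/lib/kvstore.jar kvlite -host localhost -port 5000 -store kvstore

# In another terminal: ping the store to verify it answers on localhost
java -jar $KVHOME/lib/kvstore.jar ping -host localhost -port 5000
```

Once KVLite responds on localhost:5000, the “weblogic” host name in the whitebook examples can simply be replaced by “localhost”.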
Gluent – where I’m a cofounder & CEO – is hiring awesome developers and (big data) infrastructure specialists in US and UK!
We are still in stealth mode, so won’t be detailing publicly what exactly we are doing ;-)
However, it is evident that modern data platforms (for example Hadoop), with their scalability, affordability-at-scale and freedom to use many different processing engines on open data formats, are turning enterprise IT upside down.
“You’ve been running! Take a selfie, see how exercise changes you!” I smile when that message pops into the notifications list on my Android smartphone when using the Basis Peak. All part of what endears me to using it even more to track my activity and sleep patterns.
The “smile-o-meter” approach of the Basis Peak Photo Finish feature is a great mix of the analog and digital, leveraging well-familiar smart phone functionality to (more...)
Cloudera welcomes InfoCaptor as a certified partner for data analytics and visualization. InfoCaptor delivers self-service BI and analytics to data analysts and business users in enterprise organizations, enabling more users to mine and search for data that uncovers valuable business insights and maximizes value from an enterprise data hub.
Rudrasoft, the software company that specializes in data analytics dashboard solutions, announced today that it has released an updated version of its popular InfoCaptor software, which (more...)
Over the past couple of years we have had a lot of information about Big Data presented to us. But one thing that still stands out is that there remains a fair bit of confusion about what Big Data actually is. Depending on who you are talking to, you will get a different definition and interpretation of what Big Data is and what you can do with it.
Spending a bit more time with Apache Phoenix after my previous post, I realised that you can use it to query existing HBase tables. That is, NOT tables created using Apache Phoenix, but tables created natively in HBase, the columnar NoSQL database in Hadoop.
I think this is cool as it gives you the ability to use SQL on an HBase table.
To test this, let's say you log in to HBase and you create an HBase table like (more...)
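As a hedged sketch of what such a test might look like (the table and column names here are illustrative, not from the original post): you create and populate a table in the hbase shell, then map it in Phoenix with a view. Phoenix folds unquoted identifiers to upper case, so the native HBase names must be double-quoted, and columns on a pre-existing HBase table are typically mapped as VARCHAR.

```
-- In the hbase shell: create and populate a native HBase table
create 'sales', 'cf'
put 'sales', 'row1', 'cf:amount', '100'

-- In Phoenix (sqlline.py): map the existing table with a view, then query it with SQL
CREATE VIEW "sales" (pk VARCHAR PRIMARY KEY, "cf"."amount" VARCHAR);
SELECT * FROM "sales";
```

Using CREATE VIEW rather than CREATE TABLE leaves the underlying HBase table untouched if the view is later dropped.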
Phoenix is a bit different, and a bit closer to my heart too. As I read the documentation on Apache Phoenix, the words 'algebra' and 'relational algebra' came up a few times, and that can mean only one thing: SQL! The use of (more...)
It explains what we see coming, at a high level, from long-time Oracle database professionals’ viewpoint and using database terminology (as the E4 audience is all Oracle users like us).
However, this change is not really about Oracle database world, it’s about a much wider shift in enterprise computing: modern Hadoop data lakes and clouds are here to stay. They are already taking over many workloads traditionally executed on (more...)
In this post I will share my experience with an Apache Hadoop component called Hive which enables you to do SQL on an Apache Hadoop Big Data cluster.
Being a great fan of SQL and relational databases, this was my opportunity to set up a mechanism where I could transfer some (a lot of) data from a relational database into Hadoop and query it with SQL. Not a very difficult thing to do these days, actually (more...)
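One common way to do this kind of transfer is Apache Sqoop, which imports relational tables into Hadoop and can register them in Hive in one step. The sketch below assumes a hypothetical Oracle source (host, service, schema and table names are placeholders, not from the original post):

```
# Import an Oracle table into HDFS and create a matching Hive table
sqoop import \
  --connect jdbc:oracle:thin:@//dbhost:1521/orcl \
  --username scott -P \
  --table EMP \
  --hive-import \
  -m 1

# Then query it with SQL from the Hive shell
hive> SELECT deptno, COUNT(*) FROM emp GROUP BY deptno;
```

The `--hive-import` flag generates the Hive DDL from the source table's metadata, so no manual CREATE TABLE is needed.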
We will be presenting the Sonra Hadoop Quick Start Appliance at CeBIT next week in Hanover. Meet and greet us in Hall 2, Stand D52 (C58).
At Sonra we understand the difficulties faced by businesses when they begin their Big Data journey. We help you get started in days or weeks and immediately reap the benefits of Big Data. Sonra have packaged optimised Hadoop Supermicro hardware with MapR, the prime Hadoop distribution, and added our (more...)
Join MapR and Sonra for the Hadoop User Group Ireland Meetup on 23 February at 6 pm at the Wayra offices (O2/Three building). You’ll learn more about the MapR distribution for Apache Hadoop through use cases, case studies and an introduction to the benefits of using the MapR platform.
Come by for this content-packed first event ending with the opportunity to socialise over beer and pizza kindly provided by Sonra.
I have been patching engineered systems since the launch of the Exadata V2, and recently I had the opportunity to patch the BDA we have in house. As far as comparisons go, this is where the similarities between patching an Exadata and a Big Data Appliance (BDA) stop.
Our BDA is a so-called starter rack consisting of 6 nodes running a Hadoop cluster; for more information about this, read my First Impressions blog post. On (more...)