In case you missed this webinar, here’s a 1.5-hour holiday video about how Gluent “turbocharges” your databases with the power of Hadoop – all this without rewriting your applications :-)
Also, you can already sign up for the next webinar here:
- GNW06 – Modernizing Enterprise Data Architecture with Gluent, Cloud and Hadoop
- January 17 @ 12:00pm-1:00pm CST
- Register here.
See you soon!
NB! If you want to move to the "New World" - offload (more...)
It’s time to announce the next webinar in the Gluent New World series. This time I will deliver it myself (and let’s have some fun :-)
I used to spend most of my time — blogging and consulting alike — on data warehouse appliances and analytic DBMS. Now I’m barely involved with them. The most obvious reason is that there have been drastic changes in industry structure:
It’s time to announce the 3rd episode of the Gluent New World webinar series! This time Gwen Shapira will talk about Kafka as a key data infrastructure component of a modern enterprise. And I will ask questions from an old database guy’s viewpoint :)
Apache Kafka and Real Time Stream Processing
It’s time to announce the 2nd episode of the Gluent New World webinar series!
The Gluent New World webinar series is about modern data management: architectural trends in enterprise IT and technical fundamentals behind them.
GNW02: SQL-on-Hadoop : A bit of History, Current State-of-the-Art, and Looking towards the Future
- This GNW episode is presented by none other than Mark Rittman, the co-founder & CTO of Rittman Mead and an all-around guru of enterprise BI!
Although we are still in stealth mode (kind-of), due to the overwhelming requests for information, we decided to publish a video about what we do :)
It’s a short 5-minute video, just click on the image below or go straight to http://gluent.com:
And this, by the way, is just the beginning.
Gluent is getting close to 20 people now, with distributed teams in the US and UK – and we are still hiring!
Hi, it took a bit longer than I had planned, but here’s the first Gluent New World webinar recording!
You can also subscribe to our new Vimeo channel here – I will announce the next event with another great speaker soon ;-)
A few comments:
- Slides are here
- I’ll figure out a good way to deal with offline follow-up Q&A later on, after we’ve done a few of these events
If you like this (more...)
There is no question that data is being generated in greater volumes than ever before. Alongside traditional data sets, new sources such as sensors, application logs, IoT devices, and social networks are adding to data growth. Unlike traditional ETL platforms like Informatica, ODI, and DataStage, which are largely proprietary commercial products, the majority of Big ETL platforms are powered by open source.
With many execution engines, customers are always curious about their usage (more...)
Mike Stonebraker and Larry Ellison have numerous things in common. If nothing else:
- They’re both titanic figures in the database industry.
- They both gave me testimonials on the home page of my business website.
- They both have been known to use the present tense when the future tense would be more accurate.
I last wrote about Couchbase in November, 2012, around the time of Couchbase 2.0. One of the many new features I mentioned then was secondary indexing. Ravi Mayuram just checked in to tell me about Couchbase 4.0. One of the important new features he mentioned was what I think he said was Couchbase’s “first version” of secondary indexing. Obviously, I’m confused.
Now that you’re duly warned, let me remind you of aspects of (more...)
In this post I will share my experience with an Apache Hadoop component called Hive which enables you to do SQL on an Apache Hadoop Big Data cluster.
Being a great fan of SQL and relational databases, this was my opportunity to set up a mechanism where I could transfer some (a lot of) data from a relational database into Hadoop and query it with SQL. Not a very difficult thing to do these days, actually (more...)
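As an illustration of the kind of transfer described above, here is a minimal sketch using Apache Sqoop to import a table from a relational database into Hive and then query it with SQL. The connection string, credentials, and table names are hypothetical, not from the original post:

```shell
# Import a relational table into Hive (hypothetical host/db/table names)
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username etl_user -P \
  --table orders \
  --hive-import --hive-table orders

# Query the imported data with SQL from the Hive CLI
hive -e "SELECT order_status, COUNT(*) FROM orders GROUP BY order_status;"
```

This assumes a running Hadoop cluster with Sqoop and Hive installed and a reachable MySQL source; the exact flags vary by Sqoop version and distribution.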
1. There are multiple ways in which analytics is inherently modular. For example:
- Business intelligence tools can reasonably be viewed as application development tools. But the “applications” may be developed one report at a time.
- The point of a predictive modeling exercise may be to develop a single scoring function that is then integrated into a pre-existing operational application.
- Conversely, a recommendation-driven website may be developed a few pages — and hence also a few (more...)
We will be presenting the Sonra Hadoop Quick Start Appliance at CeBIT next week in Hanover. Meet and greet us in Hall 2, Stand D52 (C58).
At Sonra we understand the difficulties faced by businesses when they begin their Big Data journey. We help you get started in days or weeks and immediately reap the benefits of Big Data. Sonra have packaged optimised Hadoop Supermicro hardware with MapR, the prime Hadoop distribution, and added our (more...)
For those of you who missed the event I have posted some pictures below. We have recorded (more...)
In this blog post we look at how we can address a shortcoming in the Hive ALTER TABLE statement using parameters and variables in the Hive CLI (Hive 0.13 was used).
There’s a simple way to query Hive parameter values directly from the CLI. You simply execute SET with the parameter name, without specifying a value to set, and Hive prints the current value:

SET hive.exec.compress.output;
-- prints: hive.exec.compress.output=false
You may use those parameters directly in (more...)
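A minimal sketch of the pattern described above, combining a queried parameter with a hivevar variable in the Hive CLI (the table, partition column, and variable names here are hypothetical, for illustration only):

```sql
-- Print a parameter's current value by executing SET with no value
SET hive.exec.compress.output;

-- Define a variable in the CLI session and reuse it,
-- e.g. inside an ALTER TABLE statement
SET hivevar:part_dt=2016-01-01;
ALTER TABLE web_logs ADD IF NOT EXISTS PARTITION (dt='${hivevar:part_dt}');
```

Variable substitution of this kind works in Hive 0.13's CLI; the same variables can also be passed in from the shell with `hive --hivevar name=value`.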
Join MapR and Sonra for the Hadoop User Group Ireland Meetup on 23 February at 6 pm at the Wayra offices (O2/Three building). You’ll learn more about the MapR distribution for Apache Hadoop through use cases, case studies and an introduction to the benefits of using the MapR platform.
Come by for this content-packed first event ending with the opportunity to socialise over beer and pizza kindly provided by Sonra.
What is (more...)
If you want to upskill and get certified on Hadoop, you can now do so for free, thanks to MapR. Over the next couple of weeks they are rolling out their on-demand Hadoop training courses. The highlight of the first batch of courses is Developing Hadoop Applications on YARN.