Data isn't really respected in businesses. You can see this because, unlike other corporate assets, there is rarely a decent corporate catalog that shows what exists and who has it. In the vast majority of companies more effort and automation goes into tracking laptops than into cataloging and curating information.
Historically we've sort of been able to get away with this
Over six parts I've been on a bit of a journey through what Big Data security is all about:
1. Securing Big Data is about layers
2. Use the power of Big Data to secure Big Data
3. How maths and machine learning help
4. Why it's how you alert that matters
5. Why Information Security is part of Information Governance
6. Classifying Risk and the importance of Meta-Data
The fundamental point here is that
So now that your Information Governance groups consider Information Security to be important, you have to think about how they should classify the risk. Now there are documents out there on some of these that talk about frameworks. British Columbia's government has one, for instance, that talks about High, Medium and Low risk, but for me that really misses the point and oversimplifies the
What does your security team look like today?
Or the IT equivalent, "the folks that say no". The point is that in most companies information security isn't actually considered important. How do I know this? Well, because most IT security teams are the equivalent of nightclub bouncers: they aren't the people who own the club, they aren't as important as the
In the first three parts of this I talked about how Securing Big Data is about layers, then about how you need to use the power of Big Data to secure Big Data, then about how maths and machine learning help to identify what is reasonable and what is anomalous.
The Target Credit Card hack highlights this problem. Alerts were made, lights did flash. The problem was that so many lights flashed and
In the first two parts of this I talked about how Securing Big Data is about layers, and then about how you need to use the power of Big Data to secure Big Data. The next part is "what do you do with all that data?". This is where machine learning and mathematics come in; in other words, it's about how you use Big Data analytics to secure Big Data.
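As a minimal sketch of the idea (not any vendor's actual implementation, and with invented data), a robust statistical baseline can separate "reasonable" behaviour from "anomalous" behaviour. Here a median-absolute-deviation score is used rather than a plain z-score, because the outliers we are hunting would otherwise inflate the standard deviation and hide themselves:

```python
from statistics import median

def robust_anomalies(counts, threshold=3.5):
    """Flag entries whose modified z-score (based on the median absolute
    deviation) exceeds `threshold` -- robust to the very outliers we seek."""
    values = list(counts.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return {}  # no spread at all, nothing stands out
    return {k: v for k, v in counts.items()
            if 0.6745 * abs(v - med) / mad > threshold}

# Illustrative data: daily file-access counts per account.
access_counts = {"alice": 102, "bob": 97, "carol": 110, "svc-batch": 5400}
print(robust_anomalies(access_counts))  # -> {'svc-batch': 5400}
```

In a real deployment the counts would come out of the Big Data platform itself, which is exactly the "use Big Data to secure Big Data" point.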
What you want (more...)
In the first part of Securing Big Data I talked about the two different types of security: the traditional IT and ACL security that needs to be done to match traditional solutions with an RDBMS. That, however, is pretty much where those systems stop in terms of security, which means they don't address the real threats out there, which are to do with cyber attacks and social engineering. An ACL is only
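A toy sketch (all names and resources invented) makes the limitation concrete: an ACL check only asks whether the presented identity is on the list, so it passes for an attacker who has phished a legitimate account just as readily as for the real user:

```python
# Toy ACL: maps a resource to the set of identities allowed to read it.
acl = {"/data/payroll": {"alice", "hr-service"}}

def can_read(identity, resource):
    """A plain ACL check: is this identity on the list for this resource?"""
    return identity in acl.get(resource, set())

# The legitimate user passes the check...
print(can_read("alice", "/data/payroll"))    # True
# ...and so does an attacker presenting alice's phished credentials,
# because the ACL has no notion of *behaviour*, only identity:
attacker_presents = "alice"
print(can_read(attacker_presents, "/data/payroll"))  # True
```

Which is why the layers beyond ACLs, behavioural analytics in particular, matter.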
As Big Data and its technologies such as Hadoop head deeper into the enterprise, so questions around compliance and security rear their heads.
The first interesting point in this is that it shows the approach to security that many of the Silicon Valley companies using Hadoop at scale have taken, namely: pretty little, really. It isn't that protecting information has been seen as a massively
These days Hortonworks, with their IPO, and Cloudera, sitting on $1bn of cash, grab all the headlines. However, the real visionary in the field is someone else: the company that blasted the previous world record in TeraSort; a Hadoop distribution on both Amazon Web Services and the Google Compute Engine; a company that Google is invested in. While their competitors have been in skirmishes with each other, MapR has been quietly working away and innovating.
MapR-FS: Features and (more...)
In the past two years I have met many developers and architects who are working on “big data” projects. This sounds amazing, but quite often the truth is not that amazing.
You believe that you have a big data project?
Do not start with the installation of a Hadoop cluster -- the "how"
Start by talking to business people to understand their problem -- the "why"
Understand the data you must
Last week I attended Oracle OpenWorld 2014, and it was an outstanding event filled with great people, awesome sessions, and a few outstanding notable experiences.
Personally I thought the messaging behind the conference itself wasn’t as amazing and upbeat as OpenWorld 2013, but that’s almost to be expected. Last year there was a ton of buzz around the introduction of Oracle 12c, Big Data was a buzzword that people were totally excited (more...)
I will give a presentation on 24 September at the Jury’s Inn in Dublin on the next generation of Big Data 2.0 tools and architecture.
Over the last two years there have been significant changes and improvements in the various Big Data frameworks. With the release of YARN (Hadoop 2.0), the most popular of these platforms now allows you to run mixed workloads. Gone are the days when Hadoop was only good for (more...)
Both ODI and the Hadoop ecosystem share a common design philosophy: bring the processing to the data rather than the other way around. Sounds logical, doesn't it? Why move terabytes of data around your network if you can process it all in one place? Why invest millions in additional servers and hardware just to transform and process your data?
In the ODI world this approach is known as ELT. ELT is a marketing concept (more...)
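A minimal sketch of the ELT idea (names invented, not ODI's actual API): instead of extracting rows, transforming them on a middle tier, and loading them back, you generate set-based SQL that runs the transformation where the data already lives:

```python
def build_elt_statement(source, target, transform_expr, key):
    """Generate a set-based INSERT..SELECT so the database or Hadoop engine
    performs the transformation in place -- no rows leave the server."""
    return (f"INSERT INTO {target} ({key}, value) "
            f"SELECT {key}, {transform_expr} FROM {source}")

# Illustrative table and column names only.
sql = build_elt_statement("staging.orders", "dw.orders_clean",
                          "UPPER(TRIM(customer_name))", "order_id")
print(sql)
```

The generated statement is what gets shipped to the engine; the orchestration tool never touches the data itself.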
Permission issues are one of the key errors when setting up a Hadoop cluster. While debugging one such error I found the table below on http://hadoop.apache.org/ . It's a good scorecard to keep handy.
Permissions for both HDFS and local filesystem paths
The following table lists various paths on HDFS and local filesystems (on all nodes) and recommended permissions:
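The Apache table itself isn't reproduced in this excerpt, but as a hedged sketch of how such a scorecard can be turned into a check (the paths and modes below are illustrative placeholders, not the official recommendations -- substitute the values from the hadoop.apache.org table for your version), you can compare each path's actual mode against the expected one:

```python
import os
import stat

# Illustrative entries only -- replace with the recommended paths and
# modes from the Apache Hadoop setup documentation for your cluster.
EXPECTED = {
    "/tmp/hadoop-demo/dfs/name": 0o700,
    "/tmp/hadoop-demo/dfs/data": 0o700,
    "/tmp/hadoop-demo/logs":     0o755,
}

def audit_permissions(expected):
    """Return {path: (actual, wanted)} for every existing path whose
    permission bits differ from the recommended mode."""
    problems = {}
    for path, wanted in expected.items():
        if not os.path.exists(path):
            continue  # skip paths not present on this node
        actual = stat.S_IMODE(os.stat(path).st_mode)
        if actual != wanted:
            problems[path] = (oct(actual), oct(wanted))
    return problems

print(audit_permissions(EXPECTED))
```

Run on every node, an empty result means the local paths match the scorecard.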
In every change there are hype machines that overplay it and sages who call doom. Into the Big Data arena steps David Searls to proclaim, in an article over at ZDNet, that Big Data is a myth and simply hype which is set to burst.
But big data, he said, is nothing more than the myth that collecting vast amounts of data can help companies know customers better than those customers even know
My paper on NoSQL and Big Data won the Editor’s Choice award at ODTUG Kscope14. Here are some key points from the paper: The relational camp made serious mistakes that limited the performance and usefulness of the relational model. NoSQL is based on the incorrect premise that tables in the relational model must be mapped to […]
I'm going to state a sacrilegious position for a moment: the quality of data isn't a primary goal in Master Data Management
Now, before the perfectly correct 'Garbage In, Garbage Out' objection, let me explain. Data quality is certainly something that MDM can help with, but it's not actually the primary aim of MDM.
MDM is about enabling collaboration, collaboration is about the cross-reference
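A minimal sketch of what "the cross-reference" means in practice (all system names and IDs invented): each source system keeps its own local key, and MDM maintains the mapping to a single golden record, so systems can discover they are talking about the same entity:

```python
# Cross-reference: (source system, local id) -> golden master id.
xref = {
    ("CRM",     "C-1029"): "MDM-42",
    ("BILLING", "8847"):   "MDM-42",
    ("ERP",     "CUST77"): "MDM-42",
}

def golden_id(system, local_id):
    """Resolve a system-local key to the shared master identifier,
    or None if the record has not been matched yet."""
    return xref.get((system, local_id))

# Two systems collaborate via the cross-reference, not via data quality:
print(golden_id("CRM", "C-1029") == golden_id("BILLING", "8847"))  # True
```

The point of the sketch is that collaboration works as soon as the keys line up, even if attribute quality in the source records is still poor.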
There is a massive amount of IT hype focused on what people see: it's about the agile delivery of interfaces, about reporting, visualisation and interactional models. If you could weigh hype then it is quite clear that 95% of it is about this area. It's why we need development teams working hand-in-hand with the business; it's why animations and visualisation are massively important.
Sqoop, Flume, Pig, ZooKeeper. Do these mean anything to you? If they do then the odds are you are looking at Hadoop. The thing is that while that was cool a few years ago, it really is time to face the fact that HDFS is a commodity, MapReduce is interesting but not feasible for most users, and the real question is how we turn all that raw data in HDFS into something we can actually (more...)
Hi fellow Big Data admirers,
With big data and analytics playing an influential role in helping organizations achieve a competitive advantage, IT managers are advised not to deploy big data in silos but instead to take a holistic approach and define a base reference architecture even before contemplating positioning the necessary tools.
My latest print media article (the 5th in the series) for CIO magazine (ITNEXT) talks extensively about the need for a reference architecture in (more...)