Hadoop 2013 – Part Four: Players

The first three posts in this series talked about performance, projects, and platforms as key themes in what is beginning to feel like a watershed year for Hadoop. All three are reflected in the surprising emergence of a number of new players on the scene, as well as new offerings from existing ones, which I’ll cover in another post. Intel, WANdisco, and Data Delivery Networks recently entered the distribution game, making it clear that the chance to capitalize on potential differentiators (real or perceived) in a hot market is still a powerful magnet. And in a space where much of the IP in the stack is open source, why not go for it? These introductions could all fall under the performance theme as well – all are driven by innovations intended to improve Hadoop speed.

Hadoop 2013 – Part Three: Platforms

In the first two posts in this series, I talked about performance and projects as key themes in Hadoop’s watershed year. As Hadoop moves squarely into the mainstream, organizations making their first move to experiment will have to make a choice of platform. And – arguably for the first time in the early mainstreaming of an information technology wave – that choice is about more than who made the box the software will run on, and the spinning metal platters the bits will be stored on. There are three options, and choosing among them will have dramatically different implications for the budget, for the available capabilities, and for the fortunes of some vendors seeking to carve out a place in the IT landscape with their offerings.

Hadoop 2013 – Part Two: Projects

In Part One of this series, I pointed out that significant attention is being lavished on performance in 2013. In this installment, the topic is projects, which are proliferating precipitously. One of my most frequent client inquiries is “which of these pieces make up Hadoop?” As recently as a year ago, the question was pretty simple for most people: MapReduce, HDFS, maybe Sqoop and even Flume, Hive, Pig, HBase, Lucene/Solr, Oozie, Zookeeper. When I published the Gartner piece How to Choose the Right Apache Hadoop Distribution, that was pretty much it.

Hadoop 2013 – Part One: Performance

It’s no surprise that we’ve been treated to many year-end lists and predictions for Hadoop (and everything else in IT) in 2013. I’ve never been much of a fan of those exercises, but I’ve been asked so often lately that I’ve succumbed. Herewith, the first of a series of posts on what I see as the 4 Ps of Hadoop in the year ahead: performance, projects, platforms and players.

Stack Up Hadoop to Find Its Place in Your Architecture

2013 promises to be a banner year for Apache Hadoop, platform providers, related technologies – and analysts who try to sort it out. I’ve been wrestling with ways to make sense of it for Gartner clients bewildered by a new set of choices, and for them and myself, I’ve built a stack diagram that describes the functional layers of a Hadoop-based model.

2013 Data Resolution: Avoid Architectural Cul-de-Sacs

I had an inquiry today from a client using packaged software for a business system built on a proprietary, non-relational datastore (in this case, an object-oriented DBMS). They have an older version of the product, having “failed” with a recent upgrade attempt.

The client contacted me to ask about ways to integrate this OODBMS-based system with others in their environment. They said the vendor-provided utilities were not very good and were hard to use, and the vendor has given them no confidence that they will improve. The few staff programmers who have learned enough of the internals have already built a number of one-off connections using multiple methods, and are looking for a more generalizable way to create a layer other systems can use when they need data from the underlying database. They expect more such requests, and foresee chaos, challenges hiring and retaining people with the right skills, and cycles of increasing cost and operational complexity.
My reply: “You’re absolutely right.”

For GoodData, SaaS Changes The Channel Model Too

Last time I mentioned GoodData, it was in passing, as I discussed YouCalc and other SaaS BI players. In the ensuing year, many other toes have been dipped into the water. I sat down with GoodData CEO and founder Roman Stanek and Marketing VP Sam Boonin this week to catch up on how it’s all going, and from where they sit, the news looks pretty good. With 40 employees, 25 customers since last November, and a funding round from the likes of Marc Andreessen and Tim O’Reilly, GoodData seems to be off to a GoodStart. And now it has a new initiative to expand its presence: free analytics for other SaaS players.

Oracle Sets Sights on BI Leadership. Has it Picked the Right Target?

Oracle is not first in BI, and it wants to change that – that was the clear message of a well-executed, multi-site “real plus virtual” event at which top executives showed off the result of a multi-year effort to rationalize and integrate a set of leading but overlapping components into a seamless suite. Oracle Business Intelligence Enterprise Edition 11g (OBIEE) deserves the accolades it has already received from the analysts who welcomed its announcement – it makes bold and serious bets on effective centralized metadata administration, data integration/unification and optimized analytic architecture, collaboration, globalization, mobile device support, and a powerful link to action that will be most effective (unsurprisingly) with Oracle’s own business applications. While it misses some pieces – fully integrated in-memory processing, SaaS and cloud support among them – these will be forthcoming, and Oracle is clearly committed to a quicker release cycle now that the thorny internal politics around legacy products seem to be resolved. But its competitive focus may be misdirected: while SAP is still ahead in market share, IBM is the bigger threat in the marketplace.

IBM Shows Broad Mobile Portfolio at Largest Lab

IBM employs 45,000 software engineers worldwide and, like all large firms, has been greatly expanding its overseas contingent, leading some in the US to complain that not enough is being done “back home.” In mid-June, IBM provided an answer with the opening of a new lab facility in the Boston suburb of Littleton, Massachusetts, one of 70 IBM Software Labs around the globe and its largest in North America. It has “more square footage than Boston’s Fenway Park or the TD Garden,” IBM noted, and employs fully 10% of the firm’s software engineers. Since 2003, IBM said, it has acquired 14 Massachusetts-based companies, partnered with more than 100 VC-backed small firms, and built a network of more than 1,600 business partners in New England. This investment was not lost on Deval Patrick, Governor of Massachusetts, who joined IBM SVP and Group Executive Steve Mills for the lab opening and ribbon-cutting ceremony. In a bid to demonstrate the breadth of his portfolio, Mills assembled the heads of several of his software brands to discuss mobility, a primary focus of the Littleton lab.

Sybase SQL Anywhere 12 Extends Mobile Leadership

In my coverage of SAP’s Sybase acquisition, I noted that SQL Anywhere is a best-kept secret among the more than 20,000 developers who relish its ease of embedding and minimal database administration. Now Sybase is about to release its next version, SQL Anywhere 12, with ambitions to add to the ten million users worldwide it claims for SQL Anywhere-powered applications. Geospatial capabilities, key to mobile applications, will figure prominently.
