Oracle Exadata: Early Signs Promising

Exadata is looking good. In the past few months, I’ve had the chance to talk to several early adopters of Oracle Exadata V2, some in connection with a sponsored white paper Oracle has just published. It’s still early, but I see this product as a milestone, regardless of its commercial success. That is still to be determined, although I wouldn’t bet against it. How it will be affected by Oracle’s execution of the Sun acquisition is another open question, and the recent surprise layoffs, which showed that either the announced expectations were laughably off base or Ellison’s early announcements about hiring plans were less than candid, don’t bode well for the near term. Rob Enderle made some strong and provocative points in his guest post here.

What I am more confident in is the degree to which this product represents the convergence of progress in many key areas of information technology, and the fact that, like Teradata’s new product family and IBM’s Smart Analytics Servers, it represents the old-guard RDBMS stalwarts digging in and modernizing their offerings.

Advances and new directions in the components of data-management architecture – CPU, memory, storage, I/O, and of course the database engine itself – have been steady, but until now they have occurred separately. When they are instead integrated effectively, the resulting synergy delivers a major step forward in the raw power of business- and mission-critical applications. The emerging next-generation platform will enable users to keep pace with the ever-increasing demand for more storage (data growth is outpacing the growth in available storage), faster response times, and growing user populations.

Integrating the new capabilities is not straightforward. For example, solid-state drives (SSDs) are increasingly useful as an additional storage tier between main memory and disk that can speed the delivery of data to the CPU. But unless the other parts of the data-management architecture are designed to optimize a storage hierarchy with SSDs in it, many of the performance benefits of SSDs are lost. Gaining the full benefit of advances in every part of the architecture requires a holistic approach to design.
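To make that concrete, here is a minimal sketch of a tier-aware read path (in Python, with hypothetical names; this is emphatically not Oracle’s implementation, just the idea). The point: if the engine’s buffer management doesn’t know a flash tier exists, every buffer-cache miss pays the full disk penalty and the SSD investment is largely wasted.

```python
# Illustrative sketch of a tier-aware read path. All names are hypothetical;
# this shows the concept of a storage hierarchy in which the database engine
# knows about an SSD tier sitting between RAM and disk.

class TieredReader:
    def __init__(self, disk, flash_capacity=1024):
        self.ram = {}                  # buffer cache: block_id -> data
        self.flash = {}                # SSD tier: larger but slower than RAM
        self.flash_capacity = flash_capacity
        self.disk = disk               # slowest tier: block_id -> data

    def read(self, block_id):
        if block_id in self.ram:       # fastest path: nanoseconds
            return self.ram[block_id]
        if block_id in self.flash:     # middle tier: microseconds
            data = self.flash[block_id]
        else:
            data = self.disk[block_id]  # slowest path: milliseconds
            if len(self.flash) < self.flash_capacity:
                self.flash[block_id] = data  # promote warm blocks to SSD
        self.ram[block_id] = data      # cache in memory for reuse
        return data

# Usage: an engine unaware of the flash tier would skip the middle branch
# and pay the disk penalty on every buffer-cache miss.
reader = TieredReader(disk={n: f"block-{n}" for n in range(10)})
reader.read(3)   # first read: disk -> flash -> ram
reader.read(3)   # second read: served from ram
```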

We are now beginning to see offerings from all the key players that leverage memory – and their competitors like Netezza are there too, as is Kickfire, with a further step up in hardware assists in its SQL chip. IBM has a long tradition of peripheral hardware innovation – its System z coprocessors for encryption and its hardware-based data deduplication at the storage layer, for example. And Teradata has a new-generation interconnect at the top end as well as memory-based appliances in its portfolio.

What all the Tier One vendors must contend with is that software development is rarely reductive – systems get bigger, more complex, and harder to maintain, and the more they innovate, the harder things get at the margin. Database software today is stuffed with features designed to surmount the obstacles imposed by earlier limits on processor speed, memory availability, and especially the movement of data from physical storage into the layer of memory closest to the processor (I/O). Software designed to exploit the newest architectures can take innovative approaches to processing, unconstrained by many of these limitations, and improve performance. But code lines built for older architectures need to be pruned. The nimble new players have made much of their leanness, even though feature poverty makes them far less broadly usable, by cherry-picking specific use cases the behemoths struggle to optimize for.

Clearly, the future for database engines is a more modular one, with features, functions, and options that may or may not be “lit up” depending on need. As the new generation of products matures, that will drive competition around ease of configuration and deployment, and we already see evidence of it in the appliance approaches increasingly being adopted. In such a battle, owning the hardware will certainly enable Oracle and IBM to bring such thinking to their offerings more quickly than many of their competitors – notably Microsoft, which last week described the process it must undergo to create reference architectures for its Parallel Data Warehouse partners to build on.

Storage and I/O will become an increasing focus, and ownership will make a difference there too. Each Oracle Exadata Storage Server includes 384 GB of flash memory, and a full rack of 14 storage servers has more than 5 TB. The Storage Server coordinates with the database engine, using striping and other techniques to improve performance. Incremental backups execute faster because of fine-grained block change tracking available only on Exadata storage, giving Oracle a leg up on the other storage vendors it now competes with. Expect Oracle to work the interface between database and storage harder and harder as user experience flows in. It already has multiple compression levels and the ability to do some filtering at the storage layer, like Netezza. And Teradata is touting its new “temperature”-sensitive approach to optimizing stored data. Watch this space closely as smart storage gets smarter still and learns to collaborate with DBMSs.
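The filtering idea is worth a concrete illustration. Below is a minimal, generic sketch of predicate pushdown to a storage tier (in Python, with hypothetical names; this shows the concept, not Exadata’s Smart Scan protocol or Netezza’s FPGA filtering). When storage can evaluate a predicate and project columns itself, only qualifying data crosses the interconnect to the database engine, instead of entire blocks.

```python
# Generic illustration of predicate pushdown to a storage tier. Hypothetical
# names throughout; this is the concept, not any vendor's wire protocol.

ROWS = [
    {"order_id": n, "region": "EMEA" if n % 3 else "APAC", "amount": n * 10.0}
    for n in range(1, 1001)
]

def storage_scan(rows, predicate=None, columns=None):
    """What a 'smart' storage cell can do: filter rows and project columns
    locally, so only qualifying data is shipped to the database engine."""
    for row in rows:
        if predicate is None or predicate(row):
            yield {c: row[c] for c in columns} if columns else row

# Dumb storage: ship everything, then filter inside the database engine.
shipped_dumb = list(storage_scan(ROWS))
matches = [r for r in shipped_dumb if r["region"] == "APAC"]

# Smart storage: push the filter (and the projection) down to storage.
shipped_smart = list(
    storage_scan(ROWS, predicate=lambda r: r["region"] == "APAC",
                 columns=["order_id", "amount"])
)

print(len(shipped_dumb), "rows shipped without pushdown")   # 1000
print(len(shipped_smart), "rows shipped with pushdown")     # 333
```

In the sketch, pushdown cuts the shipped rows by roughly two-thirds; the same principle is what makes filtering at the storage layer attractive as data volumes grow.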

My conversations with early Exadata V2 customers while preparing the white paper were fascinating. One surprise was that even before the Sun acquisition, the field teams were much more competent at standing the systems up than the HP teams those customers had encountered when they brought in the V1 product. “It was like night and day,” one told me. “They were knowledgeable and responsive. We had a lot of trouble with the HP hardware and it took longer.”

All indications are that Oracle has built a big pipeline quite rapidly for Exadata. Now it must learn to align hardware delivery and deployment with its software delivery promises. It’s new territory, execution will be critical, and it will take place under intense scrutiny. This will be a test, and the next few quarters will be very instructive. Success will mean that Oracle’s database dominance has years to run – and the odds are good so far, though its market share has stalled of late. There is less room to give on price, which remains a big objection for many customers and prospects. Oracle just bought a big load of low-margin business and may not feel inclined to give its sales force much discounting latitude. Buyers should assess competitive offerings carefully – the price curve is moving down, and thus far Oracle is not moving with it. But negotiate aggressively now – Oracle wants to get off to a fast start.

Disclosures: Oracle is a client of IT Market Strategy

Published by Merv Adrian

Independent information technology market analyst and consultant, 40 years of industry experience, covering software in and around the data management space.
