Why Billion Dollar Red Hat and OpenStack Need to Dance

On February 29, 2012, Red Hat’s fiscal year came to a close, and the company is expected to have crossed an important milestone: becoming the first billion-dollar commercial open source software company.  Whether or not you believe they are the first open source software company to cross this mythical threshold is inconsequential; the fact is Red Hat has done it.  With all my sincerest respect and admiration, I tip my “red hat” to this historic accomplishment.

With all due respect to other Linux distributions such as Canonical (Ubuntu) and SUSE, Red Hat is the de facto standard for Enterprise Linux.  They have a reputation for building a quality product, maintain a stable of certified applications from leading ISVs, field a “Cisco-like” army of certified professionals, and provide long-term support for their products.  Unlike in the early years of their business, Red Hat’s biggest threat does not come from a new operating system challenger à la Microsoft; it comes from virtualization vendors, with all eyes on VMware, Microsoft, Citrix, and Oracle.

The good news is that Red Hat foresaw this threat and in 2008 purchased Qumranet, the company behind the Kernel-based Virtual Machine (KVM).  The bad news is that while VMware grew into a virtualization powerhouse, it took Red Hat until January 2012 to release a real challenger in Red Hat Enterprise Virtualization 3.0 (RHEV 3.0). With this release, the next chapter in Red Hat’s history is unfolding in the Cloud era.

Meanwhile, OpenStack is nothing short of an amazing story of the power of open source and of community.  In July 2010, Rackspace and NASA jointly launched OpenStack, and less than two years later it has the backing of over 150 companies, with names like Dell, AT&T, HP, and Citrix among them.  You’ll also find the likes of Canonical and SUSE, but Red Hat is noticeably absent.  However, is Red Hat really that far away?

Open OpenStack’s Administrator Manual and you will find instructions on “Installing OpenStack Compute on Red Hat Enterprise Linux 6.”  Look closely at the list of contributing companies and you will also find Gluster (one of my favorites), a project since acquired by Red Hat.

In the battle for virtualization supremacy, OpenStack is a vital weapon against the competition.  Sure, Red Hat has Aeolus and Deltacloud, but what would the world look like with a RHEO (Red Hat Enterprise OpenStack) edition?   Wouldn’t such a release accelerate OpenStack’s rise in the Enterprise while opening up a new revenue source for Red Hat?

Before any of this can happen, Red Hat and OpenStack must dance.  Sure, there are reports that Rackspace invited Red Hat to join OpenStack early on, but Red Hat declined, citing the governance model.  However, things have changed: Rackspace is transitioning management of OpenStack to an independent OpenStack Foundation with a defined mission and structure.  Can Red Hat and OpenStack unite under this new model?

Perhaps the final piece of this puzzle will be a Red Hat acquisition within the OpenStack ecosystem.  Without naming names, there are at least two attractive takeover targets that would give Red Hat the development expertise and OpenStack credibility to be a force.

As John Lennon famously wrote and performed in “Imagine”: “You may say I’m a dreamer, but I’m not the only one. I hope some day you’ll join us, and the world will be as one.”

Red Hat and OpenStack, let’s dance!

Originally posted on http://blog.zenoss.com

Floyd’s Law: Open Source vs. Proprietary Software

As the pace of innovation continues to accelerate, it is increasingly difficult for legacy software vendors to keep up.  Professional services organizations are pushed to the brink as they attempt to fill product gaps, only to find themselves further and further behind the innovation curve.  Customer frustration grows as these projects never end, product innovation never comes, and maintenance costs continue to climb.

Open Source, free of the legacy baggage and bureaucracy of its traditional competitors, is the only model that can keep pace with the accelerating rate of change in the industry.  In fact, Open Source is the disruptive force that continues to break down legacy paradigms and offer new solutions.  As commercialization of Open Source is inevitable, the key is remaining true to the principles of open source while providing customers the innovation and value they desperately desire.


Note To Dell: Forget Big Data and Go For Cloud Infrastructure

It seems like five seconds after HP purchased Vertica, the entire world focused on Dell and its big data strategy.  This was further compounded by the fact that Dell blew out its earnings with a $15.7 billion fourth quarter, and Michael Dell suggested that the company would target smaller acquisitions to bolster its server and storage divisions.

Speculation is rising that Dell will purchase Aster Data Systems, a Stanford University start-up backed by Sequoia Capital.  Aster’s nCluster sports a massively parallel processing (MPP) data warehouse with integrated MapReduce, built on commodity hardware.  Whose commodity hardware?  You guessed it: Aster partners with Dell to provide the Aster Data MapReduce DW Appliance.

However innovative and powerful Aster’s solutions are, their rumored valuations are sky high.  According to Gigaom’s article “Cloud Startup Values Are Getting Insane,” published on September 24, 2010, Aster’s valuation is rumored to be “somewhere between $85 and $120 million.”  Furthermore, Aster took issue with Gigaom’s assessment, saying, “The valuation you/GigaOm stated recently is more reflective of the previous B round that closed Q4 2008, and while we don’t disclose the actual valuation of the latest C round it is in fact materially greater than the Series B.” Really?  Let’s get back on track.

Dell is a remarkable turnaround story, predicated on its decision to blaze its own trail in the industry.  Rather than purchase network equipment or security vendors, Dell has been acquiring interesting software companies such as Scalent, Boomi, and InSite One, with a clear focus on the Cloud.  Why change this focus?  When you think Dell, do you think data warehousing? Software?

Dell’s future growth hinges on its Data Center Solutions (DCS) division and Cloud Computing.  Dell has two choices: make a major, market-disrupting acquisition or take some risks by purchasing smaller but highly disruptive software companies.  It’s no secret that I am a proponent of Dell purchasing Rackspace, even in the face of a rising market valuation and the prospect of another bidding war.  Rackspace is that good, and Dell knows it.

Enough; who else should Dell purchase?  There are the obvious, Joyent, and the obscure, Nimbula.  They could lean forward, OnApp, or take a risk, Appistry.  They could choose infrastructure, GoGrid, or go a bit crazy, Marathon Technologies.  They could go services, Appirio, or go international, ElasticHosts. And on, and on, and on…

Regardless of the path Dell chooses, Michael Dell has done an incredible job of changing the course of a $60 billion company. While some have written that Dell is “yesterday’s company,” I’d watch out, as they may just surprise you and the entire industry.

When The Cloud Goes Down

Has everyone seen the “Slip Slidin’ Away” Bridgestone Blizzak television commercial? It brilliantly depicts what it’s like to lose control of an automobile on a snowy or icy road.  Anyone who lives in a winter wonderland, or visits one, can easily relate to that terrifying feeling of helplessness when something you depend on is out of your control.

Imagine a similar feeling of helplessness when your Cloud provider has a major outage that affects your entire business.  Perhaps it’s your email system, Intranet, documents, CRM site, or another key application.  Worse, what if it’s your IaaS provider that houses development or customer-facing systems?

Naturally, you quickly turn to your Cloud provider for help.  Suddenly, your love of instant messaging, web forms, status pages, and social media turns into sheer panic as you realize you’ll never get to speak to a human being.  You are greeted with a message saying, “We’re sorry.  Our services are temporarily down.  Our technicians and engineers are hard at work to resolve the problem. Please check back, or follow us on the myriad social networking outlets we support.  We’ll be updating this site every 15 minutes.  We appreciate your business and your patience.”

Don’t worry; the next 15 minutes will pass faster than a quantum torpedo detonating against a Borg cube, as you’ll spend them fielding calls from people at all levels of the organization.  After hitting the refresh button in your browser, you’re greeted with another message: “We are in the process of restoring our services.  The approximate time to completion is 5 to 6 hours.  We apologize for the outage, but remember, we haven’t had one in 2 years.  We appreciate your business and your patience.”

Far-fetched?   I thought so too, until I had the pleasure of experiencing it firsthand.  In the aftermath, no guaranteed SLAs or credits could make up for the headache I had.  As someone who ponders, evangelizes, analyzes, and designs the next-generation datacenter (the Cloud), this was a firsthand lesson in the importance of continuing to radically rethink how we design, manage, monitor, predict, and recover the Cloud.  In other words, it’s time to stop putting lipstick on the technology and ideas of yesterday and make room for something different and innovative.

Finally, I’ve never really liked the term Cloud.  It implies a simplicity or ease of use that may hold on the front end (users) but masks the complexity on the back end (administrators).  The reality is that nothing is 100 percent, and “even the best laid plans go awry.”  The key is to understand that while technology is awesome, it pales in comparison to the power of being human.
