OCI’s Relevance within an Amazon and OpenStack World

Last week, yet another Cloud initiative began as the Open Cloud Initiative (OCI) launched at OSCON 2011 in Portland, Oregon. The OCI bills itself as a non-profit organization advocating standards-based Open Cloud computing. It hopes to provide a legal framework based on its Open Cloud Principles (OCP) document and apply those principles to Cloud computing products and services.

While conspiracy theorists will call this the “One Cloud” movement, the reality is there is little to worry about.  An OCI without Amazon, Microsoft, Verizon, AT&T, and more isn’t really an assembly of “leaders of Cloud computing” but more of an ideology.  Academics and Open Source aside, there is very little motivation for Cloud providers to work together other than standard connectivity and a few APIs.

The biggest force promoting the OCI's self-proclaimed slogan, "A non-profit advocate of open cloud computing," is actually another truly powerful Open Source movement: OpenStack. As OpenStack adoption continues to increase, it may become the de facto standard for building Clouds. OpenStack is the core platform that allows Enterprises and Service Providers to build value-added software and services, creating new and unique offerings or businesses for their customers.

It is the difference between “talking” and “action”.  While some in this industry like to debate the merits of Cloud computing and interoperability, others are creating and innovating.  I have already mentioned the OpenStack movement and its importance to Cloud computing, and no conversation on this subject would be relevant without talking about Amazon.

Amazon is rapidly innovating within Cloud computing while continuing to disrupt the industry, drop their published prices, and make money.  Instead of getting caught up in all this debate, Amazon is setting their agenda and putting the entire industry on the defensive.  In fact, their rate of innovation is astounding while their rate of adoption is actually accelerating.  What is their motivation to interoperate with other Cloud providers?  As long as they have open and defined APIs into the private clouds (VMware, Microsoft, Xen, KVM) of their Enterprise customers, then they are all set.

Altruistic goals cannot be confused with the capitalistic reality of the world we live in. The OCI may have great intentions, but they have plenty of work to do to make themselves relevant within an Amazon and OpenStack world.


For the Datacenter, Forget E=mc^2: Sav = (MC^4 + AV)Sec

Why do we need Cisco UCS, HP Adaptive Infrastructure, IBM Stratus, Liquid Computing, and more? 


Management is a critical component of any datacenter. A datacenter may be defined as a symphony of hardware and software spanning multiple disciplines that is expected to be "always-on" and never to fail. If you couple this with advances in virtualization, the "green movement", and the need to understand the complete Total Cost of Ownership (TCO) of datacenter operations, then management is the only answer. Management is not intended to replace the human element, but rather to augment it through automation that allows human beings to tame an ever more complex environment.
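To make the TCO point concrete, here is a toy model; the structure, function name, and every figure are hypothetical and purely illustrative of the kind of calculation the paragraph refers to, not anything from an actual deployment.

```python
# A toy Total Cost of Ownership model -- all figures are hypothetical,
# purely to illustrate how capital and recurring costs combine.

def tco(capex, annual_opex, annual_power, years):
    """Up-front capital plus recurring operations and power costs over the horizon."""
    return capex + years * (annual_opex + annual_power)

# Hypothetical small server fleet over a three-year horizon (USD):
print(tco(capex=500_000, annual_opex=120_000, annual_power=80_000, years=3))  # 1100000
```

Even this crude sketch shows why power and operations, not hardware purchase price, tend to dominate the bill over a multi-year horizon.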

Examples of this renewed interest in management are plentiful: HP buys Opsware and Mercury Interactive; BMC buys BladeLogic; Cisco partners with BMC and ships UCS Manager; EMC buys Configuresoft, Voyence, SMARTS, and Infra; and more.

Current (power) usage within the datacenter continues to increase at a staggering rate. In fact, the price of that current may actually outpace the cost of both the IT equipment and the facility itself. It's not simply servers, but routers, switches, WAN acceleration devices, security devices, SANs, NAS, lights, laptops, monitors, and more that cause the bills to continually increase. Couple this with the additional demands of cooling and redundancy and you have a real crisis on your hands.

An example of changes in the industry may be seen in ActivePower's efforts in the areas of power and environmentally friendly "green" solutions. Additionally, we may have been given a glimpse of one answer to this problem, as Google has made a $10 million investment in eSolar, inventors of Utility-Scale Solar Power.

Cabling is an essential ingredient of any datacenter design and one that has the potential to provide significant cost savings in the next-generation datacenter. It started with the blade server revolution, including embedded switches, and may very well end with Cisco's UCS, HP's Adaptive Infrastructure, or IBM's Stratus datacenter initiatives.

Illustrating this point, Cisco has published a case study with Saint Joseph Health System (SJHS) in which the hospital claimed an 85% savings in cabling costs by using the Cisco Nexus equipment.

Current generates heat, heat requires cooling, cooling requires current, and around-and-around we go. In the old days, you simply purchased the appropriate amount of cooling to keep your datacenter at a cool and constant temperature. Today, upwards of 40% of your datacenter energy bill is from cooling. Additionally, we have "green" concerns and use PUE (Power Usage Effectiveness) and DCE (Data Center Efficiency) metrics to calculate how well we are doing and to compare datacenters against one another. Incidentally, chillers, humidifiers, and CRACs (Computer Room Air Conditioners) contribute handsomely to these calculations.
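Both metrics reduce to a simple ratio of facility power to IT power. A minimal sketch (the function names and the sample figures are mine, purely illustrative):

```python
# PUE = total facility power / IT equipment power (closer to 1.0 is better).
# DCE (also written DCiE) is the reciprocal, expressed as an efficiency.

def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness."""
    return total_facility_kw / it_equipment_kw

def dce(total_facility_kw, it_equipment_kw):
    """Data Center Efficiency: the fraction of power that reaches IT gear."""
    return it_equipment_kw / total_facility_kw

# A hypothetical facility drawing 1,000 kW total, of which 500 kW powers IT equipment:
print(pue(1000.0, 500.0))  # 2.0 -- half the draw goes to cooling, lighting, losses
print(dce(1000.0, 500.0))  # 0.5 -- i.e., 50% efficient
```

A PUE of 2.0 was not unusual for older facilities, which is exactly why the chillers, humidifiers, and CRACs "contribute handsomely" to the result.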

A concept called adaptive cooling is a promising approach to the cooling challenge. The premise is that today's equipment manufacturers build systems that are more reliable and designed to "handle the heat." Sensors are used to form baselines and models that, in turn, optimize modern cooling techniques. Yahoo achieved cooling energy savings of 31% by partnering with SynapSense.

Once thought to have endless capacity, datacenters are rapidly running out of it. By capacity, I am referring to everything from floor space to power and cooling to the facilities themselves. This has led to the innovation of the "datacenter in a box" offered by the likes of Sun, Rackable, HP, IBM, and more. These containers allow datacenters to expand rapidly while offering innovative power and cooling options. However, space alone won't solve the capacity issue. That is why the efforts by Cisco, IBM, HP, and others to create a new datacenter fabric that combines massively dense servers, storage, networking, security, and virtualization are so important.

Look no further than Facebook, which has started construction on a custom datacenter of over 140,000 square feet at a cost of $188 million. Note that they are touting the efficiency of this new datacenter, including potential power and cooling cost savings.

As Ronald Reagan famously said, “Mr. Gorbachev, tear down this wall!” so too can we proclaim the tearing down of the walls between the silos within the datacenter.  We no longer can allow storage, networking, servers, security, applications, facilities, and more to operate independently of each other.  By operating as a unified team, the datacenter becomes more agile, proactive, efficient, and better equipped to handle all challenges. 

Examples of this movement can be seen in software vendors (BMC, HP) unifying the management of these disciplines and hardware vendors (Cisco, Juniper, Brocade) integrating the functions into a single chassis.

No equation of savings within the datacenter would be complete without discussing virtualization.  While the ideas of virtualization have been around for years, it’s the application of this technology that has changed the industry forever.  Advances in network, server, application, and storage virtualization impact cost savings across the equation.

Examples include VMware vSphere, Citrix XenServer, Sun xVM, Cisco UCS (Nexus 1000v), and Acadia (the Cisco/EMC joint venture).

Security has and will continue to be a major concern within the datacenter.  The number of attacks and sophistication of these attacks continues to rise.  With the advent of Cloud Computing or shared services running on a common platform, the potential risks of a security breach are enormous.  Additionally, security must span all the disciplines within the data center while taking into account user access/privileges, data (in-motion and at-rest), and more.  Finally, security must continue to evolve while adhering to compliance and regulatory pressures.

Recent activities in this area include Cisco acquiring Rohati, SAIC purchasing CloudShield, the growth of Tufin and AlgoSec, and next generation firewall providers such as Palo Alto Networks.

Oracle buys VirtualIron: Is disruption coming?

Oracle has entered into a definitive agreement to purchase VirtualIron for an undisclosed sum of money. While my initial reaction was to find out what the break-up fee was, I now see the value in this acquisition. Perhaps my angst is centered around VirtualIron’s reputation in the open source community as a consumer of and not a contributor to the Xen Open Source Project.

As I’ve stated before, the gem of Oracle’s acquisition of Sun lies within Sun’s xVM virtualization projects; xVM includes virtualization products for servers, storage, and workstations. In the early days of x86 virtualization, the battle lines were drawn around the hypervisor technology itself. Today, the hypervisor has become a commodity with a plethora of commercial and open source options, and the real battle centers around managing virtualized environments.

To that end, Sun has xVM Ops Center, but it lags behind VMware’s Virtual Infrastructure 3 and Citrix’s XenServer. This is where the VirtualIron acquisition makes perfect sense. VirtualIron’s strategy has revolved around creating software based on the Xen Open Source Project with comparable functionality to VMware at a lower price point. Since Sun’s xVM technology is based on both the Xen Open Source Project and Sun’s Logical Domains (LDOM) technology, VirtualIron plugs a huge hole in Sun’s portfolio.

Oracle has never been a “me too” company, so their challenge will be to elevate VirtualIron’s products to equal footing with VMware in functionality, price, and innovation. Additionally, Oracle needs to create a new strategy to attack VMware’s recently announced vSphere Cloud Computing OS product. Clearly, the pressure is on both Oracle’s engineering and marketing teams to make this transition. If Oracle’s marketing team can create the same type of buzz they achieved with Fusion, then Oracle will redefine the virtualization industry.

Finally, Oracle is not done shopping. To be a player within the virtualization management space you need to be able to offer heterogeneous management. Oracle must be eyeing companies such as Veeam or Embotics. These types of companies would complete the picture and cause major disruption in the industry. The real question is where does Oracle draw the line? Do they want to challenge IBM, BMC, CA, or HP? For what it’s worth, I sure do hope so!

Tired of the Clouds: The other OSS

As the title of this post suggests, I am not talking about open source software; instead I am referring to one-stop shopping. This was a concept that was popular in the IT industry around the mid-1990s, as big managed service providers attempted to sell enterprises everything they needed to run a WAN: connectivity, hardware, and management. The question is: does cloud computing inherently depend on a single vendor providing network, security, storage, servers, and virtualization?

If you believe the hype regarding Cisco’s California Virtualization Server Blades for their Nexus Switch, then the answer may be yes; at least Cisco’s answer is yes. If you believe IBM’s cloud computing demonstration using IBM, Juniper, and Xen, then the answer may be no; or at least no for now.

Take a tour of any modern datacenter and you will see products from the likes of Cisco, F5, Juniper, Emerson, Teradata, EMC, Sun, NetScout (Network General), Brocade, IBM, HP, Force10, and more as well as newcomers such as Woven Systems and Arista. Could all this specialized gear be replaced by a single vendor? Would even Cisco dare to dream so big?

It seems unrealistic that a single vendor could displace all the pieces of the datacenter puzzle; at least in the immediate future. However, I would never say never as HP is amassing a diversified portfolio with a delivery mechanism in EDS. Additionally, both Cisco and IBM are not that far off themselves needing only a few remaining pieces to complete the puzzle.

What do you think?

Tired of the Cloud: Virtualization (Part 2)

Ask any five people what virtualization is and you’ll get five different answers. By the way, ask those same people for the definition of cloud computing and you’ll get back four different answers and one blank stare. Virtualization refers to operating systems, applications, networking, storage, security, I/O, memory, CPU, and more. It encompasses major hardware vendors (Cisco, Juniper, Brocade), chip vendors (IBM, Sun, Intel, and AMD), giant software vendors (Oracle, IBM, Microsoft, VMware, Citrix, Red Hat, Sun, HP, and Symantec), OSS projects (KVM, Xen, and VirtualBox), and start-ups like Xsigo, RingCube, MokaFive, and Pano Logic.


In fact, every time I read the aforementioned list I am tempted to add additional categories and companies to it. Since the epic rise of VMware, we have been experiencing rapid commoditization of the hypervisor. While VMware is the de facto standard of x86 virtualization, you still have choices: Red Hat, Citrix, VirtualIron, Oracle, Microsoft, KVM, IBM, and more. To complicate matters, I have only focused on one area of virtualization, and there are many others left to explore.


One quick note: virtualization is not new to the networking world. Has anyone heard of VLANs (Virtual Local Area Networks)? How about virtual firewalls? Even the vaunted terabit router can now be virtualized across the largest backbones in the world using technology from Cisco and Juniper. My, how far we have come since the early days of networking and the introduction of TCP/IP and Ethernet.


For cloud computing to become a reality, we can’t simply draw a basic architecture with a box labeled virtualization. Cloud computing depends on all aspects of virtualization to be flexible, elastic, and scalable.   Is x86 virtualization the right platform?  Is storage virtualization ready?  Does x86 virtualization introduce as many challenges as it solves?  What role does standardization play within virtualization?  Who owns the hypervisor?  Shouldn’t virtualization be in the chip?


Finally, the wild card in this discussion is consolidation.  What are the ramifications if Cisco buys EMC?  Is Dell the odd server vendor out?  What is IBM planning?  What is HP really building?  How will the application vendors respond; SAP, Oracle, Adobe, Red Hat?  Uncertain economic conditions breed opportunity for M&A activity as well as disruptive start-ups that are in stealth mode.

IBM: Send in the Clouds

Cisco, HP, EMC, and VMware, beware: the father of Autonomic Computing has entered the market. Today, IBM announced the IBM Blue Cloud initiative, with Erich Clementi at the helm reporting directly to Sam Palmisano and Kristof Kloeckner as its CTO.

Instead of entering the market with blogs, rhetoric, and posturing, IBM showcased years of research and commercial deployments to demonstrate “Overflow Cloud”. Overflow Cloud showed how you can provision and transfer data from one cloud to another with the simple drag-and-drop of a mouse. Furthermore, they built this technology without help from either Cisco or VMware, as Overflow Cloud utilizes technology from Juniper Networks and the open source Xen hypervisor.

Does IBM have the research, development, deployment capabilities, and global market presence to make cloud computing a reality?  Will IBM finally unleash autonomic computing to the world?  While some have written that these announcements are a “disappointment”, I believe those comments are out of fear and self-preservation.  Clearly, by creating a division that reports directly to Sam Palmisano, IBM is taking cloud computing very seriously.  Furthermore, no one can question IBM’s research and development credentials in the area of autonomic computing and virtualization.

As Platen has been examining the path to cloud computing, how will IBM answer the technology challenges?  Will Tivoli evolve into the control plane that becomes the foundation for a true data center OS?  As Cisco is pouring millions of dollars into cloud computing, datacenter 3.0, and VMware itself, how will they react to IBM showcasing Juniper and Xen’s technology?

I am excited to welcome IBM to the cloud party and I hope they continue to publish research on both cloud and autonomic computing.

Please stay tuned to Platen as we continue to examine the path toward cloud computing.  

Tired of the Cloud: Virtualization (Part 1)

Contrary to mainstream reports, Cloud computing is not synonymous with virtualization. While Cloud computing and its derivatives are in their infancy, virtualization has been around since the 1960s and was first implemented by IBM 30 years ago to logically partition mainframe computers into virtual machines. With the standardization on the x86 architecture in the 1990s, virtualization moved from the proprietary mainframe to commodity x86 hardware. Unlike the mainframe, x86 hardware was not designed to handle the challenges of virtualization. To overcome these challenges, companies, both commercial and OSS, emerged, with VMware and XenSource (Citrix) being the most widely known.


As a quick aside, once again the pundits have called for the death of the mainframe. However, IBM responded to the challenge by releasing z/OS and, specifically, the z/VM hypervisor for their mainframes. For the first time, enterprises could run z/Linux on mainframes and virtualize thousands of Linux servers on a single mainframe.



While utility, grid, and now cloud computing vendors struggled for widespread acceptance, virtualization found a niche within development and test environments. The ability to rapidly deploy and tear down virtual servers, followed by the promise of server consolidation, caught the eye of both enterprises and cloud vendors. The idea was brilliant: separate the operating system from the application, thereby “cloud enabling” applications without waiting for the application vendors themselves. However, the reality has been quite different.


In August of 2006, Amazon released EC2 (Elastic Compute Cloud) based on the Xen hypervisor. Dismissed by some as a “science project”, EC2 allowed users (individuals, SMBs, large Enterprises, and software companies) to host their applications on a virtual infrastructure. While EC2 has experienced its growing pains, it has become a thought and revenue leader in cloud computing infrastructure. However, Amazon’s cloud is much more than simply the Xen hypervisor running on a server farm.


In the end, virtualization has some of the same challenges as cloud computing: manageability, security, networking, storage, etc. For the virtualization vendors, there is much revenue and market share to be gained by tying these technologies and concepts together. In fact, VMware has boldly released the Virtual Datacenter OS, which proclaims to transform your datacenter into an “internal cloud.” If only it were that easy.


Repeat after me, Cloud Computing is NOT Virtualization.   
