Dave and JP: Cloud Connect Roundup
Cloud Connect, a UBM/TechWeb event, always generates a lot of news of interest to IT pros, developers, and cloud providers. As the third day of meetings is set to kick off in Santa Clara, Calif., that tradition is holding true. Here are the four topics that have already caught my eye.
Telecommunications Companies Show Cloud Aspirations
The telco service providers are thinking out loud about how they'd like to join the ranks of cloud service providers. CenturyLink and Verizon are already there, with their purchases of Savvis and Terremark cloud service suppliers, respectively, last year. More would like to get into the business.
I've wondered why they didn't do so sooner, given their worldwide investments in data centers to serve their large customer bases. Their data centers in some ways don't look so different from the large, "invented exclusively by us" centers built by Google, Yahoo, Amazon, and Microsoft. Still, I pull up short of believing that an AT&T or a Telefonica is about to supply infrastructure as a service that's competitive with Amazon Web Services, GoGrid, or Rackspace.
But what if the telcos took a different tack? Amazon Web Services and other first-generation cloud suppliers were all builders of large-scale data centers making their services available over the Internet. So far, the debate over cloud computing, with only one or two isolated exceptions, has presumed cloud services will be dispensed over the public Internet.
[ Want to learn more about how telcos are viewing their prospects in cloud computing? See Telcos Poised To Disrupt Amazon's Enterprise Cloud. ]
What if the telcos, with their large inventory of sometimes underutilized network capacity, built a common-carrier Ethernet WAN that carried traffic, much like the Internet, over switches and routers--but ones not open to the public? Such a private alternative to the Internet at first blush sounds like it could never compete with free services. But Ralph Santitoro, a founding member of the Metro Ethernet Forum and director of carrier Ethernet market development at Fujitsu, clearly thought there were uses for such a private Ethernet network.
In a talk Monday at the Carrier Forum, a concurrent event at Cloud Connect, he said the Internet is the WAN that delivers cloud services, "but there's surprisingly little attention paid to it by the cloud community." That is to say, the cloud community loves to talk about the services it can invent. The telecom community would rather talk about how the network could make those services better.
That potentially leaves an opening for telcos to re-invent the cloud for the private enterprise in ways that: make network quality of service an option of cloud services; offer stronger guarantees of data security by denying public access (only purchasers of the service are users); and provide the kinds of guarantees on application performance and security associated with a private data center.
As I've tried to point out before in the cases of Terremark and Savvis, a cloud supplier with a telco owner suddenly has the option of becoming part of a chain of data centers that a customer can link together with a point-and-click choice. Such links could give a customer automated backup and recovery in a separate geographical area, and allow cloud services to maintain higher availability by shifting workloads away from a trouble spot.
Cloud services from a global, common-carrier Ethernet network may yet emerge, not so much as a challenge to Amazon Web Services' dominant EC2, but as a completely different way of delivering cloud services.
AWS Backup And Recovery
These thoughts were running through my head at Cloud Connect when I ran across a second bit of information that strikes me as addressing that latter topic in a different way. SunGard, which now has six cloud data centers for providing high availability production environments, is about to sign a pact with Amazon Web Services to provide backup and recovery services to customers in Amazon data centers.
That is, Amazon seems aware--as well it should be after its U.S. East data center outage last April--that customers need a simple option for implementing backup in a geographically separate facility. Amazon so far has offered it in different availability zones within an Amazon data center or complex of centers, but not in a geographically separate part of the country. During the 2011 Easter weekend freeze-up, one of the ways customers avoided being hurt was to shift their virtual machines out of U.S. East. Not everyone had provided for such a maneuver, and in the crunch, they learned the hard way that one availability zone wasn't as isolated from another as they had assumed.
Amazon, for its part, has caught the scent of what those telcos have in mind as they maneuver behind its pioneering back. Amazon hasn't linked its own data centers so that a workload in one can be backed up in another, but it wants disaster recovery and data recovery to be easy to implement. Linking to a SunGard center in Ireland or the United Kingdom would be a natural fit for Amazon's Dublin, Ireland, facility. Likewise, U.S. East isn't far from Philadelphia, where SunGard has another center. Better for Amazon to provide that ease of linking itself than to leave it to the telco suppliers. SunGard will have more than just backup services available; an announcement is coming soon.
Cisco Tackles Cloud Security
On another front, Lew Tucker, CTO of cloud computing at Cisco Systems, gave a revealing talk in his 15-minute slot in the rapid-fire Cloud Connect keynotes Tuesday, but it was just complicated enough that some of its import may have been missed. Enterprises have progressed smartly with both storage and server virtualization, while the network, hardwired into its devices, has tended to lag behind.
Tucker, who was lead on cloud computing at Sun Microsystems before joining Cisco, is in an excellent position to say what's happening on that front. To realize the full benefit of virtualized resources, the network needs to be subdivided into a series of isolated subnets, with one or more subnets serving a single virtual data center, he said in his keynote.
That would mean an additional guarantee of privacy and security for the users of a virtual data center, allowing them to do in the cloud many of the things they do in their private enterprise data center. This can already be done, of course, by establishing a VLAN for each user that needs to tunnel into the cloud center, but that's an expensive and wasteful alternative. Only so many VLANs can be created per data center--the 12-bit VLAN ID of 802.1Q allows at most 4,094--a finite number compared to the demands that can be placed on a modern data center.
Cisco has been working on the problem, said Tucker in an interview after his speech, and its answer takes the form of a contribution to the OpenStack open source project: Quantum. Developers today reach a cloud service by generating a call to its IP address. Instead, they should be able to include a description of networking requirements in an application, and have a software agent on the receiving end, with access to network services, examine the incoming workload and identify the services it specifies. The agent would issue orders for those services through the cloud API, and the services would be activated from there.
Quantum is such a general-purpose frontend for network services. It would be able to look at an application being uploaded to a cloud and recognize the level of network service needed, the firewalls needed to protect it, and the routers needed to carry its messages. It would then direct underlying systems to set up those resources as virtual services. If the application is to be allowed to talk to only one or two others, the virtual routers created for it would enforce those restrictions. If the application needs an Internet gateway, then a virtual server can be designated for the purpose.
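The workflow Tucker describes can be sketched roughly as follows. This is an illustrative assumption, not Quantum's actual API: the endpoint paths, payload shapes, and function names below are hypothetical, meant only to show how a declarative description of an application's network needs could be turned into API calls that create an isolated subnet, a restricted virtual router, and an optional gateway.

```python
# Hypothetical sketch: translate an application's declared networking
# requirements into Quantum-style REST calls. Paths and payloads are
# illustrative assumptions, not the project's real API.

def plan_network_calls(tenant, app_name, peers, needs_gateway):
    """Return the (method, path, body) calls a receiving agent might issue."""
    calls = []
    # One isolated virtual network (subnet) per application.
    calls.append(("POST", f"/tenants/{tenant}/networks",
                  {"network": {"name": f"{app_name}-net"}}))
    # A virtual router restricted to the declared peer applications.
    calls.append(("POST", f"/tenants/{tenant}/routers",
                  {"router": {"name": f"{app_name}-rtr",
                              "allowed_peers": peers}}))
    if needs_gateway:
        # Designate a gateway only when the app must reach the Internet.
        calls.append(("POST", f"/tenants/{tenant}/gateways",
                      {"gateway": {"attach_to": f"{app_name}-net"}}))
    return calls

# A billing app allowed to talk only to a ledger app, with no Internet access:
calls = plan_network_calls("acme", "billing", peers=["ledger"], needs_gateway=False)
```

The point of the sketch is the division of labor: the developer states requirements declaratively, and the agent decides which virtual resources to order through the cloud API.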
In its first-phase implementation, Quantum can supply services "to a simple Layer 2 network segment"--something like what a single router manages on a home network, as Tucker put it. That leaves a lot of limits on what Quantum can do for any given workload, but the baseline functionality is likely to be extended deeper into the network, if the implementer chooses.
That would mean Quantum might start reconfiguring nearby network devices, and obviously there are some limits to how far any network administrator wants a frontend service like Quantum to reach. But since these are virtual devices, they can represent small chunks of powerful physical devices spread across many users, with each user still making network decisions and setting up connections in his own virtual data center, in isolation from other users.
Zynga Settles On Hybrid Cloud
The fourth hot topic that I ran across at Cloud Connect was Zynga's announcement during its earnings call Tuesday that it's brought much of its online game activity, which means millions of users' simultaneous events, back into its zCloud. Zynga at the start of 2011 was known to be a heavy user of Amazon's EC2. That dependence was cited in Zynga's prospectus as it prepared to go public last summer.
John Schappert, chief operating officer, said during the earnings call Tuesday that "nearly 80% of our daily active users were hosted in [Zynga's] zCloud at the end of 2011 compared to just 20% at the beginning of the year."
Zynga will continue its close relationship with Amazon's EC2 cloud. But Zynga is clearly getting more adept at constructing the data center space it needs or leasing it from wholesale builders--it does both--and tying those facilities together into its internally controlled zCloud.
Zynga's CTO of infrastructure, Allan Leinwand, is scheduled to speak Wednesday at Cloud Connect, and he promises to explain more in his talk.
Apache Deltacloud Graduates To Top-Level Project
The Apache Software Foundation's (ASF) Deltacloud interoperability toolkit has graduated from the Apache Incubator to become a Top-Level Project (TLP).
Apache Deltacloud defines a RESTful Web service API for interacting with cloud service providers and resources in those clouds in a unified manner. It also consists of a number of implementations of this API for the most popular cloud computing environments, such as Amazon, Eucalyptus, GoGrid, IBM, Microsoft, OpenStack, Rackspace and more. In addition to the API server, the project also provides client libraries for a wide variety of languages.
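To make the unified-API idea concrete, here is a minimal sketch of how a client might construct requests against a Deltacloud server. The collection names (instances, images) match the project's documented resources; the default host, port, and parameter names used here are assumptions for illustration, and the class itself is hypothetical, not one of Deltacloud's real client libraries.

```python
# Minimal sketch of a Deltacloud-style client. It only builds requests,
# so the same code shape works no matter which provider driver (EC2,
# GoGrid, Rackspace, ...) the server is configured with. The base URL
# and parameter names are illustrative assumptions.
from urllib.parse import urlencode

class DeltacloudClient:
    def __init__(self, base_url="http://localhost:3001/api"):
        self.base_url = base_url

    def list_url(self, collection):
        # e.g. GET /api/instances lists instances in a uniform format
        # regardless of the backing cloud.
        return f"{self.base_url}/{collection}"

    def create_instance_request(self, image_id, hwp_name=None):
        # POST /api/instances launches an instance from an image; the
        # hardware profile abstracts away provider-specific sizing.
        params = {"image_id": image_id}
        if hwp_name:
            params["hwp_id"] = hwp_name
        return ("POST", f"{self.base_url}/instances", urlencode(params))

client = DeltacloudClient()
method, url, body = client.create_instance_request("ami-12345", "m1-small")
```

The abstraction is the point: swapping the provider driver on the server side changes nothing in the client's calls, which is what "interacting with cloud service providers in a unified manner" buys you.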
Deltacloud was initially proposed for development within the ASF in May 2010 by David Lutterkort, chair of the Apache Deltacloud Project Management Committee (PMC) and principal software engineer at Red Hat, and was seeded with code developed by Red Hat. Since then, the project has continued to innovate by expanding both its committer base and the diversity of clouds it can manage.
"Deltacloud delivers an elegant REST API, focused on exposing the differences between cloud services," said Adrian Cole, founder of jclouds.org and CTO of jclouds at CloudSoft. "jclouds and Deltacloud have a strong history together, starting with collaborating on abstraction design in 2009, spiking with our interface to Deltacloud released last year. I'm excited to see Deltacloud's graduation, and looking forward to more shared code this year."
Apache Deltacloud software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the project's day-to-day operations, including community development and product releases. Apache Deltacloud source code, documentation, mailing lists, and related resources are available on the project's website.
Curious to see how executable this is; history is littered with examples of interoperability failures. In systems engineering for the auto and aerospace industries, parametric geometry is owned by vendor applications, and the AP233 standard, though it has existed for years, has never been implemented because it isn't in the vendors' collective best interests. In healthcare we are moving toward integrated health information exchanges (HIEs), something like an ATM network for securely transferring medical records when a patient needs them. In similar fashion, the vendors have no monetary incentive outside of government regulation to build them, and thus the state-level HIEs have been small in success and giant in failure. Deltacloud is more than just a standard: it's a series of APIs that clearly identify the differences between the major vendors and help you work between, and move between, them. I'd love to see this work, and to have the healthcare and manufacturing industries learn from it, as technology isn't the limiting factor there (it's opaque interfaces and obfuscated data stores).