
Standards in Cloud Computing Interoperability

Grace Lewis
PUBLISHED IN
Cloud Computing

In 2011, Col. Timothy Hill, director of the Futures Directorate within the Army Intelligence and Security Command, urged industry to take a more open-standards approach to cloud computing. "Interoperability between clouds, as well as the portability of files from one cloud to another, has been a sticking point in general adoption of cloud computing," Hill said during a panel at the AFCEA International 2011 Joint Warfighting Conference. Hill's view has been echoed by many in the cloud computing community, who believe that the absence of interoperability has become a barrier to adoption. This posting reports on recent research exploring the role of standards in cloud computing and offers recommendations for future standardization efforts.

Avoiding Vendor Lock-In

Since the inception of the cloud, organizations have been transferring data and workloads to pooled, configurable computing resources. These resources include networks, servers, storage, applications, and services. One concern voiced by many organizations that use cloud-based services is vendor lock-in, which stems from the inability to move resources from one cloud provider to another.

Users want to have the freedom to move between cloud providers for many reasons. For example, a relationship with a vendor may not be working, service-level agreements may not be met, another provider may offer better prices, or a provider may go out of business. In an environment without common standards, there is little or no freedom to move between vendors.

The cloud computing community has already developed numerous standards (some argue too many) through various forums, standards organizations, and nonprofit organizations, including OpenStack, the Standards Acceleration to Jumpstart Adoption of Cloud Computing, and The Open Group Cloud Computing Work Group, to name a few. One issue explored in my research is whether we should create new standards or simply leverage existing ones.

Some standardization efforts focus on codifying parts of a cloud-computing solution, such as workloads, authentication, and data access. Other standards focus on unifying disparate efforts to work together on a solution. In addition, due to its large market share in this space, the interfaces used by Amazon have emerged as de facto standards.

The technical report describing my research, The Role of Standards in Cloud Computing Interoperability, explains how the degree to which standards can enable interoperability depends on several factors. Key factors include the service model that a cloud provider uses and the level of interoperability that an organization expects. Note that the cloud community typically uses the term interoperability to refer to portability, i.e., the ability to move a system from one platform to another.

Use Cases

Initially, my research identified four typical cloud-computing interoperability use cases that are supported by standards:

  1. Workload migration. A workload that executes in one cloud provider can be uploaded to another cloud provider. Some standardization efforts that support this use case are Amazon Machine Image (AMI), Open Virtualization Format (OVF), and Virtual Hard Disk (VHD).
  2. Data migration. Data that resides in one cloud provider can be moved to another cloud provider. A standardization effort that supports this use case is Cloud Data Management Interface (CDMI). In addition, even though SOAP and REST are not data-specific standards, multiple cloud-storage providers support data- and storage-management interfaces that use SOAP and REST.
  3. User authentication. A user who has established an identity with a cloud provider can use the same identity with another cloud provider. Standardization efforts that support this use case are Amazon Web Services Identity and Access Management (AWS IAM), OAuth, OpenID, and WS-Security (a brief token-request sketch follows this list).
  4. Workload management. Custom tools developed for cloud workload management can be used to manage multiple cloud resources from different vendors. In addition to a management console or command-line tools, most environments also provide APIs based on REST or SOAP.
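To make the user-authentication use case concrete, the sketch below shows a client obtaining a token through the OAuth 2.0 client-credentials flow and then presenting it to a provider's REST API. The endpoints, credentials, and resource paths are placeholders rather than any particular provider's actual API:

    import requests

    # Hypothetical token endpoint and client credentials (placeholders only).
    TOKEN_URL = "https://auth.example-cloud.com/oauth2/token"

    response = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": "my-client-id",
            "client_secret": "my-client-secret",
        },
        timeout=10,
    )
    response.raise_for_status()
    token = response.json()["access_token"]

    # Any provider that accepts standard bearer tokens can be called the same way.
    instances = requests.get(
        "https://api.example-cloud.com/v1/instances",
        headers={"Authorization": "Bearer " + token},
        timeout=10,
    )

Because the token exchange follows a published standard rather than a provider-specific scheme, the same client code can work against any provider that implements the flow.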

We found that workload migration and data migration can benefit the most from standardization. For example, standardization of VM (virtual machine) image file formats would allow organizations to move workloads from one provider to another or from private to public clouds. Standardized APIs for cloud storage would do the same for data.
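For data migration in particular, an abstraction layer over provider-specific storage APIs already hints at what standardization could offer. The sketch below uses Apache Libcloud to copy an object from one provider to another through a single interface; the credentials, container names, and file paths are placeholders, and a real migration would need error handling and larger transfers:

    from libcloud.storage.types import Provider
    from libcloud.storage.providers import get_driver

    # Placeholder credentials for two different storage providers.
    source = get_driver(Provider.S3)("ACCESS_KEY_ID", "SECRET_KEY")
    target = get_driver(Provider.AZURE_BLOBS)("ACCOUNT_NAME", "ACCOUNT_KEY")

    # Download the object from the source provider to a local staging file...
    obj = source.get_object("source-container", "report.csv")
    source.download_object(obj, "/tmp/report.csv", overwrite_existing=True)

    # ...and upload it to the target provider through the same abstract interface.
    container = target.get_container("target-container")
    target.upload_object("/tmp/report.csv", container, "report.csv")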

Provider Service Models

In examining the issue of standardization through the provider lens, we looked at the three main service models:

  • Infrastructure-as-a-service (IaaS). IaaS stands to benefit the most from standardization because its main building blocks are workloads that are represented as VM images and storage units, whether typed data or raw data. This finding ties back to the first two use cases identified earlier, workload migration and data migration.
  • Platform-as-a-service (PaaS). Organizations that buy into PaaS do so for the perceived advantages of the development platform. The platform provides many capabilities out of the box, such as managed application environments, user authentication, data storage, reliable messaging, and other functionality in the form of libraries that can be integrated into applications. Organizations that adopt PaaS are not thinking only of extending their IT resources; they are seeking value-added features (such as libraries and platforms) that can help them develop and deploy applications more quickly.
  • Software-as-a-service (SaaS). SaaS stands to benefit the least from standardization. SaaS is different from IaaS and PaaS in that it represents a licensing agreement for third-party software rather than a different deployment model for existing resources that range from data storage to applications. Organizations that adopt SaaS are acquiring complete software solutions or services that can be integrated into applications.

Organizations select PaaS and SaaS specifically for these value-added features and end up in a commitment similar to what one experiences when purchasing software. Expecting PaaS and SaaS providers to standardize these features would be equivalent to asking an enterprise resource-planning software vendor to standardize all of its features; it's not going to happen because it's not in the vendor's best interest.

Future Research

One challenge among standardization organizations is determining what areas of cloud computing to standardize first. In 2005, researchers from the European Union defined three generations of service-oriented systems. The development of cloud-based systems over time is analogous to this classification of how service-oriented systems have evolved:

  • First generation. The location and negotiation of cloud resources occur at design time. Cloud resources are provisioned and instantiated following the negotiation process.
  • Second generation. The location and negotiation of cloud resources occur at design time. Depending on business needs, however, cloud resources are provisioned either at design time or runtime, and instantiated at runtime. This approach would support, for example, a cloud-bursting strategy in which developers design a system for an average load, but the system can shift load to a cloud provider when it reaches full capacity (a small routing sketch follows this list).
  • Third generation. In the third generation of cloud-based systems, the location, negotiation, provisioning, and instantiation of cloud resources occur at runtime.
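To make the cloud-bursting idea in the second generation concrete, here is a deliberately simplified routing sketch; the capacity threshold, endpoint, and handler functions are hypothetical stand-ins for real infrastructure:

    # Hypothetical cloud-bursting router; thresholds and endpoints are illustrative.
    LOCAL_CAPACITY = 100  # requests per second the on-premises tier is sized for
    BURST_ENDPOINT = "https://burst.example-cloud.com/v1/handle"

    def handle_locally(request):
        return {"handled_by": "on-premises", "request": request}

    def forward_to_cloud(request, endpoint):
        # In a real system this would call capacity provisioned at the cloud
        # provider; here it simply records the routing decision.
        return {"handled_by": endpoint, "request": request}

    def route_request(request, current_load):
        # Average load is handled locally; overflow is sent to the cloud.
        if current_load < LOCAL_CAPACITY:
            return handle_locally(request)
        return forward_to_cloud(request, BURST_ENDPOINT)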

Reaching this third generation of cloud-based systems will most likely be the focus of future research. This work will require cloud consumers, cloud providers, and software vendor groups to work together to define standardized, self-descriptive, machine-readable representations of

  • basic resource characteristics such as size, platform, and API (application programming interface); and
  • more advanced resource characteristics such as pricing, quality-attribute values, negotiation protocols and processes, and billing protocols and processes (a sketch of such a descriptor follows this list).
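As a rough illustration of what a machine-readable representation might look like, the hypothetical descriptor below captures both the basic and the more advanced characteristics listed above. Every field name and value is illustrative and not drawn from any existing standard:

    import json
    from dataclasses import dataclass

    @dataclass
    class CloudResourceDescriptor:
        # Basic characteristics
        resource_type: str            # e.g., "vm" or "object-storage"
        size: str                     # e.g., "2 vCPU / 4 GB RAM"
        platform: str                 # e.g., "linux-x86_64"
        api: str                      # e.g., "REST"
        # More advanced characteristics
        price_per_hour_usd: float
        availability_target: float    # e.g., 0.999
        negotiation_protocol: str
        billing_protocol: str

    descriptor = CloudResourceDescriptor(
        resource_type="vm",
        size="2 vCPU / 4 GB RAM",
        platform="linux-x86_64",
        api="REST",
        price_per_hour_usd=0.10,
        availability_target=0.999,
        negotiation_protocol="ws-agreement",
        billing_protocol="usage-record",
    )

    # A runtime broker could exchange such descriptors with several providers
    # to locate, negotiate, provision, and instantiate resources automatically.
    print(json.dumps(descriptor.__dict__, indent=2))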

For now, standardization efforts should focus on the basic use cases of user authentication, workload migration, data migration, and workload management. Those efforts can then be used as a starting point for the more dynamic use cases of the future.

Recommendations

Even if vendor lock-in is mitigated, it is important for organizations to know that any migration effort comes at a cost, whether it is between cloud providers or between local servers, databases, or applications. Cloud standardization efforts should therefore focus on finding common representations of user identity, workloads (virtual-machine images), cloud-storage APIs, and cloud-management APIs. Vendors influence many standards committees, and it is unrealistic to assume that each of these elements will have a single standard.

Agreement on a small number of standards, however, can reduce migration effort by enabling the creation of transformers, importers, exporters, or abstract APIs. Such an effort could enable the dynamic third generation of cloud-based systems.
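As a small example of the abstract-API approach, the sketch below uses Apache Libcloud to issue the same node-listing call against two different vendors. The providers chosen and the credentials are placeholders, and each real driver may require additional constructor arguments:

    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    # Placeholder credentials; real values come from each provider's console.
    ec2 = get_driver(Provider.EC2)("ACCESS_KEY_ID", "SECRET_KEY", region="us-east-1")
    rackspace = get_driver(Provider.RACKSPACE)("USERNAME", "API_KEY", region="iad")

    # One abstract call works against both vendors' native management APIs.
    for driver in (ec2, rackspace):
        for node in driver.list_nodes():
            print(driver.name, node.name, node.state)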

What are your thoughts on cloud computing interoperability? Do we need new standards? Or can we live with the ones we have? Will we ever get to the third generation? Is it necessary?

Additional Resources

The technical report describing this research, The Role of Standards in Cloud Computing Interoperability, may be downloaded at
https://resources.sei.cmu.edu/library/asset-view.cfm?assetID=28017

To read all of Grace Lewis' posts on her research in cloud computing, please visit
/authors/grace-lewis/
