
An Introduction to DevOps

C. Aaron Cois
PUBLISHED IN
DevSecOps

At Flickr, the video- and photo-sharing website, the live software platform is updated at least 10 times a day. Flickr accomplishes this through an automated testing cycle that includes comprehensive unit testing and integration testing at all levels of the software stack in a realistic staging environment. If the code passes, it is then tagged, released, built, and pushed into production.

This type of lean organization, where software is delivered on a continuous basis, is exactly what the agile founders envisioned when crafting their manifesto: a nimble, streamlined process for developing and deploying software into the hands of users while continuously integrating feedback and new requirements. A key to Flickr's prolific deployment is DevOps, a software development concept that literally and figuratively blends development and operations staff and tools in response to the increasing need for interoperability. This blog post, the first in a series, introduces DevOps and explores its impact, both on our own software development practices at the SEI and on the software community at large.

At the SEI, I oversee a software engineering team that works within CERT's Cyber Security Solutions (CS2) Directorate. Within CS2, our engineers design and implement software solutions that solve challenging problems for federal agencies, law enforcement, defense intelligence organizations, and industry by leveraging cutting-edge academic research and emerging technologies.

The manner in which teams develop software is constantly evolving. A decade ago, most software development environments were siloed: software developers worked in one silo, while the mainframe computers and the IT staff who maintained them sat in another.

The arrival of virtualization marked a technological revolution in the field of software development. Before virtualization, if I needed a new server for my web application, I would have to order the server and wait for it to ship. Then, upon arrival, I would have to rack the server, install and provision the system, and configure networking and access controls, all before I could begin my real development work.

Today, virtualization allows us to create and proliferate virtual machines almost instantly. For example, my developers simply click a button to create a virtual machine, and it appears moments later. This ability to instantaneously generate synthetic computers that run on a shared infrastructure underlies a range of modern technologies, such as Amazon's Elastic Compute Cloud (Amazon EC2), which provides resizable compute capacity in the cloud.

This new immediacy powers a lot of cool technologies, such as the cloud platform OpenStack, Platform-as-a-Service (PaaS) solutions such as Heroku and Microsoft's Windows Azure, and software development tools such as Vagrant, as well as the enterprise infrastructures of most modern companies. At the same time, these technologies enable us to automate more tasks and command larger, more powerful infrastructures, increasing the efficiency of our software development operations.

It Works on My Machine

There's a saying often heard among young developers: "it works on my machine." The saying refers to developers, often early in their careers, who write a piece of code to fix a bug, test it locally on their own machines, and proclaim it fit for deployment. Inevitably, when the code is installed on the customer's system, it breaks because of differences in system configuration. This problem is a canonical example of the types of issues that DevOps can help you avoid.

To mitigate this prolific problem, SEI researchers use Vagrant to define a canonical environment (a set of virtual machines) for each software project, which every developer on the project team replicates locally. These virtual machines are configured to be identical to the machines in our testing, staging, and, ultimately, production clouds. This setup ensures that if the code works on a developer's local machine, it will also work on the production system, whether hosted by us or in a customer's infrastructure.

Moreover, this parity assures developers that code that works on their machine will work on other developers' machines, because everyone uses the same environment for that project. The files that define the configuration of these project environments are small and can be checked into source control along with the software code. Checking configuration files into source control allows the development team to update, share, and version the project environment, along with the code itself, with the assurance of parity throughout the team.

This methodology also provides a far simpler onboarding process when new developers join a project, as their environment setup is reduced to a single "create environment" command. This advanced process, unimaginable a decade ago, offers just one example of the power and precision that DevOps automation brings to software engineering.
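To make this concrete, here is a minimal sketch of what such a checked-in environment definition might look like. The box name, network address, resource sizes, and provisioning script below are hypothetical placeholders for illustration, not the configuration of an actual SEI project.

    # Vagrantfile (illustrative sketch): defines the project's canonical VM.
    # Checked into source control alongside the code so every developer
    # builds an identical environment.
    Vagrant.configure("2") do |config|
      # Base image; a placeholder box, not a specific project image.
      config.vm.box = "ubuntu/trusty64"

      # Predictable hostname and private network address for the VM.
      config.vm.hostname = "project-dev"
      config.vm.network "private_network", ip: "192.168.50.10"

      # Size the VM to match the target environment (illustrative values).
      config.vm.provider "virtualbox" do |vb|
        vb.memory = 2048
        vb.cpus   = 2
      end

      # Provision with the same script used to configure the testing and
      # staging machines, so configuration stays identical everywhere.
      config.vm.provision "shell", path: "provision.sh"
    end

With a file like this in the repository, the "create environment" command is simply vagrant up, which downloads the base box, boots the virtual machine, and runs the provisioning script; vagrant destroy tears the environment down again when it is no longer needed.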

A New Approach for Developing Software

Another innovation that has impacted the manner in which software is developed stresses collaboration between the developers who write the software and the operations team (i.e., the IT group) that maintains an organization's hardware infrastructure. The origin of DevOps can be traced to 2009, when a group of Belgian developers began hosting "DevOps" days that stressed collaboration and interaction between these two groups. Previously, developers and operations staff would work independently until their interests converged, usually in an inefficient and costly struggle to integrate their work products and efforts for the final race to deployment.

DevOps emerged from the realization that infrastructure should support not only the production capability but also the act of development. Ideally, development and operations share one merged environment and set of concepts. For example, if I am writing software in a virtualized environment, I can be assured that the software I develop will deploy seamlessly into that same environment. Integrated DevOps also assures us that the operations team remains involved throughout the software development lifecycle to ensure a smooth, efficient process through transition and deployment. Just as security concerns cannot be ignored at the start of a project and then successfully addressed at the end, neither can deployment and maintenance concerns.

DevOps provides an ideal solution for iterative software development environments, especially those that release software updates frequently, such as Flickr. The initial push for DevOps stemmed from the need to integrate operations into development to make software development more efficient and of higher quality. At the SEI, we are taking that concept and, along with many others in the software industry, pushing it forward toward fully automated DevOps processes.

Automated DevOps

In an article published in the August 2011 edition of Cutter IT Journal, "Why Enterprises Must Adopt Devops to Enable Continuous Delivery," co-authors Jez Humble and Joanne Molesky wrote that the "automation of build, deployment, and testing" is key to achieving low lead times and rapid feedback. The authors also note that with automation, the "configuration and steps required to recreate the correct environment for the current service are stored and maintained in a central location."

Any software organization must be an early adopter of innovation to maintain a competitive edge. As a federally funded research and development center, the SEI must maintain high standards of efficiency, security, and functionality in the systems we develop. Forward-thinking approaches to process, including heavily automated DevOps techniques, allow us to systematically implement, maintain, and monitor these standards for each project we work on.

Looking Ahead

While this post introduced the concept of virtualization and outlined some DevOps practices, future posts in this series will present the following topics:

  • a generalized model for DevOps
  • advanced DevOps automation
  • DevOps system integration
  • continuous integration
  • continuous deployment
  • automated software deployment environment configuration.

We welcome your feedback on this series and on any DevOps topics that would be of interest to you. Please leave your feedback in the comments section below.

Additional Resources

To listen to the podcast, DevOps--Transform Development and Operations for Fast, Secure Deployments, featuring Gene Kim and Julia Allen, please visit
https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=58525.

To view the August 2011 edition of the Cutter IT Journal, which was dedicated to DevOps, please visit https://www.cutter.com/offer/devops-software-revolution-making-0

Additional resources include the following sites:

https://devops.com/ (currently being revamped)

http://dev2ops.org/

http://devopscafe.org/

https://www.evolven.com/blog/devops-developments.html

https://www.ibm.com/developerworks/library/d-develop-reliable-software-devops/index.html?ca=dat-
