
Fabric, Ansible, Gauntlt, and Chaos Monkey: The Top 10 DevOps Posts of (The First Six Months) of 2016


It has been nearly a year since the DevOps blog launched on its own platform. In that time, we have offered guidelines, practical advice, and tutorials to the ever-increasing number of organizations adopting DevOps (up 26 percent since 2011). In the first six months of 2016, blog visitors were drawn to posts highlighting successful DevOps implementations at Amazon and Netflix, as well as to tutorials on new technologies such as Otto, Fabric, Ansible, and Docker. This post presents, in descending order (with #1 being the most popular), the 10 most popular DevOps posts published in the first six months of 2016.

10. Adding Security to Your DevOps Pipeline

DevOps practitioners often omit security testing when building their DevOps pipelines because security is often linked with slow-moving business units and outdated policies. These characteristics conflict with the overall goal of DevOps, which is to improve the software delivery process. However, security plays an important role in the software development lifecycle and must be addressed in all applications. Incorporating security into different stages of the DevOps pipeline not only begins to automate security but also makes your security process traceable and easily repeatable. For instance, static code analysis could be done on each code commit, and penetration testing could be completed as part of the deployment phase. In this blog post, Kiriakos Kontostathis presents two tools that enable automated security tests during deployment: Gauntlt and OWASP Zed Attack Proxy (ZAP).

Here is an excerpt:

Gauntlt is a security testing tool that incorporates existing organizational tools used for security testing, e.g., nmap. Built on the Cucumber framework, it provides an easy-to-read language that gives users the ability to write their own attacks. Gauntlt also provides example attacks for the most common attack types in its GitHub repo. A Gauntlt attack file consists primarily of two sections: the background and the scenarios. The background section ensures that the packages needed for the attacks are installed and that developers can connect to the host being tested for security. The scenario section is where the attacks, or tests, are actually performed. Operational staff can have an unlimited number of scenarios, but it is best practice to keep the number small so that each attack file does not get too broad.

I have prepared a short walkthrough that shows how to set up and run Gauntlt tests for your web application. I will be using the Gauntlt starter kit to set up the environment quickly. You will need Vagrant, VirtualBox, and Git installed on your local machine for this demo.
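
To give a flavor of automating the second tool, here is a minimal sketch (not taken from the post) that drives ZAP from a pipeline script via its Python client. It assumes a ZAP daemon is already listening on 127.0.0.1:8080 and that the python-owasp-zap-v2.4 package is installed; the target URL and API key are placeholders.

    # Sketch: spider and actively scan a target through a running ZAP daemon,
    # then fail the pipeline stage if any high-risk alerts are raised.
    import time

    from zapv2 import ZAPv2

    TARGET = 'http://localhost:3000'  # hypothetical application under test
    zap = ZAPv2(apikey='changeme',
                proxies={'http': 'http://127.0.0.1:8080',
                         'https': 'http://127.0.0.1:8080'})

    # Spider the target so ZAP learns the site's URL structure.
    scan_id = zap.spider.scan(TARGET)
    while int(zap.spider.status(scan_id)) < 100:
        time.sleep(2)

    # Actively scan the discovered URLs for common vulnerabilities.
    scan_id = zap.ascan.scan(TARGET)
    while int(zap.ascan.status(scan_id)) < 100:
        time.sleep(5)

    # Gate the build on the results.
    high = [a for a in zap.core.alerts(baseurl=TARGET) if a['risk'] == 'High']
    if high:
        raise SystemExit('ZAP found %d high-risk alerts' % len(high))

Run as one stage of the deployment pipeline, a script like this makes the security check as repeatable as any unit test.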

Read the complete post.

9. An Introduction to DevOps

At Flickr, the video- and photo-sharing website, the live software platform is updated at least 10 times a day. Flickr accomplishes this through an automated testing cycle that includes comprehensive unit testing and integration testing at all levels of the software stack in a realistic staging environment. If the code passes, it is then tagged, released, built, and pushed into production.

This type of lean organization, where software is delivered on a continuous basis, is exactly what the agile founders envisioned when crafting their manifesto: a nimble, streamlined process for developing and deploying software into the hands of users while continuously integrating feedback and new requirements. A key to Flickr's prolific deployment is DevOps, a software development concept that literally and figuratively blends development and operations staff and tools in response to the increasing need for interoperability. In this blog post, the first in a series, C. Aaron Cois introduces DevOps and explores its impact from an internal perspective on our own software development practices and through the lens of its impact on the software community at large.

Here is an excerpt:

At the SEI, I oversee a software engineering team that works within CERT's Cyber Security Solutions (CS2) Directorate. Within CS2, our engineers design and implement software solutions that solve challenging problems for federal agencies, law enforcement, defense intelligence organizations, and industry by leveraging cutting-edge academic research and emerging technologies.

The manner in which teams develop software is constantly evolving. A decade ago, most software development environments were siloed: software developers in one silo, and mainframe computers and the IT staff who maintained them in another.

The arrival of virtualization marked a technological revolution in the field of software development. Before, if I needed a new server for my web application, I would have to order the server and wait for it to ship. Then, upon arrival, I would have to rack the server, install and provision the system, and configure networking and access controls, all before I could begin my real development work.

Read the complete post.

8. Developing with Otto: A First Look

You will be hard-pressed to find a DevOps software development shop that doesn't employ Vagrant to provision its local software development environments. In this blog post, Aaron Volkmann introduces Otto, a tool by HashiCorp, the makers of Vagrant.

Here is an excerpt:

Vagrant is great for standing up development environments running on a developer's laptop, but when it comes time to migrate an application to production, the environment configuration stored in Vagrant's configuration file, the Vagrantfile, cannot be directly used to provision production servers. Otto expands upon Vagrant by allowing a single configuration file to define, provision, and deploy to both development and production environments.

Otto addresses one of the big challenges in managing microservice-based applications by resolving service dependencies. In a microservice architecture, a system is broken down into many independent, deployable units as opposed to a single monolith. This means that we must deploy many small applications instead of one large application, adding management complexity. Figure 1 in the original post shows that your app may depend on several services to function. To spin up a local development environment, a developer would have to know about all these dependencies and configure the Vagrant environment to run them. Otto simplifies infrastructure provisioning by traversing the dependency hierarchy and building a Vagrant environment that can accommodate all dependencies. If a service on which the application depends has dependencies of its own, Otto will automatically resolve, fetch, and build them.
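
The dependency traversal Aaron describes is easy to picture with a toy example. The sketch below illustrates the concept only, not Otto's implementation, and the service names are hypothetical.

    # Toy illustration of dependency traversal (not Otto itself): visit every
    # transitive dependency exactly once, before the service that needs it.
    DEPS = {
        'your-app': ['auth-service', 'catalog-service'],  # hypothetical graph
        'auth-service': ['postgres'],
        'catalog-service': ['postgres', 'redis'],
        'postgres': [],
        'redis': [],
    }

    def resolve(service, ordered=None, seen=None):
        """Return services in the order they must be provisioned."""
        ordered = [] if ordered is None else ordered
        seen = set() if seen is None else seen
        for dep in DEPS[service]:
            if dep not in seen:
                seen.add(dep)
                resolve(dep, ordered, seen)
        ordered.append(service)
        return ordered

    print(resolve('your-app'))
    # ['postgres', 'auth-service', 'redis', 'catalog-service', 'your-app']

Otto does this bookkeeping for you: the developer declares only the app's direct dependencies, and Otto recursively assembles everything the Vagrant environment needs.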

Read the complete post.

7. Monitoring in the DevOps Pipeline

In the realm of DevOps, automation often takes the spotlight, but nothing is more ubiquitous than monitoring. There is value in increased awareness during each stage of the delivery pipeline. However, perhaps more than any other aspect of DevOps, the act of monitoring raises the question, "Yes, but what do we monitor?" There are numerous aspects of a project you may want to keep an eye on and dozens of tools from which to choose. This blog post by Tim Palko explores what DevOps monitoring means and how it can be applied effectively.

Here is an excerpt:

Before getting into the state of monitoring in DevOps, I want to take a minute to discuss tooling. Because there are so many products promoting monitoring, choosing among them can be distracting. It is best to first understand what holds business value for you and your customers. You should also recognize that not everything that can be monitored should be monitored. Discussing any particular tool isn't the aim of this post, but it is worth noting that most vendors offer free trials and many products are simply free, so it might be worth your time to sample a few after you have determined your monitoring strategy.

Infrastructure and service monitoring were around long before DevOps, so how does DevOps really affect monitoring strategy, and is DevOps even needed for monitoring? Strangely, yes, in a way.
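
For readers who have never wired up monitoring at all, the essence of service monitoring fits in a few lines. The following toy health-check loop is illustrative only; the endpoint, interval, and threshold are placeholders, and a real deployment would use one of the products discussed in the post.

    # Toy health-check loop: poll an HTTP endpoint and alert after repeated
    # failures. URL, interval, and threshold are placeholders.
    import time
    import urllib.request

    HEALTH_URL = 'http://localhost:8000/health'  # hypothetical endpoint
    FAILURE_THRESHOLD = 3

    failures = 0
    while True:
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
                ok = resp.status == 200
        except OSError:          # connection refused, timeout, HTTP error
            ok = False
        failures = 0 if ok else failures + 1
        if failures >= FAILURE_THRESHOLD:
            print('ALERT: %s unhealthy %d checks in a row'
                  % (HEALTH_URL, failures))
        time.sleep(30)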

Read the complete post.

6. Addressing the Detrimental Effects of Context Switching with DevOps

In a computing system, a context switch occurs when an operating system stores the state of an application thread before stopping the thread and restoring the state of a different (previously stopped) thread so its execution can resume. The overhead incurred in storing and restoring this state negatively impacts operating system and application performance. In the blog post Addressing the Detrimental Effects of Context Switching with DevOps, Todd Waits describes how DevOps ameliorates the negative impacts that "context switching" between projects can have on a software engineering team's performance.

Here is an excerpt:

In the book Quality Software Management: Systems Thinking, Gerald Weinberg discusses how the concept of context switching applies to an engineering team. From a human workforce perspective, context switching is the process of stopping work in one project and picking it back up after performing a different task on a different project. Just like computing systems, human team members often incur overhead when context switching between multiple projects.

Context switching most commonly occurs when team members are assigned to multiple projects. The rationale behind the practice is that it is logistically simpler to allocate team members across projects than to dedicate resources to each project. It seems reasonable to assume that splitting a person's effort between two projects yields 50 percent effort on each project. Moreover, if a team member is dedicated to a single project, that team member will be idle whenever the project is waiting for something to occur, such as completing paperwork, reviews, etc.
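
That 50 percent assumption is precisely what Weinberg challenges. A quick back-of-the-envelope script makes the point; the loss percentages below are figures commonly attributed to Quality Software Management and are illustrative, not measurements of any particular team.

    # Effective effort per project under the context-switching losses
    # commonly attributed to Weinberg (illustrative, not measurements).
    SWITCHING_LOSS = {1: 0.00, 2: 0.20, 3: 0.40, 4: 0.60, 5: 0.75}

    for projects, loss in sorted(SWITCHING_LOSS.items()):
        per_project = (1.0 - loss) / projects
        print('%d project(s): %3.0f%% effort each, %3.0f%% lost to switching'
              % (projects, per_project * 100, loss * 100))

By these figures, splitting a person across two projects yields roughly 40 percent effort on each, not the 50 percent the naive allocation assumes.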

Read the complete post.

5. DevOps and Docker

Docker is quite the buzz in the DevOps community these days, and for good reason. Docker containers provide the tools to develop and deploy software applications in a controlled, isolated, flexible, and highly portable infrastructure. In the post DevOps and Docker, CERT researcher Joe Yankel introduces Docker as a tool to develop and deploy software applications with substantial benefits to scalability, resource efficiency, and resiliency.

Here is an excerpt:

Linux container technology (LXC), which provides the foundation that Docker is built upon, is not a new idea. LXC has been in the Linux kernel since version 2.6.24, when control groups (or cgroups) were officially integrated. Cgroups were actually being used by Google as early as 2006, since Google has always been looking for ways to isolate resources running on shared hardware. In fact, Google acknowledges firing up over 2 billion containers a week and has released its own version of LXC containers called lmctfy, or "Let Me Contain That For You."

Unfortunately, none of this technology was easy to adopt until Docker came along and simplified container technology, making it easier to use. Before Docker, developers had a hard time accessing, implementing, or even understanding LXC, let alone its advantages over hypervisors. DotCloud founder and current Docker chief technology officer Solomon Hykes was on to something really big when he began the Docker project and released it to the world as open source in March 2013. Docker's ease of use is due to its high-level API and documentation, which enabled the DevOps community to dive in full force and create tutorials, official containerized applications, and many additional technologies. By lowering the barrier to entry for container technology, Docker has changed the way developers share, test, and deploy applications.
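
The "high-level API" point is easy to demonstrate. Here is a minimal sketch using the Docker SDK for Python, which is an assumption on my part (the original post works from the Docker command line); it requires a local Docker daemon and the docker package.

    # Pull an image if needed, run a throwaway container, capture its output.
    import docker

    client = docker.from_env()  # connect to the local Docker daemon
    output = client.containers.run('alpine', 'echo hello from a container',
                                   remove=True)  # remove container when done
    print(output.decode().strip())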

Read the complete post.

4. DevOps Case Study: Amazon AWS

Regular readers of the DevOps blog will recognize a recurring theme in this series: DevOps is fundamentally about reinforcing desired quality attributes through carefully constructed organizational process, communication, and workflow. By studying well-known tech companies and their techniques for managing software engineering and sustainment, DevOps blog authors have been able to present valuable real-world examples for software engineering approaches and associated outcomes. These examples also serve as excellent case studies for DevOps practitioners. In the post DevOps Case Study: Amazon AWS, C. Aaron Cois explores Amazon's experience with DevOps.

Here is an excerpt:

Amazon is one of the most prolific tech companies today. Amazon transformed itself in 2006 from an online retailer to a tech giant and pioneer in the cloud space with the release of Amazon Web Services (AWS), a widely used on-demand Infrastructure as a Service (IaaS) offering. Amazon accepted a lot of risk with AWS. By developing one of the first massive public cloud services, they accepted that many of the challenges would be unknown, and many of the solutions unproven. To learn from Amazon's success we need to ask the right questions. What steps did Amazon take to minimize this inherently risky venture? How did Amazon engineers define their process to ensure quality?

Luckily, some insight into these questions was made available when Google engineer Steve Yegge (a former Amazon engineer) accidentally made public an internal memo outlining his impression of Google's failings (and Amazon's successes) at platform engineering. This memo (which Yegge has specifically allowed to remain online) outlines a specific decision that illustrates CEO Jeff Bezos's understanding of the underlying tenets of what we now call DevOps, as well as his dedication to what I will claim are the primary quality attributes of the AWS platform: interoperability, availability, reliability, and security.

Read the complete post.

3. DevOps Case Study: Netflix and the Chaos Monkey

While DevOps is often approached through practices such as Agile development, automation, and continuous delivery, the spirit of DevOps can be applied in many ways. In this blog post, C. Aaron Cois examines another seminal case study of DevOps thinking applied in a somewhat out-of-the-box way by Netflix.

Here is an excerpt:

Netflix is a fantastic case study for DevOps because its software-engineering process shows a fundamental understanding of DevOps thinking and a focus on quality attributes through automation-assisted process. Recall that DevOps practitioners espouse a driven focus on quality attributes to meet business needs, leveraging automated processes to achieve consistency and efficiency.

Netflix's streaming service is a large distributed system hosted on Amazon Web Services (AWS). Since there are so many components that have to work together to provide reliable video streams to customers across a wide range of devices, Netflix engineers needed to focus heavily on the quality attributes of reliability and robustness for both server- and client-side components. In short, they concluded that the only way to be comfortable handling failure is to constantly practice failing. To achieve the desired level of confidence and quality, in true DevOps style, Netflix engineers set about automating failure.
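
The tool Netflix built for this is the Chaos Monkey, a Java service; purely to illustrate the idea of automated failure injection, here is a toy Python sketch (not Netflix's tool) that terminates one randomly chosen instance from a hypothetical opt-in group. It assumes boto3 and configured AWS credentials, and it should never be pointed at infrastructure you are not deliberately testing.

    # Toy failure injection (not Netflix's Chaos Monkey): terminate one
    # randomly chosen running instance from a hypothetical opt-in group.
    import random

    import boto3

    ec2 = boto3.client('ec2')

    # Find running instances explicitly tagged as fair game.
    reservations = ec2.describe_instances(Filters=[
        {'Name': 'tag:chaos', 'Values': ['opt-in']},         # hypothetical tag
        {'Name': 'instance-state-name', 'Values': ['running']},
    ])['Reservations']
    instances = [i['InstanceId']
                 for r in reservations for i in r['Instances']]

    if instances:
        victim = random.choice(instances)
        print('Terminating', victim)
        ec2.terminate_instances(InstanceIds=[victim])

Running something like this on a schedule forces every team to build services that survive the loss of any single instance, which is exactly the practice-failing-constantly discipline the excerpt describes.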

Read the complete post.

2. Continuous Integration in DevOps

When Agile software development models were first envisioned, a core tenet was to iterate more quickly on software changes and determine the correct path via exploration--essentially, striving to "fail fast" and iterate to correctness as a fundamental project goal. The reason for this process was a belief that developers lacked the necessary information to correctly define long-term project requirements at the outset of a project, due to an inadequate understanding of the customer and an inability to anticipate a customer's evolving needs. Recent research supports this reasoning by continuing to highlight disconnects between planning, design, and implementation in the software development lifecycle. In the blog post Continuous Integration in DevOps, C. Aaron Cois highlights continuous integration as a way to avoid these disconnects and mitigate risk in software development projects.

Here is an excerpt:

A cornerstone of DevOps is continuous integration (CI), a technique designed and named by Grady Booch that continually merges source code updates from all developers on a team into a shared mainline. This continual merging prevents a developer's local copy of a software project from drifting too far afield as new code is added by others, avoiding catastrophic merge conflicts. In practice, CI involves a centralized server that continually pulls in all new source code changes as developers commit them and builds the software application from scratch, notifying the team of any failures in the process. If a failure is seen, the development team is expected to refocus and fix the build before making any additional code changes. While this may seem disruptive, in practice it focuses the development team on a singular stability metric: a working automated build of the software.

Recall that a fundamental component of a DevOps approach is that to remove disconnects in understanding and influence, organizations must embed and fully engage one or more appropriate experts within the development team to enforce a domain-centric perspective. To remove the disconnect between development and sustainment, DevOps practitioners include IT operations professionals in the development team from the beginning as full team members. Likewise, to ensure software quality, QA professionals must be team members throughout the project lifecycle. In other words, DevOps takes the principles of Agile and expands their scope, recognizing that ensuring high quality development requires continual engagement and feedback from a variety of technical experts, including QA and operations specialists.
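
The excerpt's description of a CI server reduces to a simple loop. The sketch below is a toy illustration of that loop, not a substitute for a real CI server such as Jenkins; the repository path and build command are placeholders.

    # Toy CI loop (a real team would use a CI server such as Jenkins):
    # poll a Git mainline, rebuild on every new commit, report failures.
    import subprocess
    import time

    REPO = '/srv/ci/checkout'     # hypothetical local clone of the mainline
    BUILD_CMD = ['make', 'test']  # hypothetical build-and-test command

    def head(repo):
        return subprocess.check_output(
            ['git', '-C', repo, 'rev-parse', 'HEAD']).strip()

    last_built = None
    while True:
        subprocess.check_call(['git', '-C', REPO, 'pull', '--ff-only'])
        commit = head(REPO)
        if commit != last_built:
            result = subprocess.run(BUILD_CMD, cwd=REPO)
            verdict = ('passed' if result.returncode == 0
                       else 'FAILED: fix the build before new changes')
            print('Build of %s %s' % (commit.decode()[:8], verdict))
            last_built = commit
        time.sleep(60)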

Read the complete post.

1. DevOps Technologies: Fabric or Ansible

In the blog post DevOps Technologies: Fabric or Ansible, CERT researcher Tim Palko highlights use cases associated with the DevOps deployment process, including evaluating resource requirements, designing a production system, provisioning and configuring production servers, and pushing code, to name a few.

Here is an excerpt:

The workflow of deploying code is almost as old as code itself. There are many use cases associated with the deployment process, including evaluating resource requirements, designing a production system, provisioning and configuring production servers, and pushing code, to name a few. In this blog post I focus on a use case for configuring a remote server with the packages and software necessary to execute your code. This use case is supported by many different and competing technologies, such as Chef, Puppet, Fabric, Ansible, Salt, and Foreman, just a few of the technologies you are likely to have heard of on the path to automation in DevOps. All of these technologies have free offerings, leave you with scripts to commit to your repository, and get the job done. This post explores Fabric and Ansible in more depth. To learn more about other infrastructure-as-code solutions, check out Joe Yankel's blog post on Docker or my post on Vagrant.

One difference between Fabric and Ansible is that while Fabric will get you results in minutes, Ansible requires a bit more effort to understand. Ansible is generally much more powerful, since it provides much deeper and more complex semantics for modeling multi-tier infrastructures, such as those with arrays of web and database hosts. From an operator's perspective, Fabric has a more literal and basic API and uses Python for authoring, while Ansible consumes YAML and provides a richness in its behavior (which I discuss later in this post). We'll walk through examples of both in this post.
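
As a taste of the Fabric side, here is a minimal fabfile sketch using the Fabric 1.x API that was current when the post was written; the host, user, and packages are placeholders, and the full post walks through real examples of both tools.

    # Sketch of a fabfile (Fabric 1.x): run 'fab provision' from the
    # directory containing this file. Host and packages are placeholders.
    from fabric.api import env, run, sudo

    env.hosts = ['deploy@192.0.2.10']  # hypothetical target server

    def provision():
        """Install the packages the application needs."""
        sudo('apt-get update')
        sudo('apt-get install -y nginx git')

    def status():
        """Quick smoke test that the web server is up."""
        run('service nginx status')

The Ansible equivalent would express the same steps declaratively in a YAML playbook, which is where the richer semantics Tim mentions come in.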

Read the complete post.

Wrapping Up and Looking Ahead

In the remaining months of 2016, the DevOps blog will continue to publish guidelines and practical advice to organizations seeking to adopt DevOps in practice with upcoming posts on the following topics:

  • Adding a risk management framework to the DevOps platform,
  • Steps to implement DevOps in the enterprise, and
  • Applying DevOps principles to address dynamic threats in the cybersecurity domain.

We welcome your feedback on the DevOps blog as well as suggestions for future content. Please leave feedback in the comments section below.

Additional Resources

View the webinar DevOps Panel Discussion featuring Kevin Fall, Hasan Yasar, and Joseph D. Yankel.

View the webinar Culture Shock: Unlocking DevOps with Collaboration and Communication with Aaron Volkmann and Todd Waits.

View the webinar What DevOps is Not! with Hasan Yasar and C. Aaron Cois.

Listen to the podcast DevOps--Transform Development and Operations for Fast, Secure Deployments featuring Gene Kim and Julia Allen.

Read all of the blog posts in our DevOps series.
