The term "software security" often evokes negative feelings among software developers because it is associated with additional programming effort, uncertainty, and roadblocks to fast development and release cycles. To secure software, developers must follow numerous guidelines that, while intended to satisfy some regulation or standard, can be very restrictive and hard to understand. As a result, a lot of fear, uncertainty, and doubt can surround software security. This blog post, the first in a series, is based on a keynote I recently delivered at the International Conference on Availability, Reliability, and Security (ARES). In that talk, I described how the SecureDevOps movement attempts to combat the toxic environment surrounding software security by shifting the paradigm from following rules and guidelines to creatively determining solutions for tough security problems.
So, you're using Vagrant, and maybe you've even read my earlier post on it, but your Vagrant box doesn't have everything you need. Or maybe it has too much, and you need something simpler. For instance, do you find yourself installing or removing packages, or pinning packages to specific versions, to get parity with your production platform? Or maybe you need more extensive auditing of your environment, such as when you (or your customer) can't trust a third-party box vendor. Or you need a way to clone a virtual machine for parity with the production environment. What are your options? In this blog post, I will explain what a box file is and how you can gain more control over your Vagrant workflow by creating your own box. I will also introduce Packer as a tool to create a Vagrant box, and I will finish with an example for managing Vagrant box versions and distributing updates in a team setting.
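As a rough sketch of what such a Packer build might look like, here is a minimal JSON template that builds a VirtualBox image, provisions it, and packages the result as a Vagrant box. All of the values (the ISO placeholder, the provisioning script, and the box file name) are illustrative assumptions, not a working configuration:

```json
{
  "builders": [
    {
      "type": "virtualbox-iso",
      "iso_url": "<path-or-url-to-your-os-installer-iso>",
      "iso_checksum": "<sha256-of-the-iso>",
      "ssh_username": "vagrant",
      "ssh_password": "vagrant",
      "shutdown_command": "echo vagrant | sudo -S shutdown -P now"
    }
  ],
  "provisioners": [
    { "type": "shell", "script": "scripts/pin-package-versions.sh" }
  ],
  "post-processors": [
    { "type": "vagrant", "output": "my-base.box" }
  ]
}
```

Running `packer build` on a template like this yields a `.box` file you control end to end, which you can then add with `vagrant box add` and audit or version however your team requires.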
Change is hard. When we help teams adopt DevOps processes or more general Agile methodologies, we often encounter initial resistance. When people learn a new tool or process, productivity and enthusiasm consistently dip, a phenomenon known as the "implementation dip." This dip should not be feared, however, but embraced. In his book Leading in a Culture of Change, Michael Fullan defines the implementation dip as "a dip in performance and confidence as one encounters an innovation that requires new skills and new understandings."
A shift to DevOps is a shift to constantly changing and improving tools and processes. Without deliberate steps, we could thrust our team into a constant cycle of implementation dips. In this blog post, I present three strategies for limiting the depth and duration of the implementation dip in software development organizations adopting DevOps.
In the ever-changing world of DevOps, where microservices and distributed architectures are becoming the norm, the need to understand an application's internal state is growing rapidly. Whitebox monitoring gives you details about the internal state of your application, such as the total number of HTTP requests on your web server or the number of errors logged. In contrast, blackbox monitoring (e.g., Nagios) lets you check a system or application from the outside (e.g., checking disk space, or pinging a host) to see whether a host or service is alive, but it does not help you understand how the system got into its current state. Prometheus is an open source whitebox monitoring solution that uses a time-series database to provide scraping, querying, graphing, and alerting based on time-series data. This blog post briefly explores the benefits of using Prometheus as a whitebox monitoring tool.
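To make the whitebox idea concrete, here is a small sketch in plain Python (no Prometheus client library) that renders a counter in Prometheus's text exposition format, the format Prometheus scrapes from an application's metrics endpoint. The metric name, labels, and values are invented for the example:

```python
def render_counter(name, help_text, samples):
    """Render a counter metric in Prometheus's text exposition format.

    samples: list of (labels_dict, value) pairs.
    """
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} counter"]
    for labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in labels.items())
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

# Example: total HTTP requests broken out by method -- exactly the kind of
# internal state the post describes a whitebox tool exposing.
print(render_counter(
    "http_requests_total",
    "Total HTTP requests served.",
    [({"method": "get"}, 1027), ({"method": "post"}, 3)],
))
```

In practice you would use an official Prometheus client library to expose metrics like these over HTTP rather than formatting them by hand; the sketch only shows what the scraped data looks like.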
It has been nearly a year since the DevOps blog launched its own platform. In the nearly 12 months since our launch, we have offered guidelines, practical advice, and tutorials to the ever-increasing number of organizations adopting DevOps (up 26 percent since 2011). In the first six months of 2016, an increasing number of blog visitors were drawn to posts highlighting successful DevOps implementations at Amazon and Netflix as well as tutorials on new technologies such as Otto, Fabric, Ansible, and Docker. This post presents in descending order (with #1 being the most popular) the 10 most popular DevOps posts published in the first six months of 2016.
In this DevOps revolution, we are trying to make everything continuous: continuous integration, continuous deployment, continuous monitoring--the list goes on. One term you rarely hear, however, is continuous security, because it is often seen as an afterthought when building and implementing a delivery pipeline. The pipeline I will be discussing has six components: plan, code, build, test, release, and operate. There is also a seventh, less-formal component, which is the iterative nature of the delivery pipeline in a DevOps environment. Security can, and should, be implemented throughout the pipeline. In this blog post, I discuss how security can be implemented in all components, as well as what benefits come from implementing security in each of those components.
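As one hedged sketch of what "security throughout the pipeline" can look like in practice, here is a generic CI configuration fragment whose stages mirror the components above. The specific tools shown (safety for dependency auditing, bandit for static analysis of Python code) are illustrative choices for the example, not prescriptions:

```yaml
# Sketch: security checks attached to individual pipeline stages.
# Stage names mirror the delivery pipeline components; tool choices
# and job names are assumptions for illustration.
stages: [plan, code, build, test, release, operate]

dependency_audit:
  stage: build
  script:
    - pip install safety
    - safety check -r requirements.txt   # fail the build on known-vulnerable dependencies

static_analysis:
  stage: test
  script:
    - pip install bandit
    - bandit -r src/                     # flag common security issues in the source tree
```

The point is less about any particular scanner and more that each stage gets its own automated security gate, so problems surface as early in the pipeline as possible.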
DevOps practices can increase the validity of software tests and decrease risk in deploying software changes to production environments. Anytime a software change is deployed to production, there is a risk that the change will break and lead to a service outage. This risk is minimized through rigorous testing of the software in a separate test environment where the change can be safely vetted without affecting normal business operations. Problems can arise, however, when these isolated test environments do not properly mimic the production environment. Sometimes a test environment will have different operating system patches installed, different software dependencies installed, different firewall rules, or even different data in its database. These differences open the door to risk because even if the software change passes testing in the test environment, there is a chance of failure in production because it was never before tested in that context. In this blog post, I explore how the DevOps practices of infrastructure as code and automated test execution through continuous integration increase the effectiveness of software testing, allowing us to create test environments that more closely match production.
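As a sketch of how those two practices combine, a CI job (GitLab CI syntax here, though the idea is tool-agnostic) can rebuild the test environment from the same infrastructure-as-code definition before every run, so the environment under test never drifts from what the code describes. The job name and test script are assumptions for illustration:

```yaml
# Sketch: CI job that builds an ephemeral, production-parity test
# environment from a shared Vagrantfile before running the test suite.
integration_test:
  stage: test
  script:
    - vagrant up --provision                         # build the test VM from the shared definition
    - vagrant ssh -c "cd /vagrant && ./run-tests.sh" # run the suite inside it
    - vagrant destroy -f                             # tear it down so every run starts clean
```

Because the environment is created and destroyed by the pipeline itself, differences in patches, dependencies, or firewall rules cannot quietly accumulate between test runs.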
A few years ago, my team took on the task of designing and writing a new (and fairly large) web application project that required us to work collaboratively on features, deploy to unfamiliar environments, and work with other teams to complete those deployments. Does this sound like DevOps yet? Our task was to make these deployments happen with limited resources; however, we didn't want to sacrifice environment parity or automation, knowing that these would help our team lower risk for the project and maintain a better handle on the security of our process. The idea of using Chef, a leading suite of platform-independent infrastructure management and provisioning tools, came to mind; however, the work required to implement the full Chef ecosystem was out of scope for the project. We realized, though, that we were already using Vagrant to provision our local environments in the same way we wanted to provision our remote environments. That is, our Vagrant-based workflow was applying Infrastructure as Code concepts and provisioning with just a single component of Chef, rather than depending on the larger suite of Chef tools. To solve our remote deployment problem, we developed a solution that maintained environment parity by reusing all of the existing Chef configuration, sharing it with Vagrant, and keeping the deployment for a system of any size to a single, automatable command. In this blog post, I describe the transformation of a vanilla Vagrant setup into a stable and testable Infrastructure as Code solution.
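A minimal sketch of what that single-component Chef setup can look like in a Vagrantfile is below; Vagrant's built-in Chef Solo provisioner runs cookbooks without the rest of the Chef ecosystem. The box name, cookbook path, recipe, and attributes are hypothetical placeholders:

```ruby
# Sketch: Vagrantfile provisioning with Chef Solo, reusing the same
# cookbooks intended for the remote environments. Names are placeholders.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  config.vm.provision "chef_solo" do |chef|
    chef.cookbooks_path = "cookbooks"   # the same cookbooks used remotely
    chef.add_recipe "webapp"            # hypothetical application recipe
    chef.json = {                       # environment-specific attributes
      "webapp" => { "env" => "development" }
    }
  end
end
```

Because the cookbooks live alongside the project, local `vagrant up` runs and remote deployments can consume one shared Infrastructure as Code definition.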
As organizations' critical assets have become digitized and access to information has increased, the nature and severity of threats has changed. Organizations' own personnel--insiders--now have greater ability than ever before to misuse their access to critical organizational assets. Insiders know where critical assets are, what is important, and what is valuable. Their organizations have given them authorized access to these assets and the means to compromise the confidentiality, availability, or integrity of data. As organizations rely on cyber systems to support critical missions, a malicious insider who is trying to harm an organization can do so by, for example, sabotaging a critical IT system or stealing intellectual property to benefit a new employer or a competitor. Government and industry organizations are responding to this change in the threat landscape and are increasingly aware of the escalating risks. CERT has been a widely acknowledged leader in insider threat research since it began investigating the problem in 2001. The CERT Guide to Insider Threat was inducted in 2016 into the Palo Alto Networks Cybersecurity Canon, illustrating its value in helping organizations understand the risks that their own employees pose to critical assets. This blog post describes the challenge of insider threats, approaches to detection, and how machine learning-enabled software helps provide protection against this risk.