Formal documentation (such as source code documentation, system requirements and design documentation, or documentation for various user types) is often completely ignored by development teams; applying DevOps processes and philosophies to documentation can help alleviate this problem. Software documentation tends to fall into several categories: code, requirement, design, system, and user documentation. One reason documentation is often ignored is that standard documentation tools and processes do not fit well with the suite of tools development teams rely on, such as version control, issue trackers, wikis, and source code. This mismatch creates an obstacle that slows the velocity of development teams. This blog post explores three primary challenges to documentation (process, documenting source code, and system documentation) and explains how DevOps-based documentation allows all stakeholders to access a common, trusted source of information for project details.
User and system documentation is often created and maintained using clunky binary files (i.e., *.docx). Collaboration on these documents typically amounts to passing updated versions through long email chains or network file shares. Moreover, proprietary formats (*.docx and generated PDFs) tend to render inconsistently across operating systems, which can lead to data corruption across teams with disparate work environments.
Storing binary files in version control systems is a solution to some of these problems, but versioning binary files is still challenging. Automating and integrating these types of files into a software development lifecycle is problematic at best, often resulting in documents languishing behind the pace of a project, or being deprecated entirely. Extensive documentation can be seen as an anti-pattern (approaching a problem with a bad solution); each team has to find the right balance between depth and simplicity.
Shifting Documentation "Left"
Ideally, documentation should be maintained within and generated from canonical sources. When discussing documentation, it is important to distinguish between information and artifacts. Information is the data, or source, of what is documented. Artifacts are the consumable end products of organizing the information in a manner that can be read by an appropriate audience. Artifacts may be system requirements documents, design documents, status reports, etc.
Information can be maintained in a variety of sources, such as issue trackers, wikis, and code repositories. Information should be stored where people actually interact with and act on the data. For example, if we are looking for documentation on a specific function, the documentation for that function should be kept where the function is: the code.
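This co-location can be sketched with a docstring: the documentation for the function lives in the same file, and the same code review, as the function itself. The function below is a hypothetical example, not from any particular project.

```python
def apply_discount(price, rate):
    """Return price reduced by rate, where rate is a fraction between 0 and 1.

    Because this description sits next to the code, any change to the
    function's behavior or signature forces a matching change to the
    documentation in the same commit and the same review.
    """
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)
```

Documentation generators can then extract these docstrings directly into readable artifacts, so the code remains the single source of truth.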
If the documentation of a function is not kept with the code, then when the function changes, engineers must not only update the code but also hunt down every other place where the function is documented and update those as well. Poor documentation practices like this slow the velocity of development. Instead, engineers should act as maintainers or curators of information and work with it in its source state.
After all these pieces of information are stored in the proper place, we can use tools to generate document artifacts people can read and parse as consumers of information. The artifacts become immutable and reference the documentation-generation process as a means to get the most up-to-date data. Hosting documentation artifacts as web pages is the perfect medium for this type of documentation, because it will always display the current version of a document.
The ability to document code has been part of programming best practices for a long time. Over the last decade, several tools have been developed for various languages to enhance the documentation experience. These tools allow developers to document pertinent information where it makes sense for those working with or interacting with the source code. Some tools also allow engineers to embed human-readable tests in their documentation. When the code compiles, the tests from the documentation run; if the code has changed and the documentation has not been updated, the build fails. This rapid response from a continuous integration environment can help ensure adherence to proper documentation strategies.
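Python's doctest module is one widely available example of tests embedded in documentation: the example in the docstring is executed, and a mismatch between documented and actual behavior fails the run. The function here is a hypothetical illustration.

```python
def slugify(title):
    """Convert a page title into a URL slug.

    The example below is executable documentation: doctest runs it and
    fails if the function's behavior drifts from what is documented.

    >>> slugify("Shifting Documentation Left")
    'shifting-documentation-left'
    """
    return "-".join(title.lower().split())

if __name__ == "__main__":
    import doctest
    doctest.testmod()
```

Wiring `doctest.testmod()` (or `python -m doctest module.py`) into a continuous integration job turns stale documentation into a build failure rather than a silent drift.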
The following tools are representative of libraries that generate readable documentation artifacts directly from comments in source code files.
Often managers may not understand the demands of documentation on the engineers. More than once, I have received requests to document the functionality of every line of code. Managers need to learn that this type of documentation is onerous for an engineer and will quickly destroy any ability to deliver business value in a reasonable time frame.
As with all things in DevOps, we automate what we can and find a balance of what makes sense. Auto-documenting all new objects with "This is a new object that should have documentation" may seem like a good way to get developers to document their code. However, if there is no consequence for not documenting (i.e., build failure), then you will end up with every object undocumented (or mis-documented with placeholder information) and a significant amount of incurred technical debt to go back and clean up the poor documentation.
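One way to attach a real consequence is a small check in the build that fails when objects are undocumented or carry only the auto-generated placeholder. The sketch below uses Python's `ast` module; the placeholder string and the policy are assumptions, not a standard tool.

```python
import ast

# Hypothetical placeholder that an auto-documenting tool might insert.
PLACEHOLDER = "This is a new object that should have documentation"

def undocumented(source):
    """Return the names of functions and classes in the given source
    whose docstring is missing or is only the placeholder text."""
    tree = ast.parse(source)
    bad = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            doc = ast.get_docstring(node)
            if not doc or doc.strip() == PLACEHOLDER:
                bad.append(node.name)
    return bad
```

A continuous integration step can run this check over changed files and exit nonzero when the returned list is nonempty, making the missing documentation a build failure instead of deferred technical debt.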
Developers can use the tools listed above to implement good practices for verifying documentation coverage of the code. If you are trying to document a project at the end of its lifecycle, start with the most critical portions of the application. From the inception of a project, focus on a minimum viable product when it comes to documentation: document the facts, not the journey that led to the solution.
System, Design, and User Documentation
Tools for documenting system, design, and user documentation are not as plentiful as those for documenting source code. Many times, organizations will begin to develop their own custom processes and infrastructure.
In a recent blog post, Mikey Ariel, senior technical writer at Red Hat, advocates applying continuous integration and unit testing to documentation. In the post, Ariel describes a process that can test documentation for style-guide adherence (e.g., whether your organization writes "backend" or "back-end") and grammar (using APIs for tools like Hemingway or After the Deadline). Applying unit-test philosophies to documentation can ensure standardized documentation across broad organizational boundaries.
During a discussion about documentation at DevOpsDays NYC 2015, Mike Rembetsy from Etsy described their process for dynamically documenting network infrastructure for their data centers. Etsy uses Chef to update their infrastructure, and the Chef script dynamically updates their Nagios monitoring instance and regenerates and publishes a network diagram. By taking a DevOps approach to their documentation, Etsy developers have automated the process of updating documents so it happens as a byproduct of doing their work. These concepts and practices ensure documentation that is always accurate and reflects the current state of a system.
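The general pattern, documentation regenerated from live infrastructure data, can be sketched simply. The node records and rendering below are hypothetical stand-ins for what a configuration-management run might emit, not Etsy's actual tooling.

```python
# Hypothetical node data, standing in for what a configuration-management
# tool (e.g., Chef) reports after converging the infrastructure.
NODES = [
    {"name": "web-01", "role": "frontend", "ip": "10.0.1.5"},
    {"name": "db-01", "role": "database", "ip": "10.0.2.9"},
]

def render_inventory(nodes):
    """Render a Markdown table of hosts from node data, so the published
    inventory is regenerated on every run rather than edited by hand."""
    lines = ["| Host | Role | IP |", "| --- | --- | --- |"]
    for node in sorted(nodes, key=lambda n: n["name"]):
        lines.append("| {name} | {role} | {ip} |".format(**node))
    return "\n".join(lines)
```

Publishing the rendered table to a web page on every infrastructure change gives the same property Etsy describes: the document is a byproduct of the work and cannot fall behind the system it describes.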
Treating documentation like source code gives organizations versioned information and lets individuals maintain or curate smaller sources of data that are aggregated automatically into various documentation artifacts. Working with the data where it is actionable enables efficient tasking and minimizes the detrimental effects of context switching. A switch to DevOps documentation processes and workflows requires a shift in thinking about what tools are necessary for generating documentation. The more we can do as teams to automate the generation of information, or to facilitate its curation in the proper repositories, the more we will improve the quality and usefulness of documentation for engineering teams and the people who consume the documents. Ultimately, DevOps-based documentation allows all stakeholders to access a common, trusted source of information for project details.
Every two weeks, the SEI will publish a new blog post that offers technical guidelines and practical advice for DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.
On August 11th Chris Taschner and Tim Palko will present the SEI Webinar DevOps Security: Ignore It As Much As You Would Ignore Regular Security. To register, please click here.
To view the webinar Culture Shock: Unlocking DevOps with Collaboration and Communication with Aaron Volkmann and Todd Waits please click here.
To view the webinar What DevOps is Not! with Hasan Yasar and C. Aaron Cois, please click here.
To listen to the podcast DevOps--Transform Development and Operations for Fast, Secure Deployments featuring Gene Kim and Julia Allen, please click here.
To read all of the blog posts in our DevOps series, please click here.