
10 Recommended Practices for Achieving Agile at Scale


This is the second of two blog posts highlighting recommended practices for achieving Agile at Scale, originally published on the Cyber Security & Information Systems Information Analysis Center (CSIAC) website. The first post in the series, by Ipek Ozkaya and Robert Nord, explored challenges to achieving Agile at Scale and presented the first five recommended practices:


1. Team coordination
2. Architectural runway
3. Alignment of development and decomposition
4. Quality-attribute scenarios
5. Test-driven development

This post presents the remaining five technical best practices, as well as three conditions that will help organizations achieve the most value from these recommended practices. It was originally published in its entirety on the SPRUCE website.

Recommended Practices for Achieving Agile at Scale

6. Use end-to-end testing for early insight into emerging system properties.

To successfully derive the full benefit from test-driven development at scale, consider early and continuous end-to-end testing of system scenarios. When teams test only the features for which they are responsible, they lose insight into overall system behavior (and how their efforts contribute to achieving it). Each small team could be successful against its own backlog, but someone needs to look after broader or emergent system properties and implications. For example, who is responsible for the fault tolerance of the system as a whole? Answering such questions requires careful orchestration of development with verification activities early and throughout development. When testing end-to-end, take into account different operational contexts, environments, and system modes.

At scale, understanding end-to-end functionality requires that it be explicitly elicited and documented. These goals can be achieved through the application of Agile requirements-management techniques, such as stories, as well as the use of architecturally significant requirements. If there is a need to orchestrate multiple systems, however, a more deliberate elicitation of end-to-end functionality as mission/business threads should produce a better result.
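To make the practice concrete, the sketch below (not part of the original SPRUCE material) shows the shape of an end-to-end scenario test written with pytest. The OrderSystem class is an in-memory stand-in; in a real project the test would drive the deployed subsystems through their external interfaces and a fault-injection hook, and the service names, thresholds, and APIs here are purely illustrative assumptions.

```python
"""A minimal sketch of an end-to-end scenario test with an emergent-property check."""
import pytest


class OrderSystem:
    """Hypothetical facade over the deployed subsystems (replace with real clients)."""

    def __init__(self):
        self.inventory_replicas = 3
        self.reserved = {}

    def place_order(self, sku, quantity):
        self.reserved[sku] = self.reserved.get(sku, 0) + quantity
        return f"order-{sku}"

    def kill_one_inventory_replica(self):
        # Stand-in for a fault-injection hook in the test environment.
        self.inventory_replicas -= 1

    def order_status(self, order_id):
        # The system as a whole should tolerate the loss of a single replica.
        return "CONFIRMED" if self.inventory_replicas >= 1 else "FAILED"


@pytest.fixture
def system():
    return OrderSystem()


def test_order_flow_survives_single_replica_failure(system):
    """Exercises a whole mission thread and checks an emergent property
    (fault tolerance) rather than a single team's feature."""
    order_id = system.place_order(sku="A-100", quantity=2)
    system.kill_one_inventory_replica()          # inject the fault mid-scenario
    assert system.order_status(order_id) == "CONFIRMED"
    assert system.reserved["A-100"] == 2
```

The value of writing the test at this level is that it fails when any of the cooperating subsystems breaks the scenario, not only when one team's feature regresses.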

7. Use continuous integration for consistent attention to integration issues.

This basic Agile practice becomes even more important at scale, given the increased number of subsystems that must work together and whose development must be orchestrated. One implication is that the underlying infrastructure developers use day to day must be able to support continuous integration. Another is that developers must focus on integration earlier, identifying the subsystems and existing frameworks that will need to be integrated. This identification has implications for the architectural runway, quality-attribute scenarios, and orchestration of development and verification activities presented in our earlier blog post. Useful measures for managing continuous integration include rework rate and scrap rate. It is also important to start early in the project to identify issues that can arise during integration. More broadly, both integration and the ability to integrate must be managed in the Agile environment.
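As one illustration of the rework and scrap measures mentioned above, the following sketch tracks both as a share of total effort per iteration. Definitions of rework and scrap vary by organization, so the fields and numbers here are assumptions for illustration rather than a prescribed metric.

```python
"""A minimal sketch of tracking rework and scrap rates per iteration."""
from dataclasses import dataclass


@dataclass
class IterationEffort:
    new_work: float   # hours (or points) spent on new functionality
    rework: float     # hours spent revising previously integrated code
    scrap: float      # hours spent on code that was later discarded

    @property
    def total(self) -> float:
        return self.new_work + self.rework + self.scrap

    @property
    def rework_rate(self) -> float:
        return self.rework / self.total if self.total else 0.0

    @property
    def scrap_rate(self) -> float:
        return self.scrap / self.total if self.total else 0.0


if __name__ == "__main__":
    iterations = [
        IterationEffort(new_work=120, rework=15, scrap=5),
        IterationEffort(new_work=110, rework=30, scrap=12),  # integration pain emerging
    ]
    for i, it in enumerate(iterations, start=1):
        print(f"iteration {i}: rework {it.rework_rate:.0%}, scrap {it.scrap_rate:.0%}")
```

Watching the trend across builds or iterations, rather than any single value, is what signals that integration issues are accumulating.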

8. Consider technical debt management as an approach to manage system development strategically.

The concept of technical debt arose naturally from the use of Agile methods, where the emphasis on releasing features quickly often creates a need for rework later. At scale, there may be multiple opportunities for shortcuts, so understanding technical debt and its implications becomes a means for strategically managing the development of the system. For example, there might be cases where certain architectural selections made to accelerate delivery have long-term consequences. A recent field study the SEI conducted with software developers also strongly supports the finding that the leading sources of technical debt are architectural choices. Such tradeoffs must be understood and managed based on both qualitative and quantitative measurements of the system. Qualitatively, architecture evaluations can be used as part of the product demos or retrospectives that Agile advocates. Quantitative measures are harder but can arise from understanding productivity, system uncertainty, and measures of rework (e.g., when uncertainty is greater, it may make more sense to incur more rework later).
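One lightweight way to make such tradeoffs explicit is a technical-debt register reviewed at demos or retrospectives. The sketch below uses the common principal/interest framing; the item descriptions, effort figures, and break-even heuristic are illustrative assumptions, not an SEI-prescribed measure.

```python
"""A minimal sketch of a technical-debt register kept alongside the backlog."""
from dataclasses import dataclass


@dataclass
class DebtItem:
    description: str
    principal: float   # estimated effort (days) to remediate now
    interest: float    # estimated extra effort (days) paid per release if deferred

    def breakeven_releases(self) -> float:
        """Releases after which deferring costs more than paying the debt down."""
        return self.principal / self.interest if self.interest else float("inf")


if __name__ == "__main__":
    register = [
        DebtItem("Ad hoc messaging layer chosen to hit the first demo", principal=20, interest=4),
        DebtItem("Duplicated validation logic across two subsystems", principal=5, interest=0.5),
    ]
    # Items with a short break-even horizon are candidates for the next
    # architectural-runway work; review the register at each retrospective.
    for item in sorted(register, key=lambda d: d.breakeven_releases()):
        print(f"{item.description}: break-even after {item.breakeven_releases():.1f} releases")
```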

9. Use prototyping to rapidly evaluate and resolve significant technical risks.

To address significant technical issues, teams employing Agile methods will sometimes perform what Scrum calls a technical spike: a team branches out from the rest of the project to investigate a specific technical issue, develops one or more prototypes to evaluate possible solutions, and reports what it learned so that the project can proceed with a greater likelihood of success. A technical spike may extend over multiple sprints, depending on the seriousness of the issue and how much time it takes to investigate the issue and report information that the project can use.

At scale, technical risks with severe consequences are typically more numerous. Prototyping (and other approaches to evaluating candidate solutions, such as simulation and demonstration) can therefore be essential not only in early planning but also as a recurring activity throughout development. A goal of Agile methods is increased early visibility, and from that perspective prototyping is a valuable means of achieving visibility more quickly for technical risks and their mitigations. The practice of making team coordination a top priority, mentioned earlier, has a role here too: it helps ensure that what was learned from prototyping is communicated across the overall system effort.
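A technical spike often boils down to measuring whether a candidate approach can meet a stated response measure. The sketch below shows one way such a throwaway prototype might report its evidence; the candidate function, threshold, and percentile choice are illustrative assumptions rather than a prescribed method.

```python
"""A minimal sketch of a spike prototype checking a candidate against a response measure."""
import statistics
import time


def candidate_json_codec(payload: dict) -> None:
    # Stand-in for the approach under evaluation (e.g., a candidate framework call).
    import json
    json.loads(json.dumps(payload))


def measure_latency_ms(fn, payload, runs: int = 200) -> float:
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(payload)
        samples.append((time.perf_counter() - start) * 1000)
    # Quality-attribute scenarios usually state a near-worst-case response
    # measure, so report a high percentile rather than the mean.
    return statistics.quantiles(samples, n=100)[94]   # ~95th percentile


if __name__ == "__main__":
    payload = {"ids": list(range(1000))}
    threshold_ms = 5.0                                 # from the quality-attribute scenario
    p95 = measure_latency_ms(candidate_json_codec, payload)
    verdict = "meets" if p95 <= threshold_ms else "misses"
    print(f"candidate {verdict} the {threshold_ms} ms response measure (p95 = {p95:.2f} ms)")
```

Whatever the measurement, the spike's output should be evidence the whole project team can act on, not just a local finding.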

10. Use architectural evaluations to ensure that architecturally significant requirements are being addressed.

While not considered part of mainstream Agile practice, architecture evaluations have much in common with Agile methods in seeking to bring a project's stakeholders together to increase their visibility into and commitment to the project, as well as to identify overlooked risks. At scale, architectural issues become even more important, and architecture evaluations thus have a critical role on the project. Architecture evaluation can be formal, as with the SEI's Architecture Tradeoff Analysis Method, which can be performed early in the Agile project lifecycle (before the project's development teams are launched) or recurrently. There is also an important role for lighter-weight evaluations in project retrospectives to assess progress against architecturally significant requirements.
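For the lighter-weight evaluations mentioned above, one simple device is to keep the architecturally significant requirements as quality-attribute scenarios and record, at each retrospective, whether the current system provides evidence that they are met. The sketch below follows the common stimulus/response-measure form; the scenario content and status fields are illustrative assumptions, not an SEI template.

```python
"""A minimal sketch of tracking architecturally significant requirements as scenarios."""
from dataclasses import dataclass


@dataclass
class QualityAttributeScenario:
    attribute: str
    stimulus: str
    response_measure: str
    satisfied: bool = False
    evidence: str = ""


if __name__ == "__main__":
    scenarios = [
        QualityAttributeScenario(
            attribute="availability",
            stimulus="one inventory replica fails under peak load",
            response_measure="orders continue to complete within 30 s",
            satisfied=True,
            evidence="end-to-end fault-tolerance test, build 214",
        ),
        QualityAttributeScenario(
            attribute="modifiability",
            stimulus="a new payment provider is added",
            response_measure="change confined to one subsystem, under two weeks",
        ),
    ]
    # A lightweight retrospective evaluation simply walks this list and flags
    # scenarios that still lack evidence as architectural risks.
    for s in scenarios:
        status = "OK" if s.satisfied else "RISK"
        print(f"[{status}] {s.attribute}: {s.response_measure} ({s.evidence or 'no evidence yet'})")
```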

Under what conditions will organizations derive the most benefit from the Agile at Scale best practices?

None of the practices presented in these blog posts will enable Agile at Scale in isolation; they are meant to be orchestrated together. What enabled Agile development practices to succeed in their initial context was improving visibility into the high-priority concerns for the system under development and understanding, early and continuously, the technical challenges that hinder development. Carrying that approach to scale means ensuring that technical barriers and enablers are clearly communicated not only through team practices but also through the working system. When an organization neglects the following factors, the effectiveness of Agile at Scale practices--and of Agile more generally--may be severely limited:

1. A technical infrastructure that enables the teams to collaborate. An infrastructure that supports such capabilities as configuration management, issue and defect tracking, and team measurement and analysis is extremely important for Agile at Scale practices. For example, a large Agile project with distributed teams may lack something as simple as a standard virtual-meeting capability to support daily standup meetings.

2. A management culture that trusts team decisions. Agile practices assume empowerment of development teams. Technical decisions made at the development level should be trusted and propagated to other teams and management that might be affected. More generally, communication barriers must be removed, and management must create a culture that removes silos, particularly those that impede interdependent work.

One key is ensuring that team members have the training and mentoring they need to make sound technical judgments. Teams must be encouraged to define their own work processes, define the measurements they will collect and analyze, and regularly evaluate the quality of their work and gauge the progress made.

Strongly hierarchical decision-making organizations may experience significant challenges as they try to transition to such a culture: development teams may be accustomed to being told what to do and may be uneasy taking the initiative. Likewise, their management may remain uneasy in granting teams that initiative.

3. Visibility. Agile is all about achieving visibility early and continuously and recognizing and addressing risks in a timely way. The challenge with knowledge work is that though work processes may be "proven" across a range of circumstances, they nevertheless represent theories of how the work should proceed (theories that can improve with time); thus, team processes should be measured, monitored, and adjusted as needed.

One key to greater visibility and understanding is to make all team artifacts that contribute to the development of the system broadly accessible to everyone in the project. Many open-source efforts now employ social coding environments--such as GitHub--that provide full transparency into each developer's work. More generally, it is not possible to fully anticipate who needs to know about team progress and issues, now or in the future, and thus the environment should make working code, team and project backlogs, and quality-attribute priorities visible to all.

Looking Ahead

Technology transition is a key part of the SEI's mission and a guiding principle in our role as a federally funded research and development center. These practices are certainly not complete--they are a work in progress. We welcome your comments and suggestions on further refining these recommended practices.

Additional Resources

To learn more about continuous integration:

Paul M. Duvall, Steve Matyas, and Andrew Glover. Continuous Integration: Improving Software Quality and Reducing Risk. Addison-Wesley Professional, 2007.

To learn more about technical debt:

Neil A. Ernst, Stephany Bellomo, Ipek Ozkaya, Robert L. Nord, and Ian Gorton. Measure It? Manage It? Ignore It? Software Practitioners and Technical Debt. Proceedings of the Foundations of Software Engineering Conference, August 2015.

Philippe Kruchten, Robert L. Nord, Ipek Ozkaya. Technical debt: from metaphor to theory and practice. IEEE Software Special Issue on Technical Debt (Nov/Dec 2012).

To learn more about prototyping:

Stephany Bellomo, Robert L. Nord, Ipek Ozkaya. Elaboration on an Integrated Architecture and Requirement Practice: Prototyping with Quality Attribute Focus. Second International Workshop on the Twin Peaks of Requirements and Architecture. International Conference on Software Engineering (ICSE) 2013, May 18-26, 2013 in San Francisco, CA, USA.
