Change is hard. When we help teams adopt DevOps practices or broader Agile methodologies, we often encounter initial resistance. When people learn a new tool or process, productivity and enthusiasm consistently dip; this is known as the "implementation dip." This dip should not be feared, however, but embraced. In his book Leading in a Culture of Change, Michael Fullan defines the implementation dip as "a dip in performance and confidence as one encounters an innovation that requires new skills and new understandings."
A shift to DevOps is a shift to constantly changing and improving tools and processes. Without deliberate steps, we could thrust our team into a perpetual implementation dip. In this blog post, I present three strategies for limiting the depth and duration of the implementation dip in software development organizations adopting DevOps.
Three Strategies for Limiting the Dip
There are three strategies to limit the depth and duration of the dip:
Empathy and understanding. In her article Overcome the 5 Main Reasons People Resist Change, Lisa Quast highlights "fear of the unknown" as a key reason team members resist change. Fear can be a significant driving force behind a negative reaction, and for change leaders, empathy for that fear goes a long way toward helping people overcome the challenge of learning new tools and processes. Leadership's initial reaction to an implementation dip often fails to account for the human toll of a process change. It is not uncommon to hear managers bemoan the frustration and lost productivity of a team of software developers during an initial adoption. The team, however, deserves support and empathy. Change is hard, and the team will take time to come around. We can shorten this time by providing a consistent path of small wins and by training the team in the tools they will be using.
A consistent path of small wins. Do not try to change everything all at once. The reason for this goes beyond overwhelming the team and blocking active development with infrastructure and process changes: change leaders want to provide quick wins so that team members can immediately see and make use of the benefit each change introduces.
When adopting any new tools or processes, break down the overarching problem into isolated tasks or goals that can deliver business value immediately upon their completion. Breaking up deliverables this way aligns the team on the value it can deliver rather than on individual tasks. Breaking deliverables down into component pieces also fits with agile methods, such as delivering a Minimum Viable Product or following the Successive Approximation Model of development.
As an example from my experience working with a particular team: while learning about their environment, I discovered that they stored their code on various shared drives with no version control, continuous integration, or local development environments (this is not uncommon, though it is an anti-pattern for effective and secure development practices). Moreover, their modules contained interdependent code with no clear separation of concerns.
Rather than attempting to introduce continuous integration and version control and refactor the code all at once, we chose to start by abstracting their shared development environment into a repeatable, local development environment. This single task allowed us to demonstrate the benefit of developing locally and generate a win for the team.
In the next stage, we were in much better shape to implement version control. With version control in place, refactoring vast swaths of code became much easier, and the team immediately saw the benefit.
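For a team moving off shared drives, the first version-control step can be as small as importing a single module into a Git repository. The sketch below illustrates the idea; the module name, the shared-drive path, and the commit message are hypothetical, and the copy from the shared drive is shown as a commented placeholder.

```shell
# Hedged sketch: importing one module from a shared drive into Git.
# "billing-module" and the shared-drive path are hypothetical examples.
mkdir -p billing-module && cd billing-module

# cp -r /mnt/shared/billing/* .   # in practice, copy the code off the shared drive
echo 'print("hello from billing")' > app.py   # stand-in for the copied source

git init -q
git add .
# Inline identity flags keep the sketch self-contained on a fresh machine.
git -c user.name="Dev Team" -c user.email="dev@example.com" \
    commit -q -m "Baseline import of billing module from shared drive"

git log --oneline   # shows the single baseline commit
```

From this baseline, each subsequent refactoring lands as a reviewable commit, which is what makes the later refactoring stage visibly safer for the team.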
Once the refactoring was complete, we could move on to continuous integration. Throughout the whole process, we did not disrupt the team's manual deployment methods; replacing them was saved until the end of the transition. By the time we arrived at that point, automating deployment was straightforward, because each step had not only delivered value to the team but also formed part of a rational improvement plan that introduced concepts and tools successively and built confidence among the team members.
Each time the team wins, they are closer to overcoming the implementation dip.
Training and upskilling team members. Training gives the team the skills and knowledge to execute and to find answers to their own problems. It also assuages fear by showing team members that the organization is committed to helping them succeed. In the technology space, tools and methods are constantly changing, and training is the tool in our box for making sure team members can continue to address the challenges of tomorrow. Implementing DevOps workflows from scratch is a daunting process; without training, team members will find further ways to resist and prevent effective change in the organization.
Wrapping Up and Looking Ahead
While change may be hard and result in an implementation dip, its effects can be mitigated through specific strategies. Empathy will help the team feel trusted and heard as they embark on new territory. Providing a path of consistent, small wins will help them build confidence and trust in the new tools and processes. Training will help team members grow their skills and bring value to the organization and end users.
To view the webinar DevOps Panel Discussion featuring Kevin Fall, Hasan Yasar, and Joseph D. Yankel, please click here.
To view the webinar Culture Shock: Unlocking DevOps with Collaboration and Communication with Aaron Volkmann and Todd Waits please click here.
To view the webinar What DevOps is Not! with Hasan Yasar and C. Aaron Cois, please click here.
To listen to the podcast DevOps--Transform Development and Operations for Fast, Secure Deployments featuring Gene Kim and Julia Allen, please click here.
To read all of the blog posts in our DevOps series, please click here.