
Reflection on 20 Years of Software Architecture: A Presentation by Robert Schwanke

It is widely recognized today that software architecture serves as the blueprint for both the system and the project developing it, defining the work assignments that must be performed by design and implementation teams. Architecture is the primary purveyor of system quality attributes that are hard to achieve without a unifying architecture; it's also the conceptual glue that holds every phase of a project together for its many stakeholders. Last month, we presented two postings in a series from a panel at SATURN 2012 titled "Reflections on 20 Years of Software Architecture," which discussed the increased awareness of architecture as a primary means for achieving desired quality attributes, as well as advances in software architecture practice for distributed real-time embedded systems during the past two decades.

This blog posting--the next in the series--provides a lightly edited transcription of a presentation by Robert Schwanke, who reflected on four general problems in software architecture: modularity, systems of systems, maintainable architecture descriptions, and system architecture.

Robert Schwanke, Siemens Corporate Research

We've been using the term "software architecture" for about 20 years, but the foundations of the concept go back another 20 years, to the information-hiding principle introduced by David Parnas in 1972. So, we've actually had 40 years of software architecture. Parnas also talked about hierarchical structure in 1974 and data encapsulation in 1975. Some classic papers from that era are listed at the end of this article.

We still lean on these and other early principles. In fact, if we look around for general principles of software architecture, there are not many new ones. But we do have important, unsolved, general problems in software architecture. Today I want to draw your attention to four problem areas: modularity, systems of systems, maintainable architecture descriptions, and system architecture.

What is so hard about modularity today? According to Parnas, modules were supposed to decouple development tasks. But somewhere along the way, we got the idea that modularity is about syntactic dependency. It's not. It's about dividing systems into separate chunks so that people can work on them independently, most of the time, not needing to talk to each other very often. To create a good decomposition, we need to know which development tasks should be decoupled, because we can't decouple them all. Modular decomposition has to be a tree. At every node in the tree, we decide which tasks are most important to decouple and divide the subsystem into smaller pieces accordingly.
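
To make the tree structure concrete, here is a minimal sketch in Python of a module decomposition tree; the module names and the design decisions they hide are hypothetical, chosen only to illustrate the idea that each node records which development tasks it decouples from its siblings.

```python
# A minimal sketch of a module decomposition tree (hypothetical names).
# Each node records the design decision it hides from sibling modules,
# i.e., which development tasks the split decouples.
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    decouples: str                # the decision hidden from sibling modules
    children: list["Module"] = field(default_factory=list)

system = Module("media-player", "top-level work split", [
    Module("playback-engine", "codec choices", [
        Module("decoder", "per-format decoding details"),
        Module("output", "audio-device APIs"),
    ]),
    Module("ui", "presentation and interaction style"),
    Module("library", "catalog storage format"),
])

def print_tree(m: Module, depth: int = 0) -> None:
    print("  " * depth + f"{m.name}  (hides: {m.decouples})")
    for child in m.children:
        print_tree(child, depth + 1)

print_tree(system)
```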

Modularity is also about anticipating change. The marketplace, stakeholders, and technology can all change, altering the software's requirements and the criteria for success. To get a perfect architecture, you must have perfect insight into the future to know what is going to change. How far into the future should you look when selecting tasks to decouple? If you look too far, you get an over-engineered system; if you don't look far enough, your project may fail before its first delivery.

My team is now working on measuring modularity. Past efforts at measuring it looked at coupling and cohesion, design similarity, and other measures, but we never really validated any of those measures--we could never show what the measurements were good for. These days we are looking at detecting modularity errors by contrasting code structure with change sets.

For example, if certain pairs of files get changed together often--and there's no syntactic explanation for why they're being changed together--we suspect a modularity error. Modularity is supposed to keep things independent, but such pairs of files are not independent. We are combining one line of work by Yuanfang Cai at Drexel University with another by Alan MacCormack's team at MIT and Harvard Business School; both study how to predict future change--where changes will most likely happen in the system--using structure measures and change-history measures together.
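
As a rough illustration of this kind of analysis (a sketch, not the researchers' actual tooling), the Python fragment below mines a Git history for pairs of files that frequently appear in the same commit; the repository path and frequency threshold are placeholders. Pairs it reports that have no syntactic dependency between them would be the modularity-error suspects.

```python
# A minimal co-change analysis sketch: count how often pairs of files
# are modified in the same commit. Frequently co-changing pairs with no
# syntactic dependency are candidate modularity errors.
import subprocess
from collections import Counter
from itertools import combinations

def co_change_pairs(repo_path: str, min_count: int = 5):
    # 'git log --name-only' lists the files touched by each commit;
    # the %x00 in --pretty emits a NUL byte as a commit separator.
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--name-only", "--pretty=format:%x00"],
        capture_output=True, text=True, check=True,
    ).stdout
    pair_counts: Counter = Counter()
    for commit in log.split("\x00"):
        files = sorted({line for line in commit.splitlines() if line.strip()})
        if len(files) > 50:       # skip sweeping commits (renames, reformatting)
            continue
        for pair in combinations(files, 2):
            pair_counts[pair] += 1
    return [(pair, n) for pair, n in pair_counts.most_common() if n >= min_count]

for (a, b), n in co_change_pairs(".", min_count=10)[:20]:
    print(f"{n:4d}  {a}  <->  {b}")
```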

Preliminary indications are that file size is still the best single predictor of future bugs. This seems intuitive--bigger files mean more bugs--except that the bug density in large files turns out to be lower than in small files. Not the number of bugs--that is still higher--but the bug density is lower. Another interesting predictor, coming from social-network research, is "betweenness centrality." Betweenness centrality measures how much a node sits in the middle of a network--specifically, the frequency with which it appears on the shortest path between pairs of other nodes--and it's a pretty strong predictor of future change. The reason is that if changes are likely to propagate through a node, the node itself is likely to change.
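
To illustrate, the sketch below computes betweenness centrality over a toy file-dependency graph using the networkx library; the file names and edges are hypothetical. In a real study the graph would come from extracted dependencies, and high-centrality files would be flagged as likely to change.

```python
# A minimal betweenness-centrality sketch over a toy dependency graph.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("ui.c", "core.c"), ("net.c", "core.c"), ("db.c", "core.c"),
    ("core.c", "util.c"), ("ui.c", "util.c"), ("net.c", "db.c"),
])

# Fraction of shortest paths between node pairs that pass through each node.
centrality = nx.betweenness_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node:8s} {score:.3f}")
```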

Another hard problem is technology stacks. Specialization forces us to rely heavily on third-party components and technologies, and not just in ultra-large-scale systems. I worked on a small system recently in which the first draft implementation, installed on my desktop, used 15 third-party technologies. By the time we delivered the system, it contained 300 open-source components, protected by 30 distinct open-source licenses, which gave us many headaches even though we hadn't modified any of the source files.

When that happens, you lose control over the system's aggregated quality attributes. When I was working on a VOIP (Voice over Internet Protocol) telephone-switch product a few years ago, there were only four VOIP-switch vendors selling complete hardware and software solutions. They all relied on third-party server hardware, specified by the VOIP vendor but sold by the server vendor directly to the VOIP customer. Servers today have a market window of 18 months, after which the vendor changes the design, typically to take advantage of new and better components. Because of that short market window, server vendors have cut back what they spend on reliability analysis of the hardware.

The telephone business was once famous for its five-nines (99.999 percent) reliability. Not anymore, because the switch vendors can afford neither to build their own servers nor to keep re-analyzing the reliability of third-party servers. One server vendor was poised to take over the entire telephony server market just by offering a server with a five-year market window and a reliability specification.
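
As a quick back-of-the-envelope check of what five-nines reliability demands, the snippet below computes the downtime each level of "nines" allows per year; five nines leaves barely five minutes.

```python
# Allowed downtime per year at each availability level ("number of nines").
minutes_per_year = 365.25 * 24 * 60
for nines in range(1, 6):
    availability = 1 - 10 ** -nines
    downtime = minutes_per_year * (1 - availability)
    print(f"{availability:.5f}  ->  {downtime:10.2f} minutes of downtime/year")
```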

The next problem is maintainable architecture descriptions. We've been trying for a long time to put good, useful descriptions into the hands of architects and developers and to get people to maintain them. Instead, the current practice is to figure out the architecture once, document it, put it on a shelf, and never change it again. Or, actually, we do change the architecture, but the description doesn't change, and then it's useless. The biggest obstacle to maintainable architecture descriptions is that the subsystem tree, often reflected in the directory structure of the project, is almost enough by itself: much of the value of the architecture description resides in the module decomposition tree. Making the rest of the description accurate, enforceable, maintainable, and usable is really hard, and we have not yet demonstrated enough of a return on the expense.
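
One lightweight, enforceable form of description beyond the subsystem tree is a table of allowed dependencies between top-level modules. The sketch below shows one way such a rule might be checked automatically; the module names, the rules, and the import list are hypothetical. Keeping even a small table like this accurate as the architecture evolves is exactly the maintenance burden in question.

```python
# A minimal architecture-conformance sketch: allowed dependencies between
# top-level modules (hypothetical names and rules).
ALLOWED = {
    "ui":   {"core", "util"},
    "core": {"util"},
    "util": set(),
}

def check(imports: list[tuple[str, str]]) -> list[str]:
    # imports: (importing module, imported module) pairs, e.g. from source scans
    return [
        f"violation: {src} -> {dst}"
        for src, dst in imports
        if src != dst and dst not in ALLOWED.get(src, set())
    ]

print(check([("ui", "core"), ("util", "core"), ("core", "util")]))
# -> ['violation: util -> core']
```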

Finally, there is the challenge of system architecture. We realized recently that with the way the systems engineering field now defines itself [INCOSE standard], system architecture and software architecture are almost the same thing. That is, most large systems are now dominated by software, making the software architecture and system architecture almost the same. The domain-specific physical technologies define many of the components' quality attributes, but the software provides the integration and control that synthesizes the system qualities out of the components. So we need to worry, as software architects, that we're about to become system architects.

In our emerging role, we need to add the physical, mechanical, and electrical components to our system architectures, but more importantly, on the people side, we must develop cross-domain communication, trust, and engagement. This requires a real engineering education that most software people don't have. Instead, we have engineers with good, practical engineering training but an inadequate appreciation of software, and software guys who understand abstraction, dependencies, modularity, and so forth, but think they can build anything, whether it's feasible or not. We software architects probably know a lot more about system architecture, in general, but we can't speak the language that systems engineers have been talking for decades.

The next post in this series will include presentations by Jeromy Carriere and Ian Gorton.

Additional Resources

Schwanke's Presentation
https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=21352

Classic papers cited in Schwanke's Presentation

The module guide: "The Modular Structure of Complex Systems" by Paul Clements, David Parnas, and David Weiss
https://dl.acm.org/citation.cfm?id=801999
