SEI Insights

SATURN Blog

SEI Architecture Technology User Network (SATURN) News and Updates

SATURN 2013 Architectural Evaluation Session (notes)


Notes by Brendan Foote

All Architecture Evaluation Is Not the Same: Lessons Learned from More Than 50 Architecture Evaluations in Industry
Matthias Naab, Jens Knodel, and Thorsten Keuler, Fraunhofer IESE
Matthias has evaluated the architecture of many systems, ranging from tens of thousands of lines of code to tens of millions, primarily in Java, C++, and C#. From this experience he distilled commonalities across the stages of an evaluation. To start, the initiator of the evaluation was either the development company itself or an outside party, such as a current or potential customer. The questions asked also varied: whether the architecture is adequate for a given solution, what the impact of changing the system's paradigm would be, or how far a system diverged from its reference architecture.

The typical evaluation involved 2 people from Fraunhofer, 3-15 from the initiating organization, and 3-15 from the company producing the system. The time each group spent varied far more widely, with Fraunhofer carrying the lion's share of the work. The driving forces behind the effort included the complexity of the system, the complexity of the organization, and the criticality of the situation. One key finding was that requirements were neglected to varying degrees, with neglect increasing from runtime requirements to development-time and then operation-time requirements. Of 43 systems evaluated, the teams categorized adequacy and found 6 to be "red," 11 "yellow," and 16 "green." But such results can be clouded by context, by subjective questions or goals, and by whether the problems found are actually fixable.

Leveraging Simulation to Create Better Software Systems in an Agile World
Jason Ard and Kristine Davidsen, Raytheon Missile Systems
Raytheon leverages simulation throughout the entire missile development process: for system design, software development, and system assessment. Writing software for embedded systems, especially those that are part of a larger real-time system, is difficult: test units can't be acquired à la carte, and any defect found could lie either in the software being written or in the test unit itself. Just as they build the system in layers, Raytheon builds the simulation progressively throughout the project. This is consistent with how simulation appears as one slice of the space in the spiral methodology, passed through again and again. It also makes the project more agile, providing much quicker feedback than physical tests could.
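The core idea, substituting a simulation for scarce hardware behind a common interface, can be sketched as follows. This is a hypothetical illustration, not Raytheon's actual design; the sensor interface, class names, and altitude-based decision are all invented for the example. Because the simulation's behavior is scripted, a failing test implicates the flight software rather than the test unit.

```python
# Hypothetical sketch: the interface and names are illustrative, not Raytheon's.
from abc import ABC, abstractmethod

class GuidanceSensor(ABC):
    """Common interface implemented by both real hardware and its simulation."""
    @abstractmethod
    def read_altitude_m(self) -> float: ...

class SimulatedSensor(GuidanceSensor):
    """Stands in for the hardware test unit; readings follow a scripted profile,
    so any defect a test finds must be in the software under test."""
    def __init__(self, profile):
        self._profile = iter(profile)

    def read_altitude_m(self) -> float:
        return next(self._profile)

def should_deploy_chute(sensor: GuidanceSensor, threshold_m: float = 1000.0) -> bool:
    # An example control decision that can be tested long before hardware exists.
    return sensor.read_altitude_m() < threshold_m

sensor = SimulatedSensor([1500.0, 900.0])
print(should_deploy_chute(sensor))  # reading of 1500 m -> False
print(should_deploy_chute(sensor))  # reading of 900 m  -> True
```

As the real system grows layer by layer, simulated components like `SimulatedSensor` can be swapped out for hardware one at a time, keeping the same interface throughout.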

Test-Driven Non-Functionals? Test-Driven Non-Functionals!

Wilco Koom, Xabia

Kent Beck is one of the founding fathers of test-driven development, which he pioneered in the Java world. These tests can be used not only while coding but also for regression testing. Advantages of using tests include knowing clearly when the job is done, confidence when refactoring, and quality up front. But how can we test non-functional requirements? To test scalability, we need to create scale, which can be done with tools such as JMeter. More importantly, when Wilco built such a setup, with multiple JMeter instances pushing requests through Apache to 3 JBoss nodes backed by a database, he built it before writing any of his system. That way he could measure the throughput of one node, then two, then three, and watch the trend. Wilco says that in general, this kind of practice is most useful when it can be automated.
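The measure-throughput-as-you-scale idea can be sketched in miniature. This is not Wilco's JMeter/Apache/JBoss setup; it is an assumed stand-in in which each "node" is a worker thread and each request is a simulated 1 ms service call, just to show how throughput trends as capacity is added.

```python
# Illustrative sketch (not the actual JMeter/JBoss setup): a tiny load
# generator measures requests-per-second as the number of "nodes" grows.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request() -> None:
    time.sleep(0.001)  # stand-in for one ~1 ms service call

def measure_throughput(nodes: int, requests: int = 200) -> float:
    """Send a fixed batch of requests through `nodes` workers; return req/s."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        futures = [pool.submit(handle_request) for _ in range(requests)]
        for f in futures:
            f.result()  # wait for completion
    return requests / (time.perf_counter() - start)

for nodes in (1, 2, 3):
    print(f"{nodes} node(s): {measure_throughput(nodes):.0f} req/s")
```

Running the scalability harness before the real system exists, as Wilco did, gives a baseline trend line that the eventual implementation can be tested against automatically.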

About the Author

Bill Pollak

