Category: High-Performance Computing

In 2015, the SEI Blog launched a redesigned platform that makes browsing easier and our content areas more accessible and easier to navigate. The SEI Blog audience also continued to grow, with an ever-increasing number of visitors learning more about our research in technical debt, shift-left testing, graph analytics, DevOps, secure coding, and malware analysis. From January 1 through December 15, 2015, the blog logged 159,604 visits and sessions (we switched analytics platforms mid-year), a 26 percent increase in traffic over the previous year. This blog post highlights the top 10 posts published in 2015. As we did with our mid-year review, we will include links to additional related resources that readers might find of interest. We will also present the posts in descending order, beginning with the 10th most popular post of 2015 and counting down to number one.

10. Ten Recommended Practices for Achieving Agile at Scale
9. Open System Architectures: When and Where to Be Closed
8. A Taxonomy of Testing
7. Is Java More Secure Than C?
6. Managing Software Complexity in Models
5. The Pharos Framework: Binary Static Analysis of Object Oriented Code
4. Developing a Software Library for Graph Analytics
3. Four Types of Shift-Left Testing
2. DevOps Technologies: Fabric or Ansible
1. A Field Study of Technical Debt

The SEI Blog continues to attract an ever-increasing number of readers interested in learning more about our work in agile metrics, high-performance computing, malware analysis, testing, and other topics. As we reach the mid-year point, this blog post highlights our 10 most popular posts and links to additional related resources you might find of interest. (Many of our posts cover related research areas, so we have grouped them together for ease of reference.)

Before we take a deeper dive into each one, let's look at the top 10 posts, ordered by number of visits, with #1 receiving the most:

The Department of Defense (DoD) and other government agencies increasingly rely on software and networked software systems. As one of more than 40 federally funded research and development centers sponsored by the United States government, Carnegie Mellon University's Software Engineering Institute (SEI) is working to help the government acquire, design, produce, and evolve software-reliant systems in an affordable and secure manner. The quality, safety, reliability, and security of software and the cyberspace it creates are major concerns for both embedded systems and the enterprise systems used for information processing in health care, homeland security, intelligence, logistics, and other domains. Cybersecurity risks, a primary focus area of the SEI's CERT Division, regularly appear in the news media and have prompted policy action at the highest levels of the US government (see Report to the President: Immediate Opportunities for Strengthening the Nation's Cybersecurity).

This blog post was co-authored by Eric Werner.

Graph algorithms are widely used in Department of Defense (DoD) software applications, including intelligence analysis, autonomous systems, cyber intelligence and security, and logistics optimization. In late 2013, several luminaries from the graph analytics community released a position paper calling for an open effort, now referred to as GraphBLAS, to define a standard for graph algorithms in terms of linear algebraic operations. BLAS stands for Basic Linear Algebra Subprograms, a common library specification used in scientific computation. The authors of the position paper propose extending the National Institute of Standards and Technology's Sparse Basic Linear Algebra Subprograms (spBLAS) library to perform graph computations. The position paper served as the latest catalyst for the SEI Emerging Technology Center's ongoing research in graph algorithms and heterogeneous high-performance computing (HHPC). This blog post, the second in our series, describes our efforts to create a software library of graph algorithms for heterogeneous architectures that will be released as open source.
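To make the linear-algebraic formulation concrete, here is a minimal sketch of a level-synchronous breadth-first search expressed as repeated sparse matrix-vector products, the core idea behind GraphBLAS. It uses Python and SciPy's sparse matrices as stand-ins for spBLAS-style kernels; the toy graph and the bfs_levels routine are illustrative assumptions on our part, not code from the library described in this post.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical toy graph: a sparse adjacency matrix A with directed edges row -> column.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
rows, cols = zip(*edges)
n = 5
A = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))

def bfs_levels(A, source):
    """Level-synchronous BFS written as repeated sparse matrix-vector products."""
    n = A.shape[0]
    levels = np.full(n, -1)            # -1 marks unvisited vertices
    frontier = np.zeros(n)
    frontier[source] = 1.0
    levels[source] = 0
    level = 0
    while frontier.any():
        level += 1
        # One linear-algebra step: candidate next frontier = A^T * frontier.
        reached = A.T.dot(frontier)
        # Mask out vertices that have already been visited.
        frontier = np.where((reached > 0) & (levels < 0), 1.0, 0.0)
        levels[frontier > 0] = level
    return levels

print(bfs_levels(A, source=0))         # prints [0 1 1 2 3] for the toy graph above
```

Each pass multiplies the transposed adjacency matrix by the current frontier vector and masks out previously visited vertices; this matrix-vector pattern is exactly what a GraphBLAS-style library aims to accelerate on parallel hardware.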

The power and speed of computers have increased exponentially in recent years. Increasingly, however, modern computer architectures are moving away from single-core and multi-core (homogeneous) central processing units (CPUs) toward many-core, heterogeneous designs. This blog post describes research I've undertaken with my colleagues at the Carnegie Mellon University Software Engineering Institute (SEI), including Jonathan Chu and Scott McMillan of the Emerging Technology Center (ETC) and Alex Nicoll, a researcher with the SEI's CERT Division, to create a software library that can exploit the heterogeneous parallel computers of the future and allow developers to build systems that are more efficient in terms of both computation and power consumption.
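As a rough sketch of what such a library could offer developers, the example below shows a simple backend-selection pattern: the caller asks for a matrix-vector product, and the routine dispatches to a GPU backend when one is available, falling back to the CPU otherwise. This is a hypothetical illustration assuming CuPy and NumPy as stand-in backends; it is not the interface of the library described in this series.

```python
import numpy as np

try:
    import cupy as cp  # optional GPU backend; requires CUDA-capable hardware
    _HAS_GPU = cp.cuda.runtime.getDeviceCount() > 0
except Exception:
    cp = None
    _HAS_GPU = False

def matvec(A, x):
    """Matrix-vector product dispatched to the best available device (illustrative only)."""
    if _HAS_GPU:
        # Move data to the accelerator, compute there, and copy the result back.
        y = cp.asarray(A).dot(cp.asarray(x))
        return cp.asnumpy(y)
    # Homogeneous CPU path.
    return np.asarray(A).dot(np.asarray(x))

if __name__ == "__main__":
    A = np.arange(9, dtype=float).reshape(3, 3)
    x = np.ones(3)
    print("backend:", "GPU" if _HAS_GPU else "CPU")
    print(matvec(A, x))
```

Hiding the device choice behind a single call is one way a library can let application code stay the same while the computation, and its power cost, shifts to whatever hardware is present.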