SEI Blog

The Latest Research in Software Engineering and Cybersecurity

Earlier this month, the U.S. Postal Service reported that hackers broke into its computer systems and stole data records associated with 2.9 million customers and 750,000 employees and retirees. In the JP Morgan Chase breach earlier this year, hackers reportedly stole the personal data of 76 million households as well as information on approximately 8 million small businesses. These breaches and other recent thefts of data from Adobe (152 million records), eBay (145 million records), and The Home Depot (56 million records) highlight a fundamental shift in the economic and operational environment: data is at the heart of today's information economy. In this new economy, it is vital for organizations to evolve the way they manage and secure information. Ninety percent of the data processed, stored, disseminated, and consumed in the world today was created in the past two years. Organizations are increasingly creating, collecting, and analyzing data on everything (as exemplified by the growth of big data analytics). While this trend produces great benefits for businesses, it also introduces new security, safety, and privacy challenges in protecting data and controlling its appropriate use. In this blog post, I will discuss the challenges that organizations face in this new economy, define the concept of information resilience, and explore the body of knowledge associated with the CERT Resilience Management Model (CERT-RMM) as a means of helping organizations protect and sustain vital information.

Melvin Conway, an eminent computer scientist and programmer, formulated Conway's Law, which states: organizations that design systems are constrained to produce designs that are copies of the communication structures of those organizations. Thus, a company with frontend, backend, and database teams might lean heavily toward three-tier architectures. The structure of the application developed will be determined, in large part, by the communication structure of the organization developing it. In short, form is a product of communication.
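The effect is easy to see in code. Below is a minimal, hypothetical Java sketch of the three-tier shape such an organization tends to produce; all class and method names are illustrative, not drawn from any particular system. Each tier maps to one team, and the interfaces between tiers mirror the communication channels between those teams.

```java
// Hypothetical three-tier sketch (names are illustrative).
// Each tier maps to one team; the interfaces mirror team communication lines.

interface UserRepository {              // database team's published contract
    String findNameById(int id);
}

class UserService {                     // backend team's business logic
    private final UserRepository repo;
    UserService(UserRepository repo) { this.repo = repo; }
    String greetingFor(int id) { return "Hello, " + repo.findNameById(id); }
}

class UserView {                        // frontend team's presentation layer
    private final UserService service;
    UserView(UserService service) { this.service = service; }
    void render(int id) { System.out.println(service.greetingFor(id)); }
}

public class ThreeTierDemo {
    public static void main(String[] args) {
        UserRepository repo = id -> "user-" + id;   // in-memory stand-in
        new UserView(new UserService(repo)).render(42);
    }
}
```

Note that each seam in the code sits exactly where a conversation between two teams would have to happen, which is the point of the law.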

Soldiers in battle and emergency workers responding to a disaster often find themselves in environments with limited computing resources, rapidly changing mission requirements, high levels of stress, and limited connectivity, often referred to as "tactical edge environments." These conditions make it hard to use mobile software applications that would be of value to a soldier or emergency responder, including speech and image recognition, natural language processing, and situational awareness, because these computation-intensive tasks take a heavy toll on a mobile device's battery power and computing resources. As part of the Advanced Mobile Systems Initiative at the Carnegie Mellon University Software Engineering Institute (SEI), my research has focused on cyber foraging, which uses discoverable, forward-deployed servers to extend the capabilities of mobile devices by offloading expensive (battery-draining) computations to more powerful resources that can be accessed in the cloud, or by staging data particular to a mission. This blog post is the latest installment in a series on how my research uses tactical cloudlets as a strategy for providing infrastructure to support computation offload and data staging at the tactical edge.
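To make the offload idea concrete, here is a minimal, hypothetical Java sketch of the core cyber-foraging decision: run an expensive task on the device unless a forward-deployed cloudlet is discoverable, in which case ship the computation to it. The interfaces and names here are illustrative assumptions, not the SEI tactical-cloudlet API.

```java
import java.util.Optional;

// Hypothetical sketch of a cyber-foraging offload decision.
// Interfaces and names are illustrative, not the SEI implementation.

interface Cloudlet {
    byte[] execute(String taskId, byte[] input);   // remote, battery-cheap
}

interface CloudletDiscovery {
    Optional<Cloudlet> discover();                 // e.g., via local broadcast
}

class OffloadingTaskRunner {
    private final CloudletDiscovery discovery;
    OffloadingTaskRunner(CloudletDiscovery discovery) { this.discovery = discovery; }

    byte[] run(String taskId, byte[] input) {
        // Offload when a forward-deployed cloudlet is reachable;
        // otherwise fall back to on-device computation.
        return discovery.discover()
                .map(c -> c.execute(taskId, input))
                .orElseGet(() -> runLocally(taskId, input));
    }

    private byte[] runLocally(String taskId, byte[] input) {
        // Placeholder for the expensive on-device computation
        // (e.g., image recognition) that drains the battery.
        return input;
    }
}

public class CyberForagingDemo {
    public static void main(String[] args) {
        CloudletDiscovery none = Optional::empty;    // no cloudlet in range
        OffloadingTaskRunner runner = new OffloadingTaskRunner(none);
        System.out.println(runner.run("recognize", new byte[]{1, 2, 3}).length);
    }
}
```

A real system would also weigh connectivity quality and estimated energy cost before offloading; this sketch shows only the structural fallback.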

A DevOps approach must be specifically tailored to an organization, team, and project to reflect the business needs of the organization and the goals of the project.

Software developers focus on topics such as programming, architecture, and implementation of product features. The operations team, conversely, focuses on hosting, deployment, and system sustainment. All professionals naturally consider their area of expertise first and foremost when discussing a topic. For example, when discussing a new feature, a developer may first think, "How can I implement that in the existing code base?" whereas an operations engineer may initially consider, "How could that affect the load on our servers?"

DevOps is a software development approach that brings development and operations (IT) staff together. The approach unites previously siloed organizations that tend to cooperate only when their interests converge, a dynamic that results in an inefficient and expensive struggle to release a product. DevOps is exactly what the founders of the Agile Manifesto envisioned: a nimble, streamlined process for developing and deploying software while continuously integrating feedback and new requirements. Since 2011, the number of organizations adopting DevOps has increased by 26 percent. According to recent research, organizations that adopt DevOps ship code 30 times faster. Despite these benefits, I still encounter many organizations that hesitate to embrace DevOps. In this blog post, I am introducing a new series that will offer weekly guidelines and practical advice to organizations seeking to adopt the DevOps approach.

In an era of sequestration and austerity, the federal government is seeking software reuse strategies that will allow it to move away from stove-piped development toward open, reusable architectures. The government is also motivated to explore reusable architectures for reasons beyond fiscal constraints: to leverage existing technology, curtail wasted effort, and increase capabilities rather than reinventing them. An open architecture in a software system adopts open standards that support a modular, loosely coupled, and highly cohesive system structure, including the publication of key interfaces within the system and full design disclosure.
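In code terms, "publication of key interfaces" means the contract between modules is a first-class, openly documented artifact, while implementations remain replaceable by any supplier. Here is a minimal, hypothetical Java sketch of such a seam; the interface and class names are illustrative assumptions.

```java
// Hypothetical open-architecture seam: the interface is the published,
// stable contract; implementations can come from any supplier.

/** Published key interface: versioned, documented, and openly disclosed. */
interface TrackCorrelator {
    /** Correlates a sensor report to an existing track ID, or -1 if new. */
    int correlate(double lat, double lon, long timestampMillis);
}

// One supplier's implementation; swappable without touching consumers.
class GridCorrelator implements TrackCorrelator {
    public int correlate(double lat, double lon, long timestampMillis) {
        return -1;  // stub: every report starts a new track
    }
}

public class OpenArchitectureDemo {
    public static void main(String[] args) {
        TrackCorrelator correlator = new GridCorrelator(); // bound at integration time
        System.out.println(correlator.correlate(40.44, -79.95, System.currentTimeMillis()));
    }
}
```

Because consumers depend only on the published interface, a competing implementation can be integrated without modifying, or even seeing, the code on either side of the seam.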

When we verify a software program, we increase our confidence in its trustworthiness: we can be confident that the program will behave as it should and meet the requirements it was designed to fulfill. Verification is an ongoing process because software continuously undergoes change. While software is being created, developers upgrade and patch it, add new features, and fix known bugs. When software is compiled, it evolves from programming-language statements into executable code. Even at runtime, software is transformed by just-in-time compilation. After every such transformation, we need assurance that the change has not altered program behavior in some unintended way and that important correctness and security properties are preserved. The need to re-verify a program after every change presents a major challenge to practitioners, one that is central to our research. This blog post describes solutions that we are exploring to address that challenge and to raise the level of trust that verification provides.
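One lightweight way to frame the re-verification problem is differential checking: after a transformation, confirm that the old and new versions still agree on the behavior you care about. The following minimal Java sketch is an illustrative assumption of that idea, not the specific tooling discussed in the post, and it yields evidence rather than proof.

```java
import java.util.function.IntUnaryOperator;

// Hypothetical differential re-verification sketch: after any change
// (patch, refactoring, recompilation), check that the new version still
// agrees with the old one over a sampled input space.

public class ReverifyDemo {
    static boolean behaviorPreserved(IntUnaryOperator before,
                                     IntUnaryOperator after) {
        for (int x = -1000; x <= 1000; x++) {
            if (before.applyAsInt(x) != after.applyAsInt(x)) {
                return false;  // transformation altered observable behavior
            }
        }
        return true;  // evidence (not proof) that behavior is preserved
    }

    public static void main(String[] args) {
        IntUnaryOperator original  = x -> x * 2;
        IntUnaryOperator optimized = x -> x << 1;   // "optimized" variant
        System.out.println(behaviorPreserved(original, optimized)); // true
    }
}
```

Formal approaches replace the sampled loop with a proof that the property holds for all inputs, which is what lets verification results carry across transformations.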

With the rise of multi-core processors, concurrency has become increasingly common. The broader use of concurrency, however, has been accompanied by new challenges for programmers, who struggle to avoid race conditions and other concurrent memory access hazards when writing multi-threaded programs. The problem is that many programmers have been trained to think sequentially, so when multiple threads execute concurrently, they struggle to visualize those threads executing in parallel. When two threads attempt to access the same unprotected region of memory concurrently (one reading, one writing), logical inconsistencies can arise in the program, which can yield security vulnerabilities that are hard to detect.
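The following minimal Java sketch illustrates the hazard: two threads perform an unprotected read-modify-write on a shared counter, and increments are silently lost; guarding the update with a lock (here, `synchronized`) restores consistency. The example is illustrative, not drawn from the post.

```java
// Minimal illustration of a read-modify-write race on shared memory.
// With `synchronized` removed from increment(), the final count is
// typically less than 2,000,000 because concurrent updates interleave.

public class RaceDemo {
    private long count = 0;

    // Remove `synchronized` to observe lost updates (a data race).
    private synchronized void increment() {
        count++;  // read, add, write: not atomic without the lock
    }

    public static void main(String[] args) throws InterruptedException {
        RaceDemo demo = new RaceDemo();
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) demo.increment();
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(demo.count);  // 2000000 when the lock is held
    }
}
```

The race is hard to detect precisely because the program usually produces a plausible-looking number; only under particular interleavings does the inconsistency surface.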