The term big data is a subject of much hype in both government and business today. Big data is variously the cause of all existing system problems and, simultaneously, the savior that will lead us to the innovative solutions and business insights of tomorrow. All this hype fuels predictions such as the one from IDC that the market for big data will reach $16.1 billion in 2014, growing six times faster than the overall information technology market, despite the fact that the "benefits of big data are not always clear today," according to IDC. From a software-engineering perspective, however, the challenges of big data are very clear, since they are driven by ever-increasing system scale and complexity. This blog post, a continuation of my last post on the four principles of building big data systems, describes how we must address one of these challenges, namely, that you can't manage what you don't monitor.
Organizations are continually fending off cyberattacks in one form or another. The 2014 Verizon Data Breach Investigations Report, which included contributions from SEI researchers, tagged 2013 as "the year of the retailer breach." According to the report, 2013 also witnessed "a transition from geopolitical attacks to large-scale attacks on payment card systems." To illustrate the trend, the report outlines a 12-month chronology of attacks, including a January "watering hole" attack on the Council on Foreign Relations website followed in February by targeted cyber-espionage attacks against The New York Times and The Wall Street Journal. The well-documented Target breach brought 2013 to a close with the theft of more than 40 million debit and credit card numbers. This blog post highlights a recent research effort to create a taxonomy that provides organizations with a common language and set of terminology they can use to discuss, document, and mitigate operational cybersecurity risks.
The role of software within systems has fundamentally changed over the past 50 years. Software's role has changed both on mission-critical DoD systems, such as fighter aircraft and surveillance equipment, and on commercial products, such as telephones and cars. Software has become not only the brain of most systems, but the backbone of their functionality. Acquisition processes must acknowledge this new reality and adapt. This blog posting, the second in a series about the relationship of software engineering (SwE) and systems engineering (SysE), shows how software technologies have come to dominate what formerly were hardware-based systems. This posting describes a case study: the story of software on satellites, whose lessons can be applied to many other kinds of software-reliant systems.
Many warfighters and first responders operate at what we call "the tactical edge," where users are constrained by limited communication connectivity, storage availability, processing power, and battery life. In these environments, onboard sensors are used to capture data on behalf of mobile applications to perform tasks such as face recognition, speech recognition, natural language translation, and situational awareness. These applications then rely on network interfaces to send the data to nearby servers or the cloud if local processing resources are inadequate. While software developers have traditionally used native mobile technologies to develop these applications, that approach has some drawbacks, such as limited portability. In contrast, HTML5 has been touted for its portability across mobile device platforms, as well as the ability to access functionality without having to download and install applications. This blog post describes research aimed at evaluating the feasibility of using HTML5 to develop applications that can meet tactical edge requirements.
In earlier posts on big data, I have written about how long-held design approaches for software systems simply don't work as we build larger, scalable big data systems. Examples of design factors that must be addressed for success at scale include handling the ever-present failures that occur at scale, assuring the necessary levels of availability and responsiveness, and devising optimizations that drive down costs. Of course, the required application functionality and engineering constraints, such as schedule and budgets, directly affect how these factors manifest themselves in any specific big data system. In this post, the latest in my ongoing series on big data, I step back from specifics and describe four general principles that hold for any scalable, big data system. These principles can help architects continually validate major design decisions across development iterations, and hence provide a guide through the complex collection of design trade-offs all big data systems require.
In the first half of this year, the SEI blog has experienced unprecedented growth, with visitors in record numbers learning more about our work in big data, secure coding for Android, malware analysis, Heartbleed, and V Models for Testing. In the first six months of 2014 (through June 20), the SEI blog logged 60,240 visits, nearly matching the entire 2013 total of 66,757 visits. As we reach the mid-year point, this blog posting takes a look back at our most popular areas of work (at least according to you, our readers) and highlights our most popular blog posts for the first half of 2014, as well as links to additional related resources that readers might find of interest.
Federal agencies depend on IT to support their missions and spent at least $76 billion on IT in fiscal year 2011, according to a report from the Government Accountability Office (GAO). The catalyst for the study was congressional concern over prior IT expenditures that produced disappointing results, including multimillion dollar cost overruns and schedule delays measured in years, with questionable mission-related achievements. In 2010, the Office of Management and Budget (OMB) issued guidance advocating that federal agencies employ "shorter delivery time frames, an approach consistent with Agile." This ongoing series on the Readiness & Fit Analysis (RFA) approach focuses on helping federal agencies and other organizations understand the risks involved when contemplating or embarking on the adoption of new practices, such as Agile methods. This blog posting, the fifth in the series, explores the Practices category, which helps organizations understand which Agile practices are already in use so they can formulate a more effective adoption strategy.
This posting is the third in a series that focuses on multicore processing and virtualization, which are becoming ubiquitous in software development. The first blog entry in this series introduced the basic concepts of multicore processing and virtualization, highlighted their...