Archive: 2011-01

After 47 weeks and 50 blog postings, the sands of time are quickly running out in 2011. Last week's blog posting summarized key 2011 SEI R&D accomplishments in our four major areas of software engineering and cyber security: innovating software for competitive advantage, securing the cyber infrastructure, accelerating assured software delivery and sustainment for the mission, and advancing disciplined methods for engineering software. This week's blog posting presents a preview of some of the blog postings you'll read in these areas during 2012.

A key mission of the SEI is to advance the practice of software engineering and cyber security through research and technology transition to ensure the development and operation of software-reliant Department of Defense (DoD) systems with predictable and improved quality, schedule, and cost. To achieve this mission, the SEI conducts research and development (R&D) activities involving the DoD, federal agencies, industry, and academia. One of my initial blog postings summarized the new and upcoming R&D activities we had planned for 2011. Now that the year is nearly over, this blog posting presents some of the many R&D accomplishments we completed in 2011.

As with any new initiative or tool requiring significant investment, the business value of statistically based predictive models must be demonstrated before they will see widespread adoption. The SEI Software Engineering Measurement and Analysis (SEMA) initiative has been leading research to better understand how existing analytical and statistical methods can be used successfully and how to determine the value of these methods once they have been applied to the engineering of large-scale software-reliant systems.

The DoD relies heavily on mission- and safety-critical real-time embedded software systems (RTESs), which play a crucial role in controlling systems ranging from airplanes and cars to infusion pumps and microwaves. Since RTESs are often safety-critical, they must undergo an extensive (and often expensive) certification process before deployment. This costly certification process must be repeated after any significant change to the RTES, such as migrating a single-core RTES to a multi-core platform, refactoring significant portions of code, or applying performance optimizations.

As part of an ongoing effort to keep you informed about our latest work, I'd like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in Agile methods, insider threat, the Smart Grid Maturity Model, acquisition, and CMMI. This post includes a listing of each report, author/s, and links where the published reports can be accessed on the SEI website.

Whether soldiers are on the battlefield or providing humanitarian relief, they need to capture and process a wide range of text, image, and map-based information. To support soldiers in this effort, the Department of Defense (DoD) is beginning to equip soldiers with smartphones that allow them to manage the vast array and amount of information they encounter while in the field. Whether the information gets correctly conveyed up the chain of command depends, in part, on the soldier's ability to capture accurate data while in the field. This blog posting, a follow-up to our initial post, describes our work on creating a software application for smartphones that allows soldier end-users to program their smartphones to provide an interface tailored to the information they need for a specific mission.

Cloudlets, which are lightweight servers running one or more virtual machines (VMs), allow soldiers in the field to offload resource-intensive and battery-draining computations from their handheld devices to nearby cloudlets. This architecture decreases latency by using a single-hop network and potentially lowers battery consumption by using WiFi instead of broadband wireless. This posting extends our original post by describing how we are using cloudlets to help soldiers perform various mission capabilities more effectively, including facial, speech, and image recognition, as well as decision making and mission planning.
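
To make the tradeoff concrete, the sketch below estimates when offloading a computation to a nearby cloudlet over a single WiFi hop beats running it locally on the handheld. The cost model, parameter names, and example numbers are illustrative assumptions, not part of the SEI cloudlet prototype.

```python
# Illustrative offload decision for a cloudlet architecture (not the SEI implementation).
# Offloading pays off when the time to ship the input over a single WiFi hop plus the
# time to compute on the cloudlet is less than computing locally on the handheld.

def should_offload(input_bytes, wifi_bandwidth_bps, compute_cycles,
                   handheld_hz, cloudlet_hz, rtt_s=0.005):
    """Return True if offloading to a nearby cloudlet is estimated to be faster."""
    local_time = compute_cycles / handheld_hz
    transfer_time = rtt_s + (input_bytes * 8) / wifi_bandwidth_bps
    remote_time = transfer_time + compute_cycles / cloudlet_hz
    return remote_time < local_time

# Example (hypothetical numbers): a 2 MB image, 50 Mbit/s WiFi, a 10-billion-cycle
# recognition task, a 1 GHz handheld CPU, and a 3 GHz cloudlet core.
if __name__ == "__main__":
    offload = should_offload(input_bytes=2_000_000,
                             wifi_bandwidth_bps=50_000_000,
                             compute_cycles=10_000_000_000,
                             handheld_hz=1_000_000_000,
                             cloudlet_hz=3_000_000_000)
    print("Offload to cloudlet:", offload)
```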

As noted in the National Research Council's report Critical Code: Software Producibility for Defense, mission-critical Department of Defense (DoD) systems increasingly rely on software for their key capabilities. Ironically, it is increasingly hard to motivate investment in long-term software research for the DoD. This lack of investment stems, in part, from the difficulty that acquisition programs have in making a compelling case for the return on these investments in software research. This post explores how the SEI is using the Systems and Software Producibility Collaboration and Experimentation Environment (SPRUCE) to help address this problem.

Cyber-physical systems (CPS) are characterized by close interactions between software components and physical processes. These interactions can have life-threatening consequences when they include safety-critical functions that are not performed according to their time-sensitive requirements. For example, an airbag must fully inflate within 20 milliseconds (its deadline) of an accident to prevent the driver from hitting the steering wheel with potentially fatal consequences. Unfortunately, these safety-critical requirements compete with demands to reduce cost, power consumption, and device size, and that tension creates problems such as automotive recalls, delays in new aircraft deliveries, and plane accidents.
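
As a simplified illustration of how such time-sensitive requirements are analyzed, the sketch below applies the classic Liu and Layland rate-monotonic utilization bound to a hypothetical task set that includes a 20-millisecond airbag task. The execution times and periods are invented for illustration and do not come from any real controller.

```python
# Illustrative rate-monotonic schedulability check for a small set of periodic
# hard-real-time tasks (hypothetical execution times and periods).
# Liu & Layland bound: n tasks are schedulable under rate-monotonic scheduling
# if total utilization U <= n * (2**(1/n) - 1).

def rm_schedulable(tasks):
    """tasks: list of (execution_time_ms, period_ms) pairs."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization, bound, utilization <= bound

if __name__ == "__main__":
    # (execution time, period/deadline) in milliseconds -- made-up values
    tasks = [(4, 20),    # crash detection and airbag firing, 20 ms deadline
             (10, 100),  # stability control loop
             (15, 250)]  # diagnostics
    u, bound, ok = rm_schedulable(tasks)
    print(f"U = {u:.3f}, bound = {bound:.3f}, schedulable: {ok}")
```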

Malware, which is short for "malicious software," is a growing problem for government and commercial organizations since it disrupts or denies important operations, gathers private information without consent, gains unauthorized access to system resources, and exhibits other inappropriate behaviors. A previous blog post described the use of "fuzzy hashing" to determine whether two files suspected of being malware are similar, which helps analysts potentially save time by identifying opportunities to leverage previous analysis of malware when confronted with a new attack. This posting continues our coverage of fuzzy hashing by discussing types of malware against which similarity measures of any kind (including fuzzy hashing) may be applied.

The SEI has devoted extensive time and effort to defining meaningful metrics and measures for software quality, software security, information security, and continuity of operations. The ability of organizations to measure and track the impact of changes--as well as changes in trends over time--is an important tool for effectively managing operational resilience, which is the measure of an organization's ability to perform its mission in the presence of operational stress and disruption. For any organization--whether Department of Defense (DoD), federal civilian agencies, or industry--the ability to protect and sustain essential assets and services is critical and can help ensure a return to normalcy when the disruption or stress is eliminated. This blog posting describes our research to help organizational leaders manage critical services in the presence of disruption by presenting objectives and strategic measures for operational resilience, as well as tools to help them select and define those measures.

This post is the second installment in a two-part series describing our recent engagement with Bursatec to create a reliable and fast new trading system for Grupo Bolsa Mexicana de Valores (BMV, the Mexican Stock Exchange). This project combined elements of the SEI's Architecture Centric Engineering (ACE) method, which requires effective use of software architecture to guide system development, with its Team Software Process (TSP), which is a team-centric approach to developing software that enables organizations to better plan and measure their work and improve software development productivity to gain greater confidence in quality and cost estimates. The first post examined how ACE was applied within the context of TSP. This posting focuses on the development of the system architecture for Bursatec within the TSP framework.

Bursatec, the technology arm of Grupo Bolsa Mexicana de Valores (BMV, the Mexican Stock Exchange), recently embarked on a project to replace three existing trading engines with one system developed in house. Given the competitiveness of global financial markets and recent interest in Latin American economies, Bursatec needed a reliable and fast new system that could work ceaselessly throughout the day and handle sharp fluctuations in trading volume. To meet these demands, the SEI suggested combining elements of its Architecture Centric Engineering (ACE) method, which requires effective use of software architecture to guide system development, with its Team Software Process (TSP), which teaches software developers the skills they need to make and track plans and produce high-quality products. This posting--the first in a two-part series--describes the challenges Bursatec faced and outlines how working with the SEI and combining ACE with TSP helped them address those challenges.

Background: In our research and acquisition work on commercial and Department of Defense (DoD) programs, we see many systems with critical safety and security ramifications. With such systems, safety and security engineering are used to manage the risks of accidents and attacks. Safety and security requirements should therefore be engineered to ensure that residual safety and security risks will be acceptable to system stakeholders. The first post in this series explored problems with quality requirements in general and safety and security requirements in particular. The second post took a deeper dive into key obstacles that acquisition and development organizations encounter concerning safety- and security-related requirements. This post introduces a collaborative method for engineering these requirements that overcomes the obstacles identified in earlier posts.

Malware, which is short for "malicious software," consists of programming aimed at disrupting or denying operation, gathering private information without consent, gaining unauthorized access to system resources, and other inappropriate behavior. Malware infestation is of increasing concern to government and commercial organizations. For example, according to the Global Threat Report from Cisco Security Intelligence Operations, there were 287,298 "unique malware encounters" in June 2011, double the number of incidents that occurred in March. To help mitigate the threat of malware, researchers at the SEI are investigating the origin of executable software binaries that often take the form of malware. This posting augments a previous postingdescribing our research on using classification (a form of machine learning) to detect "provenance similarities" in binaries, which means that they have been compiled from similar source code (e.g., differing by only minor revisions) and with similar compilers (e.g., different versions of Microsoft Visual C++ or different levels of optimization).

A reliable, secure energy supply is vital to our economy, our security, and our well-being. A key component of achieving a reliable and secure energy supply is the "smart grid" initiative. This initiative is a modernization effort that employs distributed sensing and control technologies, advanced communication systems, and digital automation to enable the electric power grid to respond intelligently to fluctuations in energy supply and demand, the actions of consumers, and market forces with an overall objective to improve grid efficiency and reliability. A smart grid will also allow homeowners to track energy consumption and adjust their habits accordingly. This posting describes several initiatives that the SEI has taken to support power utility companies in their modernization efforts to create a smart grid.

Happy Labor Day from all of us here at the SEI. I'd like to take advantage of this special occasion to keep you apprised of some recent technical reports and notes from the SEI. It's part of an ongoing effort to keep you informed about our latest work. These reports highlight the latest work of SEI technologists in architecting service-oriented systems, operational resilience, standards-based automated remediation, and acquisition. This post includes a listing of each report, author/s, and links where the published reports can be accessed on the SEI website.

Background: Over the past decade, the U.S. Air Force has asked the SEI's Acquisition Support Program (ASP) to conduct a number of Independent Technical Assessments (ITAs) on acquisition programs related to the development of IT systems; communications, command and control; avionics; and electronic warfare systems. This blog posting is the latest installment in a series that explores common themes across acquisition programs that we identified as a result of our ITA work. Previous themes explored in this series include Misaligned Incentives, The Need to Sell the Program, and The Evolution of "Science Projects." This post explores the fourth theme: common infrastructure and joint programs, which describes a key issue that arises when multiple organizations attempt to cooperate in the development of a single system, infrastructure, or capability that will be used and shared by all parties.

Testing plays a critical role in the development of software-reliant systems. Even with the most diligent efforts of requirements engineers, designers, and programmers, faults inevitably occur. These faults are most commonly discovered and removed by testing the system and comparing what it does to what it is supposed to do. This blog posting summarizes a method that improves testing outcomes (including efficacy and cost) in a software-reliant system by using an architectural design approach, which describes a coherent set of architectural decisions taken by architects to help meet the behavioral and quality attribute requirements of systems being developed.

Software sustainment is growing in importance as the inventory of DoD systems continues to age and greater emphasis is placed on efficiency and productivity in defense spending. In part 1 of this series, I summarized key software sustainment challenges facing the DoD. In this blog posting, I describe some of the R&D activities conducted by the SEI to address these challenges.

The 2011 CyberSecurity Watch survey revealed that 27 percent of cybersecurity attacks against organizations were caused by disgruntled, greedy, or subversive insiders--employees or contractors with access to that organization's network, systems, or data. Of the 607 survey respondents, 43 percent view insider attacks as more costly than external attacks, citing not only financial loss but also damage to reputation, critical system disruption, and loss of confidential or proprietary information. For the Department of Defense (DoD) and industry, combating insider threat attacks is hard due to insiders' authorized physical and logical access to organizational systems and their intimate knowledge of the organizations themselves.

Department of Defense (DoD) programs have traditionally focused on the software acquisition phase (initial procurement, development, production, and deployment) and largely discounted the software sustainment phase (operations and support) until late in the lifecycle. The costs of software sustainment are becoming too high to discount since they account for 60 to 90 percent of the total software lifecycle effort.

Occasionally this blog will highlight different posts from the SEI blogosphere. Today's post is from the SATURN Network blog by Nanette Brown, a senior member of the technical staff in the SEI's Research, Technology, and System Solutions program. This post, the third in a series on lean principles and architecture, continues the discussion on the eight types of waste identified in Lean manufacturing and how these types of waste manifest themselves in software development. The focus of this post is on mapping the waste of motion and the waste of transportation from manufacturing to the waste of information transformation in software development.

Organizations run on data. They use it to manage programs, select products to fund or develop, make decisions, and guide improvement. Data comes in many forms, both structured (tables of numbers and text) and unstructured (emails, images, sound, etc.). Data are generally considered high quality if they are fit for their intended uses in operations, decision making, and planning. This definition implies that data quality reflects both the subjective perceptions of the individuals involved with the data and objective measurements based on the data set in question. This post describes the work we're doing with the Office of Acquisition, Technology and Logistics (AT&L)--a division of the Department of Defense (DoD) that oversees acquisition programs and is charged with, among other things, ensuring that the data reported to Congress is reliable.

Background: In our research and acquisition work on commercial and Department of Defense (DoD) programs, ranging from relatively simple two-tier data-processing applications to large-scale multi-tier weapons systems, one of the primary problems that we see repeatedly is that acquisition and development organizations encounter the following three obstacles concerning safety- and security-related requirements:

Background: Over the past decade, the U.S. Air Force has asked the SEI's Acquisition Support Program (ASP) to conduct a number of Independent Technical Assessments (ITAs) on acquisition programs related to the development of IT systems, communications, command and control, avionics, and electronic warfare systems. This blog post is the third in a series that enumerates common themes across acquisition programs that we identified as a result of our ITA work. Other themes explored in this series include misaligned incentives, the need to sell the program, and common infrastructure and joint programs. This post explores the third theme in this series, the evolution of "science projects," which describes how prototype projects that unexpectedly grow in size and scope during development often have difficulty transitioning into a formal acquisition program.

Happy Independence Day from all of us here at the SEI. I'd like to take advantage of this special occasion to keep you apprised of a new technical report from the SEI. It's part of an ongoing effort to keep you informed about the latest work of SEI technologists. This report highlights the latest work of SEI technologists in the field of insider threat. This post includes a listing of the report, its authors, and a link where the published report can be accessed on the SEI website.

In our research and acquisition work on commercial and Department of Defense (DoD) programs ranging from relatively simple two-tier data-processing applications to large-scale multi-tier weapons systems, one of the primary problems that we see repeatedly is that requirements engineers tend to focus almost exclusively on functional requirements and largely ignore the so-called nonfunctional requirements, such as data, interface, and quality requirements, as well as technical constraints. Unfortunately, this myopia means that requirements engineers overlook critically important, architecturally significant quality requirements that specify minimum acceptable amounts of qualities, such as availability, interoperability, performance, portability, reliability, safety, security, and usability. This blog post is the first in a series that explores the engineering of safety- and security-related requirements.

The Department of Defense (DoD) is increasingly interested in having soldiers carry handheld mobile computing devices to support their mission needs. Soldiers can use handheld devices to help with various tasks, such as speech and image recognition, natural language processing, decision making, and mission planning. Three challenges, however, present obstacles to achieving these capabilities. The first challenge is that mobile devices offer less computational power than a conventional desktop or server computer. A second challenge is that computation-intensive tasks, such as image recognition or even global positioning system (GPS) use, take a heavy toll on battery power. The third challenge is dealing with unreliable networks and limited bandwidth. This post explores our research to overcome these challenges by using cloudlets, which are localized, lightweight servers running one or more virtual machines (VMs) on which soldiers can offload expensive computations from their handheld mobile devices, thereby providing greater processing capacity and helping conserve battery power.

The Government Accountability Office (GAO) has frequently cited poor cost estimation as one of the reasons for cost overrun problems in acquisition programs. Software is often a major culprit. One study on cost estimation by the Naval Postgraduate School found that delivered software size exceeded the estimate by a median of 34 percent. Cost overruns lead to painful Congressional scrutiny, and an overrun in one program often cascades and leads to the depletion of funds from others. The challenges encountered in estimating software cost were described in the first post of this two-part series on improving the accuracy of early cost estimates. This post describes new tools and methods we are developing at the SEI to help cost estimation experts get the information they need in a familiar and usable form for producing high-quality cost estimates early in the life cycle.

Occasionally this blog will highlight different posts from the SEI blogosphere. Today's post is from the SATURN Network blog by Nanette Brown, a visiting scientist in the SEI's Research, Technology, and System Solutions program. This post, the second in a series on lean principles and architecture, takes an in-depth look at the waste of waiting and how it is an important aspect of the economics of architecture decision making.

Background: The U.S. Air Force has sponsored a number of SEI Independent Technical Assessments (ITAs) on acquisition programs that operated between 2006 and 2009. The programs focused on the development of IT systems, communications, command and control, avionics, and electronic warfare systems. This blog post is the second in a series that explores four themes across acquisition programs that the SEI identified as a result of our ITA work. Other themes explored in the series include misaligned incentives, the evolution of science projects, and common infrastructure and joint programs. This post explores the second theme, the need to sell the program, which describes a situation in which people involved with acquisition programs have strong incentives to "sell" those programs to their management, sponsors, and other stakeholders so that they can obtain funding, get them off the ground, and keep them sold.

Happy Memorial Day from all of us here at the SEI. I'd like to take advantage of this special occasion to keep you apprised of some recent technical reports and notes from the SEI. It's part of an ongoing effort to keep you informed about the latest work of SEI technologists. These reports highlight the latest work of SEI technologists in embedded systems, cyber security, appraisal requirements for CMMI Version 1.3, improving the quality and use of data, and software assurance. This post includes a listing of each report, author/s, and links where the published reports can be accessed on the SEI website.

The Government Accountability Office (GAO) has frequently cited poor cost estimation as one of the reasons for cost overrun problems in acquisition programs. Software is often a major culprit. One study on cost estimation by the Naval Postgraduate School found that delivered software size exceeded the estimate by a median of 34 percent. Cost overruns lead to painful Congressional scrutiny, and an overrun in one program often leads to the depletion of funds from another. This post, the first in a series on improving the accuracy of early cost estimates, describes challenges we have observed trying to accurately estimate software effort and cost in Department of Defense (DoD) acquisition programs, as well as other product development organizations.
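
A back-of-the-envelope calculation shows why size underestimation is so costly. Assuming a COCOMO-style effort model with a diseconomy-of-scale exponent (the coefficients below are illustrative, not calibrated values and not an SEI tool), a 34 percent growth in size produces an even larger growth in effort:

```python
# Illustration of why size underestimation hurts: with a COCOMO-style effort model
# Effort = A * KSLOC**B and B > 1, a 34% growth in size yields more than a 34%
# growth in effort. A and B below are illustrative, uncalibrated values.

A, B = 2.94, 1.10          # illustrative nominal coefficients
estimated_ksloc = 100.0    # size assumed at estimation time
actual_ksloc = estimated_ksloc * 1.34   # 34% median growth reported in the study

effort_est = A * estimated_ksloc ** B   # person-months
effort_act = A * actual_ksloc ** B

print(f"Estimated effort: {effort_est:.0f} person-months")
print(f"Actual effort:    {effort_act:.0f} person-months "
      f"({(effort_act / effort_est - 1) * 100:.0f}% over the estimate)")
```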

Background: Over the past decade, the U.S. Air Force has asked the SEI's Acquisition Support Program (ASP) to conduct a number of Independent Technical Assessments (ITAs) on acquisition programs related to the development of IT systems; communications, command and control; avionics; and electronic warfare systems. This blog post is the first in a series that explores common themes across acquisition programs that we identified as a result of our ITA work. This post explores the first theme: misaligned incentives, which occur when different individuals, groups, or divisions are rewarded for behaviors that conflict with a common organizational goal.

Occasionally this blog will highlight different posts from the SEI blogosphere. Today's post is from the SATURN Network blog by Nanette Brown, a visiting scientist in the SEI's Research, Technology, and System Solutions program. This post explores Categories of Waste in Lean Principles and Architecture, and takes an in-depth look at three of the eight categories of waste (defects, overproduction, and extra complexity) from the perspective of software development in general and software architecture in particular.

The SEI has long advocated software architecture documentation as a software engineering best practice. This type of documentation is not particularly revolutionary or different from standard practices in other engineering disciplines. For example, who would build a skyscraper without having an architect draw up plans first? The specific value of software architecture documentation, however, has never been established empirically. This blog posting describes a research project we are conducting to measure and understand the value of software architecture documentation on complex software-reliant systems.

As industry and government customers demand increasingly rapid innovation and the ability to adapt products and systems to emerging needs, the time frames for releasing new software capabilities continue to shorten. Likewise, Agile software development processes, with their emphasis on releasing new software capabilities rapidly, are increasing in popularity beyond their initial small team and project context. Practices intended to speed up the delivery of value to users, however, often result in high rework costs that ultimately offset the benefits of faster delivery, especially when good engineering practices are forgotten along the way. This rework and degrading quality is often referred to as technical debt. This post describes our research on improving the overall value delivered to users by strategically managing technical debt, which involves decisions made to defer necessary work during the planning or execution of a software project.

In some key industries, such as defense, automobiles, medical devices, and the smart grid, the bulk of the innovations focus on cyber-physical systems. A key characteristic of cyber-physical systems is the close interaction of software components with physical processes, which impose stringent safety and time/space performance requirements on the systems. This blog post describes research and development we are conducting at the SEI to optimize the performance of cyber-physical systems without compromising their safety.

As part of an ongoing effort to keep you informed about the latest work of SEI technologists, I will keep you apprised of SEI-related work that's published each month as SEI technical reports and notes. This post includes a listing of each report, author/s, and links where reports published in March can be accessed on the SEI website. The first report, A Framework for Evaluating Common Operating Environments, is based on a recent SEI blog posting and is an area I'm actively working on at the SEI. As always, we welcome your feedback on our work.

This is the second in a series of posts focusing on Agile software development. In the first post, "What is Agile?" we provided a short overview of the key elements of the Agile approach, and we introduced the Agile Manifesto. One of the guiding principles from the manifesto emphasizes valuing people over processes. While the manifesto clearly alludes to the fact that too much focus on process (and not results) can be a bad thing, we introduce the notion here that the other end of the spectrum can also be bad. This blog posting explores the level of skill needed to develop software using Agile (do you need less skill or more?), as well as the importance of maintaining strong competency in a core set of software engineering processes.

If you ask the question, "What is Agile?" you are likely to get lots of different answers. That's because there is no universally accepted formal definition for Agile. To make matters worse, there are ongoing debates over what Agile software development SHOULD mean. That being the case, when answering the question, "What is Agile?" the safest bet is to stick to what people can agree on, and people generally agree on three key elements of Agile. Taken together, these three elements describe Agile both as a software development method and as a broader approach to developing software. In this post--the first in a series on Agile--I will explain the foundations of Agile and its use by developers.

Many people today carry handheld computing devices to support their business, entertainment, and social needs in commercial networks. The Department of Defense (DoD) is increasingly interested in having soldiers carry handheld computing devices to support their mission needs in tactical networks. Not surprisingly, however, conventional handheld computing devices (such as iPhone or Android smartphones) for commercial networks differ in significant ways from handheld devices for tactical networks. For example, conventional devices and the software that runs on them do not provide the capabilities and security needed by military devices, nor are they configured to work over DoD tactical networks with severe bandwidth limitations and stringent transmission security requirements. This post describes exploratory research we are conducting at the SEI to (1) create software that allows soldiers to access information on a handheld device and (2) program the software to tailor the information for a given mission or situation.

Malware--generically defined as software designed to access a computer system without the owner's informed consent--is a growing problem for government and commercial organizations. In recent years, research into malware has focused on similarity metrics to decide whether two suspected malicious files are similar to one another. Analysts use these metrics to determine whether a suspected malicious file bears any resemblance to already verified malicious files. Using these metrics allows analysts to potentially save time by identifying opportunities to leverage previous analysis. This post will describe our efforts to develop a technique (known as fuzzy hashing) to help analysts determine whether two pieces of suspected malware are similar.
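
To give a flavor of the idea, the sketch below implements a much-simplified form of context-triggered piecewise hashing: a rolling hash chooses content-defined chunk boundaries, each chunk contributes one character to a signature, and two signatures are compared for similarity. It is a toy stand-in for illustration only, not the algorithm or implementation used in this work.

```python
# Simplified sketch of context-triggered piecewise ("fuzzy") hashing, in the spirit
# of tools like ssdeep but deliberately stripped down. A simple additive rolling
# hash over the last few bytes picks chunk boundaries; each chunk maps to one
# signature character; signatures are compared with a sequence-similarity ratio.
import hashlib
from difflib import SequenceMatcher

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ+/"

def fuzzy_signature(data: bytes, trigger: int = 64, window: int = 7) -> str:
    sig, start, roll = [], 0, 0
    for i, byte in enumerate(data):
        roll += byte
        if i >= window:
            roll -= data[i - window]              # drop the byte leaving the window
        if roll % trigger == trigger - 1 or i == len(data) - 1:
            chunk_hash = hashlib.md5(data[start:i + 1]).digest()[0]
            sig.append(ALPHABET[chunk_hash % 64])  # one character per chunk
            start = i + 1
    return "".join(sig)

def similarity(a: bytes, b: bytes) -> float:
    """0.0 = no signature resemblance, 1.0 = identical signatures."""
    return SequenceMatcher(None, fuzzy_signature(a), fuzzy_signature(b)).ratio()

if __name__ == "__main__":
    original = b"push ebp; mov ebp, esp; " * 200          # stand-in for a known sample
    variant = original.replace(b"mov ebp, esp", b"mov ebp, esp; nop", 5)
    unrelated = bytes(range(256)) * 20
    print("original vs. variant:  ", round(similarity(original, variant), 2))
    print("original vs. unrelated:", round(similarity(original, unrelated), 2))
```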

Malicious software (known as "malware") is increasingly pervasive with a constant influx of new, increasingly complex strains that wreak havoc by exploiting computers or personal and business information stored therein for malicious or criminal purposes. Examples include code that is designed to pilfer personal and digital credentials; plunder sensitive information from government or business enterprises; or interrupt, misdirect, or render inoperable computer hardware and computer-controlled equipment. This post describes our work to create a rapid search capability that allows analysts to quickly analyze a new piece of malware.

Large-scale DoD acquisition programs are increasingly being developed atop reusable software platforms--known as Common Operating Environments (COEs)--that provide applications and end-users with many net-centric capabilities, such as cloud computing or Web 2.0 applications, including data-sharing, interoperability, user-centered design, and collaboration. Selecting an appropriate COE is critical to the success of acquisition programs, yet the processes and methods for evaluating COEs had not been clearly defined. I explain below how the SEI developed a Software Evaluation Framework and applied it to help assess the suitability of COEs for the US Army.

Continuous technological improvement is the hallmark of the hardware industry. In an ideal world--one without budgets or schedules--software would be redesigned and redeveloped from scratch to leverage each such improvement. But this approach is often infeasible--if not impossible--due to economic constraints and competition. This posting discusses our research in applying verification, namely regression verification, to support the migration of real-time embedded systems from single-core to multi-core platforms.
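
As a toy illustration of the idea behind regression verification, the sketch below uses the Z3 SMT solver to check that a refactored version of a small function agrees with the original on every possible 32-bit input, rather than only on a finite test suite. The functions and the use of Z3 here are illustrative assumptions, not the tool chain used in the SEI research.

```python
# Toy regression-verification check: prove that a refactored function computes the
# same result as the original for ALL 32-bit inputs, not just sampled test cases.
# Requires the z3-solver package; both functions below are hypothetical examples.
from z3 import BitVec, prove

def control_update_v1(err, rate):
    # original single-core implementation
    return 3 * err + 3 * rate

def control_update_v2(err, rate):
    # refactored version produced during the multi-core migration
    return 3 * (err + rate)

err, rate = BitVec("err", 32), BitVec("rate", 32)

# prove() reports "proved" if the equality holds for every 32-bit input,
# or prints a concrete counterexample if the refactoring changed behavior.
prove(control_update_v1(err, rate) == control_update_v2(err, rate))
```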

Strategic planning is a process for defining an organization's approach for achieving its mission. Conducting successful strategic planning is essential because it creates a foundation for executing work, as well as setting the stage for enterprise architecture, process improvement, risk management, portfolio management, and any other enterprise-wide initiatives. Government organizations are operating in an environment of near-constant change, however, which makes it hard to conduct strategic planning efforts successfully. Moreover, when organizations do tackle strategy, they often get bogged down reflecting on the past or speculating on an uncertain future. This posting describes recent work investigating and integrating techniques to improve organizational strategic planning.

As software becomes an ever-increasing part of our daily lives, organizations find themselves relying on software that originates from unknown and untrusted sources. The vast majority of such software is available only as executables, known as "binaries." Many binaries--such as malware or different versions and builds of a software package--are simply minor variants of old programs (or in some cases exact copies) that have been run through a different compiler.

When I joined the SEI last year, one of my top priorities was to advance the scope and impact of SEI R&D programs, along with increasing the visibility of the excellent work of SEI technologists who staff these programs. While the SEI is well known for its innovation and impact in several key areas, the breadth and depth of our expertise extends far beyond our most popular technologies. To increase awareness of all that we're doing, we've established a blog to highlight SEI R&D initiatives, methods, and solutions that are meeting the needs of our customers and partners in government and industry. This initial post will introduce the goals and themes of the new SEI blog.