What Engineers Need to Know About Artificial Intelligence
Artificial intelligence (AI) systems are by their nature software-intensive. To create viable and trusted AI systems, engineers need technologies and standards similar to those in software engineering.
At the Software Engineering Institute (SEI)--a federally funded research and development center tasked with advancing the field of software engineering and cybersecurity--we are leading a movement to establish a professional AI Engineering discipline. As we begin a national conversation on AI Engineering, we have identified several key aspects and elements of AI that engineers must understand to work with emerging systems.
A Legacy of Defining Engineering
Before we examine what engineers need to know about AI, let's review the SEI's credentials for leading the conversation about AI Engineering.
In 1984, the U.S. Department of Defense (DoD) established the SEI at Carnegie Mellon to advance modern software engineering techniques and methods for the DoD. A technical history of the SEI noted that at the time of the SEI's inception "even the notion that software development could be called an engineering discipline was debated. There were, after all, few analytic techniques available to a 'software engineer;' and there was no set of accepted practices to guide managers, developers, and maintainers of software."
Since then--through our research and transition work with the DoD and other government agencies--the SEI has played a leading role in the development of methods and practices in software engineering that have helped the DoD develop and field software-enabled capabilities more quickly and defend software systems more effectively.
Along the way the SEI has led the software and cybersecurity communities to develop several innovations:
- the recovery of compromised systems, beginning with the creation of the SEI's CERT Coordination Center in the wake of the Morris Worm incident
- the repeatable delivery of platforms through process improvement methods, particularly the Capability Maturity Model Integration
- the strategic design, reuse, and evolution of systems through work in software architecture and software product lines
- reasoning about complex, cyber-physical systems
- secure coding standards
Introductory Concepts: AI, ML, and Deep Learning
At a very basic level, many practitioners use the terms AI and machine learning as if they are separate entities. They are not. The DoD AI Strategy defines AI as
. . . the ability of machines to perform tasks that normally require human intelligence - for example, recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action - whether digitally or as the smart software behind autonomous physical systems.
Machine learning (ML), a part of AI, is defined by the SEI as
A system that learns and improves its performance at some task by using data and experience.
Further, a recent SEI blog post defined deep learning as
a family of machine learning techniques whose models extract important features by iteratively transforming the data, "going deeper" toward meaningful patterns in the dataset with each transformation. Unlike traditional machine learning methods, in which the creator of the model has to choose and encode features ahead of time, deep learning enables a model to automatically learn features that matter.
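To make that distinction concrete, here is a minimal sketch (assuming PyTorch as the framework; the dimensions and data are invented for illustration, and none of this code comes from the SEI) of a small network whose stacked layers transform the input step by step, so the model learns intermediate features rather than relying on features a developer encoded by hand:

```python
# A minimal sketch (assuming PyTorch) of the definition above: each layer
# transforms the data again, "going deeper," so the model learns its own
# features instead of using hand-coded ones.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),   # first transformation of the raw 20-dimensional input
    nn.ReLU(),
    nn.Linear(64, 32),   # a deeper transformation of the learned representation
    nn.ReLU(),
    nn.Linear(32, 2),    # final layer maps learned features to 2 class scores
)

x = torch.randn(8, 20)                 # a batch of 8 raw, unengineered feature vectors
logits = model(x)                      # features are learned inside the network
probs = torch.softmax(logits, dim=1)   # scores become a probability distribution
print(probs.shape)                     # torch.Size([8, 2])
```

The point is not the specific layers but the structure: each layer is another transformation, and the useful features emerge from training rather than from manual feature engineering.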
AI Engineering Concepts
- AI depends on the human element. AI augments, but does not replace, human knowledge and expertise. This basic understanding affects engineers of AI systems in two dimensions: human-machine teaming and the probabilistic nature of AI "answers." Engineers developing AI systems must account for human-machine teaming--the interactions between the system and the people who build and use it. Often, the success of those interactions comes down to trust and transparency: How should AI systems be deployed in environments where people have become accustomed to ignoring automation? How can you address ethics, given that algorithms have no sense of morality? Further, AI produces probabilistic answers: How should an AI system present results to a human as a probability distribution rather than a discrete answer (a minimal sketch of this appears after the list)? How does the human know when a prediction is bad?
- AI depends on labeled and unlabeled data as well as the systems that store and access it. The availability of data and the speed at which modern computers can process it are key reasons why AI is exploding today. AI systems excel at classifying, categorizing, and partitioning massive amounts of data to make the most relevant pieces available for humans to analyze and use to make decisions. Engineers must consider the data itself--provenance, security, quality, and alignment of test and training data--and the hardware and software systems that support that data. Large amounts of data require a computing environment with the capacity to handle them. Managing data requires designing storage solutions around physical data constraints and the types of queries the system must support.
- One AI, many algorithms. When we talk about AI, ML, and deep learning, we are referring to many different algorithms and many different approaches, not all of which are neural-network based. AI is not a new field, and many of the algorithms in use today were developed in the 1950s, 1960s, or 1970s. For example, shortest-path search dates back to Dijkstra's algorithm in the 1950s, which the A* algorithm improved on in the 1960s.
- The insight is the benefit of AI. Engineers face the reality that it is impossible to test a system in every situation it will ever encounter. An AI system adds capability for the engineer because it can find an answer to never-before-seen situations that is insightful and has a good probability of being correct. That answer, however, is probabilistic, not guaranteed to be correct. Thus, gaining increased confidence in AI is hard for engineers who need to focus on creating and validating a system.
- An AI system depends on the system under which it runs. When building a system that does not incorporate AI, you can build it in isolation, test it in isolation, and then deploy it and be certain it is going to behave just as it did in the lab. An AI system depends on the conditions under which the AI runs and what the AI system is sensing, and this context adds another level of complexity.
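As a concrete illustration of the probabilistic-answers point above, the following sketch (with invented class labels and numbers, not output from any real system) shows one way a system might present a model's probability distribution to a human and flag low-confidence predictions for review:

```python
# A minimal sketch (hypothetical labels and numbers) of presenting a
# probabilistic answer: the model returns a distribution over outcomes,
# and the system defers low-confidence predictions to a human.
prediction = {"vehicle": 0.48, "building": 0.41, "other": 0.11}  # assumed model output

best_label = max(prediction, key=prediction.get)
confidence = prediction[best_label]

CONFIDENCE_THRESHOLD = 0.80  # an assumed, application-specific cutoff

if confidence < CONFIDENCE_THRESHOLD:
    # Show the full distribution and ask the human operator to decide
    print(f"Low confidence ({confidence:.0%}); review needed:", prediction)
else:
    print(f"Predicted {best_label} with {confidence:.0%} confidence")
```

The threshold here is an assumed, application-specific choice; deciding where to set it, and what the human sees when it is not met, is exactly the kind of human-machine teaming question engineers must answer.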
AI at the SEI
At the SEI--because we have dealt with software and cybersecurity as engineering disciplines--we are incorporating and advancing the engineering discipline of AI in the same manner, through research and transition. Our focus is to mature the AI engineering space so that systems can be developed, repeatedly and reliably, to be capable, trustworthy, affordable, and timely. In this effort, we are fortunate to be part of CMU, where Herbert Simon and Allen Newell invented AI decades ago.
In recent years, the SEI has also made strides in applied AI and machine learning. Examples of our work include inverse reinforcement learning, machine emotional intelligence, deep learning and satellite imagery, automated static analysis report classification, and software cost estimation.
Although we will take a deeper dive into the SEI's AI Engineering work in the next post in this series, I would like to highlight two current SEI projects that are applying AI to address bottlenecks in systems that otherwise require costly, time-consuming human involvement:
- Automating alert handling to reduce manual effort. Static analysis tools search code for flaws without executing it--providing alerts about flaws that cyber intruders might exploit as vulnerabilities. Those alerts require costly human effort to determine whether they are true positives and to repair the code. As a result, organizations often severely limit the alerts they manually examine to the types of code flaws they worry about most. That approach involves a tradeoff: significant flaws may never get fixed. To make alert handling more efficient, the SEI is developing and testing novel software that enables the rapid deployment of a method to classify alerts automatically and accurately, with planned work to include enhanced functionality related to adaptive heuristics. (A simplified, hypothetical sketch of this kind of alert classification appears after this list.)
- Model-based engineering of AI systems. AI and ML system development remains primarily trial-and-error design with limited abstractions, architectures, and patterns--a very human-intensive effort. Current ML systems are typically monolithic, with limited abstraction and modularity; representations are usually limited to single modalities, and domain structure can be lost. To address this, SEI researchers are working to develop richer representations, abstractions, patterns, and architectures to analyze and ultimately synthesize AI/ML components and systems.
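For a rough sense of what automated alert classification can look like in general, here is a simplified, hypothetical sketch using scikit-learn. It is illustrative only: the alerts, labels, features, and model are all invented, and this is not the SEI's classifier or its actual approach.

```python
# A simplified, hypothetical sketch of classifying static analysis alerts as
# likely-true or likely-false with supervised learning. Generic illustration
# only; not the SEI's tool, features, or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: alert messages previously adjudicated by analysts.
alerts = [
    "possible null pointer dereference in parse_config",
    "buffer may overflow when copying user-supplied path",
    "unused variable temp_count",
    "dead code after return statement",
]
labels = [1, 1, 0, 0]  # 1 = confirmed flaw, 0 = not actionable (assumed labels)

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(alerts, labels)

# Score a new alert with the probability that it is a real flaw, so analysts
# can prioritize their limited manual effort.
new_alert = ["possible out-of-bounds read in packet handler"]
print(classifier.predict_proba(new_alert)[0][1])
```

A production classifier would draw on much richer features (code context, alert type, audit history) and careful validation, but the workflow of training on adjudicated alerts and scoring new ones so analysts can prioritize is the general idea.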
"What's Past is Prologue"
In 1990, Mary Shaw, an SEI founder and our chief scientist from 1984 to 1987, said, "Although software engineering is not yet a true engineering discipline, it has the potential to become one." The SEI led the movement to establish software engineering as a true engineering discipline. Although AI engineering is not yet a true engineering discipline, it has the potential to become one.
Additional Resources
- View the SEI Podcast Leading in the Age of Artificial Intelligence.
- Watch the SATURN presentation Smart Decisions Game: Machine Learning for Architects.
- View the SEI Podcast Deep Learning in Depth: Deep Learning versus Machine Learning.