
Blog Posts
Bridging the Gap between Requirements Engineering and Model Evaluation in Machine Learning
Requirements engineering for machine learning (ML) is not standardized and is considered one of the hardest tasks in ML development. This post defines a simple evaluation framework centered on validating requirements.
• By Violet Turri, Eric Heim
In Artificial Intelligence Engineering

MXNet: A Growing Deep Learning Framework
MXNet (pronounced mix-net) is Apache’s open-source deep-learning framework. It supports building and training models in multiple languages, including Python, R, Scala, Julia, Java, Perl, and C++; a minimal Python sketch follows this entry.
• By Jeffrey Mellon
In Artificial Intelligence Engineering
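
The snippet below is a minimal, illustrative sketch of what defining and training a small model looks like in MXNet’s Python Gluon API. The synthetic data, layer sizes, and hyperparameters are arbitrary placeholders for orientation only, not recommendations from the post.

    import mxnet as mx
    from mxnet import autograd, gluon, nd

    # Synthetic regression data, for illustration only.
    X = nd.random.normal(shape=(100, 10))
    y = nd.random.normal(shape=(100, 1))
    loader = gluon.data.DataLoader(gluon.data.ArrayDataset(X, y),
                                   batch_size=10, shuffle=True)

    # A tiny feed-forward network defined with the Gluon API.
    net = gluon.nn.Sequential()
    net.add(gluon.nn.Dense(16, activation='relu'))
    net.add(gluon.nn.Dense(1))
    net.initialize(mx.init.Xavier())

    loss_fn = gluon.loss.L2Loss()
    trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.01})

    for epoch in range(5):
        for data, label in loader:
            with autograd.record():          # record operations for autodiff
                loss = loss_fn(net(data), label)
            loss.backward()                  # compute gradients
            trainer.step(batch_size=10)      # update parameters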

How to Grow an AI-Ready DoD Workforce
This SEI Blog post discusses the unique challenges of AI engineering for defense and national security, how to build an AI-ready workforce, and how the SEI is supporting DoD workforce …
• By Robert Beveridge
In Artificial Intelligence Engineering

Creating Transformative and Trustworthy AI Systems Requires a Community Effort
This post explores how professionalizing the practice of AI engineering and maturing it as a discipline can increase the dependability and availability of AI systems.
• By Carrie Gardner
In Artificial Intelligence Engineering

Improving Automated Retraining of Machine-Learning Models
This post describes how to improve representative MLOps pipelines by automating exploratory data-analysis tasks.
• By Rachel Brower-Sinning
In Artificial Intelligence Engineering

How Easy Is It to Make and Detect a Deepfake?
This post examines the technology underlying the creation and detection of deepfakes and assesses current and future threat levels.
• By Catherine Bernaciak, Dominic Ross
In Artificial Intelligence Engineering

A Hitchhiker’s Guide to ML Training Infrastructure
Hardware is a key enabler for machine learning. Recent advances in the field, including the introduction of graphics processing units, have had a significant impact on the training of AI …
• By Jay Palat
In Artificial Intelligence Engineering

What is Explainable AI?
Explainable artificial intelligence is a powerful tool for answering critical “how” and “why” questions about AI systems and can be used to address rising ethical and legal concerns.
• By Violet Turri
In Artificial Intelligence Engineering

5 Ways to Start Growing an AI-Ready Workforce
This post outlines five factors that are critical for organizations and leaders to consider as they grow an AI-ready workforce.
• By Rachel Dzombak, Jay Palat
In Artificial Intelligence Engineering


Software Engineering for Machine Learning: Characterizing and Detecting Mismatch in Machine-Learning Systems
This post describes how we are creating and assessing empirically validated practices to guide the development of machine-learning-enabled systems.
• By Grace Lewis, Ipek Ozkaya
In Artificial Intelligence Engineering
