In March 2021, the National Security Commission on Artificial Intelligence (NSCAI) released a report detailing the challenges and opportunities around adoption of artificial intelligence for mission needs. The report identified growing an AI-ready workforce as a critical need if the United States is to buy, build, and field AI technologies for national security purposes. “This is not a time to add a few new positions in national security departments and agencies for Silicon Valley technologists and call it a day,” the commission wrote. “We need to build entirely new talent pipelines from scratch.” In this article we outline five factors that are critical for organizations and leaders to consider as they grow an AI-ready workforce.
Accept that no one has all the answers.
The desire to grow an AI-ready workforce is emerging as many organizations in the national security space ask, “How do we leverage AI towards mission outcomes?” Often, an assumption exists that someone—a machine learning researcher, the CEO of a technology company, a team lead, an engineer—knows exactly how to accomplish that goal but works in industry or academia and cannot be hired. The truth is that today, much of the implementation of AI systems is still in the artisan phase. Applying new algorithms to real-world problems and real-world datasets is hard; unlike canned benchmark datasets, real-world data rarely has a well-understood set of properties. Organizations across industry and government are continuing to evolve their practices and are working to create and adopt well-defined processes.
Given the rapid growth and change in AI design, development, and deployment today, even industry finds it challenging to get needed talent. As mission needs and environments rapidly change, organizations need individuals with the willingness to cross the boundaries between data, engineering, machine learning, design, and other fields. In an era when no one has all the answers, employees and teams need to talk to each other about what is going on, understand where bottlenecks or stumbling blocks exist in the system, and work together to achieve the desired system outcomes.
Organizations today are often faced with a challenge: How do we move forward even if we don’t have all of the answers yet? When attempting to leverage AI towards mission outcomes in the defense and national security space, effective implementation cannot happen without a significant blurring of traditional role boundaries. A data engineer can have impact across the application, from application performance to the semantics and meaning of the data flowing across the system. AI team members must be curious and humble enough to acknowledge that they don’t have all the answers, and they must identify who can reach across different boundaries within a system to track down an answer. Especially in these early days of AI, team members must be able to facilitate conversations across various types of audiences to understand how the many facets of an AI system come together, as well as the technical debt that accompanies certain decisions. By understanding the computational costs of a system, team members will better understand how fast or how far a system can be scaled. Traits like these will likely be needed in jobs across many domains in the next decade, but the AI workforce needs them now.
Draw talent into your problems.
A common refrain for many organizations, and government organizations in particular, is that building an AI-ready workforce is particularly challenging because it is impossible to match the salaries offered by large, private-sector companies such as Amazon, Google, and Microsoft. Salary discrepancies between industry and government are unlikely to change any time soon, however. Where government does have a strategic advantage is around the types of problems it is aiming to solve.
As government organizations aim to build AI capabilities, they are confronted with a host of constraints: where and how data and systems exist, where information is stored, what policies and regulations apply, and how to establish confidence and assurance. Employees working in government must also place central focus on questions of safety, ethics, and robustness, and they must have a keen sense of how what is built addresses stakeholder needs. Similar questions do exist in industry—and of course, everyone strives to build effective tools—but addressing them within government presents a unique environment filled with potential for impact.
A key motivator for people of all ages—and especially for many young people today—is to work on problems that matter. Although money plays a role in decision making, many individuals choose meaningful work over a larger salary. Fortunately, implementing AI for government applications encompasses a variety of meaningful challenges: How do we center the needs of human users? How do we design AI systems to be robust in the face of uncertainty or threat? How can AI scale to meet mission needs? Organizations are often surprised to realize that they can achieve amazing results by tapping into people’s motivations and passions, whether or not those people have the skills on paper. While organizations continue working to make salaries more competitive with industry, they can also leverage the compelling nature of the problems to be solved as a draw for talent.
Match your workforce needs to your development needs.
For many organizations, workforce needs depend on where they are in adopting, deploying, and maintaining AI. Organizations just starting out on their AI journey may have a large set of data that they have been collecting over the years, and they are now trying to identify what predictions they can make from it. In that case, organizations should focus on building a small team with flexible roles. An ideal hire might be an individual with experience in data analysis and data extraction—someone who can help determine the correct data to use and then start asking questions such as “What is the right set of hypotheses that we are going to test?” and “What experiments should we conduct to start building the predictions we are trying to make to meet our business goals?”
Other organizations have started rolling out AI systems and building out predictive pipelines. In this scenario, AI team roles are more defined, and hiring should focus on talent with more depth in a specific skill set. For example, organizational leaders should try to recruit data engineers who can move data from various sources around the enterprise to the destinations required for building better systems. These organizations may also seek out data analysts with more domain knowledge who can understand business and mission goals.
Regardless of where an organization is in its AI journey, leaders need to move away from checklist-driven hiring practices and focus more on skills that showcase a candidate’s ability to work on a team, feel comfortable with ambiguity, and move forward in a rapidly changing environment.
Focus on hiring and supporting diverse talent.
Too often when organizations seek to hire talent in the AI space, they assume they should recruit from a handful of top-tier schools. In our experience, robust, secure, scalable, and human-centered AI systems are ones that incorporate varied perspectives and data. AI systems learn from examples, so it helps to have a diverse team that can bring different lenses to a problem and identify appropriate datasets on which to train the AI system. It naturally follows that assembling a team whose members have different backgrounds and can speak to different aspects of the problem will result in a better selection of datasets.
The Department of Defense (DoD) has an established stance on what it means to implement ethical AI, and these requirements can’t be addressed within a single discipline. AI teams need to be informed by a range of cultures and experiences—by how team members think about the world and the heuristics they use to solve problems. A team can be made up of members with diverse backgrounds, but if all the team members are engineers, they will approach the problem space in the same way. Teams need to explore what it would mean to partner with a policymaker or a philosopher and how those unique perspectives would drive solutions that are both ethical and implementable.
One caution: diversity cannot be only a hiring concern. To realize the benefits of diverse teams, organization leaders also need to think about how to support those teams over time. Working in diverse teams provides the “engine of organizational learning...a way of working that brings people together to generate new ideas, find answers and solve problems. But people have to learn to team; it doesn’t come naturally” (The Importance of Teaming). For example, one challenge often faced by small teams is that members with deep domain expertise often hit a roadblock around language. The way a data scientist describes a problem differs significantly from how an engineer or a user-experience researcher would describe the same problem. It is therefore critical to consider who can help translate across these different roles, or how teams can invest in developing a shared language over time.
Help your talent learn how to learn.
AI technologies are evolving so quickly that any specific skill requirements may soon be overtaken by advances in the field. For that reason, organizations looking to adopt AI need to grow a culture of learning. On the hiring side, that also means looking for people with a sense of curiosity. There is a time and a place for people who achieve great results through deep, focused work. In the early days of adopting any new technology, however, and AI in particular, it is often more helpful to have individuals with the curiosity and willingness to try things outside their traditional bounds to figure out solutions to problems. A culture of curiosity and learning is, by necessity, a hallmark of many early-stage startup companies. As these companies grow, team members work toward a shared vision as best they can, often without the resources or full infrastructure they desire. Teams are forced to prioritize and try out different pathways towards reaching goals—which often means rapidly learning new ways of working and doing.
Organizations in the early stages of building AI capability are in a similar position to early-stage companies. Individuals end up wearing a lot of hats and taking on multiple roles simultaneously. Teams have to negotiate resources, determine starting points for business outcomes amidst high ambiguity, and explore the art of the possible with technology. A core skill for navigating the initial phases is the ability to ask questions—to be curious, to go out and read, to talk to people and ask “Why is this happening?” or “What should I do?”—to understand the practices that are out there. It’s tempting to look outside one’s organization to acquire teams and knowledge, yet for many organizations seeking rapid adoption of AI technologies, the best resource is the current talent pool. One benefit of the recent explosion of AI is that a wealth of resources is now available to organizations seeking to develop internal talent, including online courses and online universities.
Organizations also need to focus on helping existing employees learn how to learn. The expectation cannot be that everyone will easily add learning on top of days filled with back-to-back meetings and never-ending lists of deliverables. Organization leaders have to think about ways they can create the structure to enable learning behaviors for individuals and teams. To help individuals learn how to learn, managers can ask themselves the following questions:
- What is our shared vision for leveraging AI? What outcomes are we hoping to achieve?
- How am I creating opportunities for people to learn and grow? How am I establishing psychological safety to encourage risk-taking?
- Am I present when my team members have questions about where to go next? Who else can provide guidance?
- How do I help my team see things they haven’t previously seen, ask new questions, or curate a set of resources?
The Starting Point for an AI-Ready Workforce
Organizations today are working to assemble teams that can take bespoke pieces of AI, leverage them towards specific outcomes, and continually tune system components to arrive at assured AI systems that can be deployed in a variety of different environments. To develop such systems, organizations and leaders will have to take action to develop a workforce with the necessary skillsets, mindsets, and array of experiences. There is no perfected recipe; we at the SEI are navigating the growth of our own workforce to support our AI engineering portfolio. Our hope is that by sharing our lessons learned and what is guiding our thinking today, we will enable organizations to grow a workforce capable of designing and deploying AI systems that are human-centered, robust and secure, and scalable.
Rachel Dzombak and Matt Gaston will host a free webcast at 10 a.m. ET on September 22 where they will take your questions and discuss what is needed to create, deploy, and maintain AI systems we can trust. To learn more or register visit https://www.sei.cmu.edu/news-events/events/event.cfm?customel_datapageid_5541=324680.
Read the SEI white paper Human-Centered AI by Hollen Barmer, Rachel Dzombak, Matt Gaston, Jay Palat, Frank Redner, Carol J. Smith, and Tanisha Smith.
View the SEI Podcast AI Workforce Development with Dr. Rachel Dzombak and Jay Palat.
View the SEI Podcast Is Your Organization Ready for AI? with Dr. Rachel Dzombak and Carol Smith.