
Podcast

Harvard Business School Professors Bill Kerr and Joe Fuller talk to leaders grappling with the forces reshaping the nature of work.
  • 05 May 2021
  • Managing the Future of Work

US plans for AI primacy

As the US vies with global AI rivals for technological and strategic advantage, where will it find the human brainpower and skilled labor to compete? Is the government prepared for the challenge? Artificial intelligence is crossing boundaries, transforming markets, and raising ethical concerns. José-Marie Griffiths, a member of the National Security Commission on Artificial Intelligence, discusses the commission’s recommendations.

Bill Kerr: Artificial intelligence is changing many facets of society, from education and media to medicine and finance. But national security concerns have led the US to consider mobilizing public and private resources to vie with global competitors, particularly China. The National Security Commission on Artificial Intelligence was established by Congress in 2018 to study developments in the field through the lenses of security and defense. The commission’s final report, issued in March 2021, calls for significant new investments in research, technology, and human capital. Unlike the Space Race or the Manhattan Project, the products of an AI moon shot won’t necessarily be tangible or singular. What’s the government’s proper role in advancing the technology? And what are the strategic and economic implications of the commission’s recommendations?

Welcome to the Managing the Future of Work podcast from Harvard Business School. I’m your host, Bill Kerr. I’m joined today by Commissioner José-Marie Griffiths, president of Dakota State University. Dr. Griffiths focused on the workforce and ethics issues within the commission. We’ll talk about the report’s overall considerations and discuss its proposals for bolstering AI education and preventing abuses of the technology. We’ll also discuss the prospects for the commission’s agenda as the Biden administration pursues its infrastructure, defense, technology, and education plans with a divided Congress. Welcome to the podcast, José-Marie.

José-Marie Griffiths: Thank you very much, Bill. Pleased to be here.

Kerr: José-Marie, maybe you can start by telling us a little bit about your background and how you came to join the commission.

Griffiths: Yes. My disciplinary background is in physics and computer science. I worked in some of the early AI application areas when I was still in England, doing some early robotics work and pattern recognition and early work on natural language processing. And then, a little bit later, I developed some algorithms related to cancer and other health-related research. In 2018, Congress passed the John S. McCain National Defense Authorization Act for Fiscal Year 2019, which established the National Security Commission on Artificial Intelligence. I was appointed by the then-chair of the Senate Commerce Committee, Senator John Thune of South Dakota. That’s how I came to be a member of the commission.

Kerr: The rest of the commission comprised people from both academia and the private sector?

Griffiths: Yes, from business and industry and some people from government.

Kerr: Great. What was the mandate that was given to this commission? How did you think about organizing the commission and then carrying out the study?

Griffiths: Right. The overall mandate of the commission was to review advances in artificial intelligence and related machine-learning developments and associated technologies in order to address the national security and defense needs of the United States. We brainstormed the areas that we thought we might want to cover. We emerged with six different, what we called “lines of effort”—almost like six different working groups within the commission. They were: invest in AI research and development, and software; apply artificial intelligence to national security missions; train and recruit artificial intelligence talent; protect and build on US technological advancements and hardware; marshal global AI cooperation; and ethics. My first choice was train and recruit AI talent, which was a natural for me, given my role in academia these days. And then I selected ethics as my second choice, because I felt it was very, very important that we not fall into the trap that other countries have fallen into in their overly zealous, maybe deliberate, power-mongering move to use AI for nefarious purposes.

Kerr: Give us maybe a little sense of how one of these work streams functioned. There was staff assigned to the commission. What was the cadence?

Griffiths: Each line of effort had its own staff. And so, for the most part, during a given month, we would meet as a working group, or line of effort. We would have research done by the staff, who would bring it back to the commissioners involved in each line of effort. And then, every other month, we held a plenary session—the entire commission—for discussion, review, and deliberation.

Kerr: Were there any particular topics that were out of bounds for the commission, or said to be off limits within those six research streams?

Griffiths: No, not really. We tried not to stray too far from AI. Our charge was really to look at AI for national security and defense purposes. But we also felt that we needed to address K–12 STEM education in some way in addressing the pipeline. We got into making recommendations for the STEM pipeline that would, in fact, feed AI, software development, and other areas of need in the government.

Kerr: In the private sector, Dr. Griffiths, of course, there’s a lot of urgency that companies are experiencing around what AI means for their organizations and how they can make sure they’re ahead of the curve. What was the sense of urgency among the commission and also within the government at large?

Griffiths: I would say the sense of urgency was definitely there all the way through—and perhaps accelerated a little bit more as we went through the process. We had seen quite aggressive moves and claims—or stakes in the ground, if you like—from China. We know that we’re seeing, in certain areas, a lot of use of AI techniques in the intelligence world, particularly by Russia. And so we felt that the United States had not really organized itself for this kind of competition in a long while. We felt that, in the private sector and in academia, we’re still ahead. But in terms of organizing the government to be ready to adopt and exploit AI for US purposes, according to our values, we just were not there.

Kerr: This podcast deals with a lot of workforce- and talent-related issues, so I want to begin there: What did the commission come out with in terms of its recommendations for the federal government and its workforce strategy?

Griffiths: Actually, we were prolific in developing recommendations in the workforce and talent-development arena. The first thing: We took a phased approach. We started off by asking, is there some low-hanging fruit? Are there some changes that could be made quickly, without major new legislation? And so we looked at the current and projected talent deficits. As I say, it was very clear that the United States doesn’t produce enough people with these skills. We looked at ways for the government to recruit talent into the government. We also wanted to look at ways they could upskill or reskill the talent they already have within the government pipeline, assuming they knew where the talent was within their ranks. The third piece was to address the underlying STEM education issues.

In the first piece, we made a number of recommendations on improving and streamlining the hiring process. We talked a lot about the kinds of basic education that certain people within the government, particularly the civilian government, needed to have. The HR professionals, for example, needed at least some level of understanding of artificial intelligence if they were to help in hiring appropriately qualified people. It’s not as if we have a lot of degrees that say “AI” on them. We wanted the government to be able to hire those occasional brilliant young high schoolers who’ve done some really fantastic work but don’t have a credential that’s recognized in the current government hiring process.

If I were to select a couple of major recommendations, one was that the government really needs to organize its technologists with new career fields. There are no career progressions for artificial intelligence or software development or computer science, generally. The example we tended to use was the Medical Corps: You can go into the Medical Corps, and you can have a medical career. We wanted to create this national career field, matched with the idea of creating a digital corps within the government. Each agency would create a digital corps, and then there could be a recruiting office for people with the appropriate talent. They could organize the training for those people and a career progression. They could organize the actual positions where you’re going to be posted for your work. They could keep track of those people as they add capabilities and, by the way, reward them for adding capabilities as they move through their careers within the government. This combination of career fields, and then the digital corps within the government, was important.

And then we thought, well, if we have a digital corps, it might be very appropriate to go further. There are people in academia and in the private sector who would be willing to lend their talents to the government, not necessarily on a full-time basis. You can imagine why: in the private sector, at least, they make a lot of money. What would lure them, what would attract them, would be interesting and significant projects that they could potentially work on. And so we recommended a civilian National Reserve Digital Corps. That, we thought, was a quicker way than building your own from scratch, a quicker way to get projects started and moving.

Kerr: The other thing that we’ve heard mentioned is a US Digital Service Academy. Tell us a little bit about that. It sounds like a very big initiative, and I’m sure a lot of places might be interested in being a part of something like that.

Griffiths: That’s an interesting one. I think it generated probably the biggest buzz of the recommendations that we made. It was a bold recommendation. The idea is to create a new US Digital Service Academy akin to West Point, akin to the Air Force Academy, et cetera. But this would be a little bit different: This would be for civilian workers in the government. The idea is, we would create, basically, a new accredited university that would focus exclusively on civilians. It’s also aimed at all of government, not just the Department of Defense or another department. We felt it should be a four-year, student-loan-free experience for those who would be admitted. We anticipate that the actual rules for admission might be similar in nature, similar in process, to the other academies. In a way, it would be a STEM university—a science, technology, engineering, and math university. As at all accredited universities, there would be a general curriculum of some kind, which probably would include history and politics and philosophy and maybe what’s happening geopolitically in terms of religion and politics and warfare and other things. But then, on top of that, would be the more traditional STEM disciplines—particularly, computer science, software engineering, artificial intelligence, those kinds of programs. We anticipated that it would graduate 500 a year; basically, you’re talking about a small university of about 2,000 students. We think that will do a couple of things. First of all, it will generate, each year, a cohort of 500 people ready to go into the civilian government workforce, but already networked, with a common educational experience and, therefore, a common culture that probably is ultimately going to propel many of those people into leadership roles within the government. We think it will also help diversify the digital-service workforce in the civilian government, because of the nature of the way people would apply to get in. A lot of people said to us, “Well, why don’t we just have people in a network of universities?” Well, we already have that, in some sense, through the Scholarship for Service programs. We have the CyberCorps Scholarship for Service program, and we have the STEM Scholarship for Service in effect. In those, students are at different universities, getting different experiences, which is also good. What we proposed adds to what we already have. And we have recommended, by the way, a doubling of the Scholarship for Service programs, particularly in fields associated with artificial intelligence. There’s been a tremendous amount of interest, not only in Congress, but also from governors and, of course, university presidents around the country.

Kerr: I can imagine a lot of people want to vie for a part of that. Tell us a little bit more about how federal policymakers responded to these recommendations. Were there responses you were getting along the way that fed into the recommendations? And then, upon release of the report, what was the response from political circles?

Griffiths: We decided, at the very beginning, that we were not going to sit and do our work for two years, write a report, and pat ourselves on the back that we’d done it. We created a process of rolling recommendations, almost. We did a quick, intensive look at the federal government’s readiness for artificial intelligence—we called that “AI readiness”—and at what was going on, with a lot of interviews, et cetera. We came up with a set of findings pretty early on in our process. We decided that we would put out quarterly reports. Our quarterly reports had findings and recommendations. They were put out to everyone we talked to, to gather input, but also distributed much further, to see how people would react to those recommendations: congressional committee staffers on the Hill; the Office of Science and Technology Policy and others in the White House; and every other potential group that we could consider. We put out our recommendations, and then we gathered more feedback; some recommendations were modified, some stayed the same. We also developed proposed legislation, if legislation was going to be required. What happened then was that a number of our recommendations were adopted into pieces of legislation, particularly the National Defense Authorization Act. As we were going through, some of our recommendations were already being incorporated, some were being recommended for funding, et cetera. And so we just continued to accumulate sets of recommendations as we went forward. So even though our final report just came out in March of this year, the recommendations in that report are not the only recommendations that have been adopted. There were also implementation plans, which laid out what the executive branch could do, what Congress could do, which committees of Congress, which piece of legislation this might fit into. It was really about making it very easy for people to implement fairly quickly if they wanted to. That, I think, is a little bit unusual from what I’ve seen of commissions of various kinds, particularly those relating to STEM topics.

Kerr: A little bit unusual, but it’s certainly more effective than the typical report that comes from a commission. Right now, it’s mid-April as we record this. The Biden administration has released an infrastructure bill. This will be a big topic of conversation for several months to come. What role does AI play in that infrastructure bill? Are there parts of your commission’s work that you anticipate interfacing or connecting with the infrastructure proposals?

Griffiths: Well, the infrastructure proposals are very interesting, beyond the fact that there are obvious elements of infrastructure that are going to be important if we’re going to move this future forward and be competitive with other nations. The entire underpinning of what enables artificial intelligence applications to be developed and applied is infrastructure: computational infrastructure, networking infrastructure, et cetera. If you look at the lists of critical technologies, there are about four or five or six of them. Some have come out of DoD, some have come out of OSTP, some have come out of Congress. But there’s a strong overlap among them. Artificial intelligence makes all of the lists. Cybersecurity makes all of the lists. Quantum computing makes all of the lists. I’m hoping that, just as we used to argue in earlier infrastructure initiatives for including telecommunications and networking, this computational infrastructure is going to be upgraded in order for this country to be able to take advantage of artificial intelligence for its future defense, as well as for its future economy.

Kerr: Dr. Griffiths, let’s go on and speak a little bit about the second research stream that you were working on and leading, which was around the ethical issues: bias and human control. What was your commission seeking to address there?

Griffiths: We were seeking to really spread the word that we cannot sit back and assume that any of our potential AI applications are going to be completely unbiased, transparent, and immune to misuse. And so we wanted, from the beginning, to say that we thought it would be important to develop AI-enabled technologies with appropriate transparency, so that everybody knows how a system is being developed without necessarily giving away all the IP; that there’s strong oversight of development and use, particularly in DoD applications, of course; and that there’s full accountability. And so we felt that there are approaches to ethical considerations and what we called “responsible use.” We looked at some examples. There are already procedures for how to develop software, and how to assure and test software, to try to eliminate unintended biases. We incorporated some of those into the report, including guidelines for ethical and responsible use. I will have to say, it’s a little bit harder to talk about these—I would call them almost “softer”—issues around ethics and use than it is to talk about specifics of hard software technologies. Getting people to sit and understand why it’s important that we have these considerations, and that we try to ensure, at least in the United States, that our approach is going to be prudent and consistent with our values, I think was very, very important. I also think the fact that we incorporated that from the beginning was recognized by outside groups who would otherwise, I’m sure, have been lobbying us to say, “We don’t want to have AI unleashed by the Department of Defense without any type of accountability.” There are, of course, legal considerations. There are accountabilities. From the AI perspective, it can be very easy to develop a technology application, test it out with the intended group, and then not realize that you’ve left out a large chunk of the population. We saw this with early voice-recognition systems and facial-recognition systems. We wanted to make sure that we avoided that, so that, as the government develops AI applications, it has guidelines and it will have oversight.
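[A hypothetical sketch to make the subgroup-testing point concrete: the commission’s report contains no code, and the Python below is only an illustration of the kind of check Griffiths describes. It scores a model’s predictions separately for each demographic group and flags any group whose error rate trails the best-performing group by more than a chosen gap, the failure mode she attributes to early voice- and facial-recognition systems. The group labels, toy data, and threshold are invented for this example.]

    # Hypothetical per-subgroup bias check; group names, data, and the
    # acceptable gap are invented for illustration.
    from collections import defaultdict

    def subgroup_error_rates(records):
        """records: iterable of (group, predicted_label, true_label) tuples."""
        totals = defaultdict(int)
        errors = defaultdict(int)
        for group, predicted, actual in records:
            totals[group] += 1
            if predicted != actual:
                errors[group] += 1
        return {group: errors[group] / totals[group] for group in totals}

    def flag_disparities(rates, max_gap=0.05):
        """Return groups whose error rate exceeds the best group's by more than max_gap."""
        best = min(rates.values())
        return sorted(group for group, rate in rates.items() if rate - best > max_gap)

    # Toy evaluation of a hypothetical voice-recognition model that was
    # tested mainly on one group and performs poorly on another.
    sample = [
        ("group_a", "yes", "yes"), ("group_a", "no", "no"),
        ("group_a", "yes", "yes"), ("group_a", "no", "no"),
        ("group_b", "yes", "no"),  ("group_b", "no", "no"),
        ("group_b", "yes", "yes"), ("group_b", "no", "yes"),
    ]
    rates = subgroup_error_rates(sample)
    print(rates)                    # {'group_a': 0.0, 'group_b': 0.5}
    print(flag_disparities(rates))  # ['group_b']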

Kerr: In the private sector, we often see that the very best talent—working, for example, with a big tech firm—can be particularly forceful about the type of projects they believe they should be working on, and what Google or what Apple should be engaged in. Was there a connection between the talent side of this commission’s work and this ethical side—trying to think about what young talent would be most interested in and find ethically right?

Griffiths: We did, in a way. Maybe not quite that directly. We were thinking about why the government does not attract the talent that it needs. Well, part of it is, it’s not organized to do that. Second, it’s very slow and bureaucratic, and the private sector isn’t. The private sector gets everyone, because they’re faster than even our own academic processes. The issue wasn’t what people should work on; the issue is, are there interesting, large-scale projects that need to be addressed? We believe there are. When we went out to ask these questions, people said, “Yes, I’d be willing to spend some time on that,” or “Yes, we’d be willing to help out.” In academe, we have large numbers of students—particularly graduate students—who are working on projects. They could easily work on projects to help the government out in how it does its business. I think part of the concern that we had was that there wasn’t sufficient knowledge about what the government is doing—or could do—in terms of its use of technology, partly because it doesn’t have these career opportunities. And so we don’t have people saying, “Oh, I want to be just like so-and-so, who’s developed a career doing A, B, or C.” We tried to address that by recommending an improved relationship between academic institutions—particularly faculty—and government agencies. We’re recommending that more faculty do work with the government, particularly in their summers, perhaps get more of their students involved, and begin to learn what problems the government has that could be resolved. We’re doing that, in part, through this National Reserve Digital Corps.

Kerr: Dr. Griffiths, as you think about the allocation of resources, we obviously have some big questions in front of us, including climate change and social programs. Do you see AI and the investments that this commission is recommending for use in government and for defense purposes as being in competition with those? Is it a zero-sum game, or are there deeper connections between them?

Griffiths: I think there are much deeper connections between them. If we really think about the true potential of AI in its broader sense, then AI can be brought into play to help out with some of those problems. In fact, when Covid hit, the commission turned its attention to Covid and whether AI could play a role. We actually developed some whitepapers, published pretty quickly, on the role of AI in helping with Covid, including one, by the way, that I had a little input into, on privacy and ethics recommendations for computing applications developed to mitigate Covid-19. We see AI as having the potential of being everywhere. We didn’t come out and say, “Oh, we recommend that less money go here and more money go into this other program that’s just as important,” because that wasn’t our mandate. Our mandate was to say: What do we believe the government needs to do to play in this global technological competition with China and others? How does the United States prepare itself and get itself ready in a relatively short period of time to actually be competitive and stay competitive? How does the US continue to be a de facto leader, building coalitions of democratic countries so that we work together to counter the unwanted applications of artificial intelligence?

Kerr: I’m sure we could go on for another couple of hours, but we haven’t even come close to touching on your day job, which is a significant one: president of Dakota State University. We have a lot of educators who listen to this podcast. So can you tell us a little bit about what you’re doing at Dakota State to prepare your young talent for the digital future?

Griffiths: I’d be happy to. Dakota State is a university in Madison, South Dakota. It’s one of six public higher-education institutions in the state, and it’s the designated computing, information technology, and data processing institution of the entire system. The university has computer science and cybersecurity programs, and most recently, we’ve added programs in artificial intelligence and machine learning. Computing runs through every area of the university: We’ve got the College of Computer and Cyber Sciences, the College of Business and Information Systems, the College of Arts and Sciences, and the College of Education, and all our students have some element, if you like, of fundamental computer science and computation in their degree programs. That’s what makes us different. And so we’ve been preparing people particularly for the national cybersecurity and defense mission. We’ve been losing our students, as people might imagine, from the middle of the country to the coasts. And so we are making a move now to not only increase the number of students in these areas, but also create the jobs and the environment in which they can work. We have some of those being developed right now in Madison, and we have plans to significantly expand and build the cybersecurity industry in South Dakota. We have an AI lab that’s developing applications in a number of areas. We also recognize there’s a tremendous synergy between everything we’re doing in computer and cyber sciences and artificial intelligence: We need to protect artificial intelligence data and models, and we need to use artificial intelligence to deal with large-scale cybersecurity issues—particularly, as I say, at the national level. We are moving to bring more young people into these technology careers. Everything we talked about on the commission, how do we expand the pipeline, and how do we attract a more diverse population into these kinds of careers, is something that I’ve been living every day that I’ve been here.

Kerr: Dr. Griffiths, if you think about the workforce that you’re seeking to develop, there have been some well-known challenges with the science, engineering, computational, and AI workforce being less diverse than it needs to be, and also oftentimes being very reliant on immigrant talent. How did the commission think about those workforce dynamics?

Griffiths: In terms of the commission’s work, this is something we felt very strongly about, and it starts with the K–12 pipeline. And so, in our recommendations, when we recommended that a National Defense Education Act II be established, we felt that it should, in fact, be geared toward K–12 communities with disadvantaged students, in particular. We recommended, as you know, summer programs, after-school programs, help for teachers, et cetera, as well as a significant increase in scholarship availability: 25,000 scholarships for undergraduate students; I think it was 5,000 for graduate students; and then 500 postdocs. All of those were geared toward this issue of increasing diversity. We felt very strongly that a lot of young people—children of immigrants, first-generation students at four-year colleges, English-as-a-second-language students—don’t always have the opportunities for careers in these areas. We also think that the Reserve Corps might attract a broader range of students. We don’t think it’s enough for the United States to just capture its own and build its own workforce. We have to take advantage of the best and brightest coming from other countries. And so, in that sense, we recommended, for example, that any international student who’s got a STEM PhD from a US-accredited university, with at least some of their coursework here in the United States, should be given a green card. The idea is to try to attract the best from around the world, then keep them here and overcome the reasons why they’ve tended to go back or go elsewhere. Just locally, when I came here to Dakota State, we were heavily male-oriented because of that technology orientation throughout our programs. But we have a program called CybHER that encourages girls and young women to come into careers here. We have been able to offer the GenCyber camps that are funded by NSA through NSF—the National Security Agency through the National Science Foundation. As a result, we’ve been able to bring a significant number of young women into our academic programs on campus. We’re nowhere near where we need to be, but we’ve started. In addition, South Dakota has been an immigrant and refugee resettlement state. And so, when I look at the largest school district—which is in Sioux Falls, not too far away—I think there are more than 50 languages spoken in the K–12 system. We know, I think, that in the next couple of years, the majority of students graduating will be from groups we typically have thought of as minorities. We are now extending our K–12 work beyond cybersecurity courses; we’re going to add artificial intelligence. We have a program in the high schools where students can take a full year of college credit in the last two years of high school. They will be able to take … 70 percent of that first year of college could be courses in our artificial intelligence bachelor’s degree or in one or other of our cybersecurity degrees.

Kerr: That sounds great. Dr. José-Marie Griffiths is the president of Dakota State University and served on the National Security Commission on Artificial Intelligence. Thanks so much for joining us today.

Griffiths: Thank you, Bill. Appreciate it.

Kerr: We hope you enjoy the Managing the Future of Work podcast. If you haven’t already, please subscribe and rate the show wherever you get your podcasts. You can find out more about the Managing the Future of Work Project at our website, hbs.edu/managingthefutureofwork. While you’re there, sign up for our newsletter.
