Justice, Fairness, Inclusion, and Performance.

This piece was written by Dr. Alan R. Shark, Executive Director, CompTIA’s Public Technology Institute; Assistant Professor at the Schar School of Public Policy and Government, GMU; and NAPA Fellow and Chair of NAPA’s Standing Panel on Technology Leadership.

Artificial intelligence (AI) has become far more than a buzzword; it is finding its way into applications at all levels of federal, state, and local government, including public health, policing, climate change, identity management, transportation, and space exploration, to name just a few. While there is common agreement that AI is being deployed, arriving at an agreed-upon definition remains elusive; there are more than a dozen definitions from which to choose. In the world of public administration, we can safely say that what we are witnessing today is best described as “augmented intelligence,” in which machine learning informs us about such things as data patterns, anomalies, predictive analytics, and trends. In that sense, AI is perhaps the single most promising emerging technology in public management in decades. As computing technologies march forward with ever-greater speed and sheer processing power, alarm bells are sounding over unintended bias, ethical trepidations, and unintended consequences. Indeed, there is a worry that AI, regardless of definition, requires the inclusion of different viewpoints, brought by people who can work alongside our data scientists and software engineers. Most of us who actively watch the growth of AI applications want to see other disciplines involved in the logic behind all those mysterious algorithms that instruct our computers, to help ensure a trustworthy AI environment. Such disciplines would include the scientific, research, humanities, philosophy, history, and ethics communities, as well as public policy thought leaders at the federal, state, and local levels.

The past two Administrations have risen to the occasion by establishing a robust AI agenda and, more recently, by creating the National Artificial Intelligence Initiative Office, housed in the White House Office of Science and Technology Policy (EOP/OSTP). Its mission is straightforward: to oversee and implement the United States’ national AI strategy, serving as the lead coordinating arm for no fewer than 20 key agencies, including Defense. While I applaud the hard work and dedication of this new office and initiative, there remains a need for greater representation from state and local government as well. In response to such calls, the U.S. Secretary of Commerce issued a press release and posted a Federal Register Notice (FRN) (86 FR 50326) requesting nominations for the National Artificial Intelligence Advisory Committee (NAIAC). As outlined in the National Artificial Intelligence Initiative Act of 2020, the NAIAC will advise the President and the National Artificial Intelligence Initiative Office on several topics related to artificial intelligence.

The advisory panel is to consist of at least nine members drawn from academia, industry, the public sector, non-profits, and civil society, representing broad, interdisciplinary expertise and perspectives across a range of AI-relevant disciplines. If all goes according to plan, this new advisory body will provide advice and information on science and technology research, development, ethics, standards, education, fairness, civil rights implications, technology transfer, commercial application, security, and economic competitiveness, all as they relate to AI. Despite the significance of this advisory body, there is no specific mention of state and local government thought leadership. On the other hand, there is nothing stopping such individuals from being considered or nominated.

It is important to note that state and local governments are deeply concerned about both the promise and the challenges of AI. For example, New York City has released a 116-page strategic vision for how it plans to benefit from artificial intelligence as a community, with an emphasis on ethical considerations. The report details how New York can modernize the city’s data infrastructure, the areas where AI can do the most good with the smallest potential for harm, and ways the city can use AI internally to better serve residents. The report is full of optimism about the responsible use of AI while at the same time recognizing how it can be misused. New York already has a Director of Artificial Intelligence, and the City is already using AI in a few areas, most notably cybersecurity.

The National Association of State Chief Information Officers (NASCIO) recently released its 2021 State CIO Survey, Driving Digital Acceleration, which ranked artificial intelligence/machine learning as one of the two most impactful emerging technologies of the next three to five years. These are but two examples.

As AI programs advance, there is also a growing and recognized need for AI program oversight and auditing at all levels of government. The most exciting news here is the release of a years-long project by the U.S. Government Accountability Office (GAO), which published Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities. The report makes clear that the framework was designed to apply to all levels of government, and as part of the development process its authors interviewed AI subject matter experts representing industry, state audit associations, nonprofit entities, and other organizations, in addition to traditional federal resources. The framework provides questions to ask and procedures to follow that program auditors at any level of government can use. It dives deeper than the traditional AI principles published by numerous agencies over the past several years, offering operational steps to better evaluate programs that rely on AI.

AI has the potential to dramatically change government processes at the federal, state, and local levels. With so much at stake, it is imperative that we build collaboration among experts both inside and outside of technology, and that we foster programs that allow government officials to share information, learn from one another, and strive to make AI systems resilient and safe while providing the public with improved services.

Finally, with so much at stake, having a healthy mix of senior public sector leaders from all levels of government working alongside those involved in AI planning helps inform us of the immediate and upcoming needs of tomorrow’s workforce. What should we be doing to better prepare our students to develop the critical skills needed to comprehend AI and to become better auditors for the public good? What might those skills look like, and how can we build stronger partnerships and collaboration to address the challenges and opportunities AI provides?
