II. An Artificial Intelligence Primer
Artificial Intelligence (AI) is not a single defined technology, but rather a term that encompasses a spectrum of real tools and aspirational systems, which can appear to make decisions the way humans do. The use of advanced algorithms in law-making will force regulators and administrators to decide which embodiment of AI is both feasible and acceptable. The term Artificial Intelligence triggers different visions for different people. This primer focuses on the most common branch of AI, machine-learning, and introduces basic algorithms, machine-learning tools, and the limits of algorithmic transparency.
A. Two Types of Algorithms
Artificial Intelligence is typically assumed to have a learning capability, giving an algorithm the apparent ability to improve on its own. Yet, many non-learning algorithms are already in place to facilitate regulatory-like human decision-making. In fact, many of the algorithms currently implemented by the government for administrative and adjudicative purposes appear to be non-learning algorithms. Thus, it is important to highlight the function of these basic algorithms in comparison to their machine-learning counterparts. An elemental understanding of these concepts is important for evaluating the potential role for artificial intelligence and machine-learning in government.
1. Non-Learning Algorithms
At the “primitive” end, non-learning algorithmic tools are based on predefined rules that remain constant for all inputs. One basic illustration is a situation involving a rock band and a bowl of M&Ms. Imagine that a famous rock band has a long list of conditions that need to be met before they will agree to perform a concert. Among these preconditions is a requirement for a bowl of M&Ms in the dressing rooms, with one caveat: all the blue candies must be removed. One impractical way of tackling this task is to create an algorithm. First, the algorithm needs data, which, in this case, is the color of each M&M. Each M&M needs to be labeled with its respective color. Next, the algorithm needs to know what to do. The predefined rules would look something like:
IF M&M is blue
THEN discard
ELSE keep
These rules should ensure the algorithm keeps all of the blue M&Ms out of the bowls. Regardless of the number of M&Ms the algorithm processes, the rules do not change; it never learns.
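A minimal sketch of such a rule, written here in Python purely for illustration, makes the point concrete: the condition is hard-coded by the designer and stays the same no matter how many candies pass through it.

# A non-learning rule: the condition is fixed by its designer and never updates.
def sort_candy(color):
    if color == "blue":
        return "discard"
    return "keep"

bowl = ["red", "blue", "green", "blue", "brown"]
kept = [color for color in bowl if sort_candy(color) == "keep"]
print(kept)  # ['red', 'green', 'brown']; the same rule applies to every candy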
Conversely, sophisticated algorithms with a technical learning ability can exhibit decidedly non-learning characteristics. Consider the first iteration of Washington D.C.’s IMPACT teacher evaluation system. The IMPACT system would be one of the first to tie teachers’ pay and job security to performance. Under this new system, each teacher would receive a score generated by a complex set of algorithms intended to measure teacher performance in relation to student performance on standardized tests. In 2011, IMPACT resulted in the termination of 206 teachers with the lowest scores. As complex as this algorithm appeared to be at the time, one thing it could not evaluate was whether its own scores were right. For example, if a teacher’s students’ test scores declined relative to the previous year only because the previous year’s scores had been inflated, the algorithm would still select that teacher for termination, even if the teacher was the most effective in the school. The IMPACT algorithm could adapt and change based on data from the teachers who were not fired, but without feedback it never learns from the ones who were.
2. Machine-Learning Algorithms
Machine-learning algorithms find patterns in large amounts of data and are intended to solve problems dynamically based on the inputs they receive. Machine-learning, as it is aptly named, “learns” from these inputs. Consider a modification of the blue M&M illustration: imagine that M&Ms are produced in two different shades of blue, one the original blue and the other a purple-blue color. The band has decided that they now only want the original blue M&Ms removed and would like to keep the purple-blue candies in the bowls. The non-learning algorithm only knows to discard blue M&Ms. In our original algorithm, the purple-blue is still labeled blue, so both shades would still be discarded. In contrast, a machine-learning algorithm would receive feedback each time it discarded a purple-blue M&M. Eventually, a pattern would emerge, and the new algorithm learns not to discard the purple-blue M&Ms. This example is an abstraction meant to highlight the difference between machine-learning and basic algorithms; what is missing from the hypothetical is how the algorithm learned to keep the purple-blue M&Ms. To start, the “learning” in machine-learning comes in three different varieties: supervised, unsupervised, and reinforcement.
Supervised Learning
Supervised learning is a widely used learning type that can be used to classify data or to make predictions about the future. This type of algorithm involves large amounts of labeled data, which a computer uses to recognize patterns that it can apply to new, unlabeled data. To achieve this effect, an algorithm designer must first specify the correct label on a subset of data in order to train the machine-learning model, and must decide which categories the data should be sorted into, or which specific predictions to make.
Artificial Intelligence that can categorize future data, based on patterns from previous labeled data, is known as the classification type of supervised learning. One real-life example of classification can be found in legal aid technology. Spot is an artificial intelligence tool developed through the Legal Innovation and Technology (LIT) Lab at Suffolk Law. In plain terms, Spot is a robotic issue spotter; in particular, the algorithm is currently being trained to pin down specific legal categories in posts and submissions from people seeking legal help online. The data is labeled according to the National Subject Matter Index (NSMI) Database, which currently has almost 600 different legal aid content categories in its taxonomy. Hundreds of lines of user inputs have already been manually labeled with the NSMI taxonomy. Spot can subsequently be trained with this labeled data and use what it learned from the training set to decide which categories new, unlabeled user questions fit into. The ideal scenario would involve a person looking for answers to an issue on a legal aid website, and the website, with Spot enabled, would then accurately classify the type of legal issue raised, based on similar questions it has seen before. The user is then directed to the appropriate resources on the legal aid website, based on the classifications made by Spot. In its current form, the AI is a classic model for supervised learning classification. In the future, LIT Lab Director David Colarusso and his students hope that the technology will be “used by courts, legal offices, and nonprofits to direct people to the most appropriate resources.”
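For readers curious about the mechanics, the following sketch shows supervised classification in the same spirit as Spot. The categories and example questions are invented for illustration, and the general-purpose library used here is not Spot’s actual data or code.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training data: user questions tagged with invented legal-aid categories.
questions = [
    "My landlord is trying to evict me without notice",
    "I was fired after reporting unsafe conditions at work",
    "My ex-spouse stopped paying court-ordered child support",
    "The landlord refuses to return my security deposit",
    "My manager has not paid me overtime for months",
    "I want to change the custody arrangement for my kids",
]
labels = ["housing", "employment", "family", "housing", "employment", "family"]

# Train on the labeled examples, then classify a new, unlabeled question.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(questions, labels)
print(model.predict(["Can my landlord raise my rent in the middle of a lease?"]))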
Another form of supervised learning is regression. Regression algorithms are also trained using labeled data to identify patterns. However, instead of classifying new data into predefined categories, a regression algorithm makes outcome predictions about the future that are defined by the algorithm’s designer. A common example is an abstraction of Netflix recommendations. A simplified version of the Netflix algorithm would collect and label data based on which content you did and did not like. Netflix would then make a recommendation on what you might enjoy based on that labeled data. In short, the algorithm is predicting your enjoyment of the content it recommends.
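A toy regression sketch, again with invented features and ratings rather than anything Netflix actually uses, shows the same idea: predicting a numeric outcome from labeled examples instead of assigning a category.

from sklearn.linear_model import LinearRegression

# Invented viewing features for past titles: [hours of similar content watched, fraction of episodes finished]
past_titles = [[40, 0.9], [2, 0.1], [25, 0.8], [5, 0.3], [30, 0.95]]
enjoyment_ratings = [4.5, 1.0, 4.0, 2.0, 5.0]  # the ratings the viewer actually gave

model = LinearRegression().fit(past_titles, enjoyment_ratings)

# Predict a numeric enjoyment score for a new, unrated title rather than a category.
print(model.predict([[28, 0.7]]))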
Supervised learning works best for decisions that are meant to mimic human judgement. These algorithms are trained on past data sets labeled by human decisions. For Spot, the hope is that it can categorize a legal inquiry with the same accuracy as a competent lawyer. Netflix’s recommendations try to replicate the judgements humans make when deciding whether or not to watch a movie based on its genre, actors, similar movies, and so on.
Reinforcement Learning
Instead of learning from pre-labeled data, reinforcement learning algorithms draw lessons from experience. The algorithm’s designer will decide on the reward signals that will teach the algorithm when it has made the “right” choice towards reaching its objective. The reinforcement algorithm learns by trial and error, like a dog receiving a treat every time he sits when commanded. Alpha Zero is an incredible illustration of the power of reinforcement learning. One of Alpha Zero’s talents is playing chess. Rather than relying on example chess games, it tries many different approaches, and is rewarded or penalized depending on whether its behaviors help or hinder it from winning. In less than 24 hours, based only on self-play and the rules of the game, Alpha Zero was able to teach itself chess well enough to beat a world-champion program.
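A trial-and-error sketch far simpler than anything behind Alpha Zero, using an invented set of moves and hidden reward probabilities, illustrates how a reward signal alone can steer an algorithm toward the “right” choice.

import random

# Three possible moves with hidden chances of winning; the algorithm is never told
# which move is "correct," only whether a given attempt earned a reward.
win_probability = {"a": 0.2, "b": 0.5, "c": 0.8}
value_estimate = {move: 0.0 for move in win_probability}
attempts = {move: 0 for move in win_probability}

for trial in range(5000):
    # Mostly exploit the best-known move, but occasionally explore the others.
    if random.random() < 0.1:
        move = random.choice(list(win_probability))
    else:
        move = max(value_estimate, key=value_estimate.get)
    reward = 1.0 if random.random() < win_probability[move] else 0.0
    attempts[move] += 1
    # Update the running average of reward observed for the chosen move.
    value_estimate[move] += (reward - value_estimate[move]) / attempts[move]

print(value_estimate)  # "c" emerges as the best move purely through trial and error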
Reinforcement algorithms can be highly impressive, particularly when an AI is able to beat the best humans at their own game. However, the strength of reinforcement algorithms is largely confined to simulated environments, and unfortunately for us, reality is not limited to defined rules that lead to clear outcomes.
Unsupervised Learning
An unsupervised learning algorithm is not given a reward signal or labeled data. Instead, the algorithm will learn to recognize patterns, similarities, and dissimilarities in a group of unlabeled data. One example is the collaborative filtering component of Spotify’s Discover Weekly algorithm. Discover Weekly is an algorithmically generated playlist of thirty songs that a user has not yet heard, but may enjoy. Part of generating this playlist is looking at the other 200 million Spotify subscribers and creating groups of users with similar tastes. A song recommendation could be as simple as suggesting a song that some users in the group have listened to but another user has not. Spotify is a nice illustration, but since there are typically no defined outcomes in unsupervised learning, there tend to be fewer obvious applications for direct decision making.
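A minimal clustering sketch, with invented listening counts standing in for Spotify’s data, shows how groups can emerge from unlabeled data with no reward signal at all.

from sklearn.cluster import KMeans

# Invented play counts per listener across three genres: [indie, hip-hop, classical].
listeners = [
    [50, 2, 1],
    [45, 5, 0],
    [3, 60, 2],
    [1, 55, 4],
    [2, 1, 70],
    [0, 3, 65],
]

# No labels and no reward signal: the algorithm only groups listeners who look alike.
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(listeners)
print(groups)  # e.g., [0 0 1 1 2 2]: similar listeners fall into the same group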
B. Transparency
An algorithm’s characteristics will determine the transparency available to its users and the amount of influence a human operator can have. Generally, the more autonomous an algorithm is, the less transparency the operator will have into its decision-making.
1. The Black Box Problem
The complexity of machine-learning algorithms can also make it difficult to explain how the machine makes decisions. An inability to fully understand an algorithm’s decision-making process, and the inability to predict its outputs, is generally referred to as the Black Box Problem. AI exists on a spectrum of transparency, which can be loosely categorized as having strong or weak black boxes. Strong black boxes are algorithms whose complexity and design make their processes almost entirely inaccessible to humans. On the other hand, weak black box algorithms can, to an extent, be reverse engineered to gain further understanding of how specific outcomes are produced.
Non-learning algorithms typically do not encounter the Black Box Problem. This is because the algorithm’s designers will input rule constraints, and since the process remains constant, a willing designer will be able to map out the entire decision-making process.
Learning algorithms, by design, will change their approach according to what they have learned from new inputs. Complexity and mathematical approach play an important role in the transparency of an AI’s black box. However, controlling for complexity, an algorithm’s range of transparency can be roughly estimated by its learning type. First, supervised learning methods can produce a weaker black box effect. Since example training data is given to the algorithm with the “right” answers, it is easier to explain the conclusions the algorithm reaches by comparing them to the human judgements in its training data. A designer expects an algorithm utilizing supervised learning to apply reasoning similar to that of the humans who labeled its training data. Second, reinforcement learning algorithms may be less transparent. Developers of reinforcement-type algorithms do not dictate how a desired outcome is reached, since no model approach is given. The main source of information that users might have access to is the set of rules of the simulated environment in which the algorithm operates. Third, unsupervised learning algorithms do not have defined outcomes; therefore, it may be more difficult to determine the algorithm’s decision-making process. However, since outcomes are not defined, the black box nature of unsupervised learning algorithms may seem less urgent to stakeholders. Unsupervised learning would typically be used to inform decisions, rather than to make them.
III. The Building Blocks of AI in Government
The proliferation of artificial intelligence in American society follows considerable efforts at digitization, big data collection, and acceptance in industry. Today, the United States government has recognized its place as a global leader in the new era of AI, and has plans to remain a leader. Although there have been sophisticated implementations of machine-learning in the private sector, courts and governments have been slower to adopt. As of early 2020, “[n]o judicial or administrative body in the United States has instituted a system that provides for total decision-making by the algorithm. . . .” Advancements in computer technology and data collection are necessary conditions for realizing AI in any modern system, and acceptance is a crucial requirement for implementing new technology in rulemaking systems. This section will highlight the transformations within American government that provide a foundation for incorporating artificial intelligence in local rulemaking.
A. Digital Government
1. E-Government
In the 1990s, the internet revolution prompted state and federal government agencies to digitize their services and make greater use of the internet. By 2001, the American government was in the early phase of its transformation towards a digital government. The following year, Congress adopted the E-Government Act of 2002, further cementing a commitment towards electronic government services and processes. Throughout multiple administrations, the United States has continued to signal its intention to move towards a digital government. In 2012, President Obama launched a “comprehensive Digital Government Strategy aimed at delivering better digital services to the American people.” In 2017, the Trump administration appeared to follow suit by creating the American Technology Council (ATC) to improve digital government services. That same year, the ATC produced a 61-page Federal IT Modernization Report, which was intended to be “a key piece in this Administration’s efforts to modernize Federal IT.” Currently, the United States is a champion of e-government, and ranks near the top of the United Nations E-Government Survey. Looking ahead, the United States is expected to hold its leading role in e-government development globally.
2. Open Data Initiatives
Digitization is a prerequisite for collecting and storing massive amounts of information. As early as 1996, experts began to recognize that digital storage is far more cost-effective than paper. The move towards a digital government paved the way for a data-reliant open government. Efforts to collect and distribute data to improve operations eventually became a core feature of government administration innovations. As part of his 2012 Digital Government Strategy, President Obama issued Memorandum M-13-13, Open Data Policy-Managing Information as an Asset. In 2014, the Food and Drug Administration (FDA) launched the openFDA initiative aimed at “creat[ing] easy access to public data and a new level of openness and accountability; ensure the privacy and security of public FDA data; educate the public; and save lives.” Less than a year later, the Environmental Protection Agency (EPA) announced a program that would leverage big data and data visualization to evaluate environmental policy issues. Recently, Congress passed the Foundations for Evidence-Based Policymaking Act of 2018. The Act incorporated language from the Open Government Data Act, which requires open government data assets made available by federal agencies to be published as machine-readable data. In addition, the General Services Administration was directed to “develop and maintain an online repository of tools, best practices, and schema standards to facilitate the adoption of open data practices across the Federal Government.”
Cities have also recognized the need for open data governance. Los Angeles, New York, Boston, and Chicago have already established their own data-focused teams, departments, and offices. There are many ways cities are leveraging data, and use cases span health and human services, infrastructure, public safety, mobility, and regulation. Detroit is one city turning its eye towards data in search of a solution to a key issue: high-threat infrastructure. In 2014, Detroit’s Blight Removal Task Force was commissioned to help identify candidates for demolition and high-threat areas within the city. The task force collected several data points on approximately 99 percent of the city’s properties. This data was then analyzed to recommend changes to processes for hearings, judgment liens, and foreclosure, in hopes of improving blight removal in the city. Overall, local governments, for better or worse, have continued to integrate data science and analytics into their planning and governance.
B. Robots in Regulation & Administration
Over the last twenty years, digitization and data have become commonly accepted tools for assisting governance efforts. Now, the government has started to take the next step in its technological evolution with the adoption of AI-based tools. The most robust canvass of AI implementation in federal agencies was recently conducted by researchers at Stanford University and New York University, which culminated in the Administrative Conference of the United States (ACUS) AI Report. The researchers surveyed 142 federal departments, agencies, and sub-agencies. Nearly half of the agencies studied (45%) have experimented with AI and related machine-learning tools. AI tools spanned a range of governance tasks including enforcing, adjudicating, monitoring, communicating, and extracting information. As of the report’s publication, no federal agency appears to be using AI to generate official rules. However, the FDA’s FAERS pilot project demonstrates the Government’s willingness to incorporate machine-learning in the regulatory process.
The FDA is, once again, among the leading government agencies experimenting with technology. On top of rulemaking and issuing guidance, the FDA collects and monitors millions of adverse event reports in post-market surveillance for policymaking. These adverse event reports, along with medication error reports and product quality complaints, are housed in the FAERS database. One of the FDA’s pilot AI programs had evaluators label sample data that was then used to train a machine-learning algorithm. The outcome was a rank-ordering of FAERS database reports based on their probability of containing policy-related information. In other words, the AI predicted which reports would be the most important. Not only could these results help prioritize FDA resources, but the AI tools could also facilitate a shift in regulatory efforts from premarket approval to post-market surveillance methods.
Pilot and permanent uses of AI in government agencies signal increasing acceptance. The internet revolution marked the start of the transformation into digital government and led the way toward open-data governance. Building internal technological capabilities and data collection are important first steps in AI-tool implementation for the government. The current exploration of machine-learning in federal agencies provides a starting point. In the new age of AI, the United States government seems prepared to take on the next technological metamorphosis.
IV. Smart Regulations
A. City Laboratories for AI Governance
Digitization and adoption of machine-learning have started to normalize the idea of AI in governance. It appears that administrative agencies have taken some steps towards adopting AI-enabled policymaking, but local government may be a more fitting place to start. In New State Ice Co. v. Liebmann, Justice Brandeis famously remarked that “a single courageous State may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country.” Artificial Intelligence in rulemaking is a fairly novel proposition, one for which a single courageous city may serve as a laboratory to try new experiments in regulating.
1. Smart City Experimentation
Cities are already experimenting with cutting-edge technology. The expression “smart city” has emerged as a broad term representing efforts to advance urban progress through technological innovations. A common conception of a smart city program is the use of internet-connected (IoT) devices that gather real-time data to inform city agencies of patterns and trends. Both the federal and local governments are keen to build on IoT technology. For example, in 2016, Columbus, Ohio won the U.S. Department of Transportation’s Smart City Challenge and an accompanying $50 million in grant funding. This win launched Smart Columbus, a program that envisions Columbus as a leading laboratory for reinventing mobility. Columbus’s efforts are built on an initial database known as the Smart Columbus Operating System (SCOS), providing users open and real-time mobility data. So far, SCOS has supported apps that track free parking spaces and give real-time access to public transportation. Sixteen other proposed projects are currently in development, with hopes of transforming Columbus into “America’s Smart City.” These next projects include connected transit systems, futurized payment systems, self-driving shuttles, and reduced carbon emissions. At this stage, Columbus is still experimenting; however, the successes and failures of the midsized city will serve as an auspicious model for innovation and mobility.
2. The Zoning Hypothetical
The following illustration is meant to explore a potentially transformative application of artificial intelligence in rulemaking: algorithmic zoning. Researchers at the MIT Media Lab, Kent Larson and John Clippinger, have started thinking through the idea of algorithmic zoning. The two researchers propose a system where AI would enable a responsive, narrowly tailored, and data-driven system of zoning regulations. Their theoretical approach rests on the premise that existing zoning regulations work to optimize financial benefits. Their goal is instead to optimize zoning regulations for social, cultural, and environmental benefits. This new zoning system receives data inputs from mobility times, unit economics, amenities scores, health outcomes, and other indicators, and processes this data through a machine-learning model that will “maximize local resident happiness.” In this model, information about residents’ preferences would be collected through digital tokens that residents would “spend” towards projects they value. The system would then create sets of rules based on the desires and preferences of residents in the given area. The end result would be the replacement of the static rules-based system with “dynamic, self-regulating systems.”
Larson and Clippinger’s conception of algorithmic zoning would fundamentally shift the way officials make and change zoning regulations. The use of artificial intelligence in zoning highlights the pronounced effects the technology would have at different points of the rulemaking process: (1) in collecting inputs for consideration, (2) in processing feedback, and (3) in final decision-making. First, machine-learning enabled regulation would force the collection of more data and expand the scope of inputs used in rulemaking consideration. Large data sets are required for machine-learning models to be effective, and so cities would have to consistently build and update their underlying databases. Increasing the amount of data collected would also mean increasing the capacity for data collection. In turn, local officials could expand the types of inputs they consider for rulemaking. For example, in the algorithmic zoning proposal, collecting information on residents’ values and preferences through digital tokens would provide mass data on social values. Previously, aggregate social value inputs would have been very hard to collect and consider, if they were considered at all. Second, artificial intelligence can ingest and synthesize data at volumes beyond human capacity. Like digitization, a major benefit of artificial intelligence is convenience and efficiency. An increased use of algorithms at this point in the rulemaking process would help conserve valuable human resources at local government offices. Finally, machine-learning models would be able to generate hyper-custom and dynamic regulation. In the Larson-Clippinger hypothetical, this would mean the automatic adoption of machine-learning produced zoning rules. Without human intervention, machine-learning systems could easily provide rules for the specific needs of each sub-community, and adapt as quickly as new information comes in.
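The Larson-Clippinger model remains a concept, but a toy sketch suggests the shape such a system might take. Every rule name, indicator, weight, and token count below is invented; the point is only that token-weighted preferences could be scored and re-ranked automatically as new data and votes arrive.

# Hypothetical: score candidate zoning rules by token-weighted resident preferences.
candidate_rules = {
    "mixed_use_corridor":  {"mobility": 0.8, "affordability": 0.6, "green_space": 0.3},
    "low_density_housing": {"mobility": 0.4, "affordability": 0.3, "green_space": 0.7},
    "transit_overlay":     {"mobility": 0.9, "affordability": 0.5, "green_space": 0.4},
}

# Residents "spend" tokens on the outcomes they value most.
token_votes = {"mobility": 120, "affordability": 200, "green_space": 80}
total_tokens = sum(token_votes.values())
weights = {outcome: votes / total_tokens for outcome, votes in token_votes.items()}

def resident_benefit(indicators):
    # Score a candidate rule by how well it serves the token-weighted preferences.
    return sum(weights[outcome] * indicators[outcome] for outcome in weights)

ranked = sorted(candidate_rules, key=lambda rule: resident_benefit(candidate_rules[rule]), reverse=True)
print(ranked)  # a ranked list of rules, recomputed as new data and votes come in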
Overall, the use of AI has the potential to increase a city’s ability to respond more efficiently with effective regulation. Moreover, most of the data and technology required for the Larson-Clippinger hypothetical, or a similar model, already exists. Yet, so far, no local government has fully implemented AI as a decision maker. The use of AI to inform regulation is in its infancy, and automatic AI decision-making remains uncharted territory in public agencies and administrations. Cities have the opportunity to test the waters and set the stage for artificial intelligence governance.
B. Limitations
Artificial intelligence governance is a bold proposition, even for the smallest or most technologically advanced town. Administrators and officials must consider the social, political, and legal limitations of any AI implementation. For example, a program such as algorithmic zoning would have to be accepted by the people, chosen by the regulators, and conform to state constitutional requirements of substantive due process and equal protection. Before cities can move forward with their experiments, questions of bias and accountability need to be addressed.
1. Machine Bias
AI and its underlying algorithms are susceptible to biased outcomes and discriminatory results. Not only can machine-learning models reflect the biases of the people who develop them, but they can also amplify those biases. Take, for example, supervised learning algorithms that intake human-classified data. Imagine an app named colorID that identifies the different colors contained in a picture. Ten humans are charged with classifying the training data for the app’s underlying algorithm. Each of the trainers comes across the same picture of a dress and all ten of them label it white and gold. At the same time, a strong debate is brewing over whether the dress from the same picture is actually white and gold or instead blue and black. As arguments ensue, people turn to colorID in hopes of settling their disagreements. The app determines the colors of the dress to be white and gold based on its training data. However, it turns out that the dress was in fact blue and black. Since the colorID algorithm only received training inputs from people on one side of the debate, it was not only wrong, but also provided incorrect information to every user relying on the app for answers. Unfortunately, these types of errors are not just hypothetical and are often far more problematic. In 2015, Google experienced problems with its Photos image recognition system, which automatically mislabeled African Americans as “gorillas” and “animals.” Google is still having a hard time fixing the algorithm and has resorted to censoring tags for gorillas, chimps, and chimpanzees.
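The colorID failure can be restated as a property of the training data itself. In the sketch below, with invented pixel values, the ambiguous image appears in the training set with only one label, so the model can only ever repeat that answer.

from sklearn.neighbors import KNeighborsClassifier

# Invented training pixels (RGB): the ambiguous dress tones were labeled "white and gold"
# by every trainer, so the disputed region of color space has only one answer in the data.
pixels = [
    [245, 245, 240],
    [200, 170, 90],
    [130, 125, 150],
    [135, 128, 155],
]
labels = ["white", "gold", "white and gold", "white and gold"]

model = KNeighborsClassifier(n_neighbors=1).fit(pixels, labels)

# A new photo of the same dress falls nearest the one-sided examples.
print(model.predict([[132, 126, 152]]))  # ['white and gold'], even if the dress is blue and black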
Biased trainers or algorithm designers are not required to produce problematic outcomes. Even “neutral” algorithms and training data can result in disparate effects and amplify individual bias. One problem is the negative feedback loop. For example, African-Americans who are primarily the target for high-interest credit options could be fed advertisements for these products online. Consumers click on these ads without realizing that they will continue to receive similar predatory online advertisements. In this case, the algorithm accepts these clicks as reinforcement, and might never suggest the lower-interest credit options that the consumer could be eligible for and quite possibly prefer. Another problem emerges with the increasing occurrence of filter bubbles, a term coined by author Eli Pariser. Algorithms have a great capacity to personalize recommendations online by collecting individual information on location, internet behavior, and click history. Personalized recommendations work well when users are looking for a movie to watch or a song to listen to. Unfortunately, people are increasingly relying on online platforms not only for their entertainment, but also for their information. The result: algorithms feeding users news sources that are similar to what they have chosen in the past. Pariser explains, “[p]ersonalization filters serve a kind of invisible autopropaganda, indoctrinating us with our own ideas, amplifying our desire for things that are familiar and leaving us oblivious to the dangers lurking in the dark territory of the unknown.”
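A stylized sketch of such a loop, with invented ad categories and a consumer assumed to click only on what is shown, illustrates how clicks alone can lock the algorithm into one kind of recommendation.

import random

# Two ad categories; the system only learns from the clicks it actually receives.
ad_weights = {"high_interest_credit": 1.0, "low_interest_credit": 1.0}

def pick_ad():
    total = sum(ad_weights.values())
    return random.choices(list(ad_weights), [w / total for w in ad_weights.values()])[0]

for impression in range(1000):
    shown = pick_ad()
    clicked = shown == "high_interest_credit"  # the targeted consumer keeps clicking what is shown
    if clicked:
        ad_weights[shown] += 1.0  # each click reinforces the same category

print(ad_weights)  # high-interest ads dominate; the alternative is almost never surfaced again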
To fix machine errors, humans need to intervene. Regulators must be mindful of the biases that can be introduced and the adverse outcomes that can result from AI. For algorithms used in rulemaking, discriminatory outcomes are not only morally problematic, but also have the potential to violate equal protection.
2. Transparency
The proverbial Black Box Problem poses many complications for regulators hoping to leverage artificial intelligence in governance. The ideal amount of transparency in decision-making is a full account of a decision’s origin, including inputs, outputs, and the main factors that drove it. Unfortunately, even full disclosure of the data inputs, the entire source code, and an opportunity to watch the algorithm “at work” may not be enough to discern how the artificial intelligence comes to its decisions. An inability to audit the decision-making process makes it difficult to challenge regulations and assign responsibility for adverse outcomes.
3. Governance Mechanisms
The fast advancement of private sector AI has raised questions about the regulatory mechanisms governing it. The potential for bias, privacy violations, and a lack of transparency have become mainstream concerns. For instance, in 2018, Congress held three hearings concerning artificial intelligence, and at the third hearing, members brought in industry professionals to discuss the path towards future AI policies. Later, the House Committee on Financial Services announced a new Artificial Intelligence Task Force focused on applications of machine-learning in financial services. These current discussions appear to focus on private sector regulation, but if the government continues to expand its use of machine-learning, lawmakers need to begin considering standards for public sector AI.
One way to address challenges with transparency and accountability is to put limitations, or outright bans, on the technology. This approach involves limiting data sets to make them easier to review, or banning complex algorithms that have strong black boxes, in favor of algorithms with weak black boxes. However, since AI use continues to evolve in the private sector, it may be difficult for governments to limit their technological progression.
Instead of limiting the use of artificial intelligence, regulators could work with the transparency available and attempt to provide accountability in AI-enabled rulemaking. Though people may not always be able to understand the inner workings of an algorithm, rule makers can still be transparent about the data used, how an algorithm is trained, and what the AI is trained for. Where transparency is available, governments will have to consider regulatory mechanisms for transforming “transparency into meaningful accountability.” The authors of the ACUS AI Report suggest three familiar avenues for these regulatory mechanisms: legal accountability, public accountability, and enforced rules. Legal accountability could take the form of judicial review of AI activity and decisions. Public accountability could facilitate notice-and-comment style feedback or mandatory audits and impact assessments. Rules would likely be tailored to the specific needs of the government or agency, but could be accompanied by a mandatory approval process and ex-post review. Government officials already have tools of accountability at their disposal, and they need to decide whether the existing mechanisms will be enough.
A different way of implementing governance mechanisms is to have a Human in the Loop (HITL). HITL involves the presence of an active human operator in the AI system. Typically, HITL is used to improve algorithmic performance, and “with the help of humans, the machine becomes smarter as well as more trained and confident to take the quick and accurate decisions. . . .” However, having a human in the loop is also a powerful regulatory tool. The human would provide supervisory and accountability functions at crucial points of the system’s operation. In the supervisory role, humans can identify misbehavior, intervene, and change outcomes in real time. Humans can also provide instant ex-post feedback on unintended adverse outcomes. In addition, HITL provides an accountable entity. Having a human responsible for an algorithm would keep administrations from simply blaming adverse outcomes on the machine. Moreover, assigning accountability to identifiable people gives consumers and constituents the power to advocate for forced human intervention in AI.
Forced human intervention can happen after the algorithm produces outcomes, but before those outcomes are applied. If a model produces a set of rules, it is essentially predicting which rules will produce the best outcome based on the information available. Regulators could view these predictions as informative rather than decisive. AI could still provide important information that would push regulators to consider solutions beyond the scope of their own consideration or imagination. Ultimately, the predicted rules would serve simply as suggestions, and with HITL, humans still have the final say.
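A simple sketch of such a gate, with invented rule names and scores, shows the division of labor: the model ranks and suggests, while an identifiable official decides what actually takes effect.

# Hypothetical human-in-the-loop gate: the model's predicted rules are suggestions,
# and only rules approved by an identifiable official take effect.
proposed_rules = [
    ("transit_overlay", 0.91),        # (rule, predicted benefit score); values are invented
    ("mixed_use_corridor", 0.74),
    ("parking_minimum_repeal", 0.52),
]

def official_review(rule, score):
    # Stand-in for a real review step (a hearing, a sign-off, notice and comment).
    return score >= 0.70  # here the official only advances high-confidence suggestions

adopted = [rule for rule, score in proposed_rules if official_review(rule, score)]
print(adopted)  # the machine proposed three rules; the human adopted two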
V. Conclusion
The next technological revolution is here. Artificial intelligence is abundant, and some experts believe that just as electricity transformed almost everything 100 years ago, AI will follow suit in the next several years. Addressing the limitations of artificial intelligence will involve testing approaches to accountability and applying regulatory mechanisms in new ways. Overhauling the way human regulators make decisions requires experimentation. The same qualities that allow cities to be great catalysts for urban innovation make local governments the perfect laboratory for AI regulation.