
Administrative & Regulatory Law News

Summer 2023 — Dealing with Disruption in Administrative Law and Regulation

The Promise and Peril of ChatGPT in Informal Rulemaking

Stephen M. Johnson

Summary

  • The most significant limitation of ChatGPT is that it often generates false information.
  • It is unlikely that ChatGPT will generate comments that include the unique situational knowledge that agencies seek from the public.
  • There are several ways that agencies might constructively use ChatGPT and other artificial intelligence tools in the notice-and-comment rulemaking process, including educating the public about proposed rules.


The dawn of e-rulemaking promised broader public participation, increased government efficiency, and higher quality decisionmaking. Moving notice-and-comment rulemaking online has increased the number of persons participating in the process for some rules, but it has not necessarily improved the quality of the comments they submit. In some cases, the transformation of the process has created new challenges for agencies by making it easier to flood them with duplicative and potentially fraudulent comments to which they must respond. Still, every technological innovation holds promise for improving the rulemaking process, and artificial intelligence, including ChatGPT, is the latest tool that might transform, or at least significantly affect, the process.

If ChatGPT is only used to automate the creation of public comments, it may increase public involvement in the commenting process and assist persons in writing clear and intelligible comments, but it is unlikely to improve the quality of public comments because it won’t necessarily make it any easier for commenters to provide agencies with the type of information that agencies are seeking. In addition, the use of ChatGPT to draft comments may create challenges for agencies in identifying useful or accurate information in the comments and responding to the comments, which could delay the completion and implementation of rules.

There is, however, an alternative way that ChatGPT could improve the quality of comments and improve agency decisionmaking. Agencies could use ChatGPT, or other artificial intelligence tools, to facilitate public understanding of the rules and the types of information that agencies are seeking in the rulemaking process. In addition, ChatGPT and other artificial intelligence tools might help agencies efficiently organize and summarize comments they receive and generate summaries of comments and responses for final rules.

The Importance of Public Participation

When developing most rules, agencies rely heavily on scientific and technical information, so they prefer that commenters provide detailed facts, studies, or analyses in comments. Agencies also benefit from receiving comments that identify how a rule will affect communities, small businesses, or individuals in ways the agencies may not have anticipated.

Many comments, however, simply express support or opposition to rules or specific portions of rules. Those value, preference, and sentiment comments are significantly less helpful to agencies. When the federal government migrated notice-and-comment rulemaking online, it created guidance for commenters. The guidance describes the types of comments agencies generally find more effective in the rulemaking process.

ChatGPT

ChatGPT is one of several generative AI tools that can be used to summarize information and generate various types of content. Trained on over 45 terabytes of data, it predicts the next word in a body of text and adds that word to a string of text, recursively predicting the next word to construct content in response to users’ prompts. In creating content, though, the model does not always add the statistically most probable word to the existing body of text. At random intervals, it instead chooses another highly probable, but not the most probable, word. Consequently, a user who asks ChatGPT to perform the same task four times will likely receive four different outputs.
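The randomized word choice described above can be illustrated with a toy sketch. The candidate words and probabilities below are invented for illustration only; a real model chooses among tens of thousands of tokens at every step.

```python
import random

# Toy illustration: the model assigns probabilities to candidate next words.
# (These words and their probabilities are invented for this example.)
next_word_probs = {"rule": 0.45, "regulation": 0.30, "comment": 0.15, "agency": 0.10}

def pick_next_word(probs, greedy=False):
    """Greedy decoding always takes the most probable word;
    weighted sampling occasionally picks a less probable one."""
    if greedy:
        return max(probs, key=probs.get)
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(pick_next_word(next_word_probs, greedy=True))   # always "rule"
print([pick_next_word(next_word_probs) for _ in range(4)])  # likely a mix
```

Because each sampled word becomes part of the context for the next prediction, one unlikely early choice can send the whole output down a different path, which is why repeated identical prompts diverge.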

Like any tool, ChatGPT has both strengths and weaknesses. It can produce very clearly written, credible-sounding material in seconds. The output is usually very well organized and presented in a clear, analytical format, with few grammatical errors. ChatGPT is very effective in summarizing material. In addition, ChatGPT is customizable. Although it was trained on a massive, general-purpose corpus and optimized for general-purpose dialogue, users can train it further with additional data and prompts and “fine-tune” it to summarize a more limited corpus of data or to perform other specialized tasks.

There are, however, some important limitations to ChatGPT. The most significant drawback is that it often makes up facts. By design, the outputs sound convincing and may even be supported by citations to articles or studies, but the underlying facts and supporting authorities are fabricated.

Potential Uses of ChatGPT by the Public or Interest Groups

The most apparent way that the public, organizations, or interest groups could use ChatGPT in the rulemaking process would be as a tool to aid in drafting comments. Members of the public could ask ChatGPT to draft a comment in support of, or in opposition to, a proposed rule. If they wanted to know more about the rule before prompting ChatGPT to draft the comment, they could ask ChatGPT for information about the rule, including its background, purposes, and implications, and any legal issues that arise in connection with it, although ChatGPT would first need to be trained on, or supplied with, that information.

Even if members of the public did not utilize ChatGPT in that manner, interest groups could use it to assist them in their mass comment campaigns. Instead of simply providing supporters with form letters or talking points to use in submitting comments, the groups could prepare a wider variety of customizable comment letters based on supporters’ interests or provide a web interface for supporters to facilitate their creation of “personalized” comments.

Using ChatGPT in this manner would reduce some of the barriers that persons face in writing and submitting comments on rules and help them draft comments that are clear, concise, and well written, qualities that generally make comments more effective. ChatGPT might also help members of the public learn more about the rules so that their comments could address how the rules directly affect them.

There are important limits to the use of ChatGPT in drafting comments, though. The most significant limit is that it often generates false information. As an experiment, I asked it to draft a comment from a farmer indicating opposition to EPA’s 2015 “waters of the United States” rule and to provide data and supporting studies for the comment. While it created a clear and concise comment that identified concerns that many farmers expressed about the rule, it made up data. It attributed data to studies from the American Farm Bureau Federation and National Cattlemen’s Beef Association that did not exist. While organizations may be aware of this limitation of ChatGPT, the general public may not be aware that they might submit false information when generating comments with ChatGPT.

Even if ChatGPT didn’t generate false information, it might be of limited value in drafting comments because it might be used to generate only sentiment and preference comments. Most members of the public are likely to ask ChatGPT to draft a comment for or against a rule or part of a rule. Even though agencies provide guidance on effective public comments, many commenters do not draft their comments in the ways the guidance suggests.

Finally, it is unlikely that ChatGPT will generate comments that include the unique situational knowledge that agencies seek from the public. An algorithm that anticipates what words will be used in sequence based on public information on which it was trained is unlikely to anticipate unique information that is not publicly available but tied to a person’s individual experiences.

Challenges for Agencies Raised by ChatGPT

If the public used ChatGPT to a meaningful extent to draft comments, it could create several challenges for agencies, including (1) significantly increasing the volume of comments that agencies need to independently analyze; (2) increasing the volume of comments that merely provide expressions of sentiment, values or preferences; and (3) increasing the volume of comments that provide false information to agencies.

One of the major challenges agencies have faced because of the transition to e-rulemaking has been the increase in the number of comments on some high-profile rulemakings. Mass comment campaigns, where hundreds or thousands of persons submit identical or nearly identical comments, usually dominate the rulemakings that attract significant numbers of comments. Over time, agencies have developed or acquired robust “de-duplication” software that analyzes comments and groups identical or nearly identical comments together so that the agencies can respond to all similar comments simultaneously. If commenters use ChatGPT more frequently to draft comments, it could undermine the ability of agencies to identify similar comments using existing de-duplication software. With ChatGPT, which often generates several different responses to the same prompt, commenters could make identical or nearly identical arguments using very different language. If it becomes more difficult for agencies to identify duplicate comments, they will have to spend significantly more time and resources analyzing and responding to comments.
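A rough sketch can show how such grouping works, and why varied wording defeats it. The sketch below uses Python’s standard-library string similarity; the sample comments and the 0.9 threshold are invented for illustration, and production de-duplication tools are considerably more sophisticated.

```python
import difflib

def group_near_duplicates(comments, threshold=0.9):
    """Group comments whose surface-text similarity exceeds the threshold,
    so an agency could respond to each cluster once."""
    groups = []  # each group is a list of similar comments
    for comment in comments:
        for group in groups:
            # Compare against the first comment in each existing cluster.
            if difflib.SequenceMatcher(None, comment, group[0]).ratio() >= threshold:
                group.append(comment)
                break
        else:
            groups.append([comment])  # no close match: start a new cluster
    return groups

comments = [
    "I oppose this rule because it burdens small farms.",
    "I oppose this rule because it burdens small farms!",
    "This rule harms family agriculture and should be withdrawn.",
]
clusters = group_near_duplicates(comments)
print(len(clusters))  # prints 2
```

Here the first two comments cluster together, but the third, which makes essentially the same argument in different words, lands in its own cluster. ChatGPT-generated rephrasings would have the same effect at scale, multiplying the “unique” comments an agency must separately address.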

To the extent that the comments generated by ChatGPT merely express values and preferences, it will be fairly easy for agencies to respond to them once they have identified which comments are unique. If ChatGPT leads more people to comment, however, newcomers to the process who do not realize that rulemaking is not a plebiscite may feel “disenfranchised” when agencies finalize rules in a way that appears to contradict the will of most commenters.

Most importantly, though, if commenters increasingly use ChatGPT to draft comments, there could be a significant increase in the number of comments that contain false information. As a result, the information commenters provide to agencies would be far less reliable, and agencies would have to work harder to identify accurate and useful information. In the best-case scenario, agencies would need to devote significantly more time and resources to verifying information provided in comments in order to acknowledge and respond to those comments rationally. In a less optimistic scenario, agencies would not have the time or resources to adequately verify all the information provided and might rely on false information in developing rules. In that scenario, both the quality of agencies’ rules and public confidence in agencies would decline.

Potential Use of ChatGPT by Agencies

While the use of ChatGPT to aid in drafting comments may create some challenges for agencies, there are several ways that agencies might constructively use ChatGPT and other AI tools in the notice-and-comment rulemaking process, including:

  1. educating the public about proposed rules and the information supporting the rules;
  2. educating the public about the rulemaking process and how to prepare effective public comments;
  3. organizing and synthesizing comments received during the rulemaking process.

Since ChatGPT is very effective in summarizing material and presenting information in a clear, organized format, agencies could use it to create “plain language” summaries of rules, portions of rules, issues arising in rules, and the documents supporting rules. They could also use it to create FAQ documents, infographics, or videos that provide similar information to potential commenters. Agencies might also use ChatGPT to create an interactive chatbot that could respond to users’ questions about proposed rules, issues surrounding rules, and supporting documents.
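The chatbot idea can be sketched at its simplest. A real agency chatbot would run on a generative model fine-tuned on the proposed rule and its supporting docket; the toy version below substitutes keyword matching over hand-written answers, purely to illustrate the interaction pattern. All of the questions, keywords, answers, and the 60-day deadline are invented for this example.

```python
# Toy stand-in for an agency rulemaking chatbot. Every entry below is
# invented for illustration; a real system would generate answers from
# the rule's preamble and supporting documents.
FAQ = {
    ("deadline", "due", "when"):
        "Comments are due 60 days after publication in the Federal Register.",
    ("submit", "how", "where"):
        "Submit comments through Regulations.gov under the rule's docket number.",
    ("effective", "useful"):
        "Effective comments include specific facts, data, or accounts of how the rule affects you.",
}

def answer(question: str) -> str:
    """Return the canned answer whose keywords appear in the question."""
    words = question.lower().split()
    for keywords, response in FAQ.items():
        if any(k in words for k in keywords):
            return response
    return "Please consult the proposed rule's docket for more detail."

print(answer("When is the comment deadline?"))
```

Even this trivial pattern shows the appeal: commenters get immediate, plain-language answers about the rule and the process, and a generative model could extend the same pattern to questions the agency never anticipated.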

In addition to educating the public about rules, agencies could use ChatGPT to educate members of the public on the rulemaking process and the type of comments that are most effective in the process. They could create the same types of educational materials they might create to educate members of the public about specific rules—summaries, FAQs, infographics, and videos—but targeted at the rulemaking process, rather than a specific rule. A chatbot might even review a comment and suggest ways to make it more effective.

Finally, agencies might use ChatGPT or other artificial intelligence tools to sort through and categorize public comments or prepare an initial summary of comments on a rule or an agency’s responses to comments that could be edited for inclusion in the preamble to the final rule.

Embracing these alternative uses of artificial intelligence and ChatGPT could generate some of the gains in public participation that other technological tools have promised but not delivered.

This ARLN article is based on the author’s full-length manuscript, "Rulemaking 3.0: Incorporating AI and ChatGPT into Notice and Comment Rulemaking," forthcoming in the Missouri Law Review.