
Dataset Card for Regulatory Comments (Direct API Call)

United States governmental agencies often make proposed regulations open to the public for comment. Proposed regulations are organized into "dockets". This dataset uses the Regulations.gov public API to aggregate and clean public comments for dockets selected by the user.

Each example consists of one docket and includes metadata such as the docket ID and docket title. Each docket entry also includes information about its top 10 comments, including comment metadata and comment text.

In this version, the data is fetched directly from the API, which can result in slow load times. To simply load a pre-downloaded dataset instead, see https://huggingface.co/datasets/ro-h/regulatory_comments.

For an example of how to use this dataset structure, reference [https://colab.research.google.com/drive/1AiFznbHaDVszcmXYS3Ht5QLov2bvfQFX?usp=sharing].

Dataset Details

Dataset Description and Structure

This dataset calls the API to gather docket and comment information individually. The user must supply their own API key ([https://open.gsa.gov/api/regulationsgov/]), as well as a list of dockets they want to draw information from.

Data collection stops when all requested dockets have been gathered, or when the API hits a rate limit. The government API limit is 1,000 calls per hour. Since each comment's text requires an individual call and ~10 comments are collected per docket, only around 100 dockets can be collected without exceeding the limit. Furthermore, since some dockets have no comment data and are therefore excluded, it is realistic to expect approximately 60-70 dockets to be collected. For this number of dockets, expect a load time of around 10 minutes.

If a larger set of dockets is required, consider requesting a rate-unlimited API key. For more details, visit [https://open.gsa.gov/api/regulationsgov/].
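A typical loading call might look like the sketch below. Note that the keyword arguments `api_key` and `docket_ids`, and the docket ID value, are illustrative assumptions; check the loading script and the linked notebook for the exact names.

```python
# Sketch of loading this dataset with user-supplied configuration.
# NOTE: `api_key` and `docket_ids` are assumed parameter names, and the
# docket ID is a hypothetical example.
from datasets import load_dataset

dataset = load_dataset(
    "ro-h/regulatory_comments_api_call",
    api_key="YOUR_REGULATIONS_GOV_API_KEY",  # personal key from open.gsa.gov
    docket_ids=["FDA-2023-N-0001"],          # hypothetical docket ID
)
```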

The following information is included in this dataset:

Docket Metadata

id (int): A unique numerical identifier assigned to each regulatory docket.

agency (str): The abbreviation for the agency posting the regulatory docket (e.g., "FDA")

title (str): The official title or name of the regulatory docket. This title typically summarizes the main issue or area of regulation covered by the docket.

update_date (str): The date when the docket was last modified on Regulations.gov.

update_time (str): The time when the docket was last modified on Regulations.gov.

purpose (str): Whether the docket was rulemaking, non-rulemaking, or other.

keywords (list): A list of keywords, as determined by Regulations.gov.

Comment Metadata

Note that Hugging Face converts lists of dictionaries to dictionaries of lists.

comment_id (int): A unique numerical identifier for each public comment submitted on the docket.

comment_url (str): A URL or web link to the specific comment or docket on Regulations.gov. This allows direct access to the original document or page for replicability purposes.

comment_date (str): The date when the comment was posted on Regulations.gov. This is important for understanding the timeline of public engagement.

comment_time (str): The time when the comment was posted on Regulations.gov.

commenter_fname (str): The first name of the individual or entity that submitted the comment. This could be a person, organization, business, or government entity.

commenter_lname (str): The last name of the individual or entity that submitted the comment.

comment_length (int): The length of the comment in characters, spaces included.

Comment Content

text (str): The actual text of the comment submitted. This is the primary content for analysis, containing the commenter's views, arguments, and feedback on the regulatory matter.
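The list-of-dictionaries to dictionary-of-lists conversion noted above can be illustrated with plain Python; the field names are abbreviated for the example.

```python
# Comments as a list of dictionaries (one dict per comment).
comments_as_list = [
    {"comment_id": 1, "text": "First comment."},
    {"comment_id": 2, "text": "Second comment."},
]

# Hugging Face stores the same data column-wise, as a dict of lists.
comments_as_dict = {
    "comment_id": [c["comment_id"] for c in comments_as_list],
    "text": [c["text"] for c in comments_as_list],
}

print(comments_as_dict["text"][0])  # -> First comment.
```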

Dataset Limitations

Commenter name features were phased in later in the system, so some dockets have no first-name/last-name entries. Further, some comments were uploaded solely via attachment and are stored in the system as null, since the API has no access to comment attachments.

  • Curated by: Ro Huang

Dataset Sources

Uses

This dataset may be used by researchers or policy stakeholders curious about the influence of public comments on regulation development. For example, sentiment analysis may be run on the comment text; alternatively, simple descriptive analysis of comment length across agencies may prove interesting.

Dataset Creation

Curation Rationale

After a law is passed, it may require specific details or guidelines to be practically enforceable or operable. Federal agencies and the Executive branch engage in rulemaking, which specifies the practical ways that legislation gets turned into reality. They then open a public comment period, during which they receive comments, suggestions, and questions on the regulations they proposed. After taking in the feedback, the agency modifies its regulation and posts a final rule.

As an example, imagine that the legislative branch of the government passes a bill to increase the number of hospitals nationwide. While the Congressman drafting the bill may have provided some general guidelines (e.g., there should be at least one hospital in a zip code), there is oftentimes ambiguity on how the bill’s goals should be achieved. The Department of Health and Human Services is tasked with implementing this new law, given its relevance to national healthcare infrastructure. The agency would draft and publish a set of proposed rules, which might include criteria for where new hospitals can be built, standards for hospital facilities, and the process for applying for federal funding. During the Public Comment period, healthcare providers, local governments, and the public can provide feedback or express concerns about the proposed rules. The agency will then read through these public comments, and modify their regulation accordingly.

While this is a vital part of the United States regulatory process, there is little understanding of how agencies approach public comments and modify their proposed regulations. Further, the data extracted from the API is often unclean and difficult to navigate.

Data Collection and Processing

Filtering Methods: For each docket, we retrieve relevant metadata such as docket ID, title, context, purpose, and keywords. Additionally, the top 10 comments for each docket are collected, including their metadata (comment ID, URL, date, title, commenter's first and last name) and the comment text itself. The process focuses on the first page of 25 comments for each docket, and the top 10 comments are selected based on their order of appearance in the API response. Dockets with no comments are filtered out.
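The retrieval step can be sketched as below. This is a minimal sketch against the public Regulations.gov v4 API, not the dataset's actual loading script: the endpoint and filter parameter shown (`filter[searchTerm]`) are assumptions for illustration, and the real script may query the API differently. The non-200 check mirrors the error handling described later in this card.

```python
# Minimal sketch of fetching the first page of comments for one docket.
# ASSUMPTIONS: the /v4/comments endpoint and filter[searchTerm] parameter
# are illustrative; consult the Regulations.gov API docs for exact usage.
import requests

API_BASE = "https://api.regulations.gov/v4"

def fetch_docket_comments(docket_id, api_key, session=requests, page_size=25):
    """Return the first page of comment records for a docket,
    or None if the request fails (non-200 response)."""
    resp = session.get(
        f"{API_BASE}/comments",
        params={"filter[searchTerm]": docket_id, "page[size]": page_size},
        headers={"X-Api-Key": api_key},
    )
    if resp.status_code != 200:  # halt this docket, move on to the next
        return None
    return resp.json().get("data", [])
```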

Data Normalization: The collected data is normalized into a structured format. Each docket and its associated comments are organized into a nested dictionary structure. This structure includes key information about the docket and a list of comments, each with its detailed metadata.
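The nested structure described above looks roughly like the sketch below, using the fields listed in this card; all values are illustrative, and the exact schema is defined by the loading script.

```python
# Illustrative example of one normalized docket entry (values are made up).
example = {
    "id": 1,                      # numerical docket identifier, per this card
    "agency": "FDA",
    "title": "Example Docket Title",
    "purpose": "Rulemaking",
    "keywords": ["health", "hospitals"],
    "comments": [
        {
            "comment_id": 1,
            "comment_url": "https://www.regulations.gov/comment/EXAMPLE",
            "comment_date": "2023-01-15",
            "commenter_fname": "Jane",
            "commenter_lname": "Doe",
            "comment_length": 14,  # characters, spaces included
            "text": "Short comment.",
        },
    ],
}
```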

Data Cleaning: HTML text tags are removed from comment text. However, the content of the comment remains unedited, meaning any typos or grammatical errors in the original comment are preserved.
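The tag-removal step can be sketched with a simple regular expression; the actual script's cleaning logic may differ in detail, but the effect is the same: markup is dropped while the comment text, typos and all, is preserved.

```python
# Minimal sketch of stripping HTML tags from comment text.
import re

def strip_html_tags(text):
    """Remove HTML tags, leaving the comment content (typos included) intact."""
    return re.sub(r"<[^>]+>", "", text)

print(strip_html_tags("<p>I oppose teh rule.</p>"))  # -> I oppose teh rule.
```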

Tools and Libraries Used: The Requests library is used for making API calls to the Regulations.gov API to fetch docket and comment data. The datasets library from Hugging Face is used to define and manage the dataset's structure and generation process. The entire data collection and processing script is written in Python.

Error Handling: In the event of a failed API request (indicated by a non-200 HTTP response status), the data collection process for the current docket is halted, and the process moves to the next docket.
