Update README.md
pretty_name: Beyond Words
size_categories:
- 1K<n<10K
language:
- en
---

# Dataset Card for Beyond Words

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact](#social-impact)
  - [Discussion of Biases](#discussion-of-biases)
  - [Limitations](#limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing](#licensing)
  - [Citation](#citation)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://labs.loc.gov/work/experiments/beyond-words/
- **Repository:** https://github.com/LibraryOfCongress/newspaper-navigator
- **Paper:** https://arxiv.org/abs/2005.01583
- **Contact:** [email protected]

### Dataset Summary

The **Beyond Words** dataset is a crowdsourced collection of bounding box annotations on World War I-era historical newspaper pages from the Library of Congress's Chronicling America collection. Volunteers marked seven types of visual content (photographs, illustrations, maps, comics, editorial cartoons, headlines, and advertisements), enabling the training of the visual content recognition model behind the Newspaper Navigator project. It serves as a foundational dataset for large-scale document layout analysis in historical archives.

### Supported Tasks and Leaderboards

- Object detection
- Visual content classification
- Document layout analysis

### Languages

- English (used in transcribed captions and OCR content)

## Dataset Structure

### Data Instances

Each instance is an image of a historic newspaper page, annotated with bounding boxes around regions containing visual content.

### Data Fields

- `image_id`: Unique identifier for the image.
- `image`: Full-page image from Chronicling America.
- `width`, `height`: Image dimensions.
- `objects`: List of annotations, each including:
  - `bw_id`: Unique Beyond Words annotation ID.
  - `category_id`: One of 7 class labels (Photograph, Illustration, etc.).
  - `image_id`: Reference to the source image.
  - `id`: Object instance ID.
  - `area`: Area of the bounding box.
  - `bbox`: Bounding box coordinates (x, y, width, height).
  - `iscrowd`: Crowd label (for COCO format compatibility).
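The field layout above mirrors the COCO object detection format. The sketch below builds one minimal instance (all values are illustrative, not taken from the dataset) and flattens its `objects` into standalone COCO-style annotation dicts; the helper function is a hypothetical convenience, not part of the dataset's tooling:

```python
# One Beyond Words-style instance with illustrative values.
instance = {
    "image_id": 123,
    "width": 6000,
    "height": 8000,
    "objects": [
        {
            "bw_id": "bw-0001",   # illustrative Beyond Words annotation ID
            "category_id": 0,     # e.g. Photograph
            "image_id": 123,
            "id": 1,
            "area": 250000,
            "bbox": [100.0, 200.0, 500.0, 500.0],  # x, y, width, height
            "iscrowd": False,
        }
    ],
}


def to_coco_annotations(instance):
    """Flatten an instance's `objects` into COCO-format annotation dicts."""
    anns = []
    for obj in instance["objects"]:
        x, y, w, h = obj["bbox"]
        anns.append({
            "id": obj["id"],
            "image_id": instance["image_id"],
            "category_id": obj["category_id"],
            "bbox": [x, y, w, h],
            # Fall back to width * height if `area` is missing.
            "area": obj.get("area", w * h),
            "iscrowd": int(obj["iscrowd"]),  # COCO expects 0/1
        })
    return anns


anns = to_coco_annotations(instance)
print(anns[0]["area"])  # 250000
```

Because the fields already follow COCO conventions, such annotations can be fed directly into standard detection tooling that consumes COCO-format data.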
### Data Splits

- **Train:** 2,846 examples
- **Validation:** 712 examples
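Together the two splits cover 3,558 annotated pages, consistent with the `1K<n<10K` size category in the metadata above, and amount to roughly an 80/20 train/validation split:

```python
# Split sizes as listed on the dataset card.
train, validation = 2846, 712

total = train + validation
print(total)                          # 3558 examples overall
print(round(validation / total, 2))  # 0.2, i.e. roughly an 80/20 split
```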
## Dataset Creation

### Curation Rationale

To train models that can detect and classify visual content in historic newspapers at scale, the Library of Congress launched the Beyond Words crowdsourcing initiative in 2017. The resulting data is designed to support machine learning workflows and historical content analysis.

### Source Data

- Scanned WWI-era newspapers from Chronicling America
- Public domain metadata and images
- OCR from METS/ALTO XML files

### Annotations

- Collected via the Beyond Words crowdsourcing platform
- Up to 6 volunteers per annotation; verified by consensus
- Categories include photograph, illustration, map, comic/cartoon, and editorial cartoon
- Additional headline and advertisement annotations added by the project team
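The card does not spell out how volunteer consensus was computed. As an illustration only, agreement between two volunteers' boxes is commonly measured with intersection-over-union (IoU); the function and threshold below are a sketch, not the project's actual rule:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x, y, width, height)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Corners of the intersection rectangle.
    left = max(ax, bx)
    top = max(ay, by)
    right = min(ax + aw, bx + bw)
    bottom = min(ay + ah, by + bh)
    inter = max(0.0, right - left) * max(0.0, bottom - top)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0


# Two volunteers marking roughly the same photograph agree strongly...
print(iou((0, 0, 100, 100), (10, 10, 100, 100)) > 0.5)  # True
# ...while disjoint boxes do not overlap at all.
print(iou((0, 0, 10, 10), (50, 50, 10, 10)))            # 0.0
```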
### Personal and Sensitive Information

None known. All pages come from historical public domain newspapers.

## Considerations for Using the Data

### Social Impact

- Democratizes access to historical newspaper content
- Supports digital humanities, education, and public history initiatives

### Discussion of Biases

- Limited to WWI-era newspapers
- Class distribution is skewed toward some categories (e.g., headlines and advertisements)
- Some annotations (headlines, advertisements) were not crowd-verified

### Limitations

- Visual content from pre-1875 newspapers may yield lower model performance
- Annotation quality can vary due to the experimental nature of the crowdsourcing workflow

## Additional Information

### Dataset Curators

- Benjamin Charles Germain Lee
- Jaime Mears, Eileen Jakeway, Meghan Ferriter, Chris Adams, Nathan Yarasavage, Deborah Thomas, Kate Zwaard (Library of Congress)
- Daniel Weld (University of Washington)

### Licensing

- **License:** CC0 1.0 Universal (Public Domain Dedication)

### Citation

```bibtex
@inproceedings{10.1145/3340531.3412767,
  author    = {Lee, Benjamin Charles Germain and Mears, Jaime and Jakeway, Eileen and Ferriter, Meghan and Adams, Chris and Yarasavage, Nathan and Thomas, Deborah and Zwaard, Kate and Weld, Daniel S.},
  title     = {The Newspaper Navigator Dataset: Extracting Headlines and Visual Content from 16 Million Historic Newspaper Pages in Chronicling America},
  year      = {2020},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  doi       = {10.1145/3340531.3412767},
  url       = {https://doi.org/10.1145/3340531.3412767}
}
```

### Contributions

Thanks to @davanstrien for adding this dataset to Hugging Face.
|