Questionable quality of some of the data
Heya, just randomly stumbled upon this and had a quick read through some of the sample data.
It seems like some of the texts are very oddly OCR-ed, with various quirks and incorrect wording. I've only looked at examples from the BLBooks source so far, but here are some of my findings.
For example (ID: 000000489):
"die drohenden Kricgswolkcn würden sich durch den Congrcß"
probably meant:
"die drohenden Kriegswolken würden sich durch den Congreß"
Or in ID 000080961:
"Bref af Christian, Markgrefve i Brandenburg, till Konung Gustaf II Adolph. Durchleuchtigster König, Euer Königl: Wfden seindt Vn ser freundtlich willig dienst zuuorn, freundtlicher lieber herr Oheimb vnnd Schwager. Wier zweiffeln ganz nicht, Euer Königl. Wrd: werden albereit von dero Veldtmarschalchen Gust a ro Horn vnnd änder ortten glaubwiirdige nachrichtung erlanget haben, Wie es mitt dem Stifft Bamberg, vnnd Euer Königl. Wrd: darinn befundenen arméen hergangen, vnnd das dieselbe sich wegen des sterckheren gewalt der Tillischen urmee wiederumb in das Stifft Wiirzburgk releriret,"
is pretty much completely illegible (some words like "Bref" seem to be missing letters; it was probably meant to be "Brief").
I guess there's nothing that can be changed about the original dataset, so this is more a note about the quality of the data itself before it's used for fine-tuning any models.
Yes, since the data is sourced from OCR processes, there will be some character-level misspellings. We opted to include as much data as possible, especially since historical texts can be falsely flagged by OCR error detection tuned to modern text, but we also include the ocr_error column for downstream filtering, which can be used to retroactively clean these errors further.
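As a rough sketch of what that downstream filtering could look like with the `datasets` library (the repo id below is a placeholder, and we're assuming here that ocr_error is a numeric per-document score in [0, 1] — please check the dataset card for the actual semantics and pick a threshold that suits your use case):

```python
from datasets import load_dataset

# Placeholder repo id -- replace with the actual dataset identifier.
ds = load_dataset("your-org/your-historical-corpus", split="train")

# Assumption: ocr_error is a per-document error score in [0, 1].
# Keep only documents at or below a chosen threshold.
MAX_OCR_ERROR = 0.1
clean = ds.filter(
    lambda example: example["ocr_error"] is not None
    and example["ocr_error"] <= MAX_OCR_ERROR
)

print(f"Kept {len(clean)} of {len(ds)} documents")
```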
Side note: BLBooks suffers from especially bad OCR (likely because of the Fraktur fonts in historical books), so it's unfortunate to have it as the first split and thus the first example of the data shown. We are working on displaying a more representative subsample instead.