MaCoCu-sl 2.0 Hugging Face Port

This repository contains a Hugging Face dataset port of the MaCoCu-sl 2.0 dataset originally developed and described by CLARIN. The port makes the dataset easier to use for researchers and practitioners by exposing its rich metadata and text content through the Hugging Face datasets library.

Note: This is a port of the original CLARIN dataset, not an independently curated dataset. All credit goes to the original authors. Original license: CC0

Dataset Description

The MaCoCu-sl 2.0 dataset was created by crawling the ".si" top-level domain in 2021 and 2022, with the crawl dynamically extending to other domains as well. Extensive cleaning procedures were applied to the extracted texts, including boilerplate removal, deduplication, and language filtering. In the XML format, each document includes:

  • Document-level metadata: title, crawl date, URL, domain, file type, language distribution, and a fluency score.
  • Paragraph-level annotations: text, heading flags, quality labels (e.g., “short”, “good”), fluency scores, detected language, and indicators for sensitive content.

For more detailed information, please refer to the original CLARIN description:
MaCoCu-sl 2.0 Dataset Description

Description of the dataset

This description is copied from the original dataset page:

The Slovene web corpus MaCoCu-sl 2.0 was built by crawling the ".si" internet top-level domain in 2021 and 2022, extending the crawl dynamically to other domains as well. The crawler is available at https://github.com/macocu/MaCoCu-crawler.

Considerable effort was devoted to cleaning the extracted text to provide a high-quality web corpus. This was achieved by removing boilerplate (https://corpus.tools/wiki/Justext) and near-duplicated paragraphs (https://corpus.tools/wiki/Onion), and by discarding very short texts as well as texts that are not in the target language. Furthermore, samples from the largest 1,500 domains were manually checked, and bad domains, such as machine-translated ones, were removed. The dataset is characterized by extensive metadata which allows filtering based on text quality and other criteria (https://github.com/bitextor/monotextor), making the corpus highly useful for corpus linguistics studies, as well as for training language models and other language technologies.
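
To illustrate the boilerplate-removal step, here is a minimal jusText sketch. It mirrors the general technique only, not the exact configuration of the MaCoCu pipeline, and the URL is a placeholder:

import requests
import justext

# Fetch a page and keep only paragraphs that jusText does not
# classify as boilerplate (placeholder URL, not part of the corpus).
html = requests.get("https://example.si").content
paragraphs = justext.justext(html, justext.get_stoplist("Slovene"))

for p in paragraphs:
    if not p.is_boilerplate:
        print(p.text)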

In XML format, each document is accompanied by the following metadata: title, crawl date, URL, domain, file type of the original document, distribution of languages inside the document, and a fluency score based on a language model. The text of each document is divided into paragraphs that carry their own metadata: whether a paragraph is a heading or not, a paragraph quality label (such as “short” or “good”, assigned based on paragraph length, URL and stopword density via the jusText tool - https://corpus.tools/wiki/Justext), a fluency score (between 0 and 1, assigned with the Monocleaner tool - https://github.com/bitextor/monocleaner), the automatically identified language of the text in the paragraph, and whether the paragraph contains sensitive information (identified via the Biroamer tool - https://github.com/bitextor/biroamer).
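
As a sketch of how this paragraph-level metadata supports filtering — the dictionary keys below are hypothetical and may differ from the actual field names in this port:

# Hypothetical paragraph records mirroring the metadata described above.
doc_paragraphs = [
    {"text": "Domov | Kontakt", "quality": "short", "fluency": 0.41, "lang": "sl"},
    {"text": "Daljši, tekoče napisan odstavek ...", "quality": "good", "fluency": 0.95, "lang": "sl"},
]

# Keep paragraphs that jusText labelled "good" and whose Monocleaner
# fluency score (between 0 and 1) exceeds a chosen threshold.
clean = [p for p in doc_paragraphs
         if p["quality"] == "good" and p["fluency"] > 0.9]
print(clean)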

Compared to the previous version, this version has more accurate metadata on the languages of the texts, achieved by using Google's Compact Language Detector 2 (CLD2) (https://github.com/CLD2Owners/cld2), a high-performance language detector supporting many languages. The other tools used for web corpus creation and curation have been updated as well, resulting in an even cleaner and larger corpus.
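
For reference, CLD2 output looks roughly like this through pycld2, one of its Python bindings (the MaCoCu pipeline may invoke CLD2 differently):

import pycld2 as cld2  # one Python binding for Google's CLD2

text = "Slovenski spletni korpusi so uporabni za jezikovne tehnologije."
is_reliable, bytes_found, details = cld2.detect(text)

# details lists up to three detected languages with their share of
# the text, e.g. ('SLOVENIAN', 'sl', 99, 1157.0).
print(is_reliable, details[0])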

The corpus can be easily read with the prevert parser (https://pypi.org/project/prevert/).
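
A minimal reading sketch with prevert follows; the file name is a placeholder, and the attribute names (doc.meta, paragraph iteration) are assumptions based on the parser's examples — check them against the prevert documentation:

from prevert import dataset

# Open a downloaded prevert file (placeholder file name).
dset = dataset("MaCoCu-sl-2.0.prevert.gz")

for doc in dset:
    # doc.meta is assumed to expose the document-level metadata
    # described above (title, crawl date, url, ...).
    print(doc.meta)
    for par in doc:
        print(str(par))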

Notice and take down: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:

  1. Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
  2. Clearly identify the copyrighted work claimed to be infringed.
  3. Clearly identify the material that is claimed to be infringing, with information reasonably sufficient to allow us to locate the material.
  4. Write to the contact person for this resource, whose email is available in the full item record.

We will comply with legitimate requests by removing the affected sources from the next release of the corpus.

This action has received funding from the European Union's Connecting Europe Facility 2014-2020 - CEF Telecom, under Grant Agreement No. INEA/CEF/ICT/A2020/2278341. This communication reflects only the author’s view. The Agency is not responsible for any use that may be made of the information it contains.

Packaging & Projects

This dataset has been packaged as part of the PoVejMo project. The goal is to streamline access and use for training language models and performing corpus linguistics research.

Usage

To load the dataset in your code, first install the required dependencies:

pip install datasets prevert
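
Then load the data with the datasets library. The repository id below is a placeholder for this repo's actual path, trust_remote_code=True is shown because the repository ships a Python loading script, and the "train" split name is an assumption:

from datasets import load_dataset

# "ORG/MaCoCu-sl-2.0" is a placeholder; use this repository's actual id.
# trust_remote_code=True is required when a repo relies on a loading script.
ds = load_dataset("ORG/MaCoCu-sl-2.0", trust_remote_code=True)

print(ds)              # inspect the available splits and features
print(ds["train"][0])  # first document, assuming a "train" split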