Dataset Viewer
Auto-converted to Parquet

| Column | Type | Min | Max |
| --- | --- | --- | --- |
| datasetId | string (length) | 5 | 121 |
| author | string (length) | 2 | 42 |
| last_modified | date | 2021-04-29 15:34:29 | 2025-03-26 01:30:06 |
| downloads | int64 | 0 | 4.51M |
| likes | int64 | 0 | 7.64k |
| tags | sequence (length) | 1 | 7.92k |
| task_categories | sequence (length) | 0 | 47 |
| createdAt | date | 2022-03-02 23:29:22 | 2025-03-26 01:27:57 |
| card | string (length) | 15 | 1.02M |
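The rows below reproduce a sample of these records together with their full dataset cards. If you want to query the same metadata fields programmatically, a minimal sketch using the `huggingface_hub` client is shown below; the sort key and limit are illustrative choices, and the exact set of populated attributes depends on the client version.

```python
# Sketch: list dataset metadata from the Hugging Face Hub.
# Assumes `huggingface_hub` is installed (pip install huggingface_hub).
from huggingface_hub import HfApi

api = HfApi()

# Fetch the five most-downloaded datasets; each entry exposes the same kind of
# fields summarized in the table above (id, author, downloads, likes, tags, ...).
for ds in api.list_datasets(sort="downloads", direction=-1, limit=5):
    print(ds.id, ds.author, ds.downloads, ds.likes)
    print("  tags:", ds.tags)
```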
**huggingface/documentation-images**
- author: huggingface
- last_modified: 2025-03-25T16:15:48
- downloads: 4,510,777
- likes: 55
- tags: ["license:cc-by-nc-sa-4.0", "size_categories:n<1K", "format:imagefolder", "modality:image", "library:datasets", "library:mlcroissant", "region:us"]
- task_categories: null
- createdAt: 2022-03-02T23:29:22
- card:
---
license: cc-by-nc-sa-4.0
---

### This dataset contains images used in the documentation of HuggingFace's libraries.

HF Team: Please make sure you optimize the assets before uploading them. My favorite tool for this is https://tinypng.com/.
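The card recommends optimizing assets with the TinyPNG web tool before uploading. As a hedged local alternative (not the workflow prescribed by the card), a Pillow-based sketch might look like this; the file names are placeholders.

```python
# Sketch: shrink a PNG locally before uploading it to the repo.
# Pillow stands in for the TinyPNG web tool mentioned above; file names are placeholders.
from PIL import Image

img = Image.open("screenshot.png")
# Quantizing to a 256-color palette plus zlib optimization usually shrinks
# documentation screenshots considerably.
img.convert("RGB").quantize(colors=256).save("screenshot_optimized.png", optimize=True)
```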
**opentensor/openvalidators**
- author: opentensor
- last_modified: 2023-09-25T14:03:34
- downloads: 3,689,181
- likes: 8
- tags: ["license:mit", "size_categories:1M<n<10M", "region:us"]
- task_categories: null
- createdAt: 2023-06-15T15:29:34
- card:
---
license: mit
viewer: False
size_categories:
- 1M<n<10M
---

# Dataset Card for Openvalidators dataset

## Dataset Description

- **Repository:** https://github.com/opentensor/validators
- **Homepage:** https://bittensor.com/

### Dataset Summary

The OpenValidators dataset, created by the OpenTensor Foundation, is a continuously growing collection of data generated by the [OpenValidators](https://github.com/opentensor/validators) project in [W&B](https://wandb.ai/opentensor-dev/openvalidators/table). It contains millions of records and serves researchers, data scientists, and miners in the Bittensor network. The dataset provides information on network performance, node behaviors, and wandb run details. Researchers can gain insights and detect patterns, while data scientists can use it for training models and analysis. Miners can use the generated data to fine-tune their models and enhance their incentives in the network. The dataset's continuous updates support collaboration and innovation in decentralized computing.

### Version support and revisions

This dataset is in constant evolution, so in order to facilitate data management, each data schema is versioned in a Hugging Face dataset branch, so legacy data can be easily retrieved. The main branch (or default revision) will always be the latest version of the dataset, following the latest schema adopted by the openvalidators.

The current state of data organization is as follows:

- `v1.0`: All data collected from the first openvalidators schema, ranging from version `1.0.0` to `1.0.8`.
- `main`: Current state of the dataset, following the latest schema adopted by the openvalidators (>= `1.1.0`).

### How to use

The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The OpenValidators dataset gives you the granularity of extracting data by **run_id**, by **OpenValidators version** and by **multiple OpenValidators versions**. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.

**Downloading by run id**

For example, to download the data for a specific run, simply specify the corresponding **OpenValidators version** and the **wandb run id** in the format `version/raw_data/run_id.parquet`:

```python
from datasets import load_dataset

version = '1.1.0' # OpenValidators version
run_id = '0drg98iy' # WandB run id
run_id_dataset = load_dataset('opentensor/openvalidators', data_files=f'{version}/raw_data/{run_id}.parquet')
```

_Please note that only completed run_ids are included in the dataset. Runs that are still in progress will be ingested shortly after they finish._

**Downloading by OpenValidators version**

One can also leverage the `datasets` library to download all the runs within a determined **OpenValidators** version. That can be useful for researchers and data enthusiasts who are looking to do analysis on a specific **OpenValidators** version state.

```python
from datasets import load_dataset

version = '1.1.0' # OpenValidators version
version_dataset = load_dataset('opentensor/openvalidators', data_files=f'{version}/raw_data/*')
```

**Downloading by multiple OpenValidators versions**

Utilizing the `datasets` library, users can efficiently download runs from multiple **OpenValidators** versions. By accessing data from various OpenValidators versions, users can undertake downstream tasks such as data fine-tuning for mining or big data analysis.

```python
from datasets import load_dataset

versions = ['1.1.0', '1.1.1', ...] # Desired versions for extraction
data_files = [f'{version}/raw_data/*' for version in versions] # Set data files directories
dataset = load_dataset('opentensor/openvalidators', data_files={ 'test': data_files })
```

**Downloading legacy data using revisions**

```python
from datasets import load_dataset

version = '1.0.4' # OpenValidators version
run_id = '0plco3n0' # WandB run id
revision = 'v1.0' # Dataset revision
run_id_dataset = load_dataset('opentensor/openvalidators', data_files=f'{version}/raw_data/{run_id}.parquet', revision=revision)
```

> Note: You can interact with legacy data in all the ways mentioned above, as long as your data scope is within the same revision.

**Analyzing metadata**

All the state related to the details of the wandb data ingestion can be accessed easily using pandas and the Hugging Face datasets structure. This data contains relevant information regarding the metadata of the run, including user information, config information and ingestion state.

```python
import pandas as pd

version = '1.1.0' # OpenValidators version for metadata analysis
df = pd.read_csv(f'hf://datasets/opentensor/openvalidators/{version}/metadata.csv')
```

## Dataset Structure

### Data Instances

**versioned raw_data**

The data is provided as-is from the wandb logs, without further preprocessing or tokenization. This data is located at `version/raw_data`, where each file is a wandb run.

**metadata**

This dataset defines the current state of the wandb data ingestion by **run id**.

### Data Fields

**Raw data**

The versioned raw_data collected from W&B follows this schema:

- `rewards`: (float64) Reward vector for given step
- `completion_times`: (float64) List of completion times for a given prompt
- `completions`: (string) List of completions received for a given prompt
- `_runtime`: (float64) Runtime of the event
- `_timestamp`: (float64) Timestamp of the event
- `name`: (string) Prompt type, e.g. 'followup', 'answer', 'augment'
- `block`: (float64) Current block at given step
- `gating_loss`: (float64) Gating model loss for given step
- `rlhf_reward_model`: (float64) Output vector of the rlhf reward model
- `relevance_filter`: (float64) Output vector of the relevance scoring reward model
- `dahoas_reward_model`: (float64) Output vector of the dahoas reward model
- `blacklist_filter`: (float64) Output vector of the blacklist filter
- `nsfw_filter`: (float64) Output vector of the nsfw filter
- `prompt_reward_model`: (float64) Output vector of the prompt reward model
- `reciprocate_reward_model`: (float64) Output vector of the reciprocate reward model
- `diversity_reward_model`: (float64) Output vector of the diversity reward model
- `set_weights`: (float64) Output vector of the set weights
- `uids`: (int64) Queried uids
- `_step`: (int64) Step of the event
- `prompt`: (string) Prompt text string
- `step_length`: (float64) Elapsed time from the beginning of a run step to the end of a run step
- `best`: (string) Best completion for given prompt

**Metadata**

- `run_id`: (string) Wandb Run Id
- `completed`: (boolean) Flag indicating if the run_id is completed (finished, crashed or killed)
- `downloaded`: (boolean) Flag indicating if the run_id data has been downloaded
- `last_checkpoint`: (string) Last checkpoint of the run_id
- `hotkey`: (string) Hotkey associated with the run_id
- `openvalidators_version`: (string) Version of OpenValidators associated with the run_id
- `problematic`: (boolean) Flag indicating if the run_id data had problems being ingested
- `problematic_reason`: (string) Reason for the run_id being problematic (exception message)
- `wandb_json_config`: (string) JSON configuration associated with the run_id in Wandb
- `wandb_run_name`: (string) Name of the Wandb run
- `wandb_user_info`: (string) Username information associated with the Wandb run
- `wandb_tags`: (list) List of tags associated with the Wandb run
- `wandb_createdAt`: (string) Timestamp of the run creation in Wandb

## Dataset Creation

### Curation Rationale

This dataset was curated to provide a comprehensive and reliable collection of historical data obtained by the execution of different OpenValidators in the bittensor network. The goal is to support researchers, data scientists and developers with data generated in the network, facilitating the discovery of new insights, network analysis, troubleshooting, and data extraction for downstream tasks like mining.

### Source Data

#### Initial Data Collection and Normalization

The initial data collection process for this dataset involves recurrent collection by a specialized worker responsible for extracting data from wandb and ingesting it into the Hugging Face datasets structure. The collected data is organized based on the OpenValidators version and run ID to facilitate efficient data management and granular access. Each run is collected based on its corresponding OpenValidators version tag and grouped into version-specific folders. Within each version folder, a `metadata.csv` file is included to manage the collection state, while the raw data of each run is saved in the `.parquet` format with the file name corresponding to the run ID (e.g., `run_id.parquet`). Please note that the code for this data collection process will be released for transparency and reproducibility.

#### Who are the source language producers?

The language producers for this dataset are all the openvalidators that are logging their data into wandb in conjunction with other nodes of the bittensor network. The main wandb page where the data is sent can be accessed at https://wandb.ai/opentensor-dev/openvalidators/table.

### Licensing Information

The dataset is licensed under the [MIT License](https://github.com/opentensor/validators/blob/main/LICENSE).

### Supported Tasks and Leaderboards

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
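Building on the metadata example in the card above, here is a small sketch (not part of the original card) that selects only completed, successfully ingested runs and loads their raw data. The column names come from the Metadata schema above; the version string, the number of runs loaded, and the assumption that the boolean columns are parsed as booleans are illustrative.

```python
# Sketch: pick completed runs from the ingestion metadata and load only those.
# Field names follow the Metadata schema above; the version value is illustrative.
import pandas as pd
from datasets import load_dataset

version = '1.1.0'
df = pd.read_csv(f'hf://datasets/opentensor/openvalidators/{version}/metadata.csv')

# Keep runs that finished and were ingested without problems
# (assumes the flag columns are read as booleans).
ok_runs = df[df['completed'] & df['downloaded'] & ~df['problematic']]['run_id'].tolist()

# Load the raw data for the first few of those runs.
data_files = [f'{version}/raw_data/{run_id}.parquet' for run_id in ok_runs[:3]]
runs = load_dataset('opentensor/openvalidators', data_files=data_files)
```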
**KakologArchives/KakologArchives**
- author: KakologArchives
- last_modified: 2025-03-26T01:26:55
- downloads: 3,183,791
- likes: 15
- tags: ["task_categories:text-classification", "language:ja", "license:mit", "region:us"]
- task_categories: ["text-classification"]
- createdAt: 2023-05-12T13:31:56
- card:
---
pretty_name: ニコニコ実況 過去ログアーカイブ
license: mit
language:
- ja
task_categories:
- text-classification
---

# ニコニコ実況 (Nico Nico Jikkyo) Past Log Archive

The Nico Nico Jikkyo Past Log Archive is a dataset collecting every past log comment posted to [ニコニコ実況](https://jk.nicovideo.jp) from the start of the service up to the present.

In December 2020, Nico Nico Jikkyo was [relaunched as an official channel inside Niconico Live (ニコニコ生放送)](https://blog.nicovideo.jp/niconews/143148.html). With this change, the old system that had been running since November 2009 was discontinued (effectively the end of the service). While support for consumer devices such as torne and BRAVIA was ending across the board, about 11 years of past logs, packed with the raw voices of the time, were also about to be lost.

In response, residents of the DTV board on 5ch led a project to archive those 11 years of past logs for all channels before the old Nico Nico Jikkyo shut down. After many twists and turns, Nekopanda managed to completely capture about 11 years of past logs for all channels, including radio and BS broadcasts, so the logs were saved from disappearing into the digital void.

However, because the old API has been retired, past logs can no longer be fetched via an API, and since the archive totals roughly 150GB, finding the range of logs you want to see is no longer as easy as it used to be.

Meanwhile, on the new Nico Nico Jikkyo, which moved to an official channel inside Niconico Live, timeshifts (the equivalent of past logs on the old Nico Nico Jikkyo) can only be watched for up to three weeks, after which the past logs become unavailable. Regular members also have to reserve a timeshift in advance, so the former convenience has been lost.

We believe that the comments about Japanese TV broadcasts posted to Nico Nico Jikkyo are materials of historical value that concisely capture the social climate and background of their time.

To preserve all of Nico Nico Jikkyo's past logs for future generations, this dataset contains all past logs of the old Nico Nico Jikkyo up to 2020/12/15 as distributed by Nekopanda, plus the logs of the new Nico Nico Jikkyo (including community live-commentary programs), and, since 2024/06/10, the same-day past logs of [NX-Jikkyo](https://nx-jikkyo.tsukumijima.net/), an alternative comment server for live commentary, collected once every five minutes and reflected continuously.

There is also an [API](https://jikkyo.tsukumijima.net/) for easily fetching past logs. Please feel free to use it as well.

## Dataset Structure

### Builder Config

| Key | Value Type | Default Value | Description |
| --------------- | ---------- | ------------- | ----------- |
| channel_id | string | None | ID of the Nico Nico Jikkyo channel to fetch past logs for (all channels if omitted) |
| year | int | None | Year of the past logs to fetch (all years if omitted) |
| number_of_files | int | None | Number of past log files to fetch (all files if omitted) |

### Data Splits

| Split | Approximate Size | Description |
| ------- | ---------------- | ----------- |
| sample | 1GB | As a sample, fetches all past log comments for TOKYO MX (ID: jk9) posted during 2022. Around 1GB in size. |
| all | 190GB | Fetches all past log comments for all channels and all periods. Be aware that this is more than 190GB. |

### Data Fields

| Field | Type | Description |
| --------------- | -------- | ----------- |
| thread | string | Thread ID of the comment |
| no | int64 | Comment number |
| vpos | int64 | Playback position of the comment, counted from the thread ID (1/100 s) |
| date | int64 | UNIX timestamp of the comment post time |
| date_usec | int64 | Sub-second fraction of the comment post time |
| user_id | string | User ID (anonymized when the 184 command is specified; shuffled roughly once a week) |
| mail | string | Comment commands (e.g. 184, red naka big; may be omitted) |
| premium | boolean | True if the commenting user is a premium member |
| anonymity | boolean | True if the comment is anonymous |
| content | string | Comment body (beware of ASCII art and, rarely, multi-line comments) |

## Example

```python
from datasets import load_dataset

dataset = load_dataset('KakologArchives/KakologArchives', 'all',
                       channel_id='jk211', year=2023, number_of_files=10)

for data in dataset['train']:
    print(data)
```

## Licensing Information

[MIT License](https://opensource.org/license/mit/)
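As a small follow-up to the example above (not part of the original card), the sketch below converts the `date`/`date_usec` fields described in the Data Fields table into pandas datetimes and counts comments per hour. The builder arguments are illustrative, and treating `date_usec` as microseconds is an assumption.

```python
# Sketch: load a small slice and turn the UNIX timestamps into datetimes.
# Field names (date, date_usec, content) follow the Data Fields table above;
# the channel_id/year/number_of_files values are illustrative.
import pandas as pd
from datasets import load_dataset

dataset = load_dataset('KakologArchives/KakologArchives', 'all',
                       channel_id='jk9', year=2022, number_of_files=1)

df = dataset['train'].to_pandas()
# Assumes date_usec holds microseconds below the second.
df['posted_at'] = (pd.to_datetime(df['date'], unit='s')
                   + pd.to_timedelta(df['date_usec'], unit='us'))

# Comments per hour for this slice.
print(df.set_index('posted_at')['content'].resample('1h').count().head())
```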
**Symato/cc**
- author: Symato
- last_modified: 2023-07-11T07:56:55
- downloads: 1,929,280
- likes: 2
- tags: ["language:vi", "license:mit", "size_categories:1K<n<10K", "region:us"]
- task_categories: null
- createdAt: 2023-07-06T04:14:51
- card:
---
license: mit
language:
- vi
size_categories:
- 1K<n<10K
---

# What is Symato CC?

Download all WARC data from Common Crawl, then filter out the Vietnamese text in Markdown and plaintext format. About 1% of Common Crawl is Vietnamese, so extracting all of it should yield a lot of data (~10TB of plaintext).

## Main contributors

- https://huggingface.co/nampdn-ai
- https://huggingface.co/binhvq
- https://huggingface.co/th1nhng0
- https://huggingface.co/iambestfeed

# Simple quality filters

To make use of raw data from Common Crawl, you need to do filtering and deduplication. Below is a simple quality-filtering example for reference when writing your own filters.

```sh
## Convert .parquet to .jsonl.gz
mkdir -p jsonl filtered
python3 parquet2jsonl.py

## Quality filter
# wget https://huggingface.co/datasets/Symato/goods_vs_c4_cc_classifiers/resolve/main/fasttext_good_vs_c4_001.bin
python3 filters.py jsonl/2023-14_20230401125552-20230401155552.jsonl.gz logging
```

# Disclaimer

- We use content from Common Crawl as-is. Visit the CC website to learn more about the data.
- We provide simple quality-filter code to make the data easier to use, but we make no warranty that the data quality meets everyone's expectations. Modify ours or write your own filters in case you need more advanced or better ones.

Contact **dung at symato dot xyz** if you have other questions.
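For reference, here is a hedged Python sketch of what a fastText-based quality filter in the spirit of `filters.py` could look like. The model file name comes from the card; the JSON field name `text`, the `__label__good` label string, and the 0.5 threshold are assumptions rather than details taken from the original code.

```python
# Sketch of a fastText-based quality filter (not the original filters.py).
# The label string and threshold are assumptions; adjust to the actual model.
import gzip
import json

import fasttext

model = fasttext.load_model('fasttext_good_vs_c4_001.bin')

def looks_good(text: str, threshold: float = 0.5) -> bool:
    # fastText predicts per line, so collapse newlines before scoring.
    labels, probs = model.predict(text.replace('\n', ' '), k=1)
    return labels[0] == '__label__good' and probs[0] >= threshold

kept = []
with gzip.open('jsonl/2023-14_20230401125552-20230401155552.jsonl.gz', 'rt', encoding='utf-8') as f:
    for line in f:
        doc = json.loads(line)
        # Assumes each JSON record stores its content under a "text" key.
        if looks_good(doc.get('text', '')):
            kept.append(doc)

print(f'kept {len(kept)} documents')
```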
**HuggingFaceM4/the_cauldron**
- author: HuggingFaceM4
- last_modified: 2024-05-06T13:37:52
- downloads: 840,560
- likes: 387
- tags: ["size_categories:1M<n<10M", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:1603.07396", "arxiv:2206.01718", "arxiv:2208.05358", "arxiv:1612.06890", "arxiv:2310.00367", "arxiv:1710.07300", "arxiv:2312.12241", "arxiv:1912.03098", "arxiv:2211.08545", "arxiv:2306.05425", "arxiv:1709.00103", "arxiv:2003.12462", "arxiv:1612.00837", "arxiv:2205.00363", "arxiv:2403.09029", "arxiv:2405.02246", "region:us"]
- task_categories: null
- createdAt: 2024-04-11T17:53:57
- card:
--- dataset_info: - config_name: ai2d features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 435362437.84770346 num_examples: 2434 download_size: 438136609 dataset_size: 435362437.84770346 - config_name: aokvqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 871997710.0 num_examples: 16539 download_size: 893265070 dataset_size: 871997710.0 - config_name: chart2text features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 1060566797.2728182 num_examples: 26961 download_size: 1103141721 dataset_size: 1060566797.2728182 - config_name: chartqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 784719364.9441738 num_examples: 18265 download_size: 803192402 dataset_size: 784719364.9441738 - config_name: clevr features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 11522617868.0 num_examples: 70000 download_size: 13267429872 dataset_size: 11522617868.0 - config_name: clevr_math features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 13308311206.0 num_examples: 70000 download_size: 16315284 dataset_size: 13308311206.0 - config_name: cocoqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 2213960474.0 num_examples: 46287 download_size: 2393991009 dataset_size: 2213960474.0 - config_name: datikz features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 481233278.0 num_examples: 47974 download_size: 613100257 dataset_size: 481233278.0 - config_name: diagram_image_to_text features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 18877197.0 num_examples: 300 download_size: 18706661 dataset_size: 18877197.0 - config_name: docvqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 6885686042.0 num_examples: 10189 download_size: 6887803845 dataset_size: 6885686042.0 - config_name: dvqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 3689940101.0 num_examples: 200000 download_size: 4295254110 dataset_size: 3689940101.0 - config_name: figureqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 1901887152.0 num_examples: 100000 download_size: 2220036667 dataset_size: 1901887152.0 - 
config_name: finqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 135268568.0 num_examples: 5276 download_size: 123698250 dataset_size: 135268568.0 - config_name: geomverse features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 951640204.0 num_examples: 9303 download_size: 323746516 dataset_size: 951640204.0 - config_name: hateful_memes features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 3035059823.0 num_examples: 8500 download_size: 3054208907 dataset_size: 3035059823.0 - config_name: hitab features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 161130580.0 num_examples: 2500 download_size: 158295807 dataset_size: 161130580.0 - config_name: iam features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 1129180352.0 num_examples: 5663 download_size: 1128935602 dataset_size: 1129180352.0 - config_name: iconqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 264513634.7170419 num_examples: 27307 download_size: 326674337 dataset_size: 264513634.7170419 - config_name: infographic_vqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 291677986.0 num_examples: 2118 download_size: 292351760 dataset_size: 291677986.0 - config_name: intergps features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 24982328.291771192 num_examples: 1280 download_size: 24870320 dataset_size: 24982328.291771192 - config_name: localized_narratives features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 21380844262.41927 num_examples: 199998 download_size: 22164342699 dataset_size: 21380844262.41927 - config_name: mapqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 3238062926.0 num_examples: 37417 download_size: 3307676486 dataset_size: 3238062926.0 - config_name: mimic_cgd features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 12592929433.0 num_examples: 70939 download_size: 13147641100 dataset_size: 12592929433.0 - config_name: multihiertt features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 1356766489.046 num_examples: 7619 download_size: 1360814135 dataset_size: 1356766489.046 - config_name: nlvr2 
features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 8375492591.0 num_examples: 50426 download_size: 10838882020 dataset_size: 8375492591.0 - config_name: ocrvqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 5467134439.0 num_examples: 165746 download_size: 6078073015 dataset_size: 5467134439.0 - config_name: okvqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 281454288182.492 num_examples: 9009 download_size: 3009062 dataset_size: 281454288182.492 - config_name: plotqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 7837605221.0 num_examples: 157070 download_size: 5320249066 dataset_size: 7837605221.0 - config_name: raven features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 1506550467.0 num_examples: 42000 download_size: 1720691636 dataset_size: 1506550467.0 - config_name: rendered_text features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 11086896502.0 num_examples: 10000 download_size: 11086960376 dataset_size: 11086896502.0 - config_name: robut_sqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 679135952.0 num_examples: 8514 download_size: 678722272 dataset_size: 679135952.0 - config_name: robut_wikisql features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 5950915477.0 num_examples: 74989 download_size: 6160300141 dataset_size: 5950915477.0 - config_name: robut_wtq features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 4023729236.0 num_examples: 38246 download_size: 4061523247 dataset_size: 4023729236.0 - config_name: scienceqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 284601898.76188564 num_examples: 4976 download_size: 283265438 dataset_size: 284601898.76188564 - config_name: screen2words features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 1670723783.0 num_examples: 15730 download_size: 1346254268 dataset_size: 1670723783.0 - config_name: spot_the_diff features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 1643123792.0 num_examples: 8566 download_size: 1526740548 dataset_size: 1643123792.0 - config_name: st_vqa features: - name: images 
sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 696265340.0 num_examples: 17247 download_size: 720462890 dataset_size: 696265340.0 - config_name: tabmwp features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 265337140.19648907 num_examples: 22722 download_size: 306643610 dataset_size: 265337140.19648907 - config_name: tallyqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 4267143189.0 num_examples: 98680 download_size: 4662245152 dataset_size: 4267143189.0 - config_name: tat_qa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 73213942.0 num_examples: 2199 download_size: 70862028 dataset_size: 73213942.0 - config_name: textcaps features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 5938676115.0 num_examples: 21953 download_size: 6175419911 dataset_size: 5938676115.0 - config_name: textvqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 5939437331.0 num_examples: 21953 download_size: 6175442839 dataset_size: 5939437331.0 - config_name: tqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 380346870.806369 num_examples: 1493 download_size: 378238311 dataset_size: 380346870.806369 - config_name: vistext features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 541250281.0 num_examples: 9969 download_size: 386023352 dataset_size: 541250281.0 - config_name: visual7w features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 4432168161.0 num_examples: 14366 download_size: 4443083495 dataset_size: 4432168161.0 - config_name: visualmrc features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 2941051627.2639995 num_examples: 3027 download_size: 2912911810 dataset_size: 2941051627.2639995 - config_name: vqarad features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 16561537.0 num_examples: 313 download_size: 16226241 dataset_size: 16561537.0 - config_name: vqav2 features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 10630091683.0 num_examples: 82772 download_size: 13479302437 dataset_size: 10630091683.0 - config_name: vsr features: - name: images sequence: image - name: texts list: - name: user dtype: string 
- name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 107489763.0 num_examples: 2157 download_size: 107576214 dataset_size: 107489763.0 - config_name: websight features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 2011365901.0 num_examples: 10000 download_size: 1601222161 dataset_size: 2011365901.0 configs: - config_name: ai2d data_files: - split: train path: ai2d/train-* - config_name: aokvqa data_files: - split: train path: aokvqa/train-* - config_name: chart2text data_files: - split: train path: chart2text/train-* - config_name: chartqa data_files: - split: train path: chartqa/train-* - config_name: clevr data_files: - split: train path: clevr/train-* - config_name: clevr_math data_files: - split: train path: clevr_math/train-* - config_name: cocoqa data_files: - split: train path: cocoqa/train-* - config_name: datikz data_files: - split: train path: datikz/train-* - config_name: diagram_image_to_text data_files: - split: train path: diagram_image_to_text/train-* - config_name: docvqa data_files: - split: train path: docvqa/train-* - config_name: dvqa data_files: - split: train path: dvqa/train-* - config_name: figureqa data_files: - split: train path: figureqa/train-* - config_name: finqa data_files: - split: train path: finqa/train-* - config_name: geomverse data_files: - split: train path: geomverse/train-* - config_name: hateful_memes data_files: - split: train path: hateful_memes/train-* - config_name: hitab data_files: - split: train path: hitab/train-* - config_name: iam data_files: - split: train path: iam/train-* - config_name: iconqa data_files: - split: train path: iconqa/train-* - config_name: infographic_vqa data_files: - split: train path: infographic_vqa/train-* - config_name: intergps data_files: - split: train path: intergps/train-* - config_name: localized_narratives data_files: - split: train path: localized_narratives/train-* - config_name: mapqa data_files: - split: train path: mapqa/train-* - config_name: mimic_cgd data_files: - split: train path: mimic_cgd/train-* - config_name: multihiertt data_files: - split: train path: multihiertt/train-* - config_name: nlvr2 data_files: - split: train path: nlvr2/train-* - config_name: ocrvqa data_files: - split: train path: ocrvqa/train-* - config_name: okvqa data_files: - split: train path: okvqa/train-* - config_name: plotqa data_files: - split: train path: plotqa/train-* - config_name: raven data_files: - split: train path: raven/train-* - config_name: rendered_text data_files: - split: train path: rendered_text/train-* - config_name: robut_sqa data_files: - split: train path: robut_sqa/train-* - config_name: robut_wikisql data_files: - split: train path: robut_wikisql/train-* - config_name: robut_wtq data_files: - split: train path: robut_wtq/train-* - config_name: scienceqa data_files: - split: train path: scienceqa/train-* - config_name: screen2words data_files: - split: train path: screen2words/train-* - config_name: spot_the_diff data_files: - split: train path: spot_the_diff/train-* - config_name: st_vqa data_files: - split: train path: st_vqa/train-* - config_name: tabmwp data_files: - split: train path: tabmwp/train-* - config_name: tallyqa data_files: - split: train path: tallyqa/train-* - config_name: tat_qa data_files: - split: train path: tat_qa/train-* - config_name: textcaps data_files: - split: train path: textcaps/train-* - config_name: 
textvqa data_files: - split: train path: textvqa/train-* - config_name: tqa data_files: - split: train path: tqa/train-* - config_name: vistext data_files: - split: train path: vistext/train-* - config_name: visual7w data_files: - split: train path: visual7w/train-* - config_name: visualmrc data_files: - split: train path: visualmrc/train-* - config_name: vqarad data_files: - split: train path: vqarad/train-* - config_name: vqav2 data_files: - split: train path: vqav2/train-* - config_name: vsr data_files: - split: train path: vsr/train-* - config_name: websight data_files: - split: train path: websight/train-* --- # Dataset Card for The Cauldron ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6177322d37f32ecb1e2d4cdf/3q8wnTYvCWyFiCGn2q1OX.png) ## Dataset description The Cauldron is part of the Idefics2 release. It is a massive collection of 50 vision-language datasets (training sets only) that were used for the fine-tuning of the vision-language model Idefics2. ## Load the dataset To load the dataset, install the library `datasets` with `pip install datasets`. Then, ``` from datasets import load_dataset ds = load_dataset("HuggingFaceM4/the_cauldron", "ai2d") ``` to download and load the config `ai2d` for example. ## Data fields An example of a sample looks as follows: ``` { "images" = [PIL.Image] "texts" = [ { "user": "Question: How many actions are depicted in the diagram?\nChoices:\nA. 6.\nB. 4.\nC. 8.\nD. 7.\nAnswer with the letter.", "assistant": "Answer: D", "source": "TQA" } ] } ``` In `images`, there is a list of images, to be placed before the text. In `texts`, there is a conversation between a user and an assistant about the images that is represented by a list of turns. ## Stats about the datasets in The Cauldron | Dataset | # images | # Q/A pairs | # tokens | |----------------------|----------|-------------|------------| | *General visual question answering* | | VQAv2 | 82,772 | 443,757 | 1,595,929 | | COCO-QA | 46,287 | 78,736 | 286,982 | | Visual7W | 14,366 | 69,817 | 279,268 | | A-OKVQA | 16,539 | 17,056 | 236,492 | | TallyQA | 98,680 | 183,986 | 738,254 | | OK-VQA | 8,998 | 9,009 | 38,853 | | HatefulMemes | 8,500 | 8,500 | 25,500 | | VQA-RAD | 313 | 1,793 | 8,418 | | Captioning | | LNarratives | 507,444 | 507,444 | 21,328,731 | | Screen2Words | 15,730 | 15,743 | 143,103 | | VSR | 2,157 | 3,354 | 10,062 | | *OCR, document understanding, text transcription* | | RenderedText | 999,000 | 999,000 | 27,207,774 | | DocVQA | 10,189 | 39,463 | 337,829 | | TextCaps | 21,953 | 21,953 | 389,658 | | TextVQA | 21,953 | 34,602 | 181,918 | | ST-VQA | 17,247 | 23,121 | 127,846 | | OCR-VQA | 165,746 | 801,579 | 6,073,824 | | VisualMRC | 3,027 | 11,988 | 168,828 | | IAM | 5,663 | 5,663 | 144,216 | | InfoVQA | 2,118 | 10,074 | 61,048 | | Diagram image-to-text| 300 | 300 | 22,196 | | *Chart/figure understanding* | | Chart2Text | 26,985 | 30,242 | 2,852,827 | | DVQA | 200,000 | 2,325,316 | 8,346,234 | | VisText | 7,057 | 9,969 | 1,245,485 | | ChartQA | 18,271 | 28,299 | 185,835 | | PlotQA | 157,070 | 20,249,479 | 8478299.278| | FigureQA | 100,000 | 1,327,368 | 3,982,104 | | MapQA | 37,417 | 483,416 | 6,470,485 | | *Table understanding* | | TabMWP | 22,729 | 23,059 | 1,948,166 | | TAT-QA | 2,199 | 13,215 | 283,776 | | HiTab | 2,500 | 7,782 | 351,299 | | MultiHiertt | 7,619 | 7,830 | 267,615 | | FinQA | 5,276 | 6,251 | 242,561 | | WikiSQL | 74,989 | 86,202 | 9,680,673 | | SQA | 8,514 | 34,141 | 1,894,824 | | WTQ | 38,246 | 44,096 | 6,677,013 | | *Reasoning, logic, maths* | | 
GeomVerse | 9,303 | 9,339 | 2,489,459 | | CLEVR-Math | 70,000 | 788,650 | 3,184,656 | | CLEVR | 70,000 | 699,989 | 2,396,781 | | IconQA | 27,315 | 29,859 | 112,969 | | RAVEN | 42,000 | 42,000 | 105,081 | | Inter-GPs | 1,451 | 2,101 | 8,404 | | *Textbook/academic questions* | | AI2D | 3,099 | 9,708 | 38,832 | | TQA | 1,496 | 6,501 | 26,004 | | ScienceQA | 4,985 | 6,218 | 24,872 | | *Differences between 2 images* | | NLVR2 | 50,426 | 86,373 | 259,119 | | GSD | 70,939 | 141,869 | 4,637,229 | | Spot the diff | 8,566 | 9,524 | 221,477 | | *Screenshot to code* | | WebSight | 500,000 | 500,000 | 276,743,299| | DaTikz | 47,974 | 48,296 | 59,556,252 | ## Decontamination The Cauldron contains only the train split of each sub-datasets. On top of that, we removed the few examples containing an image also present in the test splits of MMMU, MathVista or MMBench. ## References to the original datasets <details> <summary>References to the original datasets</summary> @misc{AI2D, title={A Diagram Is Worth A Dozen Images}, author={Aniruddha Kembhavi and Mike Salvato and Eric Kolve and Minjoon Seo and Hannaneh Hajishirzi and Ali Farhadi}, year={2016}, eprint={1603.07396}, archivePrefix={arXiv}, primaryClass={cs.CV} } @misc{A-OKVQA, title={A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge}, author={Dustin Schwenk and Apoorv Khandelwal and Christopher Clark and Kenneth Marino and Roozbeh Mottaghi}, year={2022}, eprint={2206.01718}, archivePrefix={arXiv}, primaryClass={cs.CV} } @inproceedings{Chart2Text, title = "Chart-to-Text: Generating Natural Language Descriptions for Charts by Adapting the Transformer Model", author = "Obeid, Jason and Hoque, Enamul", editor = "Davis, Brian and Graham, Yvette and Kelleher, John and Sripada, Yaji", booktitle = "Proceedings of the 13th International Conference on Natural Language Generation", month = dec, year = "2020", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.inlg-1.20", doi = "10.18653/v1/2020.inlg-1.20", pages = "138--147", } @inproceedings{ChartQA, title = "{C}hart{QA}: A Benchmark for Question Answering about Charts with Visual and Logical Reasoning", author = "Masry, Ahmed and Long, Do and Tan, Jia Qing and Joty, Shafiq and Hoque, Enamul", booktitle = "Findings of the Association for Computational Linguistics: ACL 2022", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-acl.177", doi = "10.18653/v1/2022.findings-acl.177", pages = "2263--2279", } @misc{CLEVR-Math, doi = {10.48550/ARXIV.2208.05358}, url = {https://arxiv.org/abs/2208.05358}, author = {Lindström, Adam Dahlgren}, keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7; I.2.10; I.2.6; I.4.8; I.1.4}, title = {CLEVR-Math: A Dataset for Compositional Language, Visual, and Mathematical Reasoning}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Share Alike 4.0 International} } @misc{CLEVR, title={CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning}, author={Justin Johnson and Bharath Hariharan and Laurens van der Maaten and Li Fei-Fei and C. 
Lawrence Zitnick and Ross Girshick}, year={2016}, eprint={1612.06890}, archivePrefix={arXiv}, primaryClass={cs.CV} } @inproceedings{CocoQA, author = {Ren, Mengye and Kiros, Ryan and Zemel, Richard}, booktitle = {Advances in Neural Information Processing Systems}, editor = {C. Cortes and N. Lawrence and D. Lee and M. Sugiyama and R. Garnett}, pages = {}, publisher = {Curran Associates, Inc.}, title = {Exploring Models and Data for Image Question Answering}, url = {https://proceedings.neurips.cc/paper_files/paper/2015/file/831c2f88a604a07ca94314b56a4921b8-Paper.pdf}, volume = {28}, year = {2015} } @misc{DaTikz, title={AutomaTikZ: Text-Guided Synthesis of Scientific Vector Graphics with TikZ}, author={Jonas Belouadi and Anne Lauscher and Steffen Eger}, year={2024}, eprint={2310.00367}, archivePrefix={arXiv}, primaryClass={cs.CL} } Diagram image to text: https://huggingface.co/datasets/Kamizuru00/diagram_image_to_text by @Kamizuru00 @INPROCEEDINGS{DocVQA, author={Mathew, Minesh and Karatzas, Dimosthenis and Jawahar, C. V.}, booktitle={2021 IEEE Winter Conference on Applications of Computer Vision (WACV)}, title={DocVQA: A Dataset for VQA on Document Images}, year={2021}, volume={}, number={}, pages={2199-2208}, keywords={Visualization;Computer vision;Text analysis;Image recognition;Image analysis;Conferences;Layout}, doi={10.1109/WACV48630.2021.00225}} @inproceedings{DVQA, title={DVQA: Understanding Data Visualizations via Question Answering}, author={Kafle, Kushal and Cohen, Scott and Price, Brian and Kanan, Christopher}, booktitle={CVPR}, year={2018} } @misc{FigureQA, title={FigureQA: An Annotated Figure Dataset for Visual Reasoning}, author={Samira Ebrahimi Kahou and Vincent Michalski and Adam Atkinson and Akos Kadar and Adam Trischler and Yoshua Bengio}, year={2018}, eprint={1710.07300}, archivePrefix={arXiv}, primaryClass={cs.CV} } @inproceedings{FinQA, title = "{F}in{QA}: A Dataset of Numerical Reasoning over Financial Data", author = "Chen, Zhiyu and Chen, Wenhu and Smiley, Charese and Shah, Sameena and Borova, Iana and Langdon, Dylan and Moussa, Reema and Beane, Matt and Huang, Ting-Hao and Routledge, Bryan and Wang, William Yang", editor = "Moens, Marie-Francine and Huang, Xuanjing and Specia, Lucia and Yih, Scott Wen-tau", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.300", doi = "10.18653/v1/2021.emnlp-main.300", pages = "3697--3711", } @misc{GeomVerse, title={GeomVerse: A Systematic Evaluation of Large Models for Geometric Reasoning}, author={Mehran Kazemi and Hamidreza Alvari and Ankit Anand and Jialin Wu and Xi Chen and Radu Soricut}, year={2023}, eprint={2312.12241}, archivePrefix={arXiv}, primaryClass={cs.CV} } @inproceedings{hatefulmeme, author = {Kiela, Douwe and Firooz, Hamed and Mohan, Aravind and Goswami, Vedanuj and Singh, Amanpreet and Ringshia, Pratik and Testuggine, Davide}, booktitle = {Advances in Neural Information Processing Systems}, editor = {H. Larochelle and M. Ranzato and R. Hadsell and M.F. Balcan and H. 
Lin}, pages = {2611--2624}, publisher = {Curran Associates, Inc.}, title = {The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes}, url = {https://proceedings.neurips.cc/paper_files/paper/2020/file/1b84c4cee2b8b3d823b30e2d604b1878-Paper.pdf}, volume = {33}, year = {2020} } @inproceedings{Hitab, title = "{H}i{T}ab: A Hierarchical Table Dataset for Question Answering and Natural Language Generation", author = "Cheng, Zhoujun and Dong, Haoyu and Wang, Zhiruo and Jia, Ran and Guo, Jiaqi and Gao, Yan and Han, Shi and Lou, Jian-Guang and Zhang, Dongmei", editor = "Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.78", doi = "10.18653/v1/2022.acl-long.78", pages = "1094--1110", } @article{IAM, author = {Marti, Urs-Viktor and Bunke, H.}, year = {2002}, month = {11}, pages = {39-46}, title = {The IAM-database: An English sentence database for offline handwriting recognition}, volume = {5}, journal = {International Journal on Document Analysis and Recognition}, doi = {10.1007/s100320200071} } @inproceedings{IconQA, title = {IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning}, author = {Lu, Pan and Qiu, Liang and Chen, Jiaqi and Xia, Tony and Zhao, Yizhou and Zhang, Wei and Yu, Zhou and Liang, Xiaodan and Zhu, Song-Chun}, booktitle = {The 35th Conference on Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks}, year = {2021} } @INPROCEEDINGS{InfographicVQA, author={Mathew, Minesh and Bagal, Viraj and Tito, Rubèn and Karatzas, Dimosthenis and Valveny, Ernest and Jawahar, C. 
V.}, booktitle={2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)}, title={InfographicVQA}, year={2022}, volume={}, number={}, pages={2582-2591}, keywords={Visualization;Computer vision;Computational modeling;Layout;Data visualization;Benchmark testing;Brain modeling;Document Analysis Datasets;Evaluation and Comparison of Vision Algorithms;Vision and Languages}, doi={10.1109/WACV51458.2022.00264} } @inproceedings{Inter-GPS, title = {Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning}, author = {Lu, Pan and Gong, Ran and Jiang, Shibiao and Qiu, Liang and Huang, Siyuan and Liang, Xiaodan and Zhu, Song-Chun}, booktitle = {The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021)}, year = {2021} } @misc{LocalizedNarratives, title={Connecting Vision and Language with Localized Narratives}, author={Jordi Pont-Tuset and Jasper Uijlings and Soravit Changpinyo and Radu Soricut and Vittorio Ferrari}, year={2020}, eprint={1912.03098}, archivePrefix={arXiv}, primaryClass={cs.CV} } @misc{MapQA, title={MapQA: A Dataset for Question Answering on Choropleth Maps}, author={Shuaichen Chang and David Palzer and Jialin Li and Eric Fosler-Lussier and Ningchuan Xiao}, year={2022}, eprint={2211.08545}, archivePrefix={arXiv}, primaryClass={cs.CV} } @misc{MIMIC-IT-General-Scene-Difference, title={MIMIC-IT: Multi-Modal In-Context Instruction Tuning}, author={Bo Li and Yuanhan Zhang and Liangyu Chen and Jinghao Wang and Fanyi Pu and Jingkang Yang and Chunyuan Li and Ziwei Liu}, year={2023}, eprint={2306.05425}, archivePrefix={arXiv}, primaryClass={cs.CV} } @inproceedings{Multihiertt, title = "{M}ulti{H}iertt: Numerical Reasoning over Multi Hierarchical Tabular and Textual Data", author = "Zhao, Yilun and Li, Yunxiang and Li, Chenying and Zhang, Rui", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.454", pages = "6588--6600", } @inproceedings{NLVR2, title = "A Corpus for Reasoning about Natural Language Grounded in Photographs", author = "Suhr, Alane and Zhou, Stephanie and Zhang, Ally and Zhang, Iris and Bai, Huajun and Artzi, Yoav", editor = "Korhonen, Anna and Traum, David and M{\`a}rquez, Llu{\'\i}s", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P19-1644", doi = "10.18653/v1/P19-1644", pages = "6418--6428", } @INPROCEEDINGS{OCR-VQA, author={Mishra, Anand and Shekhar, Shashank and Singh, Ajeet Kumar and Chakraborty, Anirban}, booktitle={2019 International Conference on Document Analysis and Recognition (ICDAR)}, title={OCR-VQA: Visual Question Answering by Reading Text in Images}, year={2019}, volume={}, number={}, pages={947-952}, keywords={Optical character recognition software;Visualization;Task analysis;Knowledge discovery;Text analysis;Text recognition;Character recognition;Optical Character Recognition (OCR), Visual Question Answering (VQA), Document image analysis, textVQA}, doi={10.1109/ICDAR.2019.00156} } @InProceedings{okvqa, author = {Kenneth Marino and Mohammad Rastegari 
and Ali Farhadi and Roozbeh Mottaghi}, title = {OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge}, booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2019}, } @InProceedings{PlotQA, author = {Methani, Nitesh and Ganguly, Pritha and Khapra, Mitesh M. and Kumar, Pratyush}, title = {PlotQA: Reasoning over Scientific Plots}, booktitle = {The IEEE Winter Conference on Applications of Computer Vision (WACV)}, month = {March}, year = {2020} } @inproceedings{RAVEN, title={RAVEN: A Dataset for Relational and Analogical Visual rEasoNing}, author={Zhang, Chi and Gao, Feng and Jia, Baoxiong and Zhu, Yixin and Zhu, Song-Chun}, booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2019} } RenderedText: https://huggingface.co/datasets/wendlerc/RenderedText by @wendlerc @inproceedings{Robut, title = "{R}obu{T}: A Systematic Study of Table {QA} Robustness Against Human-Annotated Adversarial Perturbations", author = "Zhao, Yilun and Zhao, Chen and Nan, Linyong and Qi, Zhenting and Zhang, Wenlin and Tang, Xiangru and Mi, Boyu and Radev, Dragomir", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-long.334", doi = "10.18653/v1/2023.acl-long.334", pages = "6064--6081", } @inproceedings{SQA, title = "Search-based Neural Structured Learning for Sequential Question Answering", author = "Iyyer, Mohit and Yih, Wen-tau and Chang, Ming-Wei", editor = "Barzilay, Regina and Kan, Min-Yen", booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2017", address = "Vancouver, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P17-1167", doi = "10.18653/v1/P17-1167", pages = "1821--1831", } @misc{WikiSQL, title={Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning}, author={Victor Zhong and Caiming Xiong and Richard Socher}, year={2017}, eprint={1709.00103}, archivePrefix={arXiv}, primaryClass={cs.CL} } @inproceedings{WTQ, title = "Compositional Semantic Parsing on Semi-Structured Tables", author = "Pasupat, Panupong and Liang, Percy", editor = "Zong, Chengqing and Strube, Michael", booktitle = "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = jul, year = "2015", address = "Beijing, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P15-1142", doi = "10.3115/v1/P15-1142", pages = "1470--1480", } @inproceedings{ScienceQA, author = {Lu, Pan and Mishra, Swaroop and Xia, Tanglin and Qiu, Liang and Chang, Kai-Wei and Zhu, Song-Chun and Tafjord, Oyvind and Clark, Peter and Kalyan, Ashwin}, booktitle = {Advances in Neural Information Processing Systems}, editor = {S. Koyejo and S. Mohamed and A. Agarwal and D. Belgrave and K. Cho and A. 
Oh}, pages = {2507--2521}, publisher = {Curran Associates, Inc.}, title = {Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering}, url = {https://proceedings.neurips.cc/paper_files/paper/2022/file/11332b6b6cf4485b84afadb1352d3a9a-Paper-Conference.pdf}, volume = {35}, year = {2022} } @inproceedings{screen2words, author = {Wang, Bryan and Li, Gang and Zhou, Xin and Chen, Zhourong and Grossman, Tovi and Li, Yang}, title = {Screen2Words: Automatic Mobile UI Summarization with Multimodal Learning}, year = {2021}, isbn = {9781450386357}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3472749.3474765}, doi = {10.1145/3472749.3474765}, booktitle = {The 34th Annual ACM Symposium on User Interface Software and Technology}, pages = {498–510}, numpages = {13}, keywords = {Mobile UI summarization, dataset., deep learning, language-based UI, screen understanding}, location = {Virtual Event, USA}, series = {UIST '21} } @inproceedings{SpotTheDiff, title = "Learning to Describe Differences Between Pairs of Similar Images", author = "Jhamtani, Harsh and others", editor = "Riloff, Ellen and Chiang, David and Hockenmaier, Julia and Tsujii, Jun{'}ichi", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", month = oct # "-" # nov, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/D18-1436", doi = "10.18653/v1/D18-1436", pages = "4024--4034", } @INPROCEEDINGS{STVQA, author={Biten, Ali Furkan and Tito, Rubèn and Mafla, Andrés and Gomez, Lluis and Rusiñol, Marçal and Jawahar, C.V. and Valveny, Ernest and Karatzas, Dimosthenis}, booktitle={2019 IEEE/CVF International Conference on Computer Vision (ICCV)}, title={Scene Text Visual Question Answering}, year={2019}, volume={}, number={}, pages={4290-4300}, keywords={Visualization;Task analysis;Knowledge discovery;Text recognition;Cognition;Computer vision;Semantics}, doi={10.1109/ICCV.2019.00439} } @inproceedings{TabMWP, title={Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning}, author={Lu, Pan and Qiu, Liang and Chang, Kai-Wei and Wu, Ying Nian and Zhu, Song-Chun and Rajpurohit, Tanmay and Clark, Peter and Kalyan, Ashwin}, booktitle={International Conference on Learning Representations (ICLR)}, year={2023} } @inproceedings{TallyQA, title={TallyQA: Answering Complex Counting Questions}, author={Acharya, Manoj and Kafle, Kushal and Kanan, Christopher}, booktitle={AAAI}, year={2019} } @inproceedings{TAT-QA, title = "{TAT}-{QA}: A Question Answering Benchmark on a Hybrid of Tabular and Textual Content in Finance", author = "Zhu, Fengbin and Lei, Wenqiang and Huang, Youcheng and Wang, Chao and Zhang, Shuo and Lv, Jiancheng and Feng, Fuli and Chua, Tat-Seng", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.254", doi = "10.18653/v1/2021.acl-long.254", pages = "3277--3287" } @misc{textcaps, title={TextCaps: a Dataset for Image Captioning with Reading Comprehension}, author={Oleksii Sidorov and Ronghang Hu and Marcus Rohrbach and Amanpreet Singh}, year={2020}, eprint={2003.12462}, archivePrefix={arXiv}, 
primaryClass={cs.CV} } @inproceedings{textvqa, title={Towards VQA Models That Can Read}, author={Singh, Amanpreet and Natarjan, Vivek and Shah, Meet and Jiang, Yu and Chen, Xinlei and Parikh, Devi and Rohrbach, Marcus}, booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition}, pages={8317-8326}, year={2019} } @INPROCEEDINGS{TQA, author={Kembhavi, Aniruddha and Seo, Minjoon and Schwenk, Dustin and Choi, Jonghyun and Farhadi, Ali and Hajishirzi, Hannaneh}, booktitle={2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, title={Are You Smarter Than a Sixth Grader? Textbook Question Answering for Multimodal Machine Comprehension}, year={2017}, volume={}, number={}, pages={5376-5384}, keywords={Knowledge discovery;Visualization;Cognition;Training;Natural languages;Computer vision}, doi={10.1109/CVPR.2017.571} } @inproceedings{VisText, title = {{VisText: A Benchmark for Semantically Rich Chart Captioning}}, author = {Benny J. Tang AND Angie Boggust AND Arvind Satyanarayan}, booktitle = {The Annual Meeting of the Association for Computational Linguistics (ACL)}, year = {2023}, url = {http://vis.csail.mit.edu/pubs/vistext} } @InProceedings{Visual7w, title = {{Visual7W: Grounded Question Answering in Images}}, author = {Yuke Zhu and Oliver Groth and Michael Bernstein and Li Fei-Fei}, booktitle = {{IEEE Conference on Computer Vision and Pattern Recognition}}, year = 2016, } @inproceedings{VisualMRC, author = {Ryota Tanaka and Kyosuke Nishida and Sen Yoshida}, title = {VisualMRC: Machine Reading Comprehension on Document Images}, booktitle = {AAAI}, year = {2021} } @article{VQA-RAD, author = {Lau, Jason and Gayen, Soumya and Ben Abacha, Asma and Demner-Fushman, Dina}, year = {2018}, month = {11}, pages = {180251}, title = {A dataset of clinically generated visual questions and answers about radiology images}, volume = {5}, journal = {Scientific Data}, doi = {10.1038/sdata.2018.251} } @misc{VQAv2, title={Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering}, author={Yash Goyal and Tejas Khot and Douglas Summers-Stay and Dhruv Batra and Devi Parikh}, year={2017}, eprint={1612.00837}, archivePrefix={arXiv}, primaryClass={cs.CV} } @misc{VSR, title={Visual Spatial Reasoning}, author={Fangyu Liu and Guy Emerson and Nigel Collier}, year={2023}, eprint={2205.00363}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{WebSight, title={Unlocking the conversion of Web Screenshots into HTML Code with the WebSight Dataset}, author={Hugo Laurençon and Léo Tronchon and Victor Sanh}, year={2024}, eprint={2403.09029}, archivePrefix={arXiv}, primaryClass={cs.HC} } </details> ## Licensing Information Each of the publicly available sub-datasets present in the Cauldron are governed by specific licensing conditions. Therefore, when making use of them you must take into consideration each of the licenses governing each dataset. To the extent we have any rights in the prompts, these are licensed under CC-BY-4.0. ## Citation Information If you are using this dataset, please cite ``` @misc{laurençon2024matters, title={What matters when building vision-language models?}, author={Hugo Laurençon and Léo Tronchon and Matthieu Cord and Victor Sanh}, year={2024}, eprint={2405.02246}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
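As an illustration (not part of the original card), the sketch below flattens one Cauldron sample into a chat-style message list using the structure shown in the "Data fields" section above; the message format itself is an arbitrary choice.

```python
# Sketch: turn one Cauldron sample into chat-style messages.
# The sample structure (images, texts with user/assistant/source keys) follows
# the "Data fields" example above; the message format is illustrative.
from datasets import load_dataset

ds = load_dataset("HuggingFaceM4/the_cauldron", "ai2d", split="train")
sample = ds[0]

messages = []
for turn in sample["texts"]:
    # Images are meant to be placed before the text of the conversation.
    messages.append({"role": "user", "content": turn["user"]})
    messages.append({"role": "assistant", "content": turn["assistant"]})

print(f'{len(sample["images"])} image(s), {len(messages)} message(s)')
print(messages[0]["content"][:200])
```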
m-a-p/FineFineWeb
m-a-p
"2024-12-19T11:34:03"
827,931
43
[ "task_categories:text-classification", "task_categories:text2text-generation", "task_categories:text-generation", "language:en", "license:apache-2.0", "size_categories:1B<n<10B", "modality:tabular", "modality:text", "region:us" ]
[ "text-classification", "text2text-generation", "text-generation" ]
"2024-12-14T12:46:33"
--- license: apache-2.0 task_categories: - text-classification - text2text-generation - text-generation language: - en size_categories: - n>1T --- # FineFineWeb: A Comprehensive Study on Fine-Grained Domain Web Corpus arXiv: Coming Soon Project Page: Coming Soon Blog: Coming Soon ## Data Statistics | Domain (#tokens/#samples) | Iteration 1 Tokens | Iteration 2 Tokens | Iteration 3 Tokens | Total Tokens | Iteration 1 Count | Iteration 2 Count | Iteration 3 Count | Total Count | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | aerospace | 5.77B | 261.63M | 309.33M | 6.34B | 9100000 | 688505 | 611034 | 10399539 | | agronomy | 13.08B | 947.41M | 229.04M | 14.26B | 15752828 | 2711790 | 649404 | 19114022 | | artistic | 178.25B | 5.79B | 3.75B | 187.80B | 314279703 | 16113512 | 9957104 | 340350319 | | astronomy | 5.20B | 134.39M | 54.66M | 5.38B | 7596521 | 357647 | 145832 | 8100000 | | atmospheric_science | 2.80B | 102.04M | 259.25M | 3.16B | 5709537 | 267789 | 525969 | 6503295 | | automotive | 36.72B | 436.34M | 911.65M | 38.07B | 60239679 | 1166729 | 1535882 | 62942290 | | beauty | 19.10B | 671.88M | 1.01B | 20.78B | 34787376 | 1808382 | 2201810 | 38797568 | | biology | 85.84B | 371.29M | 776.99M | 86.99B | 81413569 | 995384 | 1350348 | 83759301 | | celebrity | 9.63B | 706.41M | 4.22B | 14.56B | 19831188 | 1803788 | 7949240 | 29584216 | | chemistry | 27.80B | 588.92M | 131.46M | 28.52B | 31188189 | 1499085 | 328038 | 33015312 | | christianity | 47.72B | 403.68M | 732.55M | 48.86B | 55013147 | 1349874 | 2021458 | 58384479 | | civil_engineering | 8.85B | 1.27B | 402.91M | 10.52B | 13591632 | 2683940 | 940742 | 17216314 | | communication_engineering | 9.21B | 3.60B | 327.66M | 13.14B | 13001767 | 5959526 | 746495 | 19707788 | | computer_science_and_technology | 194.46B | 3.95B | 4.76B | 203.16B | 278420434 | 10263521 | 8654255 | 297338210 | | design | 96.58B | 3.80B | 450.00M | 100.82B | 190275603 | 16653588 | 2090515 | 209019706 | | drama_and_film | 19.12B | 10.86B | 206.27M | 30.19B | 33117478 | 18443259 | 564251 | 52124988 | | economics | 205.01B | 1.23B | 2.63B | 208.87B | 263965085 | 3874091 | 5505880 | 273345056 | | electronic_science | 30.19B | 7.76B | 482.62M | 38.43B | 42745767 | 12572747 | 1115605 | 56434119 | | entertainment | 152.92B | 1.67B | 5.06B | 159.65B | 256935144 | 5801081 | 9648023 | 272384248 | | environmental_science | 56.98B | 1.48B | 920.77M | 59.37B | 84500393 | 3557056 | 1966731 | 90024180 | | fashion | 18.72B | 977.27M | 264.01M | 19.96B | 53465628 | 3926500 | 1346988 | 58739116 | | finance | 146.39B | 327.45M | 1.13B | 147.85B | 187797764 | 1295893 | 3058801 | 192152458 | | food | 56.10B | 136.32M | 978.91M | 57.22B | 96485838 | 613875 | 3051981 | 100151694 | | gamble | 30.12B | 696.52M | 158.48M | 30.98B | 24909037 | 770540 | 164168 | 25843745 | | game | 43.47B | 2.36B | 2.68B | 48.51B | 65680699 | 4670033 | 3720700 | 74071432 | | geography | 110.18B | 1.16B | 192.67M | 111.53B | 161677214 | 3835932 | 559447 | 166072593 | | health | 191.20B | 427.93M | 18.43B | 210.06B | 215747152 | 1291215 | 23975955 | 241014322 | | history | 45.27B | 1.56B | 1.69B | 48.52B | 55710432 | 4167508 | 3463033 | 63340973 | | hobby | 150.23B | 42.78B | 44.05B | 237.06B | 276636362 | 81360893 | 71407735 | 429404990 | | hydraulic_engineering | 57.36M | 75.40M | 3.65M | 136.41M | 135079 | 163299 | 13453 | 311831 | | instrument_science | 5.35B | 2.02B | 165.43M | 7.54B | 8307736 | 2904274 | 462256 | 11674266 | | journalism_and_media_communication | 440.98B | 21.00B | 1.55B | 463.53B | 
645801807 | 50657668 | 4909008 | 701368483 | | landscape_architecture | 3.07B | 557.66M | 64.76M | 3.70B | 5613141 | 1138409 | 166526 | 6918076 | | law | 128.58B | 455.19M | 2.38B | 131.42B | 166473205 | 1660944 | 6145032 | 174279181 | | library | 57.16B | 5.01B | 36.56M | 62.21B | 86592305 | 10440991 | 153014 | 97186310 | | literature | 71.07B | 7.01B | 67.53B | 145.61B | 71191075 | 13247806 | 54760578 | 139199459 | | materials_science | 17.79B | 1.11B | 303.66M | 19.20B | 22136519 | 1663376 | 708384 | 24508279 | | mathematics | 5.87B | 50.33M | 261.65M | 6.18B | 10131933 | 179592 | 653050 | 10964575 | | mechanical_engineering | 86.13B | 1.24B | 129.96M | 87.49B | 111778813 | 3201605 | 428714 | 115409132 | | medical | 140.03B | 813.46M | 4.97B | 145.81B | 149594634 | 2266477 | 8527901 | 160389012 | | mining_engineering | 7.26B | 206.05M | 529.02M | 8.00B | 5540631 | 236145 | 468458 | 6245234 | | movie | 13.09B | 639.20M | 124.67M | 13.86B | 22938808 | 1577576 | 511882 | 25028266 | | music_and_dance | 15.42B | 10.38B | 618.46M | 26.42B | 29566554 | 20233446 | 1998272 | 51798272 | | news | 328.47B | 12.37B | 11.34B | 352.18B | 508567768 | 33206709 | 23482422 | 565256899 | | nuclear_science | 559.05M | 79.89M | 78.79M | 717.72M | 784847 | 170282 | 133598 | 1088727 | | ocean_science | 2.36B | 537.82M | 229.43M | 3.13B | 3700000 | 853052 | 425792 | 4978844 | | optical_engineering | 2.33B | 253.06M | 263.99M | 2.85B | 3510836 | 535026 | 400371 | 4446233 | | painting | 374.41M | 429.63M | 96.57M | 900.61M | 875783 | 824217 | 336203 | 2036203 | | pet | 12.12B | 154.14M | 307.28M | 12.58B | 19624688 | 457635 | 778970 | 20861293 | | petroleum_and_natural_gas_engineering | 950.08M | 515.05M | 121.56M | 1.59B | 1669447 | 899860 | 237843 | 2807150 | | philosophy | 47.99B | 121.26M | 335.77M | 48.44B | 50396964 | 505275 | 1030405 | 51932644 | | photo | 6.56B | 1.74B | 41.44M | 8.34B | 16194329 | 3901598 | 179607 | 20275534 | | physics | 21.56B | 372.21M | 191.17M | 22.12B | 24640373 | 843508 | 473758 | 25957639 | | politics | 79.52B | 253.26M | 930.96M | 80.70B | 97403603 | 1026315 | 2504127 | 100934045 | | psychology | 51.53B | 688.50M | 2.56B | 54.78B | 58829917 | 1881452 | 4066667 | 64778036 | | public_administration | 100.13B | 5.54B | 716.81M | 106.39B | 160247751 | 10657768 | 1785347 | 172690866 | | relationship | 21.87B | 3.69B | 129.60M | 25.69B | 28153321 | 6794774 | 321268 | 35269363 | | sociology | 76.34B | 3.59B | 8.88B | 88.82B | 106447186 | 7836896 | 13040695 | 127324777 | | sports | 118.64B | 379.18M | 1.79B | 120.80B | 173243631 | 1286718 | 4212540 | 178742889 | | statistics | 19.59B | 1.15B | 1.75B | 22.49B | 29958726 | 2746797 | 3390606 | 36096129 | | systems_science | 24.58B | 11.30B | 163.99M | 36.05B | 32879249 | 15120751 | 470001 | 48470001 | | textile_science | 2.59B | 2.89B | 94.56M | 5.57B | 8018141 | 8022001 | 456668 | 16496810 | | topicality | 34.87M | 5.22M | 0 | 40.09M | 137789 | 13506 | 0 | 151295 | | transportation_engineering | 12.80B | 6.61B | 972.50M | 20.38B | 23595624 | 11005933 | 2027812 | 36629369 | | travel | 78.87B | 584.78M | 957.26M | 80.41B | 127250195 | 1851342 | 2430704 | 131532241 | | urban_planning | 12.13B | 2.93B | 53.24M | 15.12B | 20040937 | 6176104 | 201963 | 26419004 | | weapons_science | 80.62M | 3.32B | 140.89M | 3.54B | 215544 | 5695154 | 369541 | 6280239 | | Grand Total | 4010.76B | 206.51B | 208.02B | 4425.30B | 5781764055 | 442387964 | 311920860 | 6536072879 | ## Data Construction Workflow 
![finefineweb-data-workflow](./assets/finefineweb-data-workflow.png) The data construction workflow can be summarized as follows: 1. **Deduplicate**: The FineWeb dataset is deduplicated using exact deduplication and MinHash techniques to remove redundant data. 2. **URL Labeling**: Root URLs from FineWeb are counted, and the top 1 million URLs are labeled using **GPT-4**. This step generates **DoI (Domain-of-Interest) Coarse-Grained URLs** and **DoNI (Domain-of-Non-Interest) Coarse-Grained URLs** as seed data sources. 3. **Coarse Recall**: a. Based on the labeled root URLs, data is sampled for each domain. b. The sampled data is labeled using **Qwen2-7B-Instruct**, producing 500K **DoI Positive Data** and 500K **DoI Negative Data** (note that for N>1 iterations, each 500K samples are composed of 250K sampled original seed data and 250K refined data after Fine Recall). c. A binary **FastText** model is trained per domain using the labeled data. d. The FastText model performs **coarse recall** on FineWeb, generating **Coarse DoI Data**. 4. **Fine Recall**: a. The **Coarse DoI Data** is labeled using **Qwen2-72B-Instruct** to produce **100K DoI Positive Data** and **50K DoI Negative Data**, with the latter further augmented with 50K negative samples from earlier FastText training. b. A **BERT** model is trained using this labeled data. c. The BERT model performs **fine recall** on the Coarse DoI Data, producing a refined dataset, which is the DoI subset of **FineFineWeb**. 5. **Coarse-Fine Recall Iteration**: The workflow of coarse and fine recall iterates for **3 rounds** with the following adjustments: a. FastText is re-trained using updated seed data, which combines BERT-recalled samples, BERT-dropped samples, and previously labeled seed data. b. The BERT model keeps frozen during subsequent iterations. c. Steps for training FastText, coarse recall, and fine recall are repeated without re-labeling data with Qwen2-Instruct models. ## Domain-Domain Similarity Analysis 1. Perform proportional weighted sampling of the domain subsets based on the sample size of each domain, with a total of 1 billion tokens sampled from the domain subsets. 2. Use the BGE-M3 model to compute the embeddings of the samples in each domain subset, referred to as domain embeddings. 3. Use the BGE-M3 model to compute the embeddings of the samples in each benchmark, referred to as benchmark embeddings (bench embeddings). 4. Calculate the MMD distance and the Wasserstein distance between the domain embeddings and the benchmark embeddings. ![domain-benchmark similarity](./assets/domain-benchmark%20similarity.png) The results above reveal the following observations: 1. The two code-related benchmarks, MBPP and HumanEval, exhibit relatively large distances from nearly all domains, indicating that the proportion of code data in the training set is relatively small. Notably, their distance to the mathematics domain is comparatively smaller, suggesting a certain degree of overlap between mathematics data and code data. 2. Benchmarks such as Hellaswag, ARC, MMLU, and BoolQ have distances that are close to almost all domains, except for the gamble domain. This indicates that the samples in these benchmarks involve synergetic effects across multiple domains of knowledge, with a wide distribution. 3. 
GSM8K and TriviaQA show significant discrepancies with a small number of domains, suggesting that the distribution differences between domains are more pronounced for samples involving grade-school mathematics and fact-based question answering. Some domains contain a substantial amount of this type of data, while others do not. 4. The gamble domain exhibits substantial differences from other domains and has large distances from all benchmarks, indicating that pretraining data related to gambling provides limited benefits for these benchmarks. ## Domain-Domain Duplication Let \\(D_1, D_2, \dots, D_N\\) represent \\(N\\) distinct domains, where we select top-20 URLs for each domain \\(D_i\\), denoted as \\(\{U_{i1}, U_{i2}, \dots, U_{i20}\}\\),. The total set of URLs across all domains is represented as \\(\mathcal{U}\\), and the total number of URLs is \\(M = |\mathcal{U}|\\). For each URL \\(U_k \in \mathcal{U}\\), the term frequency (TF) is defined as the proportion of \\(U_k\\) in the total set of URLs: \\(\text{TF}(U_k) = \frac{\text{count}(U_k)}{M}\\) where \\(\text{count}(U_k)\\) is the number of times \\(U_k\\) appears in \\(\mathcal{U}\\). Additionally, the document frequency \\(K_k\\) of \\(U_k\\) is the number of domains in which \\(U_k\\) appears. Based on this, the inverse document frequency (IDF) is calculated as: \\(\text{IDF}(U_k) = \log(\frac{N}{K_k})\\) The TF-IDF value for each URL \\(U_{ij}\\) in a specific domain \\(D_i\\) is then computed as: \\(\text{TF-IDF}(U_{ij}) = \text{TF}(U_{ij}) \times \text{IDF}(U_{ij})\\) ![domain-domain URL duplication](./assets/duplication.png) Using the TF-IDF values of all URLs within a domain, the domain-domain duplicate rate can be analyzed by comparing the **distribution** of TF-IDF values across domains. If a domain has many URLs with **high TF-IDF values**, it indicates that the domain’s URLs are relatively **unique** and significant within the entire set of URLs. Conversely, if a domain has many URLs with **low TF-IDF values**, it suggests that the domain's URLs are more **common** across other domains. Analyzing these values helps assess how similar or redundant a domain's content is in relation to others based on its URL composition. As shown in the figure, most domains have low duplication rates, except for topicality, pet, and atmospheric science. ## **Domain-Benchmark BPC-Acc Correlation** Experimental method: Using 28 models (see the paper), we first calculate BPC for all domains to obtain a model ranking \\(R_D\\). Similarly, we compute scores across all benchmarks to obtain a model ranking \\(R_M\\). We then calculate the Spearman correlation between \\(R_D\\) and \\(R_M\\). ![domain-benchmark BPC-Acc correlation](./assets/domain-benchmark%20correlation.png) - For benchmarks like ARC, MMLU, GSM8K, HumanEval, and MBPP, STEM-related domains show higher correlation rankings, particularly mathematics, physics, and systems science. - For TriviaQA, which emphasizes factual knowledge over reasoning, domains rich in world knowledge such as literature, history, and library science demonstrate higher correlation rankings. 
## Bibtex ```bibtex @misc{ title={FineFineWeb: A Comprehensive Study on Fine-grained Domain Web Corpus}, url={[https://huggingface.co/datasets/m-a-p/FineFineWeb](https://huggingface.co/datasets/m-a-p/FineFineWeb)}, author = {M-A-P, Ge Zhang*, Xinrun Du*, Zhimiao Yu*, Zili Wang*, Zekun Wang, Shuyue Guo, Tianyu Zheng, Kang Zhu, Jerry Liu, Shawn Yue, Binbin Liu, Zhongyuan Peng, Yifan Yao, Jack Yang, Ziming Li, Bingni Zhang, Minghao Liu, Tianyu Liu, Yang Gao, Wenhu Chen, Xiaohuan Zhou, Qian Liu, Taifeng Wang+, Wenhao Huang+}, publisher={huggingface}, verision={v0.1.0}, month={December}, year={2024} } ```
lavita/medical-qa-shared-task-v1-toy
lavita
"2023-07-20T00:29:06"
781,885
18
[ "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
null
"2023-07-20T00:28:51"
--- dataset_info: features: - name: id dtype: int64 - name: ending0 dtype: string - name: ending1 dtype: string - name: ending2 dtype: string - name: ending3 dtype: string - name: ending4 dtype: string - name: label dtype: int64 - name: sent1 dtype: string - name: sent2 dtype: string - name: startphrase dtype: string splits: - name: train num_bytes: 52480.01886421694 num_examples: 32 - name: dev num_bytes: 52490.64150943396 num_examples: 32 download_size: 89680 dataset_size: 104970.6603736509 --- # Dataset Card for "medical-qa-shared-task-v1-toy" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jat-project/jat-dataset
jat-project
"2024-02-16T13:52:52"
709,437
38
["task_categories:reinforcement-learning","task_categories:text-generation","task_categories:questio(...TRUNCATED)
[ "reinforcement-learning", "text-generation", "question-answering" ]
"2023-08-29T09:03:24"
"---\nannotations_creators:\n- found\n- machine-generated\nlicense: apache-2.0\nsource_datasets:\n- (...TRUNCATED)
hf-doc-build/doc-build
hf-doc-build
"2025-03-25T22:24:51"
675,976
8
[ "license:mit", "region:us" ]
null
"2022-10-24T15:39:05"
"---\nlicense: mit\npretty_name: Generated Docs for HF\n---\nThis repo contains all the docs publish(...TRUNCATED)
hf-doc-build/doc-build-dev
hf-doc-build
"2025-03-26T01:02:22"
664,480
4
[ "license:mit", "region:us", "documentation" ]
null
"2022-11-08T09:03:37"
"---\nlicense: mit\ntags:\n- documentation\npretty_name: HF Documentation (PRs)\n---\n\nThis is a da(...TRUNCATED)
End of preview.

Dataset Card for Hugging Face Hub Dataset Cards

This dataset consists of dataset cards for datasets hosted on the Hugging Face Hub. The cards are created by the community and provide information about the datasets they describe. The dataset is updated daily and covers publicly available datasets on the Hub.

This dataset is made available to support users who want to work with a large number of dataset cards from the Hub. We hope it will support research into dataset cards and their use, but the format may not suit every use case. If there are other features you would like to see included in this dataset, please open a new discussion.

Dataset Details

Uses

There are a number of potential uses for this dataset including:

  • text mining to find common themes in dataset cards (see the sketch after this list)
  • analysis of the dataset card format/content
  • topic modelling of dataset cards
  • training language models on the dataset cards
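As a rough illustration of the text-mining use, the sketch below loads the dataset with the `datasets` library and counts how often a few common section headings appear across cards. The split name (`train`) and the name of the column holding the card text (`card`) are assumptions and may need adjusting to the actual schema.

```python
# Minimal text-mining sketch; assumes a "train" split and a "card" column
# holding the raw README text. Adjust the names if the schema differs.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("librarian-bots/dataset_cards_with_metadata", split="train")

# Count how often a few common section headings appear across all cards.
headings = ["## Dataset Structure", "## Licensing", "## Citation"]
counts = Counter()
for card_text in ds["card"]:
    for heading in headings:
        if heading in card_text:
            counts[heading] += 1

print(counts.most_common())
```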

Out-of-Scope Use

[More Information Needed]

Dataset Structure

This dataset has a single split.

Dataset Creation

Curation Rationale

The dataset was created to assist people working with dataset cards, and in particular to support research into dataset cards and their use. It is also possible to download dataset cards directly with the Hugging Face Hub API or client library; that option may be preferable if you have a very specific use case or require a different format (see the sketch below).
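A minimal sketch of that client-library route is shown below. It relies on the `huggingface_hub` package; the repository id used here is this dataset's own id and is only an illustration, any dataset repository id will do.

```python
# Fetch a single dataset card directly from the Hub instead of using this dataset.
from huggingface_hub import DatasetCard, hf_hub_download

# Option 1: parse the card (YAML metadata + body) with the built-in helper.
card = DatasetCard.load("librarian-bots/dataset_cards_with_metadata")
print(card.data)        # structured metadata from the YAML header
print(card.text[:500])  # first part of the card body

# Option 2: download the raw README.md file itself.
readme_path = hf_hub_download(
    repo_id="librarian-bots/dataset_cards_with_metadata",
    filename="README.md",
    repo_type="dataset",
)
print(readme_path)
```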

Source Data

The source data consists of the README.md files of datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be present in a dataset repository.

Data Collection and Processing

The data is downloaded by a cron job that runs daily.
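The collection script itself is not included here, but a hypothetical sketch of what such a daily step could look like with `huggingface_hub` is shown below; it is an assumption about the process, not a description of the actual job.

```python
# Hypothetical re-implementation of the daily collection step: list public
# datasets and download each repository's README.md where one exists.
from huggingface_hub import hf_hub_download, list_datasets
from huggingface_hub.utils import EntryNotFoundError

cards = {}
for info in list_datasets(limit=100):  # limit kept small for the sketch
    try:
        path = hf_hub_download(
            repo_id=info.id,
            filename="README.md",
            repo_type="dataset",
        )
    except EntryNotFoundError:
        continue  # some repositories have no card
    with open(path, encoding="utf-8") as f:
        cards[info.id] = f.read()

print(f"Collected {len(cards)} dataset cards")
```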

Who are the source data producers?

The source data producers are the creators of the dataset cards on the Hugging Face Hub. This includes a broad variety of people from the community, ranging from large companies to individual researchers. We do not gather any information about who created each dataset card in this repository, although this information can be obtained from the Hugging Face Hub API.

Annotations [optional]

There are no additional annotations in this dataset beyond the dataset card content.

Annotation process

N/A

Who are the annotators?

N/A

Personal and Sensitive Information

We make no effort to anonymize the data. Whilst we don't expect the majority of dataset cards to contain personal or sensitive information, it is possible that some do. Dataset cards may also link to websites or include email addresses.

Bias, Risks, and Limitations

Dataset cards are created by the community, and we have no control over their content. We do not review the cards, and we make no claims about the accuracy of the information they contain. Some dataset cards discuss bias themselves, sometimes by providing examples of bias in the data they describe. As a result, this dataset may contain examples of bias.

Whilst we do not directly download any images linked in the dataset cards, some cards may embed images, and some of these images may not be suitable for all audiences.

Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

Citation

No formal citation is required for this dataset, but if you use it in your work, please include a link to this dataset page.

Dataset Card Authors

@davanstrien

Dataset Card Contact

@davanstrien
