Datasets:
datasetId (large_string, lengths 6–116) | author (large_string, lengths 2–42) | last_modified (large_string, dates 2021-04-29 15:34:29 – 2025-07-09 06:13:16) | downloads (int64, 0–3.97M) | likes (int64, 0–7.74k) | tags (large list, lengths 1–7.92k) | task_categories (large list, lengths 0–48) | createdAt (large_string, dates 2022-03-02 23:29:22 – 2025-07-09 06:08:40) | trending_score (float64, 0–64) | card (large_string, lengths 31–1.01M) |
---|---|---|---|---|---|---|---|---|---|
longjae/klue-mrc-bge-m3 | longjae | 2025-05-10T15:46:29Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-10T15:46:17Z | 0 | ---
dataset_info:
features:
- name: title
dtype: string
- name: news_category
dtype: string
- name: source
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: question_type
dtype: int64
- name: is_impossible
dtype: bool
- name: answer_text
dtype: string
- name: answer_start
dtype: int64
- name: negative_samples
sequence: string
splits:
- name: train
num_bytes: 130307018
num_examples: 10434
download_size: 76987383
dataset_size: 130307018
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Alexator26/query2doc-ru-24 | Alexator26 | 2025-04-27T20:12:37Z | 20 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-27T20:12:34Z | 0 | ---
dataset_info:
features:
- name: query
dtype: string
- name: positive
dtype: string
splits:
- name: train
num_bytes: 9320061
num_examples: 15000
download_size: 4781856
dataset_size: 9320061
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TAUR-dev/SIE_EVAL__SIEXP_concat_until_correct_lm2d__sft__samples__bf_evaluated | TAUR-dev | 2025-06-09T02:42:34Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-09T02:42:31Z | 0 | ---
dataset_info:
features:
- name: doc_id
dtype: int64
- name: doc
dtype: string
- name: target
dtype: string
- name: arguments
dtype: string
- name: exact_match
dtype: int64
- name: extracted_answers
dtype: string
- name: source_file
dtype: string
- name: info
dtype: string
- name: evaluation_api_cost
dtype: string
- name: eval_type
dtype: string
- name: response_to_evaluate
dtype: string
- name: row_idx
dtype: int64
- name: gen_idx
dtype: int64
- name: eval_extracted_answer
dtype: string
- name: answer_extraction_llm_prompt
dtype: string
- name: answer_extraction_reasoning
dtype: string
- name: answer_idx
dtype: int64
- name: answer_is_correct
dtype: bool
- name: answer_judgement_reasoning
dtype: string
- name: answer_judgement_llm_prompt
dtype: string
- name: internal_answers_per_gen
sequence:
sequence: string
- name: internal_answers_is_correct_per_gen
sequence:
sequence: bool
- name: internal_answers_judgement_reasoning_per_gen
sequence:
sequence: string
- name: internal_answers_judgement_llm_prompt_per_gen
sequence:
sequence: string
- name: responses_to_evaluate
sequence: string
- name: eval_extracted_answers
sequence: string
- name: answer_is_corrects
sequence: bool
- name: mock_budget_force_convo
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 174206961
num_examples: 3656
download_size: 31106371
dataset_size: 174206961
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sameearif/imdb-urdu | sameearif | 2025-04-07T03:29:43Z | 45 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-07T03:29:39Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 39966979
num_examples: 40000
- name: validation
num_bytes: 5008891
num_examples: 5000
- name: test
num_bytes: 4995525
num_examples: 5000
download_size: 15744626
dataset_size: 49971395
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
nurettin2615/Search_Engine_Optimization_1 | nurettin2615 | 2024-12-28T22:36:56Z | 17 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-28T22:22:23Z | 0 | ---
dataset_info:
features:
- name: Instruction
dtype: string
- name: Input
dtype: string
- name: Response
dtype: string
splits:
- name: train
num_bytes: 4271.8
num_examples: 13
- name: test
num_bytes: 657.2
num_examples: 2
download_size: 10580
dataset_size: 4929.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
gremlin97/dust_devil_detection | gremlin97 | 2025-05-12T02:44:08Z | 0 | 0 | [
"task_categories:object-detection",
"task_ids:instance-segmentation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"object-detection"
] | 2025-05-12T02:43:28Z | 0 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- object-detection
task_ids:
- instance-segmentation
pretty_name: dust_devil_detection
---
# dust_devil_detection Dataset
An object detection dataset in YOLO format containing 3 splits: train, val, test.
## Dataset Metadata
* **License:** CC-BY-4.0 (Creative Commons Attribution 4.0 International)
* **Version:** 1.0
* **Date Published:** 2025-05-11
* **Cite As:** TBD
## Dataset Details
- Format: YOLO
- Splits: train, val, test
- Classes: dustdevil
## Additional Formats
- Includes COCO format annotations
- Includes Pascal VOC format annotations
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("gremlin97/dust_devil_detection")
```
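The splits listed above can be accessed individually once loaded; a minimal follow-up sketch (split and class names come from this card, while the feature/column names are not documented here, so they are simply printed):
```python
from datasets import load_dataset

# Load the dataset; the card declares train, val and test splits.
dataset = load_dataset("gremlin97/dust_devil_detection")

# Inspect the train split before training: the card documents the single
# class "dustdevil" but does not list the column names, so print them.
print(dataset["train"].num_rows)
print(dataset["train"].features)
```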
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-sk-unrevised_NoQuant_16_32_0.01_64_BestF1_sk | ferrazzipietro | 2024-12-02T17:59:58Z | 17 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-02T17:59:55Z | 0 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 148050
num_examples: 102
- name: test
num_bytes: 1034730
num_examples: 653
download_size: 248164
dataset_size: 1182780
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
kothasuhas/philosophy-textbooks-9 | kothasuhas | 2024-11-14T23:12:53Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-14T23:12:50Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 32652469
num_examples: 7908
download_size: 19386911
dataset_size: 32652469
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_d1e32cc4-6531-4a5a-944e-578f6388248b | argilla-internal-testing | 2024-12-12T10:24:47Z | 13 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-12T10:24:46Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
wanhin/new_cad_2 | wanhin | 2025-06-20T09:44:11Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-20T09:43:11Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: range_0_1000
num_bytes: 485474701
num_examples: 213489
- name: range_1000_2000
num_bytes: 432378750
num_examples: 89014
- name: range_2000_3500
num_bytes: 73046950
num_examples: 9063
download_size: 236810696
dataset_size: 990900401
configs:
- config_name: default
data_files:
- split: range_0_1000
path: data/range_0_1000-*
- split: range_1000_2000
path: data/range_1000_2000-*
- split: range_2000_3500
path: data/range_2000_3500-*
---
|
jordinia/netpro-finetune | jordinia | 2025-05-14T16:18:23Z | 73 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T14:13:10Z | 0 | ---
dataset_info:
- config_name: chatml_thought_25k
features:
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 524399437
num_examples: 25008
- name: validation
num_bytes: 1248094
num_examples: 60
download_size: 142398784
dataset_size: 525647531
- config_name: chatml_thought_33k
features:
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 692838511.9417955
num_examples: 33128
- name: test
num_bytes: 2802474.0582045577
num_examples: 134
download_size: 209642745
dataset_size: 695640986.0
- config_name: chatml_thought_7k
features:
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 152111743
num_examples: 7245
- name: validation
num_bytes: 1253112
num_examples: 60
download_size: 40973096
dataset_size: 153364855
- config_name: full
features:
- name: Domain
dtype: string
- name: Content
dtype: string
- name: Label
dtype: int64
- name: Classification
dtype: string
- name: Reason
dtype: string
- name: Confidence
dtype: int64
- name: Thought
dtype: string
splits:
- name: train
num_bytes: 536555898
num_examples: 77461
- name: validation
num_bytes: 133561964
num_examples: 19367
download_size: 343767552
dataset_size: 670117862
- config_name: raw_7k
features:
- name: Domain
dtype: string
- name: Content
dtype: string
- name: Label
dtype: int64
- name: Classification
dtype: string
- name: Reason
dtype: string
- name: Confidence
dtype: int64
- name: Thought
dtype: string
splits:
- name: train
num_bytes: 51549369
num_examples: 7245
- name: validation
num_bytes: 420299
num_examples: 60
download_size: 26238304
dataset_size: 51969668
configs:
- config_name: chatml_thought_25k
data_files:
- split: train
path: chatml_thought_25k/train-*
- split: validation
path: chatml_thought_25k/validation-*
- config_name: chatml_thought_33k
data_files:
- split: train
path: chatml_thought_33k/train-*
- split: test
path: chatml_thought_33k/test-*
- config_name: chatml_thought_7k
data_files:
- split: train
path: chatml_thought_7k/train-*
- split: validation
path: chatml_thought_7k/validation-*
- config_name: full
data_files:
- split: train
path: full/train-*
- split: validation
path: full/validation-*
- config_name: raw_7k
data_files:
- split: train
path: raw_7k/train-*
- split: validation
path: raw_7k/validation-*
---
|
JanGoeran/chess-fens-dataset | JanGoeran | 2025-03-26T10:27:41Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-26T10:27:36Z | 0 | ---
dataset_info:
features:
- name: fen
dtype: string
- name: top_moves
list:
- name: Centipawn
dtype: int64
- name: Mate
dtype: int64
- name: Move
dtype: string
- name: top_moves_list
sequence: string
splits:
- name: train
num_bytes: 221067
num_examples: 1000
download_size: 82557
dataset_size: 221067
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
juliadollis/teste_sim4 | juliadollis | 2025-02-12T21:17:25Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-12T21:15:54Z | 0 | ---
dataset_info:
features:
- name: text_id_1
dtype: string
- name: text_id_2
dtype: string
- name: similarity
dtype: float64
splits:
- name: train
num_bytes: 480
num_examples: 15
download_size: 1665
dataset_size: 480
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_b7a60f80-0952-4c57-8de8-561cdda7376e | argilla-internal-testing | 2024-10-16T12:58:22Z | 18 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-16T12:58:21Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1454
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
neelabh17/new_news_exploded_prompt_n_75_d_perc_60_num_gen_10_Qwen2.5-7B-Instruct | neelabh17 | 2025-05-15T16:51:22Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-15T16:51:22Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: name
dtype: string
- name: topic
dtype: string
- name: news
dtype: string
- name: category
dtype: string
- name: question
dtype: string
- name: option
sequence: string
- name: prompt
dtype: string
- name: response_0
dtype: string
- name: answer_0
dtype: string
- name: correct_0
dtype: int64
- name: response_1
dtype: string
- name: answer_1
dtype: string
- name: correct_1
dtype: int64
- name: response_2
dtype: string
- name: answer_2
dtype: string
- name: correct_2
dtype: int64
- name: response_3
dtype: string
- name: answer_3
dtype: string
- name: correct_3
dtype: int64
- name: response_4
dtype: string
- name: answer_4
dtype: string
- name: correct_4
dtype: int64
- name: response_5
dtype: string
- name: answer_5
dtype: string
- name: correct_5
dtype: int64
- name: response_6
dtype: string
- name: answer_6
dtype: string
- name: correct_6
dtype: int64
- name: response_7
dtype: string
- name: answer_7
dtype: string
- name: correct_7
dtype: int64
- name: response_8
dtype: string
- name: answer_8
dtype: string
- name: correct_8
dtype: int64
- name: response_9
dtype: string
- name: answer_9
dtype: string
- name: correct_9
dtype: int64
splits:
- name: train
num_bytes: 8330869
num_examples: 375
download_size: 2324377
dataset_size: 8330869
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nguyentranAI2/Volality15000-20000 | nguyentranAI2 | 2025-04-15T15:43:00Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-15T03:58:25Z | 0 | ---
dataset_info:
features:
- name: report
dtype: string
- name: labels
dtype: float64
splits:
- name: train
num_bytes: 2700876
num_examples: 4996
download_size: 745220
dataset_size: 2700876
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
NeutrinoPit/OpenSubtitles2024-en-ar-batch42 | NeutrinoPit | 2025-03-04T03:19:46Z | 16 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-04T03:19:42Z | 0 | ---
dataset_info:
features:
- name: en
dtype: string
- name: ar
dtype: string
splits:
- name: train
num_bytes: 104563267
num_examples: 1000000
download_size: 63668057
dataset_size: 104563267
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AnnetteDUBUS/Polyvor_test_Annette | AnnetteDUBUS | 2025-02-21T15:44:14Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-21T15:44:13Z | 0 | ---
dataset_info:
features:
- name: set_name
dtype: string
- name: item_index
dtype: int64
- name: item_name
dtype: string
- name: item_price
dtype: float64
- name: item_likes
dtype: int64
- name: item_image
dtype: string
- name: item_categoryid
dtype: int64
- name: views
dtype: int64
- name: likes_set
dtype: int64
- name: date
dtype: string
- name: set_id
dtype: int64
- name: set_desc
dtype: string
- name: categoryid
dtype: int64
- name: category_name
dtype: string
- name: item_name.1
dtype: string
- name: image_exists
dtype: bool
- name: dominant_color_name
dtype: string
- name: image_bytes
dtype: image
splits:
- name: train
num_bytes: 96544.0
num_examples: 6
download_size: 104893
dataset_size: 96544.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jkazdan/gemma-2-9b-it-baseline-5000-HeX-PHI-hard_no | jkazdan | 2025-03-28T00:05:13Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-28T00:05:11Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 752889
num_examples: 300
download_size: 352487
dataset_size: 752889
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
michsethowusu/fulah-kimbundu_sentence-pairs | michsethowusu | 2025-04-02T11:34:56Z | 9 | 0 | [
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-02T11:34:41Z | 0 |
---
dataset_info:
features:
- name: score
dtype: float32
- name: Fulah
dtype: string
- name: Kimbundu
dtype: string
splits:
- name: train
num_bytes: 2305203
num_examples: 19095
download_size: 2305203
dataset_size: 2305203
configs:
- config_name: default
data_files:
- split: train
path: Fulah-Kimbundu_Sentence-Pairs.csv
---
# Fulah-Kimbundu_Sentence-Pairs Dataset
This dataset contains sentence pairs for African languages along with similarity scores. It can be used for machine translation, sentence alignment, or other natural language processing tasks.
This dataset is based on the NLLBv1 dataset, published on OPUS under an open-source initiative led by META. You can find more information here: [OPUS - NLLB-v1](https://opus.nlpl.eu/legacy/NLLB-v1.php)
## Metadata
- **File Name**: Fulah-Kimbundu_Sentence-Pairs
- **Number of Rows**: 19095
- **Number of Columns**: 3
- **Columns**: score, Fulah, Kimbundu
## Dataset Description
The dataset contains sentence pairs in African languages with an associated similarity score. Each row consists of three columns:
1. `score`: The similarity score between the two sentences (ranging from 0 to 1).
2. `Fulah`: The first sentence in the pair (language 1).
3. `Kimbundu`: The second sentence in the pair (language 2).
This dataset is intended for use in training and evaluating machine learning models for tasks like translation, sentence similarity, and cross-lingual transfer learning.
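As an illustration, a minimal loading sketch (the repository id and column names are taken from this card; the 0.9 score threshold is only an example value, not a recommendation from the dataset author):
```python
from datasets import load_dataset

# Single train split backed by Fulah-Kimbundu_Sentence-Pairs.csv.
pairs = load_dataset("michsethowusu/fulah-kimbundu_sentence-pairs", split="train")

# Keep only higher-confidence alignments; 0.9 is an arbitrary example threshold.
confident = pairs.filter(lambda row: row["score"] >= 0.9)
print(f"kept {len(confident)} of {len(pairs)} pairs")
print(confident[0]["Fulah"], "->", confident[0]["Kimbundu"])
```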
## References
Below are papers related to how the data was collected and used in various multilingual and cross-lingual applications:
[1] Holger Schwenk and Matthijs Douze, Learning Joint Multilingual Sentence Representations with Neural Machine Translation, ACL workshop on Representation Learning for NLP, 2017.
[2] Holger Schwenk and Xian Li, A Corpus for Multilingual Document Classification in Eight Languages, LREC, pages 3548-3551, 2018.
[3] Holger Schwenk, Filtering and Mining Parallel Data in a Joint Multilingual Space, ACL, July 2018.
[4] Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk and Veselin Stoyanov, XNLI: Cross-lingual Sentence Understanding through Inference, EMNLP, 2018.
[5] Mikel Artetxe and Holger Schwenk, Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings, arXiv, Nov 3 2018.
[6] Mikel Artetxe and Holger Schwenk, Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond, arXiv, Dec 26 2018.
[7] Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia, arXiv, July 11 2019.
[8] Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave and Armand Joulin, CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB.
[9] Paul-Ambroise Duquenne, Hongyu Gong, Holger Schwenk, Multimodal and Multilingual Embeddings for Large-Scale Speech Mining, NeurIPS 2021, pages 15748-15761.
[10] Kevin Heffernan, Onur Celebi, and Holger Schwenk, Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages.
|
svjack/Nino_Videos_Captioned | svjack | 2025-04-29T00:16:12Z | 492 | 0 | [
"size_categories:n<1K",
"modality:text",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-04-29T00:08:32Z | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path:
- "*.mp4"
- "metadata.csv"
---



|
mlfoundations-dev/stackexchange_physics_seed_science_20K | mlfoundations-dev | 2025-03-18T21:18:27Z | 36 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-18T21:18:07Z | 0 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: __original_row_idx
dtype: int64
- name: final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 902936485
num_examples: 20000
download_size: 404868374
dataset_size: 902936485
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yleo/moss_cube_stacking | yleo | 2024-10-27T13:55:12Z | 26 | 0 | [
"task_categories:robotics",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2024-10-27T13:54:38Z | 0 | ---
task_categories:
- robotics
tags:
- LeRobot
- tutorial
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
giskardai/phare | giskardai | 2025-06-06T10:07:00Z | 472 | 9 | [
"task_categories:text-generation",
"language:fr",
"language:en",
"language:es",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2505.11365",
"region:us"
] | [
"text-generation"
] | 2025-03-25T16:05:29Z | 0 | ---
language:
- fr
- en
- es
license: cc-by-4.0
size_categories:
- 1K<n<10K
task_categories:
- text-generation
pretty_name: Phare
configs:
- config_name: hallucination_tools_basic
data_files:
- split: public
path: hallucination/tools/basic.parquet
- config_name: hallucination_tools_knowledge
data_files:
- split: public
path: hallucination/tools/knowledge.parquet
- config_name: hallucination_debunking
data_files:
- split: public
path: hallucination/debunking/*.parquet
- config_name: hallucination_factuality
data_files:
- split: public
path: hallucination/factuality/*.parquet
- config_name: hallucination_satirical
data_files:
- split: public
path: hallucination/satirical/*.parquet
- config_name: harmful_vulnerable_misguidance
data_files:
- split: public
path: harmful/vulnerable_misguidance/*.parquet
- config_name: biases
data_files:
- split: public
path: biases/story_generation/*.parquet
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6596ca5cce76219628b8eab4/d8DzaI1j6B9GyKFg6DAyg.png" alt="Phare Logo" width="75%"/>
</p>
# [Phare Benchmark](https://huggingface.co/papers/2505.11365)
Phare is a multilingual benchmark that measures LLM Safety across multiple categories of vulnerabilities, including hallucination, biases & stereotypes, harmful content, and prompt injection.
## Dataset Details
### Dataset Description
This dataset contains the public set of samples of Phare Benchmark. These samples are split into multiple modules to assess LLM safety across various directions.
Each module is responsible for detecting vulnerabilities in the LLM response:
- **Hallucination**: evaluates factuality and the level of misinformation spread by the models in a question-answer setting. Questions are derived from existing content, including known misinformation and scientifically refuted theories.
- **Biases & stereotypes**: assesses the presence of biases in LLM generations for creative tasks.
- **Harmful content**: measures how often an LLM endorses dangerous behavior or misguides vulnerable people.
- **Prompt injection**: (not yet included in the benchmark)
Each module is split into several submodules. The submodules are different approaches to eliciting problematic behavior from the models. For instance, the hallucination module has several submodules:
- **Debunking**: questions about scientifically refuted facts or theories with various levels of bias
- **Satirical**: questions derived from misinformation and satirical sources
- **Factuality**: questions about generic facts
- **Tools**: questions that can be answered using a tool available to the model, to measure hallucination in tool parameters and correct tool usage.
### Extra information
- **Author:** Giskard AI
- **Language(s):** English, French, Spanish
- **License:** CC BY 4.0
## Dataset Structure
The dataset is split into a **public** set (available in this repository) and a **private** set. Giskard reserves the private set to run the [Phare Benchmark](http://phare.giskard.ai/) and keep the leaderboard up to date.
Each submodule is a set of `.jsonl` files containing the samples.
Each sample in these files has the following structure:
```
{
"id": # unique id of the sample,
"messages": # the list of messages to be sent to the LLM
"metadata": {
"task_name": # the name of the task, typically differs from one file to another or from one submodule to another
"language": # the language of the question
"evaluation_data": # a dictionary with additional elements required for the automatic evaluation of the response (include context about the question, expected answers, etc.), changes from between submodules.
    }
}
```
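A minimal loading sketch for a single submodule; the config name and the `public` split are taken from the YAML header of this card, and the field names follow the sample structure above (the exact column layout of the Parquet files may differ, so the features are printed first):
```python
from datasets import load_dataset

# Config names (e.g. "hallucination_factuality", "biases") and the "public"
# split are declared in the YAML header of this card.
phare = load_dataset("giskardai/phare", "hallucination_factuality", split="public")

# Inspect the actual columns before relying on a specific layout,
# then look at one sample's messages and metadata.
print(phare.features)
sample = phare[0]
print(sample["messages"])
print(sample["metadata"])
```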
## Dataset Creation
### Curation Rationale
Most safety evaluation datasets lack comprehensiveness and multicultural support. Our main goal with Phare is to fill this gap and propose a benchmark that detects inappropriate behavior in various situations.
In addition, the dataset was designed in multiple languages from scratch, including the data collection phase to ensure multicultural diversity.
### Source Data
Data sources are diverse and change for each module:
- **Hallucinations**: news articles, wikipedia articles, satirical articles, forum threads, etc.
- **Harmful content**: examples of AI incident from https://incidentdatabase.ai/
- **Biases & Stereotypes**: legal documents about discriminatory attributes.
The Hallucination module uses the source data more extensively than the other modules: its questions are grounded in existing content, while for the other modules the data source only shapes the evaluation process, e.g. legislation about discrimination determines which attributes are extracted from the LLM answers.
#### Data Collection and Processing
Data collection and filtering were done semi-automatically by the Giskard team: the initial collection and filtering steps were automated, with criteria that differ by module.
Following the data collection and filtering step, the samples are generated using diverse strategies. It includes a combination of LLM generation and the application of handcrafted templates. All details about the generation process are available in our [technical report](https://arxiv.org/abs/2505.11365).
A manual review of the generated samples was then conducted by native speakers of the corresponding language to make sure they met our quality criteria.
#### Personal and Sensitive Information
The dataset contains samples that can be sensitive or misleading. In particular, the harmful content module contains samples that push for dangerous behavior. Similarly, the hallucination module contains samples made of factually false content.
## Bias, Risks, and Limitations
- Some content was generated with the help of LLMs and can be imperfect.
- Some data sources used in particular in the hallucination module can be partial.
- The evaluation process is automatic and not fully accurate; we measured the evaluation quality manually on each submodule to ensure the errors stay bounded. The results of this analysis will be reported in detail in the technical paper.
- The team that manually reviewed the samples and the evaluation results has limited representativity.
- Some modules and languages have more samples than others and will have more influence on the aggregated scores.
- The representativeness of the private and public splits differs across modules.
## Dataset Card Contact
- Matteo Dora -- @mattbit -- [email protected]
- Pierre Le Jeune -- @pierlj -- [email protected] |
zhaoraning/eval_Orbbec111 | zhaoraning | 2025-05-15T14:43:31Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-15T14:42:22Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 10,
"total_frames": 8914,
"total_tasks": 1,
"total_videos": 10,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
880,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 880,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
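Since the YAML header maps the default config to `data/*/*.parquet`, the tabular part of the episodes can be inspected with a plain `datasets` load; this is only a sketch, not the LeRobot loader, and it does not decode the mp4 video streams referenced in `info.json`:
```python
from datasets import load_dataset

# Loads the per-frame parquet data (action, observation.state, timestamps, indices).
frames = load_dataset("zhaoraning/eval_Orbbec111", split="train")

print(frames.num_rows)             # info.json reports 8914 total frames
print(frames[0]["action"])         # 6-dim action vector for the first frame
print(frames[0]["episode_index"])  # which of the 10 episodes the frame belongs to
```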
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
FloatFrank/TULU3TIPA | FloatFrank | 2025-04-05T06:04:24Z | 70 | 1 | [
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-05T05:32:02Z | 0 | ---
license: apache-2.0
---
|
hyunjinlee/sample-data | hyunjinlee | 2025-02-19T15:08:40Z | 53 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-19T15:07:17Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 9036
num_examples: 41
download_size: 7661
dataset_size: 9036
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CasperLD/cartoons_with_blip_captions_512_max_500 | CasperLD | 2025-01-12T00:49:33Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-12T00:48:45Z | 0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 313734571.0
num_examples: 6000
download_size: 299119632
dataset_size: 313734571.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cartoons_with_blip_captions_512_max_500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pmdlt/MNLP_M3_rag_dataset | pmdlt | 2025-06-04T15:19:33Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-04T12:28:28Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 45680052.74761919
num_examples: 18495
download_size: 207191707
dataset_size: 45680052.74761919
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TAUR-dev/solution-trees__short-and-wide_p3_batch9 | TAUR-dev | 2025-03-13T14:01:06Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-13T14:00:59Z | 0 | ---
dataset_info:
features:
- name: solution
dtype: string
- name: question
dtype: string
- name: cot_type
dtype: string
- name: source_type
dtype: string
- name: metadata
dtype: string
- name: gemini_thinking_trajectory
dtype: string
- name: gemini_attempt
dtype: string
- name: deepseek_thinking_trajectory
dtype: string
- name: deepseek_attempt
dtype: string
- name: gemini_grade
dtype: string
- name: gemini_grade_reason
dtype: string
- name: deepseek_grade
dtype: string
- name: deepseek_grade_reason
dtype: string
- name: trees
dtype: string
splits:
- name: train
num_bytes: 203046053
num_examples: 1000
download_size: 49779349
dataset_size: 203046053
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DavidYeh0709/so100_david_0610_2 | DavidYeh0709 | 2025-06-10T02:01:22Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-06-10T02:01:07Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 5,
"total_frames": 2566,
"total_tasks": 1,
"total_videos": 5,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:5"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Heaplax/ARMAP-RM-Game24 | Heaplax | 2025-02-19T20:22:26Z | 17 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-02-19T20:22:11Z | 0 | ---
license: apache-2.0
---
|
villekuosmanen/agilex_place_cube_in_the_center | villekuosmanen | 2025-02-11T18:56:05Z | 24 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-02-11T18:55:48Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "arx5_bimanual",
"total_episodes": 20,
"total_frames": 10737,
"total_tasks": 1,
"total_videos": 60,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 25,
"splits": {
"train": "0:20"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
14
]
},
"observation.state": {
"dtype": "float32",
"shape": [
14
]
},
"observation.effort": {
"dtype": "float32",
"shape": [
14
]
},
"observation.images.cam_high": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 25.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_left_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 25.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_right_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 25.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
OALL/details_FreedomIntelligence__AceGPT-v2-70B-Chat | OALL | 2025-01-30T19:57:48Z | 8 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-30T19:57:37Z | 0 | ---
pretty_name: Evaluation run of FreedomIntelligence/AceGPT-v2-70B-Chat
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [FreedomIntelligence/AceGPT-v2-70B-Chat](https://huggingface.co/FreedomIntelligence/AceGPT-v2-70B-Chat).\n\
\nThe dataset is composed of 136 configuration, each one coresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\"OALL/details_FreedomIntelligence__AceGPT-v2-70B-Chat\"\
,\n\t\"lighteval_xstory_cloze_ar_0_2025_01_30T19_54_59_135980_parquet\",\n\tsplit=\"\
train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2025-01-30T19:54:59.135980](https://huggingface.co/datasets/OALL/details_FreedomIntelligence__AceGPT-v2-70B-Chat/blob/main/results_2025-01-30T19-54-59.135980.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc_norm\": 0.49008307379293065,\n\
\ \"acc_norm_stderr\": 0.03742700461947868,\n \"acc\": 0.7253474520185308,\n\
\ \"acc_stderr\": 0.01148620035471171\n },\n \"community|acva:Algeria|0\"\
: {\n \"acc_norm\": 0.5230769230769231,\n \"acc_norm_stderr\": 0.0358596530894741\n\
\ },\n \"community|acva:Ancient_Egypt|0\": {\n \"acc_norm\": 0.05396825396825397,\n\
\ \"acc_norm_stderr\": 0.012751380783465839\n },\n \"community|acva:Arab_Empire|0\"\
: {\n \"acc_norm\": 0.30943396226415093,\n \"acc_norm_stderr\": 0.028450154794118627\n\
\ },\n \"community|acva:Arabic_Architecture|0\": {\n \"acc_norm\":\
\ 0.4564102564102564,\n \"acc_norm_stderr\": 0.035761230969912135\n },\n\
\ \"community|acva:Arabic_Art|0\": {\n \"acc_norm\": 0.3641025641025641,\n\
\ \"acc_norm_stderr\": 0.03454653867786389\n },\n \"community|acva:Arabic_Astronomy|0\"\
: {\n \"acc_norm\": 0.4666666666666667,\n \"acc_norm_stderr\": 0.03581804596782233\n\
\ },\n \"community|acva:Arabic_Calligraphy|0\": {\n \"acc_norm\": 0.47843137254901963,\n\
\ \"acc_norm_stderr\": 0.0313435870640056\n },\n \"community|acva:Arabic_Ceremony|0\"\
: {\n \"acc_norm\": 0.518918918918919,\n \"acc_norm_stderr\": 0.036834092970087065\n\
\ },\n \"community|acva:Arabic_Clothing|0\": {\n \"acc_norm\": 0.5128205128205128,\n\
\ \"acc_norm_stderr\": 0.03588610523192215\n },\n \"community|acva:Arabic_Culture|0\"\
: {\n \"acc_norm\": 0.23076923076923078,\n \"acc_norm_stderr\": 0.0302493752938313\n\
\ },\n \"community|acva:Arabic_Food|0\": {\n \"acc_norm\": 0.441025641025641,\n\
\ \"acc_norm_stderr\": 0.0356473293185358\n },\n \"community|acva:Arabic_Funeral|0\"\
: {\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.050529115263991134\n\
\ },\n \"community|acva:Arabic_Geography|0\": {\n \"acc_norm\": 0.6068965517241379,\n\
\ \"acc_norm_stderr\": 0.040703290137070705\n },\n \"community|acva:Arabic_History|0\"\
: {\n \"acc_norm\": 0.30256410256410254,\n \"acc_norm_stderr\": 0.03298070870085619\n\
\ },\n \"community|acva:Arabic_Language_Origin|0\": {\n \"acc_norm\"\
: 0.5473684210526316,\n \"acc_norm_stderr\": 0.051339113773544845\n },\n\
\ \"community|acva:Arabic_Literature|0\": {\n \"acc_norm\": 0.4689655172413793,\n\
\ \"acc_norm_stderr\": 0.04158632762097828\n },\n \"community|acva:Arabic_Math|0\"\
: {\n \"acc_norm\": 0.30256410256410254,\n \"acc_norm_stderr\": 0.03298070870085618\n\
\ },\n \"community|acva:Arabic_Medicine|0\": {\n \"acc_norm\": 0.46206896551724136,\n\
\ \"acc_norm_stderr\": 0.041546596717075474\n },\n \"community|acva:Arabic_Music|0\"\
: {\n \"acc_norm\": 0.23741007194244604,\n \"acc_norm_stderr\": 0.036220593237998276\n\
\ },\n \"community|acva:Arabic_Ornament|0\": {\n \"acc_norm\": 0.4717948717948718,\n\
\ \"acc_norm_stderr\": 0.035840746749208334\n },\n \"community|acva:Arabic_Philosophy|0\"\
: {\n \"acc_norm\": 0.5793103448275863,\n \"acc_norm_stderr\": 0.0411391498118926\n\
\ },\n \"community|acva:Arabic_Physics_and_Chemistry|0\": {\n \"acc_norm\"\
: 0.5333333333333333,\n \"acc_norm_stderr\": 0.03581804596782232\n },\n\
\ \"community|acva:Arabic_Wedding|0\": {\n \"acc_norm\": 0.41025641025641024,\n\
\ \"acc_norm_stderr\": 0.03531493712326671\n },\n \"community|acva:Bahrain|0\"\
: {\n \"acc_norm\": 0.3111111111111111,\n \"acc_norm_stderr\": 0.06979205927323111\n\
\ },\n \"community|acva:Comoros|0\": {\n \"acc_norm\": 0.37777777777777777,\n\
\ \"acc_norm_stderr\": 0.07309112127323451\n },\n \"community|acva:Egypt_modern|0\"\
: {\n \"acc_norm\": 0.3157894736842105,\n \"acc_norm_stderr\": 0.04794350420740798\n\
\ },\n \"community|acva:InfluenceFromAncientEgypt|0\": {\n \"acc_norm\"\
: 0.6051282051282051,\n \"acc_norm_stderr\": 0.03509545602262038\n },\n\
\ \"community|acva:InfluenceFromByzantium|0\": {\n \"acc_norm\": 0.7172413793103448,\n\
\ \"acc_norm_stderr\": 0.03752833958003337\n },\n \"community|acva:InfluenceFromChina|0\"\
: {\n \"acc_norm\": 0.26666666666666666,\n \"acc_norm_stderr\": 0.0317493043641267\n\
\ },\n \"community|acva:InfluenceFromGreece|0\": {\n \"acc_norm\":\
\ 0.6307692307692307,\n \"acc_norm_stderr\": 0.034648411418637566\n },\n\
\ \"community|acva:InfluenceFromIslam|0\": {\n \"acc_norm\": 0.296551724137931,\n\
\ \"acc_norm_stderr\": 0.03806142687309993\n },\n \"community|acva:InfluenceFromPersia|0\"\
: {\n \"acc_norm\": 0.7028571428571428,\n \"acc_norm_stderr\": 0.03464507889884372\n\
\ },\n \"community|acva:InfluenceFromRome|0\": {\n \"acc_norm\": 0.5743589743589743,\n\
\ \"acc_norm_stderr\": 0.03549871080367708\n },\n \"community|acva:Iraq|0\"\
: {\n \"acc_norm\": 0.5058823529411764,\n \"acc_norm_stderr\": 0.05455069703232772\n\
\ },\n \"community|acva:Islam_Education|0\": {\n \"acc_norm\": 0.4512820512820513,\n\
\ \"acc_norm_stderr\": 0.03572709860318392\n },\n \"community|acva:Islam_branches_and_schools|0\"\
: {\n \"acc_norm\": 0.4342857142857143,\n \"acc_norm_stderr\": 0.037576101528126626\n\
\ },\n \"community|acva:Islamic_law_system|0\": {\n \"acc_norm\": 0.4256410256410256,\n\
\ \"acc_norm_stderr\": 0.035498710803677086\n },\n \"community|acva:Jordan|0\"\
: {\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.07106690545187012\n\
\ },\n \"community|acva:Kuwait|0\": {\n \"acc_norm\": 0.26666666666666666,\n\
\ \"acc_norm_stderr\": 0.06666666666666667\n },\n \"community|acva:Lebanon|0\"\
: {\n \"acc_norm\": 0.17777777777777778,\n \"acc_norm_stderr\": 0.05763774795025094\n\
\ },\n \"community|acva:Libya|0\": {\n \"acc_norm\": 0.4444444444444444,\n\
\ \"acc_norm_stderr\": 0.07491109582924914\n },\n \"community|acva:Mauritania|0\"\
: {\n \"acc_norm\": 0.4222222222222222,\n \"acc_norm_stderr\": 0.07446027270295805\n\
\ },\n \"community|acva:Mesopotamia_civilization|0\": {\n \"acc_norm\"\
: 0.5225806451612903,\n \"acc_norm_stderr\": 0.0402500394824441\n },\n\
\ \"community|acva:Morocco|0\": {\n \"acc_norm\": 0.2222222222222222,\n\
\ \"acc_norm_stderr\": 0.06267511942419628\n },\n \"community|acva:Oman|0\"\
: {\n \"acc_norm\": 0.17777777777777778,\n \"acc_norm_stderr\": 0.05763774795025094\n\
\ },\n \"community|acva:Palestine|0\": {\n \"acc_norm\": 0.24705882352941178,\n\
\ \"acc_norm_stderr\": 0.047058823529411785\n },\n \"community|acva:Qatar|0\"\
: {\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.07385489458759964\n\
\ },\n \"community|acva:Saudi_Arabia|0\": {\n \"acc_norm\": 0.3282051282051282,\n\
\ \"acc_norm_stderr\": 0.03371243782413707\n },\n \"community|acva:Somalia|0\"\
: {\n \"acc_norm\": 0.35555555555555557,\n \"acc_norm_stderr\": 0.07216392363431012\n\
\ },\n \"community|acva:Sudan|0\": {\n \"acc_norm\": 0.35555555555555557,\n\
\ \"acc_norm_stderr\": 0.07216392363431012\n },\n \"community|acva:Syria|0\"\
: {\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.07106690545187012\n\
\ },\n \"community|acva:Tunisia|0\": {\n \"acc_norm\": 0.3111111111111111,\n\
\ \"acc_norm_stderr\": 0.06979205927323111\n },\n \"community|acva:United_Arab_Emirates|0\"\
: {\n \"acc_norm\": 0.23529411764705882,\n \"acc_norm_stderr\": 0.04628210543937907\n\
\ },\n \"community|acva:Yemen|0\": {\n \"acc_norm\": 0.2,\n \
\ \"acc_norm_stderr\": 0.13333333333333333\n },\n \"community|acva:communication|0\"\
: {\n \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.025974025974025955\n\
\ },\n \"community|acva:computer_and_phone|0\": {\n \"acc_norm\": 0.45084745762711864,\n\
\ \"acc_norm_stderr\": 0.02901934773187137\n },\n \"community|acva:daily_life|0\"\
: {\n \"acc_norm\": 0.18694362017804153,\n \"acc_norm_stderr\": 0.021268948348414647\n\
\ },\n \"community|acva:entertainment|0\": {\n \"acc_norm\": 0.23389830508474577,\n\
\ \"acc_norm_stderr\": 0.024687839412166384\n },\n \"community|alghafa:mcq_exams_test_ar|0\"\
: {\n \"acc_norm\": 0.3608617594254937,\n \"acc_norm_stderr\": 0.020367158199199216\n\
\ },\n \"community|alghafa:meta_ar_dialects|0\": {\n \"acc_norm\":\
\ 0.4220574606116775,\n \"acc_norm_stderr\": 0.0067246959144321005\n },\n\
\ \"community|alghafa:meta_ar_msa|0\": {\n \"acc_norm\": 0.4759776536312849,\n\
\ \"acc_norm_stderr\": 0.01670319018930019\n },\n \"community|alghafa:multiple_choice_facts_truefalse_balanced_task|0\"\
: {\n \"acc_norm\": 0.8933333333333333,\n \"acc_norm_stderr\": 0.03588436550487813\n\
\ },\n \"community|alghafa:multiple_choice_grounded_statement_soqal_task|0\"\
: {\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.03932313218491396\n\
\ },\n \"community|alghafa:multiple_choice_grounded_statement_xglue_mlqa_task|0\"\
: {\n \"acc_norm\": 0.5466666666666666,\n \"acc_norm_stderr\": 0.04078279527880808\n\
\ },\n \"community|alghafa:multiple_choice_rating_sentiment_no_neutral_task|0\"\
: {\n \"acc_norm\": 0.8341463414634146,\n \"acc_norm_stderr\": 0.004160079026167048\n\
\ },\n \"community|alghafa:multiple_choice_rating_sentiment_task|0\": {\n\
\ \"acc_norm\": 0.6010008340283569,\n \"acc_norm_stderr\": 0.00632506746203751\n\
\ },\n \"community|alghafa:multiple_choice_sentiment_task|0\": {\n \
\ \"acc_norm\": 0.40813953488372096,\n \"acc_norm_stderr\": 0.011854303984148063\n\
\ },\n \"community|arabic_exams|0\": {\n \"acc_norm\": 0.5512104283054003,\n\
\ \"acc_norm_stderr\": 0.02148313691486752\n },\n \"community|arabic_mmlu:abstract_algebra|0\"\
: {\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n\
\ },\n \"community|arabic_mmlu:anatomy|0\": {\n \"acc_norm\": 0.4666666666666667,\n\
\ \"acc_norm_stderr\": 0.043097329010363554\n },\n \"community|arabic_mmlu:astronomy|0\"\
: {\n \"acc_norm\": 0.7236842105263158,\n \"acc_norm_stderr\": 0.03639057569952929\n\
\ },\n \"community|arabic_mmlu:business_ethics|0\": {\n \"acc_norm\"\
: 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"community|arabic_mmlu:clinical_knowledge|0\"\
: {\n \"acc_norm\": 0.6452830188679245,\n \"acc_norm_stderr\": 0.02944517532819959\n\
\ },\n \"community|arabic_mmlu:college_biology|0\": {\n \"acc_norm\"\
: 0.6527777777777778,\n \"acc_norm_stderr\": 0.03981240543717861\n },\n\
\ \"community|arabic_mmlu:college_chemistry|0\": {\n \"acc_norm\": 0.44,\n\
\ \"acc_norm_stderr\": 0.049888765156985884\n },\n \"community|arabic_mmlu:college_computer_science|0\"\
: {\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n\
\ },\n \"community|arabic_mmlu:college_mathematics|0\": {\n \"acc_norm\"\
: 0.39,\n \"acc_norm_stderr\": 0.04902071300001974\n },\n \"community|arabic_mmlu:college_medicine|0\"\
: {\n \"acc_norm\": 0.5317919075144508,\n \"acc_norm_stderr\": 0.03804749744364763\n\
\ },\n \"community|arabic_mmlu:college_physics|0\": {\n \"acc_norm\"\
: 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n },\n\
\ \"community|arabic_mmlu:computer_security|0\": {\n \"acc_norm\": 0.66,\n\
\ \"acc_norm_stderr\": 0.04760952285695238\n },\n \"community|arabic_mmlu:conceptual_physics|0\"\
: {\n \"acc_norm\": 0.6553191489361702,\n \"acc_norm_stderr\": 0.03106898596312215\n\
\ },\n \"community|arabic_mmlu:econometrics|0\": {\n \"acc_norm\":\
\ 0.45614035087719296,\n \"acc_norm_stderr\": 0.04685473041907789\n },\n\
\ \"community|arabic_mmlu:electrical_engineering|0\": {\n \"acc_norm\"\
: 0.4827586206896552,\n \"acc_norm_stderr\": 0.04164188720169377\n },\n\
\ \"community|arabic_mmlu:elementary_mathematics|0\": {\n \"acc_norm\"\
: 0.5052910052910053,\n \"acc_norm_stderr\": 0.02574986828855657\n },\n\
\ \"community|arabic_mmlu:formal_logic|0\": {\n \"acc_norm\": 0.5317460317460317,\n\
\ \"acc_norm_stderr\": 0.04463112720677172\n },\n \"community|arabic_mmlu:global_facts|0\"\
: {\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n\
\ },\n \"community|arabic_mmlu:high_school_biology|0\": {\n \"acc_norm\"\
: 0.6193548387096774,\n \"acc_norm_stderr\": 0.02762171783290703\n },\n\
\ \"community|arabic_mmlu:high_school_chemistry|0\": {\n \"acc_norm\"\
: 0.4975369458128079,\n \"acc_norm_stderr\": 0.035179450386910616\n },\n\
\ \"community|arabic_mmlu:high_school_computer_science|0\": {\n \"acc_norm\"\
: 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"community|arabic_mmlu:high_school_european_history|0\"\
: {\n \"acc_norm\": 0.23636363636363636,\n \"acc_norm_stderr\": 0.033175059300091805\n\
\ },\n \"community|arabic_mmlu:high_school_geography|0\": {\n \"acc_norm\"\
: 0.7575757575757576,\n \"acc_norm_stderr\": 0.030532892233932026\n },\n\
\ \"community|arabic_mmlu:high_school_government_and_politics|0\": {\n \
\ \"acc_norm\": 0.7668393782383419,\n \"acc_norm_stderr\": 0.03051611137147601\n\
\ },\n \"community|arabic_mmlu:high_school_macroeconomics|0\": {\n \
\ \"acc_norm\": 0.6538461538461539,\n \"acc_norm_stderr\": 0.024121125416941197\n\
\ },\n \"community|arabic_mmlu:high_school_mathematics|0\": {\n \"\
acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.028897748741131137\n\
\ },\n \"community|arabic_mmlu:high_school_microeconomics|0\": {\n \
\ \"acc_norm\": 0.6218487394957983,\n \"acc_norm_stderr\": 0.031499305777849054\n\
\ },\n \"community|arabic_mmlu:high_school_physics|0\": {\n \"acc_norm\"\
: 0.3973509933774834,\n \"acc_norm_stderr\": 0.0399552400768168\n },\n\
\ \"community|arabic_mmlu:high_school_psychology|0\": {\n \"acc_norm\"\
: 0.6935779816513762,\n \"acc_norm_stderr\": 0.019765517220458523\n },\n\
\ \"community|arabic_mmlu:high_school_statistics|0\": {\n \"acc_norm\"\
: 0.47685185185185186,\n \"acc_norm_stderr\": 0.034063153607115086\n },\n\
\ \"community|arabic_mmlu:high_school_us_history|0\": {\n \"acc_norm\"\
: 0.3480392156862745,\n \"acc_norm_stderr\": 0.03343311240488418\n },\n\
\ \"community|arabic_mmlu:high_school_world_history|0\": {\n \"acc_norm\"\
: 0.37130801687763715,\n \"acc_norm_stderr\": 0.03145068600744859\n },\n\
\ \"community|arabic_mmlu:human_aging|0\": {\n \"acc_norm\": 0.6771300448430493,\n\
\ \"acc_norm_stderr\": 0.03138147637575498\n },\n \"community|arabic_mmlu:human_sexuality|0\"\
: {\n \"acc_norm\": 0.6793893129770993,\n \"acc_norm_stderr\": 0.04093329229834278\n\
\ },\n \"community|arabic_mmlu:international_law|0\": {\n \"acc_norm\"\
: 0.7933884297520661,\n \"acc_norm_stderr\": 0.036959801280988226\n },\n\
\ \"community|arabic_mmlu:jurisprudence|0\": {\n \"acc_norm\": 0.6388888888888888,\n\
\ \"acc_norm_stderr\": 0.04643454608906275\n },\n \"community|arabic_mmlu:logical_fallacies|0\"\
: {\n \"acc_norm\": 0.5398773006134969,\n \"acc_norm_stderr\": 0.039158572914369714\n\
\ },\n \"community|arabic_mmlu:machine_learning|0\": {\n \"acc_norm\"\
: 0.5267857142857143,\n \"acc_norm_stderr\": 0.047389751192741546\n },\n\
\ \"community|arabic_mmlu:management|0\": {\n \"acc_norm\": 0.6407766990291263,\n\
\ \"acc_norm_stderr\": 0.04750458399041696\n },\n \"community|arabic_mmlu:marketing|0\"\
: {\n \"acc_norm\": 0.7991452991452992,\n \"acc_norm_stderr\": 0.02624677294689049\n\
\ },\n \"community|arabic_mmlu:medical_genetics|0\": {\n \"acc_norm\"\
: 0.65,\n \"acc_norm_stderr\": 0.04793724854411021\n },\n \"community|arabic_mmlu:miscellaneous|0\"\
: {\n \"acc_norm\": 0.7279693486590039,\n \"acc_norm_stderr\": 0.015913367447500517\n\
\ },\n \"community|arabic_mmlu:moral_disputes|0\": {\n \"acc_norm\"\
: 0.6502890173410405,\n \"acc_norm_stderr\": 0.02567428145653102\n },\n\
\ \"community|arabic_mmlu:moral_scenarios|0\": {\n \"acc_norm\": 0.39776536312849164,\n\
\ \"acc_norm_stderr\": 0.016369204971262995\n },\n \"community|arabic_mmlu:nutrition|0\"\
: {\n \"acc_norm\": 0.7189542483660131,\n \"acc_norm_stderr\": 0.02573885479781874\n\
\ },\n \"community|arabic_mmlu:philosophy|0\": {\n \"acc_norm\": 0.6237942122186495,\n\
\ \"acc_norm_stderr\": 0.027513925683549434\n },\n \"community|arabic_mmlu:prehistory|0\"\
: {\n \"acc_norm\": 0.6203703703703703,\n \"acc_norm_stderr\": 0.027002521034516464\n\
\ },\n \"community|arabic_mmlu:professional_accounting|0\": {\n \"\
acc_norm\": 0.4148936170212766,\n \"acc_norm_stderr\": 0.0293922365846125\n\
\ },\n \"community|arabic_mmlu:professional_law|0\": {\n \"acc_norm\"\
: 0.3813559322033898,\n \"acc_norm_stderr\": 0.012405509401888119\n },\n\
\ \"community|arabic_mmlu:professional_medicine|0\": {\n \"acc_norm\"\
: 0.3088235294117647,\n \"acc_norm_stderr\": 0.028064998167040094\n },\n\
\ \"community|arabic_mmlu:professional_psychology|0\": {\n \"acc_norm\"\
: 0.5882352941176471,\n \"acc_norm_stderr\": 0.01991037746310594\n },\n\
\ \"community|arabic_mmlu:public_relations|0\": {\n \"acc_norm\": 0.6181818181818182,\n\
\ \"acc_norm_stderr\": 0.046534298079135075\n },\n \"community|arabic_mmlu:security_studies|0\"\
: {\n \"acc_norm\": 0.5877551020408164,\n \"acc_norm_stderr\": 0.03151236044674268\n\
\ },\n \"community|arabic_mmlu:sociology|0\": {\n \"acc_norm\": 0.6915422885572139,\n\
\ \"acc_norm_stderr\": 0.03265819588512697\n },\n \"community|arabic_mmlu:us_foreign_policy|0\"\
: {\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036845\n\
\ },\n \"community|arabic_mmlu:virology|0\": {\n \"acc_norm\": 0.4819277108433735,\n\
\ \"acc_norm_stderr\": 0.038899512528272166\n },\n \"community|arabic_mmlu:world_religions|0\"\
: {\n \"acc_norm\": 0.7485380116959064,\n \"acc_norm_stderr\": 0.033275044238468436\n\
\ },\n \"community|arc_challenge_okapi_ar|0\": {\n \"acc_norm\": 0.48017241379310344,\n\
\ \"acc_norm_stderr\": 0.014675285076749806\n },\n \"community|arc_easy_ar|0\"\
: {\n \"acc_norm\": 0.4703891708967851,\n \"acc_norm_stderr\": 0.010267748561209244\n\
\ },\n \"community|boolq_ar|0\": {\n \"acc_norm\": 0.6236196319018404,\n\
\ \"acc_norm_stderr\": 0.008486550314509648\n },\n \"community|copa_ext_ar|0\"\
: {\n \"acc_norm\": 0.5777777777777777,\n \"acc_norm_stderr\": 0.05235473399540656\n\
\ },\n \"community|hellaswag_okapi_ar|0\": {\n \"acc_norm\": 0.3273361683567768,\n\
\ \"acc_norm_stderr\": 0.004900172489811871\n },\n \"community|openbook_qa_ext_ar|0\"\
: {\n \"acc_norm\": 0.4868686868686869,\n \"acc_norm_stderr\": 0.02248830414033472\n\
\ },\n \"community|piqa_ar|0\": {\n \"acc_norm\": 0.6950354609929078,\n\
\ \"acc_norm_stderr\": 0.01075636221184271\n },\n \"community|race_ar|0\"\
: {\n \"acc_norm\": 0.5005072022722662,\n \"acc_norm_stderr\": 0.007122532364122539\n\
\ },\n \"community|sciq_ar|0\": {\n \"acc_norm\": 0.6572864321608041,\n\
\ \"acc_norm_stderr\": 0.015053926480256236\n },\n \"community|toxigen_ar|0\"\
: {\n \"acc_norm\": 0.4320855614973262,\n \"acc_norm_stderr\": 0.01620887578524445\n\
\ },\n \"lighteval|xstory_cloze:ar|0\": {\n \"acc\": 0.7253474520185308,\n\
\ \"acc_stderr\": 0.01148620035471171\n },\n \"community|acva:_average|0\"\
: {\n \"acc_norm\": 0.3952913681266578,\n \"acc_norm_stderr\": 0.045797189866892664\n\
\ },\n \"community|alghafa:_average|0\": {\n \"acc_norm\": 0.575798176004883,\n\
\ \"acc_norm_stderr\": 0.020236087527098254\n },\n \"community|arabic_mmlu:_average|0\"\
: {\n \"acc_norm\": 0.5657867209093309,\n \"acc_norm_stderr\": 0.03562256482932646\n\
\ }\n}\n```"
repo_url: https://huggingface.co/FreedomIntelligence/AceGPT-v2-70B-Chat
configs:
- config_name: community_acva_Algeria_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Algeria|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Algeria|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Ancient_Egypt_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Ancient_Egypt|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Ancient_Egypt|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arab_Empire_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arab_Empire|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arab_Empire|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Architecture_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Architecture|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Architecture|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Art_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Art|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Art|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Astronomy_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Astronomy|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Astronomy|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Calligraphy_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Calligraphy|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Calligraphy|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Ceremony_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Ceremony|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Ceremony|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Clothing_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Clothing|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Clothing|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Culture_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Culture|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Culture|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Food_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Food|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Food|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Funeral_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Funeral|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Funeral|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Geography_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Geography|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Geography|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_History_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_History|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_History|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Language_Origin_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Language_Origin|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Language_Origin|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Literature_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Literature|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Literature|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Math_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Math|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Math|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Medicine_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Medicine|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Medicine|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Music_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Music|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Music|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Ornament_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Ornament|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Ornament|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Philosophy_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Philosophy|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Philosophy|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Physics_and_Chemistry_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Physics_and_Chemistry|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Physics_and_Chemistry|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Arabic_Wedding_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Arabic_Wedding|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Arabic_Wedding|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Bahrain_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Bahrain|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Bahrain|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Comoros_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Comoros|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Comoros|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Egypt_modern_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Egypt_modern|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Egypt_modern|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_InfluenceFromAncientEgypt_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:InfluenceFromAncientEgypt|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:InfluenceFromAncientEgypt|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_InfluenceFromByzantium_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:InfluenceFromByzantium|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:InfluenceFromByzantium|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_InfluenceFromChina_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:InfluenceFromChina|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:InfluenceFromChina|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_InfluenceFromGreece_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:InfluenceFromGreece|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:InfluenceFromGreece|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_InfluenceFromIslam_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:InfluenceFromIslam|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:InfluenceFromIslam|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_InfluenceFromPersia_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:InfluenceFromPersia|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:InfluenceFromPersia|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_InfluenceFromRome_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:InfluenceFromRome|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:InfluenceFromRome|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Iraq_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Iraq|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Iraq|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Islam_Education_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Islam_Education|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Islam_Education|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Islam_branches_and_schools_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Islam_branches_and_schools|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Islam_branches_and_schools|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Islamic_law_system_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Islamic_law_system|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Islamic_law_system|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Jordan_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Jordan|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Jordan|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Kuwait_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Kuwait|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Kuwait|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Lebanon_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Lebanon|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Lebanon|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Libya_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Libya|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Libya|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Mauritania_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Mauritania|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Mauritania|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Mesopotamia_civilization_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Mesopotamia_civilization|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Mesopotamia_civilization|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Morocco_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Morocco|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Morocco|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Oman_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Oman|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Oman|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Palestine_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Palestine|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Palestine|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Qatar_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Qatar|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Qatar|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Saudi_Arabia_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Saudi_Arabia|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Saudi_Arabia|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Somalia_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Somalia|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Somalia|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Sudan_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Sudan|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Sudan|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Syria_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Syria|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Syria|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Tunisia_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Tunisia|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Tunisia|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_United_Arab_Emirates_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:United_Arab_Emirates|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:United_Arab_Emirates|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_Yemen_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:Yemen|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:Yemen|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_communication_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:communication|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:communication|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_computer_and_phone_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:computer_and_phone|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:computer_and_phone|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_daily_life_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:daily_life|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:daily_life|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_acva_entertainment_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|acva:entertainment|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|acva:entertainment|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_alghafa_mcq_exams_test_ar_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|alghafa:mcq_exams_test_ar|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|alghafa:mcq_exams_test_ar|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_alghafa_meta_ar_dialects_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|alghafa:meta_ar_dialects|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|alghafa:meta_ar_dialects|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_alghafa_meta_ar_msa_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|alghafa:meta_ar_msa|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|alghafa:meta_ar_msa|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_alghafa_multiple_choice_facts_truefalse_balanced_task_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|alghafa:multiple_choice_facts_truefalse_balanced_task|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|alghafa:multiple_choice_facts_truefalse_balanced_task|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_alghafa_multiple_choice_grounded_statement_soqal_task_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|alghafa:multiple_choice_grounded_statement_soqal_task|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|alghafa:multiple_choice_grounded_statement_soqal_task|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_alghafa_multiple_choice_grounded_statement_xglue_mlqa_task_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|alghafa:multiple_choice_grounded_statement_xglue_mlqa_task|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|alghafa:multiple_choice_grounded_statement_xglue_mlqa_task|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_alghafa_multiple_choice_rating_sentiment_no_neutral_task_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|alghafa:multiple_choice_rating_sentiment_no_neutral_task|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|alghafa:multiple_choice_rating_sentiment_no_neutral_task|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_alghafa_multiple_choice_rating_sentiment_task_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|alghafa:multiple_choice_rating_sentiment_task|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|alghafa:multiple_choice_rating_sentiment_task|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_alghafa_multiple_choice_sentiment_task_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|alghafa:multiple_choice_sentiment_task|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|alghafa:multiple_choice_sentiment_task|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_exams_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_exams|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_exams|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_abstract_algebra_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:abstract_algebra|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:abstract_algebra|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_anatomy_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:anatomy|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:anatomy|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_astronomy_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:astronomy|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:astronomy|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_business_ethics_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:business_ethics|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:business_ethics|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_clinical_knowledge_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:clinical_knowledge|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:clinical_knowledge|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_college_biology_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:college_biology|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:college_biology|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_college_chemistry_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:college_chemistry|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:college_chemistry|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_college_computer_science_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:college_computer_science|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:college_computer_science|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_college_mathematics_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:college_mathematics|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:college_mathematics|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_college_medicine_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:college_medicine|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:college_medicine|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_college_physics_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:college_physics|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:college_physics|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_computer_security_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:computer_security|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:computer_security|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_conceptual_physics_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:conceptual_physics|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:conceptual_physics|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_econometrics_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:econometrics|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:econometrics|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_electrical_engineering_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:electrical_engineering|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:electrical_engineering|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_elementary_mathematics_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:elementary_mathematics|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:elementary_mathematics|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_formal_logic_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:formal_logic|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:formal_logic|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_global_facts_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:global_facts|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:global_facts|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_high_school_biology_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:high_school_biology|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:high_school_biology|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_high_school_chemistry_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:high_school_chemistry|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:high_school_chemistry|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_high_school_computer_science_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:high_school_computer_science|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:high_school_computer_science|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_high_school_european_history_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:high_school_european_history|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:high_school_european_history|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_high_school_geography_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:high_school_geography|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:high_school_geography|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_high_school_government_and_politics_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:high_school_government_and_politics|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:high_school_government_and_politics|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_high_school_macroeconomics_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:high_school_macroeconomics|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:high_school_macroeconomics|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_high_school_mathematics_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:high_school_mathematics|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:high_school_mathematics|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_high_school_microeconomics_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:high_school_microeconomics|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:high_school_microeconomics|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_high_school_physics_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:high_school_physics|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:high_school_physics|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_high_school_psychology_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:high_school_psychology|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:high_school_psychology|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_high_school_statistics_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:high_school_statistics|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:high_school_statistics|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_high_school_us_history_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:high_school_us_history|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:high_school_us_history|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_high_school_world_history_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:high_school_world_history|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:high_school_world_history|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_human_aging_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:human_aging|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:human_aging|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_human_sexuality_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:human_sexuality|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:human_sexuality|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_international_law_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:international_law|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:international_law|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_jurisprudence_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:jurisprudence|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:jurisprudence|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_logical_fallacies_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:logical_fallacies|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:logical_fallacies|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_machine_learning_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:machine_learning|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:machine_learning|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_management_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:management|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:management|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_marketing_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:marketing|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:marketing|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_medical_genetics_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:medical_genetics|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:medical_genetics|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_miscellaneous_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:miscellaneous|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:miscellaneous|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_moral_disputes_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:moral_disputes|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:moral_disputes|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_moral_scenarios_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:moral_scenarios|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:moral_scenarios|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_nutrition_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:nutrition|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:nutrition|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_philosophy_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:philosophy|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:philosophy|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_prehistory_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:prehistory|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:prehistory|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_professional_accounting_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:professional_accounting|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:professional_accounting|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_professional_law_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:professional_law|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:professional_law|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_professional_medicine_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:professional_medicine|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:professional_medicine|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_professional_psychology_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:professional_psychology|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:professional_psychology|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_public_relations_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:public_relations|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:public_relations|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_security_studies_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:security_studies|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:security_studies|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_sociology_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:sociology|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:sociology|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_us_foreign_policy_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:us_foreign_policy|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:us_foreign_policy|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_virology_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:virology|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:virology|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arabic_mmlu_world_religions_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arabic_mmlu:world_religions|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arabic_mmlu:world_religions|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arc_challenge_okapi_ar_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arc_challenge_okapi_ar|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arc_challenge_okapi_ar|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_arc_easy_ar_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|arc_easy_ar|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|arc_easy_ar|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_boolq_ar_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|boolq_ar|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|boolq_ar|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_copa_ext_ar_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|copa_ext_ar|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|copa_ext_ar|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_hellaswag_okapi_ar_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|hellaswag_okapi_ar|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|hellaswag_okapi_ar|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_openbook_qa_ext_ar_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|openbook_qa_ext_ar|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|openbook_qa_ext_ar|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_piqa_ar_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|piqa_ar|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|piqa_ar|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_race_ar_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|race_ar|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|race_ar|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_sciq_ar_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|sciq_ar|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|sciq_ar|0_2025-01-30T19-54-59.135980.parquet'
- config_name: community_toxigen_ar_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_community|toxigen_ar|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_community|toxigen_ar|0_2025-01-30T19-54-59.135980.parquet'
- config_name: lighteval_xstory_cloze_ar_0_2025_01_30T19_54_59_135980_parquet
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- '**/details_lighteval|xstory_cloze:ar|0_2025-01-30T19-54-59.135980.parquet'
- split: latest
path:
- '**/details_lighteval|xstory_cloze:ar|0_2025-01-30T19-54-59.135980.parquet'
- config_name: results
data_files:
- split: 2025_01_30T19_54_59.135980
path:
- results_2025-01-30T19-54-59.135980.parquet
- split: latest
path:
- results_2025-01-30T19-54-59.135980.parquet
---
# Dataset Card for Evaluation run of FreedomIntelligence/AceGPT-v2-70B-Chat
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [FreedomIntelligence/AceGPT-v2-70B-Chat](https://huggingface.co/FreedomIntelligence/AceGPT-v2-70B-Chat).
The dataset is composed of 136 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("OALL/details_FreedomIntelligence__AceGPT-v2-70B-Chat",
"lighteval_xstory_cloze_ar_0_2025_01_30T19_54_59_135980_parquet",
	split="latest")
```
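If you only need the aggregated metrics rather than per-sample details, a minimal sketch (assuming the repository follows the layout declared in the `configs` section above) is to load the `results` configuration directly:
```python
from datasets import load_dataset

# Aggregated metrics: one split per run timestamp, plus "latest",
# which always points to the most recent evaluation.
results = load_dataset(
    "OALL/details_FreedomIntelligence__AceGPT-v2-70B-Chat",
    "results",
    split="latest",
)
print(results[0])  # aggregated scores for the latest run
```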
## Latest results
These are the [latest results from run 2025-01-30T19:54:59.135980](https://huggingface.co/datasets/OALL/details_FreedomIntelligence__AceGPT-v2-70B-Chat/blob/main/results_2025-01-30T19-54-59.135980.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc_norm": 0.49008307379293065,
"acc_norm_stderr": 0.03742700461947868,
"acc": 0.7253474520185308,
"acc_stderr": 0.01148620035471171
},
"community|acva:Algeria|0": {
"acc_norm": 0.5230769230769231,
"acc_norm_stderr": 0.0358596530894741
},
"community|acva:Ancient_Egypt|0": {
"acc_norm": 0.05396825396825397,
"acc_norm_stderr": 0.012751380783465839
},
"community|acva:Arab_Empire|0": {
"acc_norm": 0.30943396226415093,
"acc_norm_stderr": 0.028450154794118627
},
"community|acva:Arabic_Architecture|0": {
"acc_norm": 0.4564102564102564,
"acc_norm_stderr": 0.035761230969912135
},
"community|acva:Arabic_Art|0": {
"acc_norm": 0.3641025641025641,
"acc_norm_stderr": 0.03454653867786389
},
"community|acva:Arabic_Astronomy|0": {
"acc_norm": 0.4666666666666667,
"acc_norm_stderr": 0.03581804596782233
},
"community|acva:Arabic_Calligraphy|0": {
"acc_norm": 0.47843137254901963,
"acc_norm_stderr": 0.0313435870640056
},
"community|acva:Arabic_Ceremony|0": {
"acc_norm": 0.518918918918919,
"acc_norm_stderr": 0.036834092970087065
},
"community|acva:Arabic_Clothing|0": {
"acc_norm": 0.5128205128205128,
"acc_norm_stderr": 0.03588610523192215
},
"community|acva:Arabic_Culture|0": {
"acc_norm": 0.23076923076923078,
"acc_norm_stderr": 0.0302493752938313
},
"community|acva:Arabic_Food|0": {
"acc_norm": 0.441025641025641,
"acc_norm_stderr": 0.0356473293185358
},
"community|acva:Arabic_Funeral|0": {
"acc_norm": 0.4,
"acc_norm_stderr": 0.050529115263991134
},
"community|acva:Arabic_Geography|0": {
"acc_norm": 0.6068965517241379,
"acc_norm_stderr": 0.040703290137070705
},
"community|acva:Arabic_History|0": {
"acc_norm": 0.30256410256410254,
"acc_norm_stderr": 0.03298070870085619
},
"community|acva:Arabic_Language_Origin|0": {
"acc_norm": 0.5473684210526316,
"acc_norm_stderr": 0.051339113773544845
},
"community|acva:Arabic_Literature|0": {
"acc_norm": 0.4689655172413793,
"acc_norm_stderr": 0.04158632762097828
},
"community|acva:Arabic_Math|0": {
"acc_norm": 0.30256410256410254,
"acc_norm_stderr": 0.03298070870085618
},
"community|acva:Arabic_Medicine|0": {
"acc_norm": 0.46206896551724136,
"acc_norm_stderr": 0.041546596717075474
},
"community|acva:Arabic_Music|0": {
"acc_norm": 0.23741007194244604,
"acc_norm_stderr": 0.036220593237998276
},
"community|acva:Arabic_Ornament|0": {
"acc_norm": 0.4717948717948718,
"acc_norm_stderr": 0.035840746749208334
},
"community|acva:Arabic_Philosophy|0": {
"acc_norm": 0.5793103448275863,
"acc_norm_stderr": 0.0411391498118926
},
"community|acva:Arabic_Physics_and_Chemistry|0": {
"acc_norm": 0.5333333333333333,
"acc_norm_stderr": 0.03581804596782232
},
"community|acva:Arabic_Wedding|0": {
"acc_norm": 0.41025641025641024,
"acc_norm_stderr": 0.03531493712326671
},
"community|acva:Bahrain|0": {
"acc_norm": 0.3111111111111111,
"acc_norm_stderr": 0.06979205927323111
},
"community|acva:Comoros|0": {
"acc_norm": 0.37777777777777777,
"acc_norm_stderr": 0.07309112127323451
},
"community|acva:Egypt_modern|0": {
"acc_norm": 0.3157894736842105,
"acc_norm_stderr": 0.04794350420740798
},
"community|acva:InfluenceFromAncientEgypt|0": {
"acc_norm": 0.6051282051282051,
"acc_norm_stderr": 0.03509545602262038
},
"community|acva:InfluenceFromByzantium|0": {
"acc_norm": 0.7172413793103448,
"acc_norm_stderr": 0.03752833958003337
},
"community|acva:InfluenceFromChina|0": {
"acc_norm": 0.26666666666666666,
"acc_norm_stderr": 0.0317493043641267
},
"community|acva:InfluenceFromGreece|0": {
"acc_norm": 0.6307692307692307,
"acc_norm_stderr": 0.034648411418637566
},
"community|acva:InfluenceFromIslam|0": {
"acc_norm": 0.296551724137931,
"acc_norm_stderr": 0.03806142687309993
},
"community|acva:InfluenceFromPersia|0": {
"acc_norm": 0.7028571428571428,
"acc_norm_stderr": 0.03464507889884372
},
"community|acva:InfluenceFromRome|0": {
"acc_norm": 0.5743589743589743,
"acc_norm_stderr": 0.03549871080367708
},
"community|acva:Iraq|0": {
"acc_norm": 0.5058823529411764,
"acc_norm_stderr": 0.05455069703232772
},
"community|acva:Islam_Education|0": {
"acc_norm": 0.4512820512820513,
"acc_norm_stderr": 0.03572709860318392
},
"community|acva:Islam_branches_and_schools|0": {
"acc_norm": 0.4342857142857143,
"acc_norm_stderr": 0.037576101528126626
},
"community|acva:Islamic_law_system|0": {
"acc_norm": 0.4256410256410256,
"acc_norm_stderr": 0.035498710803677086
},
"community|acva:Jordan|0": {
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.07106690545187012
},
"community|acva:Kuwait|0": {
"acc_norm": 0.26666666666666666,
"acc_norm_stderr": 0.06666666666666667
},
"community|acva:Lebanon|0": {
"acc_norm": 0.17777777777777778,
"acc_norm_stderr": 0.05763774795025094
},
"community|acva:Libya|0": {
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.07491109582924914
},
"community|acva:Mauritania|0": {
"acc_norm": 0.4222222222222222,
"acc_norm_stderr": 0.07446027270295805
},
"community|acva:Mesopotamia_civilization|0": {
"acc_norm": 0.5225806451612903,
"acc_norm_stderr": 0.0402500394824441
},
"community|acva:Morocco|0": {
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.06267511942419628
},
"community|acva:Oman|0": {
"acc_norm": 0.17777777777777778,
"acc_norm_stderr": 0.05763774795025094
},
"community|acva:Palestine|0": {
"acc_norm": 0.24705882352941178,
"acc_norm_stderr": 0.047058823529411785
},
"community|acva:Qatar|0": {
"acc_norm": 0.4,
"acc_norm_stderr": 0.07385489458759964
},
"community|acva:Saudi_Arabia|0": {
"acc_norm": 0.3282051282051282,
"acc_norm_stderr": 0.03371243782413707
},
"community|acva:Somalia|0": {
"acc_norm": 0.35555555555555557,
"acc_norm_stderr": 0.07216392363431012
},
"community|acva:Sudan|0": {
"acc_norm": 0.35555555555555557,
"acc_norm_stderr": 0.07216392363431012
},
"community|acva:Syria|0": {
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.07106690545187012
},
"community|acva:Tunisia|0": {
"acc_norm": 0.3111111111111111,
"acc_norm_stderr": 0.06979205927323111
},
"community|acva:United_Arab_Emirates|0": {
"acc_norm": 0.23529411764705882,
"acc_norm_stderr": 0.04628210543937907
},
"community|acva:Yemen|0": {
"acc_norm": 0.2,
"acc_norm_stderr": 0.13333333333333333
},
"community|acva:communication|0": {
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.025974025974025955
},
"community|acva:computer_and_phone|0": {
"acc_norm": 0.45084745762711864,
"acc_norm_stderr": 0.02901934773187137
},
"community|acva:daily_life|0": {
"acc_norm": 0.18694362017804153,
"acc_norm_stderr": 0.021268948348414647
},
"community|acva:entertainment|0": {
"acc_norm": 0.23389830508474577,
"acc_norm_stderr": 0.024687839412166384
},
"community|alghafa:mcq_exams_test_ar|0": {
"acc_norm": 0.3608617594254937,
"acc_norm_stderr": 0.020367158199199216
},
"community|alghafa:meta_ar_dialects|0": {
"acc_norm": 0.4220574606116775,
"acc_norm_stderr": 0.0067246959144321005
},
"community|alghafa:meta_ar_msa|0": {
"acc_norm": 0.4759776536312849,
"acc_norm_stderr": 0.01670319018930019
},
"community|alghafa:multiple_choice_facts_truefalse_balanced_task|0": {
"acc_norm": 0.8933333333333333,
"acc_norm_stderr": 0.03588436550487813
},
"community|alghafa:multiple_choice_grounded_statement_soqal_task|0": {
"acc_norm": 0.64,
"acc_norm_stderr": 0.03932313218491396
},
"community|alghafa:multiple_choice_grounded_statement_xglue_mlqa_task|0": {
"acc_norm": 0.5466666666666666,
"acc_norm_stderr": 0.04078279527880808
},
"community|alghafa:multiple_choice_rating_sentiment_no_neutral_task|0": {
"acc_norm": 0.8341463414634146,
"acc_norm_stderr": 0.004160079026167048
},
"community|alghafa:multiple_choice_rating_sentiment_task|0": {
"acc_norm": 0.6010008340283569,
"acc_norm_stderr": 0.00632506746203751
},
"community|alghafa:multiple_choice_sentiment_task|0": {
"acc_norm": 0.40813953488372096,
"acc_norm_stderr": 0.011854303984148063
},
"community|arabic_exams|0": {
"acc_norm": 0.5512104283054003,
"acc_norm_stderr": 0.02148313691486752
},
"community|arabic_mmlu:abstract_algebra|0": {
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"community|arabic_mmlu:anatomy|0": {
"acc_norm": 0.4666666666666667,
"acc_norm_stderr": 0.043097329010363554
},
"community|arabic_mmlu:astronomy|0": {
"acc_norm": 0.7236842105263158,
"acc_norm_stderr": 0.03639057569952929
},
"community|arabic_mmlu:business_ethics|0": {
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"community|arabic_mmlu:clinical_knowledge|0": {
"acc_norm": 0.6452830188679245,
"acc_norm_stderr": 0.02944517532819959
},
"community|arabic_mmlu:college_biology|0": {
"acc_norm": 0.6527777777777778,
"acc_norm_stderr": 0.03981240543717861
},
"community|arabic_mmlu:college_chemistry|0": {
"acc_norm": 0.44,
"acc_norm_stderr": 0.049888765156985884
},
"community|arabic_mmlu:college_computer_science|0": {
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"community|arabic_mmlu:college_mathematics|0": {
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001974
},
"community|arabic_mmlu:college_medicine|0": {
"acc_norm": 0.5317919075144508,
"acc_norm_stderr": 0.03804749744364763
},
"community|arabic_mmlu:college_physics|0": {
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"community|arabic_mmlu:computer_security|0": {
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695238
},
"community|arabic_mmlu:conceptual_physics|0": {
"acc_norm": 0.6553191489361702,
"acc_norm_stderr": 0.03106898596312215
},
"community|arabic_mmlu:econometrics|0": {
"acc_norm": 0.45614035087719296,
"acc_norm_stderr": 0.04685473041907789
},
"community|arabic_mmlu:electrical_engineering|0": {
"acc_norm": 0.4827586206896552,
"acc_norm_stderr": 0.04164188720169377
},
"community|arabic_mmlu:elementary_mathematics|0": {
"acc_norm": 0.5052910052910053,
"acc_norm_stderr": 0.02574986828855657
},
"community|arabic_mmlu:formal_logic|0": {
"acc_norm": 0.5317460317460317,
"acc_norm_stderr": 0.04463112720677172
},
"community|arabic_mmlu:global_facts|0": {
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"community|arabic_mmlu:high_school_biology|0": {
"acc_norm": 0.6193548387096774,
"acc_norm_stderr": 0.02762171783290703
},
"community|arabic_mmlu:high_school_chemistry|0": {
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.035179450386910616
},
"community|arabic_mmlu:high_school_computer_science|0": {
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"community|arabic_mmlu:high_school_european_history|0": {
"acc_norm": 0.23636363636363636,
"acc_norm_stderr": 0.033175059300091805
},
"community|arabic_mmlu:high_school_geography|0": {
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.030532892233932026
},
"community|arabic_mmlu:high_school_government_and_politics|0": {
"acc_norm": 0.7668393782383419,
"acc_norm_stderr": 0.03051611137147601
},
"community|arabic_mmlu:high_school_macroeconomics|0": {
"acc_norm": 0.6538461538461539,
"acc_norm_stderr": 0.024121125416941197
},
"community|arabic_mmlu:high_school_mathematics|0": {
"acc_norm": 0.34074074074074073,
"acc_norm_stderr": 0.028897748741131137
},
"community|arabic_mmlu:high_school_microeconomics|0": {
"acc_norm": 0.6218487394957983,
"acc_norm_stderr": 0.031499305777849054
},
"community|arabic_mmlu:high_school_physics|0": {
"acc_norm": 0.3973509933774834,
"acc_norm_stderr": 0.0399552400768168
},
"community|arabic_mmlu:high_school_psychology|0": {
"acc_norm": 0.6935779816513762,
"acc_norm_stderr": 0.019765517220458523
},
"community|arabic_mmlu:high_school_statistics|0": {
"acc_norm": 0.47685185185185186,
"acc_norm_stderr": 0.034063153607115086
},
"community|arabic_mmlu:high_school_us_history|0": {
"acc_norm": 0.3480392156862745,
"acc_norm_stderr": 0.03343311240488418
},
"community|arabic_mmlu:high_school_world_history|0": {
"acc_norm": 0.37130801687763715,
"acc_norm_stderr": 0.03145068600744859
},
"community|arabic_mmlu:human_aging|0": {
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.03138147637575498
},
"community|arabic_mmlu:human_sexuality|0": {
"acc_norm": 0.6793893129770993,
"acc_norm_stderr": 0.04093329229834278
},
"community|arabic_mmlu:international_law|0": {
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.036959801280988226
},
"community|arabic_mmlu:jurisprudence|0": {
"acc_norm": 0.6388888888888888,
"acc_norm_stderr": 0.04643454608906275
},
"community|arabic_mmlu:logical_fallacies|0": {
"acc_norm": 0.5398773006134969,
"acc_norm_stderr": 0.039158572914369714
},
"community|arabic_mmlu:machine_learning|0": {
"acc_norm": 0.5267857142857143,
"acc_norm_stderr": 0.047389751192741546
},
"community|arabic_mmlu:management|0": {
"acc_norm": 0.6407766990291263,
"acc_norm_stderr": 0.04750458399041696
},
"community|arabic_mmlu:marketing|0": {
"acc_norm": 0.7991452991452992,
"acc_norm_stderr": 0.02624677294689049
},
"community|arabic_mmlu:medical_genetics|0": {
"acc_norm": 0.65,
"acc_norm_stderr": 0.04793724854411021
},
"community|arabic_mmlu:miscellaneous|0": {
"acc_norm": 0.7279693486590039,
"acc_norm_stderr": 0.015913367447500517
},
"community|arabic_mmlu:moral_disputes|0": {
"acc_norm": 0.6502890173410405,
"acc_norm_stderr": 0.02567428145653102
},
"community|arabic_mmlu:moral_scenarios|0": {
"acc_norm": 0.39776536312849164,
"acc_norm_stderr": 0.016369204971262995
},
"community|arabic_mmlu:nutrition|0": {
"acc_norm": 0.7189542483660131,
"acc_norm_stderr": 0.02573885479781874
},
"community|arabic_mmlu:philosophy|0": {
"acc_norm": 0.6237942122186495,
"acc_norm_stderr": 0.027513925683549434
},
"community|arabic_mmlu:prehistory|0": {
"acc_norm": 0.6203703703703703,
"acc_norm_stderr": 0.027002521034516464
},
"community|arabic_mmlu:professional_accounting|0": {
"acc_norm": 0.4148936170212766,
"acc_norm_stderr": 0.0293922365846125
},
"community|arabic_mmlu:professional_law|0": {
"acc_norm": 0.3813559322033898,
"acc_norm_stderr": 0.012405509401888119
},
"community|arabic_mmlu:professional_medicine|0": {
"acc_norm": 0.3088235294117647,
"acc_norm_stderr": 0.028064998167040094
},
"community|arabic_mmlu:professional_psychology|0": {
"acc_norm": 0.5882352941176471,
"acc_norm_stderr": 0.01991037746310594
},
"community|arabic_mmlu:public_relations|0": {
"acc_norm": 0.6181818181818182,
"acc_norm_stderr": 0.046534298079135075
},
"community|arabic_mmlu:security_studies|0": {
"acc_norm": 0.5877551020408164,
"acc_norm_stderr": 0.03151236044674268
},
"community|arabic_mmlu:sociology|0": {
"acc_norm": 0.6915422885572139,
"acc_norm_stderr": 0.03265819588512697
},
"community|arabic_mmlu:us_foreign_policy|0": {
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036845
},
"community|arabic_mmlu:virology|0": {
"acc_norm": 0.4819277108433735,
"acc_norm_stderr": 0.038899512528272166
},
"community|arabic_mmlu:world_religions|0": {
"acc_norm": 0.7485380116959064,
"acc_norm_stderr": 0.033275044238468436
},
"community|arc_challenge_okapi_ar|0": {
"acc_norm": 0.48017241379310344,
"acc_norm_stderr": 0.014675285076749806
},
"community|arc_easy_ar|0": {
"acc_norm": 0.4703891708967851,
"acc_norm_stderr": 0.010267748561209244
},
"community|boolq_ar|0": {
"acc_norm": 0.6236196319018404,
"acc_norm_stderr": 0.008486550314509648
},
"community|copa_ext_ar|0": {
"acc_norm": 0.5777777777777777,
"acc_norm_stderr": 0.05235473399540656
},
"community|hellaswag_okapi_ar|0": {
"acc_norm": 0.3273361683567768,
"acc_norm_stderr": 0.004900172489811871
},
"community|openbook_qa_ext_ar|0": {
"acc_norm": 0.4868686868686869,
"acc_norm_stderr": 0.02248830414033472
},
"community|piqa_ar|0": {
"acc_norm": 0.6950354609929078,
"acc_norm_stderr": 0.01075636221184271
},
"community|race_ar|0": {
"acc_norm": 0.5005072022722662,
"acc_norm_stderr": 0.007122532364122539
},
"community|sciq_ar|0": {
"acc_norm": 0.6572864321608041,
"acc_norm_stderr": 0.015053926480256236
},
"community|toxigen_ar|0": {
"acc_norm": 0.4320855614973262,
"acc_norm_stderr": 0.01620887578524445
},
"lighteval|xstory_cloze:ar|0": {
"acc": 0.7253474520185308,
"acc_stderr": 0.01148620035471171
},
"community|acva:_average|0": {
"acc_norm": 0.3952913681266578,
"acc_norm_stderr": 0.045797189866892664
},
"community|alghafa:_average|0": {
"acc_norm": 0.575798176004883,
"acc_norm_stderr": 0.020236087527098254
},
"community|arabic_mmlu:_average|0": {
"acc_norm": 0.5657867209093309,
"acc_norm_stderr": 0.03562256482932646
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
wlsdzyzl/MedPointS-cls | wlsdzyzl | 2025-04-28T07:23:25Z | 53 | 0 | [
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2504.13015",
"region:us",
"biology",
"point cloud",
"classification",
"medical"
] | [] | 2025-04-24T09:07:43Z | 0 | ---
license: mit
language:
- en
tags:
- biology
- point cloud
- classification
- medical
---
### MedPointS-CLS
This is the medical point cloud classification dataset from [MedPointS](https://flemme-docs.readthedocs.io/en/latest/medpoints.html), where `data` is the input point cloud and `label` is the class label.
Each point cloud has been normalized and sub-sampled to 2048 points. The correspondence between class names and labels is listed below (the stored label value plus 1 is the key of the following map):
```
coarse_label_to_organ = {1: 'adrenalgland',
2: 'aorta',
3: 'autochthon',
4: 'bladder',
5: 'brain',
6: 'breast',
7: 'bronchie',
8: 'celiactrunk',
9: 'cheek',
10: 'clavicle',
11: 'colon',
12: 'costa',
13: 'duodenum',
14: 'esophagus',
15: 'eyeball',
16: 'femur',
17: 'gallbladder',
18: 'gluteusmaximus',
19: 'heart',
20: 'hip',
21: 'humerus',
22: 'iliacartery',
23: 'iliacvena',
24: 'iliopsoas',
25: 'inferiorvenacava',
26: 'kidney',
27: 'liver',
28: 'lung',
29: 'mediastinaltissue',
30: 'pancreas',
31: 'portalveinandsplenicvein',
32: 'smallbowel',
33: 'spleen',
34: 'stomach',
35: 'thymus',
36: 'thyroid',
37: 'trachea',
38: 'uterocervix',
39: 'uterus',
40: 'vertebrae',
41: 'gonads',
42: 'sacrum',
43: 'clavicula',
# 44: 'prostate',
44: 'pulmonaryartery',
# 45: 'ribcartilage',
45: 'rib',
46: 'scapula',
# 48: 'skull',
# 49: 'spinalcanal',
# 50: 'sternum'
}
```
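A minimal loading sketch (not part of the original card): it assumes the `coarse_label_to_organ` map above has been defined in the same session, that each point carries x/y/z coordinates, and that `label` is stored as a one-element sequence, as the dataset schema suggests.
```python
from datasets import load_dataset

# Load the classification split; each example is a sub-sampled point cloud plus a label
ds = load_dataset("wlsdzyzl/MedPointS-cls", split="train")

example = ds[0]
points = example["data"]          # 2048 points, each a short list of coordinates (assumed x, y, z)
label = int(example["label"][0])  # stored label value; add 1 to index the map above

print(len(points), "points ->", coarse_label_to_organ[label + 1])
```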
If you find our project helpful, please consider citing the following works:
```
@misc{zhang2025hierarchicalfeaturelearningmedical,
title={Hierarchical Feature Learning for Medical Point Clouds via State Space Model},
author={Guoqing Zhang and Jingyun Yang and Yang Li},
year={2025},
eprint={2504.13015},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2504.13015},
}
```
---
dataset_info:
features:
- name: data
sequence:
sequence: float32
- name: label
sequence: float32
splits:
- name: train
num_bytes: 947171520
num_examples: 28737
download_size: 718817756
dataset_size: 947171520
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
--- |
eugrug-60/medical-o1-reasoning-SFT-it_f10_incremental | eugrug-60 | 2025-03-11T09:11:43Z | 13 | 0 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:it",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2412.18925",
"region:us",
"medical",
"biology"
] | [
"question-answering",
"text-generation"
] | 2025-03-08T21:24:07Z | 0 | ---
dataset_info:
features:
- name: Question_IT
dtype: string
- name: Complex_CoT_IT
dtype: string
- name: Response_IT
dtype: string
splits:
- name: train
num_bytes: 4313189
num_examples: 1646
download_size: 2596297
dataset_size: 4313189
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- it
tags:
- medical
- biology
pretty_name: medical-o1-reasoning-SFT-it_f10_incremental
---
## News
[2025/03/08] We open-sourced the medical reasoning dataset for SFT, translated into Italian.
## Dataset Description
This dataset will be used to fine-tune a distilled model to produce an Italian medical LLM designed for advanced medical reasoning.
We used the "facebook/nllb-200-distilled-600M" model to translate the "FreedomIntelligence/medical-o1-reasoning-SFT" dataset from English to Italian.
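As a rough illustration of this translation step (not the authors' actual script), the same model can be driven through the transformers translation pipeline; the example sentence and the 512-token length cap below are assumptions.
```python
from transformers import pipeline

# NLLB models use FLORES-200 language codes: English -> Italian is eng_Latn -> ita_Latn
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="ita_Latn",
)

question_en = "What are the common symptoms of iron-deficiency anemia?"  # illustrative input
question_it = translator(question_en, max_length=512)[0]["translation_text"]
print(question_it)
```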
# Citation
We would like to thank the authors of the original dataset, and cite their work below.
@misc{chen2024huatuogpto1medicalcomplexreasoning,
title={HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs},
author={Junying Chen and Zhenyang Cai and Ke Ji and Xidong Wang and Wanlong Liu and Rongsheng Wang and Jianye Hou and Benyou Wang},
year={2024},
eprint={2412.18925},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.18925},
}
# Contacts
For comments, discussions, or questions about this dataset, please write to me at:
[email protected]
|
Cohere/wikipedia-22-12-simple-embeddings | Cohere | 2023-03-22T16:56:34Z | 814 | 56 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:multilingual",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-retrieval"
] | 2023-01-13T23:25:25Z | 1 | ---
language:
- en
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (simple English) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (simple English)](https://simple.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-simple-embeddings", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-simple-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-simple-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) |
akbarsigit/mental-health-grpo-log-20250523 | akbarsigit | 2025-05-23T18:30:38Z | 0 | 0 | [
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-23T18:30:36Z | 0 | ---
dataset_info:
features:
- name: step
dtype: 'null'
- name: question
dtype: 'null'
- name: response
dtype: 'null'
- name: extracted_response
dtype: 'null'
- name: true_answer
dtype: 'null'
- name: bert_score
dtype: 'null'
- name: rouge_score
dtype: 'null'
- name: total_reward
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 2015
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jkazdan/gemma-2-9b-It-5000-refusal-Hex-PHI | jkazdan | 2025-01-02T01:32:27Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-02T01:32:26Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 657994
num_examples: 300
download_size: 300686
dataset_size: 657994
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
okezieowen/nv_ig_0_1_wspr_v2 | okezieowen | 2025-05-21T14:13:35Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-21T13:57:28Z | 0 | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float64
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 93936330712
num_examples: 48911
download_size: 10650858576
dataset_size: 93936330712
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
GitBag/math_qwen2.5_7B_8192_n_128_eval_len | GitBag | 2025-06-16T05:46:21Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-16T05:46:00Z | 0 | ---
dataset_info:
features:
- name: level
dtype: string
- name: type
dtype: string
- name: data_source
dtype: string
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ability
dtype: string
- name: reward_model
struct:
- name: ground_truth
dtype: string
- name: style
dtype: string
- name: extra_info
struct:
- name: index
dtype: int64
- name: split
dtype: string
- name: response_0
dtype: string
- name: response_1
dtype: string
- name: response_2
dtype: string
- name: response_3
dtype: string
- name: response_4
dtype: string
- name: response_5
dtype: string
- name: response_6
dtype: string
- name: response_7
dtype: string
- name: response_8
dtype: string
- name: response_9
dtype: string
- name: response_10
dtype: string
- name: response_11
dtype: string
- name: response_12
dtype: string
- name: response_13
dtype: string
- name: response_14
dtype: string
- name: response_15
dtype: string
- name: response_16
dtype: string
- name: response_17
dtype: string
- name: response_18
dtype: string
- name: response_19
dtype: string
- name: response_20
dtype: string
- name: response_21
dtype: string
- name: response_22
dtype: string
- name: response_23
dtype: string
- name: response_24
dtype: string
- name: response_25
dtype: string
- name: response_26
dtype: string
- name: response_27
dtype: string
- name: response_28
dtype: string
- name: response_29
dtype: string
- name: response_30
dtype: string
- name: response_31
dtype: string
- name: response_32
dtype: string
- name: response_33
dtype: string
- name: response_34
dtype: string
- name: response_35
dtype: string
- name: response_36
dtype: string
- name: response_37
dtype: string
- name: response_38
dtype: string
- name: response_39
dtype: string
- name: response_40
dtype: string
- name: response_41
dtype: string
- name: response_42
dtype: string
- name: response_43
dtype: string
- name: response_44
dtype: string
- name: response_45
dtype: string
- name: response_46
dtype: string
- name: response_47
dtype: string
- name: response_48
dtype: string
- name: response_49
dtype: string
- name: response_50
dtype: string
- name: response_51
dtype: string
- name: response_52
dtype: string
- name: response_53
dtype: string
- name: response_54
dtype: string
- name: response_55
dtype: string
- name: response_56
dtype: string
- name: response_57
dtype: string
- name: response_58
dtype: string
- name: response_59
dtype: string
- name: response_60
dtype: string
- name: response_61
dtype: string
- name: response_62
dtype: string
- name: response_63
dtype: string
- name: response_64
dtype: string
- name: response_65
dtype: string
- name: response_66
dtype: string
- name: response_67
dtype: string
- name: response_68
dtype: string
- name: response_69
dtype: string
- name: response_70
dtype: string
- name: response_71
dtype: string
- name: response_72
dtype: string
- name: response_73
dtype: string
- name: response_74
dtype: string
- name: response_75
dtype: string
- name: response_76
dtype: string
- name: response_77
dtype: string
- name: response_78
dtype: string
- name: response_79
dtype: string
- name: response_80
dtype: string
- name: response_81
dtype: string
- name: response_82
dtype: string
- name: response_83
dtype: string
- name: response_84
dtype: string
- name: response_85
dtype: string
- name: response_86
dtype: string
- name: response_87
dtype: string
- name: response_88
dtype: string
- name: response_89
dtype: string
- name: response_90
dtype: string
- name: response_91
dtype: string
- name: response_92
dtype: string
- name: response_93
dtype: string
- name: response_94
dtype: string
- name: response_95
dtype: string
- name: response_96
dtype: string
- name: response_97
dtype: string
- name: response_98
dtype: string
- name: response_99
dtype: string
- name: response_100
dtype: string
- name: response_101
dtype: string
- name: response_102
dtype: string
- name: response_103
dtype: string
- name: response_104
dtype: string
- name: response_105
dtype: string
- name: response_106
dtype: string
- name: response_107
dtype: string
- name: response_108
dtype: string
- name: response_109
dtype: string
- name: response_110
dtype: string
- name: response_111
dtype: string
- name: response_112
dtype: string
- name: response_113
dtype: string
- name: response_114
dtype: string
- name: response_115
dtype: string
- name: response_116
dtype: string
- name: response_117
dtype: string
- name: response_118
dtype: string
- name: response_119
dtype: string
- name: response_120
dtype: string
- name: response_121
dtype: string
- name: response_122
dtype: string
- name: response_123
dtype: string
- name: response_124
dtype: string
- name: response_125
dtype: string
- name: response_126
dtype: string
- name: response_127
dtype: string
- name: eval_0
dtype: float64
- name: eval_1
dtype: float64
- name: eval_2
dtype: float64
- name: eval_3
dtype: float64
- name: eval_4
dtype: float64
- name: eval_5
dtype: float64
- name: eval_6
dtype: float64
- name: eval_7
dtype: float64
- name: eval_8
dtype: float64
- name: eval_9
dtype: float64
- name: eval_10
dtype: float64
- name: eval_11
dtype: float64
- name: eval_12
dtype: float64
- name: eval_13
dtype: float64
- name: eval_14
dtype: float64
- name: eval_15
dtype: float64
- name: eval_16
dtype: float64
- name: eval_17
dtype: float64
- name: eval_18
dtype: float64
- name: eval_19
dtype: float64
- name: eval_20
dtype: float64
- name: eval_21
dtype: float64
- name: eval_22
dtype: float64
- name: eval_23
dtype: float64
- name: eval_24
dtype: float64
- name: eval_25
dtype: float64
- name: eval_26
dtype: float64
- name: eval_27
dtype: float64
- name: eval_28
dtype: float64
- name: eval_29
dtype: float64
- name: eval_30
dtype: float64
- name: eval_31
dtype: float64
- name: eval_32
dtype: float64
- name: eval_33
dtype: float64
- name: eval_34
dtype: float64
- name: eval_35
dtype: float64
- name: eval_36
dtype: float64
- name: eval_37
dtype: float64
- name: eval_38
dtype: float64
- name: eval_39
dtype: float64
- name: eval_40
dtype: float64
- name: eval_41
dtype: float64
- name: eval_42
dtype: float64
- name: eval_43
dtype: float64
- name: eval_44
dtype: float64
- name: eval_45
dtype: float64
- name: eval_46
dtype: float64
- name: eval_47
dtype: float64
- name: eval_48
dtype: float64
- name: eval_49
dtype: float64
- name: eval_50
dtype: float64
- name: eval_51
dtype: float64
- name: eval_52
dtype: float64
- name: eval_53
dtype: float64
- name: eval_54
dtype: float64
- name: eval_55
dtype: float64
- name: eval_56
dtype: float64
- name: eval_57
dtype: float64
- name: eval_58
dtype: float64
- name: eval_59
dtype: float64
- name: eval_60
dtype: float64
- name: eval_61
dtype: float64
- name: eval_62
dtype: float64
- name: eval_63
dtype: float64
- name: eval_64
dtype: float64
- name: eval_65
dtype: float64
- name: eval_66
dtype: float64
- name: eval_67
dtype: float64
- name: eval_68
dtype: float64
- name: eval_69
dtype: float64
- name: eval_70
dtype: float64
- name: eval_71
dtype: float64
- name: eval_72
dtype: float64
- name: eval_73
dtype: float64
- name: eval_74
dtype: float64
- name: eval_75
dtype: float64
- name: eval_76
dtype: float64
- name: eval_77
dtype: float64
- name: eval_78
dtype: float64
- name: eval_79
dtype: float64
- name: eval_80
dtype: float64
- name: eval_81
dtype: float64
- name: eval_82
dtype: float64
- name: eval_83
dtype: float64
- name: eval_84
dtype: float64
- name: eval_85
dtype: float64
- name: eval_86
dtype: float64
- name: eval_87
dtype: float64
- name: eval_88
dtype: float64
- name: eval_89
dtype: float64
- name: eval_90
dtype: float64
- name: eval_91
dtype: float64
- name: eval_92
dtype: float64
- name: eval_93
dtype: float64
- name: eval_94
dtype: float64
- name: eval_95
dtype: float64
- name: eval_96
dtype: float64
- name: eval_97
dtype: float64
- name: eval_98
dtype: float64
- name: eval_99
dtype: float64
- name: eval_100
dtype: float64
- name: eval_101
dtype: float64
- name: eval_102
dtype: float64
- name: eval_103
dtype: float64
- name: eval_104
dtype: float64
- name: eval_105
dtype: float64
- name: eval_106
dtype: float64
- name: eval_107
dtype: float64
- name: eval_108
dtype: float64
- name: eval_109
dtype: float64
- name: eval_110
dtype: float64
- name: eval_111
dtype: float64
- name: eval_112
dtype: float64
- name: eval_113
dtype: float64
- name: eval_114
dtype: float64
- name: eval_115
dtype: float64
- name: eval_116
dtype: float64
- name: eval_117
dtype: float64
- name: eval_118
dtype: float64
- name: eval_119
dtype: float64
- name: eval_120
dtype: float64
- name: eval_121
dtype: float64
- name: eval_122
dtype: float64
- name: eval_123
dtype: float64
- name: eval_124
dtype: float64
- name: eval_125
dtype: float64
- name: eval_126
dtype: float64
- name: eval_127
dtype: float64
- name: len_0
dtype: int64
- name: len_1
dtype: int64
- name: len_2
dtype: int64
- name: len_3
dtype: int64
- name: len_4
dtype: int64
- name: len_5
dtype: int64
- name: len_6
dtype: int64
- name: len_7
dtype: int64
- name: len_8
dtype: int64
- name: len_9
dtype: int64
- name: len_10
dtype: int64
- name: len_11
dtype: int64
- name: len_12
dtype: int64
- name: len_13
dtype: int64
- name: len_14
dtype: int64
- name: len_15
dtype: int64
- name: len_16
dtype: int64
- name: len_17
dtype: int64
- name: len_18
dtype: int64
- name: len_19
dtype: int64
- name: len_20
dtype: int64
- name: len_21
dtype: int64
- name: len_22
dtype: int64
- name: len_23
dtype: int64
- name: len_24
dtype: int64
- name: len_25
dtype: int64
- name: len_26
dtype: int64
- name: len_27
dtype: int64
- name: len_28
dtype: int64
- name: len_29
dtype: int64
- name: len_30
dtype: int64
- name: len_31
dtype: int64
- name: len_32
dtype: int64
- name: len_33
dtype: int64
- name: len_34
dtype: int64
- name: len_35
dtype: int64
- name: len_36
dtype: int64
- name: len_37
dtype: int64
- name: len_38
dtype: int64
- name: len_39
dtype: int64
- name: len_40
dtype: int64
- name: len_41
dtype: int64
- name: len_42
dtype: int64
- name: len_43
dtype: int64
- name: len_44
dtype: int64
- name: len_45
dtype: int64
- name: len_46
dtype: int64
- name: len_47
dtype: int64
- name: len_48
dtype: int64
- name: len_49
dtype: int64
- name: len_50
dtype: int64
- name: len_51
dtype: int64
- name: len_52
dtype: int64
- name: len_53
dtype: int64
- name: len_54
dtype: int64
- name: len_55
dtype: int64
- name: len_56
dtype: int64
- name: len_57
dtype: int64
- name: len_58
dtype: int64
- name: len_59
dtype: int64
- name: len_60
dtype: int64
- name: len_61
dtype: int64
- name: len_62
dtype: int64
- name: len_63
dtype: int64
- name: len_64
dtype: int64
- name: len_65
dtype: int64
- name: len_66
dtype: int64
- name: len_67
dtype: int64
- name: len_68
dtype: int64
- name: len_69
dtype: int64
- name: len_70
dtype: int64
- name: len_71
dtype: int64
- name: len_72
dtype: int64
- name: len_73
dtype: int64
- name: len_74
dtype: int64
- name: len_75
dtype: int64
- name: len_76
dtype: int64
- name: len_77
dtype: int64
- name: len_78
dtype: int64
- name: len_79
dtype: int64
- name: len_80
dtype: int64
- name: len_81
dtype: int64
- name: len_82
dtype: int64
- name: len_83
dtype: int64
- name: len_84
dtype: int64
- name: len_85
dtype: int64
- name: len_86
dtype: int64
- name: len_87
dtype: int64
- name: len_88
dtype: int64
- name: len_89
dtype: int64
- name: len_90
dtype: int64
- name: len_91
dtype: int64
- name: len_92
dtype: int64
- name: len_93
dtype: int64
- name: len_94
dtype: int64
- name: len_95
dtype: int64
- name: len_96
dtype: int64
- name: len_97
dtype: int64
- name: len_98
dtype: int64
- name: len_99
dtype: int64
- name: len_100
dtype: int64
- name: len_101
dtype: int64
- name: len_102
dtype: int64
- name: len_103
dtype: int64
- name: len_104
dtype: int64
- name: len_105
dtype: int64
- name: len_106
dtype: int64
- name: len_107
dtype: int64
- name: len_108
dtype: int64
- name: len_109
dtype: int64
- name: len_110
dtype: int64
- name: len_111
dtype: int64
- name: len_112
dtype: int64
- name: len_113
dtype: int64
- name: len_114
dtype: int64
- name: len_115
dtype: int64
- name: len_116
dtype: int64
- name: len_117
dtype: int64
- name: len_118
dtype: int64
- name: len_119
dtype: int64
- name: len_120
dtype: int64
- name: len_121
dtype: int64
- name: len_122
dtype: int64
- name: len_123
dtype: int64
- name: len_124
dtype: int64
- name: len_125
dtype: int64
- name: len_126
dtype: int64
- name: len_127
dtype: int64
splits:
- name: train
num_bytes: 1719455006
num_examples: 7500
download_size: 772681025
dataset_size: 1719455006
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
saudefem/RAG_SEMFINET_qwen7_k5_v1 | saudefem | 2025-03-15T00:30:48Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-15T00:30:46Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: pergunta
dtype: string
- name: resposta
dtype: string
- name: context_k1
dtype: string
- name: context_k2
dtype: string
- name: context_k3
dtype: string
- name: context_k4
dtype: string
- name: context_k5
dtype: string
- name: context_k6
dtype: string
- name: context_k7
dtype: string
- name: context_k8
dtype: string
- name: context_k9
dtype: string
- name: context_k10
dtype: string
- name: inferencia
dtype: string
splits:
- name: train
num_bytes: 5727728
num_examples: 100
download_size: 3236025
dataset_size: 5727728
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
younghyopark/dart_flip_box_20250402_204507 | younghyopark | 2025-04-03T00:45:20Z | 25 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"teleop",
"success"
] | [
"robotics"
] | 2025-04-03T00:45:17Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- teleop
- success
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "DualPanda",
"total_episodes": 1,
"total_frames": 120,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 50,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": null,
"features": {
"observation.state.qpos": {
"dtype": "float32",
"shape": [
18
],
"names": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17
]
},
"observation.state.env_state": {
"dtype": "float32",
"shape": [
7
],
"names": [
0,
1,
2,
3,
4,
5,
6
]
},
"observation.state.qvel": {
"dtype": "float32",
"shape": [
24
],
"names": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23
]
},
"action": {
"dtype": "float32",
"shape": [
16
],
"names": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
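Because the card's `configs` entry points directly at the parquet files, the episode frames can also be read with the datasets library. This is a sketch added for illustration, not part of the original card; the dotted column names follow the feature schema above.
```python
from datasets import load_dataset
import numpy as np

# Loads the single recorded episode (120 frames at 50 fps) described in meta/info.json
ds = load_dataset("younghyopark/dart_flip_box_20250402_204507", split="train")

frame = ds[0]
qpos = np.array(frame["observation.state.qpos"])   # 18-dim joint positions
action = np.array(frame["action"])                  # 16-dim commanded action
print(qpos.shape, action.shape, frame["timestamp"])
```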
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Zangs3011/rando_wando | Zangs3011 | 2024-10-30T10:24:43Z | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-30T07:44:20Z | 0 | ---
dataset_info:
features:
- name: english
dtype: string
- name: hindi
dtype: string
- name: __index_level_0__
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 16515334
num_examples: 30182
download_size: 8566552
dataset_size: 16515334
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BarryFutureman/vpt_data_8xx_shard0190 | BarryFutureman | 2025-06-11T02:07:17Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-11T02:05:49Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": null,
"total_episodes": 10,
"total_frames": 42630,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 20,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.image": {
"dtype": "image",
"shape": [
3,
360,
640
],
"names": [
"channel",
"height",
"width"
]
},
"action": {
"dtype": "string",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Gummybear05/E50_Ypause | Gummybear05 | 2024-10-11T23:56:49Z | 22 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-11T23:55:04Z | 0 | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float64
- name: sample_rate
dtype: int64
- name: text
dtype: string
- name: scriptId
dtype: int64
- name: fileNm
dtype: string
- name: recrdTime
dtype: float64
- name: recrdQuality
dtype: int64
- name: recrdDt
dtype: string
- name: scriptSetNo
dtype: string
- name: recrdEnvrn
dtype: string
- name: colctUnitCode
dtype: string
- name: cityCode
dtype: string
- name: recrdUnit
dtype: string
- name: convrsThema
dtype: string
- name: gender
dtype: string
- name: recorderId
dtype: string
- name: age
dtype: int64
splits:
- name: train
num_bytes: 9309946084
num_examples: 12401
download_size: 2116188092
dataset_size: 9309946084
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Dataset Card for Hugging Face Hub Dataset Cards
This dataset consists of dataset cards for datasets hosted on the Hugging Face Hub. The dataset cards are created by the community and provide information about the datasets hosted on the Hugging Face Hub. This dataset is updated on a daily basis and includes publicly available datasets on the Hugging Face Hub.
This dataset is made available to help support users wanting to work with a large number of Dataset Cards from the Hub. We hope that this dataset will help support research in the area of Dataset Cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.
Dataset Details
Uses
There are a number of potential uses for this dataset including:
- text mining to find common themes in dataset cards
- analysis of the dataset card format/content
- topic modelling of dataset cards
- training language models on the dataset cards
Out-of-Scope Use
[More Information Needed]
Dataset Structure
This dataset has a single split.
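A minimal loading sketch; the repository id below is a placeholder and should be replaced with this dataset's actual id on the Hub.
```python
from datasets import load_dataset

# Placeholder repo id -- substitute the id shown on this dataset's Hub page
cards = load_dataset("<namespace>/<dataset-cards-repo>", split="train")

print(cards.column_names)   # inspect the available metadata fields
print(cards[0])             # one dataset card record with its README text
```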
Dataset Creation
Curation Rationale
The dataset was created to assist people in working with dataset cards. In particular, it was created to support research in the area of dataset cards and their use. It is also possible to use the Hugging Face Hub API or client library to download dataset cards directly, and this option may be preferable if you have a very specific use case or require a different format.
Source Data
The source data is README.md files for datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the dataset directory.
Data Collection and Processing
The data is downloaded using a CRON job on a daily basis.
Who are the source data producers?
The source data producers are the creators of the dataset cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the dataset card in this repository although this information can be gathered from the Hugging Face Hub API.
Annotations [optional]
There are no additional annotations in this dataset beyond the dataset card content.
Annotation process
N/A
Who are the annotators?
N/A
Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of dataset cards to contain personal or sensitive information, it is possible that some dataset cards may contain this information. Dataset cards may also link to websites or email addresses.
Bias, Risks, and Limitations
Dataset cards are created by the community, and we do not have any control over their content. We do not review the content of the dataset cards and we do not make any claims about the accuracy of the information they contain. Some dataset cards will themselves discuss bias, sometimes by providing examples of bias in the data itself. As a result, this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the dataset cards, some dataset cards may include images. Some of these images may not be suitable for all audiences.
Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
Dataset Card Authors
Dataset Card Contact