---
dataset_info:
  config_name: default
  data_files: '*.jsonl'
  dataset_size: 806254638230
license: mit
task_categories:
  - text-generation
language:
  - en
---

# fineweb_samples

Samples from the FINEWEB2-HQ dataset.

## Dataset Structure

This dataset contains 20 JSONL files with a total size of 768904.34 MB.
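The byte count in the metadata above agrees with this MB figure if "MB" is read as binary megabytes (MiB, i.e. bytes / 1024²), which appears to be the convention used for all the sizes in this card. A quick check:

```python
# dataset_size from the metadata block above, in bytes.
total_bytes = 806_254_638_230

# The sizes in this README appear to be binary megabytes
# (bytes / 1024**2, conventionally written MiB).
total_mb = round(total_bytes / 1024**2, 2)
print(total_mb)  # → 768904.34, matching the total quoted here
```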

Files:

- `vie_Latn_sample.jsonl`: 39103.12 MB
- `dan_Latn_sample.jsonl`: 40235.78 MB
- `ell_Grek_sample.jsonl`: 46413.37 MB
- `swe_Latn_sample.jsonl`: 37160.80 MB
- `hun_Latn_sample.jsonl`: 44048.42 MB
- `fas_Arab_sample.jsonl`: 34376.49 MB
- `arb_Arab_sample.jsonl`: 44047.16 MB
- `ces_Latn_sample.jsonl`: 44426.14 MB
- `tur_Latn_sample.jsonl`: 31603.26 MB
- `ind_Latn_sample.jsonl`: 36857.45 MB
- `nld_Latn_sample.jsonl`: 32896.04 MB
- `pol_Latn_sample.jsonl`: 34761.09 MB
- `por_Latn_sample.jsonl`: 35137.73 MB
- `ita_Latn_sample.jsonl`: 35744.49 MB
- `fra_Latn_sample.jsonl`: 42017.91 MB
- `jpn_Jpan_sample.jsonl`: 30881.42 MB
- `spa_Latn_sample.jsonl`: 35319.36 MB
- `deu_Latn_sample.jsonl`: 36791.65 MB
- `cmn_Hani_sample.jsonl`: 38176.40 MB
- `rus_Cyrl_sample.jsonl`: 48906.22 MB
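Each file follows the JSONL convention: one JSON object per line. The per-record schema is not documented in this card, so the `"text"` field in the sketch below is purely illustrative; it only demonstrates how a file of this shape is read.

```python
import json
import os
import tempfile

# Illustrative records — the real field names are not documented here.
records = [{"text": "hello"}, {"text": "world"}]

# Write them in JSONL form: one JSON object per line.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
    path = f.name

# Read the file back the same way, line by line.
loaded = []
with open(path) as f:
    for line in f:
        loaded.append(json.loads(line))
os.remove(path)

print(loaded)  # → [{'text': 'hello'}, {'text': 'world'}]
```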

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("path/to/this/dataset")
```

### Loading specific files

```python
from datasets import load_dataset

# Load specific JSONL files
dataset = load_dataset("json", data_files=["file1.jsonl", "file2.jsonl"])
```
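Since the files follow a regular naming scheme (`<lang>_<script>_sample.jsonl`), a `data_files` list for a subset of languages can be built programmatically. The language selection below is illustrative, and the commented-out `load_dataset` call assumes the files are available locally:

```python
# Build a data_files list from the naming scheme used in this dataset.
# The choice of languages here is purely illustrative.
langs = ["fra_Latn", "deu_Latn", "spa_Latn"]
data_files = [f"{lang}_sample.jsonl" for lang in langs]
print(data_files)
# → ['fra_Latn_sample.jsonl', 'deu_Latn_sample.jsonl', 'spa_Latn_sample.jsonl']

# With the files present locally:
# dataset = load_dataset("json", data_files=data_files)
```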