📺 YouTube-Commons 📺

YouTube-Commons is a collection of audio transcripts of 2,063,066 videos shared on YouTube under a CC-BY license.

Content

The collection comprises 22,709,724 original and automatically translated transcripts from 3,156,703 videos (721,136 individual channels).

In total, this represents nearly 45 billion words (44,811,518,375).

All the videos were shared on YouTube under a CC-BY license: the dataset provides all the necessary provenance information, including the title, link, channel name, and upload date.
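
As an illustration, the transcripts and their provenance fields can be read in streaming mode with the Hugging Face datasets library. This is a minimal sketch: it assumes the corpus is hosted under the PleIAs/YouTube-Commons repository id, and uses streaming to avoid downloading the full corpus up front.

```python
# Minimal sketch, assuming the corpus is hosted at "PleIAs/YouTube-Commons"
# on the Hugging Face Hub (adjust the repository id if needed).
from datasets import load_dataset

# Streaming mode iterates over the files without a full download.
ds = load_dataset("PleIAs/YouTube-Commons", split="train", streaming=True)

for row in ds.take(3):
    # Each row carries the provenance fields mentioned above.
    print(row["title"], row["video_link"], row["channel"], row["date"])
```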

The corpus is multilingual, with a majority of English-language content (71% of original languages). Automated translations are provided for nearly all videos in English, French, Spanish, German, Russian, Italian, and Dutch.
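
Original and translated transcripts can likewise be selected by combining the original_language and transcription_language fields. The sketch below assumes two-letter language codes ("fr", "en"); adjust them to the actual encoding used in the files.

```python
# Sketch: keep English transcriptions of videos originally in French.
# The codes "fr"/"en" are assumptions about the language encoding.
from datasets import load_dataset

ds = load_dataset("PleIAs/YouTube-Commons", split="train", streaming=True)

fr_to_en = ds.filter(
    lambda row: row["original_language"] == "fr"
    and row["transcription_language"] == "en"
)
for row in fr_to_en.take(2):
    print(row["title"], row["word_count"])
```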

Uses

The collection aims to expand the availability of conversational data for research in AI, computational social science and digital humanities.

Most of the available resources under free licenses are written texts such as public domain works or open science articles.

The text can be used for training models and republished for reproducibility purposes.

License and ethics

All the transcripts come from videos shared under a CC-BY license. In accordance with the provisions of the license, every YouTube channel is fully credited.

While content under a free license can be lawfully reproduced in any setting, there is currently a debate over the legitimacy and proper ethical use of free content for pre-training large language models.

In accordance with the philosophy of Creative Commons, we recommend that this set preferably be used for open research. Furthermore, the license requires that the contribution of each individual author be properly credited. In a research context, the best way to achieve this aim would be to fully release the data sources used for training or, at the very least, to provide extensive open documentation.
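
One possible way to satisfy the attribution requirement is to compile a credits file listing every channel that appears in the subset used. The sketch below is illustrative: the sample size and output file name are arbitrary choices, and the channel URL is reconstructed from the channel_id field.

```python
# Hedged sketch of a CC-BY credits file listing each channel once,
# using the channel and channel_id fields of the corpus.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("PleIAs/YouTube-Commons", split="train", streaming=True)

credits = Counter()
for row in ds.take(10_000):  # sample for illustration; iterate fully in practice
    credits[(row["channel"], row["channel_id"])] += 1

with open("CREDITS.txt", "w", encoding="utf-8") as f:
    for (channel, channel_id), n in credits.most_common():
        f.write(f"{channel} (https://www.youtube.com/channel/{channel_id}): {n} videos\n")
```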

Future developments

The collection is far from covering all the YouTube videos available under a Creative Commons license. We will continue to expand it significantly.

Additional releases will also focus on transcripts from other video sources not available on YouTube (especially public service and university websites).

Acknowledgements

The corpus was stored and processed with the generous support of Scaleway. It was built up with the support and concerted efforts of the state start-up LANGU:IA (start-up d'Etat), supported by the French Ministry of Culture and DINUM, as part of the prefiguration of the service offering of the Alliance for Language Technologies EDIC (ALT-EDIC).

Pleias' corpus collection projects have also been facilitated by the support, insights, and cooperation of the open-science LLM community (Occiglot, EleutherAI, Allen AI).
