VoxCommunis Corpus

The VoxCommunis Corpus is a phonetic corpus derived from the Mozilla Common Voice Corpus. Corresponding audio files and corpus metadata can be downloaded from Mozilla Common Voice or from one of several Hugging Face repositories for the different versions.
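If you are fetching the files from Hugging Face, here is a minimal sketch using huggingface_hub; the repo_id below is a placeholder, so substitute the repository for the version you need.

```python
from huggingface_hub import snapshot_download

# Download all files of a dataset repository to a local cache directory.
# "pacscilab/VoxCommunis" is a placeholder repo_id; substitute the actual
# repository for the version you need.
local_dir = snapshot_download(repo_id="pacscilab/VoxCommunis", repo_type="dataset")
print(local_dir)
```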

Within each folder, the filenames share a similar structure and encode information needed to use the files correctly. More detail on the filename conventions for each file type is provided below. In general, a filename such as mk_xpf_lexicon19 breaks down as follows (a parsing sketch follows the list):

  • mk: Common Voice language ID code (Macedonian)
  • xpf: G2P system for the lexicon (XPF Corpus)
  • 19: Common Voice version (Macedonian Version 19)
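For scripted access, here is a minimal sketch of splitting such a name into its parts; the regular expression mirrors the convention described above and is an illustration, not an official parser.

```python
import re

def parse_vxc_filename(name: str) -> dict:
    """Split a name like 'mk_xpf_lexicon19' into its components.

    Note: TextGrid names such as mk_xpf_textgrids19_acoustic19 carry a
    second version suffix and would need an extended pattern.
    """
    m = re.fullmatch(r"([A-Za-z-]+)_([a-z]{3})_([a-z]+)(\d+)", name)
    if m is None:
        raise ValueError(f"unexpected filename: {name}")
    lang, g2p, file_type, version = m.groups()
    return {"language": lang, "g2p": g2p, "type": file_type, "version": int(version)}

print(parse_vxc_filename("mk_xpf_lexicon19"))
# {'language': 'mk', 'g2p': 'xpf', 'type': 'lexicon', 'version': 19}
```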

acoustic_models/: The acoustic models were trained with the Montreal Forced Aligner, and the force-aligned TextGrids are obtained directly from those alignments. The models can be downloaded and reused with the Montreal Forced Aligner to align new data, as sketched below.
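A minimal sketch of reusing a downloaded model through the MFA command-line interface; all four paths, including the model filename, are placeholders.

```python
import subprocess

# Run "mfa align <corpus> <lexicon> <acoustic model> <output>" to align new
# data; the paths below are placeholders for your own files.
subprocess.run(
    [
        "mfa", "align",
        "corpus/",               # .wav files with matching transcript files
        "mk_xpf_lexicon19.txt",  # pronunciation lexicon (see lexicons/ below)
        "mk_acoustic19.zip",     # hypothetical name for a downloaded model
        "aligned/",              # output directory for the TextGrids
    ],
    check=True,
)
```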

lexicons/: The lexicons were developed with various G2P toolkits, identified by the code in the filename (a loading sketch follows the list). Some manual correction has been applied, and we hope to continue improving them; updates from the community are welcome.

  • epi: Epitran
  • xpf: XPF Corpus
  • chr: Charsiu
  • cvu: Common Voice Utils
  • mfa: Montreal Forced Aligner G2P
  • vxc: Custom dictionaries
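A minimal sketch for loading one of these lexicons, assuming the common dictionary layout of one word per line followed by its space-separated phones; the tab delimiter is an assumption, so check your copy.

```python
# Build a word -> list of pronunciations mapping from a lexicon file.
# The tab delimiter between word and phones is an assumption.
lexicon = {}
with open("mk_xpf_lexicon19.txt", encoding="utf-8") as f:
    for line in f:
        word, _, phones = line.rstrip("\n").partition("\t")
        if word and phones:
            lexicon.setdefault(word, []).append(phones.split())
```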

textgrids/: The TextGrids contain phone- and word-level alignments of the validated set of the Common Voice data. A filename such as mk_xpf_textgrids19_acoustic19 corresponds to (a reading sketch follows the list):

  • mk: Common Voice language ID code (Macedonian)
  • xpf: G2P system for the lexicon (XPF Corpus)
  • textgrids19: Common Voice version for the TextGrids (Macedonian Version 19)
  • acoustic19: Common Voice version for acoustic model (Macedonian Version 19)
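A minimal sketch of reading the alignments with the third-party textgrid package (pip install textgrid); the path is a placeholder, and the tier layout shown is typical of Montreal Forced Aligner output rather than guaranteed.

```python
import textgrid  # pip install textgrid

tg = textgrid.TextGrid.fromFile("sample.TextGrid")  # placeholder path
for tier in tg.tiers:  # typically a word tier and a phone tier
    for interval in tier:
        if interval.mark:  # skip empty (silence) intervals
            print(f"{tier.name}\t{interval.minTime:.3f}\t{interval.maxTime:.3f}\t{interval.mark}")
```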

similarity_scores/: The client ID similarity scores were generated using the automatic speaker verification procedure described in Zhang, M., Farhadipour, A., Baker, A., Ma, J., Pricop, B., Chodroff, E. (2025) Quantifying and Reducing Speaker Heterogeneity within the Common Voice Corpus for Phonetic Analysis. Proc. Interspeech 2025, 3933-3937, doi: 10.21437/Interspeech.2025-2027. More details can be found at pacscilab/CV_clientID_cleaning. The filenames have the structure {language ID}_{version number}. The files have the following columns (a loading sketch follows the list):

  • enroll: The enrollment filename
  • test: The test filename
  • score: The cosine similarity score generated by the automatic speaker verification system
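A minimal sketch of loading and filtering one of these files with pandas; the tab delimiter, the filename, and the threshold are assumptions for illustration.

```python
import pandas as pd

# Placeholder filename following the {language ID}_{version number} pattern.
scores = pd.read_csv("mk_19.txt", sep="\t")  # columns: enroll, test, score
likely_same_speaker = scores[scores["score"] > 0.5]  # illustrative threshold
print(likely_same_speaker.head())
```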

speaker_files/: The speaker files are based on the data in validated.tsv included in the Common Voice download. They add a mapping from the original client_id to a simplified spkr_id, which can be joined back onto validated.tsv as sketched below.
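A minimal sketch of attaching the simplified speaker IDs to validated.tsv with pandas; the speaker filename is a placeholder, and the client_id join key follows the description above.

```python
import pandas as pd

validated = pd.read_csv("validated.tsv", sep="\t")
speakers = pd.read_csv("mk_spkr_19.tsv", sep="\t")  # placeholder filename

# Attach the simplified spkr_id to every utterance row via client_id.
merged = validated.merge(speakers, on="client_id", how="left")
print(merged[["client_id", "spkr_id"]].drop_duplicates().head())
```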

similarity_scores_interspeech25.txt: The similarity scores from the 76 languages analyzed and reported in Zhang, M., Farhadipour, A., Baker, A., Ma, J., Pricop, B., Chodroff, E. (2025) Quantifying and Reducing Speaker Heterogeneity within the Common Voice Corpus for Phonetic Analysis. Proc. Interspeech 2025, 3933-3937, doi: 10.21437/Interspeech.2025-2027.

Additional code and resources

The primary GitHub repository for the VoxCommunis Corpus can be found here: https://github.com/pacscilab/voxcommunis

The GitHub repository for the Common Voice client ID auditing can be found here (with links to additional code therein): https://github.com/pacscilab/CV_clientID_cleaning
