Fidel: A Large-Scale Sentence Level Amharic OCR Dataset

Overview

Fidel is a comprehensive dataset for Amharic Optical Character Recognition (OCR) at the sentence level. It contains a diverse collection of Amharic text images spanning handwritten, typed, and synthetic sources. This dataset aims to advance language technology for Amharic, serving critical applications such as digital ID initiatives, document digitization, and automated form processing in Ethiopia.

Dataset Structure

The dataset is organized into train and test splits:

fidel-dataset/
├── train/
│   ├── 3_line_2.png     # training images (handwritten, typed, and synthetic)
│   ├── ...
│   └── labels.csv       # filenames and corresponding text labels
├── test/
│   ├── 26_hand_8.png    # test images (handwritten, typed, and synthetic)
│   ├── ...
│   └── labels.csv       # filenames and corresponding text labels
├── train_labels.csv     # filenames and text labels for train -- for Croissant validation
├── test_labels.csv      # filenames and text labels for test -- for Croissant validation
└── metadata.json        # Croissant metadata file

Labels Format

Each CSV file contains the following columns:

  • image_filename: Name of the image file
  • line_text: The Amharic text content in the image
  • type: The source type (handwritten, typed, or synthetic)
  • writer: The writer number (for handwritten types only)

Example Labels

image_filename   line_text
25_line_4.png แ‹ฒแŒแˆชแ‹Žแ‰ฝ แ‹จแˆ…แ‹แ‰ฅ แŠ แˆตแ‰ฐแ‹ณแ‹ฐแˆญ แ‰ตแˆแˆ…แˆญแ‰ต แ‰ฐแˆแˆจแ‹ แŠ แŠ•แ‹ณแŒˆแŠŸแ‰ธแ‹ แˆฒแŒˆแˆแŒน แ‹จแ‰†แ‹ฉ แˆฒแˆ†แŠ• แ‹ญแˆ…แŠ•แŠ•แˆ แ‰ แ“แˆญแˆ‹แˆ› แ‹ตแˆจแŒˆแŒฝ แฃ แ‰ แŒแˆตแ‰กแŠญ แฃ แ‹ŠแŠชแ”แ‹ตแ‹ซ แฃ
3_line_2.png แ‹ฎแˆญแŠญ แŠฌแŠ”แ‹ฒ แŠ แ‹จแˆญ แŒฃแ‰ขแ‹ซ แ‰ฐแАแˆตแ‰ถ แˆŽแŠ•แ‹ถแŠ• แˆ‚แ‹แˆฎแ‹ แŠ แ‹จแˆญ แŒฃแ‰ขแ‹ซ แŠ แˆจแˆแข แ‹แˆแ‰ฃแ‰ฅแ‹Œแˆ แ‰ แˆ˜แŠ•แŒแˆตแ‰ต แˆˆแ‰ณแŒˆแ‹˜ แ‹แˆญแŠแ‹ซ แŠฅแŠ•แ‹ฒแˆแˆ แˆˆแ‹ตแˆ…แАแ‰ตแŠ“ แ‰ แˆฝแ‰ณ แŠฅแŒ‡แŠ•
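
The labels CSV can be read with Python's standard csv module. The sketch below uses made-up stand-in rows (real rows contain Amharic sentences in the line_text column):

```python
import csv
import io

# Stand-in for train/labels.csv -- the row contents here are
# illustrative, not actual dataset entries.
sample = io.StringIO(
    "image_filename,line_text,type,writer\n"
    "3_line_2.png,<amharic sentence>,synthetic,\n"
    "26_hand_8.png,<amharic sentence>,handwritten,26\n"
)

rows = list(csv.DictReader(sample))

# The `type` column distinguishes handwritten, typed, and synthetic
# sources; `writer` is only populated for handwritten samples.
handwritten = [r for r in rows if r["type"] == "handwritten"]
print(len(handwritten), handwritten[0]["writer"])  # 1 26
```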

Usage

# Install git-lfs if not already installed
git lfs install

# Clone with LFS support for large files
git clone https://huggingface.co/datasets/upanzi/fidel-dataset
cd fidel-dataset

# Pull LFS files (zip archives)
git lfs pull

# Extract the archives
unzip train.zip
unzip test.zip
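
After extraction, images can be paired with their labels. A minimal Python sketch, assuming the layout shown under Dataset Structure (the function name load_split is illustrative, not part of the dataset):

```python
import csv
from pathlib import Path

def load_split(split_dir):
    """Yield (image_path, line_text) pairs for one split directory.

    Assumes <split_dir>/labels.csv sits alongside the image files,
    with image_filename and line_text columns as described above.
    """
    split_dir = Path(split_dir)
    with open(split_dir / "labels.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            yield split_dir / row["image_filename"], row["line_text"]

# Usage after unzipping:
#   for image_path, text in load_split("train"):
#       ...  # feed the image and its transcription to an OCR pipeline
```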

Note

The top-level train_labels.csv and test_labels.csv contain the same labels as the labels.csv files inside the train and test archives. If you use these top-level files, make sure to prepend the train/ or test/ directory to the image_filename path. The duplicate files exist for the purpose of Croissant validation.
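
For example, prepending the split prefix when reading the top-level CSV could look like this (a sketch with illustrative row contents):

```python
import csv
import io

# Stand-in for the top-level train_labels.csv; real rows contain
# Amharic text in the line_text column.
sample = io.StringIO(
    "image_filename,line_text,type,writer\n"
    "3_line_2.png,<amharic sentence>,synthetic,\n"
)

rows = list(csv.DictReader(sample))
for row in rows:
    # Point each filename at the extracted train/ directory.
    row["image_filename"] = "train/" + row["image_filename"]

print(rows[0]["image_filename"])  # train/3_line_2.png
```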

Dataset Statistics

Overall Statistics

  • Total samples: 366,059
  • Training samples: 292,847 (~80%)
  • Test samples: 73,212 (~20%)

By Source Type

  • Handwritten: 40,946 samples
  • Typed: 28,303 samples
  • Synthetic: 297,810 samples

Image Characteristics

  • Average image width: varies by type (handwritten: 2,480px, typed: 2,482px, synthetic: 2,956px)
  • Average image height: varies by type (handwritten: 199px, typed: 71px, synthetic: 244px)
  • Average aspect ratio: varies by type (handwritten: 14.0, typed: 19.5, synthetic: 11.6)

Text Characteristics

  • Average text length: varies by type (handwritten: 62.0 characters, typed: 95.2 characters, synthetic: 74.7 characters)
  • Average word count: varies by type (handwritten: 11.3 words, typed: 16.9 words, synthetic: 14.7 words)
  • Unique characters: 249 in handwritten, 200 in typed, 190 in synthetic

License

This dataset is released under the MIT License.

Acknowledgments

We thank all contributors who provided handwritten samples and the organizations that supported this data collection effort.
