---
license: mit
configs:
- config_name: '20250620'
  data_files:
  - split: train
    path: 20250620/train-*
dataset_info:
  config_name: '20250620'
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: train
    num_bytes: 995443287
    num_examples: 171674
  download_size: 359387209
  dataset_size: 995443287
---
# Bangla Wikipedia Dump
This dataset contains a cleaned and processed version of the Bangla Wikipedia dump, structured for easy use in Natural Language Processing tasks such as language modeling, text classification, and content generation.
## Getting Started
To download the full dataset:

```python
from datasets import load_dataset

dataset = load_dataset("sagorsarker/bangla-wikipedia")
```
To download a subset or specific version:

```python
from datasets import load_dataset

dataset = load_dataset("sagorsarker/bangla-wikipedia", data_dir="<subset_name>")
# example
# dataset = load_dataset("sagorsarker/bangla-wikipedia", data_dir="20250620")
```
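If you prefer not to download the full archive up front, the dataset can also be streamed. Below is a minimal sketch, assuming the standard streaming support in the `datasets` library:

```python
from datasets import load_dataset

# Stream records without downloading the full archive first
dataset = load_dataset("sagorsarker/bangla-wikipedia", split="train", streaming=True)

# Peek at the first article
first_example = next(iter(dataset))
print(first_example["title"])
```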
## Dataset Details
- Language: Bangla (Bengali)
- Source: Official Wikipedia Bangla dump
- License: MIT
- Split: `train` only
- Total Examples: 171,674
- Total Size: ~995 MB (unzipped)
- Wikipedia Version: 20 June 2025
Each sample in the dataset contains:
- `id`: Unique article ID
- `url`: Source Wikipedia URL
- `title`: Article title
- `content`: Cleaned article body text
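For reference, an individual record can be inspected like this (a minimal sketch; the printed values depend on the article):

```python
from datasets import load_dataset

dataset = load_dataset("sagorsarker/bangla-wikipedia")

sample = dataset["train"][0]
print(sample["id"])              # unique article ID
print(sample["url"])             # source Wikipedia URL
print(sample["title"])           # article title
print(sample["content"][:200])   # first 200 characters of the cleaned body text
```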
## Configs

```yaml
configs:
- config_name: '20250620'
  data_files:
  - split: train
    path: 20250620/train-*
```
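Because the snapshot is registered as a named config, it can also be selected by config name instead of `data_dir` (a minimal sketch, equivalent to the versioned example above):

```python
from datasets import load_dataset

# Select the 20 June 2025 snapshot by its registered config name
dataset = load_dataset("sagorsarker/bangla-wikipedia", "20250620")
print(dataset["train"].num_rows)
```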
## Use Cases
This dataset is well-suited for:
- Pretraining/fine-tuning language models for Bangla
- Information retrieval or summarization tasks
- Next-word or masked-word prediction models
- Bangla text classification, QA, or conversational agents
## Contribution
This is a philanthropic open-source contribution to support the growth of the Bangla NLP community. Contributions and feedback are warmly welcome!