---
dataset_info:
  features:
    - name: wiki_title
      dtype: string
    - name: qid
      dtype: string
    - name: definition
      dtype: string
  splits:
    - name: en
      num_bytes: 26443378
      num_examples: 25449
  download_size: 16447664
  dataset_size: 26443378
configs:
  - config_name: default
    data_files:
      - split: en
        path: data/en-*
license: mit
language:
  - en
tags:
  - Wikidata
  - Wikipedia
  - Description
  - Entity
  - QID
  - Knowledge
---

# Wikipedia Definitions Dataset

`wikipedia_definitions` pairs English Wikipedia article titles (`wiki_title`) and their Wikidata IDs (`qid`) with the definition sentence(s) that open each Wikipedia article. The corpus contains 25,449 entities.

Lead-paragraph definitions give a slightly richer, stylistically uniform overview of an entity than the short Wikidata description, making them useful as lightweight contextual signals for tasks such as entity linking, retrieval, question answering, and knowledge-graph construction, as well as other NLP / IR applications.
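
As a rough illustration of that use case, the sketch below ranks entities against a free-text query by TF-IDF similarity over their definitions. It assumes `scikit-learn` is installed; the query string and the top-3 cutoff are arbitrary choices for the example.

```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ds = load_dataset("masaki-sakata/wikipedia_definitions", split="en")

# Vectorise every definition once, then score a free-text query against them.
vectorizer = TfidfVectorizer(stop_words="english")
def_matrix = vectorizer.fit_transform(ds["definition"])

query = "former professional basketball player and businessman"
scores = cosine_similarity(vectorizer.transform([query]), def_matrix)[0]

# Show the three entities whose definitions match the query best.
for idx in scores.argsort()[::-1][:3]:
    row = ds[int(idx)]
    print(row["wiki_title"], row["qid"], f"{scores[idx]:.3f}")
```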


## Dataset Structure

```python
from datasets import load_dataset

ds = load_dataset("masaki-sakata/wikipedia_definitions", split="en")
print(ds)
# Dataset({
#   features: ['wiki_title', 'qid', 'definition'],
#   num_rows: 25449
# })
```

Field description:

| column | type | description |
|---|---|---|
| `wiki_title` | `str` | Title of the corresponding English Wikipedia article |
| `qid` | `str` | Wikidata identifier, e.g. `Q7156` |
| `definition` | `str` | The first sentence(s) of the English Wikipedia lead paragraph (CC-BY-SA 3.0) |

Example record

```json
{
  "wiki_title": "Michael Jordan",
  "qid": "Q41421",
  "definition": "Michael Jeffrey Jordan (born February 17, 1963), also known by his initials MJ, is an American businessman and former professional basketball player, who is ..."
}
```

## Quick Usage Example

```python
from datasets import load_dataset

ds = load_dataset("masaki-sakata/wikipedia_definitions", split="en")

# print the first three definitions
for record in ds.select(range(3)):
    print(record["definition"])
```
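
Beyond iterating, it is often handy to index the data by `qid` or to subset it. The follow-up sketch below uses plain Python and the standard `datasets.Dataset.filter` API; the 200-character threshold is just an arbitrary example.

```python
# Build an in-memory lookup from QID to definition for O(1) access.
qid_to_def = {row["qid"]: row["definition"] for row in ds}
print(qid_to_def["Q41421"][:80])  # Michael Jordan's definition (truncated)

# Or subset with the datasets API, e.g. keep only short definitions.
short = ds.filter(lambda r: len(r["definition"]) < 200)
print(len(short))
```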

## Source & Construction

1. **Seed list**
   The `en` split of `masaki-sakata/entity_popularity` supplies the `wiki_title` / `qid` pairs.

2. **Extraction**
   For every `wiki_title` we queried the Wikipedia REST API (`page/summary`) and extracted the plain-text `extract` field, which corresponds to the first sentence(s) of the lead paragraph (see the sketch after this list).

3. **Post-processing**
   - Articles without a non-empty lead extract were discarded.
   - Resulting size: 25,449 items.

4. **Licensing**
   - Each definition is taken from English Wikipedia and is therefore licensed under CC-BY-SA 3.0 and GFDL.
   - The dataset as a compilation (metadata, indexing, and scripts) is released under the MIT License.
   - If you redistribute or use the text in downstream applications, remember to comply with the CC-BY-SA 3.0 attribution and share-alike requirements.
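
For reference, here is a minimal sketch of the extraction step (2) above: querying the Wikipedia REST `page/summary` endpoint for a single title and reading its plain-text `extract`. This illustrates the described procedure rather than reproducing the exact script used to build the dataset; the `User-Agent` string and error handling are placeholders.

```python
import requests

def fetch_definition(wiki_title):
    """Return the plain-text lead extract for a Wikipedia article, or None."""
    url = "https://en.wikipedia.org/api/rest_v1/page/summary/" + wiki_title.replace(" ", "_")
    resp = requests.get(url, headers={"User-Agent": "wikipedia-definitions-example"}, timeout=10)
    if resp.status_code != 200:
        return None
    extract = resp.json().get("extract", "").strip()
    # Articles without a non-empty lead extract were discarded during construction.
    return extract or None

print(fetch_definition("Michael Jordan"))
```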


Happy experimenting! Feel free to open an issue or pull request if you discover errors or have feature requests.