---
license: cc-by-4.0
datasets:
  - Voice49/dber
pretty_name: DB-ER — Dataset for Database Entity Recognition
language:
  - en
tags:
  - db-er
  - schema-linking
  - text-to-sql
  - ner
  - token-classification
task_categories:
  - token-classification
task_ids:
  - named-entity-recognition
size_categories:
  - 10K<n<100K
---

# DB-ER — Dataset for Database Entity Recognition

## Dataset Summary

DB-ER is a token-level dataset for Database Entity Recognition (DB-ER) in natural-language queries (NLQs) paired with SQL. The task is to tag each token as one of Table, Column, Value, or O (non-entity).
Each example includes the NLQ, a database identifier, a canonical dataset id, the paired SQL query, the tokenized question, a compact entity→token reverse index, an explicit entities table (typed schema/value items), and CoNLL-style DB-ER tags.


## What’s inside

### Labels

  • 4-class: Table, Column, Value, O
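Since dber_tags uses CoNLL-style IOB2 encoding (see below), the four classes expand to a seven-tag scheme. A minimal sketch of the assumed full tag set (the example instance below happens to contain only B-* tags; multi-token spans use I-*):

```python
# Assumed full IOB2 tag set implied by the four classes; in practice,
# derive the label list from the data rather than hard-coding it.
LABELS = ["O", "B-TABLE", "I-TABLE", "B-COLUMN", "I-COLUMN", "B-VALUE", "I-VALUE"]
```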

### Fields per example

  • question_id (int) — Example id
  • db_id (str) — Database identifier
  • dber_id (str) — Canonical id linking back to the source file/split (BIRD, SPIDER)
  • question (str) — NLQ text
  • SQL (str) — Paired SQL query
  • tokens (List[str]) — Tokenized NLQ
  • entities (List[Object]) — Typed DB items referenced in the SQL; each item has:
    • id (int) — Local entity id (unique within the example)
    • type ("table"|"column"|"value")
    • value (str) — Surface form from the DB schema or literal value
  • entity_to_token (Dict[str, List[int]]) — Maps each entity id (string key) to the indices of its aligned tokens
  • dber_tags (List[str]) — CoNLL-style IOB2 tags over tokens

## Splits

Entity token prevalence is consistent across splits: ~29% entity vs. ~71% O.

| Split | # Examples |
| --- | --- |
| human_train | 500 |
| human_test | 500 |
| synthetic_train | 15,026 |
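As a sanity check on the prevalence figure above, a minimal sketch that counts non-O tags per split (assuming the Hub loader shown under "How to load"):

```python
from datasets import load_dataset

# Count entity (non-O) tags per split to check the ~29% / ~71% figure.
ds = load_dataset("Voice49/dber")
for split, data in ds.items():
    tags = [tag for example in data["dber_tags"] for tag in example]
    entity = sum(tag != "O" for tag in tags)
    print(f"{split}: {entity / len(tags):.1%} entity, {1 - entity / len(tags):.1%} O")
```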

synthetic_train is produced via our auto-annotation pipeline, which aligns SQL-referenced entities to NLQ spans using string-similarity candidates (Jaccard 3-gram / Levenshtein) and a non-overlapping ILP selection objective. See Annotation below.


## Example instance

```json
{
  "question_id": 13692,
  "db_id": "retail_complains",
  "dber_id": "bird:train.json:282",
  "question": "Among the clients born between 1980 and 2000, list the name of male clients who complained through referral.",
  "SQL": "SELECT T1.first, T1.middle, T1.last FROM client AS T1 INNER JOIN events AS T2 ON T1.client_id = T2.Client_ID WHERE T1.year BETWEEN 1980 AND 2000 AND T1.sex = 'Male' AND T2.`Submitted via` = 'Referral'",
  "tokens": ["Among","the","clients","born","between","1980","and","2000",",","list","the","name","of","male","clients","who","complained","through","referral","."],
  "entities": [
    {"id":0,"type":"column","value":"first"},
    {"id":1,"type":"column","value":"middle"},
    {"id":2,"type":"column","value":"last"},
    {"id":3,"type":"table","value":"client"},
    {"id":4,"type":"table","value":"events"},
    {"id":5,"type":"column","value":"client_id"},
    {"id":6,"type":"column","value":"year"},
    {"id":7,"type":"value","value":"1980"},
    {"id":8,"type":"value","value":"2000"},
    {"id":9,"type":"column","value":"sex"}
    {"id":10,"type":"value","value":"Male"},
    {"id":11,"type":"column","value":"Submitted via"},
    {"id":12,"type":"value","value":"Referral"},
  ],
  "entity_to_token": {"3":[2],"7":[5],"8":[7],"10":[13],"5":[14],"12":[18]},
  "dber_tags": ["O","O","B-TABLE","O","O","B-VALUE","O","B-VALUE","O","O","O","O","O","B-VALUE","B-COLUMN","O","O","O","B-VALUE","O"]
}
```
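entity_to_token and dber_tags encode the same alignment, so one can be rebuilt from the other. A minimal sketch (assuming `example` holds the dict shown above and that each entity's aligned tokens form a contiguous span):

```python
def rebuild_tags(example):
    """Rebuild IOB2 tags from `entities` and `entity_to_token`."""
    tags = ["O"] * len(example["tokens"])
    entity_type = {e["id"]: e["type"].upper() for e in example["entities"]}
    for entity_id, token_ids in example["entity_to_token"].items():
        label = entity_type[int(entity_id)]  # keys are string entity ids
        for k, token_idx in enumerate(sorted(token_ids)):
            tags[token_idx] = ("B-" if k == 0 else "I-") + label
    return tags

# For the instance above, this reproduces `dber_tags` exactly
# (assuming `example` is the dict shown above).
assert rebuild_tags(example) == example["dber_tags"]
```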

## How to load

### Load JSONL files

```python
from datasets import load_dataset

data_files = {
    "human_train": "https://huggingface.co/datasets/Voice49/dber/resolve/main/human_train.jsonl",
    "human_test": "https://huggingface.co/datasets/Voice49/dber/resolve/main/human_test.jsonl",
    "synthetic_train": "https://huggingface.co/datasets/Voice49/dber/resolve/main/synthetic_train.jsonl",
}
ds = load_dataset("json", data_files=data_files)
print(ds)
print(ds["human_train"][0])

### Load from the Hub

```python
from datasets import load_dataset

ds = load_dataset("Voice49/dber")
```
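For token-classification training, the string tags need integer ids. A minimal sketch that derives the label vocabulary from the data and attaches label ids:

```python
from datasets import load_dataset

ds = load_dataset("Voice49/dber")

# Build the label vocabulary from the released tags rather than assuming it
# (here from human_train; in practice, union over all splits).
labels = sorted({tag for example in ds["human_train"]["dber_tags"] for tag in example})
label2id = {label: i for i, label in enumerate(labels)}
id2label = {i: label for label, i in label2id.items()}

# Attach integer label ids alongside the string tags.
ds = ds.map(lambda ex: {"labels": [label2id[t] for t in ex["dber_tags"]]})
```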

## Annotation (human + synthetic)

  • Human: collaborative web UI with schema and SQL visible during labeling.
  • Synthetic: for each NLQ–SQL pair, generate candidate spans with Jaccard/Levenshtein, then solve a non-overlapping ILP to select spans maximizing similarity. Hyperparameters are validated on human data.
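For illustration only, a minimal sketch of the synthetic selection step, not the authors' exact pipeline: candidates are scored with Jaccard similarity over character 3-grams (the Levenshtein term and validated hyperparameters are omitted), and a binary ILP (via PuLP, an assumed dependency) picks a non-overlapping subset.

```python
import pulp  # assumed dependency: pip install pulp

def jaccard_3gram(a, b):
    """Jaccard similarity over character 3-grams."""
    grams = lambda s: {s[i:i + 3] for i in range(max(len(s) - 2, 1))}
    ga, gb = grams(a.lower()), grams(b.lower())
    return len(ga & gb) / len(ga | gb)

def select_spans(candidates):
    """candidates: (token_start, token_end_exclusive, entity_id, score) tuples.
    Select a non-overlapping subset that maximizes total similarity score."""
    prob = pulp.LpProblem("span_selection", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(candidates))]
    prob += pulp.lpSum(c[3] * x[i] for i, c in enumerate(candidates))
    for i, (s1, e1, *_) in enumerate(candidates):
        for j, (s2, e2, *_) in enumerate(candidates):
            if i < j and s1 < e2 and s2 < e1:  # the two token spans overlap
                prob += x[i] + x[j] <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [c for i, c in enumerate(candidates) if x[i].value() == 1]
```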

## Data provenance

Examples are derived from the BIRD and Spider text-to-SQL benchmarks; dber_id records the source dataset, file, and index (e.g., bird:train.json:282) so each example can be traced back to its origin.

## Release notes

  • v1.0: Initial public release