|
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification-conceptnet-validated
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8544444444444445
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6524064171122995
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6498516320474778
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7509727626459144
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.902
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6271929824561403
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.625
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9246647581738737
    - name: F1 (macro)
      type: f1_macro
      value: 0.9201116139693363
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8826291079812206
    - name: F1 (macro)
      type: f1_macro
      value: 0.74506786895136
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.7172264355362946
    - name: F1 (macro)
      type: f1_macro
      value: 0.703292242462215
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9616748974055783
    - name: F1 (macro)
      type: f1_macro
      value: 0.8934154139843127
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9094327796928863
    - name: F1 (macro)
      type: f1_macro
      value: 0.906471425124189

---
|
# relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification-conceptnet-validated |
|
|
|
RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
[relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
|
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification-conceptnet-validated/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.6524064171122995
    - Accuracy on SAT: 0.6498516320474778
    - Accuracy on BATS: 0.7509727626459144
    - Accuracy on U2: 0.6271929824561403
    - Accuracy on U4: 0.625
    - Accuracy on Google: 0.902
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification-conceptnet-validated/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9246647581738737
    - Micro F1 score on CogALexV: 0.8826291079812206
    - Micro F1 score on EVALution: 0.7172264355362946
    - Micro F1 score on K&H+N: 0.9616748974055783
    - Micro F1 score on ROOT09: 0.9094327796928863
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification-conceptnet-validated/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8544444444444445
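
The scores above are read from the linked "full result" JSON files, so they can also be fetched programmatically. A minimal sketch using only the standard library (the internal key layout of each JSON file is not documented here, so treat the parsing as an assumption and inspect the files first):

```python
import json
import urllib.request

# Base URL of the raw result files linked above.
BASE = ("https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d"
        "-nce-classification-conceptnet-validated/raw/main/")

for filename in ["analogy.json", "classification.json", "relation_mapping.json"]:
    with urllib.request.urlopen(BASE + filename) as f:
        result = json.load(f)
    # Key layout differs per file; print first to see what is available.
    print(filename, result)
```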
|
|
|
|
|
### Usage |
|
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the pair, shape (1024,)
```
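
The same call accepts a batch of word pairs, which is essentially how the analogy accuracies above are obtained: embed the query pair and all candidate pairs, then pick the candidate whose relation embedding has the highest cosine similarity to the query. A sketch of that scoring step (the batched list-of-pairs input follows the relbert README, and the example pairs are made up for illustration; verify against your installed version):

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification-conceptnet-validated")

# Embed a query pair and several candidate pairs in one batch.
query = ['word', 'language']
candidates = [['paint', 'portrait'], ['poetry', 'rhythm'], ['note', 'music'], ['tale', 'story']]
vectors = np.array(model.get_embedding([query] + candidates))

# Cosine similarity between the query relation vector and each candidate.
v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
similarity = v[1:] @ v[0]
print(candidates[int(similarity.argmax())])  # candidate with the most similar relation
```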
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training: |
|
- model: roberta-large
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 30
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8
|
|
|
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification-conceptnet-validated/raw/main/trainer_config.json). |
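
To make the `mask` prompting mode above concrete: the manual template is filled with a word pair while the `<mask>` slot is left in place for the masked language model, and relbert then pools the relation embedding from the encoded prompt. A minimal sketch of the prompt construction only (the pooling happens inside the library and is not reproduced here):

```python
# Illustration only: how the manual template from the hyperparameter list
# is instantiated for a word pair. Pooling over the encoded prompt is
# handled inside relbert and is not shown.
TEMPLATE = ("I wasn’t aware of this relationship, but I just read in the "
            "encyclopedia that <subj> is the <mask> of <obj>")

def fill_template(head: str, tail: str) -> str:
    # <mask> is left for the masked LM; only the word pair is substituted.
    return TEMPLATE.replace("<subj>", head).replace("<obj>", tail)

print(fill_template("Tokyo", "Japan"))
# -> "... that Tokyo is the <mask> of Japan"
```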
|
|
|
### Reference |
|
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.emnlp-main.712/).
|
|
|
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and
      Schockaert, Steven and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
}
```
|
|