World of Central Banks Model
Model Name: WCB Stance Detection Model
Model Type: Text Classification
Language: English
License: CC-BY-NC-SA 4.0
Base Model: FacebookAI/roberta-base
Dataset Used for Training: gtfintechlab/all_annotated_sentences_25000
Model Overview
WCB Stance Detection Model is a fine-tuned RoBERTa-based model designed to classify text by Stance. This label is annotated in the model_WCB_stance_label dataset, which focuses on the meeting minutes of all 25 central banks listed in the paper Words That Unite The World: A Unified Framework for Deciphering Global Central Bank Communications.
Intended Use
This model is intended for researchers and practitioners working on subjective text classification, particularly within financial and economic contexts. It is specifically designed to assess the Stance label, aiding in the analysis of subjective content in central bank communications.
How to Use
To utilize this model, load it using the Hugging Face transformers library:
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification, AutoConfig
# Load tokenizer, model, and configuration
tokenizer = AutoTokenizer.from_pretrained("gtfintechlab/model_WCB_stance_label", do_lower_case=True, do_basic_tokenize=True)
model = AutoModelForSequenceClassification.from_pretrained("gtfintechlab/model_WCB_stance_label", num_labels=4)
config = AutoConfig.from_pretrained("gtfintechlab/model_WCB_stance_label")
# Initialize text classification pipeline
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer, config=config, framework="pt")
# Classify Stance
sentences = [
"[Sentence 1]",
"[Sentence 2]"
]
results = classifier(sentences, batch_size=128, truncation="only_first")
print(results)
In this script:
- Tokenizer and Model Loading: Loads the pre-trained tokenizer and model from gtfintechlab/model_WCB_stance_label.
- Configuration: Loads model configuration parameters, including the number of labels.
- Pipeline Initialization: Initializes a text classification pipeline with the model, tokenizer, and configuration.
- Classification: Labels sentences based on Stance.
Ensure your environment has the necessary dependencies installed.
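As a quick sanity check that the environment is ready, the snippet below simply verifies that the required libraries import; it is illustrative only, and no specific versions are mandated here.
import torch
import transformers
# Quick check that the required libraries are importable; the pipeline
# above needs transformers with a PyTorch backend.
print("transformers:", transformers.__version__)
print("torch:", torch.__version__)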
Label Interpretation
- LABEL_0: Hawkish; the sentence supports contractionary monetary policy.
- LABEL_1: Dovish; the sentence supports expansionary monetary policy.
- LABEL_2: Neutral; the sentence contains neither hawkish nor dovish sentiment, or contains both hawkish and dovish sentiment.
- LABEL_3: Irrelevant; the sentence is not related to monetary policy.
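If human-readable stances are preferred over the raw LABEL_* IDs, the mapping above can be applied in post-processing. The sketch below assumes the standard transformers text-classification output (a list of dictionaries with "label" and "score" keys); the helper name to_stance_names is illustrative.
# Minimal sketch: convert the pipeline's LABEL_* outputs into stance names.
STANCE_LABELS = {
    "LABEL_0": "Hawkish",
    "LABEL_1": "Dovish",
    "LABEL_2": "Neutral",
    "LABEL_3": "Irrelevant",
}

def to_stance_names(results):
    """Replace raw label IDs with human-readable stance names."""
    return [
        {"stance": STANCE_LABELS.get(r["label"], r["label"]), "score": r["score"]}
        for r in results
    ]

# Example with the `results` list produced by the classifier above:
# stances = to_stance_names(results)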
Training Data
The model was trained on the model_WCB_stance_label dataset, comprising annotated sentences from the meeting minutes of the 25 central banks listed in the paper Words That Unite The World: A Unified Framework for Deciphering Global Central Bank Communications, labeled by Stance. The dataset includes training, validation, and test splits.
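For inspection or reproduction, the annotated sentences can typically be pulled with the datasets library. This is a minimal sketch assuming the dataset identifier from the header above loads with its default configuration; the exact configuration and split names should be checked against the dataset card.
from datasets import load_dataset
# Minimal sketch: load the annotated central bank sentences
# (assumes the default configuration; configuration and split names
# may differ, so consult the dataset card).
dataset = load_dataset("gtfintechlab/all_annotated_sentences_25000")
print(dataset)  # lists the available splits and columns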
Citation
If you use this model in your research, please cite the associated paper:
@article{WCBShahSukhaniPardawala,
  title={Words That Unite The World: A Unified Framework for Deciphering Global Central Bank Communications},
  author={Agam Shah and Siddhant Sukhani and Huzaifa Pardawala and others},
  year={2025}
}
For more details, refer to the model_WCB_stance_label dataset documentation.
Contact
For any issues or questions related to model_WCB_stance_label, please contact:
Huzaifa Pardawala: huzaifahp7[at]gatech[dot]edu
Siddhant Sukhani: ssukhani3[at]gatech[dot]edu
Agam Shah: ashah482[at]gatech[dot]edu