
POLITICS

POLITICS is a language model pretrained on English political news articles, produced by continued training of RoBERTa. Its name reflects the training approach: a Pretraining Objective Leveraging Inter-article Triplet-loss using Ideological Content and Story.

ALERT: POLITICS is a pretrained language model that specializes in comprehending news articles and understanding ideological content. However, it cannot be used out of the box for downstream tasks such as predicting ideological leaning or detecting the stance expressed in a text. To make such predictions, you should first fine-tune POLITICS on your own labeled dataset.
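Below is a minimal fine-tuning sketch using Hugging Face Transformers. It assumes the checkpoint id `launch/POLITICS` refers to this repository and uses placeholder texts and labels; substitute your own downstream data (e.g., ideology or stance labels) and training configuration.

```python
# Minimal sketch: fine-tune POLITICS for sequence classification.
# Assumptions: checkpoint id "launch/POLITICS" points to this model card,
# and train_texts / train_labels stand in for your own labeled dataset.
import torch
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model_name = "launch/POLITICS"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=3  # e.g., left / center / right; adjust to your task
)

# Placeholder data; replace with your own articles and labels.
train_texts = ["Example political news article ..."]
train_labels = [0]

encodings = tokenizer(train_texts, truncation=True, padding=True, return_tensors="pt")

class NewsDataset(torch.utils.data.Dataset):
    """Wraps tokenized texts and integer labels for the Trainer."""
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item
    def __len__(self):
        return len(self.labels)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="politics-finetuned", num_train_epochs=3),
    train_dataset=NewsDataset(encodings, train_labels),
)
trainer.train()
```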

Details of our proposed training objectives (i.e., ideology-driven pretraining objectives) and the experimental results for POLITICS can be found in our NAACL 2022 Findings paper and GitHub repo.

Together with POLITICS, we also release our curated large-scale pretraining dataset (i.e., BigNews), consisting of more than 3.6M political news articles. This asset can be requested here.

Citation

Please cite our paper if you use the POLITICS model:

@inproceedings{liu-etal-2022-POLITICS,
    title = "POLITICS: Pretraining with Same-story Article Comparison for Ideology Prediction and Stance Detection",
    author = "Liu, Yujian and
      Zhang, Xinliang Frederick and
      Wegsman, David and
      Beauchamp, Nicholas and
      Wang, Lu",
    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
    year = "2022",
}