---
language: en
license: apache-2.0
tags:
  - summarization
  - pegasus
  - tensorflow
  - text-generation
datasets:
  - cnn_dailymail
widget:
  - text: >-
      The US has passed the threshold of 800,000 deaths from Covid-19, the Johns
      Hopkins University tracker shows. The milestone comes as the country
      grapples with the highly transmissible Omicron variant, which has become
      the dominant strain in the US. President Joe Biden was expected to address
      the nation's loss on Tuesday. The US has the highest recorded death toll
      from Covid-19 in the world, followed by Russia and Brazil.
    example_title: News Article
model-index:
  - name: pegasus-cnn-dailymail-tf
    results:
      - task:
          type: summarization
          name: Summarization
        dataset:
          name: CNN/DailyMail
          type: cnn_dailymail
        metrics:
          - type: rouge
            value: 21.86
            name: ROUGE-1
          - type: rouge
            value: 8.9
            name: ROUGE-2
          - type: rouge
            value: 16.85
            name: ROUGE-L
---

Pegasus CNN/DailyMail (TensorFlow)

This is a TensorFlow version of the google/pegasus-cnn_dailymail model, converted from PyTorch weights.

Model Description

PEGASUS is a Transformer encoder-decoder pre-trained for abstractive summarization with a gap-sentence generation objective, in which important sentences are masked out of the input document and the model learns to generate them. This checkpoint was then fine-tuned on the CNN/DailyMail dataset for news summarization.

Key Features:

  • 🔄 Converted from PyTorch to TensorFlow for better TF.js and TensorFlow ecosystem compatibility (see the SavedModel export sketch after this list)
  • 📰 Specialized for news article summarization
  • 🎯 Fine-tuned on CNN/DailyMail dataset
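
If you want the checkpoint in the plain TensorFlow ecosystem (TF Serving, the tensorflowjs converter, etc.), the underlying Keras model can be exported as a SavedModel. A minimal sketch, assuming the same placeholder repo id used in the Usage section below; the output directory name is arbitrary:

from transformers import TFAutoModelForSeq2SeqLM

# "your-username/pegasus-cnn-dailymail-tf" is a placeholder, not a real repo id.
model = TFAutoModelForSeq2SeqLM.from_pretrained("your-username/pegasus-cnn-dailymail-tf")

# saved_model=True additionally writes a TensorFlow SavedModel alongside the
# regular weights, which can then be served with TF Serving or converted further.
model.save_pretrained("pegasus-cnn-dailymail-savedmodel", saved_model=True)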

Usage

from transformers import TFAutoModelForSeq2SeqLM, AutoTokenizer

# Load model and tokenizer
model = TFAutoModelForSeq2SeqLM.from_pretrained("your-username/pegasus-cnn-dailymail-tf")
tokenizer = AutoTokenizer.from_pretrained("your-username/pegasus-cnn-dailymail-tf")

# Tokenize the article (truncated to the model's 1024-token input limit)
article = "Your news article text here..."
inputs = tokenizer(article, max_length=1024, return_tensors="tf", truncation=True)

# Generate a summary with beam search
summary_ids = model.generate(
    inputs["input_ids"],
    max_length=150,
    min_length=30,
    length_penalty=2.0,
    num_beams=4,
    early_stopping=True,
)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
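
For quick experiments the checkpoint can also be used through the high-level pipeline API. A short sketch, again with the placeholder repo id; framework="tf" selects the TensorFlow weights:

from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="your-username/pegasus-cnn-dailymail-tf",  # placeholder repo id
    framework="tf",
)

article = "Your news article text here..."
result = summarizer(article, max_length=150, min_length=30, num_beams=4)
print(result[0]["summary_text"])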

Model Details

  • Model Type: Sequence-to-sequence (Text Summarization)
  • Language: English
  • License: Apache 2.0
  • Framework: TensorFlow
  • Base Model: google/pegasus-cnn_dailymail

Training Data

The underlying PEGASUS checkpoint was fine-tuned on the CNN/DailyMail dataset, which pairs news articles with human-written highlight summaries.
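
The dataset itself is available through the 🤗 Datasets library; a small loading sketch (the "3.0.0" configuration is the standard non-anonymized version):

from datasets import load_dataset

# Non-anonymized CNN/DailyMail; splits are "train", "validation" and "test".
dataset = load_dataset("cnn_dailymail", "3.0.0")

example = dataset["test"][0]
print(example["article"][:300])  # source news article
print(example["highlights"])     # reference summary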

Performance

This TensorFlow model should perform identically to the original PyTorch version, as it was converted directly from the same weights.
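
As a sanity check, ROUGE can be recomputed on a few test articles with the 🤗 Evaluate library. A hedged sketch; the slice size and generation settings below are arbitrary choices for illustration, not the setup behind the scores in the model index:

import evaluate
from datasets import load_dataset
from transformers import TFAutoModelForSeq2SeqLM, AutoTokenizer

model_id = "your-username/pegasus-cnn-dailymail-tf"  # placeholder repo id
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

rouge = evaluate.load("rouge")
test_set = load_dataset("cnn_dailymail", "3.0.0", split="test[:8]")

predictions = []
for article in test_set["article"]:
    inputs = tokenizer(article, max_length=1024, truncation=True, return_tensors="tf")
    ids = model.generate(inputs["input_ids"], num_beams=4, max_length=150, min_length=30)
    predictions.append(tokenizer.decode(ids[0], skip_special_tokens=True))

print(rouge.compute(predictions=predictions, references=test_set["highlights"]))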

Citation

@misc{zhang2019pegasus,
      title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization}, 
      author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
      year={2019},
      eprint={1912.08777},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Conversion Notes

This model was converted from PyTorch to TensorFlow by loading the original checkpoint with the from_pt=True argument in the Transformers library; the weights are carried over unchanged, so behaviour should match the PyTorch original.
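
A minimal sketch of that conversion step; the local directory name is arbitrary, and the commented push_to_hub calls require an authenticated Hugging Face account:

from transformers import TFAutoModelForSeq2SeqLM, AutoTokenizer

# Load the original PyTorch checkpoint and convert the weights to TensorFlow on the fly.
tf_model = TFAutoModelForSeq2SeqLM.from_pretrained("google/pegasus-cnn_dailymail", from_pt=True)
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-cnn_dailymail")

# Save the TensorFlow weights (and tokenizer) locally...
tf_model.save_pretrained("pegasus-cnn-dailymail-tf")
tokenizer.save_pretrained("pegasus-cnn-dailymail-tf")

# ...or push them directly to the Hub.
# tf_model.push_to_hub("pegasus-cnn-dailymail-tf")
# tokenizer.push_to_hub("pegasus-cnn-dailymail-tf")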