---
language: en
license: apache-2.0
tags:
- summarization
- pegasus
- tensorflow
- text-generation
datasets:
- cnn_dailymail
widget:
- text: "The US has passed the threshold of 800,000 deaths from Covid-19, the Johns Hopkins University tracker shows. The milestone comes as the country grapples with the highly transmissible Omicron variant, which has become the dominant strain in the US. President Joe Biden was expected to address the nation's loss on Tuesday. The US has the highest recorded death toll from Covid-19 in the world, followed by Russia and Brazil."
  example_title: "News Article"
model-index:
- name: pegasus-cnn-dailymail-tf
  results:
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: CNN/DailyMail
      type: cnn_dailymail
    metrics:
    - type: rouge
      value: 21.86
      name: ROUGE-1
    - type: rouge
      value: 8.9
      name: ROUGE-2
    - type: rouge
      value: 16.85
      name: ROUGE-L
---

# Pegasus CNN/DailyMail (TensorFlow)

This is a TensorFlow version of the [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) model, converted from PyTorch weights.

## Model Description

PEGASUS is a pre-training approach for abstractive text summarization. This model was fine-tuned on the CNN/DailyMail dataset for news summarization tasks.

**Key Features:**
- 🔄 Converted from PyTorch to TensorFlow for compatibility with the TensorFlow ecosystem and TF.js
- 📰 Specialized for news article summarization
- 🎯 Fine-tuned on CNN/DailyMail dataset

## Usage

```python
from transformers import TFAutoModelForSeq2SeqLM, AutoTokenizer

# Load model and tokenizer
model = TFAutoModelForSeq2SeqLM.from_pretrained("your-username/pegasus-cnn-dailymail-tf")
tokenizer = AutoTokenizer.from_pretrained("your-username/pegasus-cnn-dailymail-tf")

# Example usage
article = "Your news article text here..."
inputs = tokenizer(article, max_length=1024, return_tensors="tf", truncation=True)
summary_ids = model.generate(
    inputs["input_ids"],
    max_length=150,
    min_length=30,
    length_penalty=2.0,
    num_beams=4,
    early_stopping=True,
)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```

## Model Details

- **Model Type:** Sequence-to-sequence (Text Summarization)
- **Language:** English
- **License:** Apache 2.0
- **Framework:** TensorFlow
- **Base Model:** [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail)

## Training Data

This model was originally trained on the CNN/DailyMail dataset, which contains news articles paired with human-written summaries.

## Performance

This TensorFlow model should match the original PyTorch version, since it was converted directly from the same weights; any differences should be limited to framework-level floating-point variation.
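If you want to verify weight parity yourself, one way is to compare logits between the original PyTorch checkpoint and its TensorFlow conversion on the same input. This is a minimal sketch (it downloads the full `google/pegasus-cnn_dailymail` checkpoint, and requires both `torch` and `tensorflow` to be installed):

```python
import numpy as np
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, TFAutoModelForSeq2SeqLM

repo = "google/pegasus-cnn_dailymail"  # original PyTorch checkpoint
tokenizer = AutoTokenizer.from_pretrained(repo)
pt_model = AutoModelForSeq2SeqLM.from_pretrained(repo)
tf_model = TFAutoModelForSeq2SeqLM.from_pretrained(repo, from_pt=True)

text = "The quick brown fox jumps over the lazy dog."
pt_inputs = tokenizer(text, return_tensors="pt")
tf_inputs = tokenizer(text, return_tensors="tf")

# Run one forward pass in each framework with identical decoder inputs
with torch.no_grad():
    pt_logits = pt_model(
        **pt_inputs, decoder_input_ids=pt_inputs["input_ids"]
    ).logits.numpy()
tf_logits = tf_model(
    **tf_inputs, decoder_input_ids=tf_inputs["input_ids"]
).logits.numpy()

# The maximum elementwise difference should be tiny (floating-point noise)
print(np.max(np.abs(pt_logits - tf_logits)))
```

A small maximum difference (on the order of 1e-4 or less) indicates the weights survived the conversion intact.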

## Citation

```bibtex
@misc{zhang2019pegasus,
      title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization}, 
      author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
      year={2019},
      eprint={1912.08777},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Conversion Notes

This model was converted from PyTorch to TensorFlow by passing `from_pt=True` to the Transformers library's `from_pretrained` method, which loads the PyTorch weights directly into the TensorFlow model architecture.
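The conversion itself can be reproduced in a few lines. This is a sketch of the standard procedure; the output directory name is illustrative:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# Load the PyTorch checkpoint and convert it to TensorFlow in one step
tf_model = TFAutoModelForSeq2SeqLM.from_pretrained(
    "google/pegasus-cnn_dailymail", from_pt=True
)
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-cnn_dailymail")

# Save the converted weights (and tokenizer) for uploading to the Hub
tf_model.save_pretrained("pegasus-cnn-dailymail-tf")
tokenizer.save_pretrained("pegasus-cnn-dailymail-tf")
```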