# Automatic News Summarizer (Fine-tuned on Llama 3.2 3B)

This model is a fine-tuned version of Meta's Llama 3.2 3B, optimized for automatic news summarization. It is designed to generate concise and coherent summaries of news articles.
## Model Details

- Base model: Meta Llama 3.2 3B
- Fine-tuned for: News summarization
- Architecture: Decoder-only Transformer (Llama 3.2 3B)
- License: Llama 3.2 Community License
## Training

- Fine-tuned on a curated dataset of news articles paired with reference summaries
- Optimized for relevance, coherence, and brevity
- Training framework: Hugging Face Transformers
- Fine-tuning platform: Google Colab
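The card does not document the exact prompt template used during fine-tuning. As an illustration only, a causal-LM summarization setup typically concatenates each article and its reference summary into a single training string; the `format_example` function and the instruction wording below are hypothetical, not the model's confirmed template:

```python
# Hypothetical sketch of how article/summary pairs might be formatted into
# training text for causal-LM fine-tuning. The actual template used for
# this model is not documented in the card.

def format_example(article: str, summary: str, eos_token: str = "</s>") -> str:
    """Join an article and its reference summary into one training string.

    The model learns to continue the "Summary:" prompt with the summary,
    so the same prompt prefix can be reused at inference time.
    """
    return (
        "Summarize the following news article.\n\n"
        f"Article: {article}\n\n"
        f"Summary: {summary}{eos_token}"
    )


pair = format_example(
    "The government announced a new policy to cut air pollution.",
    "Government unveils anti-pollution policy.",
)
print(pair)
```

Appending the tokenizer's end-of-sequence token teaches the model where a summary should stop, which keeps generations short at inference time.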
## Usage Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "punit16/automatic_news_summarizer"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

input_text = "The government has announced a new policy today aimed at reducing air pollution in major cities..."
inputs = tokenizer(input_text, return_tensors="pt")

summary_ids = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens; a causal LM's output
# includes the prompt, which we slice off here.
summary = tokenizer.decode(
    summary_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print("Summary:", summary)
```