πŸ“° Automatic News Summarizer (Fine-tuned on Llama 3.2 3B)

This model is a fine-tuned version of Meta's Llama 3.2 3B, optimized for automatic news summarization. It is designed to generate concise, coherent summaries of news articles.

πŸ“Œ Model Details

πŸ“š Training

  • Fine-tuned on a dataset of curated news articles and summaries
  • Optimized for relevance, coherence, and brevity
  • Training framework: Hugging Face Transformers
  • Fine-tuning platform: Google Colab

πŸš€ Usage Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "punit16/automatic_news_summarizer"

# Load the fine-tuned model and its tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

input_text = "The government has announced a new policy today aimed at reducing air pollution in major cities..."

# Tokenize the article and generate the summary continuation
inputs = tokenizer(input_text, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=512)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)

print("Summary:", summary)
```
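
Note that for causal language models, `generate` returns the prompt tokens followed by the generated continuation, so decoding the full sequence includes the original article. A common pattern is to slice off the prompt tokens before decoding; the sketch below illustrates this with toy token IDs (the IDs are placeholders, not real tokenizer output):

```python
# Causal LMs echo the prompt before the generated text. A common
# pattern is to drop the prompt tokens so only the new tokens remain.
def strip_prompt(generated_ids: list[int], prompt_len: int) -> list[int]:
    """Keep only the tokens generated after the prompt."""
    return generated_ids[prompt_len:]

# Toy token IDs standing in for tokenizer output:
prompt_ids = [101, 2054, 2003]          # pretend-encoded input article
full_output = prompt_ids + [999, 888]   # model repeats prompt, then continues
print(strip_prompt(full_output, len(prompt_ids)))  # β†’ [999, 888]
```

In the usage example above, the equivalent would be `summary_ids[0][inputs["input_ids"].shape[1]:]` before calling `tokenizer.decode`.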