
Chinese to English Translation (Quantized Model)

This repository contains a quantized Chinese-to-English translation model fine-tuned on the wlhb/Transaltion-Chinese-2-English dataset and optimized with dynamic quantization for efficient CPU inference.

πŸ”§ Model Details

  • Base model: Helsinki-NLP/opus-mt-en-zh
  • Dataset: ['wlhb/Transaltion-Chinese-2-English']
  • Training platform: Kaggle (CUDA GPU)
  • Fine-tuned: On English-Chinese pairs from the Hugging Face dataset
  • Quantization: PyTorch Dynamic Quantization (torch.quantization.quantize_dynamic)
  • Tokenizer: Saved alongside the model

πŸ“ Folder Structure

quantized_model/
β”œβ”€β”€ config.json
β”œβ”€β”€ pytorch_model.bin
β”œβ”€β”€ tokenizer_config.json
β”œβ”€β”€ tokenizer.json
└── vocab.json / merges.txt


πŸš€ Usage

πŸ”Ή 1. Load Quantized Model for Inference

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("./quantized_model")

# Load quantized model
model = AutoModelForSeq2SeqLM.from_pretrained("./quantized_model")
model.eval()

# Build a translation pipeline (device=-1 runs on CPU, where the quantized model is intended to run)
translator = pipeline("translation_zh_to_en", model=model, tokenizer=tokenizer, device=-1)

text = "δ½ ε₯½ε—"
print("Chinese:", translator(text)[0]['translation_text'])

πŸ“Š Model Training Summary

  • Loaded dataset: wlhb/Transaltion-Chinese-2-English

  • Mapped translation data: {"zh": ..., "en": ...} before training

  • Training: 3 epochs using GPU

  • Disabled: wandb logging

  • Skipped: Evaluation phase

  • Saved: Trained + Quantized model and tokenizer

  • Quantization: torch.quantization.Quantize_dynamic is used for efficient CPU inference
