## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("zeusfsx/title-instruction")
model = AutoModelForSequenceClassification.from_pretrained("zeusfsx/title-instruction")

# Build a text-classification pipeline.
# device="mps" targets Apple Silicon GPUs; use device=0 for CUDA or
# device="cpu" if no accelerator is available.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer, device="mps")
```
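Since the snippet above hard-codes `device="mps"`, a small sketch of portable device selection may help if you are not on a Mac. This assumes PyTorch is installed (it is a dependency of the model classes used above); the chosen string can be passed directly as the `device` argument of `pipeline`.

```python
import torch

# Pick the best available device: Apple's Metal backend ("mps"),
# then CUDA, then plain CPU as a fallback.
if torch.backends.mps.is_available():
    device = "mps"
elif torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"

print(device)
```

Passing the resulting `device` string keeps the same pipeline code working across macOS, Linux with NVIDIA GPUs, and CPU-only machines.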