ONNX format of voxreality/t5_nlu_intent_recognition model
Model inference example:
```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSeq2SeqLM

# Load the ONNX model and its tokenizer from the Hugging Face Hub
model_path = "voxreality/t5_nlu_intent_recognition_onnx"
model = ORTModelForSeq2SeqLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Tokenize the input utterance
input_text = "Where is the conference room?"
input_tokenized = tokenizer.encode(input_text, return_tensors='pt')

# Generate the NLU output and decode it back into a string
output = model.generate(input_tokenized, max_new_tokens=100).tolist()
nlu_output_str = tokenizer.decode(output[0], skip_special_tokens=True)
print(nlu_output_str)
```
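The model returns its NLU result as plain generated text, so downstream code typically needs to parse `nlu_output_str` into structured fields. The exact output format depends on how the model was trained and is not documented here; as a purely illustrative sketch, the helper below assumes a hypothetical comma-separated `key: value` format (the example string and field names are invented, not the model's actual output):

```python
def parse_nlu_output(nlu_output_str: str) -> dict:
    """Parse a comma-separated 'key: value' NLU string into a dict.

    This assumes a hypothetical output format; adapt the parsing to the
    actual strings produced by the model.
    """
    result = {}
    for part in nlu_output_str.split(","):
        if ":" in part:
            key, value = part.split(":", 1)  # split only on the first colon
            result[key.strip()] = value.strip()
    return result


# Hypothetical example string for illustration only
print(parse_nlu_output("intent: ask_direction, object: conference room"))
```

Splitting on the first colon only (`split(":", 1)`) keeps values containing colons intact.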