---
language:
- zh
- en
library_name: transformers
pipeline_tag: text2text-generation
---

Extracts sentiment quadruples, triples, pairs, and similar tuples from Chinese text.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("yuyijiong/mt0-xl-bf16-sentiment-quadruple")
model = AutoModelForSeq2SeqLM.from_pretrained(
    "yuyijiong/mt0-xl-bf16-sentiment-quadruple",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Prompt (in Chinese): "Sentiment quadruple (target, opinion, aspect, polarity)
# extraction task (fill in null): 【review text】"
text = '情感四元组(对象,观点,方面,极性)抽取任务 (补全null): 【已经开了4袋,普遍出现米沾在包装上了,看起来放了很久的样子】'

# Tokenize and move the input to the same device as the model
input_ids = tokenizer(text, return_tensors="pt", padding=True)['input_ids'].to(model.device)

with torch.no_grad():
    with torch.autocast('cuda'):
        output = model.generate(input_ids=input_ids)

output_str = tokenizer.batch_decode(output, skip_special_tokens=True)
print(output_str)
```
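
The same extraction can also be run through the high-level `pipeline` API. The sketch below is not from the original model card; it is a minimal example assuming standard `transformers` pipeline usage, reusing the quadruple-extraction prompt format shown above.

```python
import torch
from transformers import pipeline

# Minimal sketch: load the model via the text2text-generation pipeline
# (device_map and torch_dtype mirror the from_pretrained call above).
pipe = pipeline(
    "text2text-generation",
    model="yuyijiong/mt0-xl-bf16-sentiment-quadruple",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Same quadruple-extraction prompt format as in the example above
prompt = '情感四元组(对象,观点,方面,极性)抽取任务 (补全null): 【已经开了4袋,普遍出现米沾在包装上了,看起来放了很久的样子】'

result = pipe(prompt)
print(result[0]["generated_text"])
```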