---
license: mit
---
# EviOmni-nq_train-qwen2.5-7B

## Introduction

EviOmni is a rational evidence extraction model. Compared to vanilla evidence extraction models, EviOmni demonstrates clear advantages in performance, generalization, efficiency, and robustness.

## Requirements

The code for EviOmni is supported in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With `transformers<4.37.0`, you will encounter the following error:

`KeyError: 'qwen2'`

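Before loading the model, you can verify that your environment meets this requirement. The check below is a minimal sketch based on the version bound above; it is not part of the official setup instructions.

```python
# Minimal sketch (assumption): confirm transformers is new enough for Qwen2 checkpoints.
import transformers
from packaging import version

if version.parse(transformers.__version__) < version.parse("4.37.0"):
    # transformers < 4.37.0 predates Qwen2 support and raises KeyError: 'qwen2'.
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old; "
        "please upgrade, e.g. `pip install -U transformers`."
    )
```
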
## Quickstart

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import StoppingCriteria, StoppingCriteriaList
import re


# Stop generation once the closing "</extract>\n\n" marker has been produced.
class MultiTokenStoppingCriteria(StoppingCriteria):
    def __init__(self, stop_ids, device):
        self.stop_ids = stop_ids
        self.stop_len = len(stop_ids)

    def __call__(self, input_ids, scores, **kwargs):
        if len(input_ids[0]) >= self.stop_len:
            last_tokens = input_ids[0][-self.stop_len:].tolist()
            return last_tokens == self.stop_ids
        return False


model_name = "HIT-TMG/EviOmni-nq_train-qwen2.5-7B"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The prompt template provides {question} and {passages} placeholders.
prompt = open("eviomni_prompt", "r").read()
question = "..."
passages = "..."
instruction = prompt.format(question=question, passages=passages)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": instruction}
]

# Token ids of the closing marker, used to stop decoding early.
stop_token = "</extract>\n\n"
stop_ids = tokenizer.encode(stop_token, add_special_tokens=False)

stopping_criteria = StoppingCriteriaList([
    MultiTokenStoppingCriteria(stop_ids, model.device)
])

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
    stopping_criteria=stopping_criteria
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()

# The extracted evidence is wrapped in <extract> ... </extract> tags.
match = re.search(r"<extract>(.*?)</extract>", response, re.DOTALL)
evidence = match.group(1).strip() if match else response
```
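The custom stopping criterion above halts decoding as soon as the closing `</extract>` marker appears, so the model does not keep generating past the evidence span. In a retrieval-augmented generation pipeline, the extracted `evidence` would then typically stand in for the raw passages when prompting a downstream reader. The sketch below illustrates that hand-off; the reader checkpoint (`Qwen/Qwen2.5-7B-Instruct`) and the answer prompt are illustrative assumptions, not part of the official EviOmni pipeline.

```python
# Illustrative sketch (assumption, not from the original README): feed the
# extracted evidence to a downstream reader model instead of the full passages.
reader_name = "Qwen/Qwen2.5-7B-Instruct"  # hypothetical choice of reader
reader = AutoModelForCausalLM.from_pretrained(reader_name, torch_dtype="auto", device_map="auto")
reader_tokenizer = AutoTokenizer.from_pretrained(reader_name)

answer_messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": f"Answer the question based on the evidence.\n\n"
                                f"Evidence: {evidence}\n\nQuestion: {question}"},
]
answer_text = reader_tokenizer.apply_chat_template(
    answer_messages, tokenize=False, add_generation_prompt=True
)
answer_inputs = reader_tokenizer([answer_text], return_tensors="pt").to(reader.device)
answer_ids = reader.generate(**answer_inputs, max_new_tokens=256)
# Decode only the newly generated tokens, mirroring the quickstart above.
answer = reader_tokenizer.batch_decode(
    [answer_ids[0][answer_inputs.input_ids.shape[1]:]], skip_special_tokens=True
)[0].strip()
print(answer)
```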
## Performance

Main results:

![main](./img/main_results.jpg)

## Citation

If you find our work helpful, please consider citing our paper:

```bibtex
@misc{EviOmni,
      title={Learning to Extract Rational Evidence via Reinforcement Learning for Retrieval-Augmented Generation},
      author={Xinping Zhao and Shouzheng Huang and Yan Zhong and Xinshuo Hu and Meishan Zhang and Baotian Hu and Min Zhang},
      year={2025},
      eprint={2507.15586},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.15586},
}
```