wenbopan committed
Commit f9b05c0
1 Parent(s): ebbfccf

change name

Files changed (1):
  1. README.md +10 -7
README.md CHANGED
@@ -6,24 +6,27 @@ datasets:
 language:
 - zh
 - en
+pipeline_tag: text-generation
 ---
 ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/s21sMRxRT56c5t4M15GBP.webp)
 
 **The Faro chat model focuses on practicality and long-context modeling. It handles various downstream tasks with higher quality, delivering stable and reliable results even when inputs contain lengthy documents or complex instructions. Faro works seamlessly in both English and Chinese.**
 
-# Faro-Qwen-1.8B-100K
-Faro-Qwen-1.8B-100K is an improved [Qwen/Qwen1.5-1.8B-Chat](https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1). Compared to Qwen1.5-1.8B-Chat, Faro-Qwen-1.8B-100K has gained greater capability in various downstream tasks and long-context modeling thanks to the large-scale synthetic data in Fusang-V1. Faro-Qwen-1.8B-100K uses dynamic NTK and continual training to extend its max context length to 100K.
+# Faro-Qwen-1.8B
+Faro-Qwen-1.8B is an improved [Qwen/Qwen1.5-4B-Chat](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1). Compared to Qwen1.5-4B-Chat, Faro-Qwen-1.8B has gained greater capability in various downstream tasks and long-context modeling thanks to the large-scale synthetic data in Fusang-V1.
+
+Faro-Qwen-1.8B uses dynamic NTK and continual training to extend its max context length to 100K. However, due to the lack of dynamic NTK support for `Qwen2ForCausalLM` in vLLM, inference on text longer than 32K requires using the native `transformers` implementation.
 
 ## How to Use
 
-Faro-Qwen-1.8B-100K uses the ChatML template. I recommend using vLLM for long inputs.
+Faro-Qwen-1.8B uses the ChatML template. I recommend using vLLM for long inputs.
 
 ```python
 import io
 import requests
 from PyPDF2 import PdfReader
 from vllm import LLM, SamplingParams
-llm = LLM(model="wenbopan/Faro-Qwen-1.8B-100K")
+llm = LLM(model="wenbopan/Faro-Qwen-1.8B")
 pdf_data = io.BytesIO(requests.get("https://arxiv.org/pdf/2303.08774.pdf").content)
 document = "".join(page.extract_text() for page in PdfReader(pdf_data).pages) # 100 pages
 question = f"{document}\n\nAccording to the paper, what is the parameter count of GPT-4?"
@@ -37,8 +40,8 @@ print(output[0].outputs[0].text)
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
-model = AutoModelForCausalLM.from_pretrained('wenbopan/Faro-Qwen-1.8B-100K', device_map="cuda")
-tokenizer = AutoTokenizer.from_pretrained('wenbopan/Faro-Qwen-1.8B-100K')
+model = AutoModelForCausalLM.from_pretrained('wenbopan/Faro-Qwen-1.8B', device_map="cuda")
+tokenizer = AutoTokenizer.from_pretrained('wenbopan/Faro-Qwen-1.8B')
 messages = [
     {"role": "system", "content": "You are a helpful assistant. Always answer with a short response."},
     {"role": "user", "content": "Tell me what is Pythagorean theorem like you are a pirate."}
@@ -50,4 +53,4 @@ response = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
 
 </details>
 
-For more info please refer to [wenbopan/Faro-Yi-9B-200K](https://huggingface.co/wenbopan/Faro-Yi-9B-200K)
+For more info, please refer to [wenbopan/Faro-Yi-9B](https://huggingface.co/wenbopan/Faro-Yi-9B)
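The ChatML template mentioned in the card wraps each turn in `<|im_start|>` / `<|im_end|>` markers. As a rough illustration of the format (in practice `tokenizer.apply_chat_template` builds this string for you; the helper name below is made up for the sketch):

```python
# Minimal sketch of the ChatML prompt format used by Qwen-style chat models.
# In real use, tokenizer.apply_chat_template renders this for you.

def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts as a ChatML prompt string."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    if add_generation_prompt:
        # The model continues generating from this open assistant turn.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
print(to_chatml(messages))
```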
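For readers curious what the "dynamic NTK" mentioned above does: it rescales RoPE's base frequency once the input grows past the trained window, so longer positions are interpolated smoothly instead of going out of range. A minimal sketch of the rescaling rule, following the dynamic `rope_scaling` formula used in Hugging Face `transformers` (the specific constants this model was trained with are an assumption here):

```python
def dynamic_ntk_base(base: float, dim: int, seq_len: int, max_pos: int, factor: float) -> float:
    """Rescaled RoPE base once seq_len exceeds the trained max_pos.

    base: original rotary base (e.g. 10000.0), dim: rotary head dimension,
    factor: configured scaling factor. All values here are illustrative.
    """
    if seq_len <= max_pos:
        return base  # within the trained window: no rescaling
    scale = (factor * seq_len / max_pos) - (factor - 1)
    return base * scale ** (dim / (dim - 2))

# Within the trained window the base is untouched; beyond it, it grows,
# which stretches the rotary wavelengths to cover the longer sequence.
print(dynamic_ntk_base(10000.0, 128, 16_384, 32_768, 4.0))  # 10000.0
print(dynamic_ntk_base(10000.0, 128, 65_536, 32_768, 4.0))
```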