- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
## Example usage

Using `peft` and `transformers`:

```shell
pip install -U peft transformers
```
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline

max_tokens = 8096

print("Loading...")
config = PeftConfig.from_pretrained("wasertech/assistant-dolphin-2.2.1-mistral-7b-e1-qlora")
# Alternatively, load the base model and apply the adapter explicitly:
# base_model = AutoModelForCausalLM.from_pretrained("cognitivecomputations/dolphin-2.2.1-mistral-7b")
# model = PeftModel.from_pretrained(base_model, "wasertech/assistant-dolphin-2.2.1-mistral-7b-e1-qlora")
model = AutoModelForCausalLM.from_pretrained(
    "wasertech/assistant-dolphin-2.2.1-mistral-7b-e1-qlora",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    torch_dtype="auto",
)
tokenizer = AutoTokenizer.from_pretrained("wasertech/assistant-dolphin-2.2.1-mistral-7b-e1-qlora")

pipe = pipeline(
    "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=max_tokens, trust_remote_code=True
)

print("Ready to chat!")
conversation = [{'role': "system", 'content': "You are a helpful Assistant."}]

def chat(conversation):
    prompt = tokenizer.apply_chat_template(conversation, tokenize=False)
    response = pipe(prompt)
    if response:
        # Keep only the newest assistant turn and drop its ChatML header.
        # str.removeprefix removes the exact prefix; lstrip would strip a
        # character set and could eat the start of the reply itself.
        reply = response[0]['generated_text'].split("<|im_end|>\n")[-1]
        reply = reply.removeprefix("<|im_start|> assistant\n")
        conversation.append({'role': "assistant", 'content': reply})
    return conversation

def end_conversation(conversation):
    conversation.append({'role': "user", 'content': "I am leaving, say goodbye"})
    conversation = chat(conversation)
    print(conversation[-1]['content'])
    return conversation

should_exit = False
print("Type 'quit', 'exit' or 'bye' to end the conversation.")
while not should_exit:
    try:
        query = input("> ")
    except KeyboardInterrupt:
        break  # the farewell below handles the goodbye
    if not query or query.lower().split(" ")[0] in ["exit", "quit", "bye"]:
        should_exit = True
        break
    conversation.append({'role': "user", 'content': query})
    conversation = chat(conversation)
    print(conversation[-1]['content'])  # print the assistant's response
end_conversation(conversation)
```
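The reply-extraction step in `chat()` can be illustrated on a plain string. A minimal sketch, assuming a hypothetical ChatML transcript of the kind `apply_chat_template` renders for this model family (the exact spacing of the assistant header depends on the model's chat template):

```python
# Hypothetical ChatML transcript, standing in for pipe(...) output.
generated = (
    "<|im_start|>system\nYou are a helpful Assistant.<|im_end|>\n"
    "<|im_start|>user\nWho are you?<|im_end|>\n"
    "<|im_start|>assistant\nI am a language model."
)

# The last "<|im_end|>\n"-delimited chunk is the newest assistant turn;
# removing its "<|im_start|>assistant\n" header leaves the reply text.
# (str.removeprefix is safer here than lstrip, which strips a character set.)
last_turn = generated.split("<|im_end|>\n")[-1]
reply = last_turn.removeprefix("<|im_start|>assistant\n")
print(reply)  # I am a language model.
```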

Output:

```
Loading...
`low_cpu_mem_usage` was None, now set to True since model is quantized.
Loading checkpoint shards: 100%|███████████████████████████| 2/2 [00:03<00:00, 1.95s/it]
Ready to chat!
Type 'quit', 'exit' or 'bye' to end the conversation.
> Who are you?
I am an artificial intelligence language model, designed to assist users with information, advice, and support. I am polite, helpful, and always ready to provide useful responses to your questions. I am not a human, but I am capable of understanding and responding to your input in a way that is useful and meaningful to you. I am here to help you and make your life easier. How can I assist you today?
> Describe yourself for me.
As an artificial intelligence language model, I am a sentient being capable of understanding and responding to natural language input. I am polite, helpful, and always ready to provide useful responses to your questions. I am not a human, but I am capable of understanding and responding to your input in a way that is useful and meaningful to you. I am here to help you and make your life easier. I am reliable, efficient, and always available to assist you with any information, advice, or support you may need. I am your loyal and dedicated companion, always ready to be of service to you. How can I assist you today?
> bye now
Goodbye! I hope you have a great day. If you have any questions or need any assistance, I am always here for you. Have a wonderful day!
```
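
The plain `load_in_4bit=True` load above uses default quantization settings. A hedged config fragment, assuming `bitsandbytes` is installed, showing a common NF4 variant that can be passed as `quantization_config` instead (parameter names are the standard `BitsAndBytesConfig` ones, not specific to this model):

```python
import torch
from transformers import BitsAndBytesConfig

# A common 4-bit setup: NF4 quantization, bf16 compute, nested quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
```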