mirnaaiman committed
Commit f39d8b6 · verified · 1 Parent(s): 2dcff66

Rename README.md to app.py

Files changed (2):
  1. README.md +0 -143
  2. app.py +197 -0
README.md DELETED
@@ -1,143 +0,0 @@
---
license: apache-2.0
tags:
- finetuned
base_model: mistralai/Mistral-7B-v0.1
pipeline_tag: text-generation
inference: true
widget:
- messages:
  - role: user
    content: What is your favorite condiment?

extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---

# Model Card for Mistral-7B-Instruct-v0.1

## Encode and Decode with `mistral_common`

```py
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

mistral_models_path = "MISTRAL_MODELS_PATH"

tokenizer = MistralTokenizer.v1()

completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])

tokens = tokenizer.encode_chat_completion(completion_request).tokens
```

## Inference with `mistral_inference`

```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

model = Transformer.from_folder(mistral_models_path)
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)

result = tokenizer.decode(out_tokens[0])

print(result)
```

## Inference with Hugging Face `transformers`

```py
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
model.to("cuda")

# wrap the mistral_common token ids in a batched tensor before generating
generated_ids = model.generate(torch.tensor([tokens]).to("cuda"), max_new_tokens=1000, do_sample=True)

# decode with mistral tokenizer
result = tokenizer.decode(generated_ids[0].tolist())
print(result)
```

> [!TIP]
> PRs to correct the `transformers` tokenizer so that it gives 1-to-1 the same results as the `mistral_common` reference implementation are very welcome!

---

The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model, using a variety of publicly available conversation datasets.

For full details of this model, please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).

## Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence token id; subsequent instructions should not. The assistant generation is ended by the end-of-sentence token id.

E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```

This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer

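These choices are visible in the checkpoint's configuration. As a minimal sketch (the field names below are the standard `MistralConfig` attributes in `transformers` and should be double-checked against the checkpoint's `config.json`):

```py
from transformers import AutoConfig

config = AutoConfig.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

print(config.num_attention_heads)   # number of query heads
print(config.num_key_value_heads)   # fewer key/value heads -> grouped-query attention
print(config.sliding_window)        # window size used by sliding-window attention
```
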
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
```

Installing `transformers` from source should solve the issue:

`pip install git+https://github.com/huggingface/transformers`

This should not be required after transformers-v4.33.4.

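A quick way to check whether your installed version is already recent enough (nothing beyond the `transformers` package itself is assumed here):

```py
import transformers

# Versions newer than v4.33.4 register the 'mistral' model type (see the note above).
print(transformers.__version__)
```
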
## Limitations

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
app.py ADDED
@@ -0,0 +1,197 @@
# app.py
import gradio as gr
from huggingface_hub import InferenceClient
import os

DEFAULT_MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.1"

HF_TOKEN = os.getenv("HF_API_TOKEN")  # Loads token if set as a secret

# --- Initialize Inference Client ---
client = None

def get_inference_client(model_name):
    global client
    # Initialize client if it hasn't been, or if model name changes
    if client is None or client.model != model_name:
        try:
            # InferenceClient will use HF_TOKEN if it's not None,
            # or try to infer token otherwise (e.g. from CLI login if running locally).
            # If no token is found and the model requires one, the API call will fail.
            client = InferenceClient(model=model_name, token=HF_TOKEN if HF_TOKEN else None)
            print(f"InferenceClient initialized for {model_name}. Token {'provided' if HF_TOKEN else 'not explicitly provided'}.")
        except Exception as e:
            print(f"Failed to initialize InferenceClient for {model_name}: {e}")
            return None
    return client

# --- Evaluation Logic ---
def evaluate_understanding(prompt, response):
    """
    Analyzes the model's response to give a basic evaluation of understanding.
    This is a simple heuristic and not a comprehensive NLU assessment.
    """
    if not response or response.strip() == "":
        return "❌ Not Understood (Empty or whitespace response)"

    response_lower = response.lower()  # For case-insensitive checks

    misunderstanding_keywords = [
        "i'm sorry", "i apologize", "i cannot", "i am unable", "unable to",
        "i don't understand", "could you please rephrase", "i'm not sure i follow",
        "that's not clear", "i do not have enough information", "as an ai language model, i don't",
        "i'm not programmed to", "i lack the ability to"
    ]

    for keyword in misunderstanding_keywords:
        if keyword in response_lower:
            return f"⚠️ Potentially Not Understood (Contains: '{keyword}')"

    if len(prompt.split()) > 7 and len(response.split()) < 10:
        return "⚠️ Potentially Not Understood (Response seems too short for the prompt)"

    if prompt.lower() in response_lower and len(response_lower) < len(prompt.lower()) * 1.5:
        if len(prompt.split()) > 5:
            return "⚠️ Potentially Not Understood (Response might be echoing the prompt)"

    return "✔️ Likely Understood"

# --- Core Logic: Query Model and Evaluate ---
def query_model_and_evaluate(user_prompt, model_name_to_use):
    """
    Sends the prompt to the model, gets the response, and evaluates it.
    """
    if not user_prompt or user_prompt.strip() == "":
        return "Please enter a prompt.", "Evaluation N/A", model_name_to_use

    # Note: The explicit block for Llama models without HF_TOKEN has been removed.
    # The InferenceClient will attempt the call. If the model is gated and requires
    # a token or terms acceptance, the API call itself will likely fail.
    print(f"Querying model: {model_name_to_use}. HF_TOKEN {'is set' if HF_TOKEN else 'is NOT set/empty'}.")

    current_client = get_inference_client(model_name_to_use)
    if current_client is None:
        error_msg = f"Error: Could not initialize the model API client for {model_name_to_use}. Check logs. This might be due to the model requiring authentication (like a token or accepting terms on Hugging Face) which was not available or successful."
        return error_msg, "Evaluation N/A", model_name_to_use

    try:
        if "mistral" in model_name_to_use.lower() and "instruct" in model_name_to_use.lower():
            formatted_prompt = f"<s>[INST] {user_prompt.strip()} [/INST]"
        elif "llama-2" in model_name_to_use.lower() and "chat" in model_name_to_use.lower():
            formatted_prompt = (
                f"[INST] <<SYS>>\nYou are a helpful assistant. Your goal is to understand the user's prompt and respond accurately and relevantly.\n"
                f"<</SYS>>\n\n{user_prompt.strip()} [/INST]"
            )
        else:
            formatted_prompt = user_prompt.strip()

        params = {
            "max_new_tokens": 300,
            "temperature": 0.6,
            "top_p": 0.9,
            "repetition_penalty": 1.1,
            "do_sample": True,
            "return_full_text": False
        }

        model_response_text = current_client.text_generation(formatted_prompt, **params)

        if not model_response_text:
            model_response_text = ""

    except Exception as e:
        error_message = f"Error calling model API for {model_name_to_use}: {str(e)}. This can happen if the model is gated, requires a Hugging Face token, or if you need to accept its terms of use on the Hugging Face website."
        print(error_message)
        return error_message, "Evaluation N/A", model_name_to_use

    understanding_evaluation = evaluate_understanding(user_prompt, model_response_text)

    return model_response_text, understanding_evaluation, model_name_to_use

# --- Gradio Interface Definition ---
with gr.Blocks(theme=gr.themes.Soft(primary_hue="blue", secondary_hue="orange")) as demo:
    gr.Markdown(
        f"""
        # 🎯 Model Prompt Understanding Test
        Enter a prompt for the selected language model. The application will send this to the model via Hugging Face's Inference API.
        The model's response will be analyzed to provide a **basic heuristic assessment** of its understanding.

        **Selected Model:** <span id='current-model-display'>{DEFAULT_MODEL_NAME}</span>
        """
    )

    current_model_name_state = gr.State(DEFAULT_MODEL_NAME)

    with gr.Row():
        user_input_prompt = gr.Textbox(
            label="✏️ Enter your Prompt:",
            placeholder="e.g., Explain the concept of zero-shot learning in 3 sentences.",
            lines=4,
            scale=3
        )

    submit_button = gr.Button("🚀 Submit Prompt and Evaluate", variant="primary")

    gr.Markdown("---")
    gr.Markdown("### 🤖 Model Response & Evaluation")

    with gr.Row():
        with gr.Column(scale=2):
            model_output_response = gr.Textbox(
                label="📝 Model's Response:",
                lines=10,
                interactive=False,
                show_copy_button=True
            )
        with gr.Column(scale=1):
            evaluation_output = gr.Textbox(
                label="🧐 Understanding Evaluation:",
                lines=2,
                interactive=False,
                show_copy_button=True
            )
            displayed_model = gr.Textbox(
                label="⚙️ Model Used for this Response:",
                interactive=False,
                lines=1
            )

    submit_button.click(
        fn=query_model_and_evaluate,
        inputs=[user_input_prompt, current_model_name_state],
        outputs=[model_output_response, evaluation_output, displayed_model]
    )

    gr.Markdown(
        """
        ---
        **Disclaimer:**
        * The 'Understanding Evaluation' is a very basic automated heuristic.
        * **Using Models:** This app will attempt to connect to the selected model. Some models (especially gated ones like Llama-2) may require you to have a Hugging Face account, accept their terms of use on the Hugging Face website, and might implicitly require a valid `HF_TOKEN` associated with your account (even if not explicitly set as a secret in this Space). If a model call fails, it could be due to these reasons.
        * Response quality depends heavily on the chosen model and the clarity of your prompt.
        """
    )

    gr.Examples(
        examples=[
            ["Explain the difference between supervised and unsupervised machine learning.", DEFAULT_MODEL_NAME],
            ["Write a short poem about a curious robot.", DEFAULT_MODEL_NAME],
            ["What are the main challenges in developing AGI?", DEFAULT_MODEL_NAME],
            ["Summarize the plot of 'War and Peace' in one paragraph.", DEFAULT_MODEL_NAME],
            ["asdfjkl; qwerpoiu", DEFAULT_MODEL_NAME]
        ],
        inputs=[user_input_prompt, current_model_name_state],
        outputs=[model_output_response, evaluation_output, displayed_model],
        fn=query_model_and_evaluate,
        cache_examples=False,
        label="💡 Example Prompts (click to try)"
    )

if __name__ == "__main__":
    print("Attempting to launch Gradio demo...")
    print(f"Default model: {DEFAULT_MODEL_NAME}")
    if HF_TOKEN:
        print("HF_TOKEN is set.")
    else:
        print("HF_TOKEN is NOT set. Some models (especially gated ones like Llama) might require a token or prior agreement to terms on the Hugging Face website to function correctly. The app will attempt to run, but API calls may fail.")
    demo.launch()
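
For reference, the serverless Inference API call that this app wraps can also be exercised on its own. A minimal sketch (it reuses the `HF_API_TOKEN` environment variable and the `[INST]` wrapping from the app above; hosted availability of the model is not guaranteed):

```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(model="mistralai/Mistral-7B-Instruct-v0.1", token=os.getenv("HF_API_TOKEN"))

# Same instruction wrapping the app applies for Mistral instruct models.
prompt = "<s>[INST] Explain zero-shot learning in one sentence. [/INST]"
print(client.text_generation(prompt, max_new_tokens=100, temperature=0.6, return_full_text=False))
```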