Supa-AI committed
Commit 71aea68 · verified · 1 Parent(s): dd4cef8

Update README.md

Files changed (1):
  1. README.md +5 -6
README.md CHANGED
@@ -213,14 +213,13 @@ This document summarizes the evaluation results for various language models based
 
 ---
 
-## Notes
-
-In the repository, there is an `eval.py` script that can be used to run the evaluation for any other LLM.
-
-The evaluation results are based on the specific dataset and methodology employed.
-- The "First Token Accuracy" metric emphasizes the accuracy of predicting the initial token correctly.
+## Notes on eval.py
+`eval.py` is a template for evaluating large language models (LLMs); update the script to integrate your _API calls_ or local model logic.
+- The "First Token Accuracy" metric highlights initial token prediction accuracy.
+- The evaluation results are based on the specific dataset and methodology employed.
 - Further analysis might be needed to determine the models' suitability for specific tasks.
 
+
 ### Attribution for Evaluation Code
 The `eval.py` script is based on work from the MMLU-Pro repository:
 - Repository: [TIGER-AI-Lab/MMLU-Pro](https://github.com/TIGER-AI-Lab/MMLU-Pro)
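The updated note frames `eval.py` as a template where you supply your own model call. As a hedged sketch of that pattern (the actual `eval.py` interface is not shown in this diff; `call_model`, `first_token_accuracy`, and the record format below are hypothetical illustrations, and first-token matching is one plausible reading of the metric), the hook might look like:

```python
def call_model(prompt: str) -> str:
    """Hypothetical hook: replace this stub with your own API call or
    local model logic (e.g. an HTTP request to a hosted endpoint, or a
    local generate() call)."""
    return "A"  # placeholder completion for the demo below


def first_token_accuracy(examples: list[dict]) -> float:
    """Fraction of examples whose predicted first token matches the
    reference answer's first token."""
    hits = 0
    for ex in examples:
        prediction = call_model(ex["prompt"]).strip()
        # Compare only the first whitespace-delimited token of each side.
        if prediction.split()[:1] == ex["answer"].strip().split()[:1]:
            hits += 1
    return hits / len(examples) if examples else 0.0


if __name__ == "__main__":
    demo = [{"prompt": "Q: 2+2=? Choices: A) 4 B) 5. Answer:", "answer": "A"}]
    print(f"First Token Accuracy: {first_token_accuracy(demo):.2%}")
```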