jerryzh168 committed
Commit be21cdb · verified · 1 Parent(s): d682ce6

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -65,17 +65,17 @@ print(f"{save_to} model:", benchmark_fn(quantized_model.generate, **inputs, max_
 # Model Quality
 We rely on [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate the quality of the quantized model.
 
-# Installing the nightly version to get most recent updates
+## Installing the nightly version to get most recent updates
 ```
 pip install git+https://github.com/EleutherAI/lm-evaluation-harness
 ```
 
-# baseline
+## baseline
 ```
 lm_eval --model hf --model_args pretrained=microsoft/Phi-4-mini-instruct --tasks hellaswag --device cuda:0 --batch_size 8
 ```
 
-# float8dq
+## float8dq
 ```
 lm_eval --model hf --model_args pretrained=jerryzh168/phi4-mini-float8dq --tasks hellaswag --device cuda:0 --batch_size 8
 ```
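
Once both `lm_eval` runs finish, their scores can be compared with a short script. A minimal sketch, assuming the `"results" -> "hellaswag" -> "acc,none"` key layout that recent lm-evaluation-harness releases write to their results JSON; the numbers below are placeholders, not real measurements of the models above:

```python
def hellaswag_acc(results: dict) -> float:
    """Pull the hellaswag accuracy out of a parsed lm-eval results dict.

    Assumes the "results" -> task -> "acc,none" layout used by recent
    lm-evaluation-harness releases (older versions used a plain "acc" key).
    """
    task = results["results"]["hellaswag"]
    return task["acc,none"] if "acc,none" in task else task["acc"]

# Illustrative numbers only, not measurements from the models above.
baseline = {"results": {"hellaswag": {"acc,none": 0.54}}}
float8dq = {"results": {"hellaswag": {"acc,none": 0.53}}}
print(f"delta vs baseline: {hellaswag_acc(float8dq) - hellaswag_acc(baseline):+.4f}")
```

A small accuracy delta between the baseline and the float8dq checkpoint indicates the quantization preserved model quality on this task.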