## Llama-3-70B-Instruct-FP8-v1
* Weights and activations are per-tensor quantized to float8_e4m3.
* Quantized with AutoFP8 (see the sketch below).
* Calibration dataset: Ultrachat (mgoin/ultrachat_2k)
* Samples: 1024
* Sequence length: 4096
## Evaluation
TBA