TriLMs are released unpacked to FP16, compatible with any implementation supporting the LLaMA architecture in Hugging Face's transformers format.
Spectra Suite of Low Bitwidth Models
Spectra Suite
We release the Spectra Suite consisting of 54 models ranging from 99M to 3.9B parameters across different bitwidths:
- FloatLM: LLMs pretrained in FP16 (half precision).
- TriLM: LLMs pretrained with an effective ternary bitwidth.
- QuantLM 8-bit: FloatLMs quantized to 8 bits.
- QuantLM 6-bit: FloatLMs quantized to 6 bits.
- QuantLM 4-bit: FloatLMs quantized to 4 bits.
- QuantLM 3-bit: FloatLMs quantized to 3 bits.
All models are released in unpacked FP16 format, compatible with FP16 GEMMs in any library supporting the LLaMA architecture.
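Because the checkpoints are plain FP16 weights in transformers format, they should load with the standard Auto classes. Below is a minimal sketch, assuming the standard transformers API and using one of the repository names from the model listing further down; swap in any other Spectra Suite model as needed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository name taken from the model listing below (any Spectra Suite repo should work).
repo = "SpectraSuite/QuantLM_99M_4bit_Unpacked"

tokenizer = AutoTokenizer.from_pretrained(repo)
# Weights are stored unpacked in FP16, so load directly in half precision.
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16)

inputs = tokenizer("The Spectra Suite is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```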
Citation
If you find these models or the associated paper useful, please cite the paper:
@misc{kaushal2024spectracomprehensivestudyternary,
title={Spectra: A Comprehensive Study of Ternary, Quantized, and FP16 Language Models},
author={Ayush Kaushal and Tejas Pandey and Tejas Vaidhya and Aaryan Bhagat and Irina Rish},
year={2024},
eprint={2407.12327},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2407.12327},
}
Models
The organization hosts 54 model repositories, including:
- SpectraSuite/QuantLM_99M_3bit_Unpacked
- SpectraSuite/QuantLM_99M_4bit_Unpacked
- SpectraSuite/QuantLM_99M_6bit_Unpacked
- SpectraSuite/QuantLM_99M_8bit_Unpacked
- SpectraSuite/QuantLM_190M_3bit_Unpacked
- SpectraSuite/QuantLM_190M_4bit_Unpacked
- SpectraSuite/QuantLM_190M_8bit_Unpacked
- SpectraSuite/QuantLM_190M_6bit_Unpacked
- SpectraSuite/QuantLM_390M_3bit_Unpacked
- SpectraSuite/QuantLM_390M_4bit_Unpacked