LLMs quantized with GPTQ
Irina Proskurina (iproskurina)
AI & ML interests: LLMs: quantization, pre-training
Recent Activity
- Liked a model (meta-llama/Meta-Llama-3-8B) about 21 hours ago
- New activity about 1 month ago: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ: weights not used when initializing MistralForCausalLM
- Updated a model (iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g3) about 1 month ago
Collections: 4 • Models: 43
iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g3 • Text Generation • Updated • 28
iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g2 • Text Generation • Updated • 7
iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g1 • Text Generation • Updated • 6
iproskurina/opt-125m-gptq2 • Text Generation • Updated • 9
iproskurina/distilbert-base-alternate-layers • Updated • 1
iproskurina/en_grammar_checker • Updated • 6 • 4
iproskurina/Mistral-7B-v0.3-gptq-3bit • Text Generation • Updated • 10
iproskurina/Mistral-7B-v0.3-GPTQ-8bit-g128 • Text Generation • Updated • 10
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g128 • Text Generation • Updated • 20
iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g64 • Text Generation • Updated • 7
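The suffixes in the model names above (e.g. `4bit-g128`, `8bit-g64`) encode the quantization bit width and the group size: weights are quantized in contiguous groups that each get their own scale and zero point. The sketch below is a minimal, illustrative round-trip of that groupwise asymmetric uniform quantization; it is not the GPTQ algorithm itself (which additionally uses Hessian-based error compensation when choosing the quantized values), and the function names and sizes are hypothetical.

```python
import numpy as np

def quantize_group(w, bits=4):
    # Asymmetric uniform quantization of one weight group:
    # map the range [w.min(), w.max()] onto the integer grid [0, 2**bits - 1].
    qmax = 2 ** bits - 1
    scale = (w.max() - w.min()) / qmax
    zero = np.round(-w.min() / scale)
    q = np.clip(np.round(w / scale + zero), 0, qmax)
    return q, scale, zero

def dequantize_group(q, scale, zero):
    # Reconstruct approximate float weights from integers + group metadata.
    return scale * (q - zero)

rng = np.random.default_rng(0)
row = rng.normal(size=256).astype(np.float32)  # one weight row (illustrative)
group_size = 128  # the "g128" in the model names

# Quantize and dequantize each group independently.
recon = np.concatenate([
    dequantize_group(*quantize_group(row[i:i + group_size]))
    for i in range(0, len(row), group_size)
])
err = np.abs(row - recon).max()  # worst-case reconstruction error
```

Smaller group sizes (g64 vs. g128) mean more scale/zero pairs and therefore lower reconstruction error, at the cost of extra metadata storage, which is the trade-off the `g1`/`g2`/`g3` and `g64`/`g128` variants above explore.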