
Trendyol LLM 7b base v0.1

Description

This repo contains GGUF-format model files for Trendyol's Trendyol LLM 7b base v0.1.

Quantization methods

| Quantization method | Bits | Size | Use case | Recommended |
|---|---|---|---|---|
| Q2_K | 2 | 2.59 GB | smallest, significant quality loss - not recommended for most purposes | ❌ |
| Q3_K_S | 3 | 3.01 GB | very small, high quality loss | ❌ |
| Q3_K_M | 3 | 3.36 GB | very small, high quality loss | ❌ |
| Q3_K_L | 3 | 3.66 GB | small, substantial quality loss | ❌ |
| Q4_0 | 4 | 3.9 GB | legacy; small, very high quality loss - prefer Q3_K_M | ❌ |
| Q4_K_M | 4 | 4.15 GB | medium, balanced quality - recommended | ✅ |
| Q5_0 | 5 | 4.73 GB | legacy; medium, balanced quality - prefer Q4_K_M | ❌ |
| Q5_K_S | 5 | 4.73 GB | large, low quality loss - recommended | ✅ |
| Q5_K_M | 5 | 4.86 GB | large, very low quality loss - recommended | ✅ |
| Q6_K | 6 | 5.61 GB | very large, extremely low quality loss | ❌ |
| Q8_0 | 8 | 13.7 GB | very large, extremely low quality loss - not recommended | ❌ |
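As a rough sanity check on the sizes above, a quant's file size is bounded below by parameter count times bits per weight; K-quant files run somewhat larger because of per-block scales and metadata. A minimal sketch (the 6.84B parameter count comes from this card; treating quantization width as a flat bits-per-weight figure is a simplifying assumption):

```python
# Rough lower-bound estimate of a GGUF quant's file size from the
# parameter count and quantization width. Real K-quant files are
# somewhat larger due to per-block scale factors and metadata.

PARAMS = 6.84e9  # parameter count, from this model card

def approx_size_gb(bits_per_weight: float) -> float:
    """Approximate file size: params * bits / 8 bytes, in decimal GB."""
    return PARAMS * bits_per_weight / 8 / 1e9

print(f"4-bit lower bound: {approx_size_gb(4):.2f} GB")  # vs. 3.9-4.15 GB in the table
print(f"5-bit lower bound: {approx_size_gb(5):.2f} GB")  # vs. 4.73-4.86 GB in the table
```

The gap between these lower bounds and the table entries is the overhead each quantization scheme adds on top of the raw weights.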
Model details

Format: GGUF
Model size: 6.84B params
Architecture: llama


Model repository: sayhan/Trendyol-LLM-7b-base-v0.1-GGUF