---
license: mit
library_name: transformers
pipeline_tag: text-generation
datasets:
- yulan-team/YuLan-Mini-Datasets
- HuggingFaceFW/fineweb-edu
- bigcode/the-stack-v2
- mlfoundations/dclm-baseline-1.0
- math-ai/AutoMathText
- gair-prox/open-web-math-pro
- RUC-AIBOX/long_form_thought_data_5k
- internlm/Lean-Workbook
- internlm/Lean-Github
- deepseek-ai/DeepSeek-Prover-V1
- ScalableMath/Lean-STaR-base
- ScalableMath/Lean-STaR-plus
- ScalableMath/Lean-CoT-base
- ScalableMath/Lean-CoT-plus
- opencsg/chinese-fineweb-edu
- liwu/MNBVC
- vikp/textbook_quality_programming
- HuggingFaceTB/smollm-corpus
- OpenCoder-LLM/opc-annealing-corpus
- OpenCoder-LLM/opc-sft-stage1
- OpenCoder-LLM/opc-sft-stage2
- XinyaoHu/AMPS_mathematica
- deepmind/math_dataset
- mrfakename/basic-math-10m
- microsoft/orca-math-word-problems-200k
- AI-MO/NuminaMath-CoT
- HuggingFaceTB/cosmopedia
- MU-NLPC/Calc-ape210k
- manu/project_gutenberg
- storytracer/LoC-PD-Books
- allenai/dolma
language:
- en
- zh
tags:
- code
- math
- TensorBlock
- GGUF
arxiv: 2412.17743
base_model: yulan-team/YuLan-Mini
model-index:
- name: YuLan-Mini
  results:
  - task:
      type: text-generation
    dataset:
      name: HumanEval
      type: openai_humaneval
    metrics:
    - type: pass@1
      value: 0.64
      name: pass@1
      verified: false
  - task:
      type: text-generation
    dataset:
      name: MBPP
      type: mbpp
    metrics:
    - type: pass@1
      value: 0.659
      name: pass@1
      verified: false
  - task:
      type: text-generation
    dataset:
      name: MATH-500
      type: math-500
    metrics:
    - type: maj@1
      value: 0.378
      name: maj@1
      verified: false
  - task:
      type: text-generation
    dataset:
      name: GSM8K
      type: gsm8k
    metrics:
    - type: maj@1
      value: 0.684
      name: maj@1
      verified: false
---

Feedback and support: TensorBlock's Twitter/X, Telegram Group and Discord server
yulan-team/YuLan-Mini - GGUF
This repo contains GGUF format model files for yulan-team/YuLan-Mini.
The files were quantized using machines provided by TensorBlock, and they are compatible with llama.cpp as of commit b4823.
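As a quick smoke test, the sketch below runs one of the quantized files directly with llama.cpp's `llama-cli` binary. The specific quant file, prompt, and sampling settings are illustrative choices rather than recommendations from this card, and they assume llama.cpp (at commit b4823 or later) is built and on your PATH.

```bash
# Minimal sketch: run a downloaded quant with llama.cpp's llama-cli.
# YuLan-Mini-Q4_K_M.gguf is one of the files listed below; any quant works the same way.
llama-cli -m ./YuLan-Mini-Q4_K_M.gguf \
  -p "Explain the Pythagorean theorem in one sentence." \
  -n 128 --temp 0.7
```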
Prompt template
<s>
<|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|>
<|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
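As a concrete illustration, substituting a hypothetical system prompt and user prompt into the template above yields a final prompt like this (straight placeholder substitution, nothing added):

```
<s>
<|start_header_id|>system<|end_header_id|>
You are a helpful assistant.<|eot_id|>
<|start_header_id|>user<|end_header_id|>
What is 2 + 2?<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
```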
Model file specification
Filename | Quant type | File Size | Description |
---|---|---|---|
YuLan-Mini-Q2_K.gguf | Q2_K | 1.468 GB | smallest, significant quality loss - not recommended for most purposes |
YuLan-Mini-Q3_K_S.gguf | Q3_K_S | 1.463 GB | very small, high quality loss |
YuLan-Mini-Q3_K_M.gguf | Q3_K_M | 1.560 GB | very small, high quality loss |
YuLan-Mini-Q3_K_L.gguf | Q3_K_L | 1.606 GB | small, substantial quality loss |
YuLan-Mini-Q4_0.gguf | Q4_0 | 1.463 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
YuLan-Mini-Q4_K_S.gguf | Q4_K_S | 1.746 GB | small, greater quality loss |
YuLan-Mini-Q4_K_M.gguf | Q4_K_M | 1.846 GB | medium, balanced quality - recommended |
YuLan-Mini-Q5_0.gguf | Q5_0 | 1.742 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
YuLan-Mini-Q5_K_S.gguf | Q5_K_S | 1.882 GB | large, low quality loss - recommended |
YuLan-Mini-Q5_K_M.gguf | Q5_K_M | 1.969 GB | large, very low quality loss - recommended |
YuLan-Mini-Q6_K.gguf | Q6_K | 2.580 GB | very large, extremely low quality loss |
YuLan-Mini-Q8_0.gguf | Q8_0 | 2.580 GB | very large, extremely low quality loss - not recommended |
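If you want to confirm which quantization a downloaded file actually uses, one option (not mentioned in the original card, so treat it as a suggestion) is the `gguf-dump` utility from the `gguf` Python package maintained alongside llama.cpp:

```bash
# Sketch: print GGUF metadata (including the file-type / quantization info) for a file.
# Assumes the Q4_K_M file from the table above has already been downloaded.
pip install gguf
gguf-dump ./YuLan-Mini-Q4_K_M.gguf | head -n 40
```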
Downloading instructions
Command line
First, install the Hugging Face CLI:
pip install -U "huggingface_hub[cli]"
Then, download an individual model file to a local directory:
huggingface-cli download tensorblock/YuLan-Mini-GGUF --include "YuLan-Mini-Q2_K.gguf" --local-dir MY_LOCAL_DIR
If you want to download multiple model files matching a pattern (e.g., *Q4_K*gguf), you can try:
huggingface-cli download tensorblock/YuLan-Mini-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
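Putting the two steps together, the sketch below downloads a single quant and serves it locally with llama.cpp's `llama-server`; the choice of Q4_K_M and the port number are illustrative assumptions, not part of the original instructions:

```bash
# Sketch: download one quant and expose it over llama.cpp's built-in HTTP server.
huggingface-cli download tensorblock/YuLan-Mini-GGUF --include "YuLan-Mini-Q4_K_M.gguf" --local-dir MY_LOCAL_DIR
llama-server -m MY_LOCAL_DIR/YuLan-Mini-Q4_K_M.gguf --port 8080
```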