# Qwen2.5-3B-Instruct-Ja-SFT - GGUF

Quantization made by Richard Erkhov.

- Model creator: https://huggingface.co/jaeyong2/
- Original model: https://huggingface.co/jaeyong2/Qwen2.5-3B-Instruct-Ja-SFT/
| Name | Quant method | Size |
|---|---|---|
Qwen2.5-3B-Instruct-Ja-SFT.Q2_K.gguf | Q2_K | 1.19GB |
Qwen2.5-3B-Instruct-Ja-SFT.Q3_K_S.gguf | Q3_K_S | 1.35GB |
Qwen2.5-3B-Instruct-Ja-SFT.Q3_K.gguf | Q3_K | 1.48GB |
Qwen2.5-3B-Instruct-Ja-SFT.Q3_K_M.gguf | Q3_K_M | 1.48GB |
Qwen2.5-3B-Instruct-Ja-SFT.Q3_K_L.gguf | Q3_K_L | 1.59GB |
Qwen2.5-3B-Instruct-Ja-SFT.IQ4_XS.gguf | IQ4_XS | 1.63GB |
Qwen2.5-3B-Instruct-Ja-SFT.Q4_0.gguf | Q4_0 | 1.7GB |
Qwen2.5-3B-Instruct-Ja-SFT.IQ4_NL.gguf | IQ4_NL | 1.71GB |
Qwen2.5-3B-Instruct-Ja-SFT.Q4_K_S.gguf | Q4_K_S | 1.71GB |
Qwen2.5-3B-Instruct-Ja-SFT.Q4_K.gguf | Q4_K | 1.8GB |
Qwen2.5-3B-Instruct-Ja-SFT.Q4_K_M.gguf | Q4_K_M | 1.8GB |
Qwen2.5-3B-Instruct-Ja-SFT.Q4_1.gguf | Q4_1 | 1.86GB |
Qwen2.5-3B-Instruct-Ja-SFT.Q5_0.gguf | Q5_0 | 2.02GB |
Qwen2.5-3B-Instruct-Ja-SFT.Q5_K_S.gguf | Q5_K_S | 2.02GB |
Qwen2.5-3B-Instruct-Ja-SFT.Q5_K.gguf | Q5_K | 2.07GB |
Qwen2.5-3B-Instruct-Ja-SFT.Q5_K_M.gguf | Q5_K_M | 2.07GB |
Qwen2.5-3B-Instruct-Ja-SFT.Q5_1.gguf | Q5_1 | 2.18GB |
Qwen2.5-3B-Instruct-Ja-SFT.Q6_K.gguf | Q6_K | 2.36GB |
Qwen2.5-3B-Instruct-Ja-SFT.Q8_0.gguf | Q8_0 | 3.06GB |
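As a rough sanity check on the sizes above, the effective bits per weight of each quant can be estimated as file size divided by parameter count. The ~3.09B parameter count below is an assumption (the approximate size of Qwen2.5-3B-Instruct), and the estimates come out slightly above the nominal bit width because some tensors (e.g. embeddings) are stored at higher precision:

```python
# Estimate effective bits per weight from GGUF file size.
# PARAMS is an assumed approximate parameter count for Qwen2.5-3B-Instruct.
PARAMS = 3.09e9

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    """File size (decimal GB) converted to bits, divided by parameter count."""
    return size_gb * 1e9 * 8 / params

# Sizes taken from the table above
for name, size_gb in [("Q2_K", 1.19), ("Q4_K_M", 1.8), ("Q8_0", 3.06)]:
    print(f"{name}: ~{bits_per_weight(size_gb):.2f} bits/weight")
```

For example, Q4_K_M works out to roughly 4.7 bits/weight under this assumption, consistent with its nominal 4-bit base.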
Original model description:

```yaml
base_model:
- Qwen/Qwen2.5-3B-Instruct
language:
- ja
- en
library_name: transformers
```
## Evaluation

llm-jp-eval script (Colab):

```shell
# Clone and install the llm-jp-eval harness
!git clone https://github.com/llm-jp/llm-jp-eval.git
!cd llm-jp-eval && pip install -e .

# Preprocess all benchmark datasets into ./dataset_dir
!cd llm-jp-eval && python scripts/preprocess_dataset.py --dataset-name all --output-dir ./dataset_dir

# Run the evaluation (model and tokenizer paths as given in the original card)
!cd llm-jp-eval && python scripts/evaluate_llm.py -cn config.yaml model.pretrained_model_name_or_path=jaeyong2/Qwen2.5-0.5B-Instruct-JaMagpie-Preview tokenizer.pretrained_model_name_or_path=jaeyong2/Qwen2.5-0.5B-Instruct-JaMagpie-Preview dataset_dir=./dataset_dir/1.4.1/evaluation/test
```
| llm-jp-eval | Qwen2.5-3B-Instruct | fine-tuned model |
|---|---|---|
AVG | 0.4921 | 0.4895 |
CG | 0.1000 | 0 |
EL | 0.4770 | 0.4431 |
FA | 0.1210 | 0.1246 |
HE | 0.5550 | 0.5650 |
MC | 0.7133 | 0.7900 |
MR | 0.5400 | 0.6100 |
MT | 0.6391 | 0.5982 |
NLI | 0.6640 | 0.6640 |
QA | 0.2638 | 0.3165 |
RC | 0.8481 | 0.7837 |
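The AVG rows appear to be the unweighted mean of the ten category scores; a quick check with the values copied from the table above:

```python
# Verify that AVG equals the unweighted mean of the ten llm-jp-eval categories
# (CG, EL, FA, HE, MC, MR, MT, NLI, QA, RC), values copied from the table.
base = [0.1000, 0.4770, 0.1210, 0.5550, 0.7133, 0.5400, 0.6391, 0.6640, 0.2638, 0.8481]
tuned = [0.0, 0.4431, 0.1246, 0.5650, 0.7900, 0.6100, 0.5982, 0.6640, 0.3165, 0.7837]

avg_base = sum(base) / len(base)
avg_tuned = sum(tuned) / len(tuned)
print(round(avg_base, 4), round(avg_tuned, 4))  # → 0.4921 0.4895
```

Both means match the reported AVG values to four decimal places.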
## Testing Data, Factors & Metrics

### Testing Data

[More Information Needed]

### Factors

[More Information Needed]

### Metrics

[More Information Needed]

## Results

[More Information Needed]

### Summary

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in Lacoste et al. (2019).

- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]