# fein-14B
Qwen 3-14B fine-tuned on the Smol-Talk Dolphin dataset with LoRA adapters, then merged into a single set of full-precision weights for easy inference.
| Model | Params | Base | Quant options | Checkpoints |
|---|---|---|---|---|
| fein-14b | 14.8 B | Qwen 3-14B (merged) | 4-bit / 8-bit | one folder, ready to load |
## Table of contents
- Quick start (inference)
- Installation
- Training
- Continuing training
- Merging the adapters
- Pushing to your HF space
- Repo layout
- Citation
- License
## Quick start (inference)
```bash
git clone https://huggingface.co/kieraisverybored/fein
cd fein

# Optional: create & activate conda env
conda create -n fein python=3.11 -y
conda activate fein
```
Install the requirements (see Installation below), then start chatting:
```bash
# 4-bit streaming chat
python infer.py --model .

# Short answers (128 tokens max)
python infer.py --model . --max-new 128
```
Sample session:

```text
User: Hi!
Assistant: Hello! How can I assist you today?
```
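If you'd rather drive the model from Python than through `infer.py`, the sketch below uses the standard `transformers` + `bitsandbytes` 4-bit path with streamed output. It is a minimal illustration of the same workflow, not the script itself; the weights path (`.`) and the generation settings are assumptions.

```python
# Minimal 4-bit chat sketch using the standard transformers + bitsandbytes path.
# Paths and generation settings here are assumptions, not infer.py's actual code.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TextStreamer,
)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(".")
model = AutoModelForCausalLM.from_pretrained(
    ".",  # merged weights in the repo root
    quantization_config=bnb_config,
    device_map="auto",
)

messages = [{"role": "user", "content": "Hi!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Stream tokens to stdout as they are generated.
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(inputs, max_new_tokens=128, streamer=streamer)
```

Setting `skip_prompt=True` on the streamer prints only the assistant's reply, matching the CLI session above.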
## Installation
```bash
# Core libs (quote the version specs so the shell doesn't treat ">" as a redirect)
pip install "torch>=2.2.0" "transformers>=4.40.0" accelerate bitsandbytes

# Optional quality-of-life
pip install tqdm rich
```
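Before loading a 14.8 B model it can save time to confirm the stack imports cleanly and sees your GPU. A quick, repo-agnostic sanity check:

```python
# Sanity check: confirm the core libraries import and CUDA is visible.
import torch
import transformers
import bitsandbytes

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("bitsandbytes:", bitsandbytes.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```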
**GPU:** a single 24 GB card is enough for 4-bit inference. **CPU:** possible with 8-bit and `device_map="cpu"`, but slow.
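Whether 8-bit actually works on CPU depends on your bitsandbytes build (CPU int8 support is recent and version-dependent). A safer fallback, sketched below under the assumption of roughly 30 GB of free RAM for the bf16 weights, is a plain unquantized CPU load:

```python
# Fallback: unquantized CPU load; assumes ~30 GB of free RAM
# (14.8 B parameters * 2 bytes each in bf16). Expect slow generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")
model = AutoModelForCausalLM.from_pretrained(
    ".",
    torch_dtype=torch.bfloat16,
    device_map="cpu",
)
```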
## Citation
```bibtex
@misc{fein2025,
  title        = {FEIN-14B: Smol-Talk fine-tune of Qwen 3-14B},
  author       = {KieraDev},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/kieraisverybored/fein}}
}
```
## License
The base model inherits the Apache License; the fine-tuned weights are released under the same terms. The dataset is MIT-licensed. See the respective license texts for full details.
Have fun experimenting, and please open an issue if you hit a snag!