fein-14B πŸš€

A LoRA fine-tune of Qwen 3-14B on the Smol-Talk Dolphin dataset,
with the adapters merged into a single set of full-precision weights for easy inference.

Model      Params   Base         Quant options   Checkpoints
fein-14b   14.8 B   – (merged)   4-bit / 8-bit   one folder, ready to load

Table of contents

  1. Quick start (inference)
  2. Installation
  3. Training
  4. Continuing training
  5. Merging the adapters
  6. Pushing to your HF space
  7. Repo layout
  8. Citation
  9. License

Quick start (inference)

git clone https://huggingface.co/kieraisverybored/fein
cd fein

# Optional: create & activate conda env
conda create -n fein python=3.11 -y
conda activate fein

Install the requirements (see Installation below), then run:

# 4-bit streaming chat
python infer.py --model .

# short answers (128 tokens max)
python infer.py --model . --max-new 128

Sample session

User: Hi!
Assistant: Hello! How can I assist you today? 😊

Installation

# Core libs
pip install "torch>=2.2.0" "transformers>=4.40.0" accelerate bitsandbytes
# Optional quality-of-life
pip install tqdm rich

GPU: a single 24 GB card is enough for 4-bit inference. CPU: possible with 8-bit and device_map="cpu", but very slow.
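The rule of thumb behind those hardware numbers can be sketched with a back-of-envelope memory estimate. The 1.2 overhead factor below is an assumption (quantization constants, KV cache, activations), not a measured value.

```python
def quant_memory_gb(params_b=14.8, bits=4, overhead=1.2):
    """Back-of-envelope weight-memory estimate: params * (bits / 8) bytes,
    times an assumed overhead factor for quantization constants and activations."""
    return params_b * (bits / 8) * overhead

print(round(quant_memory_gb(bits=4), 1))  # 4-bit: ~8.9 GB, fits a 24 GB card easily
print(round(quant_memory_gb(bits=8), 1))  # 8-bit: ~17.8 GB, still within 24 GB
```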


Citation

@misc{fein2025,
  title        = {FEIN-14B: Smol-Talk fine-tune of Qwen 3-14B},
  author       = {KieraDev},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/kieraisverybored/fein}}
}

License

The base model is released under the Apache License 2.0; the finetuned weights are released under the same terms. The dataset is MIT-licensed. See the respective licenses for full details.


Have fun experimentingβ€”and please open an issue if you hit a snag! πŸ™Œ
