🧠 Andy‑4-tiny 🐜


Andy‑4-tiny is a 360-million-parameter specialist model tuned for Minecraft gameplay via the Mindcraft framework. The current version of Andy-4-tiny is Andy-4-tiny-0522.

⚠️ Certification:
Andy‑4 is not yet certified by the Mindcraft developers. Use in production at your own discretion.

πŸ” Model Specifications


📊 Training Regimen

  1. Andy‑4‑base‑1 dataset

    • Epochs: 2
    • Learning Rate: 5e-5
    • Dataset Size: 47.4k
  2. Andy‑4‑base-2 dataset

    • Epochs: 2
    • Learning Rate: 7e-5
    • Dataset Size: 49.2k
  3. Fine‑tune (FT) dataset

    • Epochs: 2.5
    • Learning Rate: 2e-5
    • Dataset Size: 4.12k
  • Optimizer: AdamW_8bit with cosine decay
  • Quantization: 4‑bit (bnb-4bit) for inference
  • Warm Up Steps: 0.1% of each dataset
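
The warm-up schedule above can be sketched as follows. This is a minimal sketch assuming "0.1% of each dataset" means 0.1% of the optimizer steps in that stage; the batch size is a placeholder, since the card does not state it.

```python
# Sketch: warm-up steps per training stage. BATCH_SIZE is hypothetical.
BATCH_SIZE = 32

stages = {
    "Andy-4-base-1": {"examples": 47_400, "epochs": 2},
    "Andy-4-base-2": {"examples": 49_200, "epochs": 2},
    "FT":            {"examples": 4_120,  "epochs": 2.5},
}

def warmup_steps(examples: int, epochs: float, batch_size: int = BATCH_SIZE) -> int:
    # Total optimizer steps for the stage, then 0.1% of that (at least 1).
    total_steps = int(examples / batch_size * epochs)
    return max(1, round(total_steps * 0.001))

for name, s in stages.items():
    print(name, warmup_steps(s["examples"], s["epochs"]))
```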

🚀 Installation

Andy-4-tiny is an edge model, built to run on the CPU with minimal RAM. The requirements below are for running the model on its own, not for running it while Minecraft is also running.

| Quantization | Hardware | RAM Required |
|--------------|----------|--------------|
| F16          | CPU      | 2 GB         |
| Q8_0         | CPU      | 1 GB         |
| Q4_K_M       | CPU      | 0.8 GB       |

1. Installation directly on Ollama

  1. Visit Andy-4 on Ollama
  2. Copy the command after choosing model type / quantization
  3. Run the command in the terminal
  4. Set the profile's model to be what you installed, such as ollama/sweaterdog/andy-4:tiny-q8_0
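
Step 4 can be scripted if you prefer. The sketch below writes a minimal profile pointing at the installed model; the profile path and the exact schema are assumptions, and only the "model" value comes from the steps above.

```python
# Sketch: point a Mindcraft profile at the installed Ollama model.
# The path "profiles/andy.json" and the field layout are hypothetical.
import json
from pathlib import Path

profile = {
    "name": "andy",
    "model": "ollama/sweaterdog/andy-4:tiny-q8_0",
}

path = Path("profiles/andy.json")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(profile, indent=2))
print(path.read_text())
```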

2. Manual Download & Modelfile

  1. Download

    • From the HF Files tab, grab your chosen .GGUF quant weights (e.g. Andy-4-tiny.Q4_K_M.gguf).
    • Download the provided Modelfile.
  2. Edit

    Change

    ```
    FROM YOUR/PATH/HERE
    ```

    to

    ```
    FROM /path/to/Andy-4-tiny.Q4_K_M.gguf
    ```


Optional: increase the num_ctx parameter to a higher value for longer conversations if you:

A. Have extra VRAM

B. Quantized the context window

C. Can use a smaller model
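
For example, a Modelfile with a larger context window might look like the fragment below; the GGUF path and the 8192 value are placeholders, not recommendations from the card.

```
FROM /path/to/Andy-4-tiny.Q4_K_M.gguf
PARAMETER num_ctx 8192
```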

  3. Create

    ```
    ollama create andy-4-tiny -f Modelfile
    ```


This registers the Andy‑4-tiny model locally.


📌 Acknowledgments


βš–οΈ License

See Andy 1.0 License.

This work uses data and models created by @Sweaterdog.
