
INTRO

We are happy to announce our first coding model.

Model Card for GOAT-coder-llama3.1-8b

This is a fine-tuned Llama 3.1 model that performs well at coding tasks.

  • **Developed by:** kshabana4ai
  • **Funded by:** no one
  • **Shared by:** kshabana
  • **Model type:** safetensors and GGUF
  • **Language(s):** English
  • **License:** Apache 2.0
  • **Finetuned from model:** llama3.1-instruct

Installation

Download Ollama, then pull and run the quantized model:

ollama run hf.co/kshabana/GOAT-coder-llama3.1-8b:Q4_K_M
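Once the model has been pulled, it can also be queried programmatically. Below is a minimal sketch using the official `ollama` Python client (installed with `pip install ollama`); the prompt is only an illustration.

```python
# Minimal sketch: querying GOAT-coder through the `ollama` Python client
# after the `ollama run` command above has pulled the GGUF build.
import ollama

response = ollama.chat(
    model="hf.co/kshabana/GOAT-coder-llama3.1-8b:Q4_K_M",
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
)
print(response["message"]["content"])
```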

Model Sources

  • Base model: https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct
  • Dataset Repository: https://huggingface.co/datasets/Replete-AI/code_bagel

Uses

This model is fine-tuned specifically for coding and has a 131,072-token context length.
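For the safetensors weights, the model should load with the standard transformers API. The sketch below is an assumption about typical usage (the FP16 weights need roughly 16 GB of GPU memory), not an official recipe:

```python
# Hedged sketch: loading the FP16 safetensors weights with transformers
# and generating a code completion from a chat-style prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kshabana/GOAT-coder-llama3.1-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a binary search function in Python."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```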

Bias, Risks, and Limitations

It can sometimes produce wrong answers.

Recommendations

NOTE: You should have LM Studio or Ollama installed to use this model.

IN LM-STUDIO: It is recommended to use this model with the default LM Studio configuration.
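As an illustration of what that looks like in practice, the sketch below assumes LM Studio's local server is running on its default port (1234) and queries it through the OpenAI-compatible endpoint; the model identifier string is whatever LM Studio shows for the loaded model.

```python
# Hedged sketch: calling the model loaded in LM Studio through its
# OpenAI-compatible local server (default port 1234 is an assumption).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
completion = client.chat.completions.create(
    model="kshabana/GOAT-coder-llama3.1-8b",  # identifier as shown in LM Studio
    messages=[
        {"role": "user", "content": "Explain Python list comprehensions with an example."}
    ],
)
print(completion.choices[0].message.content)
```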

IN OLLAMA: We will be pushing the model to the Ollama library soon.

Training Details

The model was trained with Unsloth.

Training Data

Training dataset: https://huggingface.co/datasets/Replete-AI/code_bagel
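For reference, a fine-tuning setup with Unsloth and TRL on this dataset would look roughly like the sketch below. The hyperparameters, sequence length, and dataset formatting are illustrative assumptions; the card does not publish the exact training recipe.

```python
# Hedged sketch of an Unsloth + TRL SFT run on the code_bagel dataset.
# All hyperparameters and the dataset column names are assumptions.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Meta-Llama-3.1-8B-Instruct",  # base model from this card
    max_seq_length=4096,   # assumed training length
    load_in_4bit=True,     # QLoRA-style loading; an assumption
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("Replete-AI/code_bagel", split="train")

def to_text(example):
    # "input"/"output" column names are assumptions about the dataset schema.
    return {"text": f"{example['input']}\n{example['output']}"}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        output_dir="goat-coder-checkpoints",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
)
trainer.train()
```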

IMPORTANT LINKS

OLLAMA: https://ollama.com

LM-STUDIO: https://lmstudio.ai

Llama 3.1 Instruct: https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct

Dataset: https://huggingface.co/datasets/Replete-AI/code_bagel

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
