---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: EpistemeAI/Fireball-Mistral-Nemo-12B-cot-orcas
pipeline_tag: text-generation
model-index:
- name: Fireball-12B-v1.13a-philosophers
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 8.76
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/Fireball-12B-v1.13a-philosophers
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 30.34
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/Fireball-12B-v1.13a-philosophers
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 3.85
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/Fireball-12B-v1.13a-philosophers
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 6.82
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/Fireball-12B-v1.13a-philosophers
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 9.98
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/Fireball-12B-v1.13a-philosophers
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 26.3
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI/Fireball-12B-v1.13a-philosophers
      name: Open LLM Leaderboard
---
<img src="https://huggingface.co/EpistemeAI/Fireball-Mistral-Nemo-Base-2407-v1-DPO2/resolve/main/fireball.JPG" width="200"/>
# Fireball-12B-v1.13a Philosophers
This model is further fine-tuned on philosophy of science, math, and epistemology datasets, building on the first fine-tune to provide higher-quality responses than Llama-3.1-8B and Google Gemma 2 9B. It was additionally tuned on a variety of other datasets.
# Benchmark
Example benchmark results from Fireball-12B:
<img src="https://huggingface.co/EpistemeAI/Fireball-12B/resolve/main/benchmark2.jpg"/>
Benchmarks for v1.13a will be published later this quarter.
## Training Dataset
Fine-tuned on a variety of datasets, with an emphasis on philosophy of science, math, and epistemology.
# Model Card for Fireball-12B-v1.13a Philosophers
This heavily fine-tuned derivative of Mistral-Nemo-Base-2407, a pretrained generative text model with 12B parameters trained jointly by Mistral AI and NVIDIA, significantly outperforms existing models of smaller or similar size.
For more details about the base model, refer to the Mistral AI release [blog post](https://mistral.ai/news/mistral-nemo/).
## Key features
- Released under the **Apache 2 License**
- Pre-trained and instructed versions
- Trained with a **128k context window**
- Trained on a large proportion of **multilingual and code data**
- Drop-in replacement for Mistral 7B
## Model Architecture
Mistral Nemo is a transformer model, with the following architecture choices:
- **Layers:** 40
- **Dim:** 5,120
- **Head dim:** 128
- **Hidden dim:** 14,336
- **Activation Function:** SwiGLU
- **Number of heads:** 32
- **Number of kv-heads:** 8 (GQA)
- **Vocabulary size:** 2**17 ~= 128k
- **Rotary embeddings (theta = 1M)**
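These values can be sanity-checked against the published configuration without downloading the 12B weights. A minimal sketch, assuming the standard `MistralConfig` field names used by `transformers` (including `head_dim`, which Nemo-based configs expose):
```py
from transformers import AutoConfig

# Fetches only config.json (a few KB), not the model weights
config = AutoConfig.from_pretrained("EpistemeAI2/Fireball-12B-v1.13a-philosophers")

print(config.num_hidden_layers)    # layers: 40
print(config.hidden_size)          # dim: 5120
print(config.head_dim)             # head dim: 128
print(config.intermediate_size)    # hidden dim: 14336
print(config.num_attention_heads)  # heads: 32
print(config.num_key_value_heads)  # kv-heads: 8 (GQA)
print(config.vocab_size)           # vocabulary: 2**17 = 131072
print(config.rope_theta)           # rotary theta: 1e6
```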
# Guardrail/Moderation guide:
For guardrailing and moderating prompts against indirect/direct prompt injections and jailbreaking, please follow the SentinelShield AI GitHub repository:
[SentinelShield AI](https://github.com/tomtyiu/SentinelShieldAI)
#### Demo
After installing `mistral_inference`, a `mistral-demo` CLI command should be available in your environment.
### Prompt instructions - Alpaca style prompt(recommended):
```py
f"""Below is an instruction that describes a task. \
Write a response that appropriately completes the request.
### Instruction:
{x['instruction']}
### Input:
{x['input']}
### Response:
"""
```
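For a single query, the template can be wrapped in a small helper. This is an illustrative sketch, not part of the released code; the `instruction` and `input_text` parameter names are placeholders:
```py
def alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format a request with the Alpaca-style template recommended above."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n"
        "### Instruction:\n"
        f"{instruction}\n"
        "### Input:\n"
        f"{input_text}\n"
        "### Response:\n"
    )

prompt = alpaca_prompt("Explain the difference between inductive and deductive reasoning.")
```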
### Transformers
> [!IMPORTANT]
> NOTE: Until a new release has been made, you need to install transformers from source:
> ```sh
> pip install mistral_inference
> pip install mistral-demo
> pip install git+https://github.com/huggingface/transformers.git
> pip install "huggingface_hub[hf_transfer]"
> export HF_HUB_ENABLE_HF_TRANSFER=1
> ```
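To confirm the source build is active, you can print the installed version (a quick optional check; a `.dev0` suffix indicates a source install):
```py
import transformers
print(transformers.__version__)  # a ".dev0" suffix indicates a source install
```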
If you want to use Hugging Face `transformers` to generate text, you can do something like this.
```py
# Import necessary libraries
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("EpistemeAI2/Fireball-12B-v1.13a-philosophers")
# Configure 4-bit quantization (requires the bitsandbytes package) and enable CPU offloading
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
llm_int8_enable_fp32_cpu_offload=True
)
# Load the model with 4-bit quantization and CPU offloading
model = AutoModelForCausalLM.from_pretrained(
"EpistemeAI2/Fireball-12B-v1.13a-philosophers",
quantization_config=quantization_config,
device_map="auto" # Automatically map model to devices
)
# Define the input text
input_text = "What is the difference between inductive and deductive reasoning?"
# Tokenize the input text
input_ids = tokenizer.encode(input_text, return_tensors="pt")
# Move the input tensors to the same device as the model
input_ids = input_ids.to(model.device)
# Generate text using the model
output_ids = model.generate(input_ids, max_length=100, num_return_sequences=1)
# Decode the generated tokens to text
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
# Print the output
print(output_text)
```
Google colab - [link](https://colab.research.google.com/drive/1ZgUrbonMlK05iQ-tgWZ_lFmUZFbWZNnM?usp=sharing)
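If you prefer a higher-level interface, the same model can also be driven through the `transformers` pipeline API. This is an alternative sketch under the same assumptions as above (bitsandbytes installed, enough GPU memory for the 4-bit weights):
```py
from transformers import BitsAndBytesConfig, pipeline

# Build a text-generation pipeline; quantization settings mirror the example above
generator = pipeline(
    "text-generation",
    model="EpistemeAI2/Fireball-12B-v1.13a-philosophers",
    model_kwargs={
        "quantization_config": BitsAndBytesConfig(
            load_in_4bit=True,
            llm_int8_enable_fp32_cpu_offload=True,
        ),
        "device_map": "auto",
    },
)

result = generator(
    "What is the difference between inductive and deductive reasoning?",
    max_new_tokens=100,
)
print(result[0]["generated_text"])
```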
## Accelerator mode:
```py
# First run `pip install accelerate` in your shell (GPU such as A100/L4)
from transformers import AutoModelForCausalLM, AutoTokenizer
from accelerate import Accelerator
# Initialize the accelerator
accelerator = Accelerator()
# Define the model ID
model_id = "EpistemeAI2/Fireball-12B-v1.13a-philosophers"
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load the model and prepare it for distributed setup using accelerate
model = AutoModelForCausalLM.from_pretrained(model_id)
# Move the model to the appropriate device using accelerate
model = accelerator.prepare(model)
# Prepare inputs
inputs = tokenizer("Hello my name is", return_tensors="pt").to(accelerator.device)
# Generate outputs with the model
outputs = model.generate(**inputs, max_new_tokens=20)
# Decode and print the outputs
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
> [!TIP]
> Unlike previous Mistral models, Mistral Nemo requires smaller temperatures. We recommend using a temperature of 0.3.
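Continuing from the Accelerator example above (`model`, `tokenizer`, and `inputs` already defined), the recommendation translates to something like this; `max_new_tokens` is an illustrative value:
```py
# Sampling must be enabled for temperature to take effect
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```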
## Note
`EpistemeAI/Fireball-12B-v1.13a` is a pretrained base model and therefore does not have any moderation mechanisms. See the Guardrail/Moderation guide section above for moderation guidance.
## Version
For simulated consciousness and emotions, please use this model: [link](https://huggingface.co/EpistemeAI2/Fireball-Mistral-Nemo-Instruct-emo-PHD)
# Uploaded model
- **Developed by:** EpistemeAI2
- **License:** apache-2.0
- **Fine-tuned from model:** EpistemeAI/Fireball-Mistral-Nemo-12B-cot-orcas
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_EpistemeAI__Fireball-12B-v1.13a-philosophers)
| Metric |Value|
|-------------------|----:|
|Avg. |14.34|
|IFEval (0-Shot) | 8.76|
|BBH (3-Shot) |30.34|
|MATH Lvl 5 (4-Shot)| 3.85|
|GPQA (0-shot) | 6.82|
|MuSR (0-shot) | 9.98|
|MMLU-PRO (5-shot) |26.30|