---
base_model: EpistemeAI/Fireball-Mistral-Nemo-12B-cot-orcas
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
pipeline_tag: question-answering
---

# Fireball-12B-v1.13a Philosophers

This model is a further ("super") fine-tune focused on the philosophy of science, mathematics, and epistemology. Building on the first fine-tune, it is intended to provide higher-quality responses than Llama-3.1-8B and Google Gemma 2 9B. It was fine-tuned with various datasets.

# Benchmark

Benchmark results for Fireball-12B v1.13a will be published later this quarter.

## Training Dataset

Fine-tuned with various datasets.

# Model Card for Fireball-12B-v1.13a Philosophers

This heavily fine-tuned model is based on Mistral-Nemo-Base-2407, a pretrained generative text model of 12B parameters trained jointly by Mistral AI and NVIDIA; it significantly outperforms existing models of smaller or similar size. For more details about the base model, please refer to the Mistral AI release [blog post](https://mistral.ai/news/mistral-nemo/).

## Key features
- Released under the **Apache 2 License**
- Pre-trained and instructed versions
- Trained with a **128k context window**
- Trained on a large proportion of **multilingual and code data**
- Drop-in replacement for Mistral 7B

## Model Architecture

Mistral Nemo is a transformer model with the following architecture choices:
- **Layers:** 40
- **Dim:** 5,120
- **Head dim:** 128
- **Hidden dim:** 14,336
- **Activation function:** SwiGLU
- **Number of heads:** 32
- **Number of kv-heads:** 8 (GQA)
- **Vocabulary size:** 2**17 ~= 128k
- **Rotary embeddings** (theta = 1M)

# Guardrail/Moderation guide

For guardrailing and moderating prompts against indirect/direct prompt injections and jailbreaking, please follow the [SentinelShield AI](https://github.com/tomtyiu/SentinelShieldAI) GitHub repository.

#### Demo

After installing `mistral_inference`, a `mistral-demo` CLI command should be available in your environment.

### Prompt instructions

Alpaca-style prompt (recommended):

```py
f"""Below is an instruction that describes a task. \
Write a response that appropriately completes the request.

### Instruction:
{x['instruction']}

### Input:
{x['input']}

### Response:
"""
```
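For example, the template above can be filled from a record as follows (a minimal sketch; the `instruction` and `input` values are placeholders, not from the training data):

```py
# Hypothetical record; substitute your own fields.
x = {
    "instruction": "Explain the difference between rationalism and empiricism.",
    "input": "",
}

# Fill the Alpaca-style template shown above.
prompt = f"""Below is an instruction that describes a task. \
Write a response that appropriately completes the request.

### Instruction:
{x['instruction']}

### Input:
{x['input']}

### Response:
"""
```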
### Transformers

> [!IMPORTANT]
> NOTE: Until a new release has been made, you need to install transformers from source:
> ```sh
> pip install mistral_inference
> pip install mistral-demo
> pip install git+https://github.com/huggingface/transformers.git
> ```

If you want to use Hugging Face `transformers` to generate text, you can do something like this:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI2/Fireball-12B-v1.13a-philosophers"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("Hello my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Accelerator mode

First install Accelerate (`pip install accelerate`), then:

```py
# GPU A100/L4
from transformers import AutoModelForCausalLM, AutoTokenizer
from accelerate import Accelerator

# Initialize the accelerator
accelerator = Accelerator()

# Define the model ID
model_id = "EpistemeAI2/Fireball-12B-v1.13a-philosophers"

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the model and prepare it for the distributed setup using accelerate
model = AutoModelForCausalLM.from_pretrained(model_id)

# Move the model to the appropriate device using accelerate
model = accelerator.prepare(model)

# Prepare inputs on the same device
inputs = tokenizer("Hello my name is", return_tensors="pt").to(accelerator.device)

# Generate outputs with the model
outputs = model.generate(**inputs, max_new_tokens=20)

# Decode and print the outputs
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

> [!TIP]
> Unlike previous Mistral models, Mistral Nemo requires smaller temperatures. We recommend a temperature of 0.3.

## Note

`EpistemeAI/Fireball-12B-v1.13a` is a pretrained base model and therefore does not have any moderation mechanisms. See the Guardrail/Moderation guide section above for moderation guidance.

# Uploaded model

- **Developed by:** EpistemeAI2
- **License:** apache-2.0
- **Finetuned from model:** EpistemeAI/Fireball-Mistral-Nemo-12B-cot-orcas

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
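## Example: prompt with recommended temperature

Putting the pieces above together, here is a minimal sketch that combines the Alpaca-style prompt with the recommended temperature of 0.3. The instruction text is a placeholder, and `max_new_tokens` is an illustrative choice, not a documented setting:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI2/Fireball-12B-v1.13a-philosophers"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical instruction; substitute your own task.
prompt = """Below is an instruction that describes a task. \
Write a response that appropriately completes the request.

### Instruction:
What is the problem of induction?

### Input:


### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt")

# Sample with the recommended temperature of 0.3 (see the tip above).
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.3)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```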