Unsloth Dynamic 2.0 achieves superior accuracy and outperforms other leading quants.

Granite-4.0-H-Micro-Base

Model Summary: Granite-4.0-H-Micro-Base is a decoder-only, long-context language model designed for a wide range of text-to-text generation tasks. It also supports Fill-in-the-Middle (FIM) code completion through the use of specialized prefix and suffix tokens. The model is trained from scratch on approximately 18 trillion tokens following a four-stage training strategy: 10 trillion tokens in the first stage, 5 trillion in the second, 2 trillion in the third, and 0.5 trillion in the final stage.

Supported Languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 4.0 models for languages beyond these twelve.

Intended Use: Prominent use cases of LLMs in text-to-text generation include summarization, text classification, extraction, question-answering, code completion (including FIM), and long-context generation tasks. All Granite Base models can handle these tasks, as they were trained on a large amount of data from various domains. Moreover, they can serve as baselines for creating specialized models for specific application scenarios.

Generation: This is a simple example of how to use the Granite-4.0-H-Micro-Base model.

Install the following libraries:

```shell
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
```

Then, copy the code snippet below to run the example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
model_path = "ibm-granite/granite-4.0-h-micro-base"

tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

# change input text as desired
input_text = "The capital of France is"
# tokenize the text
input_tokens = tokenizer(input_text, return_tensors="pt").to(device)
# generate output tokens
output = model.generate(**input_tokens, max_length=10)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output[0])
```

Expected output:

The capital of France is Paris.
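
The model summary above notes that FIM code completion is supported through specialized prefix and suffix tokens. The sketch below shows how such a prompt could be assembled, reusing the model and tokenizer loaded above. The token strings `<fim_prefix>`, `<fim_suffix>`, and `<fim_middle>` are assumptions carried over from earlier Granite Code models; verify the exact strings against this tokenizer's vocabulary before relying on the format.

```python
# Sketch: Fill-in-the-Middle (FIM) completion, reusing the model and
# tokenizer loaded above.
# ASSUMPTION: the FIM token strings follow earlier Granite Code models;
# check tokenizer.get_vocab() for the exact strings used by this model.
prefix = "def add(a, b):\n    result = "
suffix = "\n    return result\n"
fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

fim_tokens = tokenizer(fim_prompt, return_tensors="pt").to(device)
fim_output = model.generate(**fim_tokens, max_new_tokens=16)
print(tokenizer.batch_decode(fim_output)[0])
```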

Evaluation Results:

| Benchmarks | Metric | Micro Dense | H Micro Dense | H Tiny MoE | H Small MoE |
|---|---|---|---|---|---|
| **General Tasks** | | | | | |
| MMLU | 5-shot | 66.47 | 67.43 | 68.90 | 75.85 |
| MMLU-Pro | 5-shot, CoT | 37.16 | 34.03 | 35.47 | 48.94 |
| BBH | 3-shot, CoT | 63.84 | 57.65 | 59.67 | 75.84 |
| AGI EVAL | 3-shot | 54.32 | 54.59 | 53.69 | 62.05 |
| DROP | 5-shot | 66.04 | 67.44 | 64.92 | 74.69 |
| **Math Tasks** | | | | | |
| GSM8K | 8-shot | 72.93 | 63.76 | 72.55 | 82.11 |
| Minerva Math | 4-shot | 38.00 | 39.70 | 40.34 | 46.28 |
| **Code Tasks** | | | | | |
| HumanEval | pass@1 [StarCoder Prompt] | 76.19 | 73.72 | 77.59 | 83.66 |
| HumanEval | pass@1 | 59.76 | 70.73 | 71.34 | 76.22 |
| HumanEval+ | pass@1 | 54.27 | 67.07 | 64.02 | 69.51 |
| MBPP | pass@1 | 81.48 | 74.87 | 81.48 | 83.07 |
| MBPP+ | pass@1 | 68.25 | 63.23 | 68.78 | 70.37 |
| **Multilingual Tasks** | | | | | |
| MMMLU | 5-shot | 56.59 | 58.50 | 62.77 | 71.18 |
| INCLUDE | 5-shot | 51.77 | 52.16 | 53.78 | 66.04 |
| MGSM | 8-shot | 58.48 | 47.04 | 54.64 | 65.20 |
Multilingual benchmarks and the included languages:

| Benchmarks | # Langs | Languages |
|---|---|---|
| MMMLU | 11 | ar, de, en, es, fr, ja, ko, pt, zh, bn, hi |
| INCLUDE | 14 | hi, bn, ta, te, ar, de, es, fr, it, ja, ko, nl, pt, zh |
| MGSM | 5 | en, es, fr, ja, zh |
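
To reproduce scores such as 5-shot MMLU, one common route is EleutherAI's lm-evaluation-harness. The sketch below uses its Python API under the assumption that the harness's stock `mmlu` task approximates the setup used here; the card does not specify the harness version or prompt templates, so results may differ slightly.

```python
# Sketch: 5-shot MMLU via lm-evaluation-harness (pip install lm-eval).
# The reported numbers may come from a different harness version or
# prompt setup, so expect some variance.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ibm-granite/granite-4.0-h-micro-base,dtype=bfloat16",
    tasks=["mmlu"],
    num_fewshot=5,
    batch_size=8,
)
print(results["results"])  # per-task and aggregate accuracies
```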

Model Architecture: Granite-4.0-H-Micro-Base is based on a decoder-only hybrid Mamba2/transformer architecture. Core components of this architecture are: GQA, Mamba2, MLP with SwiGLU, RMSNorm, and shared input/output embeddings.

| Model | Micro Dense | H Micro Dense | H Tiny MoE | H Small MoE |
|---|---|---|---|---|
| Embedding size | 2560 | 2048 | 1536 | 4096 |
| Number of layers | 40 attention | 4 attention / 36 Mamba2 | 4 attention / 36 Mamba2 | 4 attention / 36 Mamba2 |
| Attention head size | 64 | 64 | 128 | 128 |
| Number of attention heads | 40 | 32 | 12 | 32 |
| Number of KV heads | 8 | 8 | 4 | 8 |
| Mamba2 state size | - | 128 | 128 | 128 |
| Number of Mamba2 heads | - | 64 | 48 | 128 |
| MLP / shared expert hidden size | 8192 | 8192 | 1024 | 1536 |
| Number of experts | - | - | 64 | 72 |
| Number of active experts | - | - | 6 | 10 |
| Expert hidden size | - | - | 512 | 768 |
| MLP activation | SwiGLU | SwiGLU | SwiGLU | SwiGLU |
| Sequence length | 128K | 128K | 128K | 128K |
| Position embedding | RoPE | NoPE | NoPE | NoPE |
| # Parameters | 3B | 3B | 7B | 32B |
| # Active parameters | 3B | 3B | 1B | 9B |
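
As a quick sanity check of the hybrid layout (4 attention / 36 Mamba2 layers for H Micro), the model's Transformers config can be inspected without downloading the weights. This is a minimal sketch; the `layer_types` attribute name is an assumption that may vary across Transformers versions, so printing the whole config is the reliable fallback.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("ibm-granite/granite-4.0-h-micro-base")
# ASSUMPTION: the hybrid layer layout is exposed as `layer_types`;
# if absent in your Transformers version, print(config) to locate it.
layer_types = getattr(config, "layer_types", None)
if layer_types is not None:
    print(layer_types.count("attention"), "attention layers")
    print(layer_types.count("mamba"), "mamba layers")
else:
    print(config)
```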

Training Data: This model is trained on a mix of open source and proprietary data following a four-stage training strategy.

| Stage | Characteristics | Micro Dense | H Micro Dense | H Tiny MoE | H Small MoE |
|---|---|---|---|---|---|
| I | General mixture of training data, warmup, and power scheduler for the learning rate. | 10T | 10T | 15T | 15T |
| II | General mixture of training data with higher percentages of code and math, with power scheduler for the learning rate. | 2T | 5T | 5T | 5T |
| III | High-quality training data, exponential decay of the learning rate. | 2T | 2T | 2T | 2T |
| IV | High-quality training data, linear decay of the learning rate to zero. | 0.5T | 0.5T | 0.5T | 0.5T |
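
To make the schedule names concrete, the sketch below traces a toy version of the four stages for H Micro (power decay in Stages I-II, then exponential decay, then linear decay to zero). It is purely illustrative: the peak learning rate, warmup length, and power-scheduler exponent are not published in this card, and every constant below is invented.

```python
# Illustrative four-stage LR schedule; ALL CONSTANTS ARE INVENTED.
def toy_lr(tokens_seen_t: float) -> float:
    """Toy learning rate as a function of tokens seen (in trillions)."""
    peak, warmup = 1e-3, 0.1                  # invented peak LR and warmup span
    s2, s3, s4 = 15.0, 17.0, 17.5             # H Micro stage boundaries (T tokens)
    if tokens_seen_t < warmup:                # linear warmup
        return peak * tokens_seen_t / warmup
    if tokens_seen_t < s2:                    # Stages I-II: power-law decay
        return peak * (tokens_seen_t / warmup) ** -0.5  # invented exponent
    lr_s2 = peak * (s2 / warmup) ** -0.5
    if tokens_seen_t < s3:                    # Stage III: exponential decay
        return lr_s2 * 0.1 ** ((tokens_seen_t - s2) / (s3 - s2))
    lr_s3 = lr_s2 * 0.1                       # Stage IV: linear decay to zero
    return max(0.0, lr_s3 * (1 - (tokens_seen_t - s3) / (s4 - s3)))
```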

Infrastructure: We trained the Granite 4.0 language models on an NVIDIA GB200 NVL72 cluster hosted by CoreWeave. Intra-rack communication occurs via the 72-GPU NVLink domain, and a non-blocking, full Fat-Tree NDR 400 Gb/s InfiniBand network provides inter-rack communication. This cluster provides a scalable and efficient infrastructure for training our models across thousands of GPUs.

Ethical Considerations and Limitations: The use of Large Language Models involves risks and ethical considerations people must be aware of, including but not limited to: bias and fairness, misinformation, and autonomous decision-making. The Granite-4.0-H-Micro-Base model is no exception in this regard. Even though this model is suited for multiple generative AI tasks, it has not undergone any safety alignment and may therefore produce problematic outputs. Additionally, it remains uncertain whether smaller models might be more susceptible to hallucination by copying text verbatim from the training dataset, owing to their reduced size and memorization capacity. This is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigation in this domain. Regarding ethics, a latent risk associated with all Large Language Models is their malicious use. We urge the community to use the Granite-4.0-H-Micro-Base model with ethical intentions and in a responsible way.
