Text Generation
Transformers
Safetensors
arcee

AFM-4.5B-Base

AFM-4.5B-Base is a 4.5 billion parameter base model developed by Arcee.ai, designed for enterprise-grade performance across diverse deployment environments, from cloud to edge. It was trained on a dataset of 8 trillion tokens, comprising 6.5 trillion tokens of general pretraining data followed by 1.5 trillion tokens of midtraining data with an enhanced focus on mathematical reasoning and code generation. The instruction-tuned variant was produced from this base model through supervised fine-tuning on high-quality instruction datasets, then further refined through reinforcement learning with verifiable rewards as well as for human preference alignment. We used a modified version of TorchTitan for pretraining, Axolotl for supervised fine-tuning, and a modified version of Verifiers for reinforcement learning.

The development of AFM-4.5B prioritized data quality as a fundamental requirement for achieving robust model performance. We collaborated with DatologyAI, a company specializing in large-scale data curation. DatologyAI's curation pipeline integrates a suite of proprietary techniques, including model-based quality filtering, embedding-based curation, target distribution matching, source mixing, and synthetic data generation. Their expertise enabled the creation of a curated dataset tailored to support strong real-world performance.

The model architecture follows a standard decoder-only transformer design based on Vaswani et al., incorporating several key modifications for improved performance and efficiency. Notable architectural features include grouped query attention for more efficient inference and ReLU^2 activation functions in place of SwiGLU, which enable activation sparsification while matching or exceeding benchmark performance.
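
As a rough illustration of the ReLU^2 feed-forward design mentioned above, here is a minimal PyTorch sketch. The class name, layer names, and dimensions are illustrative assumptions, not the actual AFM-4.5B implementation.

import torch
import torch.nn as nn

class ReluSquaredMLP(nn.Module):
    """Minimal sketch of a feed-forward block with ReLU^2 activation.

    relu(x) ** 2 is exactly zero for all non-positive pre-activations, so the
    intermediate representation is naturally sparse, which sparsity-aware
    inference kernels can exploit. Layer names and sizes are illustrative only.
    """
    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.up_proj(x)) ** 2  # ReLU^2 activation
        return self.down_proj(h)

# Toy usage: batch of 1, sequence of 16 tokens, hidden size 2048
mlp = ReluSquaredMLP(hidden_size=2048, intermediate_size=8192)
x = torch.randn(1, 16, 2048)
print(mlp(x).shape)  # torch.Size([1, 16, 2048])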

The model available in this repository is the base model following merging and context extension.


Powered by Datology

Model Details

Model size: 4.62B parameters
Tensor type: BF16
Format: Safetensors

Benchmarks

[Benchmark results figure]

How to use with transformers

You can use the model directly with the transformers library.

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "arcee-ai/AFM-4.5B-Base"

# Load the tokenizer and the model weights in bfloat16, placing layers
# across available devices automatically
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Tokenize the prompt and move it to the model's device
prompt = "Once upon a time "
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

# Generate text with sampling
outputs = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
    top_p=0.95
)

# Decode the generated tokens back into text
generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(generated_text)
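
For quick experimentation, the high-level pipeline API is a shorter alternative; the following is a sketch using the same illustrative sampling parameters as above.

from transformers import pipeline
import torch

# Shorter alternative using the text-generation pipeline
generator = pipeline(
    "text-generation",
    model="arcee-ai/AFM-4.5B-Base",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

result = generator(
    "Once upon a time ",
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
print(result[0]["generated_text"])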

License

AFM-4.5B is released under the Arcee Model License. If your company makes less than $1.75 million in annual revenue, you’re free to use the model for commercial purposes, as long as you’re not providing the weights to a company above that threshold. If your product or application using AFM-4.5B is sold to a larger company, that’s fine—as long as they don’t receive or run the weights directly.

We want as many developers, researchers, and builders as possible to benefit from AFM-4.5B. At the same time, this license ensures that we can continue to develop and support the model for the community.
