---
library_name: transformers
tags:
- falcon-h1
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
---

# Table of Contents

0. [TL;DR](#tldr)
1. [Model Details](#model-details)
2. [Training Details](#training-details)
3. [Usage](#usage)
4. [Evaluation](#evaluation)
5. [Citation](#citation)

# TL;DR

# Model Details

## Model Description

- **Developed by:** [https://www.tii.ae](https://www.tii.ae)
- **Model type:** Causal decoder-only
- **Architecture:** Hybrid Transformers + Mamba architecture
- **Language(s) (NLP):** English, Multilingual
- **License:** Falcon-LLM License

# Training Details

For more details about the training protocol of this model, please refer to the [Falcon-H1 technical blogpost](https://falcon-lm.github.io/blog/falcon-h1/).

# Usage

Currently, to use this model you can rely on either Hugging Face `transformers`, `vLLM`, or our custom fork of the `llama.cpp` library.

## Inference

Make sure to install the latest version of `transformers` or `vllm`, or install these packages from source if needed:

```bash
pip install git+https://github.com/huggingface/transformers.git
```

Refer to [the official vLLM documentation for more details on building vLLM from source](https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html#build-wheel-from-source).
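
As a rough sketch, a from-source vLLM build usually follows the pattern below; treat the linked documentation as authoritative, since build requirements change between releases:

```bash
# Clone vLLM and build it from source (this can take a while, as CUDA kernels are compiled)
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .
```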
48 |
+
|
49 |
+
### 🤗 transformers
|
50 |
+
|
51 |
+
Refer to the snippet below to run H1 models using 🤗 transformers:
|
52 |
+
|
53 |
+
```python
|
54 |
+
import torch
|
55 |
+
from transformers import AutoModelForCausalLM, AutoTokenizer
|
56 |
+
|
57 |
+
model_id = "tiiuae/Falcon-H1-1B-Base"
|
58 |
+
|
59 |
+
model = AutoModelForCausalLM.from_pretrained(
|
60 |
+
model_id,
|
61 |
+
torch_dtype=torch.bfloat16,
|
62 |
+
device_map="auto"
|
63 |
+
)
|
64 |
+
|
65 |
+
# Perform text generation
|
66 |
+
```
|

### vLLM

For vLLM, simply start a server by executing the command below:

```bash
# pip install vllm
vllm serve tiiuae/Falcon-H1-1B-Instruct --tensor-parallel-size 2 --data-parallel-size 1
```
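
The server exposes an OpenAI-compatible API (on port 8000 by default), so once it is up you can query it as sketched below; the prompt is only an illustration:

```bash
# Minimal completion request against the local vLLM server
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tiiuae/Falcon-H1-1B-Instruct",
    "prompt": "Explain state-space models in one sentence.",
    "max_tokens": 64
  }'
```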

### `llama.cpp`

While we are working on integrating our architecture directly into the `llama.cpp` library, you can install our fork of the library and use it directly: https://github.com/tiiuae/llama.cpp-Falcon-H1
Follow the same installation guidelines as for upstream `llama.cpp`.
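
Once the fork is built, inference works as in upstream `llama.cpp`; for example, with the CLI binary (the GGUF file name below is hypothetical and depends on the checkpoint you convert or download, and the binary name may differ by version):

```bash
# Run a short completion with the llama.cpp CLI
./llama-cli -m falcon-h1-1b-instruct.gguf -p "Explain state-space models in one sentence." -n 64
```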

# Evaluation

The Falcon-H1 series performs very well on a variety of tasks, including reasoning tasks.

| Tasks | Falcon-H1-1.5B | Qwen3-1.7B | Qwen2.5-1.5B | Gemma3-1B | Llama3.2-1B | Falcon3-1B |
| --- | --- | --- | --- | --- | --- | --- |
| **General** | | | | | | |
| BBH | **46.57** | 43.05 | 40.55 | 30.26 | 30.72 | 35.24 |
| MMLU | 61.81 | **62.46** | 61.13 | 26.33 | 32.39 | 45.14 |
| ARC-C | 53.24 | **55.72** | 54.27 | 39.33 | 39.42 | 47.87 |
| HellaSwag | 66.76 | 67.09 | **67.86** | 62.94 | 65.73 | 62.3 |
| Winogrande | 65.59 | **66.3** | 64.56 | 62.59 | 62.75 | 61.17 |
| **Math** | | | | | | |
| GSM8k | 52.01 | **70.74** | 63.0 | 2.2 | 7.05 | 34.95 |
| MATH lvl5 | **20.39** | 16.39 | 8.84 | 1.21 | 0.98 | 3.4 |
| **Science** | | | | | | |
| GPQA | 29.11 | **29.45** | 28.36 | 24.66 | 23.57 | 27.85 |
| MMLU-Pro | **35.53** | 33.81 | 28.72 | 11.31 | 11.8 | 16.11 |
| MMLU-stem | **63.37** | 61.53 | 54.93 | 27.59 | 30.19 | 40.06 |
| **Code** | | | | | | |
| HumanEval | 50.0 | **67.68** | 35.37 | 6.71 | 18.9 | 10.37 |
| HumanEval+ | 42.68 | **60.98** | 29.27 | 5.49 | 16.46 | 9.15 |
| MBPP | 65.08 | **67.72** | 60.05 | 12.7 | 35.98 | 12.43 |
| MBPP+ | 55.03 | **58.99** | 49.47 | 9.52 | 29.89 | 9.52 |

For more detailed benchmarks, please refer to [our release blogpost](https://falcon-lm.github.io/blog/falcon-h1/).

# Useful links

- View [our release blogpost](https://falcon-lm.github.io/blog/falcon-h1/).
- Feel free to join [our discord server](https://discord.gg/fwXpMyGc) if you have any questions or want to interact with our researchers and developers.

# Citation

If the Falcon-H1 family of models was helpful for your work, feel free to cite it:

```bibtex
@misc{tiifalconh1,
    title = {Falcon-H1},
    author = {Falcon-LLM Team},
    month = {May},
    url = {https://falcon-lm.github.io/blog/falcon-h1},
    year = {2025}
}
```