Update README.md
README.md CHANGED
@@ -1,6 +1,12 @@
 ---
 library_name: transformers
-tags: []
+tags:
+- mamba
+- falcon3
+- reasoning
+base_model:
+- tiiuae/Falcon3-Mamba-7B-Instruct
+pipeline_tag: text-generation
 ---
 
 # Model Card for Model ID
@@ -13,17 +19,14 @@ tags: []
 
 ### Model Description
 
-<!-- Provide a longer summary of what this model is. -->
+This model is a fine-tuned version of Falcon3-Mamba-7B-Instruct.
 
-This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
+It is tailored to reason and build up logic before answering the user's question. The Mamba-based architecture scales linearly with the number of tokens, making it a very fast reasoner while maintaining consistent response quality.
 
-- **Developed by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
-- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
+This is an earlier checkpoint from the model's training run.
+
+- **Developed by:** Hanzla Javaid
+- **Model type:** Mamba
 
 ### Model Sources [optional]
 
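Since the updated front matter declares `library_name: transformers` and `pipeline_tag: text-generation`, the checkpoint should load through the standard `transformers` text-generation path. Below is a minimal sketch; it points at the base model id taken from the diff, since the fine-tuned repo id itself does not appear in this commit, and the prompt is purely illustrative:

```python
# Minimal loading sketch for a Falcon3-Mamba text-generation checkpoint.
# The repo id below is the *base* model from the card's front matter;
# swap in the fine-tuned checkpoint's repo id to use this card's model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon3-Mamba-7B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Mamba weights are commonly shipped in bf16
    device_map="auto",
)

# Instruct checkpoints ship a chat template; applying it reproduces the
# prompt format the model was tuned on.
messages = [{"role": "user", "content": "How many prime numbers lie below 30?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Since the card describes the model as building up its reasoning before answering, a generous `max_new_tokens` budget leaves room for the intermediate reasoning tokens ahead of the final answer.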