---
language:
- en
- zh
pipeline_tag: text-generation
license: other
---
## hongyin/informer-7b-80k

I am pleased to introduce an English-Chinese bilingual autoregressive language model, built on Llama2-7b with a custom vocabulary and 7 billion parameters. Our goal is to provide a solution that is computationally cheap and easy to run inference with. Note that this is a base model: it is not intended for use as a chatbot, but as a starting point for further fine-tuning ("alchemy"). We look forward to providing you with practical model products.

High-sounding words aside, the name of each model, including earlier ones, carries rich connotations and personal experience, and is worth recalling from time to time.
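
Below is a minimal usage sketch rather than an official example from the model card: it assumes the repository ships standard Hugging Face `transformers` artifacts loadable via `AutoTokenizer` and `AutoModelForCausalLM`, and the prompt and generation settings are illustrative only.

```python
# Minimal usage sketch. Assumptions: the repo provides standard
# tokenizer/model files; generation settings are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hongyin/informer-7b-80k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# This is a base model: use plain text continuation, not a chat template.
prompt = "Artificial intelligence is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```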

## BibTeX entry and citation info

Please cite this work if you find it helpful.

```bibtex
@article{zhu2023metaaid,
  title={MetaAID 2.0: An Extensible Framework for Developing Metaverse Applications via Human-controllable Pre-trained Models},
  author={Zhu, Hongyin},
  journal={arXiv preprint arXiv:2302.13173},
  year={2023}
}
```