hongyin/informer-7b-80k
I am pleased to introduce an English-Chinese bilingual autoregressive language model. It is based on Llama2-7b, with a custom vocabulary and 7 billion parameters. Our goal is to provide a solution that is computationally cheap and easy to reason about. Note that this is a base model: it is not intended for use as a chatbot, but as a starting point for further fine-tuning and experimentation ("alchemy"). We look forward to providing you with practical model products.
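As a base model, it is meant to be loaded and continued from rather than chatted with. A minimal generation sketch, assuming the checkpoint loads with the standard Hugging Face `transformers` causal-LM API (the prompt and sampling settings are illustrative, not recommendations from the authors):

```python
# Minimal usage sketch for hongyin/informer-7b-80k (assumed to load
# via the standard transformers AutoModel API; settings illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "hongyin/informer-7b-80k"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "Beijing is the capital of"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy continuation; as a base model it completes text rather than chat.
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

For fine-tuning, the same `AutoModelForCausalLM` object can be passed to any standard causal-LM training loop.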
Lofty words aside, the name of each model, including the previous ones, carries rich connotations and personal experience, and is worth recalling often.
Bibtex entry and citation info
Please cite this work if you find it helpful.
@article{zhu2023metaaid,
  title={MetaAID 2.0: An Extensible Framework for Developing Metaverse Applications via Human-controllable Pre-trained Models},
  author={Zhu, Hongyin},
  journal={arXiv preprint arXiv:2302.13173},
  year={2023}
}
license: other