update readme

README.md
---
library_name: transformers
tags:
- multimodal
- vision-language
license: apache-2.0
language:
- en
---

# Model Card for HPT

Hyper-Pretrained Transformers (HPT) is a novel multimodal LLM framework from [HyperGAI](https://hypergai.com/) for training vision-language models capable of understanding both textual and visual inputs. Here we release our best open-source multimodal LLM, HPT 1.5 Edge. Built on Microsoft Phi-3-mini, our hyper-capable HPT 1.5 Edge packs a punch on real-world understanding and complex reasoning. This repository contains the open-source weights needed to reproduce the evaluation results of HPT 1.5 Edge on different benchmarks.

For full details of this model, please read our [technical blog post](https://hypergai.com/blog/hpt-1-5-edge-towards-multimodal-llms-for-edge-devices).

## Run the model

Please use the scripts available in our [Github repository](https://github.com/HyperGAI/HPT) to run the model.
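
If you want to fetch the checkpoint files programmatically before running those scripts, a minimal sketch with `huggingface_hub` is shown below; the repository id used here is an assumption based on the model name, so replace it with this repository's actual id if it differs.

```python
# Minimal sketch: download the weight files from this model repository, then
# point the scripts from the HPT GitHub repo at the downloaded directory.
# The repo_id below is an assumption based on the model name; adjust as needed.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="HyperGAI/HPT1_5-edge")  # assumed repo id
print(f"Checkpoint files downloaded to: {local_dir}")
```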

## Troubleshooting

Please report any issues on our [Github repo](https://github.com/HyperGAI/HPT).

## Pretrained models used

- Pretrained LLM: [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
- Pretrained Visual Encoder: [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384)
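
Both building blocks are public on the Hugging Face Hub, so they can be loaded on their own with `transformers` for inspection; the sketch below only shows the individual components, not how the HPT scripts combine them into the multimodal model.

```python
# Minimal sketch: load the two public components independently for inspection.
# This does not reproduce HPT itself; the HPT GitHub scripts assemble the full model.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SiglipImageProcessor,
    SiglipVisionModel,
)

# Pretrained LLM backbone
llm = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

# Pretrained visual encoder
vision_encoder = SiglipVisionModel.from_pretrained("google/siglip-so400m-patch14-384")
image_processor = SiglipImageProcessor.from_pretrained("google/siglip-so400m-patch14-384")
```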

## Disclaimer and Responsible Use

Note that HPT 1.5 Edge is a quick open release of our models to facilitate open, responsible AI research and community development. It has no moderation mechanism and provides no guarantees on its results. We hope to engage with the community to make the model respect guardrails well enough for adoption in practical applications that require moderated outputs.

## Contact Us

- Contact: [email protected]
- Follow us on [Twitter](https://twitter.com/hypergai).
- Follow us on [LinkedIn](https://www.linkedin.com/company/hypergai/).
- Visit our [website](https://www.hypergai.com) to learn more about us.

## License

This project is released under the [Apache 2.0 license](LICENSE).
Parts of this project contain code and models from other sources, which are subject to their respective licenses; you must comply with those licenses if you want to use them for commercial purposes.