Initial model upload

- README.md +113 -0
- configuration.json +4 -0
- image/group.png +0 -0

README.md
ADDED
@@ -0,0 +1,113 @@

# Introduction

RoboBrain2.0-32B-FlagOS-Nvidia provides an all-in-one deployment solution for running RoboBrain2.0-32B on Nvidia GPUs. As the first-generation release for the H100, this package delivers the following key features:

1. Comprehensive Integration:
   - Integrated with FlagScale (https://github.com/FlagOpen/FlagScale).
   - Open-source inference execution code, preconfigured with all necessary software and hardware settings.
   - Pre-built Docker image for rapid deployment on H100.
2. Consistency Validation:
   - Evaluation tests verifying that results are consistent between the official release and the FlagOS version.

# Technical Summary

## Serving Engine

We use FlagScale as the serving engine to improve the portability of distributed inference.

FlagScale is an end-to-end framework for large models across multiple chips, maximizing computational resource efficiency while ensuring model effectiveness. It offers both ease of use and high performance when deploying models across different chip architectures:

- One-Click Service Deployment: FlagScale provides a unified and simple command execution mechanism, allowing users to deploy services quickly and seamlessly across various hardware platforms using the same command. This significantly lowers the entry barrier and improves the user experience.
- Automated Deployment Optimization: FlagScale automatically optimizes distributed parallel strategies based on the computational capabilities of different AI chips, ensuring optimal resource allocation and efficient utilization, thereby improving overall deployment performance.
- Automatic Operator Library Switching: Leveraging FlagScale's unified Runner mechanism and deep integration with FlagGems, users can seamlessly switch to the FlagGems operator library for inference by simply adding environment variables in the configuration file.

## Triton Support

We validate the execution of the RoboBrain2.0-32B model with a Triton-based operator library as a PyTorch alternative.

We use a variety of Triton-implemented operation kernels to run the RoboBrain2.0-32B model. These kernels come from two main sources:

- Most Triton kernels are provided by FlagGems (https://github.com/FlagOpen/FlagGems). You can enable them by setting the USE_FLAGGEMS environment variable, as sketched below.
- Also included are Triton kernels from vLLM, such as the fused MoE kernel.
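
A minimal sketch of enabling the FlagGems kernels; the exact value the runtime expects for USE_FLAGGEMS is an assumption here (any truthy value is assumed to work), so check the FlagScale/FlagGems documentation for the value your release expects:

```bash
# Assumption: a truthy USE_FLAGGEMS value switches inference onto FlagGems kernels.
export USE_FLAGGEMS=true

# Serve as usual; the Runner picks up the environment variable.
flagscale serve <Model>
```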

# Container Image Download

|             | Usage                                                          | Nvidia |
| ----------- | -------------------------------------------------------------- | ------ |
| Basic Image | Basic software environment that supports FlagOS model running  |        |

# Evaluation Results

## Benchmark Results

| Metric        | RoboBrain2.0-32B-H100-CUDA | RoboBrain2.0-32B-FlagOS-Nvidia |
|:--------------|:--------------------------:|:------------------------------:|
| livebench_new | -                          | 0.504                          |
| aime          | -                          | 0.167                          |
| GPQA (0-shot) | -                          | 0.395                          |
| MMLU          | -                          | 0.697                          |
| MUSR          | -                          | 0.570                          |
| TheoremQA     | -                          | 0.151                          |

# How to Run Locally

## 📌 Getting Started

### Download open-source weights

```bash
pip install modelscope
modelscope download --model <Model Name> --local_dir <Cache Path>
```
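
For example, with hypothetical values filled in (the model id and cache path below are placeholders for illustration, not the actual published repository):

```bash
# Hypothetical model id and local cache path -- substitute the real values.
modelscope download --model FlagRelease/RoboBrain2.0-32B-FlagOS-Nvidia \
    --local_dir /share/RoboBrain2.0-32B-FlagOS-Nvidia
```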

### Download the FlagOS image

```bash
docker pull <Image Name>
```
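
Optionally, confirm the image is available locally before starting the container (the filter string below assumes the image name contains "flagos"; adjust it to whatever tag you pulled):

```bash
# List local images; the pulled FlagOS image should appear here.
docker images | grep -i flagos
```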

### Start the inference service

```bash
docker run --rm --init --detach \
  --net=host --uts=host --ipc=host \
  --security-opt=seccomp=unconfined \
  --privileged=true \
  --ulimit stack=67108864 \
  --ulimit memlock=-1 \
  --ulimit nofile=1048576:1048576 \
  --shm-size=32G \
  -v /share:/share \
  --gpus all \
  --name flagos \
  <Image Name> \
  sleep infinity

docker exec -it flagos bash
```
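
A quick sanity check that the container is running and the GPUs are visible inside it (this assumes nvidia-smi is present in the image, which is typical for CUDA-based images):

```bash
docker ps --filter name=flagos   # the flagos container should show as Up
docker exec flagos nvidia-smi    # the H100 devices should be enumerated
```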

### Serve

```bash
flagscale serve <Model>
```
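
Once the service is up, you can issue a test request. The sketch below assumes FlagScale exposes an OpenAI-compatible chat completions endpoint on localhost port 8000; both the port and the path are assumptions, so adjust them to match your serve configuration:

```bash
# Hypothetical endpoint and port -- adjust to your deployment.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "<Model>",
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 64
      }'
```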

# Contributing

We warmly welcome developers around the world to join us:

1. Submit Issues to report problems
2. Create Pull Requests to contribute code
3. Improve technical documentation
4. Expand hardware adaptation support

# 📞 Contact Us

Scan the QR code below to add our WeChat group and send "FlagRelease".

![contact](image/group.png)

# License

This project and related model weights are licensed under the MIT License.
configuration.json
ADDED
@@ -0,0 +1,4 @@
{
    "framework": "Pytorch",
    "task": "any-to-any"
}
image/group.png
ADDED