yajunvicky commited on
Commit
5bb4476
·
verified ·
1 Parent(s): b938e78

Initial model upload

Files changed (6)
  1. .DS_Store +0 -0
  2. README.md +140 -0
  3. configuration.json +1 -0
  4. image/.DS_Store +0 -0
  5. image/DS_Store +0 -0
  6. image/group.png +0 -0
.DS_Store ADDED
Binary file (6.15 kB)
 
README.md ADDED
@@ -0,0 +1,140 @@
# Introduction

MiniCPM_o_2.6-FlagOS-NVIDIA provides an all-in-one deployment solution, enabling execution of MiniCPM_o_2.6 on NVIDIA GPUs. As the first-generation release for the NVIDIA H100, this package delivers the following key features:

1. Comprehensive Integration:
   - Integrated with FlagScale (https://github.com/FlagOpen/FlagScale).
   - Open-source inference execution code, preconfigured with all necessary software and hardware settings.
   - Pre-built Docker image for rapid deployment on the NVIDIA H100.
2. Consistency Validation:
   - Evaluation tests verifying that results are consistent between the official release and ours.

# Technical Summary

## Serving Engine

We use FlagScale as the serving engine to improve the portability of distributed inference.

FlagScale is an end-to-end framework for large models across multiple chips, maximizing computational resource efficiency while ensuring model effectiveness. It offers both ease of use and high performance when deploying models across different chip architectures:

- One-Click Service Deployment: FlagScale provides a unified and simple command execution mechanism, allowing users to deploy services quickly and seamlessly across various hardware platforms with the same command. This significantly lowers the entry barrier and improves the user experience.
- Automated Deployment Optimization: FlagScale automatically optimizes distributed parallel strategies based on the computational capabilities of different AI chips, ensuring optimal resource allocation and efficient utilization, thereby improving overall deployment performance.
- Automatic Operator Library Switching: Leveraging FlagScale's unified Runner mechanism and its deep integration with FlagGems, users can seamlessly switch to the FlagGems operator library for inference by simply adding environment variables in the configuration file (see the Triton Support section below).

## Triton Support

We validate the execution of the MiniCPM_o_2.6 model with a Triton-based operator library as a PyTorch alternative.

We use a variety of Triton-implemented operation kernels (approximately 70% of the kernels) to run the MiniCPM_o_2.6 model. These kernels come from two main sources:

- Most Triton kernels are provided by FlagGems (https://github.com/FlagOpen/FlagGems). You can enable them by setting the environment variable USE_FLAGGEMS, as sketched after this list. For more details, please refer to the "How to Run Locally" section.
- Also included are Triton kernels from vLLM, such as the fused MoE kernel.

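A minimal sketch of turning FlagGems on, assuming the inference process reads `USE_FLAGGEMS` from its environment (the variable name comes from this README; the value `true` is an assumption):

```bash
# Enable FlagGems kernels for the serving process.
# The accepted value is an assumption; adjust it to what your config expects.
export USE_FLAGGEMS=true
```
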
# Bundle Download

|             | Usage                                                  | NVIDIA                                                       |
| ----------- | ------------------------------------------------------ | ------------------------------------------------------------ |
| Basic Image | Basic software environment that supports model running | `docker pull flagrelease-registry.cn-beijing.cr.aliyuncs.com/flagrelease/flagrelease:deepseek-flagos-nvidia` |

# Evaluation Results

## Benchmark Results

| Metric                 | MiniCPM_o_2.6-H100-CUDA | MiniCPM_o_2.6-H100-FlagOS |
|:-----------------------|------------------------:|--------------------------:|
| mmmu_val               | 48.11                   | 48.33                     |
| math_vision_test       | 22.89                   | 22.30                     |
| ocrbench_test          | 85.80                   | 85.70                     |
| blink_val              | 54.87                   | 55.81                     |
| mmvet_v2               | 57.66                   | 59.03                     |
| mmmu_pro_vision_test   | 70.46                   | 69.77                     |
| mmmu_pro_standard_test | 30.46                   | 30.81                     |
| cmmmu_val              | 39.33                   | 39.33                     |
| cii_bench_test         | 50.07                   | 50.33                     |

# How to Run Locally

## 📌 Getting Started

### Environment Setup

```bash
# Install FlagScale
git clone https://github.com/FlagOpen/FlagScale.git
cd FlagScale
pip install .

# Download the image and checkpoint
flagscale pull --image flagrelease-registry.cn-beijing.cr.aliyuncs.com/flagrelease/flagrelease:deepseek-flagos-nvidia --ckpt https://www.modelscope.cn/models/FlagRelease/MiniCPM_o_2.6-FlagOS-Nvidia.git --ckpt-path <CKPT_PATH>

# Note: For security reasons, this image does not ship with passwordless (SSH) access preconfigured.
# In multi-machine scenarios, you need to configure passwordless access in the image yourself.

# Create and enter the container
docker run -itd --name flagrelease_nv --privileged --gpus all --net=host --ipc=host --device=/dev/infiniband --shm-size 512g --ulimit memlock=-1 -v <CKPT_PATH>:<CKPT_PATH> flagrelease-registry.cn-beijing.cr.aliyuncs.com/flagrelease/flagrelease:deepseek-flagos-nvidia /bin/bash
docker exec -it flagrelease_nv /bin/bash

conda activate flagscale-inference
```

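Optionally, a quick sanity check inside the container (this assumes the image ships with PyTorch and that the NVIDIA driver is visible to Docker):

```bash
# Are the GPUs visible to the container?
nvidia-smi

# Can PyTorch reach CUDA? (assumes PyTorch is preinstalled in the image)
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"
```
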
### Download and install FlagGems

```bash
git clone https://github.com/FlagOpen/FlagGems.git
cd FlagGems
pip install ./ --no-deps
cd ../
```

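To confirm the installation succeeded, a minimal import check (assuming the package exposes the `flag_gems` module, as in the FlagGems repository):

```bash
# Verify that FlagGems is importable and show where it was installed from
python -c "import flag_gems; print('FlagGems imported from', flag_gems.__file__)"
```
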
### Download FlagScale and unpatch the vendor's code to build vLLM

```bash
git clone https://github.com/FlagOpen/FlagScale.git
cd FlagScale/vllm
pip install .
cd ../
```

### Serve

```bash
# Configure the MiniCPM_o_2.6 YAML files
FlagScale/
├── examples/
│   └── MiniCPM_o_2.6/
│       └── conf/
│           ├── config_MiniCPM_o_2.6.yaml  # set hostfile and ssh_port (optional); if access between containers is passwordless, remove the docker field
│           └── serve/
│               └── MiniCPM_o_2.6.yaml     # set model parameters and the server port

# Install FlagScale
pip install .

# Serve
flagscale serve MiniCPM_o_2.6
```

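Once the service is running, you can probe it with a request. A minimal sketch, assuming the deployment exposes a vLLM-style OpenAI-compatible endpoint on localhost port 8000 (the actual host, port, and served model name are set in MiniCPM_o_2.6.yaml):

```bash
# The endpoint path and "model" field below are assumptions based on the
# OpenAI-compatible API that vLLM-backed deployments typically expose.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "MiniCPM_o_2.6",
        "messages": [{"role": "user", "content": "Hello! Briefly introduce yourself."}]
      }'
```
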
# Usage Recommendations

To customize service parameters, users can run:

```bash
flagscale serve MiniCPM_o_2.6 <MODEL_CONFIG_YAML>
```

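For example, assuming you saved an edited copy of the serve configuration as `my_MiniCPM_o_2.6.yaml` (a hypothetical file name):

```bash
# Point the serve command at your own edited copy of the serve YAML
flagscale serve MiniCPM_o_2.6 my_MiniCPM_o_2.6.yaml
```
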
# Contributing

We warmly welcome global developers to join us:

1. Submit Issues to report problems
2. Create Pull Requests to contribute code
3. Improve the technical documentation
4. Expand hardware adaptation support

# 📞 Contact Us

Scan the QR code below to join our WeChat group and send "FlagRelease".

![WeChat](image/group.png)

# License

This project and the related model weights are licensed under the MIT License.
configuration.json ADDED
@@ -0,0 +1 @@
 
 
{"framework":"Pytorch","task":"any-to-any"}
image/.DS_Store ADDED
Binary file (6.15 kB)
 
image/DS_Store ADDED
Binary file (6.15 kB)
 
image/group.png ADDED