daiqi committed (verified)
Commit c3fc252 · 1 Parent(s): 3b30c22

Upload folder using huggingface_hub

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -12,7 +12,7 @@ tags:
  # Microsoft Phi-Ground-4B-7C
 
  <p align="center">
- <a href="https://zhangmiaosen2000.github.io/Phi-Ground/" target="_blank">🤖 HomePage</a> | <a href="https://arxiv.org/abs/2507.23779" target="_blank">📄 Paper </a> | <a href="https://arxiv.org/abs/2507.23779" target="_blank">📄 Arxiv </a> | <a href="https://huggingface.co/microsoft/Phi-Ground" target="_blank"> 😊 Model </a> | <a href="" target="_blank"> 😊 Eval data </a>
+ <a href="https://microsoft.github.io/Phi-Ground/" target="_blank">🤖 HomePage</a> | <a href="https://huggingface.co/papers/2507.23779" target="_blank">📄 Paper </a> | <a href="https://arxiv.org/abs/2507.23779" target="_blank">📄 Arxiv </a> | <a href="https://huggingface.co/microsoft/Phi-Ground" target="_blank"> 😊 Model </a> | <a href="https://github.com/microsoft/Phi-Ground/tree/main/benchmark/new_annotations" target="_blank"> 😊 Eval data </a>
  </p>
 
  ![overview](docs/images/abstract.png)
@@ -29,7 +29,7 @@ tags:
  ![overview](docs/images/r1.png)
 
  ### Usage
- he current `transformers` version can be verified with: `pip list | grep transformers`.
+ The current `transformers` version can be verified with: `pip list | grep transformers`.
 
  Examples of required packages:
  ```
@@ -88,4 +88,4 @@ image = process_image(Image.open(image_path))
  ```
 
 
- Then you can use huggingface model or [vllm](https://github.com/vllm-project/vllm) to inference. End-to-end examples and benchmark results reproduction can be found [here]().
+ Then you can run inference with the Hugging Face model or [vllm](https://github.com/vllm-project/vllm). We also provide [end-to-end examples](https://github.com/microsoft/Phi-Ground/tree/main/examples/call_example.py) and [benchmark results reproduction](https://github.com/microsoft/Phi-Ground/tree/main/benchmark/test_sspro.sh).
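
For context on the updated usage note above, here is a minimal inference sketch with `transformers`. It assumes the model follows the standard multimodal `AutoProcessor`/`AutoModelForCausalLM` pattern with `trust_remote_code=True`; the exact prompt template, the `process_image` helper, and generation settings live in the Phi-Ground repo, so treat this as an illustration rather than the official example (which is linked in the diff).

```python
# Hypothetical sketch, not the official example: load Phi-Ground from the Hub
# and run one grounding query. Assumes the standard transformers multimodal API;
# the repo's examples/call_example.py is the authoritative reference.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Phi-Ground"  # model card linked in the README header
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

image = Image.open("screenshot.png")  # a GUI screenshot to ground against
prompt = "Click the search button."   # placeholder; the real prompt format is in the repo

# The processor batches the instruction text and the screenshot into model inputs.
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

For serving at scale, the same checkpoint can instead be loaded through vllm, as the README suggests; the repo's benchmark script shows the intended invocation.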