hanxiao committed
Commit 3699ace · verified
1 Parent(s): 9d1a289

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +2 -24
README.md CHANGED
@@ -7,6 +7,7 @@ tags:
  language:
  - multilingual
  base_model: jinaai/jina-reranker-v3
+ base_model_relation: quantized
  inference: false
  license: cc-by-nc-4.0
  library_name: mlx
@@ -14,15 +15,7 @@ library_name: mlx
 
  # jina-reranker-v3-mlx
 
- MLX implementation of [jina-reranker-v3](https://huggingface.co/jinaai/jina-reranker-v3), a 0.6B parameter multilingual document reranker optimized for Apple Silicon.
-
- ## Features
-
- - 🚀 Native Apple Silicon acceleration via MLX
- - 🎯 100% accuracy match with original PyTorch implementation
- - 📦 Minimal dependencies (no transformers needed)
- - 🌍 Multilingual support (same as original model)
- - ⚡ Efficient inference on M-series chips
+ MLX implementation of [jina-reranker-v3](https://huggingface.co/jinaai/jina-reranker-v3), a 0.6B parameter multilingual document reranker optimized for Apple Silicon. Features native Apple Silicon acceleration via MLX with 100% compatibility to the original PyTorch implementation. No transformers library required.
 
  ## Installation
 
@@ -101,21 +94,6 @@ reranker = MLXReranker(
  )
  ```
 
- ## Model Files
-
- This directory should contain:
- - `model.safetensors` - MLX-converted Qwen3 model weights
- - `projector.safetensors` - MLP projector weights
- - `tokenizer.json` - Tokenizer configuration
- - `config.json` - Model configuration
- - Other tokenizer files (vocab.json, merges.txt, etc.)
-
- ## Performance
-
- Tested on Apple M-series chips with 100% ranking accuracy compared to the original PyTorch implementation:
- - Mean score difference: < 0.001
- - Perfect ranking matches: 100%
- - Inference speed: ~3-4s for 6 documents (Apple M1/M2)
 
  ## Citation
 
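
For context, `base_model_relation: quantized` is the Hub metadata field that declares this repository a quantized (here, MLX-converted) derivative of the `base_model` it names, so the Hub can link the two model cards. As a reconstruction from the first hunk only (any front-matter fields above `language:`, such as the truncated `tags:` block, are not visible in this diff), the affected part of the README front-matter after this commit reads roughly:

```yaml
# Front-matter excerpt after this commit (only the lines visible in the
# first hunk; fields above `language:` are truncated in the diff).
language:
- multilingual
base_model: jinaai/jina-reranker-v3
base_model_relation: quantized   # line added by this commit
inference: false
license: cc-by-nc-4.0
library_name: mlx
```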