sayyidsyamil committed
Commit 708e61a · verified · 1 Parent(s): bb4c8f4

Upload folder using huggingface_hub

Files changed (1):
  1. README.md +51 -232
README.md CHANGED
@@ -11,274 +11,111 @@ pipeline_tag: sentence-similarity
  library_name: sentence-transformers
  ---

- # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
-
- This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
-
- ## Model Details
-
- ### Model Description
- - **Model Type:** Sentence Transformer
- - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
- - **Maximum Sequence Length:** 256 tokens
- - **Output Dimensionality:** 384 dimensions
- - **Similarity Function:** Cosine Similarity
- <!-- - **Training Dataset:** Unknown -->
- <!-- - **Language:** Unknown -->
- <!-- - **License:** Unknown -->
-
- ### Model Sources
-
- - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
-
- ### Full Model Architecture
-
  ```
  SentenceTransformer(
    (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
-   (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
    (2): Normalize()
  )
  ```

  ## Usage

- ### Direct Usage (Sentence Transformers)
-
- First install the Sentence Transformers library:
-
  ```bash
  pip install -U sentence-transformers
  ```

- Then you can load this model and run inference.
  ```python
  from sentence_transformers import SentenceTransformer

- # Download from the 🤗 Hub
- model = SentenceTransformer("sentence_transformers_model_id")
- # Run inference
- sentences = [
-     'The weather is lovely today.',
-     "It's so sunny outside!",
-     'He drove to the stadium.',
- ]
- embeddings = model.encode(sentences)
- print(embeddings.shape)
- # [3, 384]
-
- # Get the similarity scores for the embeddings
- similarities = model.similarity(embeddings, embeddings)
- print(similarities.shape)
- # [3, 3]
- ```
-
- <!--
- ### Direct Usage (Transformers)
-
- <details><summary>Click to see the direct usage in Transformers</summary>
-
- </details>
- -->
-
- <!--
- ### Downstream Usage (Sentence Transformers)
-
- You can finetune this model on your own dataset.
-
- <details><summary>Click to expand</summary>
-
- </details>
- -->
-
- <!--
- ### Out-of-Scope Use
-
- *List how the model may foreseeably be misused and address what users ought not to do with the model.*
- -->
-
- <!--
- ## Bias, Risks and Limitations
-
- *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
- -->
-
- <!--
- ### Recommendations
-
- *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
- -->

  ## Training Details
- ### Training Dataset
-
- #### Unnamed Dataset
-
- * Size: 4 training samples
- * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
- * Approximate statistics based on the first 4 samples:
-   |         | sentence_0 | sentence_1 | label |
-   |:--------|:-----------|:-----------|:------|
-   | type    | string     | string     | float |
-   | details | <ul><li>min: 12 tokens</li><li>mean: 14.0 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 13.25 tokens</li><li>max: 15 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
- * Samples:
-   | sentence_0 | sentence_1 | label |
-   |:-----------|:-----------|:------|
-   | <code>Teacher with 5 years in classroom management...</code> | <code>Looking for AI/ML engineer with Python experience.</code> | <code>0.0</code> |
-   | <code>DevOps engineer with AWS, Docker, Jenkins...</code> | <code>Hiring cloud infrastructure engineer with AWS and CI/CD tools.</code> | <code>1.0</code> |
-   | <code>Experienced Python developer with Flask and Django skills...</code> | <code>Looking for backend Python developer with Django experience.</code> | <code>1.0</code> |
- * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
-   ```json
-   {
-       "loss_fct": "torch.nn.modules.loss.MSELoss"
-   }
-   ```

- ### Training Hyperparameters
- #### Non-Default Hyperparameters
-
- - `per_device_train_batch_size`: 2
- - `per_device_eval_batch_size`: 2
- - `num_train_epochs`: 4
- - `multi_dataset_batch_sampler`: round_robin
-
- #### All Hyperparameters
- <details><summary>Click to expand</summary>
-
- - `overwrite_output_dir`: False
- - `do_predict`: False
- - `eval_strategy`: no
- - `prediction_loss_only`: True
  - `per_device_train_batch_size`: 2
  - `per_device_eval_batch_size`: 2
- - `per_gpu_train_batch_size`: None
- - `per_gpu_eval_batch_size`: None
- - `gradient_accumulation_steps`: 1
- - `eval_accumulation_steps`: None
- - `torch_empty_cache_steps`: None
  - `learning_rate`: 5e-05
  - `weight_decay`: 0.0
  - `adam_beta1`: 0.9
  - `adam_beta2`: 0.999
  - `adam_epsilon`: 1e-08
  - `max_grad_norm`: 1
- - `num_train_epochs`: 4
- - `max_steps`: -1
  - `lr_scheduler_type`: linear
- - `lr_scheduler_kwargs`: {}
- - `warmup_ratio`: 0.0
  - `warmup_steps`: 0
- - `log_level`: passive
- - `log_level_replica`: warning
- - `log_on_each_node`: True
- - `logging_nan_inf_filter`: True
- - `save_safetensors`: True
- - `save_on_each_node`: False
- - `save_only_model`: False
- - `restore_callback_states_from_checkpoint`: False
- - `no_cuda`: False
- - `use_cpu`: False
- - `use_mps_device`: False
  - `seed`: 42
- - `data_seed`: None
- - `jit_mode_eval`: False
- - `use_ipex`: False
- - `bf16`: False
- - `fp16`: False
- - `fp16_opt_level`: O1
- - `half_precision_backend`: auto
- - `bf16_full_eval`: False
- - `fp16_full_eval`: False
- - `tf32`: None
- - `local_rank`: 0
- - `ddp_backend`: None
- - `tpu_num_cores`: None
- - `tpu_metrics_debug`: False
- - `debug`: []
- - `dataloader_drop_last`: False
- - `dataloader_num_workers`: 0
- - `dataloader_prefetch_factor`: None
- - `past_index`: -1
- - `disable_tqdm`: False
- - `remove_unused_columns`: True
- - `label_names`: None
- - `load_best_model_at_end`: False
- - `ignore_data_skip`: False
- - `fsdp`: []
- - `fsdp_min_num_params`: 0
- - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- - `tp_size`: 0
- - `fsdp_transformer_layer_cls_to_wrap`: None
- - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- - `deepspeed`: None
- - `label_smoothing_factor`: 0.0
- - `optim`: adamw_torch
- - `optim_args`: None
- - `adafactor`: False
- - `group_by_length`: False
- - `length_column_name`: length
- - `ddp_find_unused_parameters`: None
- - `ddp_bucket_cap_mb`: None
- - `ddp_broadcast_buffers`: False
- - `dataloader_pin_memory`: True
- - `dataloader_persistent_workers`: False
- - `skip_memory_metrics`: True
- - `use_legacy_prediction_loop`: False
- - `push_to_hub`: False
- - `resume_from_checkpoint`: None
- - `hub_model_id`: None
- - `hub_strategy`: every_save
- - `hub_private_repo`: None
- - `hub_always_push`: False
- - `gradient_checkpointing`: False
- - `gradient_checkpointing_kwargs`: None
- - `include_inputs_for_metrics`: False
- - `include_for_metrics`: []
- - `eval_do_concat_batches`: True
- - `fp16_backend`: auto
- - `push_to_hub_model_id`: None
- - `push_to_hub_organization`: None
- - `mp_parameters`:
- - `auto_find_batch_size`: False
- - `full_determinism`: False
- - `torchdynamo`: None
- - `ray_scope`: last
- - `ddp_timeout`: 1800
- - `torch_compile`: False
- - `torch_compile_backend`: None
- - `torch_compile_mode`: None
- - `include_tokens_per_second`: False
- - `include_num_input_tokens_seen`: False
- - `neftune_noise_alpha`: None
- - `optim_target_modules`: None
- - `batch_eval_metrics`: False
- - `eval_on_start`: False
- - `use_liger_kernel`: False
- - `eval_use_gather_object`: False
- - `average_tokens_across_devices`: False
- - `prompts`: None
- - `batch_sampler`: batch_sampler
- - `multi_dataset_batch_sampler`: round_robin

  </details>

- ### Framework Versions
- - Python: 3.11.12
  - Sentence Transformers: 4.1.0
  - Transformers: 4.51.3
  - PyTorch: 2.6.0+cu124
- - Accelerate: 1.6.0
- - Datasets: 2.14.4
- - Tokenizers: 0.21.1

  ## Citation

- ### BibTeX
-
- #### Sentence Transformers
  ```bibtex
  @inproceedings{reimers-2019-sentence-bert,
      title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
@@ -289,22 +126,4 @@ You can finetune this model on your own dataset.
      publisher = "Association for Computational Linguistics",
      url = "https://arxiv.org/abs/1908.10084",
  }
- ```
-
- <!--
- ## Glossary
-
- *Clearly define terms in order to be accessible across audiences.*
- -->
-
- <!--
- ## Model Card Authors
-
- *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
- -->
-
- <!--
- ## Model Card Contact
-
- *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
- -->

  library_name: sentence-transformers
  ---

+ # Resume Matcher Transformer

+ A fine-tuned sentence transformer model based on `sentence-transformers/all-MiniLM-L6-v2`, optimized for comparing resumes with job descriptions.

+ ## Model Overview

+ This model transforms resumes and job descriptions into 384-dimensional embeddings that can be compared for semantic similarity, helping to identify the best candidates for a position.

+ ### Key Specifications
+ - **Base Model**: [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
+ - **Output Dimensions**: 384
+ - **Sequence Length**: 256 tokens maximum
+ - **Similarity Function**: Cosine Similarity
+ - **Pooling Strategy**: Mean pooling

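These settings can be checked directly from a loaded model; a quick sketch, where `"path/to/model"` is a placeholder for wherever this fine-tuned model is saved (as in the usage example below):

```python
from sentence_transformers import SentenceTransformer

# "path/to/model" is a placeholder, not an actual repository id
model = SentenceTransformer("path/to/model")

print(model.get_sentence_embedding_dimension())  # 384
print(model.max_seq_length)                      # 256
```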
+ ## Model Architecture

  ```
  SentenceTransformer(
    (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_mean_tokens': True})
    (2): Normalize()
  )
  ```
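The three modules follow the usual MiniLM recipe: contextual token embeddings, attention-mask-aware mean pooling, then L2 normalization. As a rough sketch of what that stack computes, written in plain `transformers` under the assumption that the model directory also holds the exported Hugging Face weights and tokenizer (the path is again a placeholder):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# "path/to/model" is a placeholder path, as in the usage example below
tokenizer = AutoTokenizer.from_pretrained("path/to/model")
encoder = AutoModel.from_pretrained("path/to/model")

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        token_embeddings = encoder(**batch).last_hidden_state        # (0) Transformer
    mask = batch["attention_mask"].unsqueeze(-1).float()
    pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)  # (1) mean pooling
    return F.normalize(pooled, p=2, dim=1)                           # (2) Normalize

print(embed(["Backend Python developer with Django experience."]).shape)  # torch.Size([1, 384])
```

In practice `SentenceTransformer.encode` performs all three steps, so this is only meant to illustrate the module stack.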
 
  ## Usage

  ```bash
+ # Install the required library
  pip install -U sentence-transformers
  ```

  ```python
  from sentence_transformers import SentenceTransformer
+ from sklearn.metrics.pairwise import cosine_similarity

+ # Load the model
+ model = SentenceTransformer("path/to/model")

+ # Example job description
+ job_description = "Looking for a Python backend developer with Django experience."

+ # Example resumes
+ resume1 = "Experienced Python developer with Flask and Django skills."
+ resume2 = "Teacher with 5 years in classroom management experience."

+ # Generate embeddings
+ job_embedding = model.encode(job_description)
+ resume1_embedding = model.encode(resume1)
+ resume2_embedding = model.encode(resume2)

+ # Calculate similarity
+ similarity1 = cosine_similarity([job_embedding], [resume1_embedding])[0][0]
+ similarity2 = cosine_similarity([job_embedding], [resume2_embedding])[0][0]

+ print(f"Similarity with Resume 1: {similarity1:.4f}")
+ print(f"Similarity with Resume 2: {similarity2:.4f}")
+ ```

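When screening several resumes at once, it is usually more convenient to encode them in a single batch and let the library's built-in `similarity` method build the score matrix. A small sketch along those lines (the resume strings and model path are illustrative):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("path/to/model")  # placeholder path, as above

job_description = "Looking for a Python backend developer with Django experience."
resumes = [
    "Experienced Python developer with Flask and Django skills.",
    "DevOps engineer with AWS, Docker and Jenkins experience.",
    "Teacher with 5 years in classroom management experience.",
]

# Encode the job description and all resumes in one call each
job_embedding = model.encode([job_description])
resume_embeddings = model.encode(resumes)

# Cosine similarity matrix of shape (1, len(resumes))
scores = model.similarity(job_embedding, resume_embeddings)

# Rank resumes from best to worst match
for score, resume in sorted(zip(scores[0].tolist(), resumes), reverse=True):
    print(f"{score:.4f}  {resume}")
```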
 
  ## Training Details

+ ### Dataset Information
+ - **Size**: 4 training samples
+ - **Format**: Pairs of text samples with similarity labels (0.0 = no match, 1.0 = match)
+ - **Loss Function**: CosineSimilarityLoss (MSE between the predicted cosine similarity and the label)

+ ### Sample Training Data
+ | Resume/Profile | Job Description | Match Score |
+ |:---------------|:----------------|:------------|
+ | Teacher with classroom management experience | Looking for AI/ML engineer with Python experience | 0.0 |
+ | DevOps engineer with AWS, Docker, Jenkins | Hiring cloud infrastructure engineer with AWS and CI/CD tools | 1.0 |
+ | Experienced Python developer with Flask and Django | Looking for backend Python developer with Django experience | 1.0 |
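A minimal sketch of how pairs like these can be assembled for `CosineSimilarityLoss`, reusing the `sentence_0`/`sentence_1`/`label` column layout from the training run (the rows below are abbreviated and illustrative):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# (resume, job description, similarity label) pairs in the same column layout as above
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "Teacher with 5 years in classroom management experience.",
        "Experienced Python developer with Flask and Django skills.",
    ],
    "sentence_1": [
        "Looking for AI/ML engineer with Python experience.",
        "Looking for backend Python developer with Django experience.",
    ],
    "label": [0.0, 1.0],
})

# Regresses the cosine similarity of the two embeddings onto the label with an MSE criterion
train_loss = losses.CosineSimilarityLoss(model)
```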
 
+ ## Training Hyperparameters

+ - Training epochs: 4
+ - Batch size: 2
+ - Learning rate: 5e-05
+ - Optimizer: AdamW

+ <details><summary>View all hyperparameters</summary>

  - `per_device_train_batch_size`: 2
  - `per_device_eval_batch_size`: 2
+ - `num_train_epochs`: 4
  - `learning_rate`: 5e-05
  - `weight_decay`: 0.0
  - `adam_beta1`: 0.9
  - `adam_beta2`: 0.999
  - `adam_epsilon`: 1e-08
  - `max_grad_norm`: 1
  - `lr_scheduler_type`: linear
  - `warmup_steps`: 0
  - `seed`: 42

  </details>

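Putting those values together, a run roughly matching the recorded hyperparameters could be reproduced with the Sentence Transformers trainer. This is a sketch, not the exact training script: it assumes a `train_dataset` and `train_loss` like those outlined in the training-data section above, and the output directory name is illustrative.

```python
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

args = SentenceTransformerTrainingArguments(
    output_dir="resume-matcher-minilm",  # illustrative
    num_train_epochs=4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    learning_rate=5e-5,
    warmup_steps=0,
    weight_decay=0.0,
    max_grad_norm=1.0,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    seed=42,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # assumed to be defined as sketched earlier
    loss=train_loss,              # CosineSimilarityLoss(model)
)
trainer.train()
```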
+ ## Framework Versions
  - Sentence Transformers: 4.1.0
  - Transformers: 4.51.3
  - PyTorch: 2.6.0+cu124
+ - Python: 3.11.12

  ## Citation

  ```bibtex
  @inproceedings{reimers-2019-sentence-bert,
      title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
      publisher = "Association for Computational Linguistics",
      url = "https://arxiv.org/abs/1908.10084",
  }
+ ```