Resume Matcher Transformer
A fine-tuned sentence-transformer model based on sentence-transformers/all-MiniLM-L6-v2, optimized for comparing resumes with job descriptions.
Model Overview
This model transforms resumes and job descriptions into 384-dimensional embeddings that can be compared for semantic similarity, helping to identify the best candidates for a position.
Key Specifications
- Base Model: sentence-transformers/all-MiniLM-L6-v2
- Output Dimensions: 384
- Maximum Sequence Length: 256 tokens
- Similarity Function: Cosine Similarity
- Pooling Strategy: Mean pooling
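Because the model's final layer L2-normalizes each embedding (see the Normalize() module in the architecture below), cosine similarity between two outputs reduces to a plain dot product. A minimal numpy sketch with dummy 384-dimensional vectors (not real model output) illustrates the equivalence:

```python
import numpy as np

# Dummy 384-dim vectors standing in for model embeddings
rng = np.random.default_rng(0)
u = rng.normal(size=384)
v = rng.normal(size=384)

# L2-normalize, as the model's Normalize() layer does
u_hat = u / np.linalg.norm(u)
v_hat = v / np.linalg.norm(v)

# For unit vectors, cosine similarity is just the dot product
cos_sim = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
assert np.isclose(cos_sim, np.dot(u_hat, v_hat))
```

This is why normalized embeddings can be compared efficiently with a single matrix multiplication when scoring many candidates at once.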
Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_mean_tokens': True})
  (2): Normalize()
)
```
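The Pooling module above uses mean pooling: token embeddings are averaged into one sentence vector, with padded positions excluded via the attention mask. A toy numpy sketch (dimension 6 instead of the model's 384, for readability):

```python
import numpy as np

# Toy example: 4 token embeddings of dimension 6 (the real model uses 384)
token_embeddings = np.arange(24, dtype=float).reshape(4, 6)
attention_mask = np.array([1, 1, 1, 0])  # last position is padding

# Mean pooling: sum the unmasked token embeddings, divide by their count
mask = attention_mask[:, None]
pooled = (token_embeddings * mask).sum(axis=0) / mask.sum()

# Only the three unmasked rows contribute to the mean
assert np.allclose(pooled, token_embeddings[:3].mean(axis=0))
```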
Usage
```shell
# Install the required library
pip install -U sentence-transformers
```

```python
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Load the model
model = SentenceTransformer("path/to/model")

# Example job description
job_description = "Looking for a Python backend developer with Django experience."

# Example resumes
resume1 = "Experienced Python developer with Flask and Django skills."
resume2 = "Teacher with 5 years of classroom management experience."

# Generate embeddings
job_embedding = model.encode(job_description)
resume1_embedding = model.encode(resume1)
resume2_embedding = model.encode(resume2)

# Calculate similarity
similarity1 = cosine_similarity([job_embedding], [resume1_embedding])[0][0]
similarity2 = cosine_similarity([job_embedding], [resume2_embedding])[0][0]

print(f"Similarity with Resume 1: {similarity1:.4f}")
print(f"Similarity with Resume 2: {similarity2:.4f}")
```
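In practice you usually score many resumes against one job description and rank them. The sketch below uses random stand-in embeddings so it runs without downloading the model; in real use, replace them with `model.encode(job_description)` and `model.encode(list_of_resumes)`:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in embeddings (replace with model.encode(...) in practice);
# the real model outputs 384-dimensional vectors.
rng = np.random.default_rng(42)
job_embedding = rng.normal(size=(1, 384))
resume_embeddings = rng.normal(size=(5, 384))  # 5 candidate resumes

# One call scores every resume against the job description
scores = cosine_similarity(job_embedding, resume_embeddings)[0]

# Rank candidates from best to worst match
ranking = np.argsort(scores)[::-1]
for rank, idx in enumerate(ranking, start=1):
    print(f"Rank {rank}: resume {idx} (score {scores[idx]:.4f})")
```

Batch-encoding all resumes in one `model.encode` call is also substantially faster than encoding them one at a time.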
Training Details
Dataset Information
- Size: 4 training samples
- Format: Pairs of text samples with similarity labels (0.0 = no match, 1.0 = match)
- Loss Function: CosineSimilarityLoss (regresses the cosine similarity of the two embeddings onto the label using mean squared error)
Sample Training Data
| Resume/Profile | Job Description | Match Score |
|---|---|---|
| Teacher with classroom management experience | Looking for AI/ML engineer with Python experience | 0.0 |
| DevOps engineer with AWS, Docker, Jenkins | Hiring cloud infrastructure engineer with AWS and CI/CD tools | 1.0 |
| Experienced Python developer with Flask and Django | Looking for backend Python developer with Django experience | 1.0 |
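The CosineSimilarityLoss used for training computes the cosine similarity of each pair's embeddings and penalizes its squared distance from the label. A minimal numpy sketch of the per-pair objective (an illustration, not the library's implementation):

```python
import numpy as np

def cosine_similarity_loss(u, v, label):
    """Squared error between the embeddings' cosine similarity and the
    target label, mirroring what CosineSimilarityLoss optimizes per pair."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return (cos - label) ** 2

rng = np.random.default_rng(0)
u = rng.normal(size=384)

# A pair labeled 1.0 (match): identical embeddings give zero loss
assert np.isclose(cosine_similarity_loss(u, u, 1.0), 0.0)

# A pair labeled 1.0 but with opposite embeddings: maximal loss
assert np.isclose(cosine_similarity_loss(u, -u, 1.0), 4.0)
```

Training pushes matching pairs' embeddings toward cosine similarity 1.0 and non-matching pairs toward 0.0.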
Training Hyperparameters
- Training epochs: 4
- Batch size: 2
- Learning rate: 5e-05
- Optimizer: AdamW
All hyperparameters:
- per_device_train_batch_size: 2
- per_device_eval_batch_size: 2
- num_train_epochs: 4
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- lr_scheduler_type: linear
- warmup_steps: 0
- seed: 42
Framework Versions
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Python: 3.11.12
Citation
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```