SentenceTransformer based on yahyaabd/allstats-search-mini-v1-1-mnrl

This is a sentence-transformers model finetuned from yahyaabd/allstats-search-mini-v1-1-mnrl on the bps-sts-dataset-v1 dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: yahyaabd/allstats-search-mini-v1-1-mnrl
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: bps-sts-dataset-v1

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
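
These settings (a 128-token maximum sequence length and 384-dimensional mean-pooled embeddings) can be checked at runtime once the model is loaded. A minimal sketch using the standard SentenceTransformer accessors:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("yahyaabd/allstats-search-mini-v1-1-mnrl-1")

# Maximum number of tokens per input; longer inputs are truncated
print(model.get_max_seq_length())                # 128
# Dimensionality of the sentence embeddings produced by the pooling layer
print(model.get_sentence_embedding_dimension())  # 384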

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("yahyaabd/allstats-search-mini-v1-1-mnrl-1")
# Run inference
sentences = [
    'PDRB per kapita Provinsi Riau sangat dipengaruhi oleh harga minyak bumi dunia.',
    'The Riau Islands province is known for its beautiful beaches and marine tourism.',
    'Di wilayah perkotaan, angka kemiskinan pada Maret 2023 adalah 7,29%.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
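
For semantic search, the same encode and similarity calls can rank a document collection against a query. The sketch below is illustrative: the query is taken from the training samples shown later in this card, while the corpus titles are made-up examples, not an official publication catalog.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("yahyaabd/allstats-search-mini-v1-1-mnrl-1")

# Illustrative query and corpus (example titles only)
query = "bagaimana capaian Tujuan Pembangunan Berkelanjutan di Indonesia?"
corpus = [
    "Laporan Pencapaian Indikator Tujuan Pembangunan Berkelanjutan (TPB/SDGs) Indonesia, Edisi 2024",
    "Statistik Kemiskinan dan Ketimpangan Maret 2023",
    "Produk Domestik Regional Bruto Provinsi Riau Menurut Lapangan Usaha",
]

# Encode both sides and rank the corpus by cosine similarity to the query
query_embedding = model.encode([query])
corpus_embeddings = model.encode(corpus)
scores = model.similarity(query_embedding, corpus_embeddings)  # shape: [1, 3]
best = int(scores[0].argmax())
print(corpus[best], float(scores[0][best]))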

Evaluation

Metrics

Semantic Similarity

Metric            sts-dev   sts-test
pearson_cosine    0.8599    0.8885
spearman_cosine   0.8569    0.8818
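
Metrics of this form (pearson_cosine and spearman_cosine under named evaluators such as sts-dev) are typically produced with the Sentence Transformers EmbeddingSimilarityEvaluator. A minimal sketch for re-running such an evaluation follows; the dataset repository id (yahyaabd/bps-sts-dataset-v1) and the validation split name are assumptions and may need adjusting to where bps-sts-dataset-v1 is actually hosted.

from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SimilarityFunction
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("yahyaabd/allstats-search-mini-v1-1-mnrl-1")

# Assumed repo id and split name for bps-sts-dataset-v1; adjust as needed
eval_ds = load_dataset("yahyaabd/bps-sts-dataset-v1", split="validation")

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=eval_ds["sentence1"],
    sentences2=eval_ds["sentence2"],
    scores=eval_ds["score"],
    main_similarity=SimilarityFunction.COSINE,
    name="sts-dev",
)
print(evaluator(model))  # includes Pearson and Spearman cosine correlations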

Training Details

Training Dataset

bps-sts-dataset-v1

  • Dataset: bps-sts-dataset-v1 at 5c8f96e
  • Size: 2,436 training samples
  • Columns: sentence1, sentence2, and score
  • Approximate statistics based on the first 1000 samples:
    • sentence1 (string): min 6 tokens, mean 20.49 tokens, max 36 tokens
    • sentence2 (string): min 9 tokens, mean 20.71 tokens, max 45 tokens
    • score (float): min 0.0, mean 0.51, max 1.0
  • Samples:
    • sentence1: bagaimana capaian Tujuan Pembangunan Berkelanjutan di Indonesia?
      sentence2: Laporan Pencapaian Indikator Tujuan Pembangunan Berkelanjutan (TPB/SDGs) Indonesia, Edisi 2024
      score: 0.8
    • sentence1: Jumlah perpustakaan umum di Indonesia tahun 2022 sebanyak 170.000 unit.
      sentence2: Minat baca masyarakat Indonesia masih perlu ditingkatkan melalui berbagai program literasi.
      score: 0.4
    • sentence1: Jumlah sekolah negeri jenjang SMP di Kota Bandar Lampung adalah 30 sekolah.
      sentence2: Laju deforestasi di Provinsi Kalimantan Tengah masih mengkhawatirkan.
      score: 0.0
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
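
CosineSimilarityLoss computes the cosine similarity of each embedded (sentence1, sentence2) pair and regresses it toward the gold score using the configured loss_fct, which here is the default MSELoss. A minimal construction sketch against the base model:

import torch.nn as nn
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("yahyaabd/allstats-search-mini-v1-1-mnrl")

# The cosine similarity of each pair is regressed toward the 0..1 gold score;
# MSELoss is the default loss_fct, written out explicitly here for clarity.
loss = losses.CosineSimilarityLoss(model, loss_fct=nn.MSELoss())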
    

Evaluation Dataset

bps-sts-dataset-v1

  • Dataset: bps-sts-dataset-v1 at 5c8f96e
  • Size: 522 evaluation samples
  • Columns: sentence1, sentence2, and score
  • Approximate statistics based on the first 522 samples:
    • sentence1 (string): min 9 tokens, mean 20.83 tokens, max 39 tokens
    • sentence2 (string): min 8 tokens, mean 20.84 tokens, max 44 tokens
    • score (float): min 0.0, mean 0.5, max 1.0
  • Samples:
    • sentence1: Persentase desa yang memiliki fasilitas internet di Provinsi Y pada tahun 2021 adalah 85%.
      sentence2: Luas perkebunan kelapa sawit di Provinsi Y pada tahun 2021 adalah 500.000 hektar.
      score: 0.2
    • sentence1: Kontribusi sektor UMKM terhadap PDRB Kota Malang pada tahun 2023 sebesar 60%.
      sentence2: Usaha Mikro, Kecil, dan Menengah menyumbang 60 persen terhadap total Produk Domestik Regional Bruto di kota pendidikan Malang pada tahun 2023.
      score: 1.0
    • sentence1: Jumlah Industri Kecil dan Menengah (IKM) di Kabupaten Tegal, Jawa Tengah, bertambah 200 unit pada tahun 2024.
      sentence2: Di Tegal, sebuah kabupaten di Jateng, terjadi penambahan 200 unit IKM sepanjang tahun 2024.
      score: 1.0
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 1e-05
  • num_train_epochs: 6
  • warmup_ratio: 0.1
  • fp16: True
  • load_best_model_at_end: True
  • label_smoothing_factor: 0.01
  • eval_on_start: True
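
Taken together, the dataset, the CosineSimilarityLoss, and the non-default hyperparameters above correspond to a standard SentenceTransformerTrainer run. The following is a minimal sketch, assuming the dataset is hosted at yahyaabd/bps-sts-dataset-v1 with train and validation splits (both assumptions) and that the output directory name mirrors the model id:

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# Assumed repo id and split names; adjust to the actual dataset location
train_ds = load_dataset("yahyaabd/bps-sts-dataset-v1", split="train")
eval_ds = load_dataset("yahyaabd/bps-sts-dataset-v1", split="validation")

model = SentenceTransformer("yahyaabd/allstats-search-mini-v1-1-mnrl")
loss = losses.CosineSimilarityLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="allstats-search-mini-v1-1-mnrl-1",  # assumed output directory
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=1e-5,
    num_train_epochs=6,
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    label_smoothing_factor=0.01,
    eval_on_start=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    loss=loss,
)
trainer.train()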

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 1e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 6
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.01
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: True
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss sts-dev_spearman_cosine sts-test_spearman_cosine
0 0 - 0.0588 0.7404 -
0.0654 10 0.0541 0.0586 0.7412 -
0.1307 20 0.0546 0.0579 0.7444 -
0.1961 30 0.0441 0.0565 0.7500 -
0.2614 40 0.0503 0.0546 0.7580 -
0.3268 50 0.0546 0.0528 0.7648 -
0.3922 60 0.0538 0.0509 0.7739 -
0.4575 70 0.0455 0.0490 0.7834 -
0.5229 80 0.0471 0.0472 0.7925 -
0.5882 90 0.0417 0.0455 0.8017 -
0.6536 100 0.0427 0.0441 0.8095 -
0.7190 110 0.0445 0.0432 0.8138 -
0.7843 120 0.0382 0.0425 0.8168 -
0.8497 130 0.0443 0.0413 0.8220 -
0.9150 140 0.0449 0.0405 0.8264 -
0.9804 150 0.0407 0.0401 0.8287 -
1.0458 160 0.0377 0.0400 0.8312 -
1.1111 170 0.0285 0.0392 0.8327 -
1.1765 180 0.033 0.0389 0.8329 -
1.2418 190 0.0299 0.0388 0.8331 -
1.3072 200 0.029 0.0387 0.8333 -
1.3725 210 0.031 0.0384 0.8340 -
1.4379 220 0.0274 0.0384 0.8351 -
1.5033 230 0.0312 0.0382 0.8367 -
1.5686 240 0.0301 0.0378 0.8383 -
1.6340 250 0.0304 0.0375 0.8390 -
1.6993 260 0.0226 0.0374 0.8389 -
1.7647 270 0.0264 0.0373 0.8399 -
1.8301 280 0.0295 0.0370 0.8418 -
1.8954 290 0.0298 0.0368 0.8419 -
1.9608 300 0.0291 0.0366 0.8422 -
2.0261 310 0.0279 0.0365 0.8426 -
2.0915 320 0.0231 0.0363 0.8432 -
2.1569 330 0.0249 0.0361 0.8446 -
2.2222 340 0.0253 0.0359 0.8454 -
2.2876 350 0.024 0.0358 0.8463 -
2.3529 360 0.0239 0.0357 0.8471 -
2.4183 370 0.0222 0.0355 0.8473 -
2.4837 380 0.0284 0.0354 0.8476 -
2.5490 390 0.0176 0.0353 0.8486 -
2.6144 400 0.0184 0.0352 0.8489 -
2.6797 410 0.023 0.0351 0.8495 -
2.7451 420 0.0201 0.0351 0.8494 -
2.8105 430 0.0252 0.0351 0.8499 -
2.8758 440 0.0206 0.0350 0.8503 -
2.9412 450 0.0188 0.0350 0.8499 -
3.0065 460 0.017 0.0348 0.8501 -
3.0719 470 0.0174 0.0347 0.8505 -
3.1373 480 0.0171 0.0345 0.8515 -
3.2026 490 0.0226 0.0344 0.8520 -
3.2680 500 0.0233 0.0344 0.8520 -
3.3333 510 0.0177 0.0344 0.8523 -
3.3987 520 0.0155 0.0343 0.8522 -
3.4641 530 0.0155 0.0344 0.8522 -
3.5294 540 0.0249 0.0343 0.8523 -
3.5948 550 0.0177 0.0343 0.8522 -
3.6601 560 0.0149 0.0343 0.8520 -
3.7255 570 0.0178 0.0343 0.8517 -
3.7908 580 0.0181 0.0343 0.8520 -
3.8562 590 0.018 0.0342 0.8525 -
3.9216 600 0.0178 0.0341 0.8525 -
3.9869 610 0.0225 0.0340 0.8530 -
4.0523 620 0.0194 0.0339 0.8541 -
4.1176 630 0.0145 0.0338 0.8548 -
4.1830 640 0.0151 0.0337 0.8554 -
4.2484 650 0.0187 0.0336 0.8560 -
4.3137 660 0.0142 0.0336 0.8561 -
4.3791 670 0.0162 0.0336 0.8557 -
4.4444 680 0.0167 0.0335 0.8558 -
4.5098 690 0.013 0.0335 0.8555 -
4.5752 700 0.0174 0.0336 0.8555 -
4.6405 710 0.0156 0.0336 0.8556 -
4.7059 720 0.0155 0.0336 0.8555 -
4.7712 730 0.0179 0.0336 0.8553 -
4.8366 740 0.0158 0.0335 0.8553 -
4.9020 750 0.0143 0.0335 0.8553 -
4.9673 760 0.019 0.0335 0.8557 -
5.0327 770 0.0143 0.0334 0.8559 -
5.0980 780 0.0136 0.0334 0.8559 -
5.1634 790 0.0138 0.0334 0.8560 -
5.2288 800 0.0134 0.0333 0.8561 -
5.2941 810 0.0173 0.0333 0.8563 -
5.3595 820 0.0128 0.0333 0.8562 -
5.4248 830 0.0145 0.0333 0.8564 -
5.4902 840 0.0153 0.0333 0.8566 -
5.5556 850 0.0166 0.0333 0.8566 -
5.6209 860 0.0179 0.0332 0.8569 -
5.6863 870 0.0151 0.0332 0.8569 -
5.7516 880 0.0168 0.0332 0.8570 -
5.8170 890 0.0129 0.0332 0.8570 -
5.8824 900 0.015 0.0332 0.8569 -
5.9477 910 0.0148 0.0332 0.8569 -
-1 -1 - - - 0.8818
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.11.12
  • Sentence Transformers: 3.4.0
  • Transformers: 4.51.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.6.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}