---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity

---

# dwulff/mpnet-personality

This is a [sentence-transformers](https://www.SBERT.net) model that maps personality-related items or texts into a 768-dimensional dense vector space. It can be used for many tasks in personality psychology, such as clustering personality items and scales or mapping personality scales to personality constructs.

The model was generated by fine-tuning [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) on unsigned empirical correlations between 200k pairs of personality items. It therefore encodes the content of personality-related texts independently of item direction (e.g., negation).

See [Wulff & Mata (2025)](https://doi.org/10.1038/s41562-024-02089-y) (see [Supplement](https://static-content.springer.com/esm/art%3A10.1038%2Fs41562-024-02089-y/MediaObjects/41562_2024_2089_MOESM1_ESM.pdf)) for details.

## Usage

Make sure [sentence-transformers](https://www.SBERT.net) is installed:

```bash
# latest version
pip install -U sentence-transformers

# latest dev version
pip install git+https://github.com/UKPLab/sentence-transformers.git
```

You can extract embeddings in the following way:

```python
from sentence_transformers import SentenceTransformer

# personality sentences
sentences = ["Rarely think about how I feel.", "Make decisions quickly."]

# load model
model = SentenceTransformer('dwulff/mpnet-personality')

# extract embeddings
embeddings = model.encode(sentences)
print(embeddings)
```
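
The embeddings can then be used for the tasks mentioned above, for example clustering personality items. Below is a minimal sketch, assuming scikit-learn >= 1.2 is installed (scikit-learn is not a dependency of this model, and the items and cluster count are purely illustrative):

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

# illustrative personality items (not taken from a specific inventory)
items = [
    "Rarely think about how I feel.",
    "Pay attention to my emotions.",
    "Make decisions quickly.",
    "Take my time before acting.",
]

model = SentenceTransformer('dwulff/mpnet-personality')
embeddings = model.encode(items)

# cluster items by the cosine distance between their embeddings
clustering = AgglomerativeClustering(n_clusters=2, metric="cosine", linkage="average")
labels = clustering.fit_predict(embeddings)
print(labels)  # cluster assignment per item
```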

## Evaluation Results

The model has been evaluated on public personality data. For standard personality inventories, such as Big Five or HEXACO inventories, the model predicts empirical correlations between personality items at Pearson r ~ .6 and empirical correlations between scales at Pearson r ~ .7.

Performance is higher (r ~ .9) for the many common personality items included in training, likely reflecting memorization. Performance will be lower for more specialized personality assessments and for texts other than personality items, as well as for personality factors, where the variance in correlations is reduced.
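
Since the model was fine-tuned with a cosine-similarity objective on unsigned correlations, predicted item correlations are obtained as cosine similarities between item embeddings. A minimal sketch (the item pair is illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('dwulff/mpnet-personality')

# illustrative item pair
embeddings = model.encode(["Rarely think about how I feel.",
                           "Pay attention to my emotions."])

# cosine similarity serves as the predicted unsigned correlation
predicted_r = float(util.cos_sim(embeddings[0], embeddings[1]))
print(predicted_r)
```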

See [Wulff & Mata (2025)](https://doi.org/10.1038/s41562-024-02089-y) (see [Supplement](https://static-content.springer.com/esm/art%3A10.1038%2Fs41562-024-02089-y/MediaObjects/41562_2024_2089_MOESM1_ESM.pdf)) for details. 

## Citing


```bibtex
@article{wulff2024taxonomic,
  author  = {Wulff, Dirk U. and Mata, Rui},
  title   = {Semantic embeddings reveal and address taxonomic incommensurability in psychological measurement},
  journal = {Nature Human Behaviour},
  year    = {2025},
  doi     = {10.1038/s41562-024-02089-y}
}
```

## Training
The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 3125 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` 

Parameters of the fit()-Method:
```
{
    "epochs": 3,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 625,
    "weight_decay": 0.01
}
```
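
For reference, this configuration maps onto the sentence-transformers `fit()` API roughly as follows. This is a sketch, not the original training script: the item pairs and correlation labels below are made up for illustration, whereas the actual training used 200k item pairs with unsigned empirical correlations.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# hypothetical training pairs: (item_a, item_b, unsigned empirical correlation)
pairs = [
    ("Rarely think about how I feel.", "Pay attention to my emotions.", 0.45),
    ("Make decisions quickly.", "Take my time before acting.", 0.30),
]
train_examples = [InputExample(texts=[a, b], label=r) for a, b, r in pairs]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)

# start from the base model and fit with the parameters documented above
model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')
train_loss = losses.CosineSimilarityLoss(model)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    warmup_steps=625,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
)
```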


## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
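
Because the final `Normalize()` module L2-normalizes the outputs, dot products between embeddings equal cosine similarities, which can be verified directly:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('dwulff/mpnet-personality')
embeddings = model.encode(["Rarely think about how I feel."])

# each embedding has unit L2 norm
print(np.linalg.norm(embeddings, axis=1))  # approximately 1.0
```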