Multilingual Style Representation
This is the Style Representation model presented in the paper Leveraging Multilingual Training for Authorship Representation: Enhancing Generalization across Languages and Domains.
The Style Representation model encodes documents written by the same author as nearby vectors in the embedding space. It can be used for authorship attribution, style similarity, machine-generated text detection, and more (see the sketch at the end of the Usage section below).
For training and evaluation code, refer to our repository here.
For the Style Representation model based on Llama-3.2, refer to Blablablab/multilingual-style-representation-Llama-3.2.
Model Details
- Model Type: Sentence Transformer
- Base model: FacebookAI/xlm-roberta-large
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
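If helpful, these properties can be confirmed programmatically once the model is loaded (loading is shown in the Usage section below). This is a minimal sketch using standard Sentence Transformers accessors, assuming a recent sentence-transformers release:
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Blablablab/multilingual-style-representation")
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 1024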
Usage
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Blablablab/multilingual-style-representation")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
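As a rough illustration of the authorship use case mentioned above, the snippet below ranks two candidate texts by style similarity to a query text. The texts, variable names, and the idea of picking the highest cosine similarity are illustrative assumptions, not a prescribed attribution procedure:
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Blablablab/multilingual-style-representation")
# Hypothetical example: one query document and two candidate authors' known writing.
query = "honestly i just think the whole thing was blown way out of proportion lol"
candidates = [
    "tbh the meeting was fine, people just love drama lol",          # informal style
    "The committee convened at noon; proceedings were uneventful.",  # formal style
]
query_emb = model.encode([query])
cand_embs = model.encode(candidates)
# Cosine similarities between the query and each candidate document.
scores = model.similarity(query_emb, cand_embs)  # shape [1, 2]
print(scores)
print("Most stylistically similar candidate:", scores.argmax().item())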