---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- ecommerce
- e-commerce
- retail
- marketplace
- shopping
- amazon
- ebay
- alibaba
- google
- rakuten
- bestbuy
- walmart
- flipkart
- wayfair
- shein
- target
- etsy
- shopify
- taobao
- asos
- carrefour
- costco
- overstock
- pretraining
- encoder
- language-modeling
- foundation-model
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---

# RexBERT-base-embed-pf-v0.3

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Maximum Sequence Length:** 2048 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 2048, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")

# Run inference
sentences = [
    'The weather is lovely today.',
    "It's so sunny outside!",
    'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[0.9961, 0.8477, 0.8750],
#         [0.8477, 0.9961, 0.8047],
#         [0.8750, 0.8047, 1.0078]], dtype=torch.bfloat16)
```

## Training Details

### Framework Versions

- Python: 3.12.8
- Sentence Transformers: 5.1.1
- Transformers: 4.53.3
- PyTorch: 2.7.0
- Accelerate: 1.10.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4

## Citation

### BibTeX
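
This card does not yet list a model-specific citation; the entry below is the standard citation for the Sentence Transformers library that this model builds on.

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```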