Om-Shandilya committed
Commit 73b531f · verified · 1 Parent(s): 9cdbd6b

Delete dapt_minilm_sentence_transformer/README.md

dapt_minilm_sentence_transformer/README.md DELETED
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---

# SentenceTransformer

This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity

<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

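The similarity function listed above is plain cosine similarity: the dot product of two embeddings divided by the product of their norms. A minimal sketch, using toy 3-dimensional vectors in place of the model's 384-dimensional embeddings:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for 384-dim sentence embeddings
u = np.array([1.0, 0.0, 1.0])
v = np.array([1.0, 1.0, 0.0])
print(cosine_similarity(u, v))  # 0.5
print(cosine_similarity(u, u))  # 1.0
```

Identical vectors score 1.0 and orthogonal vectors score 0.0, which is why the diagonal of the similarity matrix in the usage example further down is all ones.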
### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

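The `Pooling` module above has `pooling_mode_mean_tokens: True`, meaning each sentence embedding is the attention-mask-weighted mean of the transformer's token embeddings, so padding positions are excluded from the average. A NumPy sketch of that pooling step with toy shapes (not the actual model code; the real token embeddings are 384-dimensional):

```python
import numpy as np

# Toy batch: 2 sequences, 4 token positions, 3-dim token embeddings
token_embeddings = np.arange(24, dtype=np.float64).reshape(2, 4, 3)
attention_mask = np.array([[1, 1, 1, 0],   # last position is padding
                           [1, 1, 0, 0]])  # last two positions are padding

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings over non-padding positions."""
    mask = attention_mask[..., None].astype(token_embeddings.dtype)  # (B, T, 1)
    summed = (token_embeddings * mask).sum(axis=1)                   # (B, D)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                   # avoid /0
    return summed / counts

pooled = mean_pool(token_embeddings, attention_mask)
print(pooled.shape)  # (2, 3)
```

Masking before averaging matters: without it, zero-padded positions would drag every short sequence's embedding toward the origin.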
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'The weather is lovely today.',
    "It's so sunny outside!",
    'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.6310, 0.6802],
#         [0.6310, 1.0000, 0.5113],
#         [0.6802, 0.5113, 1.0000]])
```

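Given embeddings like the ones above, semantic search reduces to ranking corpus embeddings by cosine similarity against a query embedding. A self-contained sketch with toy 2-dimensional vectors standing in for the model's 384-dimensional output:

```python
import numpy as np

def top_k_by_cosine(query: np.ndarray, corpus: np.ndarray, k: int = 2) -> list[int]:
    """Return indices of the k corpus rows most similar to the query."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q                      # cosine similarity per corpus row
    return list(np.argsort(-scores)[:k])

# Toy "embeddings": rows 0 and 2 point near the query direction
corpus = np.array([[1.0, 0.1], [0.0, 1.0], [0.9, 0.2]])
query = np.array([1.0, 0.0])
print(top_k_by_cosine(query, corpus))  # [0, 2]
```

In practice you would pass `model.encode(corpus_sentences)` and `model.encode(query)` in place of the toy arrays; normalizing once up front lets the ranking be a single matrix-vector product.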
<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Framework Versions
- Python: 3.10.18
- Sentence Transformers: 5.1.0
- Transformers: 4.55.2
- PyTorch: 2.5.1+cu121
- Accelerate: 1.10.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4

## Citation

### BibTeX

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->