Improve model card: Add metadata, link to paper and code, and basic description
This PR improves the model card by:
* Adding the `text-generation` pipeline tag.
* Linking to the paper.
* Linking to the GitHub repository.
* Adding license information (assumed MIT).
* Adding a brief model description.
* Adding a basic usage example.
* Adding tags.
Please review and update the model card with more details as needed.
README.md
CHANGED

Removed: the previous auto-generated model card template ("This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated."), consisting of `library_name: transformers` front matter, the standard section headings, and `[More Information Needed]` placeholders throughout.

Added:
---
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- text-generation
- causal-lm
license: mit
---

# Model Card for Model ID

This model is a large language model trained using the methods described in the paper [Pretraining Language Models for Diachronic Linguistic Change Discovery](https://huggingface.co/papers/2504.05523). It can be used for text generation tasks.

## Model Details

### Model Description

This model is a large language model trained on a historical text corpus. Further details about the model architecture and training are available in the provided links.

* **Developed by:** [More Information Needed]
* **Funded by [optional]:** [More Information Needed]
* **Shared by [optional]:** [More Information Needed]
* **Model type:** Llama
* **Language(s) (NLP):** English
* **License:** MIT
* **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

* **Repository:** [https://github.com/comp-int-hum/historical-perspectival-lm](https://github.com/comp-int-hum/historical-perspectival-lm)
* **Paper [optional]:** [https://huggingface.co/papers/2504.05523](https://huggingface.co/papers/2504.05523)
* **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

The model can be used directly for text generation tasks, such as generating text in a specific historical style or completing text prompts.

### Downstream Use [optional]

The model could be fine-tuned for various downstream tasks such as text classification, summarization, or question answering related to historical text.

### Out-of-Scope Use

The model may not perform well on tasks outside of text generation and historical text analysis. Its performance on contemporary language tasks is likely to be suboptimal.

## Bias, Risks, and Limitations

The model's training data consists of historical texts, which may reflect biases present in those texts. The model may generate outputs that perpetuate these biases, and its performance will vary based on the characteristics of the input text. More information is needed for a comprehensive analysis of bias and risk.

### Recommendations

Users should be aware of the potential for bias in the model's outputs and use caution when interpreting its predictions. The model should not be used for applications where biased or inaccurate outputs could have harmful consequences.

## How to Get Started with the Model

Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "your_model_id"  # Replace this with the actual model id.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "This is a test prompt:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
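
Alternatively, assuming the checkpoint is a standard causal LM (as the `text-generation` pipeline tag indicates), the high-level `pipeline` API gives a shorter route to the same result. `"your_model_id"` is a placeholder, not the actual repository name, and the decoding settings shown are illustrative rather than values from the paper:

```python
from transformers import pipeline

# "your_model_id" is a placeholder; replace it with the actual Hub repo id.
generator = pipeline("text-generation", model="your_model_id")

# max_new_tokens and do_sample are illustrative defaults, not recommendations
# from the paper; tune them for your use case.
result = generator("This is a test prompt:", max_new_tokens=50, do_sample=True)
print(result[0]["generated_text"])
```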

## Training Details

### Training Data

[More Information Needed, Link to Dataset Card and description]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

* **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed, Link to Dataset Card and description]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

[More Information Needed]

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

* **Hardware Type:** [More Information Needed]
* **Hours used:** [More Information Needed]
* **Cloud Provider:** [More Information Needed]
* **Compute Region:** [More Information Needed]
* **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

If there is a paper or blog post introducing the model, the APA and BibTeX information for that should go in this section.

**BibTeX:**

[More Information Needed, Add BibTeX for 2504.05523]

**APA:**

[More Information Needed, Add APA for 2504.05523]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]