Update README.md
README.md
@@ -176,3 +176,14 @@ We would like to thank Yejin Choi, Liwei Jiang, Arthur Conmy, Grace Braithwaite,
GPUs donated to EleutherAI by CoreWeave enabled our research to develop our filters. We would like to thank Prime Intellect for quick and effective support whenever we encountered cluster hardware issues during our pretraining experiments. Finally, we would like to thank GW4 and the UK Met Office for their maintenance of the Isambard compute cluster, which enabled our tampering experiments.

Our README was inspired by the Pythia, Qwen, and OLMo2 model suites.
+
+# Citation
+
+```
+@article{obrien2025deepignorance,
+  title={Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs},
+  author={O'Brien, Kyle and Casper, Stephen and Anthony, Quentin and Korbak, Tomek and Kirk, Robert and Davies, Xander and Mishra, Ishan and Irving, Geoffrey and Gal, Yarin and Biderman, Stella},
+  journal={arXiv preprint arXiv:2508.06601},
+  year={2025}
+}
+```