Update README.md
README.md (changed)
@@ -109,6 +109,8 @@ Data used for model training and how the data was processed.
 Using Gemma as the base model, Athene CodeGemma 2 7B pretrained variants are further trained on an additional 500 billion tokens of primarily English language data from publicly available code repositories, open source mathematics datasets and synthetically generated code.

+### Example: Athene CodeGemma 2 7B v1.1
+
+Athene CodeGemma 2 7B v1.1 successfully created a snake game without errors, compared to the original codegemma-7b-it.

 # Uploaded model