legolasyiu committed · verified
Commit 992728e · 1 Parent(s): ca9d1e6

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -17,7 +17,7 @@ Summary description and brief definition of inputs and outputs.
 
 ### Description
 
-Athene CodeGemma 2 7B v1.1 is a collection of lightweight open code models built on top of Gemma. CodeGemma models are text-to-text and text-to-code decoder-only models, available as a 7 billion parameter pretrained variant that specializes in code completion and code generation tasks, a 7 billion parameter instruction-tuned variant for code chat and instruction following, and a 2 billion parameter pretrained variant for fast code completion.
+Athene CodeGemma 2 7B v1.2 is a collection of lightweight open code models built on top of Gemma. CodeGemma models are text-to-text and text-to-code decoder-only models, available as a 7 billion parameter pretrained variant that specializes in code completion and code generation tasks, a 7 billion parameter instruction-tuned variant for code chat and instruction following, and a 2 billion parameter pretrained variant for fast code completion.
 Supervised Fine-tuning with coding datasets.
 
 similar to:
@@ -37,8 +37,8 @@ This model is intended to answer questions about code fragments, to generate code…
 
 ```python
 from transformers import GemmaTokenizer, AutoModelForCausalLM
-tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.1")
-model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.1")
+tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.2")
+model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.2")
 input_text = "Write me a Python function to calculate the nth fibonacci number."
 input_ids = tokenizer(input_text, return_tensors="pt")
 outputs = model.generate(**input_ids)
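
The snippet in this hunk stops at `generate` and never prints a result. A minimal sketch of the missing last step, using only the standard Transformers decode call (not part of this commit):

```python
# Turn the generated token ids back into text (sketch; assumes the
# `tokenizer` and `outputs` variables from the snippet above).
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```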
@@ -56,7 +56,7 @@ Let's load the model and apply the chat template to a conversation. In this example…
 
 from transformers import AutoTokenizer, AutoModelForCausalLM
 import transformers
 import torch
-model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.1"
+model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.2"
 dtype = torch.bfloat16
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 model = AutoModelForCausalLM.from_pretrained(
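
This hunk is cut off inside the `from_pretrained(` call. A minimal sketch of how the load and the chat-template flow plausibly continue, assuming only standard Transformers APIs; the conversation content is illustrative, not part of the commit:

```python
# Finish loading in bfloat16, then build a prompt with the tokenizer's
# chat template and generate a reply (sketch; variables from the hunk above).
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=dtype,  # bfloat16, set above
)

chat = [{"role": "user", "content": "Write a hello world program"}]  # illustrative
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```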
@@ -110,7 +110,7 @@ Data used for model training and how the data was processed.
 
 Supervised Fine-tuning with Python and Java coding datasets
 
-### Example: Athene CodeGemma 2 7B v1.1
+### Example: Athene CodeGemma 2 7B v1.2
 Athene CodeGemma 2 7B v1.1 successfully created a snake game without errors, compared to the original codegemma-7b-it