legolasyiu committed on
Commit 9c26e12 · verified · 1 Parent(s): eaaa3e9

Update README.md

Files changed (1): README.md (+181, -0)

README.md CHANGED

  - trl
---

## Model Information

Summary description and brief definition of inputs and outputs.

Fine-tuned on EpistemeAI's coding and reasoning dataset.

### Description

Athene CodeGemma 2 7B v1.3 builds on CodeGemma, a collection of lightweight open code models built on top of Gemma. CodeGemma models are text-to-text and text-to-code decoder-only models, available as a 7 billion parameter pretrained variant that specializes in code completion and code generation tasks, a 7 billion parameter instruction-tuned variant for code chat and instruction following, and a 2 billion parameter pretrained variant for fast code completion. This model was further refined with supervised fine-tuning on coding datasets.

Its capabilities are similar to the CodeGemma family:

|                                  | [codegemma-2b](https://huggingface.co/google/codegemma-2b) | [codegemma-7b](https://huggingface.co/google/codegemma-7b) | [**codegemma-7b-it**](https://huggingface.co/google/codegemma-7b-it) |
|----------------------------------|:---:|:---:|:---:|
| Code Completion                  | ✅ | ✅ |    |
| Generation from natural language |    | ✅ | ✅ |
| Chat                             |    |    | ✅ |
| Instruction Following            |    |    | ✅ |

### Sample Usage

This model is intended to answer questions about code fragments, to generate code from natural language, or to engage in a conversation with the user about programming or technical problems. If you need code completion (for example, integrated in an IDE), we recommend you use one of the pre-trained models instead: [CodeGemma 7B](https://huggingface.co/google/codegemma-7b) or [CodeGemma 2B](https://huggingface.co/google/codegemma-2b).

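Since code completion is better served by those pre-trained variants, here is a hedged sketch of IDE-style fill-in-the-middle usage with them. The FIM control tokens follow the published CodeGemma format; they apply to the pre-trained checkpoints, not to this instruction-tuned model:

```py
from transformers import GemmaTokenizer, AutoModelForCausalLM

model_id = "google/codegemma-2b"  # pre-trained variant, per the recommendation above
tokenizer = GemmaTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# CodeGemma fill-in-the-middle format: give the prefix and suffix,
# and the model generates the middle.
prompt = (
    "<|fim_prefix|>def fibonacci(n):\n    <|fim_suffix|>\n"
    "    return result<|fim_middle|>"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```
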
#### For Code Generation

```python
from transformers import GemmaTokenizer, AutoModelForCausalLM

tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3")

input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

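Note that `generate` falls back to the model's default generation length here; for longer completions, pass an explicit budget, e.g. `model.generate(**input_ids, max_new_tokens=256)`.
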
#### Chat Template

The instruction-tuned models use a chat template that must be adhered to for conversational use. The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.

Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:

```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.3"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)

chat = [
    {"role": "user", "content": "Write a hello world program"},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```

At this point, the prompt contains the following text:

```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```

As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with the `<end_of_turn>` token.

You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template, as in the sketch below.

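As a minimal sketch, a hypothetical helper (`build_prompt` is illustrative, not a library function) that reproduces the template for a single user turn:

```py
# Illustrative only: rebuilds the chat template shown above for one user turn.
# <bos> is included in the string, so encode with add_special_tokens=False.
def build_prompt(user_message: str) -> str:
    return (
        "<bos><start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(build_prompt("Write a hello world program"))
```
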
After the prompt is ready, generation can be performed like this:

```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### Inputs and Outputs

Inputs
: For pretrained model variants: code prefix and/or suffix for code completion and generation scenarios, or natural language text or prompt
: For instruction-tuned model variant: natural language text or prompt

Outputs
: For pretrained model variants: fill-in-the-middle code completion, code and natural language
: For instruction-tuned model variant: code and natural language

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

Supervised fine-tuning with coding datasets.

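The exact dataset and training recipe are not published. As a rough illustration only, here is a hedged sketch of this kind of pipeline with TRL's `SFTTrainer`; the dataset path and starting checkpoint below are placeholder assumptions:

```py
# Hypothetical SFT sketch; dataset path and base model are placeholders.
from datasets import load_dataset
from trl import SFTTrainer

# Placeholder local dataset of coding/instruction examples.
dataset = load_dataset("json", data_files="coding_dataset.jsonl", split="train")

trainer = SFTTrainer(
    model="google/codegemma-7b-it",  # assumed starting checkpoint
    train_dataset=dataset,
)
trainer.train()
```
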
### Example: Athene CodeGemma 2 7B v1.3

Athene CodeGemma 2 7B v1.3 successfully created the snake game below without errors, compared to the original codegemma-7b-it.

```py
import pygame
import sys
import time
import random

# Initialize Pygame
pygame.init()

# Set up some constants
WIDTH = 800
HEIGHT = 600
BLOCK_SIZE = 20

# Create the game screen
screen = pygame.display.set_mode((WIDTH, HEIGHT))

# Set up the colors
BLACK = (0, 0, 0)
WHITE = (255, 255, 255)
RED = (255, 0, 0)
GREEN = (0, 255, 0)

# Set up the font
font = pygame.font.Font(None, 36)

# Set up the snake and food
snake = [(200, 200), (220, 200), (240, 200)]
food = (400, 300)

# Set up the direction
direction = 'RIGHT'

# Game loop
while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()
        elif event.type == pygame.KEYDOWN:
            if event.key == pygame.K_UP and direction != 'DOWN':
                direction = 'UP'
            elif event.key == pygame.K_DOWN and direction != 'UP':
                direction = 'DOWN'
            elif event.key == pygame.K_LEFT and direction != 'RIGHT':
                direction = 'LEFT'
            elif event.key == pygame.K_RIGHT and direction != 'LEFT':
                direction = 'RIGHT'

    # Move the snake: the head is the last element of the list
    head = snake[-1]
    if direction == 'UP':
        new_head = (head[0], head[1] - BLOCK_SIZE)
    elif direction == 'DOWN':
        new_head = (head[0], head[1] + BLOCK_SIZE)
    elif direction == 'LEFT':
        new_head = (head[0] - BLOCK_SIZE, head[1])
    elif direction == 'RIGHT':
        new_head = (head[0] + BLOCK_SIZE, head[1])
    snake.append(new_head)

    # Check if the snake has eaten the food; respawn food snapped to the grid
    if snake[-1] == food:
        food = (random.randint(0, WIDTH - BLOCK_SIZE) // BLOCK_SIZE * BLOCK_SIZE,
                random.randint(0, HEIGHT - BLOCK_SIZE) // BLOCK_SIZE * BLOCK_SIZE)
    else:
        snake.pop(0)

    # Check if the snake has collided with the edge or itself
    if (snake[-1][0] < 0 or snake[-1][0] >= WIDTH or
            snake[-1][1] < 0 or snake[-1][1] >= HEIGHT or
            snake[-1] in snake[:-1]):
        print("Game Over!")
        time.sleep(2)
        break

    # Draw the game screen
    screen.fill(BLACK)
    for pos in snake:
        pygame.draw.rect(screen, GREEN, (pos[0], pos[1], BLOCK_SIZE, BLOCK_SIZE))
    pygame.draw.rect(screen, RED, (food[0], food[1], BLOCK_SIZE, BLOCK_SIZE))
    text = font.render(f'Score: {len(snake) - 3}', True, WHITE)
    screen.blit(text, (10, 10))
    pygame.display.flip()

    # Cap the frame rate
    pygame.time.Clock().tick(10)
```

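To try the generated game locally, install pygame (`pip install pygame`) and run the snippet as a script, e.g. `python snake.py`.
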
# Uploaded model

- **Developed by:** EpistemeAI