  - trl
---
## Model Information

Summary description and brief definition of inputs and outputs.

### Description

Athene CodeGemma 2 7B v1.1 builds on CodeGemma, a collection of lightweight open code models built on top of Gemma. CodeGemma models are text-to-text and text-to-code decoder-only models, available as a 7 billion parameter pretrained variant that specializes in code completion and code generation tasks, a 7 billion parameter instruction-tuned variant for code chat and instruction following, and a 2 billion parameter pretrained variant for fast code completion. This model was produced by supervised fine-tuning on coding datasets.

Its capabilities are similar to:

| | [codegemma-2b](https://huggingface.co/google/codegemma-2b) | [codegemma-7b](https://huggingface.co/google/codegemma-7b) | [**codegemma-7b-it**](https://huggingface.co/google/codegemma-7b-it) |
|----------------------------------|:----------------------------------------------------------------:|:----------------------------------------------------------:|:----------------------------------------------------------------:|
| Code Completion | ✅ | ✅ | |
| Generation from natural language | | ✅ | ✅ |
| Chat | | | ✅ |
| Instruction Following | | | ✅ |

### Sample Usage

This model is intended to answer questions about code fragments, to generate code from natural language, or to engage in a conversation about programming or technical problems. If you need code completion (for example, integrated in an IDE), we recommend using one of the pretrained models instead: [CodeGemma 7B](https://huggingface.co/google/codegemma-7b) or [CodeGemma 2B](https://huggingface.co/google/codegemma-2b).

#### For Code Generation

```python
from transformers import GemmaTokenizer, AutoModelForCausalLM

tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.1")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.1")

input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Chat Template

The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.

Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:

```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.1"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)

chat = [
    {"role": "user", "content": "Write a hello world program"},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```

At this point, the prompt contains the following text:

```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```

As you can see, each turn is preceded by a `<start_of_turn>` delimiter followed by the role of the entity
(either `user`, for content supplied by the user, or `model`, for LLM responses). Turns finish with
the `<end_of_turn>` token.

You can follow this format to build the prompt manually if you need to use it without the tokenizer's
chat template.

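As a minimal sketch of that manual format, a single user turn can be assembled by hand like this (`build_prompt` is a hypothetical helper, not a library function):

```python
# Minimal sketch: building the chat prompt by hand for a single user turn,
# mirroring what tokenizer.apply_chat_template(..., add_generation_prompt=True)
# renders above. build_prompt is a hypothetical helper, not part of transformers.
def build_prompt(user_message: str) -> str:
    return (
        "<bos><start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_prompt("Write a hello world program")
print(prompt)
```

For multi-turn conversations you would repeat the `<start_of_turn>…<end_of_turn>` pattern per message; in practice, preferring the tokenizer's built-in template avoids drift if the format ever changes.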
After the prompt is ready, generation can be performed like this:

```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```

### Inputs and Outputs

Inputs
: For pretrained model variants: a code prefix and/or suffix for code completion and generation scenarios, or natural language text or prompt
: For the instruction-tuned model variant: natural language text or prompt

Outputs
: For pretrained model variants: fill-in-the-middle code completion, code, and natural language
: For the instruction-tuned model variant: code and natural language

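As an illustration of the fill-in-the-middle (FIM) input format mentioned above, here is a sketch of assembling such a prompt for the pretrained variants, using the sentinel tokens from the CodeGemma FIM convention (the code snippets themselves are illustrative):

```python
# Sketch of a fill-in-the-middle (FIM) prompt for the pretrained variants
# (codegemma-2b / codegemma-7b); the instruction-tuned model targets chat and
# instruction following instead. Sentinel tokens follow the CodeGemma FIM
# convention: the model is asked to generate the code between prefix and suffix.
prefix = "def fibonacci(n):\n    "  # code before the hole
suffix = "\n    return result"      # code after the hole

fim_prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"
print(fim_prompt)
```

The string would then be tokenized and passed to `model.generate` as in the earlier examples; the model's continuation after `<|fim_middle|>` is the inferred middle section.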
## Model Data

Data used for model training and how the data was processed.

### Training Dataset

The model was produced by supervised fine-tuning on coding datasets.

### Example: Athene CodeGemma 2 7B v1.1

Athene CodeGemma 2 7B v1.1 successfully created the following snake game without errors, in contrast to the original codegemma-7b-it:

```py
import pygame
import sys
import time
import random

# Initialize Pygame
pygame.init()

# Set up some constants
WIDTH = 800
HEIGHT = 600
BLOCK_SIZE = 20

# Create the game screen
screen = pygame.display.set_mode((WIDTH, HEIGHT))

# Set up the colors
BLACK = (0, 0, 0)
WHITE = (255, 255, 255)
RED = (255, 0, 0)
GREEN = (0, 255, 0)

# Set up the font
font = pygame.font.Font(None, 36)

# Set up the snake and food
snake = [(200, 200), (220, 200), (240, 200)]
food = (400, 300)

# Set up the direction
direction = 'RIGHT'

# Game loop
while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()
        elif event.type == pygame.KEYDOWN:
            if event.key == pygame.K_UP and direction != 'DOWN':
                direction = 'UP'
            elif event.key == pygame.K_DOWN and direction != 'UP':
                direction = 'DOWN'
            elif event.key == pygame.K_LEFT and direction != 'RIGHT':
                direction = 'LEFT'
            elif event.key == pygame.K_RIGHT and direction != 'LEFT':
                direction = 'RIGHT'

    # Move the snake
    head = snake[-1]
    if direction == 'UP':
        new_head = (head[0], head[1] - BLOCK_SIZE)
    elif direction == 'DOWN':
        new_head = (head[0], head[1] + BLOCK_SIZE)
    elif direction == 'LEFT':
        new_head = (head[0] - BLOCK_SIZE, head[1])
    elif direction == 'RIGHT':
        new_head = (head[0] + BLOCK_SIZE, head[1])
    snake.append(new_head)

    # Check if the snake has eaten the food
    if snake[-1] == food:
        food = (random.randint(0, WIDTH - BLOCK_SIZE) // BLOCK_SIZE * BLOCK_SIZE,
                random.randint(0, HEIGHT - BLOCK_SIZE) // BLOCK_SIZE * BLOCK_SIZE)
    else:
        snake.pop(0)

    # Check if the snake has collided with the edge or itself
    if (snake[-1][0] < 0 or snake[-1][0] >= WIDTH or
            snake[-1][1] < 0 or snake[-1][1] >= HEIGHT or
            snake[-1] in snake[:-1]):
        print("Game Over!")
        time.sleep(2)
        break

    # Draw the game screen
    screen.fill(BLACK)
    for pos in snake:
        pygame.draw.rect(screen, GREEN, (pos[0], pos[1], BLOCK_SIZE, BLOCK_SIZE))
    pygame.draw.rect(screen, RED, (food[0], food[1], BLOCK_SIZE, BLOCK_SIZE))
    text = font.render(f'Score: {len(snake) - 3}', True, WHITE)
    screen.blit(text, (10, 10))
    pygame.display.flip()

    # Cap the frame rate
    pygame.time.Clock().tick(10)
```

# Uploaded model

- **Developed by:** EpistemeAI