---
base_model: EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.1
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
---
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Athene CodeGemma 2 7B v1.2 is a lightweight open code model built on top of Gemma. CodeGemma models are text-to-text and text-to-code decoder-only models, available as a 7 billion parameter pretrained variant that specializes in code completion and code generation tasks, a 7 billion parameter instruction-tuned variant for code chat and instruction following, and a 2 billion parameter pretrained variant for fast code completion.
This model was produced by supervised fine-tuning on coding datasets. Its capabilities are similar to:
|                                  | [codegemma-2b](https://huggingface.co/google/codegemma-2b) | [codegemma-7b](https://huggingface.co/google/codegemma-7b) | [**codegemma-7b-it**](https://huggingface.co/google/codegemma-7b-it) |
|----------------------------------|:----------------------------------------------------------------:|:----------------------------------------------------------:|:----------------------------------------------------------------:|
| Code Completion                  | ✅ | ✅ |    |
| Generation from natural language |    | ✅ | ✅ |
| Chat                             |    |    | ✅ |
| Instruction Following            |    |    | ✅ |
### Sample Usage
This model is intended to answer questions about code fragments, to generate code from natural language, or to engage in a conversation with the user about programming or technical problems. If you need to use code completion (for example, integrated in an IDE), we recommend you use one of the pre-trained models instead: [CodeGemma 7B](https://huggingface.co/google/codegemma-7b), or [CodeGemma 2B](https://huggingface.co/google/codegemma-2b).
#### For Code Generation
```python
from transformers import GemmaTokenizer, AutoModelForCausalLM

# Load the tokenizer and model
tokenizer = GemmaTokenizer.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.2")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.2")

# Tokenize the prompt and generate a completion
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
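With the default generation settings, `generate` may stop after only a few new tokens, which can truncate the answer. A minimal sketch that sets an explicit generation budget (the parameter values here are illustrative, not tuned for this model):

```python
# Greedy decoding with an explicit token budget; adjust max_new_tokens as needed.
outputs = model.generate(**input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```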
#### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.2"
dtype = torch.bfloat16

# Load the tokenizer and model on GPU in bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)

# Build the prompt from a single user turn using the tokenizer's chat template
chat = [
    {"role": "user", "content": "Write a hello world program"},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
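For reference, here is a minimal sketch of assembling that same prompt string by hand, using the delimiters shown above (the tokenizer's chat template remains the recommended path):

```py
# Manually assemble a single-turn Gemma-style prompt.
user_message = "Write a hello world program"
prompt = (
    "<bos><start_of_turn>user\n"
    f"{user_message}<end_of_turn>\n"
    "<start_of_turn>model\n"
)
```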
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
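To continue the conversation, append the model's reply and the next user message to `chat` and apply the template again. A sketch (Gemma chat templates use the role name `model` for assistant turns):

```py
# Extend the conversation with the model's reply and a follow-up question.
reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
chat.append({"role": "model", "content": reply})
chat.append({"role": "user", "content": "Now write the same program in Java."})
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```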
### Inputs and Outputs
Inputs
: For pretrained model variants: code prefix and/or suffix for code completion and generation scenarios, or natural language text or prompt
: For instruction tuned model variant: natural language text or prompt
Outputs
: For pretrained model variants: fill-in-the-middle code completion, code and natural language
: For instruction tuned model variant: code and natural language
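For the code-completion scenario, the upstream CodeGemma cards describe fill-in-the-middle prompts built from special tokens. A hedged sketch of that prompt shape for the *pretrained* variants (token names taken from the CodeGemma documentation; this instruction-tuned model is not intended for FIM use):

```py
# Fill-in-the-middle prompt for the pretrained CodeGemma variants:
# the model generates the code that belongs between prefix and suffix.
fim_prompt = (
    "<|fim_prefix|>def add(a, b):\n"
    "    <|fim_suffix|>\n"
    "    return result<|fim_middle|>"
)
```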
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
Supervised fine-tuning with Python and Java coding datasets.
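The model name references the Alpaca recipe, which suggests instruction records in the Alpaca format. A hypothetical example of such a record (illustrative only; the actual training data is not published here):

```py
# Hypothetical Alpaca-style training record (not from the actual dataset).
example = {
    "instruction": "Write a Python function that reverses a string.",
    "input": "",
    "output": "def reverse_string(s):\n    return s[::-1]",
}
```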
### Example: Athene CodeGemma 2 7B v1.2
Athene CodeGemma 2 7B v1.2 successfully created the snake game below without errors, unlike the original codegemma-7b-it:
```py
import pygame
import sys
import time
import random

# Initialize Pygame
pygame.init()

# Set up some constants
WIDTH = 800
HEIGHT = 600
BLOCK_SIZE = 20

# Create the game screen
screen = pygame.display.set_mode((WIDTH, HEIGHT))

# Set up the colors
BLACK = (0, 0, 0)
WHITE = (255, 255, 255)
RED = (255, 0, 0)
GREEN = (0, 255, 0)

# Set up the font
font = pygame.font.Font(None, 36)

# Set up the snake and food
snake = [(200, 200), (220, 200), (240, 200)]
food = (400, 300)

# Set up the direction
direction = 'RIGHT'

# Game loop
while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()
        elif event.type == pygame.KEYDOWN:
            if event.key == pygame.K_UP and direction != 'DOWN':
                direction = 'UP'
            elif event.key == pygame.K_DOWN and direction != 'UP':
                direction = 'DOWN'
            elif event.key == pygame.K_LEFT and direction != 'RIGHT':
                direction = 'LEFT'
            elif event.key == pygame.K_RIGHT and direction != 'LEFT':
                direction = 'RIGHT'

    # Move the snake
    head = snake[-1]
    if direction == 'UP':
        new_head = (head[0], head[1] - BLOCK_SIZE)
    elif direction == 'DOWN':
        new_head = (head[0], head[1] + BLOCK_SIZE)
    elif direction == 'LEFT':
        new_head = (head[0] - BLOCK_SIZE, head[1])
    elif direction == 'RIGHT':
        new_head = (head[0] + BLOCK_SIZE, head[1])
    snake.append(new_head)

    # Check if the snake has eaten the food
    if snake[-1] == food:
        food = (random.randint(0, WIDTH - BLOCK_SIZE) // BLOCK_SIZE * BLOCK_SIZE,
                random.randint(0, HEIGHT - BLOCK_SIZE) // BLOCK_SIZE * BLOCK_SIZE)
    else:
        snake.pop(0)

    # Check if the snake has collided with the edge or itself
    if (snake[-1][0] < 0 or snake[-1][0] >= WIDTH or
            snake[-1][1] < 0 or snake[-1][1] >= HEIGHT or
            snake[-1] in snake[:-1]):
        print("Game Over!")
        time.sleep(2)
        break

    # Draw the game screen
    screen.fill(BLACK)
    for pos in snake:
        pygame.draw.rect(screen, GREEN, (pos[0], pos[1], BLOCK_SIZE, BLOCK_SIZE))
    pygame.draw.rect(screen, RED, (food[0], food[1], BLOCK_SIZE, BLOCK_SIZE))
    text = font.render(f'Score: {len(snake) - 3}', True, WHITE)
    screen.blit(text, (10, 10))
    pygame.display.flip()

    # Cap the frame rate
    pygame.time.Clock().tick(10)
```
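To try it locally, install pygame (`pip install pygame`), save the script, and run it with Python; the arrow keys steer the snake.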
# Uploaded model
- **Developed by:** EpistemeAI
- **License:** apache-2.0
- **Finetuned from model:** EpistemeAI/Athene-codegemma-2-7b-it-alpaca-v1.1

This Gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
### Notice:
Gemma is provided under and subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms |