|
--- |
|
license: cc-by-nc-4.0 |
|
datasets: |
|
- nickrosh/Evol-Instruct-Code-80k-v1 |
|
- MBZUAI/LaMini-instruction |
|
language: |
|
- en |
|
base_model: |
|
- Bertug1911/BrtGPT-1-Pre |
|
pipeline_tag: text-generation |
|
tags: |
|
- code |
|
--- |
|
|
|
# BrtGPT-1-Pre-Code |
|
|
|
## Model Summary |
|
|
|
We're introducing "BrtGPT-1-Pre-Code"! This model was created by further training the already pre-trained "BrtGPT-1-Pre" model on code data.
|
|
|
Compared to BrtGPT-1-Pre, it writes much better code, although its output may still contain mistakes.
|
|
|
No change was observed in general/daily chat or in simple knowledge-based question-answering capabilities.
|
|
|
It may produce some harmful output. |
|
|
|
## Difference Between Models |
|
|
|
Examples: |
|
|
|
| Prompt | BrtGPT-1-Pre | |
|
| :------------: | :------------: | |
|
| "Write me a code that prints "Hello World"." | "Here's a code that prints "Hello World" in a list of words: `for i in range(1, 2, 3, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,`" |
|
| "Write me a code that generates random number." | `def random(1): return random(1)` |
|
|
|
|
|
BrtGPT-1-Pre-Code's answers: |
|
|
|
1- "Write me a code that prints "Hello World"." Code:

```python
def print_hello_numbers(numbers):
    if num < num:
        return num
    elif num % num % num % num % num % num % num % num % num % num % num % num % num % num % num % num % num
```
|
|
|
2- "Write me a code that generates random number." Code:

```python
# Here is a code that generates random number in python 3:
def generate_random_number(num):
    # Create a new random number between 1 and 1
    random_number = random.randint(num)
    random_number = random.randint(num)
    random_number = random.randint(num)

    # Create a new
```
|
|
|
|
|
## How to use? |
|
NOTE: The model now ***supports*** the Hugging Face Auto-model classes!
|
You can run the following code to use it (Auto-model / Hugging Face `transformers`):
|
|
|
```python
|
from transformers import pipeline |
|
|
|
# Pipeline |
|
pipe = pipeline( |
|
"text-generation", |
|
model="Bertug1911/BrtGPT-1-Pre-Code", |
|
trust_remote_code=True, |
|
top_k=40, # Good for creativity |
|
temperature=0.8, # Good for creativity |
|
max_new_tokens=128 # Default maximum model output (Maximum 1024) |
|
) |
|
|
|
# Messages |
|
messages = [ |
|
{"role": "user", "content": "What is the capital of France?"}, |
|
] |
|
|
|
# Take out |
|
output = pipe(messages) |
|
|
|
# Print only the assistant's answer (the model output)
|
assistant_response = output[0]["generated_text"][-1]["content"].strip() |
|
# Special token conversions |
|
formatted_out = assistant_response.replace(" ", "").replace("Ġ", " ").replace("Ċ", "\n") |
|
|
|
print(formatted_out) |
|
|
|
``` |
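The special-token cleanup at the end of the snippet can be sketched on its own. The marker characters follow the GPT-2 style byte-level BPE convention ("Ġ" marks a word-initial space, "Ċ" a newline); the example string below is made up for illustration:

```python
# GPT-2 style byte-level BPE tokenizers mark word-initial spaces with "Ġ"
# and newlines with "Ċ"; raw decoded text may also contain separator spaces
# between tokens. This helper strips the separators, then maps the markers
# back to real spaces and newlines.
def format_output(raw: str) -> str:
    return raw.replace(" ", "").replace("Ġ", " ").replace("Ċ", "\n")

print(format_output("ĠHello ĠWorld"))
```
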
|
## Evaluation
|
|
|
Evaluation results are coming soon!
|
|
|
## Risks and biases |
|
|
|
The model may generate:
|
- Illegal outputs |
|
- Harmful content
|
|
|
Use with caution!
|
|
|
## Contact |
|
|
|
"[email protected]" or "[email protected]" |