|
--- |
|
language: |
|
- en |
|
license: apache-2.0 |
|
tags: |
|
- text-generation |
|
datasets: |
|
- THUDM/webglm-qa |
|
- databricks/databricks-dolly-15k |
|
- cognitivecomputations/wizard_vicuna_70k_unfiltered |
|
- totally-not-an-llm/EverythingLM-data-V3 |
|
- Amod/mental_health_counseling_conversations |
|
- sablo/oasst2_curated |
|
- starfishmedical/webGPT_x_dolly |
|
- Open-Orca/OpenOrca |
|
- mlabonne/chatml_dpo_pairs |
|
base_model: JackFram/llama-68m |
|
widget: |
|
- messages: |
|
- role: system |
|
content: You are a career counselor. The user will provide you with an individual |
|
looking for guidance in their professional life, and your task is to assist |
|
them in determining what careers they are most suited for based on their skills, |
|
interests, and experience. You should also conduct research into the various |
|
options available, explain the job market trends in different industries, and |
|
advise on which qualifications would be beneficial for pursuing particular fields.
|
- role: user |
|
content: Heya! |
|
- role: assistant |
|
content: Hi! How may I help you? |
|
- role: user |
|
content: I am interested in developing a career in software engineering. What |
|
would you recommend I do?
|
- messages: |
|
- role: system |
|
content: You are a knowledgeable assistant. Help the user as much as you can. |
|
- role: user |
|
content: How to become healthier? |
|
- messages: |
|
- role: system |
|
content: You are a helpful assistant who provides concise responses. |
|
- role: user |
|
content: Hi! |
|
- role: assistant |
|
content: Hello there! How may I help you? |
|
- role: user |
|
content: I need to build a simple website. Where should I start learning about web development? |
|
- messages: |
|
- role: system |
|
content: You are a very creative assistant. The user will give you a task, which you should complete using all your knowledge.
|
- role: user |
|
content: Write the background story of an RPG game about wizards and dragons in a sci-fi world. |
|
inference: |
|
parameters: |
|
max_new_tokens: 64 |
|
penalty_alpha: 0.5 |
|
top_k: 4 |
|
model-index: |
|
- name: Llama-68M-Chat-v1 |
|
results: |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: AI2 Reasoning Challenge (25-Shot) |
|
type: ai2_arc |
|
config: ARC-Challenge |
|
split: test |
|
args: |
|
num_few_shot: 25 |
|
metrics: |
|
- type: acc_norm |
|
value: 23.29 |
|
name: normalized accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-68M-Chat-v1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: HellaSwag (10-Shot) |
|
type: hellaswag |
|
split: validation |
|
args: |
|
num_few_shot: 10 |
|
metrics: |
|
- type: acc_norm |
|
value: 28.27 |
|
name: normalized accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-68M-Chat-v1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MMLU (5-Shot) |
|
type: cais/mmlu |
|
config: all |
|
split: test |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 25.18 |
|
name: accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-68M-Chat-v1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: TruthfulQA (0-shot) |
|
type: truthful_qa |
|
config: multiple_choice |
|
split: validation |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: mc2 |
|
value: 47.27 |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-68M-Chat-v1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: Winogrande (5-shot) |
|
type: winogrande |
|
config: winogrande_xl |
|
split: validation |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 54.3 |
|
name: accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-68M-Chat-v1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: GSM8k (5-shot) |
|
type: gsm8k |
|
config: main |
|
split: test |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 0.0 |
|
name: accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-68M-Chat-v1 |
|
name: Open LLM Leaderboard |
|
--- |
|
|
|
# A Llama Chat Model of 68M Parameters |
|
|
|
- Base model: [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) |
|
- Datasets: |
|
- [THUDM/webglm-qa](https://huggingface.co/datasets/THUDM/webglm-qa) |
|
- [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) |
|
- [cognitivecomputations/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/cognitivecomputations/wizard_vicuna_70k_unfiltered) |
|
- [totally-not-an-llm/EverythingLM-data-V3](https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V3) |
|
- [Amod/mental_health_counseling_conversations](https://huggingface.co/datasets/Amod/mental_health_counseling_conversations) |
|
- [sablo/oasst2_curated](https://huggingface.co/datasets/sablo/oasst2_curated) |
|
- [starfishmedical/webGPT_x_dolly](https://huggingface.co/datasets/starfishmedical/webGPT_x_dolly) |
|
- [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) |
|
- [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) |
|
- Availability in other ML formats: |
|
- GGUF: [afrideva/Llama-68M-Chat-v1-GGUF](https://huggingface.co/afrideva/Llama-68M-Chat-v1-GGUF) |
|
- ONNX: [Felladrin/onnx-Llama-68M-Chat-v1](https://huggingface.co/Felladrin/onnx-Llama-68M-Chat-v1) |
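
For the GGUF build listed above, here is a minimal local-inference sketch using llama-cpp-python. The filename is hypothetical: download whichever quantization you prefer from the GGUF repository and point `model_path` at it. The prompt follows the recommended format described below.

```python
# Minimal sketch: running a GGUF quantization of this model with llama-cpp-python.
# The filename below is hypothetical; pick an actual file from
# afrideva/Llama-68M-Chat-v1-GGUF and adjust the path.
from llama_cpp import Llama

llm = Llama(model_path="llama-68m-chat-v1.q8_0.gguf")

# ChatML-style prompt, as described in "Recommended Prompt Format" below.
prompt = (
    "<|im_start|>system\n"
    "You are a knowledgeable assistant. Help the user as much as you can.<|im_end|>\n"
    "<|im_start|>user\n"
    "How to become healthier?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

output = llm(prompt, max_tokens=64, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```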
|
|
|
## Recommended Prompt Format |
|
|
|
``` |
|
<|im_start|>system |
|
{system_message}<|im_end|> |
|
<|im_start|>user |
|
{user_message}<|im_end|> |
|
<|im_start|>assistant |
|
``` |
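
If the model's tokenizer ships a matching chat template (an assumption worth verifying; the prompt can always be assembled by hand as above), `transformers` can produce this format automatically. A minimal sketch:

```python
# Minimal sketch: producing the ChatML-style prompt with transformers.
# Assumes the tokenizer includes a chat template matching the format above;
# if it does not, build the prompt string manually instead.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Felladrin/Llama-68M-Chat-v1")

messages = [
    {"role": "system", "content": "You are a helpful assistant who provides concise responses."},
    {"role": "user", "content": "I need to build a simple website. Where should I start learning about web development?"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant" turn.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```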
|
|
|
## Recommended Inference Parameters |
|
|
|
```yml |
|
penalty_alpha: 0.5 |
|
top_k: 4 |
|
``` |
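
With `penalty_alpha` and `top_k` set this way, `transformers` decodes with contrastive search. Below is a minimal sketch using the `text-generation` pipeline; the prompt and the `max_new_tokens` value are only illustrative.

```python
# Minimal sketch: generation with the recommended contrastive-search settings.
from transformers import pipeline

generate = pipeline("text-generation", model="Felladrin/Llama-68M-Chat-v1")

prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant who provides concise responses.<|im_end|>\n"
    "<|im_start|>user\n"
    "Hi!<|im_end|>\n"
    "<|im_start|>assistant\n"
)

output = generate(
    prompt,
    max_new_tokens=64,
    penalty_alpha=0.5,  # degeneration penalty; enables contrastive search
    top_k=4,            # number of candidate tokens considered per step
)
print(output[0]["generated_text"])
```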
|
|
|
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) |
|
|
|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Felladrin__Llama-68M-Chat-v1).
|
|
|
| Metric |Value| |
|
|---------------------------------|----:| |
|
|Avg. |29.72| |
|
|AI2 Reasoning Challenge (25-Shot)|23.29| |
|
|HellaSwag (10-Shot) |28.27| |
|
|MMLU (5-Shot) |25.18| |
|
|TruthfulQA (0-shot) |47.27| |
|
|Winogrande (5-shot) |54.30| |
|
|GSM8k (5-shot) | 0.00| |
|
|