Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Llama-3-Swallow-70B-v0.1 - GGUF
- Model creator: https://huggingface.co/tokyotech-llm/
- Original model: https://huggingface.co/tokyotech-llm/Llama-3-Swallow-70B-v0.1/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3-Swallow-70B-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/blob/main/Llama-3-Swallow-70B-v0.1.Q2_K.gguf) | Q2_K | 24.56GB |
| [Llama-3-Swallow-70B-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/blob/main/Llama-3-Swallow-70B-v0.1.IQ3_XS.gguf) | IQ3_XS | 27.29GB |
| [Llama-3-Swallow-70B-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/blob/main/Llama-3-Swallow-70B-v0.1.IQ3_S.gguf) | IQ3_S | 28.79GB |
| [Llama-3-Swallow-70B-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/blob/main/Llama-3-Swallow-70B-v0.1.Q3_K_S.gguf) | Q3_K_S | 28.79GB |
| [Llama-3-Swallow-70B-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/blob/main/Llama-3-Swallow-70B-v0.1.IQ3_M.gguf) | IQ3_M | 29.74GB |
| [Llama-3-Swallow-70B-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/blob/main/Llama-3-Swallow-70B-v0.1.Q3_K.gguf) | Q3_K | 31.91GB |
| [Llama-3-Swallow-70B-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/blob/main/Llama-3-Swallow-70B-v0.1.Q3_K_M.gguf) | Q3_K_M | 31.91GB |
| [Llama-3-Swallow-70B-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/blob/main/Llama-3-Swallow-70B-v0.1.Q3_K_L.gguf) | Q3_K_L | 34.59GB |
| [Llama-3-Swallow-70B-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/blob/main/Llama-3-Swallow-70B-v0.1.IQ4_XS.gguf) | IQ4_XS | 35.64GB |
| [Llama-3-Swallow-70B-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/blob/main/Llama-3-Swallow-70B-v0.1.Q4_0.gguf) | Q4_0 | 37.22GB |
| [Llama-3-Swallow-70B-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/tree/main/) | IQ4_NL | 37.58GB |
| [Llama-3-Swallow-70B-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/tree/main/) | Q4_K_S | 37.58GB |
| [Llama-3-Swallow-70B-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/tree/main/) | Q4_K | 39.6GB |
| [Llama-3-Swallow-70B-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/tree/main/) | Q4_K_M | 39.6GB |
| [Llama-3-Swallow-70B-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/tree/main/) | Q4_1 | 41.27GB |
| [Llama-3-Swallow-70B-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/tree/main/) | Q5_0 | 45.32GB |
| [Llama-3-Swallow-70B-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/tree/main/) | Q5_K_S | 45.32GB |
| [Llama-3-Swallow-70B-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/tree/main/) | Q5_K | 46.52GB |
| [Llama-3-Swallow-70B-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/tree/main/) | Q5_K_M | 46.52GB |
| [Llama-3-Swallow-70B-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/tree/main/) | Q5_1 | 49.36GB |
| [Llama-3-Swallow-70B-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/tree/main/) | Q6_K | 53.91GB |
| [Llama-3-Swallow-70B-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf/tree/main/) | Q8_0 | 69.83GB |
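
The GGUF files above can be loaded by any llama.cpp-based runtime. Below is a minimal, hedged sketch of downloading and running one of the single-file quants (Q4_0); it assumes the `huggingface_hub` and `llama-cpp-python` packages, which are not part of this repository, and the generation settings are illustrative.

```python
# Minimal sketch (assumptions: huggingface_hub and llama-cpp-python installed,
# and enough RAM/VRAM for the chosen quant). Not the uploader's own tooling.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo_id = "RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf"
filename = "Llama-3-Swallow-70B-v0.1.Q4_0.gguf"  # pick a quant that fits your hardware

# Download (and cache) the GGUF file from this repository.
model_path = hf_hub_download(repo_id=repo_id, filename=filename)

# n_gpu_layers=-1 offloads every layer to the GPU when one is available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# This is a base (non-instruct) model, so plain text completion is the natural interface.
out = llm("東京工業大学の主なキャンパスは、", max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```

Quants whose link points to the repository tree rather than a single file may be uploaded in multiple parts; check the file listing for the exact filenames before downloading.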

Original model description:
---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license: llama3
model_type: llama
---

# Llama3 Swallow

Our Swallow model has undergone continual pre-training from the [Llama 3 family](https://huggingface.co/collections/meta-llama/meta-llama-3-66214712577ca38149ebb2b6), primarily with the addition of Japanese language data. The Instruct versions use supervised fine-tuning (SFT) and Chat Vector. Links to other models can be found in the index.

# Model Release Updates

We are excited to share the release schedule for our latest models:
- **July 1, 2024**: Released the [Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1), [Llama-3-Swallow-8B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1), [Llama-3-Swallow-70B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-70B-v0.1), and [Llama-3-Swallow-70B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-70B-Instruct-v0.1).

## Swallow Model Index

|Model|Llama-3-Swallow|Llama3 Swallow Instruct|
|---|---|---|
|8B| [Link](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) | [Link](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1) |
|70B| [Link](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-70B-v0.1) | [Link](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-70B-Instruct-v0.1) |

![logo](./logo.png)

This repository provides large language models developed by [Swallow-LLM](https://swallow-llm.github.io/).
Read our [blog post](https://zenn.dev/tokyotech_lm/articles/f65989d76baf2c).

## Model Details

* **Model type**: Please refer to the [Llama 3 MODEL_CARD](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md) for details on the model architecture.
* **Language(s)**: Japanese, English
* **Library**: [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
* **Tokenizer**: Please refer to the [Llama 3 blog](https://ai.meta.com/blog/meta-llama-3/) for details on the tokenizer.
* **Contact**: swallow[at]nlp.c.titech.ac.jp
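
For completeness, here is a minimal, hedged sketch of running the original full-precision checkpoint with Hugging Face Transformers (the library declared in the front matter above). The sampling settings are illustrative assumptions rather than values from the model card, and the 70B model needs several GPUs or aggressive offloading.

```python
# Minimal sketch (assumptions: transformers and torch installed, enough GPU
# memory for device_map="auto"; generation parameters are illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tokyotech-llm/Llama-3-Swallow-70B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # spreads the 70B weights across available devices
)

prompt = "東京工業大学の主なキャンパスは、"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```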

## Model Performance

### Japanese tasks

|Model|Size|JCom.|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|JMMLU|JHumanEval|Ja Avg|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot|5-shot|0-shot| |
| | |EM acc|Char-F1|Char-F1|Char-F1|ROUGE-2|EM acc|BLEU|BLEU|EM acc|pass@1| |
|Llama-2-70b|70B|0.8651|0.5157|0.5464|0.9130|0.2372|0.3640|0.2657|0.2402|0.5496|0.2841|0.4781|
|Swallow-70b-hf|70B|0.9178|0.6178|**0.6910**|0.9208|0.2279|0.4720|0.3046|0.2301|0.5750|0.2262|0.5183|
|Qwen2-72B|72B|0.9607|0.6399|0.5617|**0.9261**|0.2362|**0.7560**|0.2747|0.2419|**0.7831**|**0.5567**|**0.5937**|
|Meta-Llama-3-70B|70B|0.9473|0.6042|0.5965|0.9207|0.2254|0.6720|0.2855|0.2526|0.6975|0.4799|0.5682|
|Llama-3-Swallow-70B-v0.1|70B|**0.9714**|**0.6695**|0.6881|0.9218|**0.2404**|0.7080|**0.3072**|**0.2548**|0.7049|0.4683|0.5934|

### English tasks

|Model|Size|OpenBookQA|TriviaQA|HellaSWAG|SQuAD2.0|XWINO|MMLU|GSM8K|BBH|HumanEval|En Avg|
|---|---|---|---|---|---|---|---|---|---|---|---|
| | |4-shot|4-shot|4-shot|4-shot|4-shot|5-shot|4-shot|3-shot|0-shot| |
| | |Acc|EM acc|Acc|EM acc|Acc|Acc|EM acc|CoT EM Acc|pass@1| |
|Llama-2-70b|70B|0.4260|0.7988|0.6681|0.3379|**0.9256**|0.6876|0.5466|0.6643|0.3152|0.5967|
|Swallow-70b-hf|70B|0.4160|0.7610|0.6433|0.3345|0.9191|0.6571|0.5080|0.6537|0.2409|0.5704|
|Qwen2-72B|72B|0.4160|0.7890|0.6766|0.4052|0.9161|**0.8428**|**0.8908**|0.6388|**0.6049**|0.6867|
|Meta-Llama-3-70B|70B|**0.4360**|**0.8263**|**0.6909**|**0.4071**|0.9213|0.7870|0.8014|**0.8266**|0.5177|**0.6905**|
|Llama-3-Swallow-70B-v0.1|70B|0.4240|0.8231|0.6828|0.4059|0.9234|0.7745|0.8143|0.7352|0.4909|0.6749|

## Evaluation Benchmarks

### Japanese evaluation benchmarks

We used llm-jp-eval (v1.3.0), the JP Language Model Evaluation Harness (commit #9b42d41), and the Code Generation LM Evaluation Harness (commit #0261c52). The details are as follows:

- Multiple-choice question answering (JCommonsenseQA [Kurihara et al., 2022])
- Open-ended question answering (JEMHopQA [Ishii et al., 2024])
- Open-ended question answering (NIILC [Sekine, 2003])
- Machine reading comprehension (JSQuAD [Kurihara et al., 2022])
- Automatic summarization (XL-Sum [Hasan et al., 2021])
- Machine translation (WMT2020 ja-en [Barrault et al., 2020])
- Machine translation (WMT2020 en-ja [Barrault et al., 2020])
- Mathematical reasoning (MGSM [Shi et al., 2023])
- Academic exams (JMMLU [Yin et al., 2024])
- Code generation (JHumanEval [Sato et al., 2024])

### English evaluation benchmarks

We used the Language Model Evaluation Harness (v0.4.2) and the Code Generation LM Evaluation Harness (commit #0261c52). The details are as follows:

- Multiple-choice question answering (OpenBookQA [Mihaylov et al., 2018])
- Open-ended question answering (TriviaQA [Joshi et al., 2017])
- Machine reading comprehension (SQuAD2 [Rajpurkar et al., 2018])
- Commonsense reasoning (XWINO [Tikhonov and Ryabinin, 2021])
- Natural language inference (HellaSwag [Zellers et al., 2019])
- Mathematical reasoning (GSM8K [Cobbe et al., 2021])
- Reasoning (BBH (BIG-Bench-Hard) [Suzgun et al., 2023])
- Academic exams (MMLU [Hendrycks et al., 2021])
- Code generation (HumanEval [Chen et al., 2021])
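
As a rough, hedged sketch of how an evaluation with the Language Model Evaluation Harness is typically launched from Python (the task selection, few-shot count, and dtype below are illustrative examples, not the exact configuration behind the tables above):

```python
# Illustrative only: lm_eval is EleutherAI's lm-evaluation-harness (v0.4.x).
# Task names and few-shot settings are examples, not this card's exact setup.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tokyotech-llm/Llama-3-Swallow-70B-v0.1,dtype=bfloat16",
    tasks=["openbookqa", "hellaswag"],
    num_fewshot=4,
    batch_size=1,
)
print(results["results"])  # per-task metric dictionary
```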

## Training Datasets

### Continual Pre-Training

The following datasets were used for continual pre-training.

- [Algebraic Stack](https://huggingface.co/datasets/EleutherAI/proof-pile-2)
- [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia)
- [English Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [Laboro ParaCorpus](https://github.com/laboroai/Laboro-ParaCorpus)
- [OpenWebMath](https://huggingface.co/datasets/EleutherAI/proof-pile-2)
- [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
- [Swallow Corpus](https://arxiv.org/abs/2404.17733)

## Risks and Limitations

The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.

## Acknowledgements

We thank Meta Research for releasing Llama 3 under an open license for others to build on.

Our project is supported by the [Large Generative AI Development Support Program](https://abci.ai/en/link/lfm_support_program.html) of the National Institute of Advanced Industrial Science and Technology.

## License

[META LLAMA 3 COMMUNITY LICENSE](https://llama.meta.com/llama3/license/)

## Authors

Here are the team members:

- From [Tokyo Institute of Technology Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members:
    - [Naoaki Okazaki](https://www.chokkan.org/index.ja.html)
    - [Sakae Mizuki](https://s-mizuki-nlp.github.io/)
    - [Youmi Ma](https://www.nlp.c.titech.ac.jp/member/youmi.en.html)
    - [Koki Maeda](https://sites.google.com/view/silviase)
    - [Kakeru Hattori](https://aya-se.vercel.app/)
    - [Masanari Ohi](https://sites.google.com/view/masanariohi)
    - [Taihei Shiotani](https://github.com/inatoihs)
    - [Koshiro Saito](https://sites.google.com/view/koshiro-saito)
- From [Tokyo Institute of Technology YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members:
    - [Rio Yokota](https://twitter.com/rioyokota)
    - [Kazuki Fujii](https://twitter.com/okoge_kaz)
    - [Taishi Nakamura](https://twitter.com/Setuna7777_2)
    - [Takumi Okamoto](https://www.linkedin.com/in/takumi-okamoto)
    - [Ishida Shigeki](https://www.wantedly.com/id/reborn27)
- From [Artificial Intelligence Research Center, AIST, Japan](https://www.airc.aist.go.jp/en/teams/), the following members:
    - [Hiroya Takamura](https://sites.google.com/view/hjtakamura)

## How to cite

If you find our work helpful, please feel free to cite us.

```
@inproceedings{Fujii:COLM2024,
   title={Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities},
   author={Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Hiroki Iida and Masanari Ohi and Kakeru Hattori and Hirai Shota and Sakae Mizuki and Rio Yokota and Naoaki Okazaki},
   booktitle="Proceedings of the First Conference on Language Modeling",
   series={COLM},
   pages="(to appear)",
   year="2024",
   month=oct,
   address={University of Pennsylvania, USA},
}

@inproceedings{Okazaki:COLM2024,
   title={Building a Large Japanese Web Corpus for Large Language Models},
   author={Naoaki Okazaki and Kakeru Hattori and Hirai Shota and Hiroki Iida and Masanari Ohi and Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Rio Yokota and Sakae Mizuki},
   booktitle="Proceedings of the First Conference on Language Modeling",
   series={COLM},
   pages="(to appear)",
   year="2024",
   month=oct,
   address={University of Pennsylvania, USA},
}
```

### Citations

```tex
@article{llama3modelcard,
   title={Llama 3 Model Card},
   author={AI@Meta},
   year={2024},
   url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```