---
library_name: transformers
tags:
- grok
- unsloth
license: other
license_name: grok-2
license_link: https://huggingface.co/xai-org/grok-2/blob/main/LICENSE
base_model:
- xai-org/grok-2
---
<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>Learn how to run Grok 2 correctly - <a href="https://docs.unsloth.ai/basics/grok-2">Read our Guide</a>.</strong>
</p>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/basics/grok-2">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
<h1 style="margin-top: 0rem;">Grok 2 Usage Guidelines</h1>
</div>
- Use `--jinja` when running `llama.cpp`. You must use [PR 15539](https://github.com/ggml-org/llama.cpp/pull/15539), for example via the commands below: <br>
- `git clone https://github.com/ggml-org/llama.cpp`
- `cd llama.cpp && git fetch origin pull/15539/head:MASTER && git checkout MASTER && cd ..`
This conversion uses Alvaro's Hugging Face-compatible Grok 2 tokenizer, provided [here](https://huggingface.co/alvarobartt/grok-2-tokenizer).
# Grok 2
This repository contains the weights of Grok 2, a model trained and used at xAI in 2024.
## Usage: Serving with SGLang
- Download the weights. You can replace `/local/grok-2` with any other folder name you prefer.
```
hf download xai-org/grok-2 --local-dir /local/grok-2
```
You might encounter errors during the download; retry until it completes successfully.
If the download succeeds, the folder should contain **42 files** and be approximately 500 GB.
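The file-count and size check above can be scripted. A minimal sketch in Python (the `/local/grok-2` path matches the download command; the expected totals are the 42 files and roughly 500 GB stated above):

```python
import os

def summarize_download(path):
    """Count regular files and total bytes under `path`, recursively."""
    n_files, n_bytes = 0, 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            n_files += 1
            n_bytes += os.path.getsize(os.path.join(root, name))
    return n_files, n_bytes

path = "/local/grok-2"  # the download folder used in the command above
if os.path.isdir(path):
    n_files, n_bytes = summarize_download(path)
    # A complete download should report 42 files and ~500 GB.
    print(f"{n_files} files, {n_bytes / 1e9:.0f} GB")
```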
- Launch a server.
Install the latest SGLang inference engine (>= v0.5.1) from https://github.com/sgl-project/sglang/
Use the command below to launch an inference server. The checkpoint is sharded for tensor parallelism (TP=8), so you will need 8 GPUs, each with more than 40 GB of memory.
```
python3 -m sglang.launch_server --model /local/grok-2 --tokenizer-path /local/grok-2/tokenizer.tok.json --tp 8 --quantization fp8 --attention-backend triton
```
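Since the checkpoint requires TP=8, it can help to confirm the machine exposes 8 GPUs before launching. A minimal sketch that counts devices from the output of `nvidia-smi -L` (which prints one `GPU <index>: <name> (UUID: ...)` line per device; the sample string below is illustrative):

```python
def count_gpus(nvidia_smi_listing: str) -> int:
    """Count GPUs in the output of `nvidia-smi -L`, one line per device."""
    return sum(
        1 for line in nvidia_smi_listing.splitlines() if line.startswith("GPU ")
    )

# Illustrative sample; in practice, pass the captured output of `nvidia-smi -L`.
sample = "GPU 0: NVIDIA H100 (UUID: GPU-aaa)\nGPU 1: NVIDIA H100 (UUID: GPU-bbb)"
print(count_gpus(sample))  # → 2
```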
- Send a request.
This is a post-trained model, so please use the correct [chat template](https://github.com/sgl-project/sglang/blob/97a38ee85ba62e268bde6388f1bf8edfe2ca9d76/python/sglang/srt/tokenizer/tiktoken_tokenizer.py#L106).
```
python3 -m sglang.test.send_one --prompt "Human: What is your name?<|separator|>\n\nAssistant:"
```
You should be able to see the model output its name, Grok.
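The single-turn prompt from the example can be built with a small helper. This sketch assumes real newlines after the `<|separator|>` token, matching the example above; consult the linked chat template for multi-turn formatting:

```python
def format_grok2_prompt(user_message: str) -> str:
    """Build a single-turn prompt in the Human/Assistant chat format
    shown in the example above, with the <|separator|> token between turns."""
    return f"Human: {user_message}<|separator|>\n\nAssistant:"

print(format_grok2_prompt("What is your name?"))
```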
Learn more about other ways to send requests [here](https://docs.sglang.ai/basic_usage/send_request.html).
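For programmatic access, SGLang also exposes a native HTTP `/generate` endpoint. A minimal sketch using only the standard library (the default port 30000 and the sampling parameters here are assumptions; adjust them to match your server):

```python
import json
import urllib.request

def build_generate_request(prompt: str, host: str = "http://localhost:30000"):
    """Build an HTTP POST request for SGLang's native /generate endpoint.
    The sampling parameters below are illustrative defaults."""
    payload = {
        "text": prompt,
        "sampling_params": {"max_new_tokens": 64, "temperature": 0.0},
    }
    return urllib.request.Request(
        f"{host}/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("Human: What is your name?<|separator|>\n\nAssistant:")
# with urllib.request.urlopen(req) as resp:  # requires a running server
#     print(json.load(resp)["text"])
```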
## License
The weights are licensed under the [Grok 2 Community License Agreement](https://huggingface.co/xai-org/grok-2/blob/main/LICENSE).