---
base_model: NousResearch/Llama-2-7b-chat-hf
inference: false
model_type: llama
prompt_template: |
  [INST]
  {prompt}
  [/INST]
quantized_by: mwitiderrick
tags:
- deepsparse
---
# Llama-2-7b-chat-hf - DeepSparse
This repo contains model files for [Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) optimized for [DeepSparse](https://github.com/neuralmagic/deepsparse), a CPU inference runtime for sparse models.

This model was quantized and pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml).

## Inference
Install [DeepSparse LLM](https://github.com/neuralmagic/deepsparse) for fast inference on CPUs:
```bash
pip install deepsparse-nightly[llm]
```

Run in a [Python pipeline](https://github.com/neuralmagic/deepsparse/blob/main/docs/llms/text-generation-pipeline.md):
```python
from deepsparse import TextGeneration

prompt = "How to make banana bread?"
formatted_prompt = f"[INST]{prompt}[/INST]"

model = TextGeneration(model_path="hf:neuralmagic/Llama2-7b-chat-pruned50-quant-ds")

print(model(formatted_prompt, max_new_tokens=500).generations[0].text)
"""
Banana bread is a delicious and easy-to-make treat that can be enjoyed year-round. Here is a basic recipe for banana bread that you can try at home:

Ingredients:

* 3 ripe bananas, peeled and sliced
* 1/2 cup (120 ml) vegetable oil
* 2 tbsp (30 ml) sugar
* 2 tbsp (30 ml) water
* 2 tbsp (30 ml) all-purpose flour
* 1 tsp (2.5 ml) baking powder
* 1 tsp (2.5 ml) salt
* 1 tbsp (30 ml) vanilla extract

Instructions:

1. Preheat the oven to 3500°F (175°C).
2. In a large mixing bowl, combine the sliced bananas, sugar, water, flour, baking powder, salt, and vanilla extract. Mix well.
3. Pour the mixture into a greased 9x5-inch (23x13-cm) loaf pan.
4. Bake for 55 to 60 minutes, or until a toothpick inserted into the center of the bread comes out clean.
5. Remove the bread from the oven and let it cool for 10 to 15 minutes.
6. Slice and serve.

Tips:

* To add flavor to the bread, try adding 1 or 2 tbsp (30 or 60 ml) of an additional ingredient, such as honey, maple syrup, or chopped nuts.
* To make a moist, more bread, try adding 1 or 2 tbsp (30 or 60 ml) of additional water to the mixture.
* To make a more flavorful bread, try adding 1 or 2 tbsp (30 or 60 ml) of an additional ingredient, such as vanilla essence, cocoa powder, or chopped nuts.
* To make a bread with a
"""
```

## Prompt template
```
[INST]
{prompt}
[/INST]
```
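If you want to apply the template programmatically, for example to send several questions through the same pipeline, a minimal sketch is shown below. The `format_prompt` helper and the example questions are illustrative, not part of this repo; the only fixed piece is the `[INST] ... [/INST]` wrapping from the template above.

```python
from deepsparse import TextGeneration

def format_prompt(prompt: str) -> str:
    # Wrap raw user text in the [INST] ... [/INST] chat template this model expects.
    return f"[INST]\n{prompt}\n[/INST]"

model = TextGeneration(model_path="hf:neuralmagic/Llama2-7b-chat-pruned50-quant-ds")

for question in ["What is model pruning?", "Why run LLMs on CPUs?"]:
    output = model(format_prompt(question), max_new_tokens=128)
    print(output.generations[0].text)
```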
## Sparsification
For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.

```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
python sparseml/src/sparseml/transformers/sparsification/obcq/obcq.py NousResearch/Llama-2-7b-chat-hf open_platypus --precision float16 --recipe recipe.yaml --save True
python sparseml/src/sparseml/transformers/sparsification/obcq/export.py --task text-generation --model_path obcq_deployment
cp deployment/model.onnx deployment/model-orig.onnx
```

Run the following kv-cache injection to speed up the model at inference by caching the key and value attention states:

```python
import os
import onnx
from sparseml.exporters.kv_cache_injector import KeyValueCacheInjector

input_file = "deployment/model-orig.onnx"
output_file = "deployment/model.onnx"

# Load the exported ONNX graph and inject KV-cache inputs/outputs so past
# key/value states are reused across decoding steps.
model = onnx.load(input_file, load_external_data=False)
model = KeyValueCacheInjector(model_path=os.path.dirname(input_file)).apply(model)
onnx.save(model, output_file)
print(f"Modified model saved to: {output_file}")
```

A quick way to sanity-check the resulting model is sketched at the end of this card.

Follow the instructions on our [One Shot With SparseML](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/transformers/sparsification/obcq) page for a step-by-step guide to performing one-shot quantization of large language models.

## Slack
For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ)
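## Verifying the export
As noted in the Sparsification section, a quick way to confirm the kv-cache-injected model runs is to point `TextGeneration` at the local deployment directory instead of the Hugging Face Hub. This is a minimal sketch, assuming the `deployment` directory produced by the export step is in the current working directory and contains the tokenizer and config files alongside `model.onnx`; the prompt is illustrative.

```python
from deepsparse import TextGeneration

# Load the locally exported model rather than pulling from the Hub.
model = TextGeneration(model_path="deployment")

# A short generation is enough to confirm the modified model.onnx loads and runs.
print(model("[INST]\nWhat is sparsity?\n[/INST]", max_new_tokens=64).generations[0].text)
```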