---
library_name: transformers
tags:
- 4bit
- bnb
- bitsandbytes
- llama
- llama-2
- facebook
- meta
- 7b
- quantized
license: llama2
pipeline_tag: text-generation
---

# Model Card for alokabhishek/Llama-2-7b-chat-hf-bnb-4bit

<!-- Provide a quick summary of what the model is/does. -->

This repo contains a 4-bit quantized (using bitsandbytes) version of Meta's [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).

## Model Details

- Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)


### About 4-bit quantization using bitsandbytes


- QLoRA paper: [QLoRA: Efficient Finetuning of Quantized LLMs (arXiv)](https://arxiv.org/abs/2305.14314)
- Hugging Face blog post on 4-bit quantization using bitsandbytes: [Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes)
- bitsandbytes GitHub repo: [TimDettmers/bitsandbytes](https://github.com/TimDettmers/bitsandbytes)
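
For reference, the snippet below is a minimal sketch of how a Llama-2 checkpoint can be quantized to 4 bit with bitsandbytes through `BitsAndBytesConfig`. The NF4 / double-quantization / bfloat16 choices shown here are the typical QLoRA-style settings, not a documented record of the exact configuration used to produce this repo.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Typical QLoRA-style 4-bit settings; the exact values used for this repo may differ.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store the weights in 4 bit
    bnb_4bit_quant_type="nf4",              # NF4 data type introduced in the QLoRA paper
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for matmuls at inference time
)

# Quantize the original checkpoint on the fly while loading it.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# With a recent transformers/bitsandbytes, the 4-bit weights can then be saved or
# pushed to the Hub, which is how a repo like this one is produced.
# model.save_pretrained("Llama-2-7b-chat-hf-bnb-4bit")
```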

## How to Get Started with the Model

Use the code below to get started with the model.

### How to run from Python code

#### First install the required packages
```shell
pip install -q -U bitsandbytes accelerate torch huggingface_hub
pip install -q -U git+https://github.com/huggingface/transformers.git # Install latest version of transformers
pip install -q -U git+https://github.com/huggingface/peft.git
pip install flash-attn --no-build-isolation  # optional: only needed if you load the model with Flash Attention 2
```

#### Import 

```python
import torch
# BitsAndBytesConfig is only needed if you want to override the 4-bit settings stored in the repo.
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, BitsAndBytesConfig
```

#### Use a pipeline as a high-level helper

```python
model_id_llama = "alokabhishek/Llama-2-7b-chat-hf-bnb-4bit"

# Load the tokenizer shipped with this repo.
tokenizer_llama = AutoTokenizer.from_pretrained(model_id_llama, use_fast=True)

# The 4-bit bitsandbytes quantization config stored in this repo is applied automatically,
# so no explicit BitsAndBytesConfig is needed here.
model_llama = AutoModelForCausalLM.from_pretrained(
    model_id_llama,
    device_map="auto",
)

pipe_llama = pipeline(task="text-generation", model=model_llama, tokenizer=tokenizer_llama)

prompt_llama = "Tell me a funny joke about Large Language Models meeting a Blackhole in an intergalactic Bar."

output_llama = pipe_llama(prompt_llama, max_new_tokens=512)

print(output_llama[0]["generated_text"])
```
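
If you prefer calling `generate` directly instead of going through the pipeline, a sketch along the following lines should work. It reuses `model_llama`, `tokenizer_llama`, and `prompt_llama` from the snippet above and assumes the tokenizer in this repo carries the standard Llama-2 chat template.

```python
# Format the prompt with the Llama-2 chat markup stored in the tokenizer.
messages = [{"role": "user", "content": prompt_llama}]
input_ids = tokenizer_llama.apply_chat_template(messages, return_tensors="pt").to(model_llama.device)

with torch.no_grad():
    output_ids = model_llama.generate(
        input_ids,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.7,
    )

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer_llama.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```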


## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]



## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]