---
library_name: transformers
tags:
- mistral
- sparse
- pruned
- wanda
license: mit
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
---

# Model Card for kettleguts/zephyr-7b-beta_sparse05

This is a pruned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta). Wanda pruning was used to introduce 50% sparsity into the linear layers. Read the Wanda paper [here](https://arxiv.org/abs/2306.11695).
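For reference, Wanda scores each weight by the product of its magnitude and the L2 norm of the corresponding input activation, then zeroes the lowest-scoring weights within each output row. The function below is a minimal sketch of that criterion, not the exact script used to produce this checkpoint; `act_norm` is assumed to be the per-input-feature L2 norm collected from calibration data.

```python
import torch

def wanda_prune_linear(weight: torch.Tensor, act_norm: torch.Tensor, sparsity: float = 0.5) -> torch.Tensor:
    """Sketch of the Wanda criterion for one linear layer.

    weight:   (out_features, in_features) weight matrix
    act_norm: (in_features,) L2 norm of each input feature over calibration tokens
    """
    # Wanda score: |W_ij| * ||X_j||_2
    score = weight.abs() * act_norm.unsqueeze(0)
    # prune the lowest-scoring weights within each output row
    k = int(weight.shape[1] * sparsity)
    _, idx = torch.topk(score, k, dim=1, largest=False)
    mask = torch.ones_like(weight, dtype=torch.bool)
    mask.scatter_(1, idx, False)
    return weight * mask
```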



### Model Description
[Here](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta#model-description)



## Uses
This model is only useful for research purposes. The quality of its text generation is highly dependent on how it is prompted. Since it is heavily pruned, it sometimes behaves like a much smaller model.

### Direct Use

This model is not suitable for direct use outside of research.


### Out-of-Scope Use

This model should never be used for critical decisions involving health, life, employment, housing, law, etc. It should also never be used to harm anyone.


## Bias, Risks, and Limitations

[No safeguards have been added to this model.](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta#bias-risks-and-limitations)

## How to Get Started with the Model

Use the code below to get started with the model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, pipeline

model_name = 'kettleguts/zephyr-7b-beta_sparse05'

# quantize the model for more efficient inference
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

# load model
model = AutoModelForCausalLM.from_pretrained(model_name,
                                             device_map = "auto",
                                             quantization_config=bnb_config)

# load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.add_special_tokens({'pad_token': '[PAD]'})

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds as briefly as possible with prefect grammar.",
    },
    {"role": "user", "content": "Briefly describe network pruning."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, pad_token_id=tokenizer.pad_token_id)
text = outputs[0]['generated_text'].split('<|assistant|>')[-1].strip()
print(text)

# Example output: Network pruning, in the context of artificial intelligence and machine learning, refers to the process of removing unimportant or redundant connections, or "pruning," from a neural network's architecture. This is done to simplify and optimize the network's structure, reduce overfitting, and improve its efficiency, while preserving its overall performance. Pruning typically involves removing connections, neurons, or entire layers, based on metrics such as the weight or sparsity of the connection, or the amount of improvement gained by removing the connection. The goal is to prune the network in a way that balances the trade-off between model size and accuracy, while reducing the network's overall complexity and resource requirements. Pruning techniques can range from simple heuristics such as early stopping, to more sophisticated methods such as compressed and pruned models, and iterative and incremental pruning.
```
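
To sanity-check the pruning, you can count zeroed weights across the linear layers. This only works on a full-precision load (4-bit quantization packs the weights, hiding the zeros), so the sketch below reloads the model without `quantization_config` and needs enough memory to hold it; expect a figure close to 50%.

```python
import torch
from transformers import AutoModelForCausalLM

# load in full precision so the zeroed weights are visible
dense_model = AutoModelForCausalLM.from_pretrained('kettleguts/zephyr-7b-beta_sparse05')

total, zeros = 0, 0
for module in dense_model.modules():
    if isinstance(module, torch.nn.Linear):
        total += module.weight.numel()
        zeros += (module.weight == 0).sum().item()
print(f'Linear-layer sparsity: {zeros / total:.2%}')
```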




## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary



## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**
```bibtex
@misc{tunstall2023zephyr,
      title={Zephyr: Direct Distillation of LM Alignment}, 
      author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf},
      year={2023},
      eprint={2310.16944},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
```bibtex
@misc{sun2023simple,
      title={A Simple and Effective Pruning Approach for Large Language Models}, 
      author={Mingjie Sun and Zhuang Liu and Anna Bair and J. Zico Kolter},
      year={2023},
      eprint={2306.11695},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```