mwitiderrick committed
Commit f4d957f · Parent: ddd98e9

Update README.md

Files changed (1): README.md (+17 -1)
README.md CHANGED
@@ -56,4 +56,20 @@ python sparseml/src/sparseml/transformers/sparsification/obcq/obcq.py GeneZC/Min
  python sparseml/src/sparseml/transformers/sparsification/obcq/export.py --task text-generation --model_path obcq_deployment
  cp deployment/model.onnx deployment/model-orig.onnx
  python onnx_kv_inject.py --input-file deployment/model-orig.onnx --output-file deployment/model.onnx
- ```
+ ```
+ Run this kv-cache injection to speed up the model at inference by caching the Key and Value states:
+ ```python
+ import os
+ import onnx
+ from sparseml.exporters.kv_cache_injector import KeyValueCacheInjector
+ input_file = "deployment/model-orig.onnx"
+ output_file = "deployment/model.onnx"
+ model = onnx.load(input_file, load_external_data=False)
+ model = KeyValueCacheInjector(model_path=os.path.dirname(input_file)).apply(model)
+ onnx.save(model, output_file)
+ print(f"Modified model saved to: {output_file}")
+ ```
+ Follow the instructions on our [One Shot With SparseML](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/transformers/sparsification/obcq) page for a step-by-step guide to performing one-shot quantization of large language models.
+ ## Slack
+
+ For further support, and for discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).
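As a quick sanity check after the injection step in the diff above (a minimal sketch, not part of the commit), the rewritten graph's inputs and outputs can be listed with the `onnx` Python API; the exact names of the injected cache tensors depend on the exported model:

```python
import onnx

# Load only the graph definition; weights stay in the external-data files
# written by the export step.
model = onnx.load("deployment/model.onnx", load_external_data=False)

# After kv-cache injection the graph should expose additional inputs/outputs
# for the cached Key and Value states alongside input_ids / attention_mask.
print("Inputs: ", [inp.name for inp in model.graph.input])
print("Outputs:", [out.name for out in model.graph.output])
```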