legolasyiu committed (verified)
Commit 26b8494 · 1 Parent(s): 0978750

Update README.md

Files changed (1):
  1. README.md +5 -1
README.md CHANGED
@@ -82,7 +82,7 @@ After installing `mistral_inference`, a `mistral-demo` CLI command should be ava
  > ```sh
  > pip install mistral_inference
  > pip install mistral-demo
- >
+ > pip install accelerate #GPU A100/L4
  > pip install git+https://github.com/huggingface/transformers.git
  > ```
 
@@ -90,6 +90,10 @@ If you want to use Hugging Face `transformers` to generate text, you can do some
 
  ```py
  from transformers import AutoModelForCausalLM, AutoTokenizer
+ from accelerate import Accelerator #Use only GPU A100/L4
+
+ accelerator = Accelerator() #Use only GPU A100/L4
+
  model_id = "EpistemeAI/Fireball-12B"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(model_id)
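
For context on the snippet this hunk edits: the diff stops at model loading, and the new `Accelerator()` object is only instantiated. A minimal sketch of how the example might continue to generate text is below; the prompt, `max_new_tokens` value, and the explicit `model.to(accelerator.device)` call are illustrative assumptions, not part of the commit.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer
from accelerate import Accelerator  # only needed on GPU (e.g. A100/L4), per the README note

accelerator = Accelerator()

model_id = "EpistemeAI/Fireball-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Move the model to the device accelerate selected (falls back to CPU if no GPU is present).
model.to(accelerator.device)

# Illustrative prompt: encode, generate, and decode.
prompt = "Tell me a short story about a dragon."
inputs = tokenizer(prompt, return_tensors="pt").to(accelerator.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since `accelerate` is installed, passing `device_map="auto"` to `from_pretrained` is a common alternative to moving the model to a device manually.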