MISHANM committed
Commit d92ab67 · verified · 1 Parent(s): 98bef84

Update README.md

Files changed (1)
  1. README.md +2 -4
README.md CHANGED
@@ -18,10 +18,6 @@ This model is an advanced fp8 quantized version of google/gemma-3-27b-it, meticu
 
 1. GPUs: 1*AMD Instinct™ MI210 Accelerators
 
-
-
-
-# Inference with HuggingFace
 
 ## Transformers library
 
@@ -29,6 +25,8 @@ This model is an advanced fp8 quantized version of google/gemma-3-27b-it, meticu
 pip install git+https://github.com/huggingface/[email protected]
 ```
 
+# Inference with HuggingFace
+
 ```python3
 
 from transformers import AutoProcessor, Gemma3ForConditionalGeneration, BitsAndBytesConfig
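The diff ends at the opening import line of the README's Python example. For orientation only, here is a minimal sketch of how a Gemma 3 inference script built on those imports typically continues with the Transformers API. The model ID, dtype, prompt, and generation settings below are illustrative assumptions and are not part of this commit; the README's fp8 checkpoint (and its use of BitsAndBytesConfig) may load differently.

```python
# Minimal sketch (not from this commit): text-only inference with Gemma 3 via Transformers.
# Model ID, dtype, and prompt are assumptions for illustration.
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model_id = "google/gemma-3-27b-it"  # assumed base model; the commit's repo hosts an fp8 variant

model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id,
    device_map="auto",           # place weights on the available GPU(s), e.g. an MI210
    torch_dtype=torch.bfloat16,  # assumed compute dtype for this sketch
).eval()
processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {"role": "user", "content": [{"type": "text", "text": "Explain fp8 quantization in one sentence."}]},
]

# Build model inputs from the chat template and move them to the model's device.
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

with torch.inference_mode():
    generated = model.generate(**inputs, max_new_tokens=64)

# Strip the prompt tokens and decode only the newly generated text.
prompt_len = inputs["input_ids"].shape[-1]
print(processor.decode(generated[0][prompt_len:], skip_special_tokens=True))
```

This follows the standard processor-plus-chat-template pattern for Gemma 3 in recent Transformers releases; the actual snippet in the updated README may differ in model path and loading options.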