pcuenq (HF Staff) and olegshulyakov committed
Commit 6111d6a (verified) · Parent(s): d0cb2e1

Update README.md (#1)

- Update README.md (f9298aa0fea2b912bf00d2c400c2b3c6e2af7fe8)


Co-authored-by: Oleg Shulyakov <[email protected]>

Files changed (1):
  1. README.md +9 -7
README.md CHANGED

@@ -4,20 +4,22 @@ library_name: transformers
 tags:
 - mlx
 extra_gated_heading: Access CodeGemma on Hugging Face
-extra_gated_prompt: To access CodeGemma on Hugging Face, you’re required to review
-  and agree to Google’s usage license. To do this, please ensure you’re logged-in
-  to Hugging Face and click below. Requests are processed immediately.
+extra_gated_prompt: >-
+  To access CodeGemma on Hugging Face, you’re required to review and agree to
+  Google’s usage license. To do this, please ensure you’re logged-in to Hugging
+  Face and click below. Requests are processed immediately.
 extra_gated_button_content: Acknowledge license
 pipeline_tag: text-generation
 widget:
-- text: '<start_of_turn>user Write a Python function to calculate the nth fibonacci
+- text: >
+    <start_of_turn>user Write a Python function to calculate the nth fibonacci
     number.<end_of_turn> <start_of_turn>model
-
-    '
 inference:
   parameters:
     max_new_tokens: 200
 license_link: https://ai.google.dev/gemma/terms
+base_model:
+- google/codegemma-1.1-7b-it
 ---
 
 # mlx-community/codegemma-1.1-7b-it-8bit

@@ -34,4 +36,4 @@ from mlx_lm import load, generate
 
 model, tokenizer = load("mlx-community/codegemma-1.1-7b-it-8bit")
 response = generate(model, tokenizer, prompt="hello", verbose=True)
-```
+```
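The usage snippet in the README passes a raw string as the prompt, while the widget text above encodes CodeGemma's `<start_of_turn>user … <end_of_turn> <start_of_turn>model` turn format by hand. Below is a minimal sketch of producing that same format through the tokenizer's chat template instead, assuming mlx-lm is installed and its tokenizer wrapper exposes the underlying Hugging Face `apply_chat_template` (which current mlx-lm versions do); the message content mirrors the widget example:

```python
# Minimal sketch (assumption: mlx-lm is installed, e.g. `pip install mlx-lm`).
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/codegemma-1.1-7b-it-8bit")

# Build the <start_of_turn> chat format from the widget example via the
# model's chat template rather than hand-writing the control tokens.
messages = [
    {
        "role": "user",
        "content": "Write a Python function to calculate the nth fibonacci number.",
    }
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

With `add_generation_prompt=True`, the rendered prompt ends at `<start_of_turn>model`, matching the widget example, so generation continues as the model's turn.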