Xenova and whitphx committed

Commit 7956d98 · verified · Parent(s): 3137696

Add/update the quantized ONNX model files and README.md for Transformers.js v3 (#9)

- Add/update the quantized ONNX model files and README.md for Transformers.js v3 (9deeba132f79306a18ef3bf01132a4de6f481a1e)


Co-authored-by: Yuichiro Tachibana <[email protected]>

Files changed (1)
  1. README.md +7 -2
README.md CHANGED
````diff
@@ -8,7 +8,7 @@ tags:
 
 # GPT-4o Tokenizer
 
-A 🤗-compatible version of the **GPT-4o tokenizer** (adapted from [openai/tiktoken](https://github.com/openai/tiktoken)). This means it can be used with Hugging Face libraries including [Transformers](https://github.com/huggingface/transformers), [Tokenizers](https://github.com/huggingface/tokenizers), and [Transformers.js](https://github.com/xenova/transformers.js).
+A 🤗-compatible version of the **GPT-4o tokenizer** (adapted from [openai/tiktoken](https://github.com/openai/tiktoken)). This means it can be used with Hugging Face libraries including [Transformers](https://github.com/huggingface/transformers), [Tokenizers](https://github.com/huggingface/tokenizers), and [Transformers.js](https://github.com/huggingface/transformers.js).
 
 ## Example usage:
 
@@ -21,8 +21,13 @@ assert tokenizer.encode('hello world') == [24912, 2375]
 ```
 
 ### Transformers.js
+If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
+```bash
+npm i @huggingface/transformers
+```
+
 ```js
-import { AutoTokenizer } from '@xenova/transformers';
+import { AutoTokenizer } from '@huggingface/transformers';
 
 const tokenizer = await AutoTokenizer.from_pretrained('Xenova/gpt-4o');
 const tokens = tokenizer.encode('hello world'); // [24912, 2375]
```
````