Add/update the quantized ONNX model files and README.md for Transformers.js v3
Applied Quantizations

✅ Based on `decoder_model.onnx` with slimming
↳ ✅ `fp16`: `decoder_model_fp16.onnx` (added)
↳ ✅ `int8`: `decoder_model_int8.onnx` (added)
↳ ✅ `uint8`: `decoder_model_uint8.onnx` (added)
↳ ✅ `q4`: `decoder_model_q4.onnx` (added)
↳ ✅ `q4f16`: `decoder_model_q4f16.onnx` (added)
↳ ✅ `bnb4`: `decoder_model_bnb4.onnx` (added)
❌ Based on `encoder_model.onnx` with slimming

```
None
```

↳ ❌ `int8`: `encoder_model_int8.onnx` (added but JS-based E2E test failed; see the workaround sketch after this list)

```
dtype not specified for "decoder_model_merged". Using the default dtype (fp32) for this device (cpu).
/home/ubuntu/src/tjsmigration/node_modules/.pnpm/[email protected]/node_modules/onnxruntime-node/dist/backend.js:25
            __classPrivateFieldGet(this, _OnnxruntimeSessionHandler_inferenceSession, "f").loadModel(pathOrBuffer, options);
                                                                                           ^

Error: Could not find an implementation for ConvInteger(10) node with name '/embeddings/patch_embeddings/projection/Conv_quant'
    at new OnnxruntimeSessionHandler (/home/ubuntu/src/tjsmigration/node_modules/.pnpm/[email protected]/node_modules/onnxruntime-node/dist/backend.js:25:92)
    at Immediate.<anonymous> (/home/ubuntu/src/tjsmigration/node_modules/.pnpm/[email protected]/node_modules/onnxruntime-node/dist/backend.js:67:29)
    at process.processImmediate (node:internal/timers:485:21)

Node.js v22.16.0
```

↳ ✅ `uint8`: `encoder_model_uint8.onnx` (added)
↳ ✅ `q4`: `encoder_model_q4.onnx` (added)
↳ ✅ `q4f16`: `encoder_model_q4f16.onnx` (added)
↳ ✅ `bnb4`: `encoder_model_bnb4.onnx` (added)
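
The `int8` encoder failure above is a runtime limitation rather than a conversion bug: dynamic int8 quantization leaves a `ConvInteger` node in the patch-embedding convolution, and the CPU execution provider bundled with `onnxruntime-node` has no `ConvInteger` implementation, so the session fails at load time. Until that gap is closed, consumers can pin a working variant per module via the Transformers.js v3 `dtype` option. A minimal sketch, assuming this is an image-to-text checkpoint; the repo id is a placeholder for whichever repository this PR targets:

```js
import { pipeline } from '@huggingface/transformers';

// Placeholder repo id; substitute the repository this PR targets.
const generator = await pipeline('image-to-text', 'your-org/your-model', {
  dtype: {
    encoder_model: 'uint8',        // sidesteps the int8 ConvInteger path on CPU
    decoder_model_merged: 'fp32',  // the default the warning above falls back to
  },
});

console.log(await generator('https://example.com/sample.png'));
```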
✅ Based on `decoder_with_past_model.onnx` with slimming
↳ ✅ `fp16`: `decoder_with_past_model_fp16.onnx` (added)
↳ ✅ `int8`: `decoder_with_past_model_int8.onnx` (added)
↳ ✅ `uint8`: `decoder_with_past_model_uint8.onnx` (added)
↳ ✅ `q4`: `decoder_with_past_model_q4.onnx` (added)
↳ ✅ `q4f16`: `decoder_with_past_model_q4f16.onnx` (added)
↳ ✅ `bnb4`: `decoder_with_past_model_bnb4.onnx` (added)
❌ Based on `decoder_model_merged.onnx` with slimming

```
  0%|          | 0/1 [00:00<?, ?it/s]
Processing /tmp/tmpmif_vzn4/decoder_model_merged.onnx:   0%|          | 0/1 [00:00<?, ?it/s]
  0%|          | 0/6 [00:00<?, ?it/s]
 - Quantizing to fp16:   0%|          | 0/6 [00:00<?, ?it/s]
/home/ubuntu/src/tjsmigration/transformers.js/scripts/float16.py:73: UserWarning: the float32 number 5.960464477539063e-08 will be truncated to 1e-07
  warnings.warn(
/home/ubuntu/src/tjsmigration/transformers.js/scripts/float16.py:92: UserWarning: the float32 number -5.960464477539063e-08 will be truncated to -1e-07
  warnings.warn(
/home/ubuntu/src/tjsmigration/transformers.js/scripts/float16.py:85: UserWarning: the float32 number -3.4028234663852886e+38 will be truncated to -10000.0
  warnings.warn(
 - Quantizing to fp16:   0%|          | 0/6 [00:16<?, ?it/s]
Processing /tmp/tmpmif_vzn4/decoder_model_merged.onnx:   0%|          | 0/1 [00:16<?, ?it/s]
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/ubuntu/src/tjsmigration/transformers.js/scripts/quantize.py", line 377, in <module>
    main()
  File "/home/ubuntu/src/tjsmigration/transformers.js/scripts/quantize.py", line 374, in main
    quantize(input_folder, output_folder, quantization_args)
  File "/home/ubuntu/src/tjsmigration/transformers.js/scripts/quantize.py", line 309, in quantize
    quantize_fp16(
  File "/home/ubuntu/src/tjsmigration/transformers.js/scripts/quantize.py", line 223, in quantize_fp16
    check_and_save_model(model_fp16, save_path)
  File "/home/ubuntu/src/tjsmigration/transformers.js/scripts/utils.py", line 29, in check_and_save_model
    strict_check_model(model)
  File "/home/ubuntu/src/tjsmigration/transformers.js/scripts/utils.py", line 21, in strict_check_model
    raise e
  File "/home/ubuntu/src/tjsmigration/transformers.js/scripts/utils.py", line 16, in strict_check_model
    onnx.checker.check_model(model_or_path, full_check=True)
  File "/home/ubuntu/.cache/uv/archive-v0/7hYcxZ8pwavXeKpAYRaHY/lib/python3.12/site-packages/onnx/checker.py", line 179, in check_model
    C.check_model(
onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:If, node name: optimum::if): [ShapeInferenceError] Inference error(s): (op_type:Add, node name: /decoder/decoder/embed_positions/Add): [ShapeInferenceError] Inferred shape and existing shape differ in rank: (1) vs (0)
```
✅ Based on `decoder_model_merged.onnx` without slimming
↳ ✅ `fp16`: `decoder_model_merged_fp16.onnx` (replaced because it was invalid)
↳ ✅ `int8`: `decoder_model_merged_int8.onnx` (added)
↳ ✅ `uint8`: `decoder_model_merged_uint8.onnx` (added)
↳ ✅ `q4`: `decoder_model_merged_q4.onnx` (added)
↳ ✅ `q4f16`: `decoder_model_merged_q4f16.onnx` (added)
↳ ✅ `bnb4`: `decoder_model_merged_bnb4.onnx` (added)
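
Because `decoder_model_merged_fp16.onnx` was replaced rather than freshly added (the previous upload was invalid), it is worth a quick check that the new file passes onnxruntime's load-time graph validation under the same backend the E2E tests use. A minimal sketch with `onnxruntime-node`, assuming the file has been downloaded to a local `onnx/` folder (a hypothetical path):

```js
import * as ort from 'onnxruntime-node';

// Hypothetical local path; adjust to wherever the file was downloaded.
const session = await ort.InferenceSession.create(
  './onnx/decoder_model_merged_fp16.onnx',
  { executionProviders: ['cpu'] },
);

// A successful create() means the graph loaded; listing I/O names is a
// cheap sanity check that the expected signature survived quantization.
console.log('inputs:', session.inputNames);
console.log('outputs:', session.outputNames);
```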
Great work! Thanks so much! 🤗