Latest commit: Upload tokenizer · 57d47ee (verified)
File                               Size        Last commit
-                                  1.57 kB     Upload tokenizer
-                                  8.57 kB     Update README.md
-                                  707 Bytes   Upload tokenizer
-                                  4.17 kB     Upload tokenizer
-                                  1.9 kB      Upload Qwen3ForCausalLM
-                                  214 Bytes   Upload Qwen3ForCausalLM
-                                  1.67 MB     Upload tokenizer
pytorch_model-00001-of-00007.bin   4.97 GB     Upload Qwen3ForCausalLM
pytorch_model-00002-of-00007.bin   4.97 GB     Upload Qwen3ForCausalLM
pytorch_model-00003-of-00007.bin   4.88 GB     Upload Qwen3ForCausalLM
pytorch_model-00004-of-00007.bin   4.88 GB     Upload Qwen3ForCausalLM
pytorch_model-00005-of-00007.bin   4.88 GB     Upload Qwen3ForCausalLM
pytorch_model-00006-of-00007.bin   4.88 GB     Upload Qwen3ForCausalLM
pytorch_model-00007-of-00007.bin   4.88 GB     Upload Qwen3ForCausalLM
-                                  58.3 kB     Upload Qwen3ForCausalLM
-                                  613 Bytes   Upload tokenizer
-                                  11.4 MB     Upload tokenizer
-                                  5.4 kB      Upload tokenizer
-                                  2.78 MB     Upload tokenizer

Each of the seven pytorch_model-*.bin shards is flagged by the pickle scanner
with the same 20 detected imports; the set is identical across shards, only the
listing order differs. A loading sketch that allowlists these imports follows
the list.

Detected Pickle imports (20):

- collections.OrderedDict
- torch.BFloat16Storage
- torch.FloatStorage
- torch.bfloat16
- torch.device
- torch.float8_e4m3fn
- torch.serialization._get_layout
- torch.storage.UntypedStorage
- torch._tensor._rebuild_from_type_v2
- torch._utils._rebuild_tensor_v2
- torch._utils._rebuild_tensor_v3
- torch._utils._rebuild_wrapper_subclass
- torchao.dtypes.affine_quantized_tensor.AffineQuantizedTensor
- torchao.dtypes.floatx.float8_layout.Float8AQTTensorImpl
- torchao.dtypes.floatx.float8_layout.Float8Layout
- torchao.float8.inference.Float8MMConfig
- torchao.quantization.granularity.PerRow
- torchao.quantization.linear_activation_quantized_tensor.LinearActivationQuantizedTensor
- torchao.quantization.quant_api._input_activation_quant_func_fp8
- torchao.quantization.quant_primitives.ZeroPointDomain
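One way to address the scanner's warning, sketched below: load a shard through
PyTorch's restricted unpickler and explicitly allowlist the globals named in
the list above, so that nothing outside that set can execute during loading.
This is a minimal sketch, not a recipe from this repository; it assumes
PyTorch >= 2.4 (for torch.serialization.add_safe_globals) and an installed
torchao that still exposes these import paths.

import torch
from torchao.dtypes.affine_quantized_tensor import AffineQuantizedTensor
from torchao.dtypes.floatx.float8_layout import Float8AQTTensorImpl, Float8Layout
from torchao.float8.inference import Float8MMConfig
from torchao.quantization.granularity import PerRow
from torchao.quantization.linear_activation_quantized_tensor import (
    LinearActivationQuantizedTensor,
)
from torchao.quantization.quant_api import _input_activation_quant_func_fp8
from torchao.quantization.quant_primitives import ZeroPointDomain

# Allowlist the torchao types the scanner flagged. The plain torch.* entries
# (dtypes, storages, rebuild helpers, OrderedDict) are generally already on
# the default weights_only allowlist in recent PyTorch releases.
torch.serialization.add_safe_globals([
    AffineQuantizedTensor,
    Float8AQTTensorImpl,
    Float8Layout,
    Float8MMConfig,
    PerRow,
    LinearActivationQuantizedTensor,
    _input_activation_quant_func_fp8,
    ZeroPointDomain,
    torch.serialization._get_layout,
])

# weights_only=True refuses any global not on the allowlist; if loading still
# fails, the error message names the missing entry to add.
state_dict = torch.load(
    "pytorch_model-00001-of-00007.bin", map_location="cpu", weights_only=True
)
print(f"shard 1: {len(state_dict)} entries")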
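For context on where these imports come from: the combination of PerRow,
Float8MMConfig, AffineQuantizedTensor, and _input_activation_quant_func_fp8 is
characteristic of torchao's float8 dynamic-activation / float8-weight
quantization. A hedged sketch of how such a checkpoint is typically produced
follows; the base model id and output directory are placeholders, not values
taken from this repository.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from torchao.quantization import quantize_, float8_dynamic_activation_float8_weight
from torchao.quantization.granularity import PerRow

base = "Qwen/Qwen3-8B"  # placeholder base checkpoint

model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.bfloat16, device_map="cuda"
)

# Replace nn.Linear weights with float8_e4m3fn AffineQuantizedTensor subclasses
# (one scale per output row) and quantize activations to float8 on the fly.
quantize_(model, float8_dynamic_activation_float8_weight(granularity=PerRow()))

# The quantized weights are tensor subclasses, which safetensors does not store
# directly, so the model is saved as pickle-based pytorch_model-*.bin shards
# like the ones listed above.
model.save_pretrained("Qwen3-FP8", safe_serialization=False)
AutoTokenizer.from_pretrained(base).save_pretrained("Qwen3-FP8")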
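Consuming the repository end to end is then the usual transformers flow,
sketched here with a placeholder repo id. torchao must be installed so the
quantized tensor subclasses can be rebuilt, and because the shards are pickle
files, this should only be done for repositories you trust.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "user/Qwen3-FP8"  # placeholder for this repository's id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Give me a short introduction to large language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))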