/home/floriadmin/miniforge3/envs/mlc/bin/python -m mlc_llm gen_config ../dist/models/Qwen1.5-1.8B --quantization q8f32_1 --conv-template chatml --output /tmp/tmpnawifqwj
[2024-03-18 19:06:21] INFO auto_config.py:115: Found model configuration: ../dist/models/Qwen1.5-1.8B/config.json
[2024-03-18 19:06:21] INFO auto_config.py:153: Found model type: qwen2. Use `--model-type` to override.
[2024-03-18 19:06:21] INFO qwen2_model.py:46: context_window_size not found in config.json. Falling back to max_position_embeddings (32768)
[2024-03-18 19:06:21] INFO qwen2_model.py:60: prefill_chunk_size defaults to context_window_size (32768)
[2024-03-18 19:06:21] WARNING config.py:99: Warning: Cannot override max_batch_size, because QWen2Config does not have this field
[2024-03-18 19:06:21] INFO gen_config.py:133: [generation_config.json] Setting bos_token_id: 151643
[2024-03-18 19:06:21] INFO gen_config.py:133: [generation_config.json] Setting eos_token_id: 151643
[2024-03-18 19:06:21] INFO gen_config.py:147: Not found tokenizer config: ../dist/models/Qwen1.5-1.8B/tokenizer.model
[2024-03-18 19:06:21] INFO gen_config.py:145: Found tokenizer config: ../dist/models/Qwen1.5-1.8B/tokenizer.json. Copying to /tmp/tmpnawifqwj/tokenizer.json
[2024-03-18 19:06:21] INFO gen_config.py:145: Found tokenizer config: ../dist/models/Qwen1.5-1.8B/vocab.json. Copying to /tmp/tmpnawifqwj/vocab.json
[2024-03-18 19:06:21] INFO gen_config.py:145: Found tokenizer config: ../dist/models/Qwen1.5-1.8B/merges.txt. Copying to /tmp/tmpnawifqwj/merges.txt
[2024-03-18 19:06:21] INFO gen_config.py:147: Not found tokenizer config: ../dist/models/Qwen1.5-1.8B/added_tokens.json
[2024-03-18 19:06:21] INFO gen_config.py:145: Found tokenizer config: ../dist/models/Qwen1.5-1.8B/tokenizer_config.json. Copying to /tmp/tmpnawifqwj/tokenizer_config.json
[2024-03-18 19:06:21] INFO gen_config.py:75: [System default] Setting pad_token_id: 0
[2024-03-18 19:06:21] INFO gen_config.py:75: [System default] Setting temperature: 0.7
[2024-03-18 19:06:21] INFO gen_config.py:75: [System default] Setting presence_penalty: 0.0
[2024-03-18 19:06:21] INFO gen_config.py:75: [System default] Setting frequency_penalty: 0.0
[2024-03-18 19:06:21] INFO gen_config.py:75: [System default] Setting repetition_penalty: 1.0
[2024-03-18 19:06:21] INFO gen_config.py:75: [System default] Setting top_p: 0.95
[2024-03-18 19:06:21] INFO gen_config.py:75: [System default] Setting mean_gen_len: 128
[2024-03-18 19:06:21] INFO gen_config.py:75: [System default] Setting max_gen_len: 512
[2024-03-18 19:06:21] INFO gen_config.py:75: [System default] Setting shift_fill_factor: 0.3
[2024-03-18 19:06:21] INFO gen_config.py:198: Dumping configuration file to: /tmp/tmpnawifqwj/mlc-chat-config.json
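The gen_config step above writes its choices (conversation template, context window, token ids, and the [System default] sampling values) into mlc-chat-config.json. A minimal sketch for inspecting that file, assuming the --output directory used in the command above and the field names MLC emits in this config (`.get()` is used in case a field is absent in your version):

```python
import json
from pathlib import Path

# Directory passed as --output to `mlc_llm gen_config` above.
config_path = Path("/tmp/tmpnawifqwj") / "mlc-chat-config.json"
cfg = json.loads(config_path.read_text())

# Fields the log reports setting.
for key in ("model_type", "quantization", "conv_template", "context_window_size",
            "prefill_chunk_size", "temperature", "top_p", "repetition_penalty"):
    print(f"{key}: {cfg.get(key)}")
```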
/home/floriadmin/miniforge3/envs/mlc/bin/python -m mlc_llm convert_weight ../dist/models/Qwen1.5-1.8B --quantization q8f32_1 --source-format auto --output /tmp/tmpnawifqwj
[2024-03-18 19:06:22] INFO auto_config.py:115: Found model configuration: ../dist/models/Qwen1.5-1.8B/config.json
[2024-03-18 19:06:22] INFO auto_device.py:76: Found device: cuda:0
[2024-03-18 19:06:22] INFO auto_device.py:76: Found device: cuda:1
[2024-03-18 19:06:22] INFO auto_device.py:76: Found device: cuda:2
[2024-03-18 19:06:22] INFO auto_device.py:76: Found device: cuda:3
[2024-03-18 19:06:22] INFO auto_device.py:76: Found device: cuda:4
[2024-03-18 19:06:22] INFO auto_device.py:76: Found device: cuda:5
[2024-03-18 19:06:22] INFO auto_device.py:76: Found device: cuda:6
[2024-03-18 19:06:22] INFO auto_device.py:76: Found device: cuda:7
[2024-03-18 19:06:22] INFO auto_device.py:76: Found device: cuda:8
[2024-03-18 19:06:22] INFO auto_device.py:76: Found device: cuda:9
[2024-03-18 19:06:23] INFO auto_device.py:85: Not found device: rocm:0
[2024-03-18 19:06:24] INFO auto_device.py:85: Not found device: metal:0
[2024-03-18 19:06:26] INFO auto_device.py:76: Found device: vulkan:0
[2024-03-18 19:06:26] INFO auto_device.py:76: Found device: vulkan:1
[2024-03-18 19:06:26] INFO auto_device.py:76: Found device: vulkan:2
[2024-03-18 19:06:26] INFO auto_device.py:76: Found device: vulkan:3
[2024-03-18 19:06:26] INFO auto_device.py:76: Found device: vulkan:4
[2024-03-18 19:06:26] INFO auto_device.py:76: Found device: vulkan:5
[2024-03-18 19:06:26] INFO auto_device.py:76: Found device: vulkan:6
[2024-03-18 19:06:26] INFO auto_device.py:76: Found device: vulkan:7
[2024-03-18 19:06:26] INFO auto_device.py:76: Found device: vulkan:8
[2024-03-18 19:06:26] INFO auto_device.py:76: Found device: vulkan:9
[2024-03-18 19:06:26] INFO auto_device.py:76: Found device: vulkan:10
[2024-03-18 19:06:27] INFO auto_device.py:85: Not found device: opencl:0
[2024-03-18 19:06:27] INFO auto_device.py:33: Using device: cuda:0
[2024-03-18 19:06:27] INFO auto_weight.py:70: Finding weights in: ../dist/models/Qwen1.5-1.8B
[2024-03-18 19:06:27] INFO auto_weight.py:136: Not found Huggingface PyTorch
[2024-03-18 19:06:27] INFO auto_weight.py:143: Found source weight format: huggingface-safetensor. Source configuration: ../dist/models/Qwen1.5-1.8B/model.safetensors.index.json
[2024-03-18 19:06:27] INFO auto_weight.py:106: Using source weight configuration: ../dist/models/Qwen1.5-1.8B/model.safetensors.index.json. Use `--source` to override.
[2024-03-18 19:06:27] INFO auto_weight.py:110: Using source weight format: huggingface-safetensor. Use `--source-format` to override.
[2024-03-18 19:06:27] INFO auto_config.py:153: Found model type: qwen2. Use `--model-type` to override.
[2024-03-18 19:06:27] INFO qwen2_model.py:46: context_window_size not found in config.json. Falling back to max_position_embeddings (32768)
[2024-03-18 19:06:27] INFO qwen2_model.py:60: prefill_chunk_size defaults to context_window_size (32768)
Weight conversion with arguments:
  --config          ../dist/models/Qwen1.5-1.8B/config.json
  --quantization    GroupQuantize(name='q8f32_1', kind='group-quant', group_size=32, quantize_dtype='int8', storage_dtype='uint32', model_dtype='float32', linear_weight_layout='NK', quantize_embedding=True, quantize_final_fc=True, num_elem_per_storage=4, num_storage_per_group=8, max_int_value=127)
  --model-type      qwen2
  --device          cuda:0
  --source          ../dist/models/Qwen1.5-1.8B/model.safetensors.index.json
  --source-format   huggingface-safetensor
  --output          /tmp/tmpnawifqwj
Start storing to cache /tmp/tmpnawifqwj
  0%|          | 0/171 [00:00<?, ?it/s]
UserWarning: The value of the smallest subnormal for type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/home/floriadmin/miniforge3/envs/mlc/lib/python3.11/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for type is zero.
  return self._float_to_str(self.smallest_subnormal)
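The GroupQuantize settings above (group_size=32, int8 values stored four to a uint32 word) determine the q_weight/q_scale shapes reported for every tensor in the conversion log that follows. A small sketch of that shape arithmetic, using dimensions taken from the log itself (hidden size 2048, MLP intermediate size 5504, vocabulary 151936); this is illustrative arithmetic, not MLC code:

```python
GROUP_SIZE = 32        # GroupQuantize(group_size=32, ...)
ELEMS_PER_STORAGE = 4  # num_elem_per_storage=4: four int8 values per uint32 word

def quantized_shapes(rows: int, cols: int):
    """Expected (q_weight, q_scale) shapes for a (rows, cols) float32 weight
    quantized along axis=1."""
    q_weight = (rows, cols // ELEMS_PER_STORAGE)  # packed uint32 words
    q_scale = (rows, cols // GROUP_SIZE)          # one float32 scale per group
    return q_weight, q_scale

print(quantized_shapes(151936, 2048))  # embed_tokens     -> ((151936, 512), (151936, 64))
print(quantized_shapes(2048, 5504))    # mlp.down_proj    -> ((2048, 1376), (2048, 172))
print(quantized_shapes(6144, 2048))    # self_attn.c_attn -> ((6144, 512), (6144, 64))
```

These values match the shapes logged for the corresponding parameters below.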
return self._float_to_str(self.smallest_subnormal) 1%|▌ | 1/171 [00:08<23:16, 8.21s/it] [2024-03-18 19:06:41] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.embed_tokens.q_weight", shape: (151936, 512), dtype: uint32 1%|▌ | 1/171 [00:11<23:16, 8.21s/it] [2024-03-18 19:06:42] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.embed_tokens.q_scale", shape: (151936, 64), dtype: float32 1%|▌ | 1/171 [00:13<23:16, 8.21s/it] 1%|█ | 2/171 [00:13<18:15, 6.48s/it] [2024-03-18 19:06:43] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.0.input_layernorm.weight", shape: (2048,), dtype: float32 1%|█ | 2/171 [00:13<18:15, 6.48s/it] [2024-03-18 19:06:43] INFO group_quantization.py:232: Compiling quantize function for key: ((2048, 5504), float32, cuda, axis=1, output_transpose=False) 1%|█ | 2/171 [00:13<18:15, 6.48s/it] [2024-03-18 19:06:43] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.0.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32 1%|█ | 2/171 [00:14<18:15, 6.48s/it] [2024-03-18 19:06:43] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.0.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32 1%|█ | 2/171 [00:14<18:15, 6.48s/it] 2%|██▏ | 4/171 [00:14<07:17, 2.62s/it] [2024-03-18 19:06:44] INFO group_quantization.py:232: Compiling quantize function for key: ((11008, 2048), float32, cuda, axis=1, output_transpose=False) 2%|██▏ | 4/171 [00:14<07:17, 2.62s/it] [2024-03-18 19:06:44] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.0.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32 2%|██▏ | 4/171 [00:14<07:17, 2.62s/it] [2024-03-18 19:06:44] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.0.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32 2%|██▏ | 4/171 [00:14<07:17, 2.62s/it] 3%|██▋ | 5/171 [00:14<05:43, 2.07s/it] [2024-03-18 19:06:44] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.0.post_attention_layernorm.weight", shape: (2048,), dtype: float32 3%|██▋ | 5/171 [00:14<05:43, 2.07s/it] [2024-03-18 19:06:44] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.0.self_attn.c_attn.bias", shape: (6144,), dtype: float32 3%|██▋ | 5/171 [00:14<05:43, 2.07s/it] [2024-03-18 19:06:44] INFO group_quantization.py:232: Compiling quantize function for key: ((6144, 2048), float32, cuda, axis=1, output_transpose=False) 3%|██▋ | 5/171 [00:15<05:43, 2.07s/it] [2024-03-18 19:06:45] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.0.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32 3%|██▋ | 5/171 [00:15<05:43, 2.07s/it] [2024-03-18 19:06:45] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.0.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32 3%|██▋ | 5/171 [00:15<05:43, 2.07s/it] 5%|████▎ | 8/171 [00:15<02:45, 1.02s/it] [2024-03-18 19:06:45] INFO group_quantization.py:232: Compiling quantize function for key: ((2048, 2048), float32, cuda, axis=1, output_transpose=False) 5%|████▎ | 8/171 [00:15<02:45, 1.02s/it] [2024-03-18 19:06:45] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.0.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32 5%|████▎ | 8/171 [00:16<02:45, 1.02s/it] [2024-03-18 19:06:45] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.0.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32 5%|████▎ | 8/171 [00:16<02:45, 1.02s/it] 5%|████▊ | 9/171 [00:16<02:25, 1.11it/s] [2024-03-18 
19:06:45] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.1.input_layernorm.weight", shape: (2048,), dtype: float32 5%|████▊ | 9/171 [00:16<02:25, 1.11it/s] [2024-03-18 19:06:45] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.1.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32 5%|████▊ | 9/171 [00:16<02:25, 1.11it/s] [2024-03-18 19:06:45] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.1.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32 5%|████▊ | 9/171 [00:16<02:25, 1.11it/s] 6%|█████▊ | 11/171 [00:16<01:34, 1.69it/s] [2024-03-18 19:06:46] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.1.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32 6%|█████▊ | 11/171 [00:16<01:34, 1.69it/s] [2024-03-18 19:06:46] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.1.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32 6%|█████▊ | 11/171 [00:16<01:34, 1.69it/s] 7%|██████▍ | 12/171 [00:16<01:28, 1.80it/s] [2024-03-18 19:06:46] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.1.post_attention_layernorm.weight", shape: (2048,), dtype: float32 7%|██████▍ | 12/171 [00:16<01:28, 1.80it/s] [2024-03-18 19:06:46] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.1.self_attn.c_attn.bias", shape: (6144,), dtype: float32 7%|██████▍ | 12/171 [00:16<01:28, 1.80it/s] [2024-03-18 19:06:46] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.1.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32 7%|██████▍ | 12/171 [00:16<01:28, 1.80it/s] [2024-03-18 19:06:46] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.1.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32 7%|██████▍ | 12/171 [00:16<01:28, 1.80it/s] 9%|███████▉ | 15/171 [00:16<00:51, 3.01it/s] [2024-03-18 19:06:46] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.1.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32 9%|███████▉ | 15/171 [00:17<00:51, 3.01it/s] [2024-03-18 19:06:46] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.1.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32 9%|███████▉ | 15/171 [00:17<00:51, 3.01it/s] [2024-03-18 19:06:46] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.10.input_layernorm.weight", shape: (2048,), dtype: float32 9%|███████▉ | 15/171 [00:17<00:51, 3.01it/s] [2024-03-18 19:06:46] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.10.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32 9%|███████▉ | 15/171 [00:17<00:51, 3.01it/s] [2024-03-18 19:06:46] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.10.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32 9%|███████▉ | 15/171 [00:17<00:51, 3.01it/s] 11%|█████████▌ | 18/171 [00:17<00:35, 4.36it/s] [2024-03-18 19:06:47] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.10.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32 11%|█████████▌ | 18/171 [00:17<00:35, 4.36it/s] [2024-03-18 19:06:47] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.10.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32 11%|█████████▌ | 18/171 [00:17<00:35, 4.36it/s] 11%|██████████ | 19/171 [00:17<00:38, 3.92it/s] [2024-03-18 19:06:47] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.10.post_attention_layernorm.weight", shape: (2048,), dtype: 
float32 11%|██████████ | 19/171 [00:17<00:38, 3.92it/s] [2024-03-18 19:06:47] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.10.self_attn.c_attn.bias", shape: (6144,), dtype: float32 11%|██████████ | 19/171 [00:17<00:38, 3.92it/s] [2024-03-18 19:06:47] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.10.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32 11%|██████████ | 19/171 [00:17<00:38, 3.92it/s] [2024-03-18 19:06:47] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.10.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32 11%|██████████ | 19/171 [00:17<00:38, 3.92it/s] 13%|███████████▋ | 22/171 [00:17<00:27, 5.36it/s] [2024-03-18 19:06:47] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.10.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32 13%|███████████▋ | 22/171 [00:17<00:27, 5.36it/s] [2024-03-18 19:06:47] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.10.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32 13%|███████████▋ | 22/171 [00:17<00:27, 5.36it/s] [2024-03-18 19:06:47] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.11.input_layernorm.weight", shape: (2048,), dtype: float32 13%|███████████▋ | 22/171 [00:17<00:27, 5.36it/s] [2024-03-18 19:06:47] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.11.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32 13%|███████████▋ | 22/171 [00:18<00:27, 5.36it/s] [2024-03-18 19:06:47] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.11.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32 13%|███████████▋ | 22/171 [00:18<00:27, 5.36it/s] 15%|█████████████▎ | 25/171 [00:18<00:21, 6.89it/s] [2024-03-18 19:06:48] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.11.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32 15%|█████████████▎ | 25/171 [00:18<00:21, 6.89it/s] [2024-03-18 19:06:48] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.11.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32 15%|█████████████▎ | 25/171 [00:18<00:21, 6.89it/s] 15%|█████████████▊ | 26/171 [00:18<00:26, 5.52it/s] [2024-03-18 19:06:48] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.11.post_attention_layernorm.weight", shape: (2048,), dtype: float32 15%|█████████████▊ | 26/171 [00:18<00:26, 5.52it/s] [2024-03-18 19:06:48] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.11.self_attn.c_attn.bias", shape: (6144,), dtype: float32 15%|█████████████▊ | 26/171 [00:18<00:26, 5.52it/s] [2024-03-18 19:06:48] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.11.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32 15%|█████████████▊ | 26/171 [00:18<00:26, 5.52it/s] [2024-03-18 19:06:48] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.11.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32 15%|█████████████▊ | 26/171 [00:18<00:26, 5.52it/s] 17%|███████████████▍ | 29/171 [00:18<00:20, 6.77it/s] [2024-03-18 19:06:48] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.11.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32 17%|███████████████▍ | 29/171 [00:18<00:20, 6.77it/s] [2024-03-18 19:06:48] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.11.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32 17%|███████████████▍ | 29/171 [00:18<00:20, 
6.77it/s] [2024-03-18 19:06:48] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.12.input_layernorm.weight", shape: (2048,), dtype: float32 17%|███████████████▍ | 29/171 [00:18<00:20, 6.77it/s] [2024-03-18 19:06:48] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.12.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32 17%|███████████████▍ | 29/171 [00:18<00:20, 6.77it/s] [2024-03-18 19:06:48] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.12.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32 17%|███████████████▍ | 29/171 [00:18<00:20, 6.77it/s] 19%|█████████████████ | 32/171 [00:18<00:16, 8.20it/s] [2024-03-18 19:06:48] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.12.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32 19%|█████████████████ | 32/171 [00:19<00:16, 8.20it/s] [2024-03-18 19:06:49] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.12.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32 19%|█████████████████ | 32/171 [00:19<00:16, 8.20it/s] 19%|█████████████████▌ | 33/171 [00:19<00:21, 6.28it/s] [2024-03-18 19:06:49] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.12.post_attention_layernorm.weight", shape: (2048,), dtype: float32 19%|█████████████████▌ | 33/171 [00:19<00:21, 6.28it/s] [2024-03-18 19:06:49] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.12.self_attn.c_attn.bias", shape: (6144,), dtype: float32 19%|█████████████████▌ | 33/171 [00:19<00:21, 6.28it/s] [2024-03-18 19:06:49] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.12.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32 19%|█████████████████▌ | 33/171 [00:19<00:21, 6.28it/s] [2024-03-18 19:06:49] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.12.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32 19%|█████████████████▌ | 33/171 [00:19<00:21, 6.28it/s] 21%|███████████████████▏ | 36/171 [00:19<00:17, 7.69it/s] [2024-03-18 19:06:49] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.12.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32 21%|███████████████████▏ | 36/171 [00:19<00:17, 7.69it/s] [2024-03-18 19:06:49] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.12.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32 21%|███████████████████▏ | 36/171 [00:19<00:17, 7.69it/s] [2024-03-18 19:06:49] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.13.input_layernorm.weight", shape: (2048,), dtype: float32 21%|███████████████████▏ | 36/171 [00:19<00:17, 7.69it/s] [2024-03-18 19:06:49] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.13.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32 21%|███████████████████▏ | 36/171 [00:19<00:17, 7.69it/s] [2024-03-18 19:06:49] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.13.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32 21%|███████████████████▏ | 36/171 [00:19<00:17, 7.69it/s] 23%|████████████████████▊ | 39/171 [00:19<00:14, 9.03it/s] [2024-03-18 19:06:49] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.13.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32 23%|████████████████████▊ | 39/171 [00:20<00:14, 9.03it/s] [2024-03-18 19:06:49] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.13.mlp.gate_up_proj.q_scale", shape: (11008, 64), 
dtype: float32 23%|████████████████████▊ | 39/171 [00:20<00:14, 9.03it/s] [2024-03-18 19:06:49] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.13.post_attention_layernorm.weight", shape: (2048,), dtype: float32 23%|████████████████████▊ | 39/171 [00:20<00:14, 9.03it/s] 24%|█████████████████████▊ | 41/171 [00:20<00:17, 7.59it/s] [2024-03-18 19:06:49] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.13.self_attn.c_attn.bias", shape: (6144,), dtype: float32 24%|█████████████████████▊ | 41/171 [00:20<00:17, 7.59it/s] [2024-03-18 19:06:50] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.13.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32 24%|█████████████████████▊ | 41/171 [00:20<00:17, 7.59it/s] [2024-03-18 19:06:50] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.13.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32 24%|█████████████████████▊ | 41/171 [00:20<00:17, 7.59it/s] 25%|██████████████████████▉ | 43/171 [00:20<00:16, 7.63it/s] [2024-03-18 19:06:50] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.13.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32 25%|██████████████████████▉ | 43/171 [00:20<00:16, 7.63it/s] [2024-03-18 19:06:50] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.13.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32 25%|██████████████████████▉ | 43/171 [00:20<00:16, 7.63it/s] [2024-03-18 19:06:50] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.14.input_layernorm.weight", shape: (2048,), dtype: float32 25%|██████████████████████▉ | 43/171 [00:20<00:16, 7.63it/s] [2024-03-18 19:06:50] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.14.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32 25%|██████████████████████▉ | 43/171 [00:20<00:16, 7.63it/s] [2024-03-18 19:06:50] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.14.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32 25%|██████████████████████▉ | 43/171 [00:20<00:16, 7.63it/s] 27%|████████████████████████▍ | 46/171 [00:20<00:14, 8.89it/s] [2024-03-18 19:06:50] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.14.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32 27%|████████████████████████▍ | 46/171 [00:20<00:14, 8.89it/s] [2024-03-18 19:06:50] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.14.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32 27%|████████████████████████▍ | 46/171 [00:21<00:14, 8.89it/s] 27%|█████████████████████████ | 47/171 [00:21<00:19, 6.51it/s] [2024-03-18 19:06:50] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.14.post_attention_layernorm.weight", shape: (2048,), dtype: float32 27%|█████████████████████████ | 47/171 [00:21<00:19, 6.51it/s] [2024-03-18 19:06:50] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.14.self_attn.c_attn.bias", shape: (6144,), dtype: float32 27%|█████████████████████████ | 47/171 [00:21<00:19, 6.51it/s] [2024-03-18 19:06:50] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.14.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32 27%|█████████████████████████ | 47/171 [00:21<00:19, 6.51it/s] [2024-03-18 19:06:51] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.14.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32 27%|█████████████████████████ | 
47/171 [00:21<00:19, 6.51it/s] 29%|██████████████████████████▌ | 50/171 [00:21<00:15, 7.89it/s] [2024-03-18 19:06:51] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.14.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32 29%|██████████████████████████▌ | 50/171 [00:21<00:15, 7.89it/s] [2024-03-18 19:06:51] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.14.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32 29%|██████████████████████████▌ | 50/171 [00:21<00:15, 7.89it/s] [2024-03-18 19:06:51] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.15.input_layernorm.weight", shape: (2048,), dtype: float32 29%|██████████████████████████▌ | 50/171 [00:21<00:15, 7.89it/s] [2024-03-18 19:06:51] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.15.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32 29%|██████████████████████████▌ | 50/171 [00:21<00:15, 7.89it/s] [2024-03-18 19:06:51] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.15.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32 29%|██████████████████████████▌ | 50/171 [00:21<00:15, 7.89it/s] 31%|████████████████████████████▏ | 53/171 [00:21<00:12, 9.25it/s] [2024-03-18 19:06:51] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.15.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32 31%|████████████████████████████▏ | 53/171 [00:21<00:12, 9.25it/s] [2024-03-18 19:06:51] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.15.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32 31%|████████████████████████████▏ | 53/171 [00:21<00:12, 9.25it/s] [2024-03-18 19:06:51] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.15.post_attention_layernorm.weight", shape: (2048,), dtype: float32 31%|████████████████████████████▏ | 53/171 [00:21<00:12, 9.25it/s] 32%|█████████████████████████████▎ | 55/171 [00:21<00:14, 7.77it/s] [2024-03-18 19:06:51] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.15.self_attn.c_attn.bias", shape: (6144,), dtype: float32 32%|█████████████████████████████▎ | 55/171 [00:21<00:14, 7.77it/s] [2024-03-18 19:06:51] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.15.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32 32%|█████████████████████████████▎ | 55/171 [00:22<00:14, 7.77it/s] [2024-03-18 19:06:51] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.15.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32 32%|█████████████████████████████▎ | 55/171 [00:22<00:14, 7.77it/s] 33%|██████████████████████████████▎ | 57/171 [00:22<00:14, 7.82it/s] [2024-03-18 19:06:51] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.15.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32 33%|██████████████████████████████▎ | 57/171 [00:22<00:14, 7.82it/s] [2024-03-18 19:06:51] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.15.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32 33%|██████████████████████████████▎ | 57/171 [00:22<00:14, 7.82it/s] [2024-03-18 19:06:51] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.16.input_layernorm.weight", shape: (2048,), dtype: float32 33%|██████████████████████████████▎ | 57/171 [00:22<00:14, 7.82it/s] [2024-03-18 19:06:52] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.16.mlp.down_proj.q_weight", shape: (2048, 1376), 
dtype: uint32 33%|██████████████████████████████▎ | 57/171 [00:22<00:14, 7.82it/s] [2024-03-18 19:06:52] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.16.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32 33%|██████████████████████████████▎ | 57/171 [00:22<00:14, 7.82it/s] 35%|███████████████████████████████▉ | 60/171 [00:22<00:12, 9.24it/s] [2024-03-18 19:06:52] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.16.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32 35%|███████████████████████████████▉ | 60/171 [00:22<00:12, 9.24it/s] [2024-03-18 19:06:52] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.16.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32 35%|███████████████████████████████▉ | 60/171 [00:22<00:12, 9.24it/s] [2024-03-18 19:06:52] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.16.post_attention_layernorm.weight", shape: (2048,), dtype: float32 35%|███████████████████████████████▉ | 60/171 [00:22<00:12, 9.24it/s] 36%|████████████████████████████████▉ | 62/171 [00:22<00:14, 7.65it/s] [2024-03-18 19:06:52] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.16.self_attn.c_attn.bias", shape: (6144,), dtype: float32 36%|████████████████████████████████▉ | 62/171 [00:22<00:14, 7.65it/s] [2024-03-18 19:06:52] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.16.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32 36%|████████████████████████████████▉ | 62/171 [00:22<00:14, 7.65it/s] [2024-03-18 19:06:52] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.16.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32 36%|████████████████████████████████▉ | 62/171 [00:23<00:14, 7.65it/s] 37%|██████████████████████████████████ | 64/171 [00:23<00:13, 7.73it/s] [2024-03-18 19:06:52] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.16.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32 37%|██████████████████████████████████ | 64/171 [00:23<00:13, 7.73it/s] [2024-03-18 19:06:52] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.16.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32 37%|██████████████████████████████████ | 64/171 [00:23<00:13, 7.73it/s] [2024-03-18 19:06:52] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.17.input_layernorm.weight", shape: (2048,), dtype: float32 37%|██████████████████████████████████ | 64/171 [00:23<00:13, 7.73it/s] [2024-03-18 19:06:52] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.17.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32 37%|██████████████████████████████████ | 64/171 [00:23<00:13, 7.73it/s] [2024-03-18 19:06:52] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.17.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32 37%|██████████████████████████████████ | 64/171 [00:23<00:13, 7.73it/s] 39%|███████████████████████████████████▋ | 67/171 [00:23<00:11, 9.17it/s] [2024-03-18 19:06:53] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.17.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32 39%|███████████████████████████████████▋ | 67/171 [00:23<00:11, 9.17it/s] [2024-03-18 19:06:53] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.17.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32 39%|███████████████████████████████████▋ | 67/171 [00:23<00:11, 9.17it/s] 
40%|████████████████████████████████████▏ | 68/171 [00:23<00:15, 6.63it/s] [2024-03-18 19:06:53] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.17.post_attention_layernorm.weight", shape: (2048,), dtype: float32 40%|████████████████████████████████████▏ | 68/171 [00:23<00:15, 6.63it/s] [2024-03-18 19:06:53] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.17.self_attn.c_attn.bias", shape: (6144,), dtype: float32 40%|████████████████████████████████████▏ | 68/171 [00:23<00:15, 6.63it/s] [2024-03-18 19:06:53] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.17.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32 40%|████████████████████████████████████▏ | 68/171 [00:23<00:15, 6.63it/s] [2024-03-18 19:06:53] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.17.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32 40%|████████████████████████████████████▏ | 68/171 [00:23<00:15, 6.63it/s] 42%|█████████████████████████████████████▊ | 71/171 [00:23<00:12, 7.99it/s] [2024-03-18 19:06:53] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.17.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32 42%|█████████████████████████████████████▊ | 71/171 [00:23<00:12, 7.99it/s] [2024-03-18 19:06:53] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.17.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32 42%|█████████████████████████████████████▊ | 71/171 [00:23<00:12, 7.99it/s] [2024-03-18 19:06:53] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.18.input_layernorm.weight", shape: (2048,), dtype: float32 42%|█████████████████████████████████████▊ | 71/171 [00:23<00:12, 7.99it/s] [2024-03-18 19:06:53] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.18.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32 42%|█████████████████████████████████████▊ | 71/171 [00:24<00:12, 7.99it/s] [2024-03-18 19:06:53] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.18.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32 42%|█████████████████████████████████████▊ | 71/171 [00:24<00:12, 7.99it/s] 43%|███████████████████████████████████████▍ | 74/171 [00:24<00:10, 9.33it/s] [2024-03-18 19:06:54] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.18.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32 43%|███████████████████████████████████████▍ | 74/171 [00:24<00:10, 9.33it/s] [2024-03-18 19:06:54] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.18.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32 43%|███████████████████████████████████████▍ | 74/171 [00:24<00:10, 9.33it/s] [2024-03-18 19:06:54] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.18.post_attention_layernorm.weight", shape: (2048,), dtype: float32 43%|███████████████████████████████████████▍ | 74/171 [00:24<00:10, 9.33it/s] 44%|████████████████████████████████████████▍ | 76/171 [00:24<00:12, 7.81it/s] [2024-03-18 19:06:54] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.18.self_attn.c_attn.bias", shape: (6144,), dtype: float32 44%|████████████████████████████████████████▍ | 76/171 [00:24<00:12, 7.81it/s] [2024-03-18 19:06:54] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.18.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32 44%|████████████████████████████████████████▍ | 76/171 [00:24<00:12, 
7.81it/s] [2024-03-18 19:06:54] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.18.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32 44%|████████████████████████████████████████▍ | 76/171 [00:24<00:12, 7.81it/s] 46%|█████████████████████████████████████████▌ | 78/171 [00:24<00:11, 7.83it/s] [2024-03-18 19:06:54] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.18.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32 46%|█████████████████████████████████████████▌ | 78/171 [00:24<00:11, 7.83it/s] [2024-03-18 19:06:54] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.18.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32 46%|█████████████████████████████████████████▌ | 78/171 [00:24<00:11, 7.83it/s] [2024-03-18 19:06:54] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.19.input_layernorm.weight", shape: (2048,), dtype: float32 46%|█████████████████████████████████████████▌ | 78/171 [00:24<00:11, 7.83it/s] [2024-03-18 19:06:54] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.19.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32 46%|█████████████████████████████████████████▌ | 78/171 [00:24<00:11, 7.83it/s] [2024-03-18 19:06:54] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.19.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32 46%|█████████████████████████████████████████▌ | 78/171 [00:25<00:11, 7.83it/s] 47%|███████████████████████████████████████████ | 81/171 [00:25<00:09, 9.22it/s] [2024-03-18 19:06:54] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.19.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32 47%|███████████████████████████████████████████ | 81/171 [00:25<00:09, 9.22it/s] [2024-03-18 19:06:55] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.19.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32 47%|███████████████████████████████████████████ | 81/171 [00:25<00:09, 9.22it/s] 48%|███████████████████████████████████████████▋ | 82/171 [00:25<00:13, 6.72it/s] [2024-03-18 19:06:55] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.19.post_attention_layernorm.weight", shape: (2048,), dtype: float32 48%|███████████████████████████████████████████▋ | 82/171 [00:25<00:13, 6.72it/s] [2024-03-18 19:06:55] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.19.self_attn.c_attn.bias", shape: (6144,), dtype: float32 48%|███████████████████████████████████████████▋ | 82/171 [00:25<00:13, 6.72it/s] [2024-03-18 19:06:55] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.19.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32 48%|███████████████████████████████████████████▋ | 82/171 [00:25<00:13, 6.72it/s] [2024-03-18 19:06:55] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.19.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32 48%|███████████████████████████████████████████▋ | 82/171 [00:25<00:13, 6.72it/s] 50%|█████████████████████████████████████████████▏ | 85/171 [00:25<00:10, 8.09it/s] [2024-03-18 19:06:55] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.19.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32 50%|█████████████████████████████████████████████▏ | 85/171 [00:25<00:10, 8.09it/s] [2024-03-18 19:06:55] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.19.self_attn.o_proj.q_scale", shape: (2048, 64), 
dtype: float32 50%|█████████████████████████████████████████████▏ | 85/171 [00:25<00:10, 8.09it/s] [2024-03-18 19:06:55] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.2.input_layernorm.weight", shape: (2048,), dtype: float32 50%|█████████████████████████████████████████████▏ | 85/171 [00:25<00:10, 8.09it/s] [2024-03-18 19:06:55] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.2.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32 50%|█████████████████████████████████████████████▏ | 85/171 [00:25<00:10, 8.09it/s] [2024-03-18 19:06:55] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.2.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32 50%|█████████████████████████████████████████████▏ | 85/171 [00:25<00:10, 8.09it/s] 51%|██████████████████████████████████████████████▊ | 88/171 [00:25<00:08, 9.44it/s] [2024-03-18 19:06:55] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.2.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32 51%|██████████████████████████████████████████████▊ | 88/171 [00:26<00:08, 9.44it/s] [2024-03-18 19:06:55] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.2.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32 51%|██████████████████████████████████████████████▊ | 88/171 [00:26<00:08, 9.44it/s] [2024-03-18 19:06:55] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.2.post_attention_layernorm.weight", shape: (2048,), dtype: float32 51%|██████████████████████████████████████████████▊ | 88/171 [00:26<00:08, 9.44it/s] 53%|███████████████████████████████████████████████▉ | 90/171 [00:26<00:10, 7.85it/s] [2024-03-18 19:06:55] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.2.self_attn.c_attn.bias", shape: (6144,), dtype: float32 53%|███████████████████████████████████████████████▉ | 90/171 [00:26<00:10, 7.85it/s] [2024-03-18 19:06:56] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.2.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32 53%|███████████████████████████████████████████████▉ | 90/171 [00:26<00:10, 7.85it/s] [2024-03-18 19:06:56] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.2.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32 53%|███████████████████████████████████████████████▉ | 90/171 [00:26<00:10, 7.85it/s] 54%|████████████████████████████████████████████████▉ | 92/171 [00:26<00:10, 7.90it/s] [2024-03-18 19:06:56] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.2.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32 54%|████████████████████████████████████████████████▉ | 92/171 [00:26<00:10, 7.90it/s] [2024-03-18 19:06:56] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.2.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32 54%|████████████████████████████████████████████████▉ | 92/171 [00:26<00:10, 7.90it/s] [2024-03-18 19:06:56] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.20.input_layernorm.weight", shape: (2048,), dtype: float32 54%|████████████████████████████████████████████████▉ | 92/171 [00:26<00:10, 7.90it/s] [2024-03-18 19:06:56] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.20.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32 54%|████████████████████████████████████████████████▉ | 92/171 [00:26<00:10, 7.90it/s] [2024-03-18 19:06:56] INFO huggingface_loader.py:164: [Quantized] 
Parameter: "model.layers.20.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32 54%|████████████████████████████████████████████████▉ | 92/171 [00:26<00:10, 7.90it/s] 56%|██████████████████████████████████████████████████▌ | 95/171 [00:26<00:08, 9.30it/s] [2024-03-18 19:06:56] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.20.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32 56%|██████████████████████████████████████████████████▌ | 95/171 [00:26<00:08, 9.30it/s] [2024-03-18 19:06:56] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.20.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32 56%|██████████████████████████████████████████████████▌ | 95/171 [00:27<00:08, 9.30it/s] [2024-03-18 19:06:56] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.20.post_attention_layernorm.weight", shape: (2048,), dtype: float32 56%|██████████████████████████████████████████████████▌ | 95/171 [00:27<00:08, 9.30it/s] 57%|███████████████████████████████████████████████████▌ | 97/171 [00:27<00:09, 7.77it/s] [2024-03-18 19:06:56] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.20.self_attn.c_attn.bias", shape: (6144,), dtype: float32 57%|███████████████████████████████████████████████████▌ | 97/171 [00:27<00:09, 7.77it/s] [2024-03-18 19:06:56] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.20.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32 57%|███████████████████████████████████████████████████▌ | 97/171 [00:27<00:09, 7.77it/s] [2024-03-18 19:06:57] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.20.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32 57%|███████████████████████████████████████████████████▌ | 97/171 [00:27<00:09, 7.77it/s] 58%|████████████████████████████████████████████████████▋ | 99/171 [00:27<00:09, 7.81it/s] [2024-03-18 19:06:57] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.20.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32 58%|████████████████████████████████████████████████████▋ | 99/171 [00:27<00:09, 7.81it/s] [2024-03-18 19:06:57] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.20.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32 58%|████████████████████████████████████████████████████▋ | 99/171 [00:27<00:09, 7.81it/s] [2024-03-18 19:06:57] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.21.input_layernorm.weight", shape: (2048,), dtype: float32 58%|████████████████████████████████████████████████████▋ | 99/171 [00:27<00:09, 7.81it/s] [2024-03-18 19:06:57] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.21.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32 58%|████████████████████████████████████████████████████▋ | 99/171 [00:27<00:09, 7.81it/s] [2024-03-18 19:06:57] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.21.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32 58%|████████████████████████████████████████████████████▋ | 99/171 [00:27<00:09, 7.81it/s] 60%|█████████████████████████████████████████████████████▋ | 102/171 [00:27<00:07, 9.22it/s] [2024-03-18 19:06:57] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.21.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32 60%|█████████████████████████████████████████████████████▋ | 102/171 [00:27<00:07, 9.22it/s] [2024-03-18 19:06:57] INFO huggingface_loader.py:164: 
[Quantized] Parameter: "model.layers.21.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32 60%|█████████████████████████████████████████████████████▋ | 102/171 [00:27<00:07, 9.22it/s] 60%|██████████████████████████████████████████████████████▏ | 103/171 [00:27<00:10, 6.74it/s] [2024-03-18 19:06:57] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.21.post_attention_layernorm.weight", shape: (2048,), dtype: float32 60%|██████████████████████████████████████████████████████▏ | 103/171 [00:27<00:10, 6.74it/s] [2024-03-18 19:06:57] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.21.self_attn.c_attn.bias", shape: (6144,), dtype: float32 60%|██████████████████████████████████████████████████████▏ | 103/171 [00:27<00:10, 6.74it/s] [2024-03-18 19:06:57] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.21.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32 60%|██████████████████████████████████████████████████████▏ | 103/171 [00:28<00:10, 6.74it/s] [2024-03-18 19:06:57] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.21.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32 60%|██████████████████████████████████████████████████████▏ | 103/171 [00:28<00:10, 6.74it/s] 62%|███████████████████████████████████████████████████████▊ | 106/171 [00:28<00:08, 8.11it/s] [2024-03-18 19:06:57] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.21.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32 62%|███████████████████████████████████████████████████████▊ | 106/171 [00:28<00:08, 8.11it/s] [2024-03-18 19:06:57] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.21.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32 62%|███████████████████████████████████████████████████████▊ | 106/171 [00:28<00:08, 8.11it/s] [2024-03-18 19:06:57] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.22.input_layernorm.weight", shape: (2048,), dtype: float32 62%|███████████████████████████████████████████████████████▊ | 106/171 [00:28<00:08, 8.11it/s] [2024-03-18 19:06:58] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.22.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32 62%|███████████████████████████████████████████████████████▊ | 106/171 [00:28<00:08, 8.11it/s] [2024-03-18 19:06:58] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.22.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32 62%|███████████████████████████████████████████████████████▊ | 106/171 [00:28<00:08, 8.11it/s] 64%|█████████████████████████████████████████████████████████▎ | 109/171 [00:28<00:06, 9.44it/s] [2024-03-18 19:06:58] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.22.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32 64%|█████████████████████████████████████████████████████████▎ | 109/171 [00:28<00:06, 9.44it/s] [2024-03-18 19:06:58] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.22.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32 64%|█████████████████████████████████████████████████████████▎ | 109/171 [00:28<00:06, 9.44it/s] [2024-03-18 19:06:58] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.22.post_attention_layernorm.weight", shape: (2048,), dtype: float32 64%|█████████████████████████████████████████████████████████▎ | 109/171 [00:28<00:06, 9.44it/s] 
65%|██████████████████████████████████████████████████████████▍ | 111/171 [00:28<00:07, 7.83it/s] [2024-03-18 19:06:58] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.22.self_attn.c_attn.bias", shape: (6144,), dtype: float32 65%|██████████████████████████████████████████████████████████▍ | 111/171 [00:28<00:07, 7.83it/s] [2024-03-18 19:06:58] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.22.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32 65%|██████████████████████████████████████████████████████████▍ | 111/171 [00:28<00:07, 7.83it/s] [2024-03-18 19:06:58] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.22.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32 65%|██████████████████████████████████████████████████████████▍ | 111/171 [00:29<00:07, 7.83it/s] 66%|███████████████████████████████████████████████████████████▍ | 113/171 [00:29<00:07, 7.88it/s] [2024-03-18 19:06:58] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.22.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32 66%|███████████████████████████████████████████████████████████▍ | 113/171 [00:29<00:07, 7.88it/s] [2024-03-18 19:06:58] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.22.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32 66%|███████████████████████████████████████████████████████████▍ | 113/171 [00:29<00:07, 7.88it/s] [2024-03-18 19:06:58] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.23.input_layernorm.weight", shape: (2048,), dtype: float32 66%|███████████████████████████████████████████████████████████▍ | 113/171 [00:29<00:07, 7.88it/s] [2024-03-18 19:06:58] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.23.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32 66%|███████████████████████████████████████████████████████████▍ | 113/171 [00:29<00:07, 7.88it/s] [2024-03-18 19:06:58] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.23.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32 66%|███████████████████████████████████████████████████████████▍ | 113/171 [00:29<00:07, 7.88it/s] 68%|█████████████████████████████████████████████████████████████ | 116/171 [00:29<00:05, 9.28it/s] [2024-03-18 19:06:59] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.23.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32 68%|█████████████████████████████████████████████████████████████ | 116/171 [00:29<00:05, 9.28it/s] [2024-03-18 19:06:59] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.23.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32 68%|█████████████████████████████████████████████████████████████ | 116/171 [00:29<00:05, 9.28it/s] [2024-03-18 19:06:59] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.23.post_attention_layernorm.weight", shape: (2048,), dtype: float32 68%|█████████████████████████████████████████████████████████████ | 116/171 [00:29<00:05, 9.28it/s] 69%|██████████████████████████████████████████████████████████████ | 118/171 [00:29<00:06, 7.77it/s] [2024-03-18 19:06:59] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.23.self_attn.c_attn.bias", shape: (6144,), dtype: float32 69%|██████████████████████████████████████████████████████████████ | 118/171 [00:29<00:06, 7.77it/s] [2024-03-18 19:06:59] INFO huggingface_loader.py:164: [Quantized] Parameter: 
"model.layers.23.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32 69%|██████████████████████████████████████████████████████████████ | 118/171 [00:29<00:06, 7.77it/s] [2024-03-18 19:06:59] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.23.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32 69%|██████████████████████████████████████████████████████████████ | 118/171 [00:29<00:06, 7.77it/s] 70%|███████████████████████████████████████████████████████████████▏ | 120/171 [00:29<00:06, 7.83it/s] [2024-03-18 19:06:59] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.23.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32 70%|███████████████████████████████████████████████████████████████▏ | 120/171 [00:29<00:06, 7.83it/s] [2024-03-18 19:06:59] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.23.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32 70%|███████████████████████████████████████████████████████████████▏ | 120/171 [00:29<00:06, 7.83it/s] [2024-03-18 19:06:59] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.3.input_layernorm.weight", shape: (2048,), dtype: float32 70%|███████████████████████████████████████████████████████████████▏ | 120/171 [00:29<00:06, 7.83it/s] [2024-03-18 19:06:59] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.3.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32 70%|███████████████████████████████████████████████████████████████▏ | 120/171 [00:30<00:06, 7.83it/s] [2024-03-18 19:06:59] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.3.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32 70%|███████████████████████████████████████████████████████████████▏ | 120/171 [00:30<00:06, 7.83it/s] 72%|████████████████████████████████████████████████████████████████▋ | 123/171 [00:30<00:05, 9.26it/s] [2024-03-18 19:07:00] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.3.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32 72%|████████████████████████████████████████████████████████████████▋ | 123/171 [00:30<00:05, 9.26it/s] [2024-03-18 19:07:00] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.3.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32 72%|████████████████████████████████████████████████████████████████▋ | 123/171 [00:30<00:05, 9.26it/s] 73%|█████████████████████████████████████████████████████████████████▎ | 124/171 [00:30<00:07, 6.64it/s] [2024-03-18 19:07:00] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.3.post_attention_layernorm.weight", shape: (2048,), dtype: float32 73%|█████████████████████████████████████████████████████████████████▎ | 124/171 [00:30<00:07, 6.64it/s] [2024-03-18 19:07:00] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.3.self_attn.c_attn.bias", shape: (6144,), dtype: float32 73%|█████████████████████████████████████████████████████████████████▎ | 124/171 [00:30<00:07, 6.64it/s] [2024-03-18 19:07:00] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.3.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32 73%|█████████████████████████████████████████████████████████████████▎ | 124/171 [00:30<00:07, 6.64it/s] [2024-03-18 19:07:00] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.3.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32 
[2024-03-18 19:07:00] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.3.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32
[2024-03-18 19:07:00] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.3.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-18 19:07:00] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.4.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-18 19:07:00] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.4.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32
[2024-03-18 19:07:00] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.4.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32
[2024-03-18 19:07:00] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.4.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32
[2024-03-18 19:07:01] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.4.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32
[2024-03-18 19:07:01] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.4.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-18 19:07:01] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.4.self_attn.c_attn.bias", shape: (6144,), dtype: float32
[2024-03-18 19:07:01] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.4.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32
[2024-03-18 19:07:01] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.4.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32
[2024-03-18 19:07:01] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.4.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32
[2024-03-18 19:07:01] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.4.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-18 19:07:01] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.5.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-18 19:07:01] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.5.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32
[2024-03-18 19:07:01] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.5.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32
[2024-03-18 19:07:01] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.5.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32
[2024-03-18 19:07:01] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.5.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32
[2024-03-18 19:07:01] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.5.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-18 19:07:01] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.5.self_attn.c_attn.bias", shape: (6144,), dtype: float32
[2024-03-18 19:07:02] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.5.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32
[2024-03-18 19:07:02] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.5.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32
[2024-03-18 19:07:02] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.5.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32
[2024-03-18 19:07:02] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.5.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
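The packed shapes logged above follow from 8-bit group quantization with 32-element groups: every 4 int8 values share one uint32 storage word in q_weight, and every 32 values share one float32 entry in q_scale, so the second dimension shrinks by 4x and 32x respectively. A minimal shape-arithmetic sketch (not MLC code; the helper name and the (out_features, in_features) ordering are assumptions) that reproduces the logged shapes:

# Sketch: derive the packed q_weight / q_scale shapes seen in the log,
# assuming int8 group quantization with 32-element groups and 4 int8
# values packed per uint32 storage word.
def packed_shapes(rows, cols, group_size=32, elems_per_uint32=4):
    """Shapes of q_weight (uint32) and q_scale (float32) for a (rows, cols) float weight."""
    return (rows, cols // elems_per_uint32), (rows, cols // group_size)

print(packed_shapes(2048, 2048))   # o_proj:       ((2048, 512), (2048, 64))
print(packed_shapes(2048, 5504))   # down_proj:    ((2048, 1376), (2048, 172))
print(packed_shapes(11008, 2048))  # gate_up_proj: ((11008, 512), (11008, 64))
print(packed_shapes(6144, 2048))   # c_attn:       ((6144, 512), (6144, 64))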
[2024-03-18 19:07:02] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.6.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-18 19:07:02] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.6.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32
[2024-03-18 19:07:02] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.6.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32
[2024-03-18 19:07:02] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.6.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32
[2024-03-18 19:07:02] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.6.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32
[2024-03-18 19:07:02] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.6.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-18 19:07:02] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.6.self_attn.c_attn.bias", shape: (6144,), dtype: float32
[2024-03-18 19:07:03] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.6.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32
[2024-03-18 19:07:03] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.6.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32
[2024-03-18 19:07:03] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.6.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32
[2024-03-18 19:07:03] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.6.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-18 19:07:03] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.7.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-18 19:07:03] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.7.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32
[2024-03-18 19:07:03] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.7.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32
[2024-03-18 19:07:03] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.7.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32
[2024-03-18 19:07:03] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.7.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32
[2024-03-18 19:07:03] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.7.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-18 19:07:03] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.7.self_attn.c_attn.bias", shape: (6144,), dtype: float32
[2024-03-18 19:07:03] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.7.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32
[2024-03-18 19:07:04] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.7.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32
[2024-03-18 19:07:04] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.7.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32
[2024-03-18 19:07:04] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.7.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-18 19:07:04] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.8.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-18 19:07:04] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.8.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32
[2024-03-18 19:07:04] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.8.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32
[2024-03-18 19:07:04] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.8.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32
[2024-03-18 19:07:04] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.8.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32
[2024-03-18 19:07:04] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.8.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-18 19:07:04] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.8.self_attn.c_attn.bias", shape: (6144,), dtype: float32
[2024-03-18 19:07:04] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.8.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32
[2024-03-18 19:07:04] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.8.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32
[2024-03-18 19:07:04] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.8.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32
[2024-03-18 19:07:05] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.8.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
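Numerically, this kind of group quantization amounts to: take each group of 32 consecutive weights, store one float32 scale (for example max |w| / 127), round the weights to signed 8-bit integers, and pack four of them into each uint32 word. The NumPy sketch below illustrates that scheme generically; it is not the kernel MLC actually compiles, and details such as signed-vs-offset encoding are assumptions:

import numpy as np

# Generic sketch of symmetric 8-bit group quantization (group_size=32),
# packing four signed 8-bit values per uint32 storage word.
def group_quantize(row, group_size=32, max_int=127):
    groups = row.reshape(-1, group_size)
    scale = np.abs(groups).max(axis=1, keepdims=True) / max_int      # one float32 scale per group
    q = np.clip(np.round(groups / scale), -max_int, max_int).astype(np.int8)
    packed = q.reshape(-1, 4).view(np.uint32).reshape(-1)            # 4 x int8 -> 1 x uint32
    return packed, scale.astype(np.float32).ravel()

def group_dequantize(packed, scale, group_size=32):
    q = packed.reshape(-1, 1).view(np.int8).reshape(-1, group_size)  # unpack back to int8
    return (q.astype(np.float32) * scale[:, None]).reshape(-1)

row = np.random.randn(2048).astype(np.float32)
packed, scale = group_quantize(row)
print(packed.shape, scale.shape)       # (512,) (64,): one row of an o_proj weight in the log
err = np.abs(group_dequantize(packed, scale) - row).max()
print(f"max abs reconstruction error: {err:.4f}")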
[2024-03-18 19:07:05] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.9.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-18 19:07:05] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.9.mlp.down_proj.q_weight", shape: (2048, 1376), dtype: uint32
[2024-03-18 19:07:05] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.9.mlp.down_proj.q_scale", shape: (2048, 172), dtype: float32
[2024-03-18 19:07:05] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.9.mlp.gate_up_proj.q_weight", shape: (11008, 512), dtype: uint32
[2024-03-18 19:07:05] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.9.mlp.gate_up_proj.q_scale", shape: (11008, 64), dtype: float32
[2024-03-18 19:07:05] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.9.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-18 19:07:05] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.9.self_attn.c_attn.bias", shape: (6144,), dtype: float32
[2024-03-18 19:07:05] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.9.self_attn.c_attn.q_weight", shape: (6144, 512), dtype: uint32
[2024-03-18 19:07:05] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.9.self_attn.c_attn.q_scale", shape: (6144, 64), dtype: float32
[2024-03-18 19:07:05] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.9.self_attn.o_proj.q_weight", shape: (2048, 512), dtype: uint32
[2024-03-18 19:07:05] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.9.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-18 19:07:05] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.norm.weight", shape: (2048,), dtype: float32
100%|██████████| 171/171 [00:36<00:00, 4.73it/s]
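Two recurring shapes in the listing reflect fused projections in the qwen2 model definition: c_attn packs the Q, K and V projections together (bias length 6144 = 3 x 2048 for this 2048-wide model), and gate_up_proj packs the MLP gate and up projections (11008 rows = 2 x 5504). A hypothetical splitting example, with sizes read off the log rather than from the model source:

import numpy as np

# Illustrative only: hidden size 2048 and intermediate size 5504 are inferred
# from the logged shapes; variable names are hypothetical, not MLC identifiers.
hidden_size, intermediate_size = 2048, 5504

qkv_bias = np.zeros(3 * hidden_size, dtype=np.float32)         # c_attn.bias, shape (6144,)
q_bias, k_bias, v_bias = np.split(qkv_bias, 3)                  # three 2048-long chunks

gate_up = np.zeros(2 * intermediate_size, dtype=np.float32)     # gate_up_proj output, length 11008
gate, up = np.split(gate_up, 2)                                 # gate and up, 5504 each
print(q_bias.shape, k_bias.shape, v_bias.shape, gate.shape, up.shape)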
[2024-03-18 19:07:05] INFO huggingface_loader.py:194: Unloading HF weight file: ../dist/models/Qwen1.5-1.8B/model.safetensors
[2024-03-18 19:07:06] INFO stats.py:76: Time usage: HF loading: 2.175 sec; Pre-quantization mapping: 1.753 sec; Quantization: 2.717 sec
[2024-03-18 19:07:06] INFO stats.py:90: RAM usage: Peak RAM: 6.843 GB. Total bytes loaded from disk: 6.843 GB
[2024-03-18 19:07:06] INFO convert_weight.py:156: Parameter size after quantization: 1.925 GB
[2024-03-18 19:07:06] INFO convert_weight.py:161: Total parameters: 1,836,828,672
[2024-03-18 19:07:06] INFO convert_weight.py:162: Bits per parameter: 9.003
[2024-03-18 19:07:06] INFO convert_weight.py:167: Saved to directory: /tmp/tmpnawifqwj
All finished, 52 total shards committed, record saved to /tmp/tmpnawifqwj/ndarray-cache.json
Also saved a bf16 record to /tmp/tmpnawifqwj/ndarray-cache-b16.json
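The closing statistics are self-consistent: 1.925 GB of quantized output over 1,836,828,672 parameters is roughly 9 bits per parameter, which matches 8-bit weights plus one float32 scale shared by every 32 weights (an extra 32/32 = 1 bit each), with the small remainder coming from the unquantized float32 layernorms and biases. A quick arithmetic check, assuming the reported sizes are GiB:

# Check of "Bits per parameter: 9.003", assuming the logged "GB" means GiB (2**30 bytes).
total_params = 1_836_828_672
quantized_bytes = 1.925 * 2**30                       # "Parameter size after quantization: 1.925 GB"

print(f"{quantized_bytes * 8 / total_params:.3f} bits/param")   # ~9.002; the 1.925 figure is rounded

# Expected for int8 weights with one float32 scale per 32-element group:
print(8 + 32 / 32, "bits/param for quantized tensors")          # 9.0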