pavan01729/samsung_svdllm_compressed
main · 1 contributor · History: 9 commits
Latest commit: pavan01729 · Upload new_gptq_8_llama_7b_hf_whitening_0.8.pt with huggingface_hub · 894100b · verified · about 2 months ago
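The commit messages show these checkpoints were uploaded with huggingface_hub, so the same library can fetch them. A minimal download sketch, assuming the repo id shown on this page; the filename is one of the LFS files listed below and can be swapped for any other:

from huggingface_hub import hf_hub_download

# Download one checkpoint from this repo into the local Hugging Face cache
# and return its local path (the file is roughly 9 GB, stored via Git LFS).
ckpt_path = hf_hub_download(
    repo_id="pavan01729/samsung_svdllm_compressed",
    filename="gptq_4_llama_7b_hf_whitening_0.65.pt",
)
print(ckpt_path)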
.gitattributes · Safe · 1.52 kB · initial commit · 2 months ago
gptq_4_llama_7b_hf_whitening_0.65.pt · pickle
Detected Pickle imports (24): "component.svd_llama.LlamaRotaryEmbedding", "torch.nn.modules.linear.Linear", "transformers.tokenization_utils.Trie", "_codecs.encode", "transformers.models.llama.modeling_llama.LlamaForCausalLM", "__builtin__.set", "transformers.models.llama.tokenization_llama.LlamaTokenizer", "transformers.models.llama.modeling_llama.LlamaModel", "torch._utils._rebuild_tensor_v2", "transformers.activations.SiLUActivation", "transformers.models.llama.modeling_llama.LlamaRMSNorm", "torch.float16", "collections.OrderedDict", "torch._utils._rebuild_parameter", "transformers.models.llama.configuration_llama.LlamaConfig", "component.svd_llama.SVD_LlamaMLP", "torch.device", "component.svd_llama.SVD_LlamaAttention", "transformers.generation.configuration_utils.GenerationConfig", "torch.HalfStorage", "tokenizers.AddedToken", "torch.nn.modules.container.ModuleList", "transformers.models.llama.modeling_llama.LlamaDecoderLayer", "torch.nn.modules.sparse.Embedding"
8.98 GB · LFS · Upload gptq_4_llama_7b_hf_whitening_0.65.pt with huggingface_hub · 2 months ago
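The pickle import list above suggests each checkpoint stores the whole model object (a LlamaForCausalLM whose attention and MLP blocks were replaced by SVD_LlamaAttention and SVD_LlamaMLP, plus tokenizer classes) written with torch.save, rather than a plain state dict. A minimal loading sketch under that assumption; the custom component.svd_llama module from the accompanying SVD-LLM code base must be importable, and the paths here are placeholders:

import sys
import torch

# The unpickler has to resolve classes such as component.svd_llama.SVD_LlamaAttention,
# so the directory containing the "component" package must be on sys.path.
sys.path.append("/path/to/svd_llm_code")  # placeholder, point at your checkout

# weights_only=True would reject these custom classes; full unpickling can run
# arbitrary code, so only load checkpoints you trust.
obj = torch.load(
    "gptq_4_llama_7b_hf_whitening_0.65.pt",
    map_location="cpu",
    weights_only=False,
)
# Depending on how the file was saved, obj may be the model itself or a container
# that also bundles the tokenizer (tokenizer classes appear in the import list above).
print(type(obj))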
gptq_4_llama_7b_hf_whitening_0.8.pt · pickle
Detected Pickle imports (24): "torch._utils._rebuild_parameter", "torch.nn.modules.sparse.Embedding", "torch.nn.modules.linear.Linear", "transformers.models.llama.configuration_llama.LlamaConfig", "torch.float16", "_codecs.encode", "torch.nn.modules.container.ModuleList", "transformers.activations.SiLUActivation", "transformers.models.llama.modeling_llama.LlamaRMSNorm", "transformers.models.llama.modeling_llama.LlamaForCausalLM", "collections.OrderedDict", "__builtin__.set", "transformers.models.llama.modeling_llama.LlamaModel", "transformers.models.llama.tokenization_llama.LlamaTokenizer", "component.svd_llama.SVD_LlamaAttention", "tokenizers.AddedToken", "torch.device", "component.svd_llama.SVD_LlamaMLP", "torch.HalfStorage", "transformers.generation.configuration_utils.GenerationConfig", "transformers.models.llama.modeling_llama.LlamaDecoderLayer", "component.svd_llama.LlamaRotaryEmbedding", "torch._utils._rebuild_tensor_v2", "transformers.tokenization_utils.Trie"
10.9 GB · LFS · Upload gptq_4_llama_7b_hf_whitening_0.8.pt with huggingface_hub · 2 months ago
gptq_8_llama_7b_hf_whitening_0.8.pt · pickle
Detected Pickle imports (24): "torch._utils._rebuild_parameter", "torch.nn.modules.sparse.Embedding", "torch.nn.modules.linear.Linear", "transformers.models.llama.configuration_llama.LlamaConfig", "torch.float16", "_codecs.encode", "torch.nn.modules.container.ModuleList", "transformers.activations.SiLUActivation", "transformers.models.llama.modeling_llama.LlamaRMSNorm", "transformers.models.llama.modeling_llama.LlamaForCausalLM", "collections.OrderedDict", "__builtin__.set", "transformers.models.llama.modeling_llama.LlamaModel", "transformers.models.llama.tokenization_llama.LlamaTokenizer", "component.svd_llama.SVD_LlamaAttention", "tokenizers.AddedToken", "torch.device", "component.svd_llama.SVD_LlamaMLP", "torch.HalfStorage", "transformers.generation.configuration_utils.GenerationConfig", "transformers.models.llama.modeling_llama.LlamaDecoderLayer", "component.svd_llama.LlamaRotaryEmbedding", "torch._utils._rebuild_tensor_v2", "transformers.tokenization_utils.Trie"
10.9 GB · LFS · Upload gptq_8_llama_7b_hf_whitening_0.8.pt with huggingface_hub · 2 months ago
jeffwan_llama_7b_hf_whitening_only_0.65.pt · 17.4 GB · LFS · Upload jeffwan_llama_7b_hf_whitening_only_0.65.pt with huggingface_hub · 2 months ago
jeffwan_llama_7b_hf_whitening_only_0.7.pt · 18.7 GB · LFS · Upload jeffwan_llama_7b_hf_whitening_only_0.7.pt with huggingface_hub · 2 months ago
jeffwan_llama_7b_hf_whitening_only_0.8.pt · pickle
Detected Pickle imports (25): "torch._utils._rebuild_parameter", "torch.nn.modules.sparse.Embedding", "torch.nn.modules.linear.Linear", "transformers.models.llama.configuration_llama.LlamaConfig", "torch.float16", "_codecs.encode", "torch.nn.modules.container.ModuleList", "transformers.activations.SiLUActivation", "transformers.models.llama.modeling_llama.LlamaRMSNorm", "torch.FloatStorage", "transformers.models.llama.modeling_llama.LlamaForCausalLM", "collections.OrderedDict", "__builtin__.set", "transformers.models.llama.modeling_llama.LlamaModel", "transformers.models.llama.tokenization_llama.LlamaTokenizer", "component.svd_llama.SVD_LlamaAttention", "tokenizers.AddedToken", "torch.device", "component.svd_llama.SVD_LlamaMLP", "torch.HalfStorage", "transformers.generation.configuration_utils.GenerationConfig", "transformers.models.llama.modeling_llama.LlamaDecoderLayer", "component.svd_llama.LlamaRotaryEmbedding", "torch._utils._rebuild_tensor_v2", "transformers.tokenization_utils.Trie"
21.3 GB · LFS · Upload jeffwan_llama_7b_hf_whitening_only_0.8.pt with huggingface_hub · 2 months ago
new_gptq_4_llama_7b_hf_whitening_0.8.pt · 10.9 GB · LFS · Upload new_gptq_4_llama_7b_hf_whitening_0.8.pt with huggingface_hub · about 2 months ago
new_gptq_8_llama_7b_hf_whitening_0.8.pt · pickle
Detected Pickle imports (24): "torch._utils._rebuild_parameter", "torch.nn.modules.sparse.Embedding", "torch.nn.modules.linear.Linear", "transformers.models.llama.configuration_llama.LlamaConfig", "torch.float16", "_codecs.encode", "torch.nn.modules.container.ModuleList", "transformers.activations.SiLUActivation", "transformers.models.llama.modeling_llama.LlamaRMSNorm", "transformers.models.llama.modeling_llama.LlamaForCausalLM", "collections.OrderedDict", "__builtin__.set", "transformers.models.llama.modeling_llama.LlamaModel", "transformers.models.llama.tokenization_llama.LlamaTokenizer", "component.svd_llama.SVD_LlamaAttention", "tokenizers.AddedToken", "torch.device", "component.svd_llama.SVD_LlamaMLP", "torch.HalfStorage", "transformers.generation.configuration_utils.GenerationConfig", "transformers.models.llama.modeling_llama.LlamaDecoderLayer", "component.svd_llama.LlamaRotaryEmbedding", "torch._utils._rebuild_tensor_v2", "transformers.tokenization_utils.Trie"
10.9 GB · LFS · Upload new_gptq_8_llama_7b_hf_whitening_0.8.pt with huggingface_hub · about 2 months ago