Spaces: Boni98/PixLore
Status: Runtime error
Branch: main
PixLore
1 contributor
History: 3 commits
Latest commit by Boni98: Upload finetunned_blipv2_epoch_5_loss_0.4936.pth (a31b1e3), over 1 year ago
.gitattributes    Safe    1.52 kB     initial commit    over 1 year ago
README.md         Safe    249 Bytes   initial commit    over 1 year ago
app.py            Safe    1.94 kB     Create app.py     over 1 year ago
finetunned_blipv2_epoch_5_loss_0.4936.pth
pickle
Detected Pickle imports (48):
"transformers.models.blip_2.modeling_blip_2.Blip2Encoder",
"torch.nn.modules.dropout.Dropout",
"torch.nn.modules.container.ModuleDict",
"transformers.models.blip_2.modeling_blip_2.Blip2QFormerLayer",
"transformers.models.blip_2.modeling_blip_2.Blip2QFormerSelfOutput",
"transformers.models.blip_2.modeling_blip_2.Blip2QFormerOutput",
"collections.OrderedDict",
"transformers.models.blip_2.modeling_blip_2.Blip2QFormerIntermediate",
"torch.nn.modules.activation.ReLU",
"torch.nn.modules.container.ModuleList",
"transformers.activations.GELUActivation",
"peft.tuners.lora.layer.Linear",
"transformers.models.opt.modeling_opt.OPTModel",
"transformers.models.blip_2.modeling_blip_2.Blip2QFormerModel",
"transformers.models.blip_2.modeling_blip_2.Blip2VisionModel",
"torch.nn.modules.container.ParameterDict",
"torch._C._nn.gelu",
"transformers.models.blip_2.modeling_blip_2.Blip2QFormerAttention",
"transformers.models.blip_2.configuration_blip_2.Blip2QFormerConfig",
"__builtin__.set",
"torch._utils._rebuild_tensor_v2",
"torch.nn.modules.linear.Linear",
"transformers.models.blip_2.configuration_blip_2.Blip2Config",
"transformers.models.blip_2.modeling_blip_2.Blip2Attention",
"peft.tuners.lora.model.LoraModel",
"peft.peft_model.PeftModel",
"transformers.models.blip_2.modeling_blip_2.Blip2VisionEmbeddings",
"torch.nn.modules.sparse.Embedding",
"transformers.models.opt.modeling_opt.OPTLearnedPositionalEmbedding",
"transformers.generation.configuration_utils.GenerationConfig",
"torch.nn.modules.normalization.LayerNorm",
"transformers.models.blip_2.modeling_blip_2.Blip2QFormerEncoder",
"transformers.models.blip_2.modeling_blip_2.Blip2ForConditionalGeneration",
"transformers.models.opt.modeling_opt.OPTAttention",
"transformers.models.opt.modeling_opt.OPTDecoderLayer",
"transformers.models.blip_2.modeling_blip_2.Blip2MLP",
"transformers.models.opt.modeling_opt.OPTForCausalLM",
"torch.FloatStorage",
"transformers.models.opt.configuration_opt.OPTConfig",
"peft.utils.peft_types.PeftType",
"transformers.models.blip_2.configuration_blip_2.Blip2VisionConfig",
"torch.float16",
"torch.nn.modules.conv.Conv2d",
"peft.tuners.lora.config.LoraConfig",
"transformers.models.blip_2.modeling_blip_2.Blip2QFormerMultiHeadAttention",
"torch._utils._rebuild_parameter",
"transformers.models.blip_2.modeling_blip_2.Blip2EncoderLayer",
"transformers.models.opt.modeling_opt.OPTDecoder"
15 GB    LFS    Upload finetunned_blipv2_epoch_5_loss_0.4936.pth    over 1 year ago
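The "Detected Pickle imports" list above is produced by scanning the pickle byte stream of the .pth file for the module/class references it would import on load; the entries (full model and PEFT classes rather than plain tensors) suggest the checkpoint stores an entire pickled model object, not just a state_dict. A minimal stdlib-only sketch of how such a scan works, using a hypothetical `TinyModel` class as a stand-in for the actual BLIP-2 model:

```python
import pickle
import pickletools

# Hypothetical stand-in class: pickling an instance embeds its import path
# in the stream, which is what a scanner reports as "Detected Pickle imports".
class TinyModel:
    def __init__(self):
        self.weight = [0.1, 0.2]

def detected_imports(payload: bytes) -> list[str]:
    """List the module/class references embedded in a pickle stream."""
    found = []
    for opcode, arg, _pos in pickletools.genops(payload):
        # Protocol <= 2 encodes class references with the GLOBAL opcode,
        # whose argument pickletools reports as "module name".
        if opcode.name == "GLOBAL":
            found.append(arg.replace(" ", "."))
    return found

payload = pickle.dumps(TinyModel(), protocol=2)
print(detected_imports(payload))  # e.g. ['__main__.TinyModel']
```

Because unpickling imports and calls these references, loading such a file executes code from the listed modules; a checkpoint like this should only be loaded from a trusted source (newer PyTorch versions default `torch.load` to a restricted `weights_only=True` mode precisely to refuse streams like this one).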