masters-thesis-vm/whispered_TIA_small_ad_tokenization_nosil
Author: master's thesis Vivien Meyer
Tags: Automatic Speech Recognition · Transformers · PyTorch · English · whisper · speech processing · nlp · asr · domain adaptation
arXiv: 2212.04356
License: unknown
1 contributor · History: 9 commits
Latest commit: vimey, "Update README.md" (a60d7da), almost 2 years ago
Files:
  results/ (directory): upload model, almost 2 years ago
  scripts/ (directory): upload model, almost 2 years ago
  .gitattributes (1.52 kB, Safe): initial commit, almost 2 years ago
  README.md (3.07 kB, Safe): Update README.md, almost 2 years ago
  added_tokens.json (2.08 kB, Safe): upload model, almost 2 years ago
  config.json (1.29 kB, Safe): upload model, almost 2 years ago
  generation_config.json (3.5 kB, Safe): upload model, almost 2 years ago
  merges.txt (494 kB, Safe): upload model, almost 2 years ago
  normalizer.json (52.7 kB, Safe): upload model, almost 2 years ago
  optimizer.pt (1.93 GB, LFS, Safe, pickle; detected imports: torch._utils._rebuild_tensor_v2, collections.OrderedDict, torch.FloatStorage): upload model, almost 2 years ago
  preprocessor_config.json (339 Bytes, Safe): upload model, almost 2 years ago
  pytorch_model.bin (963 MB, LFS, Safe, pickle; detected imports: torch._utils._rebuild_tensor_v2, torch.FloatStorage, collections.OrderedDict): upload model, almost 2 years ago
  requirements.txt (1.75 kB, Safe): upload model, almost 2 years ago
  rng_state.pth (14.6 kB, LFS, pickle; detected imports: _codecs.encode, torch._utils._rebuild_tensor_v2, torch.ByteStorage, numpy.core.multiarray._reconstruct, numpy.dtype, collections.OrderedDict, numpy.ndarray): upload model, almost 2 years ago
  scheduler.pt (627 Bytes, LFS, Safe, pickle; no problematic imports detected): upload model, almost 2 years ago
  special_tokens_map.json (2.08 kB, Safe): upload model, almost 2 years ago
  tokenizer_config.json (805 Bytes, Safe): upload model, almost 2 years ago
  trainer_state.json (14.5 kB, Safe): upload model, almost 2 years ago
  vocab.json (1.04 MB, Safe): upload model, almost 2 years ago
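Several checkpoint files above (optimizer.pt, pytorch_model.bin, rng_state.pth, scheduler.pt) are pickle streams, and the listing records which globals each stream would import when loaded. As a minimal sketch of how such a report can be produced with only the standard library (the helper name `pickle_imports` is ours, and this approximates rather than reproduces the Hub's scanner, e.g. it does not follow memoized strings):

```python
import pickle
import pickletools
import collections

def pickle_imports(data: bytes) -> set[str]:
    """Enumerate the module.name globals a pickle stream would import
    on load, roughly what the listing above shows as 'detected imports'."""
    imports = set()
    strings = []  # string constants seen so far, in order
    for op, arg, _pos in pickletools.genops(data):
        if op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif op.name in ("GLOBAL", "INST"):
            # protocols 0-3: argument is "module name" on two lines,
            # joined by a space by pickletools
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL" and len(strings) >= 2:
            # protocol 4+: module and name were pushed as the two
            # most recent string constants
            imports.add(f"{strings[-2]}.{strings[-1]}")
    return imports

# Example: an OrderedDict pickles with a single class import,
# the same collections.OrderedDict entry the listing reports.
data = pickle.dumps(collections.OrderedDict(a=1))
print(sorted(pickle_imports(data)))  # ['collections.OrderedDict']
```

Scanning opcodes this way never executes the stream, which is the point: `pickle.load` on an untrusted file can run arbitrary code, so inspecting the imports first is the safer order of operations.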