repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | is_nc | readme | hash |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Ni4z/my_awesome_wnut_model
|
Ni4z
|
distilbert
| 12 | 1 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['wnut_17']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,445 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2777
- Precision: 0.5676
- Recall: 0.2919
- F1: 0.3856
- Accuracy: 0.9412
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2872 | 0.4563 | 0.2373 | 0.3122 | 0.9377 |
| No log | 2.0 | 426 | 0.2777 | 0.5676 | 0.2919 | 0.3856 | 0.9412 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
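For reference, a minimal inference sketch, assuming the checkpoint is public on the Hub and using the standard `transformers` token-classification pipeline (the example sentence is arbitrary):
```py
from transformers import pipeline

# Hedged sketch: standard token-classification pipeline over this checkpoint.
ner = pipeline(
    "token-classification",
    model="Ni4z/my_awesome_wnut_model",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("The Golden State Warriors are a basketball team based in San Francisco."))
```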
|
5d193a33126f3fb03ddaa656cf83e90d
|
iakl/knight-big
|
iakl
| null | 19 | 5 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 703 | false |
### knight_big Dreambooth model trained by iakl with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:
|
a4d9f972eb45206d3db22c99d6f49ef6
|
bitsanlp/roberta-retrained-100k
|
bitsanlp
|
roberta
| 11 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 911 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-retrained_100k
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
43aa20c8b813d46d3a5c285dc23b711b
|
dkssud/wav2vec2-base-demo-colab
|
dkssud
|
wav2vec2
| 17 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,635 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4171
- Wer: 0.3452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0054 | 4.0 | 500 | 1.5456 | 0.9005 |
| 0.8183 | 8.0 | 1000 | 0.4738 | 0.4839 |
| 0.3019 | 12.0 | 1500 | 0.4280 | 0.4047 |
| 0.1738 | 16.0 | 2000 | 0.4584 | 0.3738 |
| 0.1285 | 20.0 | 2500 | 0.4418 | 0.3593 |
| 0.1104 | 24.0 | 3000 | 0.4110 | 0.3479 |
| 0.0828 | 28.0 | 3500 | 0.4171 | 0.3452 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.14.0
- Tokenizers 0.10.3
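A minimal transcription sketch, assuming a 16 kHz mono recording at a placeholder path and that the repo ships a processor alongside the model:
```py
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("dkssud/wav2vec2-base-demo-colab")
model = Wav2Vec2ForCTC.from_pretrained("dkssud/wav2vec2-base-demo-colab")

# "sample.wav" is a placeholder; the model expects 16 kHz input.
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```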
|
36b53473265bb53faeba21dc68b8f999
|
Roshan777/finetuning-sentiment-model-300-samples
|
Roshan777
|
distilbert
| 17 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,054 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-300-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6567
- Accuracy: 0.6833
- F1: 0.6154
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
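A short scoring sketch, assuming the checkpoint is public; the label names come from the model's config and may be the generic LABEL_0 / LABEL_1:
```py
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "Roshan777/finetuning-sentiment-model-300-samples"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("This movie was surprisingly good.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```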
|
1c69018c5d19559c83271ad6c8087c1c
|
HuyenNguyen/Vin5-P3
|
HuyenNguyen
|
whisper
| 15 | 19 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,324 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Vin5-P3
This model is a fine-tuned version of [HuyenNguyen/Vin4-P3](https://huggingface.co/HuyenNguyen/Vin4-P3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2358
- Wer: 12.7944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 600
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2786 | 0.77 | 300 | 0.2359 | 13.5655 |
| 0.2338 | 1.54 | 600 | 0.2358 | 12.7944 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
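A minimal transcription sketch with the standard Whisper classes, assuming 16 kHz audio in the fine-tuning language (the file path is a placeholder):
```py
import librosa
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("HuyenNguyen/Vin5-P3")
model = WhisperForConditionalGeneration.from_pretrained("HuyenNguyen/Vin5-P3")

audio, _ = librosa.load("clip.wav", sr=16_000)  # placeholder path, 16 kHz mono
features = processor(audio, sampling_rate=16_000, return_tensors="pt").input_features
ids = model.generate(features)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```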
|
4ff77f2ac4559503480aa855c57f0801
|
fathyshalab/all-roberta-large-v1-home-1-16-5
|
fathyshalab
|
roberta
| 11 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,509 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-home-1-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3789
- Accuracy: 0.3356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7614 | 1.0 | 1 | 2.6146 | 0.1889 |
| 2.2082 | 2.0 | 2 | 2.5232 | 0.2667 |
| 1.8344 | 3.0 | 3 | 2.4516 | 0.2933 |
| 1.4601 | 4.0 | 4 | 2.4033 | 0.3267 |
| 1.2748 | 5.0 | 5 | 2.3789 | 0.3356 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
c801795e7669cbb12d51119964636053
|
sd-concepts-library/borderlands
|
sd-concepts-library
| null | 9 | 0 | null | 14 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,038 | false |
### borderlands on Stable Diffusion
This is the `<borderlands>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
b25f592f8338eca1744180f34ed09ebd
|
haor/Evt_M
|
haor
| null | 15 | 0 |
diffusers
| 10 |
text-to-image
| false | false | false |
creativeml-openrail-m
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
| false | true | true | 3,584 | false |
# Evt_M
Evt_M is a model derived from Evt_V4 EP06.
It retains the characteristics of Evt_V4, while batches of images generated with the same set of parameters are no longer rigid and monotonous and offer more variety.
## Examples
**Prompt1:**




```
{Masterpiece, Kaname_Madoka, tall and long double tails, well rooted hair, (pink hair), pink eyes, crossed bangs, ojousama, jk, thigh bandages, wrist cuffs, (pink bow: 1.2)}, plain color, sketch, masterpiece, high detail, masterpiece portrait, best quality, ray tracing, {:<, look at the edge}
Negative prompt: ((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)),extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((bad proportions))), ((extra limbs)), (((deformed))), (((disfigured))), cloned face, gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), too many fingers, (((long neck))), (((low quality))), normal quality, blurry, bad feet, text font ui, ((((worst quality)))), anatomical nonsense, (((bad shadow))), unnatural body, liquid body, 3D, 3D game, 3D game scene, 3D character, bad hairs, poorly drawn hairs, fused hairs, big muscles, bad face, extra eyes, furry, pony, mosaic, disappearing calf, disappearing legs, extra digit, fewer digit, fused digit, missing digit, fused feet, poorly drawn eyes, big face, long face, bad eyes, thick lips, obesity, strong girl, beard, Excess legs
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Clip skip: 2
```
**Prompt2:**



```
best quality, illustration,highly detailed,1girl,upper body,beautiful detailed eyes, medium_breasts, long hair,grey hair, grey eyes, curly hair, bangs,empty eyes,expressionless, ((masterpiece)),twintails,beautiful detailed sky, beautiful detailed water, cinematic lighting, dramatic angle,((back to the viewer)),(an extremely delicate and beautiful),school uniform,black ribbon,light smile,
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry,artist name,bad feet
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Clip skip: 2
```
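The prompts above are written for the AUTOMATIC1111 web UI. If the repository ships standard diffusers-format weights, a rough equivalent might look like the sketch below; the curly-brace weighting, Clip skip and the exact DPM++ SDE Karras sampler do not carry over directly:
```py
import torch
from diffusers import StableDiffusionPipeline

# Assumes diffusers-format weights in this repo; prompt shortened from the examples above.
pipe = StableDiffusionPipeline.from_pretrained("haor/Evt_M", torch_dtype=torch.float16).to("cuda")

image = pipe(
    "best quality, illustration, highly detailed, 1girl, upper body, school uniform, light smile",
    negative_prompt="lowres, bad anatomy, bad hands, worst quality, low quality, jpeg artifacts, watermark",
    num_inference_steps=20,
    guidance_scale=7,
).images[0]
image.save("evt_m_sample.png")
```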
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
ef7aa5243562b96774fbeb86c8de173a
|
philschmid/roberta-large-sst2
|
philschmid
|
roberta
| 17 | 223 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null |
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,566 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-sst2
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1400
- Accuracy: 0.9644
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3688 | 1.0 | 264 | 0.1444 | 0.9564 |
| 0.1529 | 2.0 | 528 | 0.1502 | 0.9518 |
| 0.107 | 3.0 | 792 | 0.1388 | 0.9530 |
| 0.0666 | 4.0 | 1056 | 0.1400 | 0.9644 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
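A quick sanity-check sketch with the text-classification pipeline (the label names come from the model's config):
```py
from transformers import pipeline

clf = pipeline("text-classification", model="philschmid/roberta-large-sst2")
print(clf("A gorgeous, witty, seductive movie."))
```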
|
a5ac64f87b6e197dcabe4edf69f25045
|
habib1030/distilbert-base-uncased-finetuned-squad
|
habib1030
|
distilbert
| 14 | 5 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,280 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8711
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 5.9634 |
| No log | 2.0 | 2 | 5.9013 |
| No log | 3.0 | 3 | 5.8711 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
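For completeness, a question-answering sketch; given the very small number of training steps and the high evaluation loss, treat any answer as illustrative only:
```py
from transformers import pipeline

qa = pipeline("question-answering", model="habib1030/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What task was the model fine-tuned for?",
    context="This checkpoint was fine-tuned for extractive question answering on a SQuAD-style dataset.",
)
print(result)
```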
|
7c6e210a9bb7db5e09e8195e1e79b525
|
Gladiator/albert-large-v2_ner_wikiann
|
Gladiator
|
albert
| 12 | 7 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['wikiann']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,710 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-large-v2_ner_wikiann
This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3416
- Precision: 0.8240
- Recall: 0.8375
- F1: 0.8307
- Accuracy: 0.9270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3451 | 1.0 | 2500 | 0.3555 | 0.7745 | 0.7850 | 0.7797 | 0.9067 |
| 0.2995 | 2.0 | 5000 | 0.2927 | 0.7932 | 0.8240 | 0.8083 | 0.9205 |
| 0.252 | 3.0 | 7500 | 0.2936 | 0.8094 | 0.8236 | 0.8164 | 0.9239 |
| 0.1676 | 4.0 | 10000 | 0.3302 | 0.8256 | 0.8359 | 0.8307 | 0.9268 |
| 0.1489 | 5.0 | 12500 | 0.3416 | 0.8240 | 0.8375 | 0.8307 | 0.9270 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
fa20dbc6fa2908aac7a11ad107e69f2d
|
Cacau/anglaludicmindtwo
|
Cacau
| null | 27 | 2 |
diffusers
| 0 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,250 | false |
### anglaLudicMindTwo on Stable Diffusion via Dreambooth, trained with the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### Model by Cacau. This is the Stable Diffusion model fine-tuned on the anglaLudicMindTwo concept, taught to Stable Diffusion with Dreambooth.
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
You can run your new concept via A1111 Colab :[Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)

|
2abf3ddd2875f7b1b84a4b8a290bd7c6
|
bitsanlp/roberta-retrained-500k
|
bitsanlp
|
roberta
| 11 | 0 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 950 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-retrained-500k
This model is a fine-tuned version of [bitsanlp/roberta-retrained-350k](https://huggingface.co/bitsanlp/roberta-retrained-350k) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
edb84fae944a904b368dab17dca40c14
|
andrewkroening/GalaxyFarAway-DialoGPT-LukeSkywalker
|
andrewkroening
|
gpt2
| 9 | 6 |
transformers
| 0 |
conversational
| true | false | false |
cc
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['conversational']
| false | true | true | 3,285 | false |
# GPT-2
This model is based on a GPT-2 model which was fine-tuned on a Hugging Face dataset. It is intended largely as an illustrative example and is not intended to be used for any serious purpose. It's trained on a movie script for goodness' sake.
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Acknowledgements
There are several sources of inspiration and insight for the project that spawned this model. I'd like to recognize them up front:
* The [Microsoft DialoGPT-Medium](https://huggingface.co/microsoft/DialoGPT-medium?text=Hi.) model page was very insightful for getting started.
* Lynn Zheng [r3dhummingbird](https://huggingface.co/r3dhummingbird/DialoGPT-medium-joshua?text=Hey+my+name+is+Thomas%21+How+are+you%3F) put together one heck of an awesome tutorial on how to fine-tune GPT-2 for conversational purposes. I used her tutorial as a starting point for this project. Check out the [Github repo here.](https://github.com/RuolinZheng08/twewy-discord-chatbot)
* [This article](https://towardsdatascience.com/make-your-own-rick-sanchez-bot-with-transformers-and-dialogpt-fine-tuning-f85e6d1f4e30) was also very insightful. Written by Rostyslav Neskorozhenyi.
* From a lineage standpoint, it looks like Nathan Cooper kicked this whole thing off with this [notebook.](https://github.com/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb)
* Noah Gift figured out a few of the big pieces in [this repository.](https://github.com/nogibjj/hugging-face-tutorial-practice)
* I'd be remiss if I also didn't mention Hugging Face's own support [documentation](https://huggingface.co/transformers/v2.0.0/examples.html#gpt-2-gpt-and-causal-language-modeling) and team. All around great.
## Model description
This model uses GPT-2 Medium as a base model and was fine-tuned using scripts from the original (and best) Star Wars Trilogy. In this particular case, it was fine-tuned on Luke Skywalker's 490-some lines. This is not a lot, and thus the model should not be assumed to have serious integrity. It's just a fun little project.
## Intended uses & limitations
This model is intended to be used for fun and entertainment. Don't take it too seriously.
### Ways to use
You can always chat with the model directly on the Hugging Face website. Just click the "Chat" button on the right side of the model page.
If you want to use the model in your own project, I recommend you train it better using much more data.
To access the GitHub repository I used to train this model, click [here](https://github.com/nogibjj/hugging-face-gpt-trainer/tree/gpt-fine-tune)
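If you would rather run it locally, a minimal sketch following the usual DialoGPT turn format is shown below (this is not taken from the training repo; the sampling settings are arbitrary):
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "andrewkroening/GalaxyFarAway-DialoGPT-LukeSkywalker"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# One conversational turn: the prompt ends with the EOS token, as in DialoGPT training.
input_ids = tokenizer.encode("Hello there." + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(
    input_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(reply_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```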
## Fine-tuning data
The script to generate this model takes a Hugging Face data set in this approximate format:
| Speaker | Text |
| --- | --- |
| Luke | Hello there. |
| Han | General Kenobi. |
| Luke | You are a bold one. |
The script then asks the user to define parameters for making the dataset and proceeding to fine-tuning. The actual dataset for this model can be found [here.](andrewkroening/Star-wars-scripts-dialogue-IV-VI)
|
7806222ffecee064800ba812bf3de4ac
|
ukeeba/test1-1-1-1
|
ukeeba
| null | 18 | 2 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 419 | false |
### test1.1.1.1 Dreambooth model trained by ukeeba with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
2ebdd40549ab9373e06399ad62128860
|
dchaplinsky/uk_ner_web_trf_large
|
dchaplinsky
| null | 16 | 15 |
spacy
| 4 |
token-classification
| false | false | false |
mit
|
['uk']
|
['ner-uk']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['spacy', 'token-classification']
| false | true | true | 907 | false |
# uk_ner_web_trf_large
## Model description
**uk_ner_web_trf_large** is a fine-tuned [XLM-Roberta model](https://huggingface.co/xlm-roberta-large) that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art (SoA)** performance on the NER task for the Ukrainian language. It outperforms another SpaCy model, [uk_core_news_trf](https://huggingface.co/ukr-models/uk_core_news_trf), on the NER task.
It has been trained to recognize four types of entities: locations (LOC), organizations (ORG), persons (PERS), and miscellaneous (MISC).
The model was fine-tuned on the [NER-UK dataset](https://github.com/lang-uk/ner-uk), released by [lang-uk](https://lang.org.ua).
A smaller transformer-based SpaCy model is available [here](https://huggingface.co/dchaplinsky/uk_ner_web_trf_base).
Copyright: [Dmytro Chaplynskyi](https://twitter.com/dchaplinsky), [lang-uk project](https://lang.org.ua), 2022
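A minimal usage sketch, assuming the packaged pipeline from this repo has already been installed so that spaCy can find it by name (the example sentence is arbitrary):
```py
import spacy

nlp = spacy.load("uk_ner_web_trf_large")
doc = nlp("Дмитро Чаплинський живе в Києві та працює над проєктом lang-uk.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```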
|
45c0d69523eacf10f22f210ccffb5109
|
anas-awadalla/t5-base-few-shot-k-1024-finetuned-squad-infilling-seed-0
|
anas-awadalla
|
t5
| 17 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 966 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-few-shot-k-1024-finetuned-squad-infilling-seed-0
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
29af56df842b6c233792d1fac36e44bf
|
IIIT-L/xlm-roberta-base-finetuned-non-code-mixed-DS
|
IIIT-L
|
xlm-roberta
| 9 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,595 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-non-code-mixed-DS
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1771
- Accuracy: 0.6365
- Precision: 0.6252
- Recall: 0.6242
- F1: 0.6242
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.6820964947491663e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9475 | 2.0 | 926 | 0.8620 | 0.6278 | 0.6197 | 0.6042 | 0.6081 |
| 0.6661 | 3.99 | 1852 | 0.9578 | 0.6451 | 0.6356 | 0.6281 | 0.6301 |
| 0.4457 | 5.99 | 2778 | 1.1771 | 0.6365 | 0.6252 | 0.6242 | 0.6242 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
01740beb4fddf9841c39ef4ae98dbc46
|
jonatasgrosman/exp_w2v2r_de_xls-r_age_teens-2_sixties-8_s878
|
jonatasgrosman
|
wav2vec2
| 10 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['de']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'de']
| false | true | true | 475 | false |
# exp_w2v2r_de_xls-r_age_teens-2_sixties-8_s878
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
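A minimal sketch with the HuggingSound tool mentioned above (the audio paths are placeholders for 16 kHz German recordings):
```py
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_de_xls-r_age_teens-2_sixties-8_s878")
transcriptions = model.transcribe(["clip_1.wav", "clip_2.wav"])
print(transcriptions[0]["transcription"])
```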
|
aa509589e494a98f67e326e6aeb5a7be
|
nakamura196/roberta-small-hi-char-mlm
|
nakamura196
|
roberta
| 15 | 5 |
transformers
| 1 |
fill-mask
| true | false | false |
cc-by-sa-4.0
|
['ja']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['japanese', 'masked-lm']
| false | true | true | 414 | false |
# roberta-small-hi-char-mlm
## Model Description
This is a RoBERTa model pre-trained on HI texts with a character tokenizer.
It uses the `is_decoder=False` option.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("nakamura196/roberta-small-hi-char-mlm")
model=AutoModelForMaskedLM.from_pretrained("nakamura196/roberta-small-hi-char-mlm")
```
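A possible continuation that actually fills a mask; the input sentence is an arbitrary Japanese example, not drawn from the training data:
```py
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("nakamura196/roberta-small-hi-char-mlm")
model = AutoModelForMaskedLM.from_pretrained("nakamura196/roberta-small-hi-char-mlm")

# Predict the single masked character.
inputs = tokenizer(f"これは{tokenizer.mask_token}です", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))
```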
|
6d6374eb0ffe355fe70e68344e20c9cc
|
mrm8488/data2vec-text-base-finetuned-rte
|
mrm8488
|
data2vec-text
| 14 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null |
['glue']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,477 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data2vec-text-base-finetuned-rte
This model is a fine-tuned version of [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6670
- Accuracy: 0.6209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.7091 | 0.4729 |
| No log | 2.0 | 312 | 0.6893 | 0.5271 |
| No log | 3.0 | 468 | 0.6670 | 0.6209 |
| 0.6919 | 4.0 | 624 | 0.6740 | 0.5921 |
| 0.6919 | 5.0 | 780 | 0.6644 | 0.6101 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
0131fcd0c0840839649932324dd9f80c
|
Botnoi/wav2vec2-xls-r-300m-th-v7_0
|
Botnoi
|
wav2vec2
| 52 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,599 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-th-v7_0
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4099
- Wer: 0.9988
- Cer: 0.7861
- Clean Cer: 0.7617
- Learning Rate: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Clean Cer | Learning Rate |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:---------:|:------:|
| 8.5484 | 0.4 | 500 | 3.6234 | 1.0 | 1.0 | 1.0 | 0.0000 |
| 3.2275 | 0.8 | 1000 | 2.2960 | 0.9998 | 0.7081 | 0.6540 | 0.0000 |
| 0.9955 | 1.2 | 1500 | 1.2224 | 0.9549 | 0.4327 | 0.3756 | 0.0000 |
| 0.66 | 1.61 | 2000 | 0.9559 | 0.9232 | 0.3651 | 0.3040 | 0.0000 |
| 0.546 | 2.01 | 2500 | 0.9207 | 0.9481 | 0.3585 | 0.2826 | 0.0000 |
| 0.4459 | 2.41 | 3000 | 0.7701 | 0.8693 | 0.2940 | 0.2383 | 0.0000 |
| 0.4041 | 2.81 | 3500 | 0.7756 | 0.8224 | 0.2949 | 0.2634 | 0.0000 |
| 0.3637 | 3.21 | 4000 | 0.6015 | 0.7015 | 0.2064 | 0.1807 | 0.0000 |
| 0.334 | 3.61 | 4500 | 0.5615 | 0.6675 | 0.1907 | 0.1638 | 0.0000 |
| 0.3283 | 4.02 | 5000 | 0.6205 | 0.7073 | 0.2092 | 0.1803 | 0.0000 |
| 0.3762 | 4.42 | 5500 | 0.7517 | 0.6366 | 0.1778 | 0.1600 | 0.0000 |
| 0.4954 | 4.82 | 6000 | 0.9374 | 0.7073 | 0.2023 | 0.1735 | 0.0000 |
| 0.5568 | 5.22 | 6500 | 0.8859 | 0.7027 | 0.1982 | 0.1666 | 0.0000 |
| 0.6756 | 5.62 | 7000 | 1.0252 | 0.6802 | 0.1920 | 0.1628 | 0.0000 |
| 0.7752 | 6.02 | 7500 | 1.1259 | 0.7657 | 0.2309 | 0.1908 | 0.0000 |
| 0.8305 | 6.43 | 8000 | 1.3857 | 0.9029 | 0.3252 | 0.2668 | 0.0000 |
| 1.7385 | 6.83 | 8500 | 3.2320 | 0.9998 | 0.9234 | 0.9114 | 0.0000 |
| 2.7839 | 7.23 | 9000 | 3.3238 | 0.9999 | 0.9400 | 0.9306 | 0.0000 |
| 2.8307 | 7.63 | 9500 | 3.2678 | 0.9998 | 0.9167 | 0.9053 | 0.0000 |
| 2.7672 | 8.03 | 10000 | 3.2435 | 0.9995 | 0.8992 | 0.8867 | 0.0000 |
| 2.7426 | 8.43 | 10500 | 3.2396 | 0.9995 | 0.8720 | 0.8561 | 0.0000 |
| 2.7608 | 8.84 | 11000 | 3.2689 | 0.9993 | 0.8399 | 0.8202 | 0.0000 |
| 2.8195 | 9.24 | 11500 | 3.3283 | 0.9989 | 0.8084 | 0.7865 | 0.0000 |
| 2.9044 | 9.64 | 12000 | 3.4099 | 0.9988 | 0.7861 | 0.7617 | 0.0000 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
35f7e067a5223a1a724796feaba090d2
|
sberbank-ai/ruSciBERT
|
sberbank-ai
|
roberta
| 7 | 497 |
transformers
| 3 |
fill-mask
| true | false | false |
apache-2.0
|
['ru']
| null | null | 1 | 0 | 0 | 1 | 0 | 0 | 0 |
['Transformers', 'bert']
| false | true | true | 536 | false |
# ruSciBERT
The model was trained by the Sber AI team and the MLSA Lab of the Institute for AI, MSU.
If you use our model for your project, please tell us about it ([[email protected]]([email protected])).
[Presentation at the AI Journey 2022](https://ai-journey.ru/archive/?year=2022&video=https://vk.com/video_ext.php?oid=-22522055&id=456242496&hash=ae9efe06acf647fd)
* Task: `mask filling`
* Type: `encoder`
* Tokenizer: `bpe`
* Dict size: `50265`
* Num Parameters: `123 M`
* Training Data Volume: `6.5 GB`
|
6fae7842f079834f0ef6830044cde2e9
|
sd-concepts-library/low-poly-hd-logos-icons
|
sd-concepts-library
| null | 26 | 0 | null | 7 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,257 | false |
### Low Poly HD Logos & Icons on Stable Diffusion
This is the `<low-poly-hd-logos-icons>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:









Result






For more results, check the files.
|
0a2aeb39b928430cc47fe54872c766f8
|
Tom11/xlm-roberta-base-finetuned-panx-it
|
Tom11
|
xlm-roberta
| 9 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,318 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2462
- F1: 0.8240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7922 | 1.0 | 70 | 0.3091 | 0.7421 |
| 0.2842 | 2.0 | 140 | 0.2508 | 0.8013 |
| 0.1815 | 3.0 | 210 | 0.2462 | 0.8240 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cpu
- Datasets 1.16.1
- Tokenizers 0.13.2
|
c1f4894ef0d81745489960dfbfd641bc
|
jonatasgrosman/exp_w2v2t_sv-se_unispeech_s449
|
jonatasgrosman
|
unispeech
| 10 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['sv-SE']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'sv-SE']
| false | true | true | 475 | false |
# exp_w2v2t_sv-se_unispeech_s449
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
da4a0390a222d133d0044c2b6498a685
|
steveabecassis/mt5-small-finetuned-xsum
|
steveabecassis
|
mt5
| 10 | 0 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,946 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-xsum
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5196
- Rouge1: 0.3378
- Rouge2: 0.275
- Rougel: 0.3372
- Rougelsum: 0.3367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 21 | 11.8500 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 2.0 | 42 | 11.1279 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 3.0 | 63 | 10.0382 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 4.0 | 84 | 9.1579 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 5.0 | 105 | 8.6827 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 6.0 | 126 | 7.3651 | 0.0028 | 0.0016 | 0.0028 | 0.0028 |
| No log | 7.0 | 147 | 6.4400 | 0.019 | 0.0129 | 0.0191 | 0.0197 |
| No log | 8.0 | 168 | 5.2631 | 0.0272 | 0.0229 | 0.0288 | 0.0288 |
| No log | 9.0 | 189 | 4.5832 | 0.1095 | 0.0688 | 0.1053 | 0.1051 |
| No log | 10.0 | 210 | 4.2350 | 0.1263 | 0.0824 | 0.1216 | 0.1235 |
| No log | 11.0 | 231 | 3.9249 | 0.1541 | 0.1051 | 0.1513 | 0.1532 |
| No log | 12.0 | 252 | 3.5469 | 0.1701 | 0.1156 | 0.1665 | 0.1683 |
| No log | 13.0 | 273 | 3.3689 | 0.2672 | 0.2095 | 0.2667 | 0.2659 |
| No log | 14.0 | 294 | 3.1733 | 0.3102 | 0.2483 | 0.3103 | 0.3104 |
| No log | 15.0 | 315 | 3.0810 | 0.3073 | 0.2457 | 0.3074 | 0.3071 |
| No log | 16.0 | 336 | 3.0005 | 0.3071 | 0.2451 | 0.3075 | 0.3069 |
| No log | 17.0 | 357 | 2.9663 | 0.3015 | 0.2364 | 0.3022 | 0.3018 |
| No log | 18.0 | 378 | 2.8718 | 0.3195 | 0.2583 | 0.3197 | 0.3187 |
| No log | 19.0 | 399 | 2.8061 | 0.3159 | 0.2554 | 0.316 | 0.3143 |
| No log | 20.0 | 420 | 2.7009 | 0.3351 | 0.273 | 0.3338 | 0.3341 |
| No log | 21.0 | 441 | 2.6307 | 0.3384 | 0.2763 | 0.3382 | 0.3381 |
| No log | 22.0 | 462 | 2.6006 | 0.3364 | 0.2743 | 0.3362 | 0.3357 |
| No log | 23.0 | 483 | 2.5819 | 0.3334 | 0.2712 | 0.3331 | 0.3333 |
| 13.1102 | 24.0 | 504 | 2.5606 | 0.3309 | 0.269 | 0.3302 | 0.3305 |
| 13.1102 | 25.0 | 525 | 2.5458 | 0.338 | 0.2744 | 0.3369 | 0.3373 |
| 13.1102 | 26.0 | 546 | 2.5366 | 0.3361 | 0.2715 | 0.3352 | 0.3352 |
| 13.1102 | 27.0 | 567 | 2.5301 | 0.3413 | 0.2787 | 0.3408 | 0.3406 |
| 13.1102 | 28.0 | 588 | 2.5236 | 0.341 | 0.2783 | 0.3402 | 0.3401 |
| 13.1102 | 29.0 | 609 | 2.5206 | 0.3405 | 0.2779 | 0.3399 | 0.3397 |
| 13.1102 | 30.0 | 630 | 2.5196 | 0.3378 | 0.275 | 0.3372 | 0.3367 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0
- Datasets 2.8.0
- Tokenizers 0.13.2
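The card does not say which data or input format was used for fine-tuning, so the sketch below is only a smoke test with a placeholder paragraph:
```py
from transformers import pipeline

summarizer = pipeline("summarization", model="steveabecassis/mt5-small-finetuned-xsum")
text = (
    "The committee met on Tuesday to discuss the new budget. After several hours of debate, "
    "members agreed to increase funding for public transport and to postpone the road project."
)
print(summarizer(text, max_length=48, min_length=8)[0]["summary_text"])
```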
|
fa1b04d2d6ba2f5edf7b8e3024f10660
|
EMBEDDIA/est-roberta
|
EMBEDDIA
|
camembert
| 9 | 174 |
transformers
| 2 |
fill-mask
| true | false | false |
cc-by-sa-4.0
|
['et']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 579 | false |
# Usage
Load it in the transformers library with:
```
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/est-roberta")
model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/est-roberta")
```
# Est-RoBERTa
Est-RoBERTa is a monolingual Estonian BERT-like model. It is closely related to the French CamemBERT model (https://camembert-model.fr/). The Estonian corpora used for training the model contain 2.51 billion tokens in total. The subword vocabulary contains 40,000 tokens.
Est-RoBERTa was trained for 40 epochs.
|
3c4b9968d2755df3fbd36529d114985c
|
Helsinki-NLP/opus-mt-de-bi
|
Helsinki-NLP
|
marian
| 10 | 10 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 | false |
### opus-mt-de-bi
* source languages: de
* target languages: bi
* OPUS readme: [de-bi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-bi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-bi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.bi | 25.7 | 0.450 |
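A minimal sketch using the standard MarianMT classes for this de-bi checkpoint (the German sentence is arbitrary):
```py
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-de-bi"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tokenizer(["Guten Morgen, wie geht es dir?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```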
|
6b9c25801ddc6c443cc5d42566e88188
|
Hormigo/roberta-base-bne-finetuned-amazon_reviews_multi
|
Hormigo
|
roberta
| 13 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
cc-by-4.0
| null |
['amazon_reviews_multi']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 1,317 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2275
- Accuracy: 0.9335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1909 | 1.0 | 1250 | 0.1717 | 0.9333 |
| 0.0932 | 2.0 | 2500 | 0.2275 | 0.9335 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
23b03a0388ec96dee43e46d77514c078
|
CennetOguz/bert-large-uncased-finetuned-youcook_2
|
CennetOguz
|
bert
| 9 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,342 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-youcook_2
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3915 | 1.0 | 206 | 2.1036 |
| 2.0412 | 2.0 | 412 | 2.2207 |
| 1.9062 | 3.0 | 618 | 1.7281 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+17540c5
- Datasets 2.3.2
- Tokenizers 0.12.1
|
b72b5b601e1ebf0c24fe218378b4dfac
|
susnato/xlm-roberta-base-finetuned-panx-de-fr
|
susnato
|
xlm-roberta
| 9 | 12 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,323 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2871
- F1: 0.8596
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2911 | 1.0 | 3718 | 0.2709 | 0.8020 |
| 0.1344 | 2.0 | 7436 | 0.2659 | 0.8432 |
| 0.0631 | 3.0 | 11154 | 0.2871 | 0.8596 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
14fdb7cf98be1b91745f966b38b0e45f
|
Sandeepanie/clinical-finetuned-data2
|
Sandeepanie
|
bert
| 12 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,498 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clinical-finetuned-data2
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4949
- F1: 0.7800
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.66 | 1.0 | 50 | 0.6269 | 0.6659 |
| 0.5476 | 2.0 | 100 | 0.5311 | 0.7615 |
| 0.3766 | 3.0 | 150 | 0.4457 | 0.7970 |
| 0.2785 | 4.0 | 200 | 0.5251 | 0.7615 |
| 0.2274 | 5.0 | 250 | 0.4949 | 0.7800 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
285fb56d1b3bf4976a15521d4a4b2da5
|
chrisvinsen/wav2vec2-2
|
chrisvinsen
|
wav2vec2
| 16 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,898 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9253
- Wer: 0.8133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.4469 | 0.34 | 200 | 3.7440 | 1.0 |
| 3.1152 | 0.69 | 400 | 3.3755 | 1.0 |
| 2.9228 | 1.03 | 600 | 3.0427 | 1.0 |
| 2.8661 | 1.38 | 800 | 2.9406 | 1.0 |
| 2.8402 | 1.72 | 1000 | 2.9034 | 1.0 |
| 2.8301 | 2.07 | 1200 | 2.8850 | 1.0 |
| 2.8088 | 2.41 | 1400 | 2.8479 | 1.0 |
| 2.6892 | 2.75 | 1600 | 2.5800 | 1.0 |
| 2.3249 | 3.1 | 1800 | 2.1310 | 1.0 |
| 1.9687 | 3.44 | 2000 | 1.7652 | 0.9982 |
| 1.7338 | 3.79 | 2200 | 1.5430 | 0.9974 |
| 1.5698 | 4.13 | 2400 | 1.3927 | 0.9985 |
| 1.4475 | 4.48 | 2600 | 1.3186 | 0.9911 |
| 1.3764 | 4.82 | 2800 | 1.2406 | 0.9647 |
| 1.3022 | 5.16 | 3000 | 1.1954 | 0.9358 |
| 1.2409 | 5.51 | 3200 | 1.1450 | 0.8990 |
| 1.1989 | 5.85 | 3400 | 1.1107 | 0.8794 |
| 1.1478 | 6.2 | 3600 | 1.0839 | 0.8667 |
| 1.106 | 6.54 | 3800 | 1.0507 | 0.8573 |
| 1.0792 | 6.88 | 4000 | 1.0179 | 0.8463 |
| 1.0636 | 7.23 | 4200 | 0.9974 | 0.8355 |
| 1.0224 | 7.57 | 4400 | 0.9757 | 0.8343 |
| 1.0166 | 7.92 | 4600 | 0.9641 | 0.8261 |
| 0.9925 | 8.26 | 4800 | 0.9553 | 0.8183 |
| 0.9934 | 8.61 | 5000 | 0.9466 | 0.8199 |
| 0.9741 | 8.95 | 5200 | 0.9353 | 0.8172 |
| 0.9613 | 9.29 | 5400 | 0.9331 | 0.8133 |
| 0.9714 | 9.64 | 5600 | 0.9272 | 0.8144 |
| 0.9593 | 9.98 | 5800 | 0.9253 | 0.8133 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
a7fdcc1f1cdbaa8e9ac88b815952400a
|
FloatingPoint/MiloManara
|
FloatingPoint
| null | 3 | 0 | null | 1 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,327 | false |
**Milo Manara Style**
This is the Alpha release of a Stable Diffusion model trained to achieve the style of the Italian illustration master Milo Manara.
Use the token **in the style of ->Manara** in your prompts for the style.
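As a rough sketch of how the token might be used with `diffusers` (this snippet is not part of the original card; it assumes the repository ships standard Stable Diffusion weights, and the prompt is only an invented example):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned weights; fp16 and CUDA are optional assumptions for speed.
pipe = StableDiffusionPipeline.from_pretrained(
    "FloatingPoint/MiloManara", torch_dtype=torch.float16
).to("cuda")

# Invented example prompt using the style token described above.
image = pipe("portrait of a woman on a Venetian balcony, in the style of ->Manara").images[0]
image.save("manara_style.png")
```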
**Sample result**

**Warning**: Due to the nature of the style, NSFW images may be easily generated using this model.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
c55d7713e83cf68db44706e3f9c06010
|
chmanoj/xls-r-2B-te
|
chmanoj
|
wav2vec2
| 33 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['te']
|
['openslr', 'SLR66']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'openslr_SLR66', 'generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
| true | true | true | 1,690 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-2b](https://huggingface.co/facebook/wav2vec2-xls-r-2b) on the OPENSLR_SLR66 - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4253
- Wer: 0.5109
### Evaluation metrics
| Metric | Split | Decode with LM | Value |
|:------:|:------:|:--------------:|:---------:|
| WER | Train | No | |
| CER | Train | No | |
| WER | Test | No | |
| CER | Test | No | |
| WER | Train | Yes | |
| CER | Train | Yes | |
| WER | Test | Yes | |
| CER | Test | Yes | |
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 12
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- learning_rate: 3e-6
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 150.0
- hidden_dropout: 0.15
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
f5960dfa2f0f278a33dcc2cde53d3873
|
sd-concepts-library/liqwid-aquafarmer
|
sd-concepts-library
| null | 38 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 4,446 | false |
### liqwid_aquafarmer on Stable Diffusion
This is the `<aquafarmer>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:

































|
ea75af9a94159970d992e60633ea1f91
|
fathyshalab/all-roberta-large-v1-utility-1000-16-5-oos
|
fathyshalab
|
roberta
| 11 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,519 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-utility-1000-16-5-oos
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2920
- Accuracy: 0.3733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.0353 | 1.0 | 1 | 4.7572 | 0.2044 |
| 4.377 | 2.0 | 2 | 4.5884 | 0.3111 |
| 3.8842 | 3.0 | 3 | 4.4469 | 0.3467 |
| 3.3633 | 4.0 | 4 | 4.3454 | 0.3644 |
| 3.0949 | 5.0 | 5 | 4.2920 | 0.3733 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
d1f4f3e94ee904ac8c3e332932e00745
|
raileymontalan/distilbert-base-cased-finetuned-fake-news-detection
|
raileymontalan
|
distilbert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,347 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-finetuned-fake-news-detection
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0043
- F1: 0.9996
- Accuracy: 0.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| No log | 1.0 | 1684 | 0.0043 | 0.9993 | 0.9993 |
| No log | 2.0 | 3368 | 0.0043 | 0.9996 | 0.9996 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
271dd94e5a3c461e2bf5fcc5982cfff6
|
DOOGLAK/Article_250v1_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
|
bert
| 13 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['article250v1_wikigold_split']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,559 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_250v1_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article250v1_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2324
- Precision: 0.6699
- Recall: 0.6657
- F1: 0.6678
- Accuracy: 0.9256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 94 | 0.2546 | 0.5933 | 0.5539 | 0.5729 | 0.9127 |
| No log | 2.0 | 188 | 0.2337 | 0.6564 | 0.6629 | 0.6596 | 0.9242 |
| No log | 3.0 | 282 | 0.2324 | 0.6699 | 0.6657 | 0.6678 | 0.9256 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
13874f722b4be6b25a5e24eb247e1a57
|
cm-mueller/BACnet-Klassifizierung-Sanitaertechnik
|
cm-mueller
|
bert
| 14 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
mit
|
['de']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 4,287 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BACnet-Klassifizierung-Sanitaertechnik-bert-base-german-cased
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the [gart-labor](https://huggingface.co/gart-labor) "klassifizierung_sanitaer_v2" dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0039
- F1: [1. 1. 1.]
## Model description
This model makes it possible to classify the sanitary technology components described with the BACnet standard into different categories.
The model is based on a German-language data set.
## Intended uses & limitations
The model divides descriptive texts into the following sanitary engineering categories:
Other, pressure boosting system, softening system, lifting system, sanitary_general, waste water, drinking water heating system and water meter.
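A minimal classification sketch (not part of the original card; the German example text is invented for illustration, and the returned label names depend on the model's configuration):
```python
from transformers import pipeline

# Hypothetical usage sketch for the BACnet sanitary-engineering classifier.
classifier = pipeline(
    "text-classification",
    model="cm-mueller/BACnet-Klassifizierung-Sanitaertechnik",
)

# Invented BACnet-style description of a drinking water heating system.
print(classifier("Trinkwassererwärmung Speicherladepumpe Störmeldung"))
```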
## Training and evaluation data
The model is based on a German-language data set.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:----------:|
| 0.0507 | 1.0 | 1 | 0.1080 | [1. 1. 1.] |
| 0.0547 | 2.0 | 2 | 0.0589 | [1. 1. 1.] |
| 0.0407 | 3.0 | 3 | 0.0427 | [1. 1. 1.] |
| 0.0294 | 4.0 | 4 | 0.0465 | [1. 1. 1.] |
| 0.0284 | 5.0 | 5 | 0.0291 | [1. 1. 1.] |
| 0.0208 | 6.0 | 6 | 0.0232 | [1. 1. 1.] |
| 0.0171 | 7.0 | 7 | 0.0198 | [1. 1. 1.] |
| 0.0153 | 8.0 | 8 | 0.0170 | [1. 1. 1.] |
| 0.0134 | 9.0 | 9 | 0.0144 | [1. 1. 1.] |
| 0.0126 | 10.0 | 10 | 0.0124 | [1. 1. 1.] |
| 0.0108 | 11.0 | 11 | 0.0109 | [1. 1. 1.] |
| 0.0096 | 12.0 | 12 | 0.0098 | [1. 1. 1.] |
| 0.0084 | 13.0 | 13 | 0.0089 | [1. 1. 1.] |
| 0.0082 | 14.0 | 14 | 0.0083 | [1. 1. 1.] |
| 0.0071 | 15.0 | 15 | 0.0077 | [1. 1. 1.] |
| 0.0068 | 16.0 | 16 | 0.0073 | [1. 1. 1.] |
| 0.0064 | 17.0 | 17 | 0.0069 | [1. 1. 1.] |
| 0.0059 | 18.0 | 18 | 0.0065 | [1. 1. 1.] |
| 0.0053 | 19.0 | 19 | 0.0061 | [1. 1. 1.] |
| 0.0052 | 20.0 | 20 | 0.0058 | [1. 1. 1.] |
| 0.005 | 21.0 | 21 | 0.0056 | [1. 1. 1.] |
| 0.0047 | 22.0 | 22 | 0.0053 | [1. 1. 1.] |
| 0.0044 | 23.0 | 23 | 0.0051 | [1. 1. 1.] |
| 0.0042 | 24.0 | 24 | 0.0050 | [1. 1. 1.] |
| 0.0043 | 25.0 | 25 | 0.0048 | [1. 1. 1.] |
| 0.004 | 26.0 | 26 | 0.0047 | [1. 1. 1.] |
| 0.004 | 27.0 | 27 | 0.0045 | [1. 1. 1.] |
| 0.004 | 28.0 | 28 | 0.0044 | [1. 1. 1.] |
| 0.0037 | 29.0 | 29 | 0.0044 | [1. 1. 1.] |
| 0.0037 | 30.0 | 30 | 0.0043 | [1. 1. 1.] |
| 0.0037 | 31.0 | 31 | 0.0042 | [1. 1. 1.] |
| 0.0035 | 32.0 | 32 | 0.0042 | [1. 1. 1.] |
| 0.0036 | 33.0 | 33 | 0.0041 | [1. 1. 1.] |
| 0.0035 | 34.0 | 34 | 0.0041 | [1. 1. 1.] |
| 0.0037 | 35.0 | 35 | 0.0040 | [1. 1. 1.] |
| 0.0034 | 36.0 | 36 | 0.0040 | [1. 1. 1.] |
| 0.0033 | 37.0 | 37 | 0.0040 | [1. 1. 1.] |
| 0.0034 | 38.0 | 38 | 0.0040 | [1. 1. 1.] |
| 0.0034 | 39.0 | 39 | 0.0040 | [1. 1. 1.] |
| 0.0034 | 40.0 | 40 | 0.0039 | [1. 1. 1.] |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
1fe7c559d655e9c2246cc5a479516906
|
TransQuest/monotransquest-hter-en_de-it-nmt
|
TransQuest
|
xlm-roberta
| 8 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en-de']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['Quality Estimation', 'monotransquest', 'hter']
| false | true | true | 5,312 | false |
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages evaluated.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_de-it-nmt", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantฤ pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
5b14ce90b2c51b880f1b37716e13f78c
|
Akashpb13/Hausa_xlsr
|
Akashpb13
|
wav2vec2
| 12 | 8 |
transformers
| 1 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ha']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'generated_from_trainer', 'ha', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
| true | true | true | 2,353 | false |
# Akashpb13/Hausa_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m)
It achieves the following results on the evaluation set (which is 10 percent of the train dataset merged with the invalidated, reported, other, and dev datasets):
- Loss: 0.275118
- Wer: 0.329955
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
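As a rough illustration of the intended use, here is a minimal inference sketch (not part of the original card; it assumes the repository contains a standard XLS-R CTC checkpoint with its processor, and the audio path is a placeholder for a 16 kHz mono recording):
```python
from transformers import pipeline

# Sketch only: load the fine-tuned XLS-R checkpoint for Hausa speech recognition.
asr = pipeline("automatic-speech-recognition", model="Akashpb13/Hausa_xlsr")

# "hausa_sample.wav" is a placeholder; input should be 16 kHz mono audio.
print(asr("hausa_sample.wav")["text"])
```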
## Training and evaluation data
Training data -
Common voice Hausa train.tsv, dev.tsv, invalidated.tsv, reported.tsv and other.tsv
Only those data points were considered where upvotes were greater than downvotes, and duplicates were removed after concatenating all the datasets given in Common Voice 7.0.
## Training procedure
For creating the training dataset, all possible datasets were appended and a 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 500 | 5.175900 | 2.750914 | 1.000000 |
| 1000 | 1.028700 | 0.338649 | 0.497999 |
| 1500 | 0.332200 | 0.246896 | 0.402241 |
| 2000 | 0.227300 | 0.239640 | 0.395839 |
| 2500 | 0.175000 | 0.239577 | 0.373966 |
| 3000 | 0.140400 | 0.243272 | 0.356095 |
| 3500 | 0.119200 | 0.263761 | 0.365164 |
| 4000 | 0.099300 | 0.265954 | 0.353428 |
| 4500 | 0.084400 | 0.276367 | 0.349693 |
| 5000 | 0.073700 | 0.282631 | 0.343825 |
| 5500 | 0.068000 | 0.282344 | 0.341158 |
| 6000 | 0.064500 | 0.281591 | 0.342491 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/Hausa_xlsr --dataset mozilla-foundation/common_voice_8_0 --config ha --split test
```
|
8f941ccd77626ecff0315c2e68552b57
|
it5/mt5-small-question-answering
|
it5
|
mt5
| 11 | 5 |
transformers
| 0 |
text2text-generation
| true | true | true |
apache-2.0
|
['it']
|
['squad_it']
|
{'emissions': '17g"', 'source': 'Google Cloud Platform Carbon Footprint', 'training_type': 'fine-tuning', 'geographical_location': 'Eemshaven, Netherlands, Europe', 'hardware_used': '1 TPU v3-8 VM'}
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['italian', 'sequence-to-sequence', 'squad_it', 'text2text-question-answering', 'text2text-generation']
| true | true | true | 2,664 | false |
# mT5 Small for Question Answering โ๏ธ ๐ฎ๐น
This repository contains the checkpoint for the [mT5 Small](https://huggingface.co/google/mt5-small) model fine-tuned on extractive question answering on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
qa = pipeline("text2text-generation", model='it5/mt5-small-question-answering')
qa("In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45ยฐ. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale รจ riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietร di specie. Domanda: La foresta pluviale amazzonica รจ diventata per lo piรน una foresta interna intorno a quale evento globale?")
>>> [{"generated_text": "ultimo massimo glaciale"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/mt5-small-question-answering")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-small-question-answering")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
|
8281a5bfdc71065354ad156b7d140db8
|
gokuls/distilbert_sa_GLUE_Experiment_data_aug_stsb_96
|
gokuls
|
distilbert
| 17 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,894 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_data_aug_stsb_96
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7659
- Pearson: 0.1744
- Spearmanr: 0.1818
- Combined Score: 0.1781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.2123 | 1.0 | 1259 | 2.7659 | 0.1744 | 0.1818 | 0.1781 |
| 0.689 | 2.0 | 2518 | 2.9511 | 0.1794 | 0.1858 | 0.1826 |
| 0.5239 | 3.0 | 3777 | 2.9043 | 0.1731 | 0.1733 | 0.1732 |
| 0.4171 | 4.0 | 5036 | 2.9002 | 0.1794 | 0.1788 | 0.1791 |
| 0.3402 | 5.0 | 6295 | 2.8190 | 0.1899 | 0.1926 | 0.1912 |
| 0.2843 | 6.0 | 7554 | 2.8391 | 0.1948 | 0.2004 | 0.1976 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
2bb7dfb43dcefa386f24fc993a49df37
|
edraper88/distilbert-base-uncased-finetuned-imdb
|
edraper88
|
distilbert
| 16 | 5 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,318 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
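A minimal sketch of masked-token prediction with this checkpoint (not part of the original card; the example sentence is invented):
```python
from transformers import pipeline

# Hypothetical usage sketch for the IMDB-adapted masked language model.
unmasker = pipeline("fill-mask", model="edraper88/distilbert-base-uncased-finetuned-imdb")

# distilbert-base-uncased uses the [MASK] token.
for prediction in unmasker("This movie was absolutely [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```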
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
7792bf06ab6022764715b298d89c3441
|
EffyLi/bert-base-NER-finetuned-ner
|
EffyLi
|
bert
| 10 | 3 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['conll2003']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 906 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-NER-finetuned-ner
This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
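As a rough illustration of the intended use, here is a minimal token-classification sketch (not part of the original card; the example sentence is invented, and the entity labels are assumed to follow the CoNLL-2003 scheme of the base model):
```python
from transformers import pipeline

# Sketch only: named entity recognition with the fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="EffyLi/bert-base-NER-finetuned-ner",
    aggregation_strategy="simple",
)

# Invented example sentence for illustration.
print(ner("My name is Wolfgang and I live in Berlin."))
```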
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0
- Datasets 2.7.1
- Tokenizers 0.11.0
|
f3f9903e37187f9843025fb3c367b6e2
|
nlp-esg-scoring/bert-base-finetuned-esg-a4s-clean
|
nlp-esg-scoring
|
bert
| 8 | 2 |
transformers
| 0 |
fill-mask
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,909 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nlp-esg-scoring/bert-base-finetuned-esg-a4s-clean
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5224
- Validation Loss: 2.2196
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -824, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.5170 | 2.3060 | 0 |
| 2.5229 | 2.3220 | 1 |
| 2.5077 | 2.3155 | 2 |
| 2.5059 | 2.3151 | 3 |
| 2.5052 | 2.2596 | 4 |
| 2.5250 | 2.4044 | 5 |
| 2.5120 | 2.2901 | 6 |
| 2.5042 | 2.2847 | 7 |
| 2.4972 | 2.3168 | 8 |
| 2.5224 | 2.2196 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
39a2ee3d154e35006376ec0463a9269b
|
no3/azura-wd-1.3-beta3
|
no3
| null | 24 | 5 |
diffusers
| 0 | null | false | false | false |
mit
| null | null | null | 3 | 0 | 3 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,404 | false |
### azura from [vibrant venture](https://store.steampowered.com/app/1264520), on **waifu diffusion** via Dreambooth
#### model by no3
This is the **waifu diffusion** model fine-tuned on azura from [vibrant venture](https://store.steampowered.com/app/1264520), taught to **waifu diffusion** with Dreambooth.
It can be used by modifying the `instance_prompt`: **sks_azura**
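A minimal `diffusers` sketch built around that instance prompt (not part of the original card; the precision, device, and the extra prompt tags are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth fine-tune of waifu diffusion.
pipe = StableDiffusionPipeline.from_pretrained(
    "no3/azura-wd-1.3-beta3", torch_dtype=torch.float16
).to("cuda")

# Example prompt using the instance token; the extra tags are optional assumptions.
image = pipe("sks_azura, blue hair, blue hoodie, portrait").images[0]
image.save("azura.png")
```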
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
### Note
This model is based on **waifu diffusion**; keep that in mind if you want to use this model with [diffusers](https://github.com/huggingface/diffusers).
If you want to convert the diffusers weights to .ckpt for use in a webUI like [AUTOMATIC1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) or any UI that uses a .ckpt file, use this [script](https://gist.github.com/Christopher-Hayes/636ba25e0ae2e7020722d5386ac2571b). If you use this method, don't type **sks_azura**; just use a generic prompt like `a woman` or `a girl`. You can add `, blue hair` first, and if that does not help you can also add `, blue hoodie, blue pants, glasses, black eyes` for more consistent outputs. You can customize it as you wish; I tried with sks_azura and it gave me the same output no matter what the prompt was.
If you have issues or questions feel free to visit the Community Tab and start discussion about it.
Here are the images used for training this concept:






|
3b66c5294f324e34e84cec483eab38bf
|
aemili/distilbert-base-uncased-finetuned-cola
|
aemili
|
distilbert
| 92 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,570 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7578
- Matthews Correlation: 0.5317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5239 | 1.0 | 535 | 0.5219 | 0.4097 |
| 0.3483 | 2.0 | 1070 | 0.5775 | 0.4913 |
| 0.2296 | 3.0 | 1605 | 0.6440 | 0.4903 |
| 0.1734 | 4.0 | 2140 | 0.7578 | 0.5317 |
| 0.137 | 5.0 | 2675 | 0.8612 | 0.5192 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.7.1+cu110
- Datasets 2.4.0
- Tokenizers 0.12.1
|
194de05722ad27b05c72dbf99758deff
|
ying-tina/wav2vec2-base-timit-demo-colab
|
ying-tina
|
wav2vec2
| 12 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,061 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5127
- Wer: 0.3082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7645 | 2.01 | 500 | 2.5179 | 0.9999 |
| 1.1873 | 4.02 | 1000 | 0.5464 | 0.4798 |
| 0.46 | 6.02 | 1500 | 0.4625 | 0.4025 |
| 0.2869 | 8.03 | 2000 | 0.4252 | 0.3650 |
| 0.2213 | 10.04 | 2500 | 0.4340 | 0.3585 |
| 0.1905 | 12.05 | 3000 | 0.4310 | 0.3404 |
| 0.1545 | 14.06 | 3500 | 0.4547 | 0.3381 |
| 0.1206 | 16.06 | 4000 | 0.4902 | 0.3384 |
| 0.1116 | 18.07 | 4500 | 0.4767 | 0.3253 |
| 0.0925 | 20.08 | 5000 | 0.5248 | 0.3160 |
| 0.0897 | 22.09 | 5500 | 0.4960 | 0.3126 |
| 0.0687 | 24.1 | 6000 | 0.4876 | 0.3086 |
| 0.063 | 26.1 | 6500 | 0.4895 | 0.3065 |
| 0.0558 | 28.11 | 7000 | 0.5127 | 0.3082 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dcc88c9264c19f1bbd82e787608f458b
|
jonatasgrosman/exp_w2v2t_th_wav2vec2_s35
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['th']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'th']
| false | true | true | 458 | false |
# exp_w2v2t_th_wav2vec2_s35
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
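A usage sketch with the HuggingSound `SpeechRecognitionModel` API mentioned above (the audio paths below are placeholders for your own 16 kHz recordings):
```python
from huggingsound import SpeechRecognitionModel

# Sketch only: transcribe a batch of audio files with the fine-tuned model.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_th_wav2vec2_s35")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholder paths
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```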
|
0d233969bafeaa8b998c4e5e0f5748ff
|
sd-concepts-library/shiny-polyman
|
sd-concepts-library
| null | 10 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,174 | false |
### Shiny polyman on Stable Diffusion
This is the `<shiny-polyman>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
b8e1925ec5d53e140ec7118243f45d43
|
anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10
|
anas-awadalla
|
bert
| 16 | 7 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 999 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
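As a rough illustration of the intended use, here is a minimal extractive question-answering sketch (not part of the original card; the question/context pair is invented):
```python
from transformers import pipeline

# Sketch only: extractive QA with the few-shot fine-tuned checkpoint.
qa = pipeline(
    "question-answering",
    model="anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10",
)

# Invented question and context for illustration.
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```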
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
9fdda9fbd677effe278ad92c8d247e01
|
sd-dreambooth-library/duregar
|
sd-dreambooth-library
| null | 25 | 2 |
diffusers
| 1 | null | false | false | false |
mit
| null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,553 | false |
### Duregar on Stable Diffusion via Dreambooth
#### model by euler95
This is the Stable Diffusion model fine-tuned on the Duregar concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a painting of sks character**
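A minimal `diffusers` sketch using that instance prompt (not part of the original card; the precision and device settings are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth fine-tune and generate with the instance prompt.
pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/duregar", torch_dtype=torch.float16
).to("cuda")

image = pipe("a painting of sks character").images[0]
image.save("duregar.png")
```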
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:







|
5a536cbf2075dc1538847714545ed229
|
okho0653/distilbert-base-uncased-finetuned-sst-2-english-finetuned-cad-20pc
|
okho0653
|
distilbert
| 13 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,600 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst-2-english-finetuned-cad-20pc
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|
| No log | 1.0 | 7 | 0.0032 | 1.0 | 1.0 |
| No log | 2.0 | 14 | 0.0002 | 1.0 | 1.0 |
| No log | 3.0 | 21 | 0.0001 | 1.0 | 1.0 |
| No log | 4.0 | 28 | 0.0001 | 1.0 | 1.0 |
| No log | 5.0 | 35 | 0.0001 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
4b4a8d3692d3a02c076415da02570cb6
|
sahita/lang-VoxLingua107-ecapa
|
sahita
| null | 8 | 10 |
speechbrain
| 0 |
audio-classification
| true | false | false |
apache-2.0
|
['multilingual', 'en', 'mr']
|
['VoxLingua107']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio-classification', 'speechbrain', 'embeddings', 'Language', 'Identification', 'pytorch', 'ECAPA-TDNN', 'TDNN', 'VoxLingua107']
| false | true | true | 6,980 | false |
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition. However, it uses
more fully connected hidden layers after the embedding layer, and cross-entropy loss was used for training.
We observed that this improved the performance of extracted utterance embeddings for downstream tasks.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed.
The model can classify a speech utterance according to the language spoken.
It covers 2 different languages (English, Hindi).
## Intended uses & limitations
The model has two uses:
- use 'as is' for spoken language recognition
- use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data
The model is trained on automatically collected YouTube data. For more
information about the dataset, see [here](http://bark.phon.ioc.ee/voxlingua107/).
#### How to use
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
language_id = EncoderClassifier.from_hparams(source="sahita/lang-VoxLingua107-ecapa", savedir="tmp")
# Download Thai language sample from Omniglot and convert it to a suitable form
signal = language_id.load_audio("https://omniglot.com/soundfiles/udhr/udhr_th.mp3")
prediction = language_id.classify_batch(signal)
print(prediction)
# (tensor([[-2.8646e+01, -3.0346e+01, -2.0748e+01, -2.9562e+01, -2.2187e+01,
# -3.2668e+01, -3.6677e+01, -3.3573e+01, -3.2545e+01, -2.4365e+01,
# -2.4688e+01, -3.1171e+01, -2.7743e+01, -2.9918e+01, -2.4770e+01,
# -3.2250e+01, -2.4727e+01, -2.6087e+01, -2.1870e+01, -3.2821e+01,
# -2.2128e+01, -2.2822e+01, -3.0888e+01, -3.3564e+01, -2.9906e+01,
# -2.2392e+01, -2.5573e+01, -2.6443e+01, -3.2429e+01, -3.2652e+01,
# -3.0030e+01, -2.4607e+01, -2.2967e+01, -2.4396e+01, -2.8578e+01,
# -2.5153e+01, -2.8475e+01, -2.6409e+01, -2.5230e+01, -2.7957e+01,
# -2.6298e+01, -2.3609e+01, -2.5863e+01, -2.8225e+01, -2.7225e+01,
# -3.0486e+01, -2.1185e+01, -2.7938e+01, -3.3155e+01, -1.9076e+01,
# -2.9181e+01, -2.2160e+01, -1.8352e+01, -2.5866e+01, -3.3636e+01,
# -4.2016e+00, -3.1581e+01, -3.1894e+01, -2.7834e+01, -2.5429e+01,
# -3.2235e+01, -3.2280e+01, -2.8786e+01, -2.3366e+01, -2.6047e+01,
# -2.2075e+01, -2.3770e+01, -2.2518e+01, -2.8101e+01, -2.5745e+01,
# -2.6441e+01, -2.9822e+01, -2.7109e+01, -3.0225e+01, -2.4566e+01,
# -2.9268e+01, -2.7651e+01, -3.4221e+01, -2.9026e+01, -2.6009e+01,
# -3.1968e+01, -3.1747e+01, -2.8156e+01, -2.9025e+01, -2.7756e+01,
# -2.8052e+01, -2.9341e+01, -2.8806e+01, -2.1636e+01, -2.3992e+01,
# -2.3794e+01, -3.3743e+01, -2.8332e+01, -2.7465e+01, -1.5085e-02,
# -2.9094e+01, -2.1444e+01, -2.9780e+01, -3.6046e+01, -3.7401e+01,
# -3.0888e+01, -3.3172e+01, -1.8931e+01, -2.2679e+01, -3.0225e+01,
# -2.4995e+01, -2.1028e+01]]), tensor([-0.0151]), tensor([94]), ['th'])
# The scores in the prediction[0] tensor can be interpreted as log-likelihoods that
# the given utterance belongs to the given language (i.e., the larger the better)
# The linear-scale likelihood can be retrieved using the following:
print(prediction[1].exp())
# tensor([0.9850])
# The identified language ISO code is given in prediction[3]
print(prediction[3])
# ['th: Thai']
# Alternatively, use the utterance embedding extractor:
emb = language_id.encode_batch(signal)
print(emb.shape)
# torch.Size([1, 1, 256])
```
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode_batch* and *classify_batch*.
#### Limitations and bias
Since the model is trained on VoxLingua107, it has many limitations and biases, some of which are:
- Its accuracy on smaller languages is probably quite limited
- Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)
- Based on subjective experiments, it doesn't work well on speech with a foreign accent
- Probably it doesn't work well on children's speech and on persons with speech disorders
## Training data
The model is trained on [VoxLingua107](http://bark.phon.ioc.ee/voxlingua107/).
VoxLingua107 is a speech dataset for training spoken language identification models.
The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.
VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours.
The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.
## Training procedure
See the [SpeechBrain recipe](https://github.com/speechbrain/speechbrain/tree/voxlingua107/recipes/VoxLingua107/lang_id).
## Evaluation results
Error rate: 6.7% on the VoxLingua107 development dataset
#### Referencing SpeechBrain
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and Franรงois Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
### Referencing VoxLingua107
```bibtex
@inproceedings{valk2021slt,
title={{VoxLingua107}: a Dataset for Spoken Language Recognition},
author={J{\"o}rgen Valk and Tanel Alum{\"a}e},
booktitle={Proc. IEEE SLT Workshop},
year={2021},
}
```
#### About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain
|
46fe03fa2b2c2ac9a004e462093073d7
|
zigg-ai/unnecessaryinventions
|
zigg-ai
| null | 31 | 3 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 1,505 | false |
### unnecessaryinventions Dreambooth model trained by zigg-ai with the v1-5 base model
You can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
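A minimal `diffusers` sketch (not part of the original card; "sdcid" is the concept token mentioned below, and the rest of the prompt is invented):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "zigg-ai/unnecessaryinventions", torch_dtype=torch.float16
).to("cuda")

# "sdcid" is the concept token; the remainder of the prompt is an invented example.
image = pipe("sdcid as a solar-powered umbrella hat, product photo").images[0]
image.save("unnecessary_invention.png")
```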
Sample pictures of:
sdcid (use that on your prompt)

|
8975df6d61d73b2accfebe370c52550f
|
chandank/bart-base-finetuned-kaggglenews-baseline-final
|
chandank
|
bart
| 13 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,625 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-baseline-final
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6942
- Rouge1: 28.581
- Rouge2: 16.3417
- Rougel: 24.1277
- Rougelsum: 25.9797
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
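As a rough illustration of the intended use, here is a minimal summarization sketch (not part of the original card; the article text and generation lengths are invented placeholders):
```python
from transformers import pipeline

# Sketch only: summarize a news article with the fine-tuned BART checkpoint.
summarizer = pipeline(
    "summarization",
    model="chandank/bart-base-finetuned-kaggglenews-baseline-final",
)

# Invented placeholder article text.
article = (
    "The city council approved a new budget on Monday, allocating additional "
    "funds to public transit upgrades and road repairs across the district."
)
print(summarizer(article, max_length=20, min_length=5, do_sample=False)[0]["summary_text"])
```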
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.7514 | 27.911 | 15.7038 | 23.6466 | 25.2111 | 20.0 |
| 2.0585 | 2.0 | 990 | 1.6655 | 28.7581 | 16.4875 | 24.2669 | 26.1676 | 20.0 |
| 1.4173 | 3.0 | 1485 | 1.6942 | 28.581 | 16.3417 | 24.1277 | 25.9797 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
06a18778096cd135ea61d2339d49e508
|
marvind434/swin-tiny-patch4-window7-224-finetuned-eurosat
|
marvind434
|
swin
| 26 | 3 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,544 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3026
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
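As a rough illustration of the intended use, here is a minimal image-classification sketch (not part of the original card; the image path is a placeholder):
```python
from transformers import pipeline

# Sketch only: classify an image with the fine-tuned Swin checkpoint.
classifier = pipeline(
    "image-classification",
    model="marvind434/swin-tiny-patch4-window7-224-finetuned-eurosat",
)

# Placeholder path; any local image file or URL accepted by the pipeline works.
print(classifier("path/to/image.jpg"))
```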
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 1.0940 | 0.25 |
| No log | 2.0 | 2 | 0.9836 | 0.25 |
| No log | 3.0 | 3 | 0.7624 | 0.25 |
| No log | 4.0 | 4 | 0.6527 | 0.5 |
| No log | 5.0 | 5 | 0.5697 | 0.75 |
| No log | 6.0 | 6 | 0.5167 | 1.0 |
| No log | 7.0 | 7 | 0.4898 | 0.75 |
| No log | 8.0 | 8 | 0.4572 | 0.75 |
| No log | 9.0 | 9 | 0.4286 | 0.75 |
| 0.299 | 10.0 | 10 | 0.3976 | 0.75 |
| 0.299 | 11.0 | 11 | 0.3678 | 1.0 |
| 0.299 | 12.0 | 12 | 0.3531 | 1.0 |
| 0.299 | 13.0 | 13 | 0.3384 | 1.0 |
| 0.299 | 14.0 | 14 | 0.3264 | 1.0 |
| 0.299 | 15.0 | 15 | 0.3188 | 1.0 |
| 0.299 | 16.0 | 16 | 0.3114 | 1.0 |
| 0.299 | 17.0 | 17 | 0.3083 | 1.0 |
| 0.299 | 18.0 | 18 | 0.3071 | 1.0 |
| 0.299 | 19.0 | 19 | 0.3041 | 1.0 |
| 0.2051 | 20.0 | 20 | 0.3026 | 1.0 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
67c39e08ca23f89aa25d54c32cc0b4bf
|
ylh1013/fintune-ja-chatbot
|
ylh1013
|
gpt2
| 10 | 6 |
transformers
| 0 |
text-generation
| true | false | false |
mit
|
['finetuned_from']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 955 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fintune-ja-chatbot
This model is a fine-tuned version of [rinna/japanese-gpt2-medium](https://huggingface.co/rinna/japanese-gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Tokenizers 0.10.3
|
eccb97b8d85a43e502959b0086d99bca
|
gchhablani/fnet-large-finetuned-cola
|
gchhablani
|
fnet
| 51 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,399 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-cola
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6243
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6195 | 1.0 | 2138 | 0.6527 | 0.0 |
| 0.6168 | 2.0 | 4276 | 0.6259 | 0.0 |
| 0.616 | 3.0 | 6414 | 0.6243 | 0.0 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
7db46275685808723bae4fb1249982b7
|
jcblaise/bert-tagalog-base-uncased
|
jcblaise
|
bert
| 10 | 18 |
transformers
| 0 |
fill-mask
| true | false | true |
gpl-3.0
|
['tl']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['bert', 'tagalog', 'filipino']
| false | true | true | 1,644 | false |
**Deprecation Notice**
This model is deprecated. New Filipino Transformer models trained with a much larger corpora are available.
Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance.
---
# BERT Tagalog Base Uncased
Tagalog version of BERT trained on a large preprocessed text corpus scraped and sourced from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2020establishing,
title={Establishing Baselines for Text Classification in Low-Resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:2005.02068},
year={2020}
}
@article{cruz2019evaluating,
title={Evaluating Language Model Finetuning Techniques for Low-resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:1907.00409},
year={2019}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
2a7230371b6ae7fac6819e1d4c31f0d7
|
dminiotas05/camembert-base-finetuned-ft750_reg2
|
dminiotas05
|
camembert
| 10 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,418 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-finetuned-ft750_reg2
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6449
- Mse: 0.6449
- Mae: 0.6171
- R2: 0.3929
- Accuracy: 0.504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:--------:|
| 0.6283 | 1.0 | 750 | 0.6074 | 0.6074 | 0.6086 | 0.4282 | 0.4887 |
| 0.5007 | 2.0 | 1500 | 0.6449 | 0.6449 | 0.6171 | 0.3929 | 0.504 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
4b1e8f2922ee7942a87227de00f25d14
|
Alred/distilbert-base-uncased-finetuned-squad-ver4
|
Alred
|
distilbert
| 14 | 8 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,284 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-ver4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8147 | 1.0 | 554 | 1.6712 |
| 1.4844 | 2.0 | 1108 | 1.4681 |
| 1.0993 | 3.0 | 1662 | 1.4931 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
8c7c617d9613e55cdd93e9e3d47ce26b
|
chrisvinsen/wav2vec2-base-commonvoice-demo-colab-2
|
chrisvinsen
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,844 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-commonvoice-demo-colab-2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.7784 | 2.58 | 500 | 2.9962 | 1.0 |
| 3.0067 | 5.15 | 1000 | 3.0303 | 1.0 |
| 3.0098 | 7.73 | 1500 | 3.0305 | 1.0 |
| 3.0015 | 10.31 | 2000 | 3.0308 | 1.0 |
| 3.0062 | 12.89 | 2500 | 3.0310 | 1.0 |
| 3.0074 | 15.46 | 3000 | 3.0311 | 1.0 |
| 3.0085 | 18.04 | 3500 | 3.0313 | 1.0 |
| 3.0046 | 20.62 | 4000 | 3.0314 | 1.0 |
| 3.0065 | 23.2 | 4500 | nan | 1.0 |
| 0.0 | 25.77 | 5000 | nan | 1.0 |
| 0.0 | 28.35 | 5500 | nan | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
17e67804a3161c38681e59f460768825
|
henryscheible/mrpc_bert-base-uncased_144_v2
|
henryscheible
| null | 13 | 0 | null | 0 | null | true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,058 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mrpc_bert-base-uncased_144_v2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4933
- Accuracy: 0.8480
- F1: 0.8935
- Combined Score: 0.8708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
c616ab824fb2dbe28453f906099758ed
|
NbAiLab/xls-npsc-oh
|
NbAiLab
|
wav2vec2
| 21 | 9 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
cc0-1.0
| null |
['npsc']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'NbAiLab/NPSC', 'generated_from_trainer']
| true | true | true | 1,364 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-npsc-oh
This model is a fine-tuned version of [KBLab/wav2vec2-large-voxrex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) on the NBAILAB/NPSC - 48K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2106
- Wer: 0.8586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.1093 | 2.61 | 1000 | 0.2572 | 0.9293 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
|
be7720545b6ddbc600af8dd23e172759
|
rugo/distilbert-base-uncased-finetuned-imdb
|
rugo
|
distilbert
| 13 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7549 | 1.0 | 157 | 1.3539 |
| 1.398 | 2.0 | 314 | 1.1894 |
| 1.2894 | 3.0 | 471 | 1.1480 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
4e05d62836b1c859cdc613ab20aa00d1
|
abigailp/vaccinated
|
abigailp
|
bert
| 13 | 9 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,050 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vaccinated
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6907
- Accuracy: 0.9036
- F1: 0.9048
- Recall: 0.8636
- Precision: 0.95
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
4afed0aca0df37b5574824a54eed627b
|
sd-concepts-library/ddattender
|
sd-concepts-library
| null | 11 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,246 | false |
### ddattender on Stable Diffusion
This is the `<ddattender>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:






|
4ad19c259f9402054389269c63e856b2
|
bergum/xtremedistil-l6-h384-go-emotion
|
bergum
|
bert
| 8 | 687 |
transformers
| 6 |
text-classification
| true | false | false |
apache-2.0
| null |
['go_emotions']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| true | true | true | 1,271 | false |
# xtremedistil-l6-h384-go-emotion
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) on the
[go_emotions dataset](https://huggingface.co/datasets/go_emotions).
See notebook for how the model was trained and converted to ONNX format [](https://colab.research.google.com/github/jobergum/emotion/blob/main/TrainGoEmotions.ipynb)
This model is deployed to [aiserv.cloud](https://aiserv.cloud/) for live demo of the model.
See [https://github.com/jobergum/browser-ml-inference](https://github.com/jobergum/browser-ml-inference) for how to reproduce.
### Training hyperparameters
- batch size 128
- learning_rate=3e-05
- epocs 4
<pre>
Num examples = 211225
Num Epochs = 4
Instantaneous batch size per device = 128
Total train batch size (w. parallel, distributed & accumulation) = 128
Gradient Accumulation steps = 1
Total optimization steps = 6604
[6604/6604 53:23, Epoch 4/4]
Step Training Loss
500 0.263200
1000 0.156900
1500 0.152500
2000 0.145400
2500 0.140500
3000 0.135900
3500 0.132800
4000 0.129400
4500 0.127200
5000 0.125700
5500 0.124400
6000 0.124100
6500 0.123400
</pre>
|
787765551ebbc5bce03dd5202b305b97
|
emre/wav2vec2-xls-r-300m-gl-CV8
|
emre
|
wav2vec2
| 15 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['gl']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event']
| true | true | true | 1,483 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-gl-CV8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2151
- Wer: 0.2080
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9427 | 4.9 | 500 | 2.8801 | 1.0 |
| 2.1594 | 9.8 | 1000 | 0.4092 | 0.4001 |
| 0.7332 | 14.71 | 1500 | 0.2151 | 0.2080 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
f7fe6b1b98be88cddc228f482a51475e
|
spacy/en_core_web_md
|
spacy
| null | 28 | 131 |
spacy
| 0 |
token-classification
| false | false | false |
mit
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['spacy', 'token-classification']
| false | true | true | 2,745 | false |
### Details: https://spacy.io/models/en#en_core_web_md
English pipeline optimized for CPU. Components: tok2vec, tagger, parser, senter, ner, attribute_ruler, lemmatizer.
| Feature | Description |
| --- | --- |
| **Name** | `en_core_web_md` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Components** | `tok2vec`, `tagger`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Vectors** | 514157 keys, 20000 unique vectors (300 dimensions) |
| **Sources** | [OntoNotes 5](https://catalog.ldc.upenn.edu/LDC2013T19) (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston)<br />[ClearNLP Constituent-to-Dependency Conversion](https://github.com/clir/clearnlp-guidelines/blob/master/md/components/dependency_conversion.md) (Emory University)<br />[WordNet 3.0](https://wordnet.princeton.edu/) (Princeton University)<br />[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) |
| **License** | `MIT` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (113 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, `_SP`, ```` |
| **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` |
| **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.86 |
| `TOKEN_P` | 99.57 |
| `TOKEN_R` | 99.58 |
| `TOKEN_F` | 99.57 |
| `TAG_ACC` | 97.33 |
| `SENTS_P` | 92.21 |
| `SENTS_R` | 89.37 |
| `SENTS_F` | 90.77 |
| `DEP_UAS` | 92.05 |
| `DEP_LAS` | 90.23 |
| `ENTS_P` | 84.94 |
| `ENTS_R` | 85.49 |
| `ENTS_F` | 85.22 |
|
9a2ba84077277c1f09dbfda7cf0a2c04
|
google/t5-efficient-large-dl12
|
google
|
t5
| 12 | 7 |
transformers
| 0 |
text2text-generation
| true | true | true |
apache-2.0
|
['en']
|
['c4']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['deep-narrow']
| false | true | true | 6,258 | false |
# T5-Efficient-LARGE-DL12 (Deep-Narrow version)
T5-Efficient-LARGE-DL12 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the modelโs depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-large-dl12** - is of model type **Large** with the following variations:
- **dl** is **12**
It has **536.34** million parameters and thus requires *ca.* **2145.37 MB** of memory in full precision (*fp32*)
or **1072.69 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific, *el* or *dl* than both the number of encoder- and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow on of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
|
dda893cccdf7a8f0203c8164c249d8dc
|
jonatasgrosman/exp_w2v2t_pt_xlsr-53_s829
|
jonatasgrosman
|
wav2vec2
| 10 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['pt']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'pt']
| false | true | true | 461 | false |
# exp_w2v2t_pt_xlsr-53_s829
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
317c2f6919bd1e889c37c4034c48d973
|
UMCU/RobBERT_NegationDetection_32xTokenWindow
|
UMCU
|
roberta
| 9 | 7 |
transformers
| 1 |
token-classification
| true | false | false |
mit
|
['nl']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,021 | false |
# MedRoBERTa.nl finetuned for negation
## Description
This model is a finetuned RoBERTa-based model called RobBERT, this model is pre-trained on the Dutch section of OSCAR. All code used for the creation of RobBERT can be found here https://github.com/iPieter/RobBERT. The publication associated with the negation detection task can be found at https://arxiv.org/abs/2209.00470. The code for finetuning the model can be found at https://github.com/umcu/negation-detection.
## Intended use
The model is finetuned for negation detection on Dutch clinical text. Since it is a domain-specific model trained on medical data, it is meant to be used on medical NLP tasks for Dutch. This particular model is trained on a 32-max token windows surrounding the concept-to-be negated. Note that we also trained a biLSTM which can be incorporated in [MedCAT](https://github.com/CogStack/MedCAT).
## Minimal example
```python
tokenizer = AutoTokenizer\
.from_pretrained("UMCU/MedRoBERTa.nl_NegationDetection")
model = AutoModelForTokenClassification\
.from_pretrained("UMCU/MedRoBERTa.nl_NegationDetection")
some_text = "De patient was niet aanspreekbaar en hij zag er grauw uit. \
Hij heeft de inspanningstest echter goed doorstaan."
inputs = tokenizer(some_text, return_tensors='pt')
output = model.forward(inputs)
probas = torch.nn.functional.softmax(output.logits[0]).detach().numpy()
# koppel aan tokens
input_tokens = tokenizer.convert_ids_to_tokens(inputs['input_ids'][0])
target_map = {0: 'B-Negated', 1:'B-NotNegated',2:'I-Negated',3:'I-NotNegated'}
results = [{'token': input_tokens[idx],
'proba_negated': proba_arr[0]+proba_arr[2],
'proba_not_negated': proba_arr[1]+proba_arr[3]
}
for idx,proba_arr in enumerate(probas)]
```
It is perhaps good to note that we assume the [Inside-Outside-Beginning](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) format.
## Data
The pre-trained model was trained the Dutch section of OSCAR (about 39GB), and is described here: http://dx.doi.org/10.18653/v1/2020.findings-emnlp.292.
## Authors
RobBERT: Pieter Delobelle, Thomas Winters, Bettina Berendt,
Finetuning: Bram van Es, Sebastiaan Arends.
## Contact
If you are having problems with this model please add an issue on our git: https://github.com/umcu/negation-detection/issues
## Usage
If you use the model in your work please refer either to
https://doi.org/10.5281/zenodo.6980076 or https://doi.org/10.48550/arXiv.2209.00470
## References
Paper: Pieter Delobelle, Thomas Winters, Bettina Berendt (2020), RobBERT: a Dutch RoBERTa-based Language Model, Findings of the Association for Computational Linguistics: EMNLP 2020
Paper: Bram van Es, Leon C. Reteig, Sander C. Tan, Marijn Schraagen, Myrthe M. Hemker, Sebastiaan R.S. Arends, Miguel A.R. Rios, Saskia Haitjema (2022): Negation detection in Dutch clinical texts: an evaluation of rule-based and machine learning methods, Arxiv
|
ba4e283e9a23588ba2b82c351a702291
|
jonatasgrosman/exp_w2v2t_es_vp-nl_s878
|
jonatasgrosman
|
wav2vec2
| 10 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['es']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'es']
| false | true | true | 469 | false |
# exp_w2v2t_es_vp-nl_s878
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
26e310f92e34852fd274099bac2c74d6
|
MatFil99/bert-nlp-project-ft-news-ds-imdb
|
MatFil99
|
bert
| 10 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,814 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-nlp-project-ft-news-ds-imdb
This model is a fine-tuned version of [jestemleon/bert-nlp-project-news](https://huggingface.co/jestemleon/bert-nlp-project-news) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2678
- Accuracy: 0.944
- F1: 0.9433
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2722 | 0.38 | 750 | 0.1888 | 0.9283 | 0.9262 |
| 0.2133 | 0.75 | 1500 | 0.1709 | 0.939 | 0.9363 |
| 0.1752 | 1.12 | 2250 | 0.2139 | 0.9395 | 0.9397 |
| 0.1234 | 1.5 | 3000 | 0.2063 | 0.944 | 0.9428 |
| 0.117 | 1.88 | 3750 | 0.2787 | 0.9327 | 0.9336 |
| 0.0766 | 2.25 | 4500 | 0.2711 | 0.9417 | 0.9412 |
| 0.0603 | 2.62 | 5250 | 0.2659 | 0.9423 | 0.9406 |
| 0.0563 | 3.0 | 6000 | 0.2678 | 0.944 | 0.9433 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
88d4e2d919e845bcbf2d571710b8cdea
|
domischwimmbeck/bert-base-german-cased-fine-tuned-ner
|
domischwimmbeck
|
bert
| 16 | 19 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['germa_ner']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,552 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-fine-tuned-ner
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the germa_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0966
- Precision: 0.8089
- Recall: 0.8728
- F1: 0.8397
- Accuracy: 0.9749
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.159 | 1.0 | 737 | 0.0922 | 0.7472 | 0.8461 | 0.7936 | 0.9703 |
| 0.0714 | 2.0 | 1474 | 0.0916 | 0.7886 | 0.8713 | 0.8279 | 0.9731 |
| 0.0319 | 3.0 | 2211 | 0.0966 | 0.8089 | 0.8728 | 0.8397 | 0.9749 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
10ba87925e4c3236e9063f49fa8b256f
|
Helsinki-NLP/opus-mt-es-ber
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 778 | false |
### opus-mt-es-ber
* source languages: es
* target languages: ber
* OPUS readme: [es-ber](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ber/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ber/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ber/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ber/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.ber | 21.8 | 0.444 |
|
b958aa6b51e7413df5216964f0d0b142
|
shibing624/bert4ner-base-uncased
|
shibing624
|
bert
| 8 | 8 |
transformers
| 1 |
token-classification
| true | false | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['bert', 'pytorch', 'en', 'ner']
| false | true | true | 3,782 | false |
# BERT for English Named Entity Recognition(bert4ner) Model
่ฑๆๅฎไฝ่ฏๅซๆจกๅ
`bert4ner-base-uncased` evaluate CoNLL-2003 test data๏ผ
The overall performance of BERT on CoNLL-2003 **test**:
| | Accuracy | Recall | F1 |
| ------------ | ------------------ | ------------------ | ------------------ |
| BertSoftmax | 0.8956 | 0.9132 | 0.9043 |
ๅจCoNLL-2003็ๆต่ฏ้ไธ่พพๅฐๆฅ่ฟSOTAๆฐดๅนณใ
BertSoftmax็็ฝ็ป็ปๆ(ๅ็BERT)ใ
ๆฌ้กน็ฎๅผๆบๅจๅฎไฝ่ฏๅซ้กน็ฎ๏ผ[nerpy](https://github.com/shibing624/nerpy)๏ผๅฏๆฏๆbert4nerๆจกๅ๏ผ้่ฟๅฆไธๅฝไปค่ฐ็จ๏ผ
#### ่ฑๆๅฎไฝ่ฏๅซ๏ผ
```shell
>>> from nerpy import NERModel
>>> model = NERModel("bert", "shibing624/bert4ner-base-uncased")
>>> predictions, raw_outputs, entities = model.predict(["AL-AIN, United Arab Emirates 1996-12-06"], split_on_space=True)
entities: [('AL-AIN,', 'LOC'), ('United Arab Emirates', 'LOC')]
```
ๆจกๅๆไปถ็ปๆ๏ผ
```
bert4ner-base-uncased
โโโ config.json
โโโ model_args.json
โโโ pytorch_model.bin
โโโ special_tokens_map.json
โโโ tokenizer_config.json
โโโ vocab.txt
```
## Usage (HuggingFace Transformers)
Without [nerpy](https://github.com/shibing624/nerpy), you can use the model like this:
First, you pass your input through the transformer model, then you have to apply the bio tag to get the entity words.
Install package:
```
pip install transformers seqeval
```
```python
import os
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification
from seqeval.metrics.sequence_labeling import get_entities
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("shibing624/bert4ner-base-uncased")
model = AutoModelForTokenClassification.from_pretrained("shibing624/bert4ner-base-uncased")
label_list = ["E-ORG", "E-LOC", "S-MISC", "I-MISC", "S-PER", "E-PER", "B-MISC", "O", "S-LOC",
"E-MISC", "B-ORG", "S-ORG", "I-ORG", "B-LOC", "I-LOC", "B-PER", "I-PER"]
sentence = "AL-AIN, United Arab Emirates 1996-12-06"
def get_entity(sentence):
tokens = tokenizer.tokenize(sentence)
inputs = tokenizer.encode(sentence, return_tensors="pt")
with torch.no_grad():
outputs = model(inputs).logits
predictions = torch.argmax(outputs, dim=2)
word_tags = [(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].numpy()[1:-1])]
print(sentence)
print(word_tags)
pred_labels = [i[1] for i in word_tags]
entities = []
line_entities = get_entities(pred_labels)
for i in line_entities:
word = tokens[i[1]: i[2] + 1]
entity_type = i[0]
entities.append((word, entity_type))
print("Sentence entity:")
print(entities)
get_entity(sentence)
```
### ๆฐๆฎ้
#### ๅฎไฝ่ฏๅซๆฐๆฎ้
| ๆฐๆฎ้ | ่ฏญๆ | ไธ่ฝฝ้พๆฅ | ๆไปถๅคงๅฐ |
| :------- | :--------- | :---------: | :---------: |
| **`CNERไธญๆๅฎไฝ่ฏๅซๆฐๆฎ้`** | CNER(12ไธๅญ) | [CNER github](https://github.com/shibing624/nerpy/tree/main/examples/data/cner)| 1.1MB |
| **`PEOPLEไธญๆๅฎไฝ่ฏๅซๆฐๆฎ้`** | ไบบๆฐๆฅๆฅๆฐๆฎ้๏ผ200ไธๅญ๏ผ | [PEOPLE github](https://github.com/shibing624/nerpy/tree/main/examples/data/people)| 12.8MB |
| **`CoNLL03่ฑๆๅฎไฝ่ฏๅซๆฐๆฎ้`** | CoNLL-2003ๆฐๆฎ้๏ผ22ไธๅญ๏ผ | [CoNLL03 github](https://github.com/shibing624/nerpy/tree/main/examples/data/conll03)| 1.7MB |
### input format
Input format (prefer BIOES tag scheme), with each character its label for one line. Sentences are splited with a null line.
```text
EU S-ORG
rejects O
German S-MISC
call O
to O
boycott O
British S-MISC
lamb O
. O
Peter B-PER
Blackburn E-PER
```
ๅฆๆ้่ฆ่ฎญ็ปbert4ner๏ผ่ฏทๅ่[https://github.com/shibing624/nerpy/tree/main/examples](https://github.com/shibing624/nerpy/tree/main/examples)
## Citation
```latex
@software{nerpy,
author = {Xu Ming},
title = {nerpy: Named Entity Recognition toolkit},
year = {2022},
url = {https://github.com/shibing624/nerpy},
}
```
|
06c610a3b26cc65d95c3fd908e9af624
|
nvidia/tts_hifigan
|
nvidia
| null | 3 | 502 |
nemo
| 6 |
text-to-speech
| true | false | false |
cc-by-4.0
|
['en']
|
['ljspeech']
| null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
['text-to-speech', 'speech', 'audio', 'Vocoder', 'GAN', 'pytorch', 'NeMo', 'Riva']
| false | true | true | 4,300 | false |
# NVIDIA Hifigan Vocoder (en-US)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
| [](#deployment-with-nvidia-riva) |
HiFiGAN [1] is a generative adversarial network (GAN) model that generates audio from mel spectrograms. The generator uses transposed convolutions to upsample mel spectrograms to audio.
## Usage
The model is available for use in the NeMo toolkit [2] and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
### Automatically instantiate the model
NOTE: In order to generate audio, you also need a spectrogram generator from NeMo. This example uses the FastPitch model.
```python
# Load FastPitch
from nemo.collections.tts.models import FastPitchModel
spec_generator = FastPitchModel.from_pretrained("nvidia/tts_en_fastpitch")
# Load vocoder
from nemo.collections.tts.models import HifiGanModel
model = HifiGanModel.from_pretrained(model_name="nvidia/tts_hifigan")
```
### Generate audio
```python
import soundfile as sf
parsed = spec_generator.parse("You can type your sentence here to get nemo to produce speech.")
spectrogram = spec_generator.generate_spectrogram(tokens=parsed)
audio = model.convert_spectrogram_to_audio(spec=spectrogram)
```
### Save the generated audio file
```python
# Save the audio to disk in a file called speech.wav
sf.write("speech.wav", audio.to('cpu').numpy(), 22050)
```
### Input
This model accepts batches of mel spectrograms.
### Output
This model outputs audio at 22050Hz.
## Model Architecture
HiFi-GAN [1] consists of one generator and two discriminators: multi-scale and multi-period discriminators. The generator and discriminators are trained adversarially, along with two additional losses for
improving training stability and model performance.
## Training
The NeMo toolkit [3] was used for training the models for several epochs. These model are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/tts/hifigan.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/tts/conf/hifigan/hifigan.yaml).
### Datasets
This model is trained on LJSpeech sampled at 22050Hz, and has been tested on generating female English voices with an American accent.
## Performance
No performance information is available at this time.
## Limitations
If the spectrogram generator model (example FastPitch) is trained/finetuned on new speaker's data it is recommended to finetune HiFi-GAN also. HiFi-GAN shows improvement using synthesized mel spectrograms, so the first step is to generate mel spectrograms with our finetuned FastPitch model to use as input to finetune HiFiGAN.
## Deployment with NVIDIA Riva
For the best real-time accuracy, latency, and throughput, deploy the model with [NVIDIA Riva](https://developer.nvidia.com/riva), an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and Enterprise-grade support
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
- [1] [HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis](https://arxiv.org/abs/2010.05646)
- [2] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
|
0fc9eb00051c7d49081da07873c6336d
|
sentence-transformers/bert-large-nli-stsb-mean-tokens
|
sentence-transformers
|
bert
| 13 | 3,044 |
sentence-transformers
| 1 |
sentence-similarity
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
| false | true | true | 3,831 | false |
**โ ๏ธ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/bert-large-nli-stsb-mean-tokens
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/bert-large-nli-stsb-mean-tokens')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/bert-large-nli-stsb-mean-tokens')
model = AutoModel.from_pretrained('sentence-transformers/bert-large-nli-stsb-mean-tokens')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/bert-large-nli-stsb-mean-tokens)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
91be473baee25bcc7ada442c67d95b5e
|
anas-awadalla/bart-base-few-shot-k-512-finetuned-squad-seed-2
|
anas-awadalla
|
bart
| 16 | 3 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 988 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-few-shot-k-512-finetuned-squad-seed-2
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
bdf4211486bdfd96b453a0d75c14872e
|
quincyqiang/dashdash-wonderland-heywhale
|
quincyqiang
| null | 17 | 11 |
diffusers
| 0 |
text-to-image
| true | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
| false | true | true | 830 | false |
# DreamBooth model for the dashdash concept trained by quincyqiang.
This is a Stable Diffusion model fine-tuned on the dashdash concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of dashdash wonderland**
This model was created as part of the DreamBooth Hackathon ๐ฅ. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `wonderland` images for the wildcard theme,
for the Hugging Face DreamBooth Hackathon, from the HF CN Community,
corporated with the HeyWhale.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('quincyqiang/dashdash-wonderland-heywhale')
image = pipeline().images[0]
image
```
|
88b24ed9849d8b746b2c36f73cba3415
|
haroonrahimi/wav2vec2-large-xls-r-300m-pu-colab
|
haroonrahimi
|
wav2vec2
| 9 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,100 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-pu-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
00b8196916cef077f0d5b6de0aaaa856
|
vumichien/AnimeGANv2_Hayao
|
vumichien
| null | 3 | 0 | null | 0 | null | false | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AnimeGanv2']
| false | true | true | 678 | false |
## Model Description
Transforming photos of real-world scenes into anime style images is a meaningful and challenging task in terms of computer vision and artistic style transfer.
AnimeGANv2_Haya Made by Asher Chan.
The official code in [here](https://github.com/TachibanaYoshino/AnimeGANv2)
## License
This repo is made freely available to academic and
non-academic entities for non-commercial purposes such
as academic research, teaching, scientific publications.
Permission is granted to use the AnimeGAN given
that you agree to my license terms. Regarding the
request for commercial use, please contact us via
email to help you obtain the authorization letter.
|
c04bc76a20f131f71addd446d49342a0
|
BakhtUllah123/xls-r-ur-large
|
BakhtUllah123
|
wav2vec2
| 17 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,773 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-ur-large
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_8_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8056
- Wer: 0.4716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.5282 | 3.25 | 1000 | 3.0650 | 0.9989 |
| 1.7351 | 6.49 | 2000 | 0.8798 | 0.6284 |
| 0.7662 | 9.74 | 3000 | 0.7720 | 0.5399 |
| 0.5675 | 12.99 | 4000 | 0.7661 | 0.5229 |
| 0.4591 | 16.23 | 5000 | 0.7849 | 0.5041 |
| 0.3881 | 19.48 | 6000 | 0.8065 | 0.4893 |
| 0.3522 | 22.73 | 7000 | 0.7915 | 0.4804 |
| 0.3127 | 25.97 | 8000 | 0.8119 | 0.4804 |
| 0.2932 | 29.22 | 9000 | 0.8056 | 0.4716 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
d32f642fafe6bef569630a3f8e7e5fd6
|
jamesesguerra/distilbart-cnn-12-6-finetuned-1.3.1
|
jamesesguerra
|
bart
| 14 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,478 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-1.3.1
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7396
- Rouge1: 50.4939
- Rouge2: 23.7745
- Rougel: 35.3779
- Rougelsum: 45.8578
## Model description
More information needed
## Intended uses & limitations
More information needed
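Pending more details from the author, here is a rough usage sketch (not part of the original card) that drives the checkpoint through the standard Transformers summarization pipeline; the input text and the length limits are placeholders.
```python
from transformers import pipeline

# Minimal sketch: summarize an article with the fine-tuned distilbart checkpoint.
summarizer = pipeline("summarization", model="jamesesguerra/distilbart-cnn-12-6-finetuned-1.3.1")
article = "..."  # placeholder: any English article text
print(summarizer(article, max_length=142, min_length=56, do_sample=False)[0]["summary_text"])
```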
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.0871 | 1.0 | 982 | 1.8224 | 49.5128 | 23.1207 | 34.3412 | 44.7552 |
| 1.5334 | 2.0 | 1964 | 1.7396 | 50.4939 | 23.7745 | 35.3779 | 45.8578 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
5ea2a23c99f76c48bdedc3bade30a396
|
izzy-lazerson/wav2vec2-base-timit-demo-colab
|
izzy-lazerson
|
wav2vec2
| 12 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,641 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4545
- Wer: 0.3450
## Model description
More information needed
## Intended uses & limitations
More information needed
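Pending more details from the author, here is an illustrative sketch (not part of the original card) that loads the processor and CTC model directly instead of using the pipeline helper; the audio file is a placeholder and is assumed to be 16 kHz mono.
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Minimal sketch: greedy CTC decoding of a single 16 kHz utterance.
processor = Wav2Vec2Processor.from_pretrained("izzy-lazerson/wav2vec2-base-timit-demo-colab")
model = Wav2Vec2ForCTC.from_pretrained("izzy-lazerson/wav2vec2-base-timit-demo-colab")

speech, _ = sf.read("utterance.wav")  # placeholder file, expected to be 16 kHz mono
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```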
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3801 | 4.0 | 500 | 1.1501 | 0.8820 |
| 0.561 | 8.0 | 1000 | 0.4583 | 0.4211 |
| 0.2198 | 12.0 | 1500 | 0.4467 | 0.3997 |
| 0.1255 | 16.0 | 2000 | 0.4390 | 0.3677 |
| 0.0862 | 20.0 | 2500 | 0.4934 | 0.3603 |
| 0.0617 | 24.0 | 3000 | 0.4641 | 0.3549 |
| 0.0465 | 28.0 | 3500 | 0.4545 | 0.3450 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
12d8374d9670a9db613927de7430cbda
|
frieza/ddpm-butterflies-128
|
frieza
| null | 13 | 0 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['huggan/few-shot-grumpy-cat']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,217 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [๐ค Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/few-shot-grumpy-cat` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal example, assuming the standard diffusers DDPMPipeline API:
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("frieza/ddpm-butterflies-128")
image = pipeline().images[0]  # generate one unconditional sample
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
๐ [TensorBoard logs](https://huggingface.co/frieza/ddpm-butterflies-128/tensorboard?#scalars)
|
4c99149dde8c4879cb45cd3f88ebef1e
|
ali2066/finetuned_token_itr0_0.0002_essays_16_02_2022-21_04_02
|
ali2066
|
distilbert
| 13 | 10 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,801 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_0.0002_essays_16_02_2022-21_04_02
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2158
- Precision: 0.5814
- Recall: 0.7073
- F1: 0.6382
- Accuracy: 0.9248
## Model description
More information needed
## Intended uses & limitations
More information needed
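Pending more details from the author, here is a rough usage sketch (not part of the original card) using the Transformers token-classification pipeline; the example sentence is a placeholder and the entity labels depend on the (unspecified) training scheme.
```python
from transformers import pipeline

# Minimal sketch: run token classification with grouped entity spans.
tagger = pipeline(
    "token-classification",
    model="ali2066/finetuned_token_itr0_0.0002_essays_16_02_2022-21_04_02",
    aggregation_strategy="simple",
)
print(tagger("This is a placeholder sentence for the tagger."))
```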
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.3920 | 0.4392 | 0.6069 | 0.5096 | 0.8593 |
| No log | 2.0 | 22 | 0.3304 | 0.4282 | 0.6260 | 0.5085 | 0.8672 |
| No log | 3.0 | 33 | 0.3361 | 0.4840 | 0.6336 | 0.5488 | 0.8685 |
| No log | 4.0 | 44 | 0.3258 | 0.5163 | 0.6641 | 0.5810 | 0.8722 |
| No log | 5.0 | 55 | 0.3472 | 0.5192 | 0.6718 | 0.5857 | 0.8743 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
be416d65bee285769883d3460d13abcf
|
kiddothe2b/hierarchical-transformer-EC2-mini-1024
|
kiddothe2b
|
hierarchical-transformer
| 12 | 0 |
transformers
| 0 |
fill-mask
| true | false | false |
cc-by-sa-4.0
|
['en']
|
['wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['long-documents']
| true | true | true | 4,283 | false |
# Hierarchical Attention Transformer (HAT) / hierarchical-transformer-EC2-mini-1024
## Model description
This is a Hierarchical Attention Transformer (HAT) model as presented in [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification (Chalkidis et al., 2022)](https://arxiv.org/abs/2210.05529).
The model has been warm-started by re-using the weights of a miniature BERT model (Turc et al., 2019) and further pre-trained for MLM following the paradigm of Longformer released by Beltagy et al. (2020). It supports sequences of up to 1,024 tokens.
HAT uses hierarchical attention, which is a combination of segment-wise and cross-segment attention operations. You can think of segments as paragraphs or sentences.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?other=hierarchical-transformer) to look for other versions of HAT or versions fine-tuned on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole document to make decisions, such as document classification, sequential sentence classification, or question answering.
## How to use
You can use this model directly for masked language modeling:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hierarchical-transformer-EC2-mini-1024", trust_remote_code=True)
mlm_model = AutoModelForMaskedLM.from_pretrained("kiddothe2b/hierarchical-transformer-EC2-mini-1024", trust_remote_code=True)
```
You can also fine-tune it for SequenceClassification, SequentialSentenceClassification, and MultipleChoice down-stream tasks:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hierarchical-transformer-EC2-mini-1024", trust_remote_code=True)
doc_classifier = AutoModelForSequenceClassification.from_pretrained("kiddothe2b/hierarchical-transformer-EC2-mini-1024", trust_remote_code=True)
```
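As an illustrative follow-on sketch (not from the original card), a long document can then be tokenized up to the 1,024-token limit and passed through the classifier head; padding to the maximum length and the default two-label head are assumptions made only for this example.
```python
import torch

# Hypothetical end-to-end pass; the document string is a placeholder.
inputs = tokenizer(
    "a very long document ...",
    truncation=True,
    padding="max_length",
    max_length=1024,
    return_tensors="pt",
)
with torch.no_grad():
    logits = doc_classifier(**inputs).logits
print(logits.shape)
```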
## Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions.
## Training procedure
### Training and evaluation data
The model has been warm-started from the [google/bert_uncased_L-6_H-256_A-4](https://huggingface.co/google/bert_uncased_L-6_H-256_A-4) checkpoint and further pre-trained for an additional 50k steps on English [Wikipedia](https://huggingface.co/datasets/wikipedia).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: tpu
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 50000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.3798 | 0.2 | 10000 | 2.2014 |
| 2.3267 | 0.4 | 20000 | 2.1535 |
| 2.2976 | 0.6 | 30000 | 2.1234 |
| 2.2649 | 0.8 | 40000 | 2.1010 |
| 2.254 | 1.14 | 50000 | 2.0870 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
## Citing
If you use HAT in your research, please cite:
[An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/2210.05529). Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, and Desmond Elliott. 2022. arXiv:2210.05529 (Preprint).
```
@misc{chalkidis-etal-2022-hat,
url = {https://arxiv.org/abs/2210.05529},
author = {Chalkidis, Ilias and Dai, Xiang and Fergadiotis, Manos and Malakasiotis, Prodromos and Elliott, Desmond},
title = {An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification},
publisher = {arXiv},
year = {2022},
}
```
|
e7a10dc825a41b8240cd1cabb25d577d
|
jonatasgrosman/exp_w2v2t_it_unispeech-ml_s213
|
jonatasgrosman
|
unispeech
| 10 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['it']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'it']
| false | true | true | 500 | false |
# exp_w2v2t_it_unispeech-ml_s213
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
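As a brief usage sketch (not part of the original card), the checkpoint can be loaded through HuggingSound's SpeechRecognitionModel; the audio paths are placeholders and are assumed to be 16 kHz.
```python
from huggingsound import SpeechRecognitionModel

# Minimal sketch: batch transcription of 16 kHz Italian audio files.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_it_unispeech-ml_s213")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholders
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```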
|
601c38dd027de1aa03e99cc5f8b2d15c
|
tensorspeech/tts-tacotron2-kss-ko
|
tensorspeech
| null | 5 | 0 |
tensorflowtts
| 3 |
text-to-speech
| false | false | false |
apache-2.0
|
['ko']
|
['kss']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['tensorflowtts', 'audio', 'text-to-speech', 'text-to-mel']
| false | true | true | 2,660 | false |
# Tacotron 2 with Guided Attention trained on KSS (Korean)
This repository provides a pretrained [Tacotron2](https://arxiv.org/abs/1712.05884) model trained with [Guided Attention](https://arxiv.org/abs/1710.08969) on the KSS dataset (Korean). For details of the model, we encourage you to read more about
[TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS).
## Install TensorFlowTTS
First of all, please install TensorFlowTTS with the following command:
```
pip install TensorFlowTTS
```
### Converting your Text to Mel Spectrogram
```python
import numpy as np
import soundfile as sf
import yaml
import tensorflow as tf
from tensorflow_tts.inference import AutoProcessor
from tensorflow_tts.inference import TFAutoModel
processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-kss-ko")
tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-kss-ko")
text = "์ ์ ์ฐ๋ฆฌ์ ์ํ ๋ฌธ์ ์๋ ๊ด์ฌ์ด ์๋ค. ์ ์ ๋ค๋ง ๊ฒฝํ์ ์ผ๋ก ํตํฉํ ๋ฟ์ด๋ค."
input_ids = processor.text_to_sequence(text)
decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference(
input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32),
speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
)
```
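The snippet above stops at the mel spectrogram; to obtain audible speech you still need a vocoder. Below is a minimal follow-on sketch assuming a KSS-compatible TensorFlowTTS MB-MelGAN checkpoint; the checkpoint name `tensorspeech/tts-mb_melgan-kss-ko` and the 22.05 kHz sample rate are assumptions, not details stated in this card.
```python
# Assumed vocoder checkpoint; substitute whichever KSS-compatible vocoder you actually use.
mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-kss-ko")
audio = mb_melgan.inference(mel_outputs)[0, :, 0].numpy()
sf.write("./audio.wav", audio, 22050, "PCM_16")  # KSS audio is commonly 22.05 kHz
```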
#### Referencing Tacotron 2
```
@article{DBLP:journals/corr/abs-1712-05884,
author = {Jonathan Shen and
Ruoming Pang and
Ron J. Weiss and
Mike Schuster and
Navdeep Jaitly and
Zongheng Yang and
Zhifeng Chen and
Yu Zhang and
Yuxuan Wang and
R. J. Skerry{-}Ryan and
Rif A. Saurous and
Yannis Agiomyrgiannakis and
Yonghui Wu},
title = {Natural {TTS} Synthesis by Conditioning WaveNet on Mel Spectrogram
Predictions},
journal = {CoRR},
volume = {abs/1712.05884},
year = {2017},
url = {http://arxiv.org/abs/1712.05884},
archivePrefix = {arXiv},
eprint = {1712.05884},
timestamp = {Thu, 28 Nov 2019 08:59:52 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1712-05884.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
#### Referencing TensorFlowTTS
```
@misc{TFTTS,
author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata,
Trinh Le and Yunchao He},
title = {TensorflowTTS},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}},
}
```
|
7a803c440b434f3e1ebf8c5e7bf8dc28
|
FredZhang7/google-safesearch-mini-v2
|
FredZhang7
| null | 6 | 85 |
timm
| 3 |
image-classification
| true | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
['safety-checker', 'explicit-filter']
| false | true | true | 3,188 | false |
## Google Safesearch Mini V2 is an ultra-precise multi-class image classifier that accurately detects explicit content
Google Safesearch Mini V2 took a different approach to its training than [V1](https://huggingface.co/FredZhang7/google-safesearch-mini); it used the InceptionResNetV2 architecture and a dataset of roughly **3,400,000 images** randomly sourced from the internet, some of which were generated via data augmentation.
The training and validation data are sourced from Google Images, Reddit, Kaggle, and Imgur, and were classified as safe or nsfw by companies, Google SafeSearch, and moderators.
After training the model for 5 epochs with cross entropy loss and evaluating it on both the training and validation sets to identify images with predicted probabilities below 0.90, some necessary corrections were made to the curated dataset and the model was trained for an additional 8 epochs.
Next, I tested the model on various cases that it may struggle to classify and observed that it was mistaking the fur of a brown cat for human skin.
To improve the accuracy, I fine-tuned the model with [15 additional datasets from Kaggle](./kaggle-datasets.txt) for one epoch, and then trained it for the last epoch with a combination of training and test data.
This resulted in **97% accuracy** on both training and validation data.
A safesearch filter is not only a great tool for moderating social media, but it can also be used to filter datasets. Compared to stable diffusion safety checkers, this model offers a major advantage: users save 1.0 GB of RAM and disk space.
## PyTorch
```bash
pip install --upgrade torchvision
```
```python
import torch
from torchvision import transforms
from PIL import Image
import timm
image_path = "https://www.allaboutcats.ca/wp-content/uploads/sites/235/2022/03/shutterstock_320462102-2500-e1647917149997.jpg"
device = "cuda"
def preprocess_image(image_path):
# Define image pre-processing transforms
transform = transforms.Compose([
transforms.Resize(299),
transforms.CenterCrop(299),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
if image_path.startswith('http://') or image_path.startswith('https://'):
import requests
from io import BytesIO
response = requests.get(image_path)
img = Image.open(BytesIO(response.content)).convert('RGB')
else:
img = Image.open(image_path).convert('RGB')
img = transform(img).unsqueeze(0)
img = img.cuda() if device.lower() == "cuda" else img.cpu()
return img
def eval():
model = timm.create_model("hf_hub:FredZhang7/google-safesearch-mini-v2", pretrained=True)
model.to(device)
img = preprocess_image(image_path)
with torch.no_grad():
out = model(img)
_, predicted = torch.max(out.data, 1)
classes = {
0: 'nsfw_gore',
1: 'nsfw_suggestive',
2: 'safe'
}
print('\n\033[1;31m' + classes[predicted.item()] + '\033[0m' if predicted.item() != 2 else '\033[1;32m' + classes[predicted.item()] + '\033[0m\n')
if __name__ == '__main__':
eval()
```
|
63d69f51980b0ca3909217428dfcb903
|
tonyalves/output
|
tonyalves
|
wav2vec2
| 13 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['pt']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer']
| true | true | true | 6,006 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1505
- Wer: 0.1352
## Model description
More information needed
## Intended uses & limitations
More information needed
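Pending more details from the author, here is a rough usage sketch (not part of the original card) for transcribing long Portuguese recordings with chunked inference; the chunk and stride lengths and the file path are placeholder choices.
```python
from transformers import pipeline

# Minimal sketch: chunked transcription of a long 16 kHz Portuguese recording.
asr = pipeline(
    "automatic-speech-recognition",
    model="tonyalves/output",
    chunk_length_s=30,       # placeholder chunk size for long audio
    stride_length_s=(4, 2),  # placeholder overlap between chunks
)
print(asr("path/to/long_recording.wav")["text"])
```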
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.1367 | 0.64 | 500 | 3.8825 | 1.0 |
| 2.9677 | 1.29 | 1000 | 2.9498 | 1.0 |
| 1.5884 | 1.93 | 1500 | 0.6722 | 0.6493 |
| 1.2292 | 2.57 | 2000 | 0.3635 | 0.3202 |
| 1.1314 | 3.22 | 2500 | 0.2970 | 0.2680 |
| 1.0879 | 3.86 | 3000 | 0.2671 | 0.2486 |
| 1.0344 | 4.5 | 3500 | 0.2625 | 0.2239 |
| 1.0109 | 5.15 | 4000 | 0.2520 | 0.2230 |
| 0.9966 | 5.79 | 4500 | 0.2280 | 0.2105 |
| 0.9815 | 6.43 | 5000 | 0.2254 | 0.2179 |
| 0.9744 | 7.08 | 5500 | 0.2301 | 0.2137 |
| 0.9487 | 7.72 | 6000 | 0.2224 | 0.2051 |
| 0.9431 | 8.37 | 6500 | 0.2105 | 0.1992 |
| 0.9365 | 9.01 | 7000 | 0.2114 | 0.2019 |
| 0.9268 | 9.65 | 7500 | 0.2097 | 0.1988 |
| 0.9292 | 10.3 | 8000 | 0.2120 | 0.1986 |
| 0.929 | 10.94 | 8500 | 0.2048 | 0.1998 |
| 0.9017 | 11.58 | 9000 | 0.2035 | 0.1999 |
| 0.8898 | 12.23 | 9500 | 0.1961 | 0.1908 |
| 0.8799 | 12.87 | 10000 | 0.1945 | 0.1817 |
| 0.869 | 13.51 | 10500 | 0.1929 | 0.1844 |
| 0.8572 | 14.16 | 11000 | 0.1941 | 0.1888 |
| 0.8691 | 14.8 | 11500 | 0.1912 | 0.1804 |
| 0.8645 | 15.44 | 12000 | 0.1950 | 0.1851 |
| 0.8468 | 16.09 | 12500 | 0.1879 | 0.1770 |
| 0.8405 | 16.73 | 13000 | 0.1881 | 0.1759 |
| 0.8647 | 17.37 | 13500 | 0.1861 | 0.1740 |
| 0.8477 | 18.02 | 14000 | 0.1782 | 0.1702 |
| 0.811 | 18.66 | 14500 | 0.1915 | 0.1757 |
| 0.8165 | 19.3 | 15000 | 0.1820 | 0.1724 |
| 0.8166 | 19.95 | 15500 | 0.1798 | 0.1697 |
| 0.8167 | 20.59 | 16000 | 0.1805 | 0.1752 |
| 0.7908 | 21.24 | 16500 | 0.1761 | 0.1699 |
| 0.7925 | 21.88 | 17000 | 0.1740 | 0.1709 |
| 0.7803 | 22.52 | 17500 | 0.1815 | 0.1727 |
| 0.7839 | 23.17 | 18000 | 0.1737 | 0.1694 |
| 0.7815 | 23.81 | 18500 | 0.1732 | 0.1630 |
| 0.767 | 24.45 | 19000 | 0.1724 | 0.1648 |
| 0.7672 | 25.1 | 19500 | 0.1706 | 0.1596 |
| 0.7691 | 25.74 | 20000 | 0.1718 | 0.1618 |
| 0.7547 | 26.38 | 20500 | 0.1694 | 0.1565 |
| 0.7498 | 27.03 | 21000 | 0.1706 | 0.1582 |
| 0.7459 | 27.67 | 21500 | 0.1663 | 0.1586 |
| 0.7374 | 28.31 | 22000 | 0.1651 | 0.1567 |
| 0.7499 | 28.96 | 22500 | 0.1668 | 0.1549 |
| 0.7471 | 29.6 | 23000 | 0.1667 | 0.1553 |
| 0.7369 | 30.24 | 23500 | 0.1659 | 0.1556 |
| 0.7389 | 30.89 | 24000 | 0.1668 | 0.1538 |
| 0.7197 | 31.53 | 24500 | 0.1687 | 0.1561 |
| 0.71 | 32.17 | 25000 | 0.1666 | 0.1516 |
| 0.7199 | 32.82 | 25500 | 0.1640 | 0.1523 |
| 0.7194 | 33.46 | 26000 | 0.1659 | 0.1528 |
| 0.6923 | 34.11 | 26500 | 0.1662 | 0.1507 |
| 0.7054 | 34.75 | 27000 | 0.1641 | 0.1486 |
| 0.6955 | 35.39 | 27500 | 0.1634 | 0.1497 |
| 0.7084 | 36.04 | 28000 | 0.1618 | 0.1478 |
| 0.6917 | 36.68 | 28500 | 0.1589 | 0.1471 |
| 0.687 | 37.32 | 29000 | 0.1589 | 0.1450 |
| 0.6914 | 37.97 | 29500 | 0.1588 | 0.1465 |
| 0.6646 | 38.61 | 30000 | 0.1602 | 0.1468 |
| 0.6667 | 39.25 | 30500 | 0.1588 | 0.1444 |
| 0.6754 | 39.9 | 31000 | 0.1587 | 0.1455 |
| 0.6632 | 40.54 | 31500 | 0.1586 | 0.1461 |
| 0.6619 | 41.18 | 32000 | 0.1571 | 0.1441 |
| 0.6561 | 41.83 | 32500 | 0.1564 | 0.1420 |
| 0.6492 | 42.47 | 33000 | 0.1539 | 0.1437 |
| 0.6649 | 43.11 | 33500 | 0.1512 | 0.1406 |
| 0.6511 | 43.76 | 34000 | 0.1539 | 0.1384 |
| 0.6551 | 44.4 | 34500 | 0.1520 | 0.1384 |
| 0.6452 | 45.05 | 35000 | 0.1510 | 0.1368 |
| 0.6155 | 45.69 | 35500 | 0.1522 | 0.1375 |
| 0.628 | 46.33 | 36000 | 0.1522 | 0.1366 |
| 0.6389 | 46.97 | 36500 | 0.1513 | 0.1377 |
| 0.6265 | 47.62 | 37000 | 0.1512 | 0.1369 |
| 0.6197 | 48.26 | 37500 | 0.1511 | 0.1362 |
| 0.621 | 48.91 | 38000 | 0.1510 | 0.1357 |
| 0.6259 | 49.55 | 38500 | 0.1506 | 0.1353 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.1+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
7576bb0700d9b1d72e32e4dbba570239
|
polejowska/convnext-tiny-224-finetuned-eurosat-att
|
polejowska
|
convnext
| 11 | 5 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,036 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-eurosat-att
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
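Pending more details from the author, here is a rough usage sketch (not part of the original card) using the Transformers image-classification pipeline; the image path is a placeholder and the class names depend on the (unspecified) imagefolder labels.
```python
from transformers import pipeline

# Minimal sketch: classify one image with the fine-tuned ConvNeXT checkpoint.
classifier = pipeline("image-classification", model="polejowska/convnext-tiny-224-finetuned-eurosat-att")
print(classifier("path/to/image.png"))  # placeholder image path
```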
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
f19c62ab843b5b7f9fd1d759613d4b59
|