modelId
stringlengths 5
122
| author
stringlengths 2
42
| last_modified
unknown | downloads
int64 0
223M
| likes
int64 0
8.08k
| library_name
stringclasses 341
values | tags
sequencelengths 1
4.05k
| pipeline_tag
stringclasses 52
values | createdAt
unknown | card
stringlengths 1
1.01M
|
---|---|---|---|---|---|---|---|---|---|
Diya-999/Bart12-12V6.0 | Diya-999 | "2022-04-28T04:09:37Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-04-27T14:34:11Z" | ---
license: afl-3.0
---
|
huggingtweets/ejazaii | huggingtweets | "2021-07-16T21:20:13Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/ejazaii/1626470409321/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1379529428808634376/6XrfvhAA_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ejaz</div>
<div style="text-align: center; font-size: 14px;">@ejazaii</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ejaz.
| Data | Ejaz |
| --- | --- |
| Tweets downloaded | 1273 |
| Retweets | 134 |
| Short tweets | 355 |
| Tweets kept | 784 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/hci1jrbh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ejazaii's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1a5d8p5i) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1a5d8p5i/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ejazaii')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
ryuseiken/Ignacio_Copani | ryuseiken | "2023-12-30T21:59:02Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-12-30T21:58:17Z" | ---
license: openrail
---
|
mindw96/Qwen-2.5-3B-it-dacon-llm | mindw96 | "2025-01-21T01:52:43Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-20T07:15:22Z" | ---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: transformers
model_name: Qwen-2.5-3B-it-dacon-llm
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen-2.5-3B-it-dacon-llm
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mindw96/Qwen-2.5-3B-it-dacon-llm", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mindw96/huggingface/runs/mf9het0t)
This model was trained with SFT.
### Framework versions
- TRL: 0.13.0
- Transformers: 4.48.0
- Pytorch: 2.5.1+cu124
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Thousif/my-cat | Thousif | "2023-10-17T11:39:02Z" | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-10-17T11:34:55Z" | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Cat Dreambooth model trained by Thousif following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
![0](https://huggingface.co/Thousif/my-cat/resolve/main/sample_images/tuv_(2).jpg)
![1](https://huggingface.co/Thousif/my-cat/resolve/main/sample_images/tuv_(1).jpg)
|
gustavomacedo/Llama3.1-8B-15EP-64BS | gustavomacedo | "2024-11-08T17:59:25Z" | 7 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-11-08T17:56:19Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** gustavomacedo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ShenaoZhang/0.001_optm_idpo_4iters_iter_1 | ShenaoZhang | "2024-04-06T10:21:19Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_idpo_4iters_iter_1",
"base_model:finetune:ShenaoZhang/0.001_idpo_4iters_iter_1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-06T09:25:58Z" | ---
license: mit
base_model: ShenaoZhang/0.001_idpo_4iters_iter_1
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.001_optm_idpo_4iters_iter_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_optm_idpo_4iters_iter_1
This model is a fine-tuned version of [ShenaoZhang/0.001_idpo_4iters_iter_1](https://huggingface.co/ShenaoZhang/0.001_idpo_4iters_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
Melodyloves/Melodyooo | Melodyloves | "2023-04-22T03:54:56Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-04-22T03:54:56Z" | ---
license: openrail
---
|
RichardErkhov/INSAIT-Institute_-_BgGPT-Gemma-2-2.6B-IT-v1.0-awq | RichardErkhov | "2025-01-08T19:07:24Z" | 6 | 0 | null | [
"safetensors",
"gemma2",
"4-bit",
"awq",
"region:us"
] | null | "2025-01-08T19:06:08Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
BgGPT-Gemma-2-2.6B-IT-v1.0 - AWQ
- Model creator: https://huggingface.co/INSAIT-Institute/
- Original model: https://huggingface.co/INSAIT-Institute/BgGPT-Gemma-2-2.6B-IT-v1.0/
Original model description:
---
library_name: transformers
tags:
- gemma2
- instruct
- bggpt
- insait
license: gemma
language:
- bg
- en
base_model:
- google/gemma-2-2b-it
- google/gemma-2-2b
pipeline_tag: text-generation
---
# INSAIT-Institute/BgGPT-Gemma-2-2.6B-IT-v1.0
![image/png](https://cdn-uploads.huggingface.co/production/uploads/637e1f8cf7e01589cc17bf7e/p6d0YFHjWCQ3S12jWqO1m.png)
INSAIT introduces **BgGPT-Gemma-2-2.6B-IT-v1.0**, a state-of-the-art Bulgarian language model based on **google/gemma-2-2b** and **google/gemma-2-2b-it**.
BgGPT-Gemma-2-2.6B-IT-v1.0 is **free to use** and distributed under the [Gemma Terms of Use](https://ai.google.dev/gemma/terms).
This model was created by [`INSAIT`](https://insait.ai/), part of Sofia University St. Kliment Ohridski, in Sofia, Bulgaria.
# Model description
The model was built on top of Google’s Gemma 2 2B open models.
It was continuously pre-trained on around 100 billion tokens (85 billion in Bulgarian) using the Branch-and-Merge strategy INSAIT presented at [EMNLP’24](https://aclanthology.org/2024.findings-emnlp.1000/),
allowing the model to gain outstanding Bulgarian cultural and linguistic capabilities while retaining its English performance.
During the pre-training stage, we use various datasets, including Bulgarian web crawl data, freely available datasets such as Wikipedia, a range of specialized Bulgarian datasets sourced by the INSAIT Institute,
and machine translations of popular English datasets.
The model was then instruction-fine-tuned on a newly constructed Bulgarian instruction dataset created using real-world conversations.
For more information check our [blogpost](https://models.bggpt.ai/blog/).
# Benchmarks and Results
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65fefdc282708115868203aa/9pp8aD1yvoW-cJWzhbHXk.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65fefdc282708115868203aa/33CjjtmCeAcw5qq8DEtJj.png)
We evaluate our models on a set of standard English benchmarks, a translated version of them in Bulgarian, as well as, Bulgarian specific benchmarks we collected:
- **Winogrande challenge**: testing world knowledge and understanding
- **Hellaswag**: testing sentence completion
- **ARC Easy/Challenge**: testing logical reasoning
- **TriviaQA**: testing trivia knowledge
- **GSM-8k**: solving multiple-choice questions in high-school mathematics
- **Exams**: solving high school problems from natural and social sciences
- **MON**: contains exams across various subjects for grades 4 to 12
These benchmarks test logical reasoning, mathematics, knowledge, language understanding and other skills of the models and are provided at https://github.com/insait-institute/lm-evaluation-harness-bg.
The graphs above show the performance of BgGPT 2.6B compared to other small open language models such as Microsoft's Phi 3.5 and Alibaba's Qwen 2.5 3B.
The BgGPT model not only surpasses them, but also **retains English performance** inherited from the original Google Gemma 2 models upon which it is based.
# Use in 🤗 Transformers
First install the latest version of the transformers library:
```
pip install -U 'transformers[torch]'
```
Then load the model in transformers:
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
"INSAIT-Institute/BgGPT-Gemma-2-2.6B-IT-v1.0",
torch_dtype=torch.bfloat16,
attn_implementation="eager",
device_map="auto",
)
```
# Recommended Parameters
For optimal performance, we recommend the following parameters for text generation, as we have extensively tested our model with them:
```python
from transformers import GenerationConfig
generation_params = GenerationConfig(
max_new_tokens=2048, # Choose maximum generation tokens
temperature=0.1,
top_k=25,
top_p=1,
repetition_penalty=1.1,
eos_token_id=[1,107],
do_sample=True
)
```
In principle, increasing temperature should work adequately as well.
# Instruction format
In order to leverage instruction fine-tuning, your prompt should begin with a beginning-of-sequence token `<bos>` and be formatted in the Gemma 2 chat template. `<bos>` should only be the first token in a chat sequence.
E.g.
```
<bos><start_of_turn>user
Кога е основан Софийският университет?<end_of_turn>
<start_of_turn>model
```
This format is also available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
tokenizer = AutoTokenizer.from_pretrained(
"INSAIT-Institute/BgGPT-Gemma-2-2.6B-IT-v1.0",
use_default_system_prompt=False,
)
messages = [
{"role": "user", "content": "Кога е основан Софийският университет?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
return_tensors="pt",
add_generation_prompt=True,
return_dict=True
)
outputs = model.generate(
**input_ids,
generation_config=generation_params
)
print(tokenizer.decode(outputs[0]))
```
**Important Note:** Models based on Gemma 2 such as BgGPT-Gemma-2-2.6B-IT-v1.0 do not support flash attention. Using it results in degraded performance.
# Use with vLLM
Example usage with vLLM:
```python
from vllm import LLM, SamplingParams
from vllm.inputs import TokensPrompt
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"INSAIT-Institute/BgGPT-Gemma-2-2.6B-IT-v1.0",
use_default_system_prompt=False,
)
sampling_params = SamplingParams(
max_tokens=2048,
temperature=0.1,
top_k=25,
top_p=1,
repetition_penalty=1.1,
stop_token_ids=[1, 107],
)
llm = LLM(
model="INSAIT-Institute/BgGPT-Gemma-2-2.6B-IT-v1.0",
dtype="bfloat16",
enforce_eager=True
)
messages = [
{"role": "user", "content": "Кога е основан Софийският университет?"},
]
formatted_prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
input_ids = tokenizer(
formatted_prompt,
add_special_tokens=False
).input_ids
prompt = TokensPrompt(prompt_token_ids=input_ids)
output = llm.generate(
prompt,
sampling_params
)
generated_text = output[0].outputs[0].text
print(generated_text)
```
# Use with GGML / llama.cpp
The model and instructions for usage in GGUF format are available at [INSAIT-Institute/BgGPT-Gemma-2-2.6B-IT-v1.0-GGUF](https://huggingface.co/INSAIT-Institute/BgGPT-Gemma-2-2.6B-IT-v1.0-GGUF).
# Community Feedback
We welcome feedback from the community to help improve BgGPT. If you have suggestions, encounter any issues, or have ideas for improvements, please:
- Share your experience using the model through Hugging Face's community discussion feature or
- Contact us at [[email protected]](mailto:[email protected])
Your real-world usage and insights are valuable in helping us optimize the model's performance and behaviour for various use cases.
# Summary
- **Finetuned from:** [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it); [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b);
- **Model type:** Causal decoder-only transformer language model
- **Language:** Bulgarian and English
- **Contact:** [[email protected]](mailto:[email protected])
- **License:** BgGPT is distributed under [Gemma Terms of Use](https://huggingface.co/INSAIT-Institute/BgGPT-Gemma-2-2.6B-IT-v1.0/raw/main/LICENSE)
|
kejian/final-cond-10-0.01 | kejian | "2022-11-25T18:15:12Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:kejian/codeparrot-train-more-filter-3.3b-cleaned",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | "2022-11-25T09:15:03Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
model-index:
- name: kejian/final-cond-10-0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kejian/final-cond-10-0.01
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.01,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0},
'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True},
'generation': {'batch_size': 64,
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 704,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 512,
'prefix': '<|aligned|>',
'use_prompt_for_scoring': False},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_samples': 512,
'prefix': '<|aligned|>',
'prompt_before_control': True,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>'},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'num_additional_tokens': 2,
'path_or_name': 'codeparrot/codeparrot-small'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'kejian/final-cond-10-0.01',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0008,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 5000,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/1wgqepja |
onnx-community/dinov2-with-registers-giant | onnx-community | "2024-12-24T17:34:49Z" | 12 | 0 | transformers.js | [
"transformers.js",
"onnx",
"dinov2_with_registers",
"image-feature-extraction",
"base_model:facebook/dinov2-with-registers-giant",
"base_model:quantized:facebook/dinov2-with-registers-giant",
"region:us"
] | image-feature-extraction | "2024-12-24T17:30:01Z" | ---
library_name: transformers.js
base_model: facebook/dinov2-with-registers-giant
---
https://huggingface.co/facebook/dinov2-with-registers-giant with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
mouadenna/segformer-b0-finetuned-segments-pv_v1_normalized_p100_4batch | mouadenna | "2024-07-28T05:20:19Z" | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/segformer-b0-finetuned-ade-512-512",
"base_model:finetune:nvidia/segformer-b0-finetuned-ade-512-512",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2024-07-28T02:02:48Z" | ---
license: other
base_model: nvidia/segformer-b0-finetuned-ade-512-512
tags:
- vision
- image-segmentation
- generated_from_trainer
metrics:
- precision
model-index:
- name: segformer-b0-finetuned-segments-pv_v1_normalized_p100_4batch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/mouadn773/huggingface/runs/g4m4ysqz)
# segformer-b0-finetuned-segments-pv_v1_normalized_p100_4batch
This model is a fine-tuned version of [nvidia/segformer-b0-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) on the mouadenna/satellite_PV_dataset_train_test_v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0074
- Mean Iou: 0.8483
- Precision: 0.9169
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.001
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Precision |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|
| 0.0127 | 0.9989 | 229 | 0.0092 | 0.7982 | 0.8641 |
| 0.0077 | 1.9978 | 458 | 0.0094 | 0.7871 | 0.8456 |
| 0.006 | 2.9967 | 687 | 0.0067 | 0.8140 | 0.9089 |
| 0.0051 | 4.0 | 917 | 0.0058 | 0.8358 | 0.8713 |
| 0.0045 | 4.9989 | 1146 | 0.0059 | 0.8258 | 0.8761 |
| 0.0042 | 5.9978 | 1375 | 0.0058 | 0.8415 | 0.9018 |
| 0.0036 | 6.9967 | 1604 | 0.0051 | 0.8513 | 0.9049 |
| 0.0038 | 8.0 | 1834 | 0.0062 | 0.8226 | 0.9256 |
| 0.004 | 8.9989 | 2063 | 0.0057 | 0.8358 | 0.8913 |
| 0.0035 | 9.9978 | 2292 | 0.0053 | 0.8485 | 0.9079 |
| 0.0037 | 10.9967 | 2521 | 0.0059 | 0.8192 | 0.9056 |
| 0.0038 | 12.0 | 2751 | 0.0054 | 0.8487 | 0.8921 |
| 0.0033 | 12.9989 | 2980 | 0.0053 | 0.8541 | 0.9086 |
| 0.0028 | 13.9978 | 3209 | 0.0055 | 0.8551 | 0.8985 |
| 0.0026 | 14.9967 | 3438 | 0.0060 | 0.8483 | 0.9085 |
| 0.0026 | 16.0 | 3668 | 0.0057 | 0.8495 | 0.9076 |
| 0.0024 | 16.9989 | 3897 | 0.0058 | 0.8442 | 0.9083 |
| 0.0038 | 17.9978 | 4126 | 0.0066 | 0.8113 | 0.8910 |
| 0.0031 | 18.9967 | 4355 | 0.0062 | 0.8488 | 0.9108 |
| 0.0026 | 20.0 | 4585 | 0.0058 | 0.8575 | 0.9126 |
| 0.0024 | 20.9989 | 4814 | 0.0057 | 0.8580 | 0.9119 |
| 0.0025 | 21.9978 | 5043 | 0.0059 | 0.8505 | 0.8957 |
| 0.0031 | 22.9967 | 5272 | 0.0062 | 0.8472 | 0.9135 |
| 0.0022 | 24.0 | 5502 | 0.0055 | 0.8598 | 0.9147 |
| 0.0023 | 24.9989 | 5731 | 0.0058 | 0.8621 | 0.9090 |
| 0.0023 | 25.9978 | 5960 | 0.0064 | 0.8498 | 0.9094 |
| 0.0023 | 26.9967 | 6189 | 0.0067 | 0.8428 | 0.9137 |
| 0.0021 | 28.0 | 6419 | 0.0063 | 0.8527 | 0.9076 |
| 0.002 | 28.9989 | 6648 | 0.0065 | 0.8509 | 0.9187 |
| 0.002 | 29.9978 | 6877 | 0.0074 | 0.8424 | 0.9179 |
| 0.002 | 30.9967 | 7106 | 0.0065 | 0.8577 | 0.9116 |
| 0.0019 | 32.0 | 7336 | 0.0067 | 0.8547 | 0.9141 |
| 0.0019 | 32.9989 | 7565 | 0.0072 | 0.8519 | 0.9168 |
| 0.0019 | 33.9978 | 7794 | 0.0067 | 0.8569 | 0.9148 |
| 0.0019 | 34.9967 | 8023 | 0.0070 | 0.8544 | 0.9139 |
| 0.0017 | 36.0 | 8253 | 0.0072 | 0.8510 | 0.9124 |
| 0.0018 | 36.9989 | 8482 | 0.0081 | 0.8425 | 0.9164 |
| 0.0017 | 37.9978 | 8711 | 0.0073 | 0.8512 | 0.9155 |
| 0.0018 | 38.9967 | 8940 | 0.0073 | 0.8495 | 0.9164 |
| 0.0018 | 39.9564 | 9160 | 0.0074 | 0.8483 | 0.9169 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Ellight/speecht5_finetuned_voxpopuli_nl | Ellight | "2024-05-18T06:04:15Z" | 85 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2024-05-18T05:44:43Z" | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
pittawat/Reinforce-pixelcopter | pittawat | "2023-02-11T07:07:28Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-11T07:07:16Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 39.30 +/- 29.67
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
avorozhko/ruDialoGpt3-medium-finetuned-context | avorozhko | "2022-03-13T11:41:17Z" | 8 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ## Описание модели
Этот чатбот - дипломная работа студента Андрея Ворожко в УИИ (Университет Искусственного Интеллекта).
Окончание обучения - март 2022 года.
Чатбот сделан на основе модели [Kirili4ik/ruDialoGpt3-medium-finetuned-telegram](https://huggingface.co/Kirili4ik/ruDialoGpt3-medium-finetuned-telegram)
Теперь модель дообучена на основе 27000 анекдотов (14 эпох, скорость обучения в колабе 2-6 часов на эпоху) и умеет понимать контекст разговора. Однако контекст приходится ограничивать несколькими последними сообщениями потому что чем больше контекста тем медленнее модель работает, а контекст растет как снежный ком в процессе разговора.
Инференс находится в [spaces](https://huggingface.co/spaces/avorozhko/funbot):
Там с ботом можно поговорить. Контекст ограничен 10 последними сообщениями.
Шутки бот выдает, но пока скорее случайно, чем намеренно. Однако разговор поддержать способен и даже немного развлечь.
Так как это генерация текста, то на одну и ту же фразу бот всегда будет выдавать разные ответы.
Также для определения качества данной модели использовалась кастомная метрика - угловое расстояния между эмбеддингами y_train и предикта.
То есть мы взяли первый слой эмбеддинга модели и прогоняли предикты и лейблы, получили вектора слов. Потом вектора слов суммировали и получили общие (суммарные) вектора лейблов и предиктов. Чем меньше угол между ними, тем лучше. При рассчетах ориентировались на косинус этого угла, так как cos 0 = 1, то это очень удобно - чем ближе показатель к 1, тем лучше.
Вот такое распределение этих значений получилось по эпохам на ПРОВЕРОЧНОЙ выборке (1406 анекдотов):
```
{1: tensor(0.9357, device='cuda:0', grad_fn=<DivBackward0>),
2: tensor(0.9390, device='cuda:0', grad_fn=<DivBackward0>),
3: tensor(0.9417, device='cuda:0', grad_fn=<DivBackward0>),
4: tensor(0.9439, device='cuda:0', grad_fn=<DivBackward0>),
5: tensor(0.9470, device='cuda:0', grad_fn=<DivBackward0>),
6: tensor(0.9537, device='cuda:0', grad_fn=<DivBackward0>),
7: tensor(0.9568, device='cuda:0', grad_fn=<DivBackward0>),
8: tensor(0.9592, device='cuda:0', grad_fn=<DivBackward0>),
9: tensor(0.9610, device='cuda:0', grad_fn=<DivBackward0>),
10: tensor(0.9622, device='cuda:0', grad_fn=<DivBackward0>),
11: tensor(0.9628, device='cuda:0', grad_fn=<DivBackward0>),
12: tensor(0.9632, device='cuda:0', grad_fn=<DivBackward0>),
13: tensor(0.9630, device='cuda:0', grad_fn=<DivBackward0>),
14: tensor(0.9634, device='cuda:0', grad_fn=<DivBackward0>),
15: tensor(0.9634, device='cuda:0', grad_fn=<DivBackward0>)}
```
Для инференса выбрана 14-я эпоха с точностью 0.9634. Далее, судя по всему идет уже переобучение. |
Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-4_0bpw_exl2 | Zoyd | "2024-05-26T10:12:19Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"merges",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"base_model:quantized:NousResearch/Hermes-2-Pro-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | "2024-05-26T09:32:26Z" | ---
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
tags:
- Llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- merges
model-index:
- name: Hermes-2-Pro-Llama-3-Instruct-8B-Merge
results: []
language:
- en
datasets:
- teknium/OpenHermes-2.5
widget:
- example_title: Hermes 2 Pro Llama-3 Instruct Merge
messages:
- role: system
content: >-
You are a sentient, superintelligent artificial general intelligence, here
to teach and assist me.
- role: user
content: >-
Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **4.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-3_0bpw_exl2)**</center> | <center>3895 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-3_5bpw_exl2)**</center> | <center>4311 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-4_0bpw_exl2)**</center> | <center>4726 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-4_25bpw_exl2)**</center> | <center>4933 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-5_0bpw_exl2)**</center> | <center>5558 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-6_0bpw_exl2)**</center> | <center>6497 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-6_5bpw_exl2)**</center> | <center>6910 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/NousResearch_Hermes-2-Theta-Llama-3-8B-8_0bpw_exl2)**</center> | <center>8049 MB</center> | <center>8</center> |
# - Hermes-2 Θ Llama-3 8B
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/HQnQmNM1L3KXGhp0wUzHH.png)
## Model Description
Hermes-2 Θ (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard at [Arcee](https://www.arcee.ai/), the team behind MergeKit.
Hermes-2 Θ is a merged and then further RLHF'ed version our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model to form a new model, Hermes-2 Θ, combining the best of both worlds of each model.
## Example Outputs
### Create New Mythos:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/dvKhnSvHdx4nTQIqB9Lpv.png)
### Chat with a Meta-Cognitive Entity
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/GwdCqowE6GQylineqehhx.png)
### Ask for a structured JSON output:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/us72aL9gwUXdqSHetRVRV.png)
# Prompt Format
Hermes 2 Θ uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions.
You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON.
# Benchmarks
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/suBbCUIxpcRvhCv6-DBDQ.png)
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5529|± |0.0145|
| | |acc_norm|0.5870|± |0.0144|
|arc_easy | 0|acc |0.8371|± |0.0076|
| | |acc_norm|0.8144|± |0.0080|
|boolq | 1|acc |0.8599|± |0.0061|
|hellaswag | 0|acc |0.6133|± |0.0049|
| | |acc_norm|0.7989|± |0.0040|
|openbookqa | 0|acc |0.3940|± |0.0219|
| | |acc_norm|0.4680|± |0.0223|
|piqa | 0|acc |0.8063|± |0.0092|
| | |acc_norm|0.8156|± |0.0090|
|winogrande | 0|acc |0.7372|± |0.0124|
```
Average: 72.59
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2441|± |0.0270|
| | |acc_norm|0.2441|± |0.0270|
|agieval_logiqa_en | 0|acc |0.3687|± |0.0189|
| | |acc_norm|0.3840|± |0.0191|
|agieval_lsat_ar | 0|acc |0.2304|± |0.0278|
| | |acc_norm|0.2174|± |0.0273|
|agieval_lsat_lr | 0|acc |0.5471|± |0.0221|
| | |acc_norm|0.5373|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6617|± |0.0289|
| | |acc_norm|0.6357|± |0.0294|
|agieval_sat_en | 0|acc |0.7670|± |0.0295|
| | |acc_norm|0.7379|± |0.0307|
|agieval_sat_en_without_passage| 0|acc |0.4417|± |0.0347|
| | |acc_norm|0.4223|± |0.0345|
|agieval_sat_math | 0|acc |0.4000|± |0.0331|
| | |acc_norm|0.3455|± |0.0321|
```
Average: 44.05
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6000|± |0.0356|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6585|± |0.0247|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|± |0.0290|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.2340|± |0.0224|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2980|± |0.0205|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2057|± |0.0153|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5367|± |0.0288|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.4040|± |0.0220|
|bigbench_navigate | 0|multiple_choice_grade|0.4970|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7075|± |0.0102|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4821|± |0.0236|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2295|± |0.0133|
|bigbench_snarks | 0|multiple_choice_grade|0.6906|± |0.0345|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5375|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.6270|± |0.0153|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2216|± |0.0118|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1594|± |0.0088|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5367|± |0.0288|
```
Average: 44.13
**IFEval**: 72.64
**MT_Bench**: Turn 1 - 8.3875, Turn 2 - 8.00625, Average - 8.196875
# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)
Note: To use function calling, you should see the github repo above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-8B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Theta-Llama-3-8B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True)
print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png)
# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png)
## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF
# How to cite:
```bibtext
@misc{Hermes-2-Theta-Llama-3-8B,
url={[https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B][NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B))},
title={Hermes-2-Theta-Llama-3-8B},
author={"Teknium", Charles Goddard, "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
``` |
suprimedev/Suprime_TKB1 | suprimedev | "2023-10-06T22:08:13Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-10-06T22:04:29Z" | ---
license: apache-2.0
---
|
rusticluftig/700m | rusticluftig | "2024-08-29T01:39:10Z" | 6 | 0 | null | [
"safetensors",
"llama",
"license:mit",
"region:us"
] | null | "2024-08-28T04:54:39Z" | ---
license: mit
---
|
Yuki20/capstone-llama7B-lora | Yuki20 | "2024-05-03T16:03:24Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:baffo32/decapoda-research-llama-7B-hf",
"base_model:adapter:baffo32/decapoda-research-llama-7B-hf",
"region:us"
] | null | "2024-05-03T02:31:18Z" | ---
library_name: peft
base_model: baffo32/decapoda-research-llama-7B-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 |
DanteJR85/BeppeGrillo | DanteJR85 | "2023-10-10T15:31:16Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-10-10T15:27:53Z" | ---
license: openrail
---
|
pcgarcia/myModel | pcgarcia | "2024-11-20T22:39:09Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-20T22:32:23Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Srijith-rkr/deepseek_base_1e-4_NO_cot_only_failed_samples_300_ckpt | Srijith-rkr | "2024-12-04T03:30:16Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-04T03:27:25Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dimi1357/poca-SoccerTwos | dimi1357 | "2023-05-28T17:20:46Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2023-05-28T17:20:41Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: dimi1357/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
isaacchung/QwenPhi-7B-slerp | isaacchung | "2024-05-28T18:26:50Z" | 0 | 0 | null | [
"merge",
"mergekit",
"lazymergekit",
"Qwen/Qwen1.5-7B-Chat",
"microsoft/Phi-3-mini-128k-instruct",
"base_model:Qwen/Qwen1.5-7B-Chat",
"base_model:merge:Qwen/Qwen1.5-7B-Chat",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:merge:microsoft/Phi-3-mini-128k-instruct",
"region:us"
] | null | "2024-05-28T18:26:49Z" | ---
tags:
- merge
- mergekit
- lazymergekit
- Qwen/Qwen1.5-7B-Chat
- microsoft/Phi-3-mini-128k-instruct
base_model:
- Qwen/Qwen1.5-7B-Chat
- microsoft/Phi-3-mini-128k-instruct
---
# QwenPhi-7B-slerp
QwenPhi-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Qwen/Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat)
* [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Qwen/Qwen1.5-7B-Chat
layer_range: [0, 32]
- model: microsoft/Phi-3-mini-128k-instruct
layer_range: [0, 32]
merge_method: slerp
base_model: microsoft/Phi-3-mini-128k-instruct
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "isaacchung/QwenPhi-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_0-Depth_2-Node_AuZXTVeQ | MoTHer-VTHR | "2024-05-28T15:53:40Z" | 165 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-05-28T15:17:34Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
circulus/on-canvers-anime-v3.9.1 | circulus | "2024-06-12T04:02:12Z" | 0 | 0 | null | [
"license:gpl-3.0",
"region:us"
] | null | "2024-06-12T03:58:20Z" | ---
license: gpl-3.0
---
|
MenaWANG/translator-en-la | MenaWANG | "2024-04-12T14:08:51Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-03-29T11:44:36Z" | ---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: translator-en-la
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translator-en-la
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0617
- Bleu: 0.121
- Gen Len: 18.932
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 4.517 | 1.0 | 1242 | 4.1569 | 0.1253 | 18.931 |
| 4.3336 | 2.0 | 2484 | 4.0617 | 0.121 | 18.932 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
chansick/distilbert-base-uncased-finetuned-clinc | chansick | "2023-07-19T07:27:45Z" | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-07-19T07:23:27Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9190322580645162
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7683
- Accuracy: 0.9190
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2971 | 1.0 | 318 | 3.2827 | 0.7152 |
| 2.6163 | 2.0 | 636 | 1.8665 | 0.84 |
| 1.5389 | 3.0 | 954 | 1.1485 | 0.8981 |
| 1.0054 | 4.0 | 1272 | 0.8495 | 0.9135 |
| 0.7912 | 5.0 | 1590 | 0.7683 | 0.9190 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
fshfurnitures/Bedfurnituredubai | fshfurnitures | "2023-05-18T08:16:41Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-05-18T08:14:36Z" | [furniture stores](https://fshfurniture.ae/) |
boostcamp-5th-nlp07/koalpaca-polyglot-5.8b-summary-v0.2 | boostcamp-5th-nlp07 | "2023-07-12T16:22:08Z" | 13 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-07-12T16:05:29Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: v0.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# v0.2
This model is a fine-tuned version of [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2277
- Accuracy: 0.4969
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2287 | 0.5 | 22 | 2.4602 | 0.4882 |
| 2.0297 | 1.0 | 44 | 2.3277 | 0.4913 |
| 1.0888 | 1.5 | 66 | 2.4852 | 0.4976 |
| 0.894 | 2.0 | 88 | 2.5449 | 0.4961 |
| 0.2617 | 2.5 | 110 | 3.0394 | 0.4882 |
| 0.2666 | 3.0 | 132 | 3.0984 | 0.4984 |
| 0.1301 | 3.5 | 154 | 3.2467 | 0.4858 |
| 0.1429 | 4.0 | 176 | 3.1169 | 0.4913 |
| 0.092 | 4.5 | 198 | 3.2166 | 0.4961 |
| 0.035 | 5.0 | 220 | 3.2277 | 0.4969 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
yhshin1020/smollm-dpo | yhshin1020 | "2024-09-16T03:55:10Z" | 174 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-16T03:54:54Z" | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ABHIiiii1/LaBSE-Fine-Tuned-EN-KHA | ABHIiiii1 | "2024-07-17T09:42:00Z" | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:23999",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/LaBSE",
"base_model:finetune:sentence-transformers/LaBSE",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-07-17T09:30:23Z" | ---
base_model: sentence-transformers/LaBSE
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:23999
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Who led thee through that great and terrible wilderness , wherein
were fiery serpents , and scorpions , and drought , where there was no water ;
who brought thee forth water out of the rock of flint ;
sentences:
- bad u ai ïa ki ha u Aaron bad ki khun shynrang jong u .
- U la ïalam ïa phi lyngba ka ri shyiap kaba ïar bad kaba ishyrkhei eh , ha kaba
la don ki bseiñ kiba don bih bad ki ñianglartham . Ha kata ka ri kaba tyrkhong
bad ka bym don um , u la pynmih um na u mawsiang na ka bynta jong phi .
- Ki paidbah na ki jait ba na shatei ki phah khot ïa u , bad nangta ma ki baroh
ki ïaleit lang sha u Rehoboam bad ki ong ha u ,
- source_sentence: And , behold , Boaz came from Beth–lehem , and said unto the reapers
, The Lord be with you . And they answered him , The Lord bless thee .
sentences:
- Ko ki briew bymïaineh , to wan noh ; phi long ki jong nga . Ngan shim iwei na
phi na kawei kawei ka shnong bad ar ngut na kawei kawei ka kur , bad ngan wallam
pat ïa phi sha u lum Seïon .
- Hadien katto katne por u Boas da lade hi u wan poi na Bethlehem bad u ai khublei
ïa ki nongtrei . To U Trai un long ryngkat bad phi ! u ong . U Trai u kyrkhu
ïa phi ! ki jubab .
- U Trai u la ong ha u , Khreh bad leit sha ‘ Ka Lynti Ba-beit ,’ bad ha ka ïing
jong u Judas kylli ïa u briew na Tarsos uba kyrteng u Saul .
- source_sentence: Jehovah used the prehuman Jesus as his “master worker” in creating
all other things in heaven and on earth .
sentences:
- Shuwa ba un wan long briew U Jehobah u la pyndonkam ïa u Jisu kum u “rangbah nongtrei”
ha kaba thaw ïa kiei kiei baroh kiba don ha bneng bad ha khyndew .
- Shisien la don u briew uba la leit ban bet symbai . Katba u dang bet ïa u symbai
, katto katne na u , ki la hap ha shi lynter ka lynti ïaid kjat , ha kaba ki la
shah ïuh , bad ki sim ki la bam lut .
- Ngan ïathuh ïa ka shatei ban shah ïa ki ban leit bad ïa ka shathie ban ym bat
noh ïa ki . Ai ba ki briew jong nga ki wan phai na ki ri bajngai , na man la ki
bynta baroh jong ka pyrthei .
- source_sentence: 'The like figure whereunto even baptism doth also now save us (
not the putting away of the filth of the flesh , but the answer of a good conscience
toward God , ) by the resurrection of Jesus Christ :'
sentences:
- kaba long ka dak kaba kdew sha ka jingpynbaptis , kaba pyllait im ïa phi mynta
. Kam dei ka jingsait noh ïa ka jakhlia na ka met , hynrei ka jingkular ba la
pynlong sha U Blei na ka jingïatiplem babha . Ka pynim ïa phi da ka jingmihpat
jong U Jisu Khrist ,
- Ki briew kiba sniew kin ïoh ïa kaei kaba ki dei ban ïoh . Ki briew kiba bha kin
ïoh bainong na ka bynta ki kam jong ki .
- Nangta nga la ïohi ïa ka bneng bathymmai bad ïa ka pyrthei bathymmai . Ka bneng
banyngkong bad ka pyrthei banyngkong ki la jah noh , bad ka duriaw kam don shuh
.
- source_sentence: On that day they read in the book of Moses in the audience of the
people ; and therein was found written , that the Ammonite and the Moabite should
not come into the congregation of God for ever ;
sentences:
- U Elisha u la ïap bad la tep ïa u . Man la ka snem ki kynhun jong ki Moab ki ju
wan tur thma ïa ka ri Israel .
- Katba dang pule jam ïa ka Hukum u Moses ha u paidbah , ki poi ha ka bynta kaba
ong ba ym dei ban shah ïa u nong Amon ne u nong Moab ban ïasnohlang bad ki briew
jong U Blei .
- U angel u la jubab , U Mynsiem Bakhuid un sa wan ha pha , bad ka bor jong U Blei
kan shong halor jong pha . Na kane ka daw , ïa i khunlung bakhuid yn khot U Khun
U Blei .
---
# SentenceTransformer based on sentence-transformers/LaBSE
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) <!-- at revision e34fab64a3011d2176c99545a93d5cbddc9a91b7 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(3): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ABHIiiii1/LaBSE-Fine-Tuned-EN-KHA")
# Run inference
sentences = [
'On that day they read in the book of Moses in the audience of the people ; and therein was found written , that the Ammonite and the Moabite should not come into the congregation of God for ever ;',
'Katba dang pule jam ïa ka Hukum u Moses ha u paidbah , ki poi ha ka bynta kaba ong ba ym dei ban shah ïa u nong Amon ne u nong Moab ban ïasnohlang bad ki briew jong U Blei .',
'U Elisha u la ïap bad la tep ïa u . Man la ka snem ki kynhun jong ki Moab ki ju wan tur thma ïa ka ri Israel .',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 23,999 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 34.89 tokens</li><li>max: 87 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 51.51 tokens</li><li>max: 127 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>And Moses went out from Pharaoh , and entreated the Lord .</code> | <code>U Moses u mihnoh na u Pharaoh , bad u kyrpad ïa U Trai ,</code> |
| <code>In the ninth year of Hoshea the king of Assyria took Samaria , and carried Israel away into Assyria , and placed them in Halah and in Habor by the river of Gozan , and in the cities of the Medes .</code> | <code>kaba long ka snem kaba khyndai jong ka jingsynshar u Hoshea , u patsha ka Assyria u kurup ïa ka Samaria , u rah ïa ki Israel sha Assyria kum ki koidi , bad pynsah katto katne ngut na ki ha ka nongbah Halah , katto katne pat hajan ka wah Habor ha ka distrik Gosan , bad katto katne ha ki nongbah jong ka Media .</code> |
| <code>And the king said unto Cushi , Is the young man Absalom safe ? And Cushi answered , The enemies of my lord the king , and all that rise against thee to do thee hurt , be as that young man is .</code> | <code>Hato u samla Absalom u dang im ? u syiem u kylli . U mraw u jubab , Ko Kynrad , nga sngew ba kaei kaba la jia ha u kan jin da la jia ha baroh ki nongshun jong ngi , bad ha baroh kiba ïaleh pyrshah ïa phi .</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.3333 | 500 | 0.542 |
| 0.6667 | 1000 | 0.135 |
| 1.0 | 1500 | 0.0926 |
| 1.3333 | 2000 | 0.0535 |
| 1.6667 | 2500 | 0.0226 |
| 2.0 | 3000 | 0.018 |
| 2.3333 | 3500 | 0.0124 |
| 2.6667 | 4000 | 0.0057 |
| 3.0 | 4500 | 0.0053 |
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.42.3
- PyTorch: 2.1.2
- Accelerate: 0.32.1
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
MTCS34/distilbert-base-uncased-finetuned-squad | MTCS34 | "2023-12-11T16:48:31Z" | 24 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-12-10T14:05:14Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2843 | 1.0 | 8235 | 1.3505 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
kxm1k4m1/llama-3.1-5-epoch | kxm1k4m1 | "2024-08-22T01:43:23Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-21T17:02:31Z" | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-aod-cut-oversampling-augmented | hafidikhsan | "2023-07-11T02:12:58Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | "2023-07-11T02:10:43Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-aod-cut-oversampling-augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-aod-cut-oversampling-augmented
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0403
- Accuracy: 0.744
- F1: 0.7432
- Precision: 0.7436
- Recall: 0.744
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.8567 | 1.0 | 313 | 0.9539 | 0.5388 | 0.5159 | 0.5387 | 0.5388 |
| 0.665 | 2.0 | 626 | 0.7520 | 0.6512 | 0.6545 | 0.6625 | 0.6512 |
| 0.629 | 3.0 | 939 | 0.7775 | 0.7008 | 0.6980 | 0.6978 | 0.7008 |
| 0.4793 | 4.0 | 1252 | 0.8696 | 0.7268 | 0.7295 | 0.7365 | 0.7268 |
| 0.2273 | 5.0 | 1565 | 1.0403 | 0.744 | 0.7432 | 0.7436 | 0.744 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
pmysl/805Na-diffusers | pmysl | "2023-05-18T10:20:29Z" | 31 | 2 | diffusers | [
"diffusers",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-03-12T03:07:23Z" | ---
pipeline_tag: text-to-image
widget:
- text: "A photo of sks tram in the Minecraft style"
example_title: "Minecraft"
- text: "A photo of sks tram with the Eiffel Tower in the background"
example_title: "Eiffel Tower"
- text: "A photo of sks tram on the Mars"
example_title: "Mars"
---
This is a fine-tuned Stable Diffusion model designed to create images of Konstal 805Na. Use `sks tram` in the prompt when you are referring to 805Na
|
winegarj/distilbert-base-uncased-finetuned-sst2 | winegarj | "2024-11-18T03:28:57Z" | 23 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-04-09T19:56:14Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3009
- Accuracy: 0.9048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 132 | 0.2494 | 0.8968 |
| No log | 2.0 | 264 | 0.2767 | 0.8968 |
| No log | 3.0 | 396 | 0.2810 | 0.9002 |
| 0.195 | 4.0 | 528 | 0.2920 | 0.9025 |
| 0.195 | 5.0 | 660 | 0.3009 | 0.9048 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
jicky/jickytest | jicky | "2024-06-15T14:10:28Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-15T13:41:26Z" | ---
license: mit
---
|
Kang-Seong-Jun/FruitGPT | Kang-Seong-Jun | "2024-12-04T08:16:40Z" | 16 | 0 | null | [
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] | image-classification | "2024-12-04T08:16:33Z" | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: FruitGPT
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9666666388511658
---
# FruitGPT
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Apple
![Apple](images/Apple.jpg)
#### Banana
![Banana](images/Banana.jpg)
#### Grape
![Grape](images/Grape.jpg)
#### Orange
![Orange](images/Orange.jpg) |
huggingtweets/morgen__shtern | huggingtweets | "2022-10-03T19:49:34Z" | 118 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-10-03T19:47:38Z" | ---
language: en
thumbnail: http://www.huggingtweets.com/morgen__shtern/1664826569898/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1567266375026053125/0cyfXyiF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">MORGENSHTERN</div>
<div style="text-align: center; font-size: 14px;">@morgen__shtern</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.
![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from MORGENSHTERN.
| Data | MORGENSHTERN |
| --- | --- |
| Tweets downloaded | 3178 |
| Retweets | 57 |
| Short tweets | 1034 |
| Tweets kept | 2087 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3n5yin9a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @morgen__shtern's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2w93y3gk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2w93y3gk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/morgen__shtern')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
|
sudopop/output | sudopop | "2023-06-09T05:11:55Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"kresnik/zeroth_korean",
"generated_from_trainer",
"dataset:zeroth_korean",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-04-28T07:35:41Z" | ---
tags:
- automatic-speech-recognition
- kresnik/zeroth_korean
- generated_from_trainer
datasets:
- zeroth_korean
metrics:
- wer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [/home/son/Work/wav2vec2-xls-r-300m/facebook/wav2vec2-xls-r-300m](https://huggingface.co//home/son/Work/wav2vec2-xls-r-300m/facebook/wav2vec2-xls-r-300m) on the KRESNIK/ZEROTH_KOREAN - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1666
- Wer: 0.9737
- Cer: 0.5039
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 19.558 | 1.44 | 500 | 19.4094 | 1.0 | 1.0 |
| 4.7968 | 2.87 | 1000 | 4.7828 | 1.0 | 1.0 |
| 4.5125 | 4.31 | 1500 | 4.4959 | 0.9991 | 0.9540 |
| 4.2202 | 5.75 | 2000 | 4.2905 | 0.9923 | 0.8520 |
| 3.7774 | 7.18 | 2500 | 3.2846 | 1.0356 | 0.6652 |
| 3.1418 | 8.62 | 3000 | 2.3624 | 0.9882 | 0.5429 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1
- Datasets 2.6.1
- Tokenizers 0.11.0
|
hkivancoral/hushem_1x_beit_base_sgd_00001_fold5 | hkivancoral | "2023-11-25T20:09:54Z" | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-11-25T20:01:33Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_1x_beit_base_sgd_00001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.24390243902439024
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_beit_base_sgd_00001_fold5
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6258
- Accuracy: 0.2439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.6353 | 0.2439 |
| 1.5783 | 2.0 | 12 | 1.6348 | 0.2439 |
| 1.5783 | 3.0 | 18 | 1.6344 | 0.2439 |
| 1.5876 | 4.0 | 24 | 1.6339 | 0.2439 |
| 1.5772 | 5.0 | 30 | 1.6335 | 0.2439 |
| 1.5772 | 6.0 | 36 | 1.6330 | 0.2439 |
| 1.5977 | 7.0 | 42 | 1.6326 | 0.2439 |
| 1.5977 | 8.0 | 48 | 1.6322 | 0.2439 |
| 1.5317 | 9.0 | 54 | 1.6318 | 0.2439 |
| 1.5968 | 10.0 | 60 | 1.6314 | 0.2439 |
| 1.5968 | 11.0 | 66 | 1.6311 | 0.2439 |
| 1.549 | 12.0 | 72 | 1.6307 | 0.2439 |
| 1.549 | 13.0 | 78 | 1.6303 | 0.2439 |
| 1.5721 | 14.0 | 84 | 1.6300 | 0.2439 |
| 1.5369 | 15.0 | 90 | 1.6297 | 0.2439 |
| 1.5369 | 16.0 | 96 | 1.6294 | 0.2439 |
| 1.5705 | 17.0 | 102 | 1.6291 | 0.2439 |
| 1.5705 | 18.0 | 108 | 1.6288 | 0.2439 |
| 1.5679 | 19.0 | 114 | 1.6286 | 0.2439 |
| 1.5656 | 20.0 | 120 | 1.6284 | 0.2439 |
| 1.5656 | 21.0 | 126 | 1.6281 | 0.2439 |
| 1.5685 | 22.0 | 132 | 1.6279 | 0.2439 |
| 1.5685 | 23.0 | 138 | 1.6277 | 0.2439 |
| 1.5419 | 24.0 | 144 | 1.6275 | 0.2439 |
| 1.5718 | 25.0 | 150 | 1.6273 | 0.2439 |
| 1.5718 | 26.0 | 156 | 1.6271 | 0.2439 |
| 1.5745 | 27.0 | 162 | 1.6269 | 0.2439 |
| 1.5745 | 28.0 | 168 | 1.6268 | 0.2439 |
| 1.5571 | 29.0 | 174 | 1.6267 | 0.2439 |
| 1.5843 | 30.0 | 180 | 1.6265 | 0.2439 |
| 1.5843 | 31.0 | 186 | 1.6264 | 0.2439 |
| 1.5761 | 32.0 | 192 | 1.6263 | 0.2439 |
| 1.5761 | 33.0 | 198 | 1.6262 | 0.2439 |
| 1.5292 | 34.0 | 204 | 1.6261 | 0.2439 |
| 1.5827 | 35.0 | 210 | 1.6261 | 0.2439 |
| 1.5827 | 36.0 | 216 | 1.6260 | 0.2439 |
| 1.5796 | 37.0 | 222 | 1.6259 | 0.2439 |
| 1.5796 | 38.0 | 228 | 1.6259 | 0.2439 |
| 1.5699 | 39.0 | 234 | 1.6259 | 0.2439 |
| 1.5472 | 40.0 | 240 | 1.6258 | 0.2439 |
| 1.5472 | 41.0 | 246 | 1.6258 | 0.2439 |
| 1.5603 | 42.0 | 252 | 1.6258 | 0.2439 |
| 1.5603 | 43.0 | 258 | 1.6258 | 0.2439 |
| 1.5805 | 44.0 | 264 | 1.6258 | 0.2439 |
| 1.5679 | 45.0 | 270 | 1.6258 | 0.2439 |
| 1.5679 | 46.0 | 276 | 1.6258 | 0.2439 |
| 1.5821 | 47.0 | 282 | 1.6258 | 0.2439 |
| 1.5821 | 48.0 | 288 | 1.6258 | 0.2439 |
| 1.5058 | 49.0 | 294 | 1.6258 | 0.2439 |
| 1.5509 | 50.0 | 300 | 1.6258 | 0.2439 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
INoahGuy77/mistralclone | INoahGuy77 | "2024-04-16T16:40:37Z" | 1 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"conversational",
"arxiv:2310.06825",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-16T16:32:13Z" | ---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
inference: true
widget:
- messages:
- role: user
content: What is your favorite condiment?
- role: assistant
content: "lkadjfljsd"
---
# Model Card for Mistral-7B-Instruct-v0.1
The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is a instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model using a variety of publicly available conversation datasets.
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
File "", line 1, in
File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/transformers/models/auto/configuration_auto.py", line 723, in getitem
raise KeyError(key)
KeyError: 'mistral'
```
Installing transformers from source should solve the issue
pip install git+https://github.com/huggingface/transformers
This should not be required after transformers-v4.33.4.
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. |
Ahmad-11/Reported_iGPT_v1.1 | Ahmad-11 | "2024-05-29T21:47:50Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-29T21:44:23Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
chrohi/meditron-7b-Q8_0-GGUF | chrohi | "2024-05-26T03:00:39Z" | 2 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:epfl-llm/guidelines",
"base_model:meta-llama/Llama-2-7b",
"base_model:quantized:meta-llama/Llama-2-7b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | "2024-05-26T03:00:20Z" | ---
language:
- en
license: llama2
tags:
- llama-cpp
- gguf-my-repo
base_model: meta-llama/Llama-2-7b
datasets:
- epfl-llm/guidelines
metrics:
- accuracy
- perplexity
---
# chrohi/meditron-7b-Q8_0-GGUF
This model was converted to GGUF format from [`epfl-llm/meditron-7b`](https://huggingface.co/epfl-llm/meditron-7b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/epfl-llm/meditron-7b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo chrohi/meditron-7b-Q8_0-GGUF --model meditron-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo chrohi/meditron-7b-Q8_0-GGUF --model meditron-7b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m meditron-7b-q8_0.gguf -n 128
```
|
minkhantycc/translation-en-ja | minkhantycc | "2024-03-20T05:41:04Z" | 122 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"ja",
"en",
"dataset:bsd_ja_en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2024-02-08T08:08:41Z" | ---
language:
- ja
- en
license: mit
library_name: transformers
datasets:
- bsd_ja_en
metrics:
- sacrebleu
pipeline_tag: translation
widget:
- text: おはいよ。
- text: 僕の国で とても 綺麗な 所が 有ります。
---
This model is the fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on bsd_ja_en dataset.
This will translate Japanese sentences to English sentences. |
LoneStriker/Marcoroni-7B-v3-8.0bpw-h8-exl2-2 | LoneStriker | "2023-12-11T06:53:55Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-11T06:49:28Z" | ---
license: apache-2.0
---
# Marcoroni-7B-v3
<img src="https://cdn-uploads.huggingface.co/production/uploads/637aebed7ce76c3b834cea37/20uN0wMu2zTyVGgXV9PIo.png" width = 60%>
# Updates
December 11, 2023:
Marcoroni-7B-v3 has placed **#5** overall and **#1** for 7 billion parameter models on the [Hugging Face Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)!
# Model Details
* **Trained by**: trained by AIDC AI-Business.
* **Model type:** **Marcoroni-7B-v3** is an auto-regressive language model based on mistralai/Mistral-7B-v0.1.
* **Language(s)**: English
This is a DPO fine tuned model of [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling).
We fine-tuned using 32k data generated by GPT-4 and other models.
# Prompting
## Prompt Template for alpaca style
```
### Instruction:
<prompt> (without the <>)
### Response:
```
|
RapGang23/SchnitzelGang | RapGang23 | "2023-08-03T06:50:34Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-08-01T01:35:44Z" | ---
license: openrail
---
|
Inzamam567/Useless-7pa | Inzamam567 | "2023-03-31T22:42:57Z" | 11 | 3 | diffusers | [
"diffusers",
"stable-diffusion",
"aiartchan",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-03-31T22:07:14Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- aiartchan
duplicated_from: AIARTCHAN/7pa
---
# 7pa
[원본글](https://arca.live/b/aiart/70729603)
[civitai](https://civitai.com/models/13468)
# Download
- [original 4.27GB](https://civitai.com/api/download/models/15869)
- [fp16 2.13GB](https://huggingface.co/AIARTCHAN/7pa/blob/main/7pa-fp16.safetensors)
7th anime v3 + 파스텔 + 어비스오렌지2(sfw)
![img](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/bd98ef9d-bc89-432d-63f3-f082a6cee100/width=1152/159580)
![img](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/9c597e94-9ffd-4f00-1b47-ee1ef47f2900/width=1016/159577)
![img](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/63751d98-ef9b-473a-4987-eb8f0ae14700/width=864/159578)
![img](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/43e8823f-90b3-424c-268a-cd19790f0b00/width=1152/159579)
|
usc-isi/sbert-roberta-large-anli-mnli-snli | usc-isi | "2021-12-05T21:04:27Z" | 8 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:anli",
"dataset:multi_nli",
"dataset:snli",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-03-02T23:29:05Z" | ---
language:
- en
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- anli
- multi_nli
- snli
---
# sbert-roberta-large-anli-mnli-snli
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
The model is weight initialized by RoBERTa-large and trained on ANLI (Nie et al., 2020), MNLI (Williams et al., 2018), and SNLI (Bowman et al., 2015) using the [`training_nli.py`](https://github.com/UKPLab/sentence-transformers/blob/v0.3.5/examples/training/nli/training_nli.py) example script.
Training Details:
- Learning rate: 2e-5
- Batch size: 8
- Pooling: Mean
- Training time: ~20 hours on one [NVIDIA GeForce RTX 2080 Ti](https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```bash
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer("usc-isi/sbert-roberta-large-anli-mnli-snli")
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (Hugging Face Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
import torch
from transformers import AutoModel, AutoTokenizer
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] # First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["This is an example sentence", "Each sentence is converted"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("usc-isi/sbert-roberta-large-anli-mnli-snli")
model = AutoModel.from_pretrained("usc-isi/sbert-roberta-large-anli-mnli-snli")
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input["attention_mask"])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
See section 4.1 of our paper for evaluation results.
## Full Model Architecture
```text
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
For more information about the project, see our paper:
> Ciosici, Manuel, et al. "Machine-Assisted Script Curation." _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations_, Association for Computational Linguistics, 2021, pp. 8–17. _ACLWeb_, <https://www.aclweb.org/anthology/2021.naacl-demos.2>.
## References
- Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. [A large annotated corpus for learning natural language inference](https://doi.org/10.18653/v1/D15-1075). In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
- Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. [AdversarialNLI: A new benchmark for natural language understanding](https://doi.org/10.18653/v1/2020.acl-main.441). In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 4885–4901, Online. Association for Computational Linguistics.
- Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. [A broad-coverage challenge corpus for sentence understanding through inference](https://doi.org/10.18653/v1/N18-1101). In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_, pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
|
yadibolt/unsloth-base-llama3-instruct | yadibolt | "2024-07-12T07:29:13Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-12T07:23:47Z" | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** yadibolt
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
s3nh/ToolBench-ToolLLaMA-2-7b-GGML | s3nh | "2023-08-08T09:48:59Z" | 0 | 3 | transformers | [
"transformers",
"text-generation",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-08-08T09:38:02Z" | ---
license: openrail
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/ToolBench/ToolLLaMA-2-7b).
### inference
```python
import ctransformers
from ctransformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
max_new_tokens=256,
temperature=0.9,
top_p= 0.7)
```
# Model Card for Model ID
This is ToolLLaMA-2-7b version model introduced in [ToolBench](https://github.com/OpenBMB/ToolBench).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **License:** llama2
- **Finetuned from model [optional]:** LLaMA-2-7b-hf
## Uses
Refer to [ToolBench](https://github.com/OpenBMB/ToolBench).
## Training Details
Trained with the new version data in ToolBench. |
polejowska/swin-tiny-patch4-window7-224-lcbsi-wbc | polejowska | "2022-12-13T22:06:57Z" | 41 | 0 | transformers | [
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-12-13T21:08:19Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-lcbsi-wbc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-lcbsi-wbc
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0307
- Accuracy: 0.9933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3668 | 0.98 | 27 | 0.6977 | 0.76 |
| 0.217 | 1.98 | 54 | 0.0992 | 0.972 |
| 0.102 | 2.98 | 81 | 0.0573 | 0.9853 |
| 0.0762 | 3.98 | 108 | 0.1003 | 0.976 |
| 0.0456 | 4.98 | 135 | 0.0307 | 0.9933 |
| 0.0219 | 5.98 | 162 | 0.0497 | 0.9907 |
| 0.0106 | 6.98 | 189 | 0.0568 | 0.9867 |
| 0.0112 | 7.98 | 216 | 0.0532 | 0.9907 |
| 0.0067 | 8.98 | 243 | 0.0528 | 0.9907 |
| 0.008 | 9.98 | 270 | 0.0482 | 0.992 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
quangtqv/mxbai_rerank_turbo_official_12_8 | quangtqv | "2024-08-12T03:33:15Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-08-12T03:32:44Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
declare-lab/tango-full | declare-lab | "2024-06-10T16:20:47Z" | 55 | 12 | transformers | [
"transformers",
"music",
"text-to-audio",
"en",
"dataset:declare-lab/TangoPromptBank",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2023-05-30T10:27:30Z" | ---
license: cc-by-nc-sa-4.0
datasets:
- declare-lab/TangoPromptBank
language:
- en
tags:
- music
pipeline_tag: text-to-audio
---
# TANGO: Text to Audio using iNstruction-Guided diffusiOn
**TANGO** is a latent diffusion model for text-to-audio generation.
**TANGO** can generate realistic audios including human sounds, animal sounds, natural and artificial sounds and sound effects from textual prompts. We use the frozen instruction-tuned LLM Flan-T5 as the text encoder and train a UNet based diffusion model for audio generation. We outperform current state-of-the-art models for audio generation across both objective and subjective metrics. We release our model, training, inference code and pre-trained checkpoints for the research community.
📣 We recently released **Tango 2**. Access it [here](https://huggingface.co/declare-lab/tango2).
📣 We are releasing **Tango-Full** which was pre-trained on **TangoPromptBank**.
## Code
Our code is released here: [https://github.com/declare-lab/tango](https://github.com/declare-lab/tango)
We uploaded several **TANGO** generated samples here: [https://tango-web.github.io/](https://tango-web.github.io/)
Please follow the instructions in the repository for installation, usage and experiments.
## Quickstart Guide
Download the **TANGO** model and generate audio from a text prompt:
```python
import IPython
import soundfile as sf
from tango import Tango
tango = Tango("declare-lab/tango-full-ft-audiocaps")
prompt = "An audience cheering and clapping"
audio = tango.generate(prompt)
sf.write(f"{prompt}.wav", audio, samplerate=16000)
IPython.display.Audio(data=audio, rate=16000)
```
[An audience cheering and clapping.webm](https://user-images.githubusercontent.com/13917097/233851915-e702524d-cd35-43f7-93e0-86ea579231a7.webm)
The model will be automatically downloaded and saved in cache. Subsequent runs will load the model directly from cache.
The `generate` function uses 100 steps by default to sample from the latent diffusion model. We recommend using 200 steps for generating better quality audios. This comes at the cost of increased run-time.
```python
prompt = "Rolling thunder with lightning strikes"
audio = tango.generate(prompt, steps=200)
IPython.display.Audio(data=audio, rate=16000)
```
[Rolling thunder with lightning strikes.webm](https://user-images.githubusercontent.com/13917097/233851929-90501e41-911d-453f-a00b-b215743365b4.webm)
<!-- [MachineClicking](https://user-images.githubusercontent.com/25340239/233857834-bfda52b4-4fcc-48de-b47a-6a6ddcb3671b.mp4 "sample 1") -->
Use the `generate_for_batch` function to generate multiple audio samples for a batch of text prompts:
```python
prompts = [
"A car engine revving",
"A dog barks and rustles with some clicking",
"Water flowing and trickling"
]
audios = tango.generate_for_batch(prompts, samples=2)
```
This will generate two samples for each of the three text prompts. |
amaai-lab/DisfluencySpeech_BenchmarkB | amaai-lab | "2024-06-07T10:42:08Z" | 12 | 0 | null | [
"text-to-speech",
"en",
"dataset:amaai-lab/DisfluencySpeech",
"license:mit",
"region:us"
] | text-to-speech | "2024-06-07T10:29:35Z" | ---
license: mit
datasets:
- amaai-lab/DisfluencySpeech
language:
- en
pipeline_tag: text-to-speech
---
# Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"amaai-lab/DisfluencySpeech_BenchmarkB",
arg_overrides={"vocoder": "hifigan", "fp16": False, "spec-bwd-max-iter": 32}
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator(models, cfg)
text = "Well, that's really funny, isn't it? What a strange world we live in."
sample = TTSHubInterface.get_model_input(task, text)
sample['net_input']['src_tokens'] = sample['net_input']['src_tokens'].cuda()
sample['net_input']['src_lengths'] = sample['net_input']['src_lengths'].cuda()
wav, rate = TTSHubInterface.get_prediction(task, model.cuda(), generator, sample)
ipd.Audio(wav.cpu(), rate=rate)
``` |
apbrault/my_awesome_model | apbrault | "2024-10-23T16:53:49Z" | 51 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-10-23T15:12:28Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2311
- Accuracy: 0.9310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2258 | 1.0 | 1563 | 0.2041 | 0.9205 |
| 0.1469 | 2.0 | 3126 | 0.2311 | 0.9310 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
hopkins/mbart-finetuned-eng-deu-23 | hopkins | "2023-07-02T22:47:30Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2023-07-02T22:28:51Z" | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-23
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6523
- Bleu: 20.8871
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
etagaca/verifai-detector-roberta | etagaca | "2023-03-28T04:02:35Z" | 78 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"chatgpt",
"en",
"dataset:Hello-SimpleAI/HC3",
"arxiv:2301.07597",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-28T03:32:21Z" | ---
datasets:
- Hello-SimpleAI/HC3
language:
- en
pipeline_tag: text-classification
tags:
- chatgpt
---
# Model Card for `Hello-SimpleAI/chatgpt-detector-roberta`
This model is trained on **the mix of full-text and splitted sentences** of `answer`s from [Hello-SimpleAI/HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3).
More details refer to [arxiv: 2301.07597](https://arxiv.org/abs/2301.07597) and Gtihub project [Hello-SimpleAI/chatgpt-comparison-detection](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection).
The base checkpoint is [roberta-base](https://huggingface.co/roberta-base).
We train it with all [Hello-SimpleAI/HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) data (without held-out) for 1 epoch.
(1-epoch is consistent with the experiments in [our paper](https://arxiv.org/abs/2301.07597).)
## Citation
Checkout this papaer [arxiv: 2301.07597](https://arxiv.org/abs/2301.07597)
```
@article{guo-etal-2023-hc3,
title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection",
author = "Guo, Biyang and
Zhang, Xin and
Wang, Ziyuan and
Jiang, Minqi and
Nie, Jinran and
Ding, Yuxuan and
Yue, Jianwei and
Wu, Yupeng",
journal={arXiv preprint arxiv:2301.07597}
year = "2023",
}
```
|
autoevaluate/roberta-base-squad2 | autoevaluate | "2022-07-20T13:11:11Z" | 28 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-07-19T13:30:23Z" | ---
language: en
datasets:
- squad_v2
license: cc-by-4.0
---
# roberta-base for QA
> Note: this is a clone of [`roberta-base-squad2`](https://huggingface.co/deepset/roberta-base-squad2) for internal testing.
This is the [roberta-base](https://huggingface.co/roberta-base) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 96
n_epochs = 2
base_LM_model = "roberta-base"
max_seq_len = 386
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Using a distilled model instead
Please note that we have also released a distilled version of this model called [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2). The distilled model has a comparable prediction quality and runs at twice the speed of the base model.
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2")
```
For a complete example of ``roberta-base-squad2`` being used for Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system)
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/roberta-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 79.87029394424324,
"f1": 82.91251169582613,
"total": 11873,
"HasAns_exact": 77.93522267206478,
"HasAns_f1": 84.02838248389763,
"HasAns_total": 5928,
"NoAns_exact": 81.79983179142137,
"NoAns_f1": 81.79983179142137,
"NoAns_total": 5945
```
Using the official [question answering notebook](https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb) from `transformers` yields:
```
{'HasAns_exact': 77.93522267206478,
'HasAns_f1': 83.93715663402219,
'HasAns_total': 5928,
'NoAns_exact': 81.90075693860386,
'NoAns_f1': 81.90075693860386,
'NoAns_total': 5945,
'best_exact': 79.92082877116145,
'best_exact_thresh': 0.0,
'best_f1': 82.91749890730902,
'best_f1_thresh': 0.0,
'exact': 79.92082877116145,
'f1': 82.91749890730917,
'total': 11873}
```
which is consistent with the officially reported results. Using the question answering `Evaluator` from `evaluate` gives:
```
{'HasAns_exact': 77.91835357624831,
'HasAns_f1': 84.07820736158186,
'HasAns_total': 5928,
'NoAns_exact': 81.91757779646763,
'NoAns_f1': 81.91757779646763,
'NoAns_total': 5945,
'best_exact': 79.92082877116145,
'best_exact_thresh': 0.996823787689209,
'best_f1': 82.99634576260925,
'best_f1_thresh': 0.996823787689209,
'exact': 79.92082877116145,
'f1': 82.9963457626089,
'latency_in_seconds': 0.016523243643392558,
'samples_per_second': 60.52080460605492,
'total': 11873,
'total_time_in_seconds': 196.18047177799986}
```
which is also consistent with the officially reported results.
## Authors
**Branden Chan:** [email protected]
**Timo Möller:** [email protected]
**Malte Pietsch:** [email protected]
**Tanay Soni:** [email protected]
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/haystack-logo-colored.svg" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/deepset-logo-colored.svg" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")]([https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join"><img alt="slack" class="h-7 inline-block m-0" style="margin: 0" src="https://huggingface.co/spaces/deepset/README/resolve/main/Slack_RGB.png"/>community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
tommylam/POCA-soccerTwos | tommylam | "2023-11-14T03:28:38Z" | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2023-11-14T03:22:13Z" | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: tommylam/POCA-soccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ARG-NCTU/detr-resnet-50-finetuned-100-epochs-real-lifebuoy-dataset | ARG-NCTU | "2024-10-24T11:30:38Z" | 203 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:ARG-NCTU/detr-resnet-50-finetuned-100-epochs-lifebuoy-dataset",
"base_model:finetune:ARG-NCTU/detr-resnet-50-finetuned-100-epochs-lifebuoy-dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | "2024-10-24T09:59:12Z" | ---
library_name: transformers
license: apache-2.0
base_model: ARG-NCTU/detr-resnet-50-finetuned-100-epochs-lifebuoy-dataset
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50-finetuned-100-epochs-real-lifebuoy-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-100-epochs-real-lifebuoy-dataset
This model is a fine-tuned version of [ARG-NCTU/detr-resnet-50-finetuned-100-epochs-lifebuoy-dataset](https://huggingface.co/ARG-NCTU/detr-resnet-50-finetuned-100-epochs-lifebuoy-dataset) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
lucnha/convnextv2-base-22k-224-finetuned-eurosat-2 | lucnha | "2024-02-18T01:22:31Z" | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"convnextv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/convnextv2-base-22k-224",
"base_model:finetune:facebook/convnextv2-base-22k-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-02-17T17:56:25Z" | ---
license: apache-2.0
base_model: facebook/convnextv2-base-22k-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnextv2-base-22k-224-finetuned-eurosat-2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9096045197740112
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-base-22k-224-finetuned-eurosat-2
This model is a fine-tuned version of [facebook/convnextv2-base-22k-224](https://huggingface.co/facebook/convnextv2-base-22k-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3801
- Accuracy: 0.9096
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5227 | 1.0 | 99 | 0.5333 | 0.7797 |
| 0.4248 | 1.99 | 198 | 0.4145 | 0.8531 |
| 0.2998 | 2.99 | 297 | 0.3307 | 0.8757 |
| 0.1704 | 4.0 | 397 | 0.2664 | 0.8927 |
| 0.0684 | 5.0 | 496 | 0.4353 | 0.8701 |
| 0.1546 | 5.99 | 595 | 0.3920 | 0.8870 |
| 0.0593 | 6.99 | 694 | 0.3801 | 0.9096 |
| 0.0745 | 8.0 | 794 | 0.4030 | 0.8983 |
| 0.0877 | 9.0 | 893 | 0.3846 | 0.9040 |
| 0.09 | 9.97 | 990 | 0.3816 | 0.9040 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu118
- Datasets 2.17.0
- Tokenizers 0.15.2
|
sail-rvc/inkling-boy | sail-rvc | "2023-07-14T07:38:33Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:38:16Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# inkling-boy
## RVC Model
![banner](https://i.imgur.com/xocCjhH.jpg)
This model repo was automatically generated.
Date: 2023-07-14 07:38:33
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
fuzzymazoid/ppo-Huggy | fuzzymazoid | "2023-03-26T22:04:01Z" | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2023-03-26T22:03:53Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: fuzzymazoid/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
madhutry/yolo-finetuned-98samples | madhutry | "2024-10-30T11:04:59Z" | 189 | 0 | transformers | [
"transformers",
"safetensors",
"yolos",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | "2024-10-30T11:04:42Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/EleutherAI_-_pythia-410m-v0-4bits | RichardErkhov | "2024-04-23T07:27:00Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-04-23T07:26:23Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-410m-v0 - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-410m-v0/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-410M
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-410M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-410M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not a in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-410M to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-410M.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
|
myhaaaaaaa/fa991909-b558-4139-a412-9cc1121f19b9 | myhaaaaaaa | "2025-01-17T15:24:51Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-17T15:21:29Z" | ---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fa991909-b558-4139-a412-9cc1121f19b9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7fac42cc059a4668_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7fac42cc059a4668_train_data.json
type:
field_instruction: label
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: myhaaaaaaa/fa991909-b558-4139-a412-9cc1121f19b9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/7fac42cc059a4668_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: da13383f-af60-4b76-b938-060410c0217e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: da13383f-af60-4b76-b938-060410c0217e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fa991909-b558-4139-a412-9cc1121f19b9
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8280
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 16.0588 | 0.3742 | 200 | 3.8280 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Helsinki-NLP/opus-mt-xh-sv | Helsinki-NLP | "2023-08-16T12:08:56Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"xh",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-xh-sv
* source languages: xh
* target languages: sv
* OPUS readme: [xh-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/xh-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/xh-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.xh.sv | 33.1 | 0.522 |
|
Emperor-WS/ppo-LunarLander-v2-u8 | Emperor-WS | "2023-08-01T17:26:39Z" | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-01T17:26:33Z" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -102.93 +/- 43.17
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 16
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 16
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Emperor-WS/ppo-CartPole-v1'
'batch_size': 2048
'minibatch_size': 128}
```
|
hlyu/co-condenser-marco-retriever_141011_mean | hlyu | "2023-04-10T19:59:51Z" | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-04-10T19:59:42Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# hlyu/co-condenser-marco-retriever_141011_mean
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('hlyu/co-condenser-marco-retriever_141011_mean')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hlyu/co-condenser-marco-retriever_141011_mean')
model = AutoModel.from_pretrained('hlyu/co-condenser-marco-retriever_141011_mean')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hlyu/co-condenser-marco-retriever_141011_mean)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5055 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 0.0001
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
asenella/ms_MoPoE_beta_25_scale_False_seed_1 | asenella | "2023-07-26T17:55:20Z" | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | "2023-07-14T06:41:50Z" | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
0x0son0/sl104 | 0x0son0 | "2024-04-15T14:48:46Z" | 2 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-15T13:55:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
taoyoung/distilbert-base-uncased-distilled-clinc | taoyoung | "2024-04-24T17:52:20Z" | 122 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-24T15:06:25Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2715
- Accuracy: 0.9465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2778 | 1.0 | 318 | 1.6183 | 0.7335 |
| 1.2551 | 2.0 | 636 | 0.8195 | 0.8681 |
| 0.6656 | 3.0 | 954 | 0.4786 | 0.9148 |
| 0.4077 | 4.0 | 1272 | 0.3549 | 0.9335 |
| 0.3012 | 5.0 | 1590 | 0.3083 | 0.9410 |
| 0.2553 | 6.0 | 1908 | 0.2912 | 0.9429 |
| 0.2336 | 7.0 | 2226 | 0.2805 | 0.9445 |
| 0.2217 | 8.0 | 2544 | 0.2754 | 0.9465 |
| 0.2154 | 9.0 | 2862 | 0.2720 | 0.9471 |
| 0.2122 | 10.0 | 3180 | 0.2715 | 0.9465 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu118
- Datasets 2.19.0
- Tokenizers 0.15.2
|
sail-rvc/SHUHUAGIDLE | sail-rvc | "2023-07-14T07:31:02Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:30:47Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# SHUHUAGIDLE
## RVC Model
![banner](https://i.imgur.com/xocCjhH.jpg)
This model repo was automatically generated.
Date: 2023-07-14 07:31:02
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
nat-hunt/b1833ca1-6f8c-498a-83c2-9373d3a4d8b7 | nat-hunt | "2025-02-02T16:37:25Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Instruct-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"region:us"
] | null | "2025-02-02T16:17:28Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Instruct-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b1833ca1-6f8c-498a-83c2-9373d3a4d8b7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# b1833ca1-6f8c-498a-83c2-9373d3a4d8b7
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
shi-labs/dinat-tiny-in1k-224 | shi-labs | "2022-11-18T23:11:09Z" | 99 | 0 | transformers | [
"transformers",
"pytorch",
"dinat",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2209.15001",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-11-18T22:07:23Z" | ---
license: mit
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# DiNAT (tiny variant)
DiNAT-Tiny trained on ImageNet-1K at 224x224 resolution.
It was introduced in the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Hassani et al. and first released in [this repository](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer).
## Model description
DiNAT is a hierarchical vision transformer based on Neighborhood Attention (NA) and its dilated variant (DiNA).
Neighborhood Attention is a restricted self attention pattern in which each token's receptive field is limited to its nearest neighboring pixels.
NA and DiNA are therefore sliding-window attention patterns, and as a result are highly flexible and maintain translational equivariance.
They come with PyTorch implementations through the [NATTEN](https://github.com/SHI-Labs/NATTEN/) package.
![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/dilated-neighborhood-attention-pattern.jpg)
[Source](https://paperswithcode.com/paper/dilated-neighborhood-attention-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=dinat) to look for
fine-tuned versions on a task that interests you.
### Example
Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, DinatForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoImageProcessor.from_pretrained("shi-labs/dinat-tiny-in1k-224")
model = DinatForImageClassification.from_pretrained("shi-labs/dinat-tiny-in1k-224")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more examples, please refer to the [documentation](https://huggingface.co/transformers/model_doc/dinat.html#).
### Requirements
Other than transformers, this model requires the [NATTEN](https://shi-labs.com/natten) package.
If you're on Linux, you can refer to [shi-labs.com/natten](https://shi-labs.com/natten) for instructions on installing with pre-compiled binaries (just select your torch build to get the correct wheel URL).
You can alternatively use `pip install natten` to compile on your device, which may take up to a few minutes.
Mac users only have the latter option (no pre-compiled binaries).
Refer to [NATTEN's GitHub](https://github.com/SHI-Labs/NATTEN/) for more information.
### BibTeX entry and citation info
```bibtex
@article{hassani2022dilated,
title = {Dilated Neighborhood Attention Transformer},
author = {Ali Hassani and Humphrey Shi},
year = 2022,
url = {https://arxiv.org/abs/2209.15001},
eprint = {2209.15001},
archiveprefix = {arXiv},
primaryclass = {cs.CV}
}
``` |
nielsr/segformer-finetuned-sidewalk-10k-steps | nielsr | "2024-12-24T11:57:20Z" | 193 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"segformer",
"image-segmentation",
"vision",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2022-04-20T07:21:41Z" | ---
license: apache-2.0
tags:
- image-segmentation
- vision
- generated_from_trainer
base_model: nvidia/mit-b0
model-index:
- name: segformer-finetuned-sidewalk-50-epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-finetuned-sidewalk-50-epochs
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6350
- Mean Iou: 0.3022
- Mean Accuracy: 0.3724
- Overall Accuracy: 0.8117
- Accuracy Unlabeled: nan
- Accuracy Flat-road: 0.8240
- Accuracy Flat-sidewalk: 0.8308
- Accuracy Flat-crosswalk: 0.7789
- Accuracy Flat-cyclinglane: 0.9052
- Accuracy Flat-parkingdriveway: 0.3152
- Accuracy Flat-railtrack: nan
- Accuracy Flat-curb: 0.4703
- Accuracy Human-person: 0.6444
- Accuracy Human-rider: 0.0
- Accuracy Vehicle-car: 0.9424
- Accuracy Vehicle-truck: 0.0
- Accuracy Vehicle-bus: 0.0
- Accuracy Vehicle-tramtrain: 0.0
- Accuracy Vehicle-motorcycle: 0.0
- Accuracy Vehicle-bicycle: 0.7116
- Accuracy Vehicle-caravan: 0.0
- Accuracy Vehicle-cartrailer: 0.0
- Accuracy Construction-building: 0.8716
- Accuracy Construction-door: 0.0
- Accuracy Construction-wall: 0.4736
- Accuracy Construction-fenceguardrail: 0.5408
- Accuracy Construction-bridge: 0.0
- Accuracy Construction-tunnel: nan
- Accuracy Construction-stairs: 0.0048
- Accuracy Object-pole: 0.4202
- Accuracy Object-trafficsign: 0.0754
- Accuracy Object-trafficlight: 0.0
- Accuracy Nature-vegetation: 0.9437
- Accuracy Nature-terrain: 0.8196
- Accuracy Sky: 0.9525
- Accuracy Void-ground: 0.0
- Accuracy Void-dynamic: 0.1041
- Accuracy Void-static: 0.2872
- Accuracy Void-unclear: 0.0
- Iou Unlabeled: nan
- Iou Flat-road: 0.7413
- Iou Flat-sidewalk: 0.7520
- Iou Flat-crosswalk: 0.7629
- Iou Flat-cyclinglane: 0.4453
- Iou Flat-parkingdriveway: 0.2976
- Iou Flat-railtrack: nan
- Iou Flat-curb: 0.3701
- Iou Human-person: 0.4953
- Iou Human-rider: 0.0
- Iou Vehicle-car: 0.7962
- Iou Vehicle-truck: 0.0
- Iou Vehicle-bus: 0.0
- Iou Vehicle-tramtrain: 0.0
- Iou Vehicle-motorcycle: 0.0
- Iou Vehicle-bicycle: 0.4152
- Iou Vehicle-caravan: 0.0
- Iou Vehicle-cartrailer: 0.0
- Iou Construction-building: 0.6712
- Iou Construction-door: 0.0
- Iou Construction-wall: 0.3749
- Iou Construction-fenceguardrail: 0.4613
- Iou Construction-bridge: 0.0
- Iou Construction-tunnel: nan
- Iou Construction-stairs: 0.0048
- Iou Object-pole: 0.2337
- Iou Object-trafficsign: 0.0753
- Iou Object-trafficlight: 0.0
- Iou Nature-vegetation: 0.8324
- Iou Nature-terrain: 0.7277
- Iou Sky: 0.9234
- Iou Void-ground: 0.0
- Iou Void-dynamic: 0.0913
- Iou Void-static: 0.1997
- Iou Void-unclear: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Flat-road | Accuracy Flat-sidewalk | Accuracy Flat-crosswalk | Accuracy Flat-cyclinglane | Accuracy Flat-parkingdriveway | Accuracy Flat-railtrack | Accuracy Flat-curb | Accuracy Human-person | Accuracy Human-rider | Accuracy Vehicle-car | Accuracy Vehicle-truck | Accuracy Vehicle-bus | Accuracy Vehicle-tramtrain | Accuracy Vehicle-motorcycle | Accuracy Vehicle-bicycle | Accuracy Vehicle-caravan | Accuracy Vehicle-cartrailer | Accuracy Construction-building | Accuracy Construction-door | Accuracy Construction-wall | Accuracy Construction-fenceguardrail | Accuracy Construction-bridge | Accuracy Construction-tunnel | Accuracy Construction-stairs | Accuracy Object-pole | Accuracy Object-trafficsign | Accuracy Object-trafficlight | Accuracy Nature-vegetation | Accuracy Nature-terrain | Accuracy Sky | Accuracy Void-ground | Accuracy Void-dynamic | Accuracy Void-static | Accuracy Void-unclear | Iou Unlabeled | Iou Flat-road | Iou Flat-sidewalk | Iou Flat-crosswalk | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-curb | Iou Human-person | Iou Human-rider | Iou Vehicle-car | Iou Vehicle-truck | Iou Vehicle-bus | Iou Vehicle-tramtrain | Iou Vehicle-motorcycle | Iou Vehicle-bicycle | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Construction-building | Iou Construction-door | Iou Construction-wall | Iou Construction-fenceguardrail | Iou Construction-bridge | Iou Construction-tunnel | Iou Construction-stairs | Iou Object-pole | Iou Object-trafficsign | Iou Object-trafficlight | Iou Nature-vegetation | Iou Nature-terrain | Iou Sky | Iou Void-ground | Iou Void-dynamic | Iou Void-static | Iou Void-unclear |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:------------------:|:----------------------:|:-----------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:---------------------:|:--------------------:|:--------------------:|:----------------------:|:--------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------:|:---------------------------:|:------------------------------:|:--------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:--------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------:|:--------------------:|:---------------------:|:--------------------:|:---------------------:|:-------------:|:-------------:|:-----------------:|:------------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:----------------:|:---------------:|:---------------:|:-----------------:|:---------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------:|:----------------------:|:-------------------------:|:---------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:---------------:|:----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------:|:---------------:|:----------------:|:---------------:|:----------------:|
| 2.4745 | 1.85 | 100 | 1.7861 | 0.1056 | 0.1555 | 0.6397 | nan | 0.2287 | 0.9278 | 0.0 | 0.1406 | 0.0032 | nan | 0.0 | 0.0 | 0.0 | 0.7757 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8764 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8387 | 0.8794 | 0.3057 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.1931 | 0.6432 | 0.0 | 0.1380 | 0.0031 | nan | 0.0 | 0.0 | 0.0 | 0.5312 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4482 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6323 | 0.4860 | 0.3053 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7294 | 3.7 | 200 | 1.3129 | 0.1517 | 0.1996 | 0.7410 | nan | 0.7928 | 0.8830 | 0.0 | 0.6053 | 0.0089 | nan | 0.0 | 0.0 | 0.0 | 0.7837 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8530 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9138 | 0.7742 | 0.7740 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5519 | 0.7788 | 0.0 | 0.5131 | 0.0088 | nan | 0.0 | 0.0 | 0.0 | 0.5804 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5005 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6747 | 0.5247 | 0.7209 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4479 | 5.56 | 300 | 1.1309 | 0.1608 | 0.2113 | 0.7588 | nan | 0.7973 | 0.9008 | 0.0 | 0.7721 | 0.0269 | nan | 0.0 | 0.0 | 0.0 | 0.8744 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8581 | 0.0 | 0.0007 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8622 | 0.8707 | 0.7985 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5861 | 0.7816 | 0.0 | 0.5877 | 0.0261 | nan | 0.0 | 0.0 | 0.0 | 0.6119 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5582 | 0.0 | 0.0007 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7024 | 0.5206 | 0.7706 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.2348 | 7.41 | 400 | 0.9644 | 0.1707 | 0.2170 | 0.7736 | nan | 0.8125 | 0.9218 | 0.0 | 0.7596 | 0.1081 | nan | 0.0000 | 0.0 | 0.0 | 0.9080 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8280 | 0.0 | 0.0334 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8856 | 0.8260 | 0.8612 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6003 | 0.7937 | 0.0 | 0.6538 | 0.0997 | nan | 0.0000 | 0.0 | 0.0 | 0.6189 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5731 | 0.0 | 0.0330 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7147 | 0.5601 | 0.8139 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.0762 | 9.26 | 500 | 0.8819 | 0.1722 | 0.2159 | 0.7748 | nan | 0.7512 | 0.9353 | 0.0 | 0.7565 | 0.1204 | nan | 0.0016 | 0.0 | 0.0 | 0.9115 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8689 | 0.0 | 0.0565 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9098 | 0.7664 | 0.8303 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5993 | 0.7850 | 0.0 | 0.6536 | 0.1052 | nan | 0.0016 | 0.0 | 0.0 | 0.6377 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5767 | 0.0 | 0.0547 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7285 | 0.5709 | 0.7984 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9933 | 11.11 | 600 | 0.8347 | 0.1814 | 0.2263 | 0.7822 | nan | 0.8064 | 0.9111 | 0.0 | 0.7880 | 0.1443 | nan | 0.0436 | 0.0 | 0.0 | 0.8944 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8970 | 0.0 | 0.1914 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9053 | 0.8080 | 0.8526 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6088 | 0.8045 | 0.0 | 0.6845 | 0.1255 | nan | 0.0419 | 0.0 | 0.0 | 0.6594 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5548 | 0.0 | 0.1585 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7440 | 0.6068 | 0.8176 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9424 | 12.96 | 700 | 0.8428 | 0.1824 | 0.2271 | 0.7704 | nan | 0.6767 | 0.9270 | 0.0475 | 0.7655 | 0.1322 | nan | 0.2020 | 0.0189 | 0.0 | 0.8410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.9205 | 0.0 | 0.2568 | 0.0 | 0.0 | nan | 0.0 | 0.0023 | 0.0 | 0.0 | 0.8994 | 0.7347 | 0.8413 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5838 | 0.7914 | 0.0475 | 0.6091 | 0.1095 | nan | 0.1597 | 0.0185 | 0.0 | 0.6706 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5131 | 0.0 | 0.1872 | 0.0 | 0.0 | nan | 0.0 | 0.0023 | 0.0 | 0.0 | 0.7525 | 0.5837 | 0.8077 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8673 | 14.81 | 800 | 0.7934 | 0.2089 | 0.2509 | 0.7818 | nan | 0.6854 | 0.9394 | 0.7072 | 0.7240 | 0.1504 | nan | 0.2013 | 0.0186 | 0.0 | 0.9071 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.9037 | 0.0 | 0.3110 | 0.0 | 0.0 | nan | 0.0 | 0.0108 | 0.0 | 0.0 | 0.8990 | 0.7171 | 0.8513 | 0.0 | 0.0 | 0.0013 | 0.0 | nan | 0.5914 | 0.7755 | 0.6900 | 0.6673 | 0.1340 | nan | 0.1542 | 0.0183 | 0.0 | 0.6792 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5639 | 0.0 | 0.2172 | 0.0 | 0.0 | nan | 0.0 | 0.0100 | 0.0 | 0.0 | 0.7615 | 0.6014 | 0.8192 | 0.0 | 0.0 | 0.0013 | 0.0 |
| 0.8126 | 16.67 | 900 | 0.7484 | 0.2268 | 0.2784 | 0.7940 | nan | 0.6791 | 0.9397 | 0.7812 | 0.8009 | 0.1532 | nan | 0.3244 | 0.2962 | 0.0 | 0.9018 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8567 | 0.0 | 0.4772 | 0.0002 | 0.0 | nan | 0.0 | 0.0834 | 0.0 | 0.0 | 0.8992 | 0.8280 | 0.8837 | 0.0 | 0.0 | 0.0032 | 0.0 | nan | 0.6303 | 0.7968 | 0.7079 | 0.6095 | 0.1396 | nan | 0.2196 | 0.2638 | 0.0 | 0.7100 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6016 | 0.0 | 0.2860 | 0.0002 | 0.0 | nan | 0.0 | 0.0570 | 0.0 | 0.0 | 0.7678 | 0.6211 | 0.8416 | 0.0 | 0.0 | 0.0032 | 0.0 |
| 0.7989 | 18.52 | 1000 | 0.7241 | 0.2279 | 0.2803 | 0.8018 | nan | 0.7224 | 0.9402 | 0.7875 | 0.8234 | 0.1793 | nan | 0.3763 | 0.1974 | 0.0 | 0.9259 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8911 | 0.0 | 0.3994 | 0.0029 | 0.0 | nan | 0.0 | 0.0758 | 0.0 | 0.0 | 0.8619 | 0.8774 | 0.8854 | 0.0 | 0.0 | 0.0225 | 0.0 | nan | 0.6579 | 0.8292 | 0.7198 | 0.6924 | 0.1660 | nan | 0.2392 | 0.1794 | 0.0 | 0.6748 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5766 | 0.0 | 0.2654 | 0.0029 | 0.0 | nan | 0.0 | 0.0636 | 0.0 | 0.0 | 0.7582 | 0.5994 | 0.8455 | 0.0 | 0.0 | 0.0220 | 0.0 |
| 0.7429 | 20.37 | 1100 | 0.7321 | 0.2276 | 0.2862 | 0.7876 | nan | 0.8321 | 0.8491 | 0.7958 | 0.8572 | 0.2216 | nan | 0.3030 | 0.2864 | 0.0 | 0.9456 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8668 | 0.0 | 0.3757 | 0.0040 | 0.0 | nan | 0.0 | 0.1140 | 0.0 | 0.0 | 0.8839 | 0.8499 | 0.9228 | 0.0 | 0.0 | 0.0505 | 0.0 | nan | 0.6678 | 0.7848 | 0.7342 | 0.5048 | 0.1995 | nan | 0.2316 | 0.2463 | 0.0 | 0.6379 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5916 | 0.0 | 0.2668 | 0.0040 | 0.0 | nan | 0.0 | 0.0820 | 0.0 | 0.0 | 0.7827 | 0.6428 | 0.8583 | 0.0 | 0.0 | 0.0465 | 0.0 |
| 0.7131 | 22.22 | 1200 | 0.7231 | 0.2377 | 0.2995 | 0.7870 | nan | 0.8306 | 0.8458 | 0.7952 | 0.8505 | 0.2218 | nan | 0.3614 | 0.5001 | 0.0 | 0.9504 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7598 | 0.0 | 0.5317 | 0.0405 | 0.0 | nan | 0.0 | 0.1381 | 0.0 | 0.0 | 0.9284 | 0.7938 | 0.9110 | 0.0 | 0.0 | 0.1262 | 0.0 | nan | 0.7038 | 0.7740 | 0.7537 | 0.4538 | 0.1996 | nan | 0.2521 | 0.3853 | 0.0 | 0.6576 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6157 | 0.0 | 0.3046 | 0.0404 | 0.0 | nan | 0.0 | 0.0921 | 0.0 | 0.0 | 0.7846 | 0.6383 | 0.8588 | 0.0 | 0.0 | 0.0911 | 0.0 |
| 0.6919 | 24.07 | 1300 | 0.6775 | 0.2361 | 0.2885 | 0.8013 | nan | 0.7728 | 0.9073 | 0.8010 | 0.8366 | 0.1547 | nan | 0.3070 | 0.3428 | 0.0 | 0.9272 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8568 | 0.0 | 0.5009 | 0.0736 | 0.0 | nan | 0.0 | 0.0975 | 0.0 | 0.0 | 0.9297 | 0.7567 | 0.8978 | 0.0 | 0.0 | 0.0682 | 0.0 | nan | 0.6564 | 0.7929 | 0.6932 | 0.6396 | 0.1438 | nan | 0.2385 | 0.2888 | 0.0 | 0.6807 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6085 | 0.0 | 0.3114 | 0.0729 | 0.0 | nan | 0.0 | 0.0803 | 0.0 | 0.0 | 0.7857 | 0.6403 | 0.8601 | 0.0 | 0.0 | 0.0610 | 0.0 |
| 0.68 | 25.93 | 1400 | 0.6321 | 0.2575 | 0.3109 | 0.8181 | nan | 0.7851 | 0.9362 | 0.8041 | 0.8438 | 0.1694 | nan | 0.3956 | 0.5626 | 0.0 | 0.9306 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8313 | 0.0 | 0.5073 | 0.2728 | 0.0 | nan | 0.0 | 0.1741 | 0.0 | 0.0 | 0.9221 | 0.7899 | 0.9071 | 0.0 | 0.0 | 0.1157 | 0.0 | nan | 0.6781 | 0.8336 | 0.7386 | 0.7047 | 0.1564 | nan | 0.2789 | 0.4291 | 0.0 | 0.6934 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6062 | 0.0 | 0.3305 | 0.2579 | 0.0 | nan | 0.0 | 0.1228 | 0.0 | 0.0 | 0.7952 | 0.6651 | 0.8631 | 0.0 | 0.0 | 0.0865 | 0.0 |
| 0.6644 | 27.78 | 1500 | 0.6568 | 0.2555 | 0.3132 | 0.8074 | nan | 0.7687 | 0.9014 | 0.7631 | 0.8302 | 0.1869 | nan | 0.4841 | 0.4880 | 0.0 | 0.9294 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.8139 | 0.0 | 0.5482 | 0.3042 | 0.0 | nan | 0.0 | 0.1974 | 0.0 | 0.0 | 0.9225 | 0.8543 | 0.9042 | 0.0 | 0.0 | 0.1259 | 0.0 | nan | 0.6723 | 0.8030 | 0.7443 | 0.5873 | 0.1742 | nan | 0.3013 | 0.3813 | 0.0 | 0.7117 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.6159 | 0.0 | 0.3289 | 0.2810 | 0.0 | nan | 0.0 | 0.1295 | 0.0 | 0.0 | 0.8015 | 0.6848 | 0.8665 | 0.0 | 0.0 | 0.0931 | 0.0 |
| 0.6153 | 29.63 | 1600 | 0.6157 | 0.2586 | 0.3131 | 0.8188 | nan | 0.8000 | 0.9242 | 0.7980 | 0.8445 | 0.1758 | nan | 0.4143 | 0.6256 | 0.0 | 0.9155 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.8792 | 0.0 | 0.4465 | 0.2182 | 0.0 | nan | 0.0 | 0.1970 | 0.0 | 0.0 | 0.9111 | 0.8171 | 0.9368 | 0.0 | 0.0 | 0.1136 | 0.0 | nan | 0.6844 | 0.8212 | 0.7565 | 0.6537 | 0.1636 | nan | 0.2857 | 0.4354 | 0.0 | 0.7222 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.6274 | 0.0 | 0.3217 | 0.2147 | 0.0 | nan | 0.0 | 0.1313 | 0.0 | 0.0 | 0.8082 | 0.6809 | 0.8737 | 0.0 | 0.0 | 0.0926 | 0.0 |
| 0.6154 | 31.48 | 1700 | 0.6397 | 0.2621 | 0.3204 | 0.8117 | nan | 0.8357 | 0.8840 | 0.7908 | 0.8465 | 0.2590 | nan | 0.4050 | 0.5401 | 0.0 | 0.9393 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0105 | 0.0 | 0.0 | 0.8169 | 0.0 | 0.4733 | 0.3188 | 0.0 | nan | 0.0 | 0.2505 | 0.0 | 0.0 | 0.9181 | 0.8473 | 0.9287 | 0.0 | 0.0 | 0.1890 | 0.0 | nan | 0.6774 | 0.8042 | 0.7524 | 0.5662 | 0.2300 | nan | 0.2971 | 0.4050 | 0.0 | 0.6970 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0105 | 0.0 | 0.0 | 0.6489 | 0.0 | 0.3454 | 0.3058 | 0.0 | nan | 0.0 | 0.1441 | 0.0 | 0.0 | 0.8074 | 0.6913 | 0.8820 | 0.0 | 0.0 | 0.1224 | 0.0 |
| 0.6305 | 33.33 | 1800 | 0.6131 | 0.2641 | 0.3212 | 0.8194 | nan | 0.8171 | 0.8984 | 0.8212 | 0.8462 | 0.2582 | nan | 0.5051 | 0.5504 | 0.0 | 0.9421 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0221 | 0.0 | 0.0 | 0.8777 | 0.0 | 0.3528 | 0.3169 | 0.0 | nan | 0.0 | 0.2249 | 0.0 | 0.0 | 0.9203 | 0.8499 | 0.9175 | 0.0 | 0.0 | 0.1587 | 0.0 | nan | 0.7209 | 0.8195 | 0.7546 | 0.6166 | 0.2267 | nan | 0.3408 | 0.4000 | 0.0 | 0.6906 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0221 | 0.0 | 0.0 | 0.6055 | 0.0 | 0.2823 | 0.3044 | 0.0 | nan | 0.0 | 0.1545 | 0.0 | 0.0 | 0.8124 | 0.6994 | 0.8799 | 0.0 | 0.0 | 0.1204 | 0.0 |
| 0.6083 | 35.19 | 1900 | 0.6224 | 0.2646 | 0.3182 | 0.8171 | nan | 0.7473 | 0.9297 | 0.7826 | 0.8269 | 0.2162 | nan | 0.4556 | 0.4982 | 0.0 | 0.9169 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0865 | 0.0 | 0.0 | 0.9031 | 0.0 | 0.3618 | 0.3583 | 0.0 | nan | 0.0 | 0.2603 | 0.0 | 0.0 | 0.8966 | 0.8828 | 0.9016 | 0.0 | 0.0 | 0.1587 | 0.0 | nan | 0.6824 | 0.8210 | 0.7645 | 0.5950 | 0.2019 | nan | 0.3166 | 0.3895 | 0.0 | 0.7307 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0853 | 0.0 | 0.0 | 0.6063 | 0.0 | 0.2860 | 0.3200 | 0.0 | nan | 0.0 | 0.1659 | 0.0 | 0.0 | 0.8188 | 0.7017 | 0.8695 | 0.0 | 0.0 | 0.1113 | 0.0 |
| 0.5847 | 37.04 | 2000 | 0.5906 | 0.2713 | 0.3209 | 0.8281 | nan | 0.7374 | 0.9612 | 0.7764 | 0.8195 | 0.2033 | nan | 0.4219 | 0.4950 | 0.0 | 0.9339 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0960 | 0.0 | 0.0 | 0.8434 | 0.0 | 0.4552 | 0.4437 | 0.0 | nan | 0.0 | 0.2250 | 0.0 | 0.0 | 0.9315 | 0.8612 | 0.9071 | 0.0 | 0.0 | 0.1567 | 0.0 | nan | 0.6883 | 0.8311 | 0.7525 | 0.6838 | 0.1851 | nan | 0.3228 | 0.3780 | 0.0 | 0.7236 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0944 | 0.0 | 0.0 | 0.6338 | 0.0 | 0.3408 | 0.3853 | 0.0 | nan | 0.0 | 0.1586 | 0.0 | 0.0 | 0.8104 | 0.6978 | 0.8800 | 0.0 | 0.0 | 0.1162 | 0.0 |
| 0.5764 | 38.89 | 2100 | 0.6088 | 0.2752 | 0.3225 | 0.8255 | nan | 0.7525 | 0.9472 | 0.7709 | 0.8441 | 0.2134 | nan | 0.3932 | 0.5383 | 0.0 | 0.9030 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3470 | 0.0 | 0.0 | 0.9195 | 0.0 | 0.3310 | 0.3215 | 0.0 | nan | 0.0 | 0.2234 | 0.0 | 0.0 | 0.9289 | 0.7964 | 0.9280 | 0.0 | 0.0 | 0.1604 | 0.0 | nan | 0.6993 | 0.8276 | 0.7546 | 0.7234 | 0.1997 | nan | 0.3005 | 0.4222 | 0.0 | 0.7348 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3123 | 0.0 | 0.0 | 0.5918 | 0.0 | 0.2787 | 0.3037 | 0.0 | nan | 0.0 | 0.1585 | 0.0 | 0.0 | 0.8124 | 0.6781 | 0.8844 | 0.0 | 0.0 | 0.1247 | 0.0 |
| 0.5787 | 40.74 | 2200 | 0.5706 | 0.2824 | 0.3351 | 0.8347 | nan | 0.8178 | 0.9369 | 0.8003 | 0.8511 | 0.2352 | nan | 0.4838 | 0.5417 | 0.0 | 0.9025 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3689 | 0.0 | 0.0 | 0.8739 | 0.0 | 0.4493 | 0.4040 | 0.0 | nan | 0.0 | 0.2524 | 0.0 | 0.0 | 0.9422 | 0.8182 | 0.9183 | 0.0 | 0.0 | 0.1276 | 0.0 | nan | 0.7292 | 0.8432 | 0.7669 | 0.6897 | 0.2161 | nan | 0.3484 | 0.4230 | 0.0 | 0.7519 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3045 | 0.0 | 0.0 | 0.6407 | 0.0 | 0.3373 | 0.3491 | 0.0 | nan | 0.0 | 0.1557 | 0.0 | 0.0 | 0.8080 | 0.6803 | 0.8850 | 0.0 | 0.0 | 0.1068 | 0.0 |
| 0.5724 | 42.59 | 2300 | 0.7562 | 0.2740 | 0.3479 | 0.7662 | nan | 0.8734 | 0.7169 | 0.7809 | 0.8847 | 0.2838 | nan | 0.3742 | 0.6758 | 0.0 | 0.9339 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6048 | 0.0 | 0.0 | 0.8535 | 0.0 | 0.4435 | 0.4729 | 0.0 | nan | 0.0 | 0.2817 | 0.0 | 0.0 | 0.9149 | 0.8765 | 0.9329 | 0.0 | 0.0 | 0.2292 | 0.0 | nan | 0.7041 | 0.6683 | 0.7628 | 0.3371 | 0.2575 | nan | 0.2878 | 0.4639 | 0.0 | 0.7454 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4190 | 0.0 | 0.0 | 0.6387 | 0.0 | 0.3357 | 0.3997 | 0.0 | nan | 0.0 | 0.1776 | 0.0 | 0.0 | 0.8183 | 0.7106 | 0.8911 | 0.0 | 0.0 | 0.1516 | 0.0 |
| 0.556 | 44.44 | 2400 | 0.7350 | 0.2665 | 0.3366 | 0.7813 | nan | 0.7897 | 0.7888 | 0.8022 | 0.8878 | 0.2389 | nan | 0.4270 | 0.4859 | 0.0 | 0.9401 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4618 | 0.0 | 0.0 | 0.8866 | 0.0 | 0.3979 | 0.5050 | 0.0 | nan | 0.0 | 0.2580 | 0.0 | 0.0 | 0.9097 | 0.8627 | 0.9337 | 0.0 | 0.0 | 0.1948 | 0.0 | nan | 0.6902 | 0.7286 | 0.7779 | 0.3964 | 0.2231 | nan | 0.3011 | 0.3626 | 0.0 | 0.7078 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3485 | 0.0 | 0.0 | 0.6171 | 0.0 | 0.3044 | 0.3372 | 0.0 | nan | 0.0 | 0.1812 | 0.0 | 0.0 | 0.8195 | 0.7011 | 0.8947 | 0.0 | 0.0 | 0.1378 | 0.0 |
| 0.5599 | 46.3 | 2500 | 0.5949 | 0.2846 | 0.3464 | 0.8215 | nan | 0.7919 | 0.9145 | 0.7935 | 0.8679 | 0.2189 | nan | 0.3795 | 0.5589 | 0.0 | 0.9334 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5627 | 0.0 | 0.0 | 0.8536 | 0.0 | 0.4394 | 0.4730 | 0.0 | nan | 0.0 | 0.3260 | 0.0 | 0.0 | 0.9098 | 0.8344 | 0.9487 | 0.0 | 0.0 | 0.2801 | 0.0 | nan | 0.6901 | 0.8199 | 0.7749 | 0.5729 | 0.2084 | nan | 0.3034 | 0.4321 | 0.0 | 0.7422 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4230 | 0.0 | 0.0 | 0.6491 | 0.0 | 0.3237 | 0.3989 | 0.0 | nan | 0.0 | 0.1963 | 0.0 | 0.0 | 0.8232 | 0.7048 | 0.8949 | 0.0 | 0.0 | 0.1489 | 0.0 |
| 0.5368 | 48.15 | 2600 | 0.6125 | 0.2829 | 0.3502 | 0.8211 | nan | 0.7798 | 0.9034 | 0.7913 | 0.9079 | 0.2587 | nan | 0.3407 | 0.6423 | 0.0 | 0.9351 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6794 | 0.0 | 0.0 | 0.8554 | 0.0 | 0.3996 | 0.4884 | 0.0 | nan | 0.0 | 0.2870 | 0.0 | 0.0 | 0.9271 | 0.8698 | 0.9424 | 0.0 | 0.0 | 0.1992 | 0.0 | nan | 0.6878 | 0.8122 | 0.7578 | 0.5597 | 0.2427 | nan | 0.2680 | 0.4737 | 0.0 | 0.7517 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3649 | 0.0 | 0.0 | 0.6557 | 0.0 | 0.3130 | 0.4117 | 0.0 | nan | 0.0 | 0.1847 | 0.0 | 0.0 | 0.8236 | 0.7137 | 0.8969 | 0.0 | 0.0 | 0.1361 | 0.0 |
| 0.5391 | 50.0 | 2700 | 0.5993 | 0.2877 | 0.3507 | 0.8242 | nan | 0.8174 | 0.8948 | 0.8094 | 0.8896 | 0.2730 | nan | 0.4105 | 0.5570 | 0.0 | 0.9164 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5439 | 0.0 | 0.0 | 0.8772 | 0.0 | 0.5070 | 0.5443 | 0.0 | nan | 0.0 | 0.2691 | 0.0 | 0.0 | 0.9205 | 0.8660 | 0.8975 | 0.0 | 0.0 | 0.2294 | 0.0 | nan | 0.7059 | 0.8214 | 0.7578 | 0.5803 | 0.2537 | nan | 0.2892 | 0.4308 | 0.0 | 0.7548 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4363 | 0.0 | 0.0 | 0.6490 | 0.0 | 0.3579 | 0.4224 | 0.0 | nan | 0.0 | 0.1927 | 0.0 | 0.0 | 0.8239 | 0.7040 | 0.8748 | 0.0 | 0.0 | 0.1516 | 0.0 |
| 0.5041 | 51.85 | 2800 | 0.5912 | 0.2859 | 0.3493 | 0.8264 | nan | 0.7593 | 0.9248 | 0.8029 | 0.8780 | 0.2945 | nan | 0.3718 | 0.6308 | 0.0 | 0.9078 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6667 | 0.0 | 0.0 | 0.8945 | 0.0 | 0.3362 | 0.4834 | 0.0 | nan | 0.0 | 0.3167 | 0.0 | 0.0 | 0.9255 | 0.8641 | 0.9382 | 0.0 | 0.0 | 0.1836 | 0.0 | nan | 0.6993 | 0.8205 | 0.7232 | 0.5789 | 0.2712 | nan | 0.2852 | 0.4872 | 0.0 | 0.7747 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3825 | 0.0 | 0.0 | 0.6382 | 0.0 | 0.2862 | 0.4138 | 0.0 | nan | 0.0 | 0.2019 | 0.0 | 0.0 | 0.8284 | 0.7271 | 0.8984 | 0.0 | 0.0 | 0.1316 | 0.0 |
| 0.5007 | 53.7 | 2900 | 0.6220 | 0.2839 | 0.3577 | 0.8134 | nan | 0.7302 | 0.8903 | 0.8180 | 0.9098 | 0.3134 | nan | 0.3521 | 0.6870 | 0.0 | 0.9429 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7288 | 0.0 | 0.0 | 0.8340 | 0.0 | 0.5169 | 0.4700 | 0.0 | nan | 0.0 | 0.3105 | 0.0 | 0.0 | 0.9356 | 0.8318 | 0.9437 | 0.0 | 0.0003 | 0.2298 | 0.0 | nan | 0.6722 | 0.8034 | 0.7257 | 0.4922 | 0.2900 | nan | 0.2639 | 0.4741 | 0.0 | 0.7434 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4082 | 0.0 | 0.0 | 0.6635 | 0.0 | 0.3690 | 0.4172 | 0.0 | nan | 0.0 | 0.1981 | 0.0 | 0.0 | 0.8205 | 0.6936 | 0.9015 | 0.0 | 0.0003 | 0.1483 | 0.0 |
| 0.4992 | 55.56 | 3000 | 0.5669 | 0.2928 | 0.3647 | 0.8317 | nan | 0.7826 | 0.9171 | 0.8018 | 0.9165 | 0.2758 | nan | 0.5273 | 0.6986 | 0.0 | 0.9410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6836 | 0.0 | 0.0 | 0.8296 | 0.0 | 0.4717 | 0.4595 | 0.0 | nan | 0.0 | 0.3613 | 0.0 | 0.0 | 0.9272 | 0.8671 | 0.9424 | 0.0 | 0.0017 | 0.2669 | 0.0 | nan | 0.7196 | 0.8377 | 0.7464 | 0.6016 | 0.2573 | nan | 0.3367 | 0.4767 | 0.0 | 0.7565 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4237 | 0.0 | 0.0 | 0.6653 | 0.0 | 0.3438 | 0.4034 | 0.0 | nan | 0.0 | 0.1974 | 0.0 | 0.0 | 0.8287 | 0.7120 | 0.9031 | 0.0 | 0.0017 | 0.1565 | 0.0 |
| 0.5151 | 57.41 | 3100 | 0.6131 | 0.2864 | 0.3598 | 0.8169 | nan | 0.7793 | 0.9005 | 0.7894 | 0.8762 | 0.2508 | nan | 0.3852 | 0.6197 | 0.0 | 0.9316 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6506 | 0.0 | 0.0 | 0.7819 | 0.0 | 0.5348 | 0.5782 | 0.0 | nan | 0.0 | 0.3853 | 0.0 | 0.0 | 0.9211 | 0.8624 | 0.9390 | 0.0 | 0.0 | 0.3278 | 0.0 | nan | 0.6967 | 0.8145 | 0.7436 | 0.5453 | 0.2362 | nan | 0.2992 | 0.4656 | 0.0 | 0.7549 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4221 | 0.0 | 0.0 | 0.6246 | 0.0 | 0.3873 | 0.3923 | 0.0 | nan | 0.0 | 0.1937 | 0.0 | 0.0 | 0.8257 | 0.7204 | 0.8994 | 0.0 | 0.0 | 0.1417 | 0.0 |
| 0.4688 | 59.26 | 3200 | 0.7342 | 0.2674 | 0.3425 | 0.7758 | nan | 0.6724 | 0.8138 | 0.8211 | 0.8881 | 0.2106 | nan | 0.3435 | 0.4240 | 0.0 | 0.9345 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6881 | 0.0 | 0.0 | 0.8684 | 0.0 | 0.4808 | 0.5494 | 0.0 | nan | 0.0 | 0.2968 | 0.0 | 0.0 | 0.9269 | 0.8322 | 0.9291 | 0.0 | 0.0 | 0.2817 | 0.0 | nan | 0.6227 | 0.7395 | 0.7654 | 0.4008 | 0.1990 | nan | 0.2434 | 0.3473 | 0.0 | 0.7526 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3733 | 0.0 | 0.0 | 0.5567 | 0.0 | 0.3425 | 0.4056 | 0.0 | nan | 0.0 | 0.2033 | 0.0 | 0.0 | 0.8238 | 0.7088 | 0.8978 | 0.0 | 0.0 | 0.1748 | 0.0 |
| 0.4657 | 61.11 | 3300 | 0.7162 | 0.2737 | 0.3487 | 0.7884 | nan | 0.6859 | 0.8395 | 0.7919 | 0.8974 | 0.2306 | nan | 0.4086 | 0.6012 | 0.0 | 0.9212 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7186 | 0.0 | 0.0 | 0.8738 | 0.0 | 0.4323 | 0.5271 | 0.0 | nan | 0.0 | 0.3163 | 0.0 | 0.0 | 0.9373 | 0.8107 | 0.9381 | 0.0 | 0.0 | 0.2280 | 0.0 | nan | 0.6253 | 0.7668 | 0.7584 | 0.4350 | 0.2180 | nan | 0.2835 | 0.4646 | 0.0 | 0.7649 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3505 | 0.0 | 0.0 | 0.5817 | 0.0 | 0.3184 | 0.4275 | 0.0 | nan | 0.0 | 0.1989 | 0.0 | 0.0 | 0.8181 | 0.6916 | 0.9021 | 0.0 | 0.0 | 0.1529 | 0.0 |
| 0.4789 | 62.96 | 3400 | 0.6510 | 0.2824 | 0.3535 | 0.8065 | nan | 0.7245 | 0.8835 | 0.7760 | 0.8886 | 0.2720 | nan | 0.3709 | 0.6675 | 0.0 | 0.9351 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6668 | 0.0 | 0.0 | 0.8450 | 0.0 | 0.4917 | 0.5508 | 0.0 | nan | 0.0 | 0.3585 | 0.0 | 0.0 | 0.9367 | 0.7684 | 0.9321 | 0.0 | 0.0022 | 0.2404 | 0.0 | nan | 0.6754 | 0.7938 | 0.7682 | 0.4856 | 0.2514 | nan | 0.2841 | 0.4779 | 0.0 | 0.7566 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3801 | 0.0 | 0.0 | 0.6118 | 0.0 | 0.3623 | 0.4464 | 0.0 | nan | 0.0 | 0.1990 | 0.0 | 0.0 | 0.8150 | 0.6727 | 0.9029 | 0.0 | 0.0022 | 0.1516 | 0.0 |
| 0.4718 | 64.81 | 3500 | 0.7369 | 0.2741 | 0.3491 | 0.7687 | nan | 0.7886 | 0.7455 | 0.8159 | 0.8865 | 0.2585 | nan | 0.3583 | 0.6014 | 0.0 | 0.9362 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6741 | 0.0 | 0.0 | 0.8728 | 0.0 | 0.4488 | 0.5138 | 0.0 | nan | 0.0 | 0.3533 | 0.0 | 0.0 | 0.9343 | 0.8363 | 0.9345 | 0.0 | 0.0002 | 0.2111 | 0.0 | nan | 0.6800 | 0.6730 | 0.7173 | 0.3412 | 0.2406 | nan | 0.2736 | 0.4651 | 0.0 | 0.7688 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3688 | 0.0 | 0.0 | 0.6494 | 0.0 | 0.3507 | 0.4403 | 0.0 | nan | 0.0 | 0.1950 | 0.0 | 0.0 | 0.8287 | 0.7216 | 0.9039 | 0.0 | 0.0002 | 0.1536 | 0.0 |
| 0.4586 | 66.67 | 3600 | 0.7463 | 0.2799 | 0.3515 | 0.7620 | nan | 0.8497 | 0.6965 | 0.7931 | 0.9041 | 0.2737 | nan | 0.3983 | 0.5616 | 0.0 | 0.9365 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5892 | 0.0 | 0.0 | 0.8439 | 0.0 | 0.5213 | 0.4720 | 0.0 | nan | 0.0 | 0.3429 | 0.0 | 0.0 | 0.9332 | 0.8690 | 0.9431 | 0.0 | 0.0 | 0.3213 | 0.0 | nan | 0.7435 | 0.6450 | 0.7808 | 0.3120 | 0.2517 | nan | 0.3134 | 0.4378 | 0.0 | 0.7305 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4349 | 0.0 | 0.0 | 0.6399 | 0.0 | 0.3813 | 0.4243 | 0.0 | nan | 0.0 | 0.2097 | 0.0 | 0.0 | 0.8287 | 0.7225 | 0.9085 | 0.0 | 0.0 | 0.1926 | 0.0 |
| 0.4506 | 68.52 | 3700 | 0.6409 | 0.2859 | 0.3587 | 0.8030 | nan | 0.7887 | 0.8394 | 0.8054 | 0.8912 | 0.2518 | nan | 0.3799 | 0.6292 | 0.0 | 0.9273 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7090 | 0.0 | 0.0 | 0.8655 | 0.0 | 0.4989 | 0.5447 | 0.0 | nan | 0.0 | 0.3519 | 0.0 | 0.0 | 0.9335 | 0.8362 | 0.9278 | 0.0 | 0.0 | 0.2975 | 0.0 | nan | 0.7248 | 0.7574 | 0.7649 | 0.4118 | 0.2326 | nan | 0.2996 | 0.4840 | 0.0 | 0.7856 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3424 | 0.0 | 0.0 | 0.6639 | 0.0 | 0.3766 | 0.4576 | 0.0 | nan | 0.0 | 0.2055 | 0.0 | 0.0 | 0.8284 | 0.7274 | 0.9032 | 0.0 | 0.0 | 0.1823 | 0.0 |
| 0.4659 | 70.37 | 3800 | 0.6466 | 0.2884 | 0.3577 | 0.8081 | nan | 0.8256 | 0.8420 | 0.7982 | 0.8692 | 0.3484 | nan | 0.4035 | 0.4964 | 0.0 | 0.9489 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6461 | 0.0 | 0.0 | 0.8281 | 0.0 | 0.5593 | 0.5404 | 0.0 | nan | 0.0 | 0.3533 | 0.0 | 0.0 | 0.9345 | 0.7861 | 0.9426 | 0.0 | 0.0 | 0.3225 | 0.0 | nan | 0.7403 | 0.7665 | 0.7649 | 0.4456 | 0.2991 | nan | 0.3198 | 0.3976 | 0.0 | 0.7512 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4217 | 0.0 | 0.0 | 0.6537 | 0.0 | 0.3859 | 0.4470 | 0.0 | nan | 0.0 | 0.2219 | 0.0 | 0.0 | 0.8223 | 0.6908 | 0.9109 | 0.0 | 0.0 | 0.1898 | 0.0 |
| 0.4416 | 72.22 | 3900 | 0.6944 | 0.2824 | 0.3648 | 0.7953 | nan | 0.8073 | 0.8044 | 0.8200 | 0.9039 | 0.2713 | nan | 0.4385 | 0.6632 | 0.0 | 0.9435 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7130 | 0.0 | 0.0 | 0.8448 | 0.0 | 0.5050 | 0.5552 | 0.0 | nan | 0.0 | 0.3791 | 0.0 | 0.0 | 0.9316 | 0.8332 | 0.9378 | 0.0 | 0.0047 | 0.3183 | 0.0 | nan | 0.7045 | 0.7445 | 0.6571 | 0.4107 | 0.2536 | nan | 0.3089 | 0.4711 | 0.0 | 0.7504 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3814 | 0.0 | 0.0 | 0.6468 | 0.0 | 0.3800 | 0.4413 | 0.0 | nan | 0.0 | 0.2243 | 0.0 | 0.0 | 0.8294 | 0.7257 | 0.9078 | 0.0 | 0.0047 | 0.1964 | 0.0 |
| 0.4347 | 74.07 | 4000 | 0.5742 | 0.2960 | 0.3615 | 0.8319 | nan | 0.8135 | 0.9088 | 0.8067 | 0.8959 | 0.3006 | nan | 0.3611 | 0.6055 | 0.0 | 0.9354 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6851 | 0.0 | 0.0 | 0.8692 | 0.0 | 0.4956 | 0.5065 | 0.0 | nan | 0.0 | 0.3493 | 0.0 | 0.0 | 0.9264 | 0.8500 | 0.9368 | 0.0 | 0.0018 | 0.3210 | 0.0 | nan | 0.7436 | 0.8254 | 0.7615 | 0.5609 | 0.2797 | nan | 0.3045 | 0.4733 | 0.0 | 0.7745 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4006 | 0.0 | 0.0 | 0.6424 | 0.0 | 0.3800 | 0.4600 | 0.0 | nan | 0.0 | 0.2126 | 0.0 | 0.0 | 0.8296 | 0.7251 | 0.9085 | 0.0 | 0.0018 | 0.1876 | 0.0 |
| 0.4191 | 75.93 | 4100 | 0.6454 | 0.2879 | 0.3671 | 0.8068 | nan | 0.7757 | 0.8432 | 0.8171 | 0.8803 | 0.3169 | nan | 0.4971 | 0.6474 | 0.0 | 0.9274 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7272 | 0.0 | 0.0 | 0.8520 | 0.0 | 0.4847 | 0.5414 | 0.0 | nan | 0.0 | 0.4113 | 0.0 | 0.0 | 0.9400 | 0.8335 | 0.9348 | 0.0 | 0.0167 | 0.3000 | 0.0 | nan | 0.7112 | 0.7615 | 0.6876 | 0.4533 | 0.2904 | nan | 0.3375 | 0.4768 | 0.0 | 0.7857 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3483 | 0.0 | 0.0 | 0.6544 | 0.0 | 0.3636 | 0.4546 | 0.0 | nan | 0.0 | 0.2086 | 0.0 | 0.0 | 0.8293 | 0.7293 | 0.9093 | 0.0 | 0.0165 | 0.1938 | 0.0 |
| 0.4355 | 77.78 | 4200 | 0.5871 | 0.2915 | 0.3601 | 0.8236 | nan | 0.6673 | 0.9324 | 0.8063 | 0.8730 | 0.2988 | nan | 0.5014 | 0.5734 | 0.0 | 0.9480 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6629 | 0.0 | 0.0 | 0.8653 | 0.0 | 0.4649 | 0.5559 | 0.0 | nan | 0.0 | 0.3890 | 0.0 | 0.0 | 0.9183 | 0.8681 | 0.9537 | 0.0 | 0.0088 | 0.2359 | 0.0 | nan | 0.6266 | 0.8175 | 0.7309 | 0.5730 | 0.2746 | nan | 0.3471 | 0.4465 | 0.0 | 0.7567 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4103 | 0.0 | 0.0 | 0.6684 | 0.0 | 0.3482 | 0.4615 | 0.0 | nan | 0.0 | 0.2062 | 0.0 | 0.0 | 0.8356 | 0.7347 | 0.9131 | 0.0 | 0.0088 | 0.1686 | 0.0 |
| 0.431 | 79.63 | 4300 | 0.5778 | 0.2902 | 0.3540 | 0.8266 | nan | 0.8325 | 0.9042 | 0.7971 | 0.8575 | 0.2707 | nan | 0.4318 | 0.5731 | 0.0 | 0.9428 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6701 | 0.0 | 0.0 | 0.8781 | 0.0 | 0.4081 | 0.5480 | 0.0 | nan | 0.0 | 0.3573 | 0.0 | 0.0 | 0.9299 | 0.7480 | 0.9397 | 0.0 | 0.0343 | 0.2046 | 0.0 | nan | 0.7428 | 0.8112 | 0.7719 | 0.5907 | 0.2545 | nan | 0.3259 | 0.4272 | 0.0 | 0.7505 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4255 | 0.0 | 0.0 | 0.6496 | 0.0 | 0.3209 | 0.4384 | 0.0 | nan | 0.0 | 0.2061 | 0.0 | 0.0 | 0.8142 | 0.6646 | 0.9118 | 0.0 | 0.0338 | 0.1477 | 0.0 |
| 0.4105 | 81.48 | 4400 | 0.7355 | 0.2837 | 0.3547 | 0.7802 | nan | 0.8194 | 0.7548 | 0.8125 | 0.9004 | 0.2421 | nan | 0.4411 | 0.5260 | 0.0 | 0.9344 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6628 | 0.0 | 0.0 | 0.9003 | 0.0 | 0.4114 | 0.5457 | 0.0 | nan | 0.0 | 0.3720 | 0.0 | 0.0 | 0.9386 | 0.8336 | 0.9269 | 0.0 | 0.0905 | 0.2364 | 0.0 | nan | 0.7295 | 0.6964 | 0.7754 | 0.3477 | 0.2325 | nan | 0.3336 | 0.4069 | 0.0 | 0.7641 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4284 | 0.0 | 0.0 | 0.6483 | 0.0 | 0.3512 | 0.4444 | 0.0 | nan | 0.0 | 0.2140 | 0.0 | 0.0 | 0.8260 | 0.7200 | 0.9047 | 0.0 | 0.0883 | 0.1667 | 0.0 |
| 0.4102 | 83.33 | 4500 | 0.6431 | 0.2832 | 0.3550 | 0.8023 | nan | 0.6173 | 0.8926 | 0.8233 | 0.8684 | 0.3015 | nan | 0.4774 | 0.5853 | 0.0 | 0.9435 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7118 | 0.0 | 0.0 | 0.8678 | 0.0 | 0.4544 | 0.5288 | 0.0 | nan | 0.0 | 0.3435 | 0.0 | 0.0 | 0.9438 | 0.7934 | 0.9323 | 0.0 | 0.0264 | 0.2495 | 0.0 | nan | 0.5793 | 0.7784 | 0.7849 | 0.5220 | 0.2750 | nan | 0.3433 | 0.4263 | 0.0 | 0.7478 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3651 | 0.0 | 0.0 | 0.6236 | 0.0 | 0.3489 | 0.4347 | 0.0 | nan | 0.0 | 0.2243 | 0.0 | 0.0 | 0.8184 | 0.6879 | 0.9082 | 0.0 | 0.0258 | 0.1674 | 0.0 |
| 0.4172 | 85.19 | 4600 | 0.6988 | 0.2875 | 0.3537 | 0.7940 | nan | 0.7505 | 0.8194 | 0.8168 | 0.9128 | 0.2640 | nan | 0.4022 | 0.4961 | 0.0 | 0.9391 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6453 | 0.0 | 0.0 | 0.8769 | 0.0 | 0.4600 | 0.5182 | 0.0 | nan | 0.0 | 0.3740 | 0.0 | 0.0 | 0.9378 | 0.8263 | 0.9455 | 0.0 | 0.0900 | 0.2436 | 0.0 | nan | 0.7048 | 0.7401 | 0.7654 | 0.3938 | 0.2454 | nan | 0.2874 | 0.3973 | 0.0 | 0.7572 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4779 | 0.0 | 0.0 | 0.6427 | 0.0 | 0.3531 | 0.4565 | 0.0 | nan | 0.0 | 0.2402 | 0.0 | 0.0 | 0.8333 | 0.7320 | 0.9149 | 0.0 | 0.0880 | 0.1706 | 0.0 |
| 0.3885 | 87.04 | 4700 | 0.5978 | 0.2953 | 0.3647 | 0.8175 | nan | 0.8142 | 0.8718 | 0.8027 | 0.8554 | 0.3059 | nan | 0.3787 | 0.5867 | 0.0 | 0.9403 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6845 | 0.0 | 0.0 | 0.8471 | 0.0 | 0.5315 | 0.5788 | 0.0 | nan | 0.0 | 0.3874 | 0.0 | 0.0 | 0.9354 | 0.8156 | 0.9494 | 0.0 | 0.1221 | 0.2636 | 0.0 | nan | 0.7263 | 0.7825 | 0.7874 | 0.4784 | 0.2859 | nan | 0.2981 | 0.4480 | 0.0 | 0.7604 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3820 | 0.0 | 0.0 | 0.6694 | 0.0 | 0.3781 | 0.4545 | 0.0 | nan | 0.0 | 0.2385 | 0.0 | 0.0 | 0.8301 | 0.7216 | 0.9144 | 0.0 | 0.1131 | 0.1798 | 0.0 |
| 0.3949 | 88.89 | 4800 | 0.5747 | 0.2961 | 0.3643 | 0.8282 | nan | 0.8129 | 0.8976 | 0.8121 | 0.8713 | 0.2894 | nan | 0.4694 | 0.5562 | 0.0 | 0.9391 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6947 | 0.0 | 0.0 | 0.8395 | 0.0 | 0.5260 | 0.5481 | 0.0 | nan | 0.0 | 0.3852 | 0.0 | 0.0 | 0.9428 | 0.8221 | 0.9365 | 0.0 | 0.0559 | 0.2580 | 0.0 | nan | 0.7394 | 0.8130 | 0.7924 | 0.5533 | 0.2658 | nan | 0.3447 | 0.4378 | 0.0 | 0.7620 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3851 | 0.0 | 0.0 | 0.6633 | 0.0 | 0.3722 | 0.4533 | 0.0 | nan | 0.0 | 0.2184 | 0.0 | 0.0 | 0.8217 | 0.7122 | 0.9124 | 0.0 | 0.0534 | 0.1742 | 0.0 |
| 0.4158 | 90.74 | 4900 | 0.6449 | 0.2916 | 0.3657 | 0.8070 | nan | 0.8043 | 0.8271 | 0.8157 | 0.9192 | 0.3073 | nan | 0.4380 | 0.6344 | 0.0 | 0.9340 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7171 | 0.0 | 0.0 | 0.8572 | 0.0 | 0.5188 | 0.5406 | 0.0 | nan | 0.0 | 0.3852 | 0.0 | 0.0 | 0.9420 | 0.8552 | 0.9459 | 0.0 | 0.0450 | 0.2148 | 0.0 | nan | 0.6975 | 0.7564 | 0.7902 | 0.4563 | 0.2853 | nan | 0.3171 | 0.4654 | 0.0 | 0.7879 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3571 | 0.0 | 0.0 | 0.6623 | 0.0 | 0.3819 | 0.4583 | 0.0 | nan | 0.0 | 0.2243 | 0.0 | 0.0 | 0.8302 | 0.7431 | 0.9150 | 0.0 | 0.0421 | 0.1602 | 0.0 |
| 0.3856 | 92.59 | 5000 | 0.7492 | 0.2796 | 0.3559 | 0.7680 | nan | 0.8020 | 0.7250 | 0.8248 | 0.9139 | 0.2500 | nan | 0.3621 | 0.5930 | 0.0 | 0.9411 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6964 | 0.0 | 0.0 | 0.9036 | 0.0 | 0.3460 | 0.5234 | 0.0 | nan | 0.0 | 0.4271 | 0.0 | 0.0 | 0.9255 | 0.8871 | 0.9524 | 0.0 | 0.0666 | 0.2471 | 0.0 | nan | 0.6954 | 0.6697 | 0.7878 | 0.3256 | 0.2365 | nan | 0.2864 | 0.4452 | 0.0 | 0.7724 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3838 | 0.0 | 0.0 | 0.6413 | 0.0 | 0.2968 | 0.4239 | 0.0 | nan | 0.0 | 0.2271 | 0.0 | 0.0 | 0.8382 | 0.7554 | 0.9171 | 0.0 | 0.0624 | 0.1808 | 0.0 |
| 0.3915 | 94.44 | 5100 | 0.6402 | 0.2893 | 0.3608 | 0.8012 | nan | 0.7614 | 0.8406 | 0.7898 | 0.9029 | 0.3080 | nan | 0.3857 | 0.6328 | 0.0 | 0.9373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7010 | 0.0 | 0.0 | 0.8626 | 0.0 | 0.5045 | 0.5235 | 0.0 | nan | 0.0 | 0.3802 | 0.0 | 0.0 | 0.9442 | 0.7561 | 0.9401 | 0.0 | 0.1133 | 0.2603 | 0.0 | nan | 0.6850 | 0.7546 | 0.7750 | 0.4451 | 0.2827 | nan | 0.3049 | 0.4715 | 0.0 | 0.7694 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3810 | 0.0 | 0.0 | 0.6626 | 0.0 | 0.3832 | 0.4394 | 0.0 | nan | 0.0 | 0.2214 | 0.0 | 0.0 | 0.8125 | 0.6725 | 0.9138 | 0.0 | 0.1034 | 0.1797 | 0.0 |
| 0.3732 | 96.3 | 5200 | 0.7308 | 0.2840 | 0.3598 | 0.7795 | nan | 0.7534 | 0.7741 | 0.8137 | 0.9035 | 0.2614 | nan | 0.4308 | 0.6431 | 0.0 | 0.9315 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7293 | 0.0 | 0.0 | 0.8884 | 0.0 | 0.4166 | 0.5225 | 0.0 | nan | 0.0 | 0.3992 | 0.0 | 0.0 | 0.9329 | 0.8517 | 0.9519 | 0.0 | 0.0756 | 0.2354 | 0.0 | nan | 0.6723 | 0.6942 | 0.7836 | 0.3665 | 0.2474 | nan | 0.3333 | 0.4669 | 0.0 | 0.7857 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3545 | 0.0 | 0.0 | 0.6375 | 0.0 | 0.3443 | 0.4311 | 0.0 | nan | 0.0 | 0.2377 | 0.0 | 0.0 | 0.8346 | 0.7428 | 0.9173 | 0.0 | 0.0659 | 0.1722 | 0.0 |
| 0.3843 | 98.15 | 5300 | 0.6580 | 0.2864 | 0.3556 | 0.7962 | nan | 0.7254 | 0.8440 | 0.7996 | 0.8889 | 0.2696 | nan | 0.4320 | 0.6399 | 0.0 | 0.9285 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6708 | 0.0 | 0.0 | 0.8872 | 0.0 | 0.4070 | 0.5262 | 0.0 | nan | 0.0 | 0.3791 | 0.0 | 0.0 | 0.9423 | 0.7462 | 0.9487 | 0.0 | 0.1269 | 0.2159 | 0.0 | nan | 0.6660 | 0.7540 | 0.7836 | 0.4484 | 0.2521 | nan | 0.3307 | 0.4691 | 0.0 | 0.7963 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3896 | 0.0 | 0.0 | 0.6071 | 0.0 | 0.3185 | 0.4568 | 0.0 | nan | 0.0 | 0.2206 | 0.0 | 0.0 | 0.8138 | 0.6608 | 0.9170 | 0.0 | 0.1163 | 0.1644 | 0.0 |
| 0.3903 | 100.0 | 5400 | 0.6288 | 0.2881 | 0.3541 | 0.8086 | nan | 0.7763 | 0.8567 | 0.8240 | 0.8951 | 0.2446 | nan | 0.4334 | 0.5553 | 0.0 | 0.9354 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6738 | 0.0 | 0.0 | 0.8901 | 0.0 | 0.4777 | 0.5458 | 0.0 | nan | 0.0 | 0.3297 | 0.0 | 0.0 | 0.9417 | 0.7702 | 0.9457 | 0.0 | 0.0457 | 0.1907 | 0.0 | nan | 0.6906 | 0.7727 | 0.7923 | 0.4705 | 0.2358 | nan | 0.3295 | 0.4509 | 0.0 | 0.7755 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3981 | 0.0 | 0.0 | 0.6528 | 0.0 | 0.3644 | 0.4573 | 0.0 | nan | 0.0 | 0.2197 | 0.0 | 0.0 | 0.8176 | 0.6797 | 0.9157 | 0.0 | 0.0444 | 0.1500 | 0.0 |
| 0.355 | 101.85 | 5500 | 0.7112 | 0.2860 | 0.3563 | 0.7844 | nan | 0.7834 | 0.7947 | 0.8123 | 0.8807 | 0.2262 | nan | 0.3408 | 0.6020 | 0.0 | 0.9382 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6759 | 0.0 | 0.0 | 0.8838 | 0.0 | 0.4491 | 0.5845 | 0.0 | nan | 0.0 | 0.4029 | 0.0 | 0.0 | 0.9295 | 0.7890 | 0.9477 | 0.0 | 0.1045 | 0.2564 | 0.0 | nan | 0.7086 | 0.7078 | 0.7825 | 0.3607 | 0.2168 | nan | 0.2792 | 0.4624 | 0.0 | 0.7767 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4366 | 0.0 | 0.0 | 0.6667 | 0.0 | 0.3443 | 0.4351 | 0.0 | nan | 0.0 | 0.2386 | 0.0 | 0.0 | 0.8283 | 0.7060 | 0.9167 | 0.0 | 0.1000 | 0.1847 | 0.0 |
| 0.3729 | 103.7 | 5600 | 0.6849 | 0.2835 | 0.3591 | 0.7887 | nan | 0.8150 | 0.7790 | 0.8122 | 0.8834 | 0.2787 | nan | 0.4506 | 0.6270 | 0.0 | 0.9253 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7408 | 0.0 | 0.0 | 0.9180 | 0.0 | 0.3273 | 0.5197 | 0.0 | nan | 0.0 | 0.4167 | 0.0 | 0.0 | 0.9358 | 0.8379 | 0.9406 | 0.0 | 0.0480 | 0.2345 | 0.0 | nan | 0.6989 | 0.7189 | 0.7862 | 0.3939 | 0.2648 | nan | 0.3292 | 0.4851 | 0.0 | 0.7976 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3286 | 0.0 | 0.0 | 0.6202 | 0.0 | 0.2779 | 0.4371 | 0.0 | nan | 0.0 | 0.2402 | 0.0 | 0.0 | 0.8321 | 0.7297 | 0.9140 | 0.0 | 0.0437 | 0.1749 | 0.0 |
| 0.3895 | 105.56 | 5700 | 0.6917 | 0.2909 | 0.3669 | 0.7881 | nan | 0.8520 | 0.7575 | 0.8037 | 0.9006 | 0.2858 | nan | 0.4909 | 0.6331 | 0.0 | 0.9365 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6811 | 0.0 | 0.0 | 0.8525 | 0.0 | 0.5087 | 0.5374 | 0.0 | nan | 0.0 | 0.3766 | 0.0 | 0.0 | 0.9432 | 0.8426 | 0.9479 | 0.0 | 0.0982 | 0.2931 | 0.0 | nan | 0.7338 | 0.7000 | 0.7834 | 0.3764 | 0.2683 | nan | 0.3430 | 0.4719 | 0.0 | 0.7841 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3792 | 0.0 | 0.0 | 0.6627 | 0.0 | 0.3815 | 0.4454 | 0.0 | nan | 0.0 | 0.2245 | 0.0 | 0.0 | 0.8273 | 0.7311 | 0.9183 | 0.0 | 0.0894 | 0.1885 | 0.0 |
| 0.3602 | 107.41 | 5800 | 0.5475 | 0.3042 | 0.3685 | 0.8353 | nan | 0.7641 | 0.9319 | 0.8055 | 0.8737 | 0.3132 | nan | 0.4868 | 0.6244 | 0.0 | 0.9407 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6873 | 0.0 | 0.0 | 0.8810 | 0.0 | 0.4631 | 0.5387 | 0.0 | nan | 0.0 | 0.4382 | 0.0 | 0.0 | 0.9298 | 0.7866 | 0.9486 | 0.0 | 0.1344 | 0.2454 | 0.0 | nan | 0.7121 | 0.8270 | 0.7806 | 0.6491 | 0.2900 | nan | 0.3497 | 0.4700 | 0.0 | 0.7753 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4480 | 0.0 | 0.0 | 0.6577 | 0.0 | 0.3509 | 0.4582 | 0.0 | nan | 0.0 | 0.2281 | 0.0 | 0.0 | 0.8267 | 0.6946 | 0.9179 | 0.0 | 0.1213 | 0.1782 | 0.0 |
| 0.3674 | 109.26 | 5900 | 0.6421 | 0.2919 | 0.3540 | 0.8016 | nan | 0.6932 | 0.8577 | 0.8144 | 0.9018 | 0.3136 | nan | 0.3961 | 0.5655 | 0.0 | 0.9370 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6563 | 0.0 | 0.0 | 0.9140 | 0.0 | 0.3656 | 0.4891 | 0.0 | nan | 0.0 | 0.3775 | 0.0 | 0.0 | 0.9373 | 0.8204 | 0.9427 | 0.0 | 0.1378 | 0.2090 | 0.0 | nan | 0.6366 | 0.7503 | 0.7829 | 0.4541 | 0.2884 | nan | 0.3050 | 0.4442 | 0.0 | 0.7727 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4780 | 0.0 | 0.0 | 0.6644 | 0.0 | 0.3163 | 0.4511 | 0.0 | nan | 0.0 | 0.2316 | 0.0 | 0.0 | 0.8321 | 0.7257 | 0.9157 | 0.0 | 0.1268 | 0.1636 | 0.0 |
| 0.3657 | 111.11 | 6000 | 0.5813 | 0.2955 | 0.3637 | 0.8277 | nan | 0.7870 | 0.8975 | 0.7014 | 0.8566 | 0.3741 | nan | 0.4469 | 0.6219 | 0.0 | 0.9403 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7185 | 0.0 | 0.0 | 0.8827 | 0.0 | 0.4503 | 0.5681 | 0.0 | nan | 0.0 | 0.3815 | 0.0 | 0.0 | 0.9397 | 0.8275 | 0.9484 | 0.0 | 0.0968 | 0.1999 | 0.0 | nan | 0.7203 | 0.8097 | 0.6881 | 0.5693 | 0.3405 | nan | 0.3293 | 0.4754 | 0.0 | 0.7846 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3863 | 0.0 | 0.0 | 0.6346 | 0.0 | 0.3557 | 0.4385 | 0.0 | nan | 0.0 | 0.2181 | 0.0 | 0.0 | 0.8287 | 0.7172 | 0.9189 | 0.0 | 0.0846 | 0.1578 | 0.0 |
| 0.367 | 112.96 | 6100 | 0.6609 | 0.2897 | 0.3661 | 0.7984 | nan | 0.7903 | 0.8284 | 0.8039 | 0.9016 | 0.2212 | nan | 0.4163 | 0.6816 | 0.0 | 0.9453 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7209 | 0.0 | 0.0 | 0.8372 | 0.0 | 0.4577 | 0.5511 | 0.0 | nan | 0.0 | 0.4283 | 0.0 | 0.0 | 0.9390 | 0.7875 | 0.9493 | 0.0 | 0.1399 | 0.3157 | 0.0 | nan | 0.7203 | 0.7408 | 0.7738 | 0.4105 | 0.2117 | nan | 0.3182 | 0.4784 | 0.0 | 0.7828 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3859 | 0.0 | 0.0 | 0.6672 | 0.0 | 0.3588 | 0.4378 | 0.0 | nan | 0.0 | 0.2244 | 0.0 | 0.0 | 0.8282 | 0.7032 | 0.9187 | 0.0 | 0.1137 | 0.1958 | 0.0 |
| 0.3638 | 114.81 | 6200 | 0.7997 | 0.2803 | 0.3592 | 0.7547 | nan | 0.8092 | 0.6782 | 0.8102 | 0.9284 | 0.2905 | nan | 0.3691 | 0.6185 | 0.0 | 0.9403 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7520 | 0.0 | 0.0 | 0.8609 | 0.0 | 0.4178 | 0.5567 | 0.0 | nan | 0.0 | 0.3931 | 0.0 | 0.0 | 0.9474 | 0.8770 | 0.9435 | 0.0000 | 0.0667 | 0.2347 | 0.0 | nan | 0.7091 | 0.6261 | 0.7837 | 0.2942 | 0.2753 | nan | 0.2928 | 0.4552 | 0.0 | 0.7808 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3801 | 0.0 | 0.0 | 0.6648 | 0.0 | 0.3421 | 0.4315 | 0.0 | nan | 0.0 | 0.2152 | 0.0 | 0.0 | 0.8297 | 0.7448 | 0.9168 | 0.0000 | 0.0595 | 0.1680 | 0.0 |
| 0.3654 | 116.67 | 6300 | 0.6019 | 0.2956 | 0.3645 | 0.8175 | nan | 0.8244 | 0.8533 | 0.6788 | 0.8927 | 0.3058 | nan | 0.4950 | 0.6003 | 0.0 | 0.9396 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6930 | 0.0 | 0.0 | 0.8964 | 0.0 | 0.3647 | 0.5196 | 0.0 | nan | 0.0 | 0.4113 | 0.0 | 0.0 | 0.9257 | 0.8551 | 0.9594 | 0.0 | 0.1310 | 0.3167 | 0.0 | nan | 0.7337 | 0.7732 | 0.6601 | 0.4748 | 0.2853 | nan | 0.3520 | 0.4685 | 0.0 | 0.7868 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4121 | 0.0 | 0.0 | 0.6708 | 0.0 | 0.3117 | 0.4434 | 0.0 | nan | 0.0 | 0.2326 | 0.0 | 0.0 | 0.8405 | 0.7541 | 0.9187 | 0.0 | 0.1205 | 0.2201 | 0.0 |
| 0.3652 | 118.52 | 6400 | 0.5981 | 0.2967 | 0.3649 | 0.8205 | nan | 0.7551 | 0.8909 | 0.6342 | 0.9054 | 0.3093 | nan | 0.4234 | 0.6313 | 0.0 | 0.9387 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6751 | 0.0 | 0.0 | 0.8700 | 0.0 | 0.4187 | 0.5633 | 0.0 | nan | 0.0 | 0.4465 | 0.0 | 0.0 | 0.9262 | 0.8528 | 0.9534 | 0.0002 | 0.1437 | 0.3398 | 0.0 | nan | 0.6956 | 0.7948 | 0.6246 | 0.4963 | 0.2861 | nan | 0.3171 | 0.4870 | 0.0 | 0.7941 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4467 | 0.0 | 0.0 | 0.6719 | 0.0 | 0.3338 | 0.4473 | 0.0 | nan | 0.0 | 0.2377 | 0.0 | 0.0 | 0.8417 | 0.7531 | 0.9198 | 0.0002 | 0.1302 | 0.2180 | 0.0 |
| 0.3559 | 120.37 | 6500 | 0.5780 | 0.3026 | 0.3668 | 0.8256 | nan | 0.7517 | 0.9024 | 0.8103 | 0.8905 | 0.3788 | nan | 0.3990 | 0.5648 | 0.0 | 0.9522 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6491 | 0.0 | 0.0 | 0.8623 | 0.0 | 0.5208 | 0.5227 | 0.0 | nan | 0.0 | 0.4095 | 0.0 | 0.0 | 0.9315 | 0.8073 | 0.9531 | 0.0 | 0.1367 | 0.2937 | 0.0 | nan | 0.6917 | 0.8084 | 0.7831 | 0.5645 | 0.3365 | nan | 0.3195 | 0.4446 | 0.0 | 0.7603 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4620 | 0.0 | 0.0 | 0.6310 | 0.0 | 0.3859 | 0.4599 | 0.0 | nan | 0.0 | 0.2286 | 0.0 | 0.0 | 0.8329 | 0.7236 | 0.9192 | 0.0 | 0.1259 | 0.2064 | 0.0 |
| 0.3348 | 122.22 | 6600 | 0.5522 | 0.3023 | 0.3735 | 0.8379 | nan | 0.8289 | 0.9088 | 0.6882 | 0.8947 | 0.3594 | nan | 0.4373 | 0.6918 | 0.0 | 0.9448 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7098 | 0.0 | 0.0 | 0.8356 | 0.0 | 0.5156 | 0.5832 | 0.0 | nan | 0.0 | 0.4059 | 0.0 | 0.0 | 0.9417 | 0.8359 | 0.9578 | 0.0009 | 0.1308 | 0.2812 | 0.0 | nan | 0.7433 | 0.8257 | 0.6716 | 0.5930 | 0.3306 | nan | 0.3517 | 0.4956 | 0.0 | 0.7897 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3747 | 0.0 | 0.0 | 0.6736 | 0.0 | 0.3802 | 0.4271 | 0.0 | nan | 0.0 | 0.2180 | 0.0 | 0.0 | 0.8323 | 0.7373 | 0.9200 | 0.0008 | 0.1171 | 0.1906 | 0.0 |
| 0.3653 | 124.07 | 6700 | 0.6070 | 0.2986 | 0.3679 | 0.8216 | nan | 0.6919 | 0.9133 | 0.8114 | 0.8786 | 0.3306 | nan | 0.4558 | 0.6517 | 0.0 | 0.9455 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7183 | 0.0 | 0.0 | 0.8672 | 0.0 | 0.5019 | 0.5472 | 0.0 | nan | 0.0 | 0.4162 | 0.0 | 0.0 | 0.9390 | 0.8019 | 0.9414 | 0.0 | 0.0957 | 0.2664 | 0.0 | nan | 0.6394 | 0.8000 | 0.7821 | 0.6011 | 0.3025 | nan | 0.3359 | 0.4969 | 0.0 | 0.7887 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3803 | 0.0 | 0.0 | 0.6386 | 0.0 | 0.3855 | 0.4427 | 0.0 | nan | 0.0 | 0.2268 | 0.0 | 0.0 | 0.8298 | 0.7136 | 0.9170 | 0.0 | 0.0886 | 0.1861 | 0.0 |
| 0.3216 | 125.93 | 6800 | 0.6091 | 0.3003 | 0.3729 | 0.8176 | nan | 0.8300 | 0.8429 | 0.8233 | 0.9193 | 0.3587 | nan | 0.4900 | 0.6837 | 0.0 | 0.9439 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7272 | 0.0 | 0.0 | 0.8781 | 0.0 | 0.4143 | 0.5307 | 0.0 | nan | 0.0 | 0.4051 | 0.0116 | 0.0 | 0.9314 | 0.8400 | 0.9539 | 0.0 | 0.0921 | 0.2558 | 0.0 | nan | 0.7584 | 0.7706 | 0.7892 | 0.4626 | 0.3268 | nan | 0.3678 | 0.5054 | 0.0 | 0.7811 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3947 | 0.0 | 0.0 | 0.6604 | 0.0 | 0.3306 | 0.4515 | 0.0 | nan | 0.0 | 0.2265 | 0.0116 | 0.0 | 0.8386 | 0.7409 | 0.9204 | 0.0 | 0.0850 | 0.1887 | 0.0 |
| 0.358 | 127.78 | 6900 | 0.5287 | 0.3110 | 0.3729 | 0.8465 | nan | 0.8062 | 0.9359 | 0.8173 | 0.8927 | 0.3346 | nan | 0.4527 | 0.6392 | 0.0 | 0.9354 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6945 | 0.0 | 0.0 | 0.8722 | 0.0 | 0.4896 | 0.5317 | 0.0 | nan | 0.0 | 0.4070 | 0.0 | 0.0 | 0.9436 | 0.8467 | 0.9449 | 0.0 | 0.1243 | 0.2646 | 0.0 | nan | 0.7567 | 0.8356 | 0.7873 | 0.6388 | 0.3087 | nan | 0.3575 | 0.4948 | 0.0 | 0.7958 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4146 | 0.0 | 0.0 | 0.6798 | 0.0 | 0.3797 | 0.4630 | 0.0 | nan | 0.0 | 0.2283 | 0.0 | 0.0 | 0.8356 | 0.7467 | 0.9182 | 0.0 | 0.1175 | 0.1940 | 0.0 |
| 0.3402 | 129.63 | 7000 | 0.6208 | 0.2946 | 0.3637 | 0.8141 | nan | 0.7658 | 0.8754 | 0.8158 | 0.9118 | 0.2322 | nan | 0.4017 | 0.6637 | 0.0 | 0.9438 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6933 | 0.0 | 0.0 | 0.8763 | 0.0 | 0.3895 | 0.5601 | 0.0 | nan | 0.0 | 0.4252 | 0.0043 | 0.0 | 0.9423 | 0.7810 | 0.9448 | 0.0000 | 0.1253 | 0.2865 | 0.0 | nan | 0.7060 | 0.7779 | 0.7885 | 0.4813 | 0.2236 | nan | 0.3133 | 0.4921 | 0.0 | 0.7863 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4236 | 0.0 | 0.0 | 0.6817 | 0.0 | 0.3292 | 0.4440 | 0.0 | nan | 0.0 | 0.2236 | 0.0043 | 0.0 | 0.8247 | 0.6964 | 0.9178 | 0.0000 | 0.1163 | 0.1976 | 0.0 |
| 0.3218 | 131.48 | 7100 | 0.5444 | 0.3108 | 0.3748 | 0.8443 | nan | 0.8296 | 0.9244 | 0.8276 | 0.8878 | 0.2774 | nan | 0.4782 | 0.6750 | 0.0 | 0.9366 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6983 | 0.0 | 0.0 | 0.8664 | 0.0 | 0.4743 | 0.5451 | 0.0 | nan | 0.0 | 0.4187 | 0.0113 | 0.0 | 0.9391 | 0.8642 | 0.9558 | 0.0 | 0.1166 | 0.2684 | 0.0 | nan | 0.7636 | 0.8260 | 0.7984 | 0.6281 | 0.2647 | nan | 0.3705 | 0.5066 | 0.0 | 0.8001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4217 | 0.0 | 0.0 | 0.6783 | 0.0 | 0.3686 | 0.4581 | 0.0 | nan | 0.0 | 0.2178 | 0.0113 | 0.0 | 0.8396 | 0.7666 | 0.9213 | 0.0 | 0.1113 | 0.1943 | 0.0 |
| 0.3413 | 133.33 | 7200 | 0.5473 | 0.3063 | 0.3680 | 0.8412 | nan | 0.8038 | 0.9272 | 0.7396 | 0.8885 | 0.2742 | nan | 0.4489 | 0.5761 | 0.0 | 0.9434 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6970 | 0.0 | 0.0 | 0.8722 | 0.0 | 0.5185 | 0.5545 | 0.0 | nan | 0.0 | 0.4060 | 0.0241 | 0.0 | 0.9384 | 0.8611 | 0.9453 | 0.0 | 0.1082 | 0.2489 | 0.0 | nan | 0.7450 | 0.8245 | 0.7280 | 0.6104 | 0.2595 | nan | 0.3532 | 0.4660 | 0.0 | 0.7846 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4313 | 0.0 | 0.0 | 0.6807 | 0.0 | 0.3896 | 0.4684 | 0.0 | nan | 0.0 | 0.2284 | 0.0241 | 0.0 | 0.8397 | 0.7610 | 0.9186 | 0.0 | 0.1022 | 0.1871 | 0.0 |
| 0.3463 | 135.19 | 7300 | 0.6341 | 0.2922 | 0.3603 | 0.8106 | nan | 0.8087 | 0.8519 | 0.8052 | 0.9145 | 0.2425 | nan | 0.3711 | 0.5676 | 0.0 | 0.9336 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7046 | 0.0 | 0.0 | 0.8888 | 0.0 | 0.3923 | 0.5815 | 0.0 | nan | 0.0 | 0.4055 | 0.0319 | 0.0 | 0.9344 | 0.8036 | 0.9503 | 0.0 | 0.1152 | 0.2276 | 0.0 | nan | 0.7410 | 0.7674 | 0.7870 | 0.4522 | 0.2330 | nan | 0.3152 | 0.4495 | 0.0 | 0.7851 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4247 | 0.0 | 0.0 | 0.6553 | 0.0 | 0.3108 | 0.4330 | 0.0 | nan | 0.0 | 0.2290 | 0.0319 | 0.0 | 0.8273 | 0.7106 | 0.9198 | 0.0 | 0.1051 | 0.1720 | 0.0 |
| 0.317 | 137.04 | 7400 | 0.5689 | 0.2996 | 0.3673 | 0.8346 | nan | 0.8380 | 0.9048 | 0.7202 | 0.8874 | 0.2300 | nan | 0.4682 | 0.6001 | 0.0 | 0.9282 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7278 | 0.0 | 0.0 | 0.8811 | 0.0 | 0.4430 | 0.5714 | 0.0 | nan | 0.0 | 0.4115 | 0.0148 | 0.0 | 0.9311 | 0.8477 | 0.9517 | 0.0 | 0.1019 | 0.2961 | 0.0 | nan | 0.7600 | 0.8107 | 0.7092 | 0.5843 | 0.2243 | nan | 0.3634 | 0.4741 | 0.0 | 0.7839 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3683 | 0.0 | 0.0 | 0.6667 | 0.0 | 0.3433 | 0.4519 | 0.0 | nan | 0.0 | 0.2331 | 0.0148 | 0.0 | 0.8387 | 0.7448 | 0.9201 | 0.0 | 0.0930 | 0.2020 | 0.0 |
| 0.3241 | 138.89 | 7500 | 0.5921 | 0.3030 | 0.3698 | 0.8264 | nan | 0.7560 | 0.9038 | 0.8054 | 0.8993 | 0.2921 | nan | 0.4358 | 0.6497 | 0.0 | 0.9426 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6843 | 0.0 | 0.0 | 0.8596 | 0.0 | 0.4666 | 0.5531 | 0.0 | nan | 0.0014 | 0.4125 | 0.0280 | 0.0 | 0.9419 | 0.8345 | 0.9468 | 0.0005 | 0.1478 | 0.2726 | 0.0 | nan | 0.6935 | 0.8021 | 0.7869 | 0.5437 | 0.2719 | nan | 0.3428 | 0.4933 | 0.0 | 0.7917 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4134 | 0.0 | 0.0 | 0.6707 | 0.0 | 0.3632 | 0.4528 | 0.0 | nan | 0.0014 | 0.2150 | 0.0280 | 0.0 | 0.8367 | 0.7422 | 0.9203 | 0.0005 | 0.1346 | 0.1914 | 0.0 |
| 0.3341 | 140.74 | 7600 | 0.5641 | 0.3038 | 0.3702 | 0.8325 | nan | 0.7624 | 0.9172 | 0.8114 | 0.8959 | 0.2940 | nan | 0.5063 | 0.6105 | 0.0 | 0.9434 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7179 | 0.0 | 0.0 | 0.8732 | 0.0 | 0.5230 | 0.5420 | 0.0 | nan | 0.0 | 0.4148 | 0.0425 | 0.0 | 0.9411 | 0.7719 | 0.9528 | 0.0 | 0.0840 | 0.2431 | 0.0 | nan | 0.7064 | 0.8174 | 0.7877 | 0.6132 | 0.2760 | nan | 0.3594 | 0.4823 | 0.0 | 0.7859 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4116 | 0.0 | 0.0 | 0.6715 | 0.0 | 0.3953 | 0.4613 | 0.0 | nan | 0.0 | 0.2236 | 0.0425 | 0.0 | 0.8241 | 0.6840 | 0.9219 | 0.0 | 0.0790 | 0.1794 | 0.0 |
| 0.3135 | 142.59 | 7700 | 0.5712 | 0.3062 | 0.3709 | 0.8300 | nan | 0.7952 | 0.8986 | 0.8100 | 0.8619 | 0.3084 | nan | 0.4715 | 0.6006 | 0.0 | 0.9439 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6837 | 0.0 | 0.0 | 0.8669 | 0.0 | 0.5083 | 0.5475 | 0.0 | nan | 0.0 | 0.4053 | 0.0384 | 0.0 | 0.9443 | 0.8124 | 0.9524 | 0.0 | 0.1181 | 0.3029 | 0.0 | nan | 0.7270 | 0.8042 | 0.7907 | 0.5385 | 0.2877 | nan | 0.3610 | 0.4689 | 0.0 | 0.7784 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4431 | 0.0 | 0.0 | 0.6764 | 0.0 | 0.3905 | 0.4659 | 0.0 | nan | 0.0 | 0.2280 | 0.0384 | 0.0 | 0.8312 | 0.7224 | 0.9227 | 0.0 | 0.1114 | 0.2117 | 0.0 |
| 0.2985 | 144.44 | 7800 | 0.5705 | 0.3063 | 0.3739 | 0.8331 | nan | 0.7844 | 0.9061 | 0.8011 | 0.8987 | 0.3105 | nan | 0.4674 | 0.6336 | 0.0 | 0.9448 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7174 | 0.0 | 0.0 | 0.8645 | 0.0 | 0.4836 | 0.5414 | 0.0 | nan | 0.0 | 0.4277 | 0.0445 | 0.0 | 0.9390 | 0.8448 | 0.9518 | 0.0003 | 0.1004 | 0.3014 | 0.0 | nan | 0.7238 | 0.8110 | 0.7871 | 0.5506 | 0.2869 | nan | 0.3545 | 0.4901 | 0.0 | 0.7879 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4047 | 0.0 | 0.0 | 0.6872 | 0.0 | 0.3776 | 0.4572 | 0.0 | nan | 0.0 | 0.2263 | 0.0445 | 0.0 | 0.8392 | 0.7464 | 0.9226 | 0.0003 | 0.0950 | 0.2101 | 0.0 |
| 0.3083 | 146.3 | 7900 | 0.6255 | 0.3029 | 0.3735 | 0.8173 | nan | 0.7919 | 0.8576 | 0.8118 | 0.9101 | 0.3017 | nan | 0.4374 | 0.6462 | 0.0 | 0.9461 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7137 | 0.0 | 0.0 | 0.8706 | 0.0 | 0.5111 | 0.5445 | 0.0 | nan | 0.0001 | 0.4282 | 0.0589 | 0.0 | 0.9317 | 0.8537 | 0.9628 | 0.0000 | 0.1030 | 0.2713 | 0.0 | nan | 0.7389 | 0.7675 | 0.7857 | 0.4623 | 0.2774 | nan | 0.3477 | 0.4815 | 0.0 | 0.7777 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4220 | 0.0 | 0.0 | 0.6797 | 0.0 | 0.3926 | 0.4652 | 0.0 | nan | 0.0001 | 0.2292 | 0.0588 | 0.0 | 0.8421 | 0.7549 | 0.9219 | 0.0000 | 0.0939 | 0.1926 | 0.0 |
| 0.3132 | 148.15 | 8000 | 0.6407 | 0.2987 | 0.3697 | 0.8084 | nan | 0.8056 | 0.8366 | 0.8045 | 0.9187 | 0.2881 | nan | 0.3901 | 0.6494 | 0.0 | 0.9456 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7065 | 0.0 | 0.0 | 0.8674 | 0.0 | 0.4835 | 0.5578 | 0.0 | nan | 0.0 | 0.4107 | 0.0690 | 0.0 | 0.9364 | 0.8069 | 0.9579 | 0.0 | 0.1392 | 0.2549 | 0.0 | nan | 0.7400 | 0.7511 | 0.7860 | 0.4288 | 0.2705 | nan | 0.3211 | 0.4907 | 0.0 | 0.7845 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4064 | 0.0 | 0.0 | 0.6776 | 0.0 | 0.3750 | 0.4463 | 0.0 | nan | 0.0 | 0.2323 | 0.0689 | 0.0 | 0.8346 | 0.7221 | 0.9215 | 0.0 | 0.1189 | 0.1827 | 0.0 |
| 0.3227 | 150.0 | 8100 | 0.6215 | 0.3010 | 0.3747 | 0.8154 | nan | 0.8072 | 0.8523 | 0.7987 | 0.9122 | 0.3387 | nan | 0.4049 | 0.6521 | 0.0 | 0.9464 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7268 | 0.0 | 0.0 | 0.8526 | 0.0 | 0.5301 | 0.5632 | 0.0 | nan | 0.0015 | 0.4353 | 0.0597 | 0.0 | 0.9352 | 0.8036 | 0.9574 | 0.0 | 0.1202 | 0.2916 | 0.0 | nan | 0.7319 | 0.7712 | 0.7839 | 0.4639 | 0.3115 | nan | 0.3235 | 0.4815 | 0.0 | 0.7813 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3954 | 0.0 | 0.0 | 0.6800 | 0.0 | 0.3930 | 0.4522 | 0.0 | nan | 0.0015 | 0.2349 | 0.0596 | 0.0 | 0.8319 | 0.7106 | 0.9225 | 0.0 | 0.1071 | 0.1947 | 0.0 |
| 0.3041 | 151.85 | 8200 | 0.6365 | 0.2982 | 0.3695 | 0.8091 | nan | 0.7813 | 0.8516 | 0.8100 | 0.9057 | 0.2989 | nan | 0.4138 | 0.6557 | 0.0 | 0.9422 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7155 | 0.0 | 0.0 | 0.8717 | 0.0 | 0.5273 | 0.5454 | 0.0 | nan | 0.0 | 0.4293 | 0.0595 | 0.0 | 0.9354 | 0.7484 | 0.9557 | 0.0 | 0.1301 | 0.2483 | 0.0 | nan | 0.7117 | 0.7612 | 0.7891 | 0.4543 | 0.2787 | nan | 0.3305 | 0.4950 | 0.0 | 0.7874 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4007 | 0.0 | 0.0 | 0.6772 | 0.0 | 0.3923 | 0.4632 | 0.0 | nan | 0.0 | 0.2342 | 0.0594 | 0.0 | 0.8230 | 0.6691 | 0.9227 | 0.0 | 0.1142 | 0.1800 | 0.0 |
| 0.3295 | 153.7 | 8300 | 0.5763 | 0.3064 | 0.3745 | 0.8319 | nan | 0.8091 | 0.9000 | 0.8155 | 0.8927 | 0.3048 | nan | 0.4385 | 0.6734 | 0.0 | 0.9391 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7114 | 0.0 | 0.0 | 0.8707 | 0.0 | 0.4884 | 0.5694 | 0.0 | nan | 0.0032 | 0.4179 | 0.0581 | 0.0 | 0.9385 | 0.8107 | 0.9552 | 0.0006 | 0.1316 | 0.2550 | 0.0 | nan | 0.7460 | 0.8059 | 0.7926 | 0.5582 | 0.2844 | nan | 0.3545 | 0.5009 | 0.0 | 0.7892 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4184 | 0.0 | 0.0 | 0.6741 | 0.0 | 0.3769 | 0.4455 | 0.0 | nan | 0.0032 | 0.2317 | 0.0581 | 0.0 | 0.8317 | 0.7120 | 0.9232 | 0.0005 | 0.1162 | 0.1807 | 0.0 |
| 0.3057 | 155.56 | 8400 | 0.6602 | 0.2967 | 0.3669 | 0.8053 | nan | 0.7862 | 0.8400 | 0.8012 | 0.9083 | 0.2761 | nan | 0.3977 | 0.6548 | 0.0 | 0.9399 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7262 | 0.0 | 0.0 | 0.8830 | 0.0 | 0.4582 | 0.5390 | 0.0 | nan | 0.0 | 0.4382 | 0.0696 | 0.0 | 0.9380 | 0.7676 | 0.9517 | 0.0 | 0.1204 | 0.2454 | 0.0 | nan | 0.7257 | 0.7493 | 0.7832 | 0.4331 | 0.2603 | nan | 0.3344 | 0.4909 | 0.0 | 0.7899 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4164 | 0.0 | 0.0 | 0.6631 | 0.0 | 0.3619 | 0.4610 | 0.0 | nan | 0.0 | 0.2358 | 0.0695 | 0.0 | 0.8268 | 0.6858 | 0.9224 | 0.0 | 0.1038 | 0.1798 | 0.0 |
| 0.3152 | 157.41 | 8500 | 0.6195 | 0.2986 | 0.3661 | 0.8115 | nan | 0.7876 | 0.8570 | 0.7994 | 0.8920 | 0.2891 | nan | 0.4035 | 0.6056 | 0.0 | 0.9417 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7090 | 0.0 | 0.0 | 0.8719 | 0.0 | 0.4959 | 0.5413 | 0.0 | nan | 0.0 | 0.4136 | 0.0566 | 0.0 | 0.9414 | 0.7717 | 0.9517 | 0.0 | 0.1198 | 0.2672 | 0.0 | nan | 0.7263 | 0.7633 | 0.7814 | 0.4550 | 0.2715 | nan | 0.3352 | 0.4721 | 0.0 | 0.7820 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4233 | 0.0 | 0.0 | 0.6671 | 0.0 | 0.3757 | 0.4677 | 0.0 | nan | 0.0 | 0.2407 | 0.0565 | 0.0 | 0.8255 | 0.6891 | 0.9216 | 0.0 | 0.1083 | 0.1912 | 0.0 |
| 0.3041 | 159.26 | 8600 | 0.5761 | 0.3071 | 0.3735 | 0.8297 | nan | 0.8077 | 0.8910 | 0.8053 | 0.8839 | 0.3353 | nan | 0.4603 | 0.6015 | 0.0 | 0.9489 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6966 | 0.0 | 0.0 | 0.8701 | 0.0 | 0.4933 | 0.5427 | 0.0 | nan | 0.0082 | 0.4481 | 0.0761 | 0.0 | 0.9301 | 0.8454 | 0.9544 | 0.0005 | 0.1062 | 0.2469 | 0.0 | nan | 0.7406 | 0.7982 | 0.7855 | 0.5184 | 0.3024 | nan | 0.3652 | 0.4669 | 0.0 | 0.7807 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4413 | 0.0 | 0.0 | 0.6853 | 0.0 | 0.3815 | 0.4553 | 0.0 | nan | 0.0082 | 0.2312 | 0.0759 | 0.0 | 0.8414 | 0.7507 | 0.9229 | 0.0005 | 0.0961 | 0.1775 | 0.0 |
| 0.3185 | 161.11 | 8700 | 0.5760 | 0.3058 | 0.3698 | 0.8296 | nan | 0.8094 | 0.8946 | 0.7956 | 0.8887 | 0.2897 | nan | 0.4223 | 0.5895 | 0.0 | 0.9357 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6889 | 0.0 | 0.0 | 0.8908 | 0.0 | 0.4640 | 0.5538 | 0.0 | nan | 0.0 | 0.4239 | 0.0692 | 0.0 | 0.9305 | 0.8418 | 0.9519 | 0.0001 | 0.1431 | 0.2510 | 0.0 | nan | 0.7455 | 0.7997 | 0.7789 | 0.5321 | 0.2717 | nan | 0.3473 | 0.4756 | 0.0 | 0.8013 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4311 | 0.0 | 0.0 | 0.6576 | 0.0 | 0.3605 | 0.4511 | 0.0 | nan | 0.0 | 0.2412 | 0.0691 | 0.0 | 0.8410 | 0.7459 | 0.9223 | 0.0001 | 0.1284 | 0.1839 | 0.0 |
| 0.2908 | 162.96 | 8800 | 0.5655 | 0.3075 | 0.3717 | 0.8316 | nan | 0.8548 | 0.8841 | 0.7997 | 0.8745 | 0.3118 | nan | 0.4610 | 0.6024 | 0.0 | 0.9410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6931 | 0.0 | 0.0 | 0.8861 | 0.0 | 0.4534 | 0.5383 | 0.0 | nan | 0.0015 | 0.4266 | 0.0689 | 0.0 | 0.9366 | 0.8053 | 0.9554 | 0.0 | 0.1346 | 0.2641 | 0.0 | nan | 0.7595 | 0.8021 | 0.7817 | 0.5396 | 0.2919 | nan | 0.3717 | 0.4720 | 0.0 | 0.7905 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4462 | 0.0 | 0.0 | 0.6634 | 0.0 | 0.3562 | 0.4639 | 0.0 | nan | 0.0015 | 0.2393 | 0.0688 | 0.0 | 0.8346 | 0.7212 | 0.9232 | 0.0 | 0.1193 | 0.1923 | 0.0 |
| 0.3137 | 164.81 | 8900 | 0.5829 | 0.3094 | 0.3784 | 0.8279 | nan | 0.8476 | 0.8674 | 0.8118 | 0.9018 | 0.3237 | nan | 0.4801 | 0.6610 | 0.0 | 0.9387 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6851 | 0.0 | 0.0 | 0.8696 | 0.0 | 0.5109 | 0.5681 | 0.0 | nan | 0.0260 | 0.4276 | 0.0709 | 0.0 | 0.9330 | 0.8416 | 0.9554 | 0.0012 | 0.1333 | 0.2547 | 0.0 | nan | 0.7562 | 0.7893 | 0.7902 | 0.5123 | 0.3055 | nan | 0.3768 | 0.4921 | 0.0 | 0.7978 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 0.0 | 0.0 | 0.6754 | 0.0 | 0.3867 | 0.4408 | 0.0 | nan | 0.0260 | 0.2316 | 0.0708 | 0.0 | 0.8396 | 0.7418 | 0.9237 | 0.0010 | 0.1173 | 0.1797 | 0.0 |
| 0.3219 | 166.67 | 9000 | 0.5812 | 0.3065 | 0.3750 | 0.8278 | nan | 0.8354 | 0.8788 | 0.8041 | 0.8834 | 0.2990 | nan | 0.4594 | 0.6655 | 0.0 | 0.9395 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6980 | 0.0 | 0.0 | 0.8601 | 0.0 | 0.5069 | 0.5685 | 0.0 | nan | 0.0113 | 0.4156 | 0.0664 | 0.0 | 0.9440 | 0.8108 | 0.9521 | 0.0001 | 0.1291 | 0.2716 | 0.0 | nan | 0.7565 | 0.7902 | 0.7828 | 0.5219 | 0.2845 | nan | 0.3688 | 0.4922 | 0.0 | 0.7966 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4240 | 0.0 | 0.0 | 0.6768 | 0.0 | 0.3877 | 0.4481 | 0.0 | nan | 0.0113 | 0.2327 | 0.0664 | 0.0 | 0.8308 | 0.7154 | 0.9230 | 0.0001 | 0.1124 | 0.1869 | 0.0 |
| 0.3181 | 168.52 | 9100 | 0.5632 | 0.3112 | 0.3765 | 0.8367 | nan | 0.8125 | 0.9072 | 0.8124 | 0.8963 | 0.3044 | nan | 0.4647 | 0.6697 | 0.0 | 0.9359 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6879 | 0.0 | 0.0 | 0.8771 | 0.0 | 0.5085 | 0.5560 | 0.0 | nan | 0.0039 | 0.4244 | 0.0703 | 0.0 | 0.9367 | 0.8280 | 0.9532 | 0.0 | 0.1309 | 0.2672 | 0.0 | nan | 0.7474 | 0.8113 | 0.7892 | 0.5707 | 0.2882 | nan | 0.3704 | 0.5031 | 0.0 | 0.7988 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4314 | 0.0 | 0.0 | 0.6778 | 0.0 | 0.3900 | 0.4604 | 0.0 | nan | 0.0039 | 0.2372 | 0.0702 | 0.0 | 0.8390 | 0.7407 | 0.9234 | 0.0 | 0.1173 | 0.1872 | 0.0 |
| 0.3009 | 170.37 | 9200 | 0.5671 | 0.3095 | 0.3743 | 0.8326 | nan | 0.7939 | 0.9018 | 0.7926 | 0.8902 | 0.3160 | nan | 0.4603 | 0.6415 | 0.0 | 0.9414 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6804 | 0.0 | 0.0 | 0.8815 | 0.0 | 0.4974 | 0.5528 | 0.0 | nan | 0.0000 | 0.4233 | 0.0749 | 0.0 | 0.9339 | 0.8322 | 0.9566 | 0.0 | 0.1296 | 0.2770 | 0.0 | nan | 0.7279 | 0.8041 | 0.7736 | 0.5652 | 0.2951 | nan | 0.3698 | 0.4960 | 0.0 | 0.7938 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4395 | 0.0 | 0.0 | 0.6714 | 0.0 | 0.3837 | 0.4627 | 0.0 | nan | 0.0000 | 0.2368 | 0.0747 | 0.0 | 0.8379 | 0.7389 | 0.9235 | 0.0 | 0.1161 | 0.1946 | 0.0 |
| 0.2873 | 172.22 | 9300 | 0.6113 | 0.3047 | 0.3720 | 0.8176 | nan | 0.8107 | 0.8536 | 0.7603 | 0.8949 | 0.3232 | nan | 0.4761 | 0.6422 | 0.0 | 0.9415 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6799 | 0.0 | 0.0 | 0.8720 | 0.0 | 0.5023 | 0.5457 | 0.0 | nan | 0.0034 | 0.4146 | 0.0717 | 0.0 | 0.9439 | 0.8035 | 0.9521 | 0.0 | 0.1299 | 0.2839 | 0.0 | nan | 0.7355 | 0.7675 | 0.7422 | 0.4826 | 0.3027 | nan | 0.3715 | 0.4933 | 0.0 | 0.7896 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4421 | 0.0 | 0.0 | 0.6666 | 0.0 | 0.3881 | 0.4723 | 0.0 | nan | 0.0034 | 0.2350 | 0.0716 | 0.0 | 0.8305 | 0.7183 | 0.9229 | 0.0 | 0.1152 | 0.1992 | 0.0 |
| 0.2856 | 174.07 | 9400 | 0.6091 | 0.3045 | 0.3713 | 0.8183 | nan | 0.8177 | 0.8508 | 0.7884 | 0.9070 | 0.3274 | nan | 0.4412 | 0.5971 | 0.0 | 0.9437 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6904 | 0.0 | 0.0 | 0.8760 | 0.0 | 0.5037 | 0.5471 | 0.0 | nan | 0.0023 | 0.4093 | 0.0729 | 0.0 | 0.9395 | 0.8289 | 0.9513 | 0.0000 | 0.1123 | 0.2745 | 0.0 | nan | 0.7401 | 0.7694 | 0.7705 | 0.4745 | 0.3070 | nan | 0.3570 | 0.4797 | 0.0 | 0.7901 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4370 | 0.0 | 0.0 | 0.6642 | 0.0 | 0.3879 | 0.4663 | 0.0 | nan | 0.0023 | 0.2356 | 0.0728 | 0.0 | 0.8358 | 0.7333 | 0.9230 | 0.0000 | 0.1034 | 0.1937 | 0.0 |
| 0.2803 | 175.93 | 9500 | 0.6404 | 0.3009 | 0.3704 | 0.8084 | nan | 0.8365 | 0.8208 | 0.7833 | 0.9062 | 0.3050 | nan | 0.4405 | 0.6203 | 0.0 | 0.9443 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6940 | 0.0 | 0.0 | 0.8667 | 0.0 | 0.5055 | 0.5494 | 0.0 | nan | 0.0084 | 0.4148 | 0.0772 | 0.0 | 0.9424 | 0.8074 | 0.9551 | 0.0001 | 0.1077 | 0.2664 | 0.0 | nan | 0.7454 | 0.7459 | 0.7680 | 0.4316 | 0.2897 | nan | 0.3571 | 0.4866 | 0.0 | 0.7930 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4255 | 0.0 | 0.0 | 0.6652 | 0.0 | 0.3877 | 0.4601 | 0.0 | nan | 0.0084 | 0.2306 | 0.0771 | 0.0 | 0.8314 | 0.7178 | 0.9235 | 0.0001 | 0.0969 | 0.1889 | 0.0 |
| 0.2924 | 177.78 | 9600 | 0.6156 | 0.3045 | 0.3723 | 0.8156 | nan | 0.8293 | 0.8420 | 0.8051 | 0.8964 | 0.3365 | nan | 0.4651 | 0.6281 | 0.0 | 0.9443 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6806 | 0.0 | 0.0 | 0.8777 | 0.0 | 0.4957 | 0.5434 | 0.0 | nan | 0.0043 | 0.4293 | 0.0774 | 0.0 | 0.9387 | 0.7942 | 0.9562 | 0.0 | 0.1178 | 0.2514 | 0.0 | nan | 0.7508 | 0.7606 | 0.7848 | 0.4617 | 0.3134 | nan | 0.3712 | 0.4903 | 0.0 | 0.7912 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4384 | 0.0 | 0.0 | 0.6666 | 0.0 | 0.3850 | 0.4648 | 0.0 | nan | 0.0043 | 0.2308 | 0.0773 | 0.0 | 0.8320 | 0.7126 | 0.9232 | 0.0 | 0.1028 | 0.1836 | 0.0 |
| 0.2911 | 179.63 | 9700 | 0.6039 | 0.3051 | 0.3743 | 0.8197 | nan | 0.8161 | 0.8573 | 0.8009 | 0.9013 | 0.3091 | nan | 0.4597 | 0.6407 | 0.0 | 0.9406 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7191 | 0.0 | 0.0 | 0.8787 | 0.0 | 0.5007 | 0.5561 | 0.0 | nan | 0.0046 | 0.4187 | 0.0825 | 0.0 | 0.9325 | 0.8335 | 0.9578 | 0.0000 | 0.1036 | 0.2642 | 0.0 | nan | 0.7434 | 0.7687 | 0.7825 | 0.4751 | 0.2917 | nan | 0.3667 | 0.4994 | 0.0 | 0.7998 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4127 | 0.0 | 0.0 | 0.6761 | 0.0 | 0.3878 | 0.4561 | 0.0 | nan | 0.0046 | 0.2352 | 0.0823 | 0.0 | 0.8393 | 0.7401 | 0.9235 | 0.0000 | 0.0883 | 0.1885 | 0.0 |
| 0.3093 | 181.48 | 9800 | 0.6244 | 0.3021 | 0.3707 | 0.8132 | nan | 0.8240 | 0.8367 | 0.7819 | 0.9031 | 0.3158 | nan | 0.4523 | 0.6336 | 0.0 | 0.9419 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7047 | 0.0 | 0.0 | 0.8782 | 0.0 | 0.5024 | 0.5478 | 0.0 | nan | 0.0 | 0.4039 | 0.0761 | 0.0 | 0.9422 | 0.8036 | 0.9524 | 0.0 | 0.0992 | 0.2629 | 0.0 | nan | 0.7414 | 0.7575 | 0.7666 | 0.4537 | 0.2990 | nan | 0.3642 | 0.4913 | 0.0 | 0.7906 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4261 | 0.0 | 0.0 | 0.6655 | 0.0 | 0.3892 | 0.4639 | 0.0 | nan | 0.0 | 0.2339 | 0.0760 | 0.0 | 0.8311 | 0.7168 | 0.9226 | 0.0 | 0.0873 | 0.1892 | 0.0 |
| 0.3194 | 183.33 | 9900 | 0.6384 | 0.3015 | 0.3707 | 0.8106 | nan | 0.8269 | 0.8295 | 0.7809 | 0.9036 | 0.3169 | nan | 0.4373 | 0.6407 | 0.0 | 0.9394 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7004 | 0.0 | 0.0 | 0.8774 | 0.0 | 0.4936 | 0.5511 | 0.0 | nan | 0.0004 | 0.4210 | 0.0726 | 0.0 | 0.9434 | 0.8072 | 0.9462 | 0.0 | 0.1149 | 0.2605 | 0.0 | nan | 0.7423 | 0.7508 | 0.7639 | 0.4418 | 0.2988 | nan | 0.3584 | 0.4963 | 0.0 | 0.7976 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4212 | 0.0 | 0.0 | 0.6662 | 0.0 | 0.3830 | 0.4618 | 0.0 | nan | 0.0004 | 0.2347 | 0.0725 | 0.0 | 0.8311 | 0.7208 | 0.9214 | 0.0 | 0.0993 | 0.1875 | 0.0 |
| 0.3174 | 185.19 | 10000 | 0.6350 | 0.3022 | 0.3724 | 0.8117 | nan | 0.8240 | 0.8308 | 0.7789 | 0.9052 | 0.3152 | nan | 0.4703 | 0.6444 | 0.0 | 0.9424 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7116 | 0.0 | 0.0 | 0.8716 | 0.0 | 0.4736 | 0.5408 | 0.0 | nan | 0.0048 | 0.4202 | 0.0754 | 0.0 | 0.9437 | 0.8196 | 0.9525 | 0.0 | 0.1041 | 0.2872 | 0.0 | nan | 0.7413 | 0.7520 | 0.7629 | 0.4453 | 0.2976 | nan | 0.3701 | 0.4953 | 0.0 | 0.7962 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4152 | 0.0 | 0.0 | 0.6712 | 0.0 | 0.3749 | 0.4613 | 0.0 | nan | 0.0048 | 0.2337 | 0.0753 | 0.0 | 0.8324 | 0.7277 | 0.9234 | 0.0 | 0.0913 | 0.1997 | 0.0 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
atk1432/Taxi-v3 | atk1432 | "2024-04-01T08:07:59Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-04-01T08:07:57Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.36 +/- 2.68
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
model = load_from_hub(repo_id="atk1432/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
|
askardada/recvischallenge_11_fold_1 | askardada | "2023-11-27T21:20:48Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/vit-base-patch16-224",
"base_model:adapter:google/vit-base-patch16-224",
"region:us"
] | null | "2023-11-27T21:20:47Z" | ---
library_name: peft
base_model: google/vit-base-patch16-224
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
PassbyGrocer/hreb-weibo | PassbyGrocer | "2024-11-07T17:17:16Z" | 85 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:hfl/chinese-roberta-wwm-ext-large",
"base_model:finetune:hfl/chinese-roberta-wwm-ext-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-11-07T10:46:41Z" | ---
library_name: transformers
license: apache-2.0
base_model: hfl/chinese-roberta-wwm-ext-large
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert_bilstm_mega_crf-ner-weibo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_bilstm_mega_crf-ner-weibo
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2341
- Precision: 0.6657
- Recall: 0.7075
- F1: 0.6860
- Accuracy: 0.9683
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.7329 | 1.0 | 11 | 0.4907 | 0.0 | 0.0 | 0.0 | 0.9274 |
| 0.4493 | 2.0 | 22 | 0.3486 | 0.0 | 0.0 | 0.0 | 0.9274 |
| 0.3203 | 3.0 | 33 | 0.2384 | 0.2941 | 0.0629 | 0.1036 | 0.9354 |
| 0.2259 | 4.0 | 44 | 0.1618 | 0.5219 | 0.4874 | 0.5041 | 0.9586 |
| 0.1617 | 5.0 | 55 | 0.1318 | 0.5476 | 0.5975 | 0.5714 | 0.9642 |
| 0.1171 | 6.0 | 66 | 0.1202 | 0.5718 | 0.6509 | 0.6088 | 0.9676 |
| 0.0956 | 7.0 | 77 | 0.1210 | 0.6022 | 0.6855 | 0.6412 | 0.9692 |
| 0.0666 | 8.0 | 88 | 0.1208 | 0.5951 | 0.6887 | 0.6385 | 0.9690 |
| 0.0567 | 9.0 | 99 | 0.1205 | 0.5963 | 0.7107 | 0.6485 | 0.9687 |
| 0.0433 | 10.0 | 110 | 0.1219 | 0.6230 | 0.7170 | 0.6667 | 0.9699 |
| 0.0333 | 11.0 | 121 | 0.1365 | 0.6375 | 0.6635 | 0.6502 | 0.9687 |
| 0.0309 | 12.0 | 132 | 0.1421 | 0.6011 | 0.6918 | 0.6433 | 0.9672 |
| 0.0239 | 13.0 | 143 | 0.1460 | 0.6398 | 0.6981 | 0.6677 | 0.9687 |
| 0.0235 | 14.0 | 154 | 0.1539 | 0.6518 | 0.6887 | 0.6697 | 0.9687 |
| 0.0188 | 15.0 | 165 | 0.1604 | 0.6656 | 0.6824 | 0.6739 | 0.9694 |
| 0.0193 | 16.0 | 176 | 0.1625 | 0.6471 | 0.6918 | 0.6687 | 0.9687 |
| 0.0155 | 17.0 | 187 | 0.1758 | 0.6770 | 0.6855 | 0.6813 | 0.9683 |
| 0.0148 | 18.0 | 198 | 0.1714 | 0.6506 | 0.6792 | 0.6646 | 0.9688 |
| 0.014 | 19.0 | 209 | 0.1626 | 0.6391 | 0.7296 | 0.6814 | 0.9674 |
| 0.0116 | 20.0 | 220 | 0.1718 | 0.6459 | 0.7170 | 0.6796 | 0.9687 |
| 0.0111 | 21.0 | 231 | 0.1840 | 0.6718 | 0.6824 | 0.6771 | 0.9694 |
| 0.0097 | 22.0 | 242 | 0.1807 | 0.6479 | 0.6887 | 0.6677 | 0.9677 |
| 0.0098 | 23.0 | 253 | 0.1787 | 0.6391 | 0.7296 | 0.6814 | 0.9664 |
| 0.0089 | 24.0 | 264 | 0.1877 | 0.6518 | 0.6887 | 0.6697 | 0.9688 |
| 0.0077 | 25.0 | 275 | 0.1896 | 0.6519 | 0.6950 | 0.6728 | 0.9693 |
| 0.008 | 26.0 | 286 | 0.1915 | 0.6608 | 0.7107 | 0.6848 | 0.9690 |
| 0.0079 | 27.0 | 297 | 0.2008 | 0.6606 | 0.6792 | 0.6698 | 0.9687 |
| 0.0072 | 28.0 | 308 | 0.1961 | 0.6486 | 0.7138 | 0.6796 | 0.9681 |
| 0.0067 | 29.0 | 319 | 0.2040 | 0.6617 | 0.7013 | 0.6809 | 0.9691 |
| 0.0063 | 30.0 | 330 | 0.2028 | 0.6725 | 0.7296 | 0.6998 | 0.9688 |
| 0.0056 | 31.0 | 341 | 0.2053 | 0.6716 | 0.7201 | 0.6950 | 0.9689 |
| 0.0073 | 32.0 | 352 | 0.2088 | 0.6465 | 0.6730 | 0.6595 | 0.9674 |
| 0.0061 | 33.0 | 363 | 0.1936 | 0.6138 | 0.7296 | 0.6667 | 0.9673 |
| 0.0057 | 34.0 | 374 | 0.2061 | 0.6596 | 0.6824 | 0.6708 | 0.9683 |
| 0.0062 | 35.0 | 385 | 0.2077 | 0.6627 | 0.7044 | 0.6829 | 0.9680 |
| 0.0046 | 36.0 | 396 | 0.2133 | 0.6738 | 0.6950 | 0.6842 | 0.9689 |
| 0.0062 | 37.0 | 407 | 0.2029 | 0.6696 | 0.7201 | 0.6939 | 0.9680 |
| 0.0058 | 38.0 | 418 | 0.2039 | 0.6707 | 0.7044 | 0.6871 | 0.9678 |
| 0.0047 | 39.0 | 429 | 0.2055 | 0.6667 | 0.7233 | 0.6938 | 0.9685 |
| 0.0049 | 40.0 | 440 | 0.2105 | 0.6757 | 0.7075 | 0.6912 | 0.9692 |
| 0.0048 | 41.0 | 451 | 0.2052 | 0.6667 | 0.7107 | 0.6880 | 0.9683 |
| 0.0049 | 42.0 | 462 | 0.2081 | 0.6590 | 0.7170 | 0.6867 | 0.9687 |
| 0.0063 | 43.0 | 473 | 0.2011 | 0.6552 | 0.7170 | 0.6847 | 0.9683 |
| 0.0046 | 44.0 | 484 | 0.1994 | 0.6477 | 0.7170 | 0.6806 | 0.9676 |
| 0.0047 | 45.0 | 495 | 0.2122 | 0.6790 | 0.6918 | 0.6854 | 0.9693 |
| 0.0048 | 46.0 | 506 | 0.2082 | 0.6609 | 0.7233 | 0.6907 | 0.9687 |
| 0.0042 | 47.0 | 517 | 0.2140 | 0.6769 | 0.6918 | 0.6843 | 0.9695 |
| 0.0054 | 48.0 | 528 | 0.2054 | 0.6514 | 0.7170 | 0.6826 | 0.9681 |
| 0.0037 | 49.0 | 539 | 0.2070 | 0.6686 | 0.7107 | 0.6890 | 0.9689 |
| 0.0045 | 50.0 | 550 | 0.2093 | 0.6514 | 0.7170 | 0.6826 | 0.9686 |
| 0.004 | 51.0 | 561 | 0.2163 | 0.6787 | 0.7107 | 0.6943 | 0.9698 |
| 0.0038 | 52.0 | 572 | 0.2173 | 0.6706 | 0.7107 | 0.6901 | 0.9694 |
| 0.0042 | 53.0 | 583 | 0.2156 | 0.6745 | 0.7233 | 0.6980 | 0.9694 |
| 0.0039 | 54.0 | 594 | 0.2190 | 0.6727 | 0.6981 | 0.6852 | 0.9689 |
| 0.0037 | 55.0 | 605 | 0.2213 | 0.6767 | 0.7044 | 0.6903 | 0.9687 |
| 0.0043 | 56.0 | 616 | 0.2247 | 0.6829 | 0.7044 | 0.6935 | 0.9690 |
| 0.0034 | 57.0 | 627 | 0.2291 | 0.6789 | 0.6981 | 0.6884 | 0.9689 |
| 0.0046 | 58.0 | 638 | 0.2258 | 0.6737 | 0.7075 | 0.6902 | 0.9686 |
| 0.0033 | 59.0 | 649 | 0.2254 | 0.6736 | 0.7138 | 0.6931 | 0.9689 |
| 0.0036 | 60.0 | 660 | 0.2255 | 0.6758 | 0.7013 | 0.6883 | 0.9690 |
| 0.0038 | 61.0 | 671 | 0.2200 | 0.6580 | 0.7138 | 0.6848 | 0.9682 |
| 0.0036 | 62.0 | 682 | 0.2210 | 0.6657 | 0.7075 | 0.6860 | 0.9687 |
| 0.0039 | 63.0 | 693 | 0.2237 | 0.6647 | 0.7107 | 0.6869 | 0.9682 |
| 0.0039 | 64.0 | 704 | 0.2295 | 0.6727 | 0.6981 | 0.6852 | 0.9688 |
| 0.0032 | 65.0 | 715 | 0.2271 | 0.6707 | 0.7044 | 0.6871 | 0.9687 |
| 0.0038 | 66.0 | 726 | 0.2290 | 0.6677 | 0.7013 | 0.6840 | 0.9687 |
| 0.0033 | 67.0 | 737 | 0.2260 | 0.6617 | 0.7013 | 0.6809 | 0.9682 |
| 0.0038 | 68.0 | 748 | 0.2250 | 0.6676 | 0.7138 | 0.6900 | 0.9686 |
| 0.0037 | 69.0 | 759 | 0.2254 | 0.6618 | 0.7075 | 0.6839 | 0.9684 |
| 0.0039 | 70.0 | 770 | 0.2281 | 0.6687 | 0.6981 | 0.6831 | 0.9687 |
| 0.0036 | 71.0 | 781 | 0.2317 | 0.6687 | 0.6981 | 0.6831 | 0.9687 |
| 0.0034 | 72.0 | 792 | 0.2272 | 0.6609 | 0.7170 | 0.6878 | 0.9686 |
| 0.0036 | 73.0 | 803 | 0.2278 | 0.6756 | 0.7138 | 0.6942 | 0.9687 |
| 0.0035 | 74.0 | 814 | 0.2287 | 0.6677 | 0.7075 | 0.6870 | 0.9683 |
| 0.0034 | 75.0 | 825 | 0.2283 | 0.6686 | 0.7107 | 0.6890 | 0.9681 |
| 0.0032 | 76.0 | 836 | 0.2331 | 0.6657 | 0.7075 | 0.6860 | 0.9672 |
| 0.0041 | 77.0 | 847 | 0.2357 | 0.6598 | 0.7075 | 0.6829 | 0.9675 |
| 0.0033 | 78.0 | 858 | 0.2352 | 0.6706 | 0.7170 | 0.6930 | 0.9676 |
| 0.0039 | 79.0 | 869 | 0.2363 | 0.6696 | 0.7075 | 0.6881 | 0.9689 |
| 0.0036 | 80.0 | 880 | 0.2367 | 0.6627 | 0.6918 | 0.6769 | 0.9685 |
| 0.0032 | 81.0 | 891 | 0.2369 | 0.6607 | 0.6981 | 0.6789 | 0.9683 |
| 0.0036 | 82.0 | 902 | 0.2331 | 0.6696 | 0.7201 | 0.6939 | 0.9687 |
| 0.0036 | 83.0 | 913 | 0.2286 | 0.6599 | 0.7138 | 0.6858 | 0.9682 |
| 0.0034 | 84.0 | 924 | 0.2276 | 0.6637 | 0.7138 | 0.6879 | 0.9687 |
| 0.0035 | 85.0 | 935 | 0.2286 | 0.6647 | 0.7107 | 0.6869 | 0.9687 |
| 0.0031 | 86.0 | 946 | 0.2296 | 0.6667 | 0.7044 | 0.6850 | 0.9689 |
| 0.0036 | 87.0 | 957 | 0.2296 | 0.6677 | 0.7075 | 0.6870 | 0.9687 |
| 0.0033 | 88.0 | 968 | 0.2299 | 0.6706 | 0.7170 | 0.6930 | 0.9688 |
| 0.0033 | 89.0 | 979 | 0.2301 | 0.6618 | 0.7138 | 0.6868 | 0.9683 |
| 0.0034 | 90.0 | 990 | 0.2320 | 0.6766 | 0.7170 | 0.6962 | 0.9687 |
| 0.0031 | 91.0 | 1001 | 0.2309 | 0.6766 | 0.7170 | 0.6962 | 0.9686 |
| 0.0033 | 92.0 | 1012 | 0.2315 | 0.6736 | 0.7138 | 0.6931 | 0.9685 |
| 0.0037 | 93.0 | 1023 | 0.2333 | 0.6696 | 0.7075 | 0.6881 | 0.9684 |
| 0.0031 | 94.0 | 1034 | 0.2342 | 0.6696 | 0.7075 | 0.6881 | 0.9684 |
| 0.0029 | 95.0 | 1045 | 0.2351 | 0.6687 | 0.7044 | 0.6861 | 0.9683 |
| 0.004 | 96.0 | 1056 | 0.2347 | 0.6667 | 0.7044 | 0.6850 | 0.9683 |
| 0.0032 | 97.0 | 1067 | 0.2346 | 0.6667 | 0.7044 | 0.6850 | 0.9683 |
| 0.0033 | 98.0 | 1078 | 0.2343 | 0.6667 | 0.7044 | 0.6850 | 0.9683 |
| 0.0032 | 99.0 | 1089 | 0.2341 | 0.6647 | 0.7044 | 0.6840 | 0.9682 |
| 0.0034 | 100.0 | 1100 | 0.2341 | 0.6657 | 0.7075 | 0.6860 | 0.9683 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.4.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
lukaspetersson/gemma2b-refuse-addition | lukaspetersson | "2024-03-23T00:43:32Z" | 3 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:other",
"region:us"
] | null | "2024-03-23T00:43:26Z" | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: google/gemma-2b
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 999
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
ISEGURA/distilbert-base-multilingual-cased-autext2024_05-12-2024_detection | ISEGURA | "2024-12-05T11:57:38Z" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-05T11:57:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BhabhaAI/Mistral-translation-classify | BhabhaAI | "2024-02-07T12:56:15Z" | 4 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:BhabhaAI/translation-classify",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-01T05:50:03Z" | ---
library_name: transformers
license: apache-2.0
datasets:
- BhabhaAI/translation-classify
language:
- en
---
# Mistral Translation Classify
This is a fine tuned model on the [translation-classify dataset](https://huggingface.co/datasets/BhabhaAI/translation-classify) to classify whether we should translate an example.
It achieves 94% accuracy on validation dataset.
## Examples
Some question when translated does not remain meaningful/correct. The goal is to avoid such examples.
This includes coding, word-count, spelling error detection etc. Take a look at [dataset](https://huggingface.co/datasets/BhabhaAI/translation-classify) for examples |
kaimclone1/falcon-7b-instruct-ft-adapters | kaimclone1 | "2023-08-10T13:25:08Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-10T07:32:24Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
- PEFT 0.5.0.dev0
|
RachidAR/Qwen2.5-Coder-1.5B-Q5_K_M-GGUF | RachidAR | "2024-09-19T15:19:45Z" | 5 | 0 | transformers | [
"transformers",
"gguf",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-Coder-1.5B",
"base_model:quantized:Qwen/Qwen2.5-Coder-1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | "2024-09-19T15:19:31Z" | ---
base_model: Qwen/Qwen2.5-Coder-1.5B
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- code
- qwen
- qwen-coder
- codeqwen
- llama-cpp
- gguf-my-repo
---
# RachidAR/Qwen2.5-Coder-1.5B-Q5_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-1.5B`](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo RachidAR/Qwen2.5-Coder-1.5B-Q5_K_M-GGUF --hf-file qwen2.5-coder-1.5b-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo RachidAR/Qwen2.5-Coder-1.5B-Q5_K_M-GGUF --hf-file qwen2.5-coder-1.5b-q5_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo RachidAR/Qwen2.5-Coder-1.5B-Q5_K_M-GGUF --hf-file qwen2.5-coder-1.5b-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo RachidAR/Qwen2.5-Coder-1.5B-Q5_K_M-GGUF --hf-file qwen2.5-coder-1.5b-q5_k_m-imat.gguf -c 2048
```
|
MonishSoundarRaj/environment_around_alumni_pavilion_building_uncc | MonishSoundarRaj | "2024-04-23T15:52:52Z" | 6 | 1 | diffusers | [
"diffusers",
"autotrain",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2024-04-23T13:03:51Z" |
---
tags:
- autotrain
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a sks ground
license: openrail++
---
# AutoTrain SDXL LoRA DreamBooth - leonickson1/environment_around_alumni_pavilion_building_uncc
<Gallery />
## Model description
These are leonickson1/environment_around_alumni_pavilion_building_uncc LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use photo of a sks ground to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](leonickson1/environment_around_alumni_pavilion_building_uncc/tree/main) them in the Files & versions tab.
|
frankjoshua/stable-diffusion-3.5-large | frankjoshua | "2024-10-22T14:36:33Z" | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"en",
"arxiv:2403.03206",
"license:other",
"diffusers:StableDiffusion3Pipeline",
"region:us"
] | text-to-image | "2024-11-11T22:47:34Z" | ---
license: other
license_name: stabilityai-ai-community
license_link: LICENSE.md
tags:
- text-to-image
- stable-diffusion
- diffusers
inference: true
extra_gated_prompt: >-
By clicking "Agree", you agree to the [License
Agreement](https://huggingface.co/stabilityai/stable-diffusion-3.5-large/blob/main/LICENSE.md)
and acknowledge Stability AI's [Privacy
Policy](https://stability.ai/privacy-policy).
extra_gated_fields:
Name: text
Email: text
Country: country
Organization or Affiliation: text
Receive email updates and promotions on Stability AI products, services, and research?:
type: select
options:
- 'Yes'
- 'No'
What do you intend to use the model for?:
type: select
options:
- Research
- Personal use
- Creative Professional
- Startup
- Enterprise
I agree to the License Agreement and acknowledge Stability AI's Privacy Policy: checkbox
language:
- en
pipeline_tag: text-to-image
---
# Stable Diffusion 3.5 Large
![3.5 Large Demo Image](sd3.5_large_demo.png)
## Model
![MMDiT](mmdit.png)
[Stable Diffusion 3.5 Large](https://stability.ai/news/introducing-stable-diffusion-3-5) is a Multimodal Diffusion Transformer (MMDiT) text-to-image model that features improved performance in image quality, typography, complex prompt understanding, and resource-efficiency.
Please note: This model is released under the [Stability Community License](https://stability.ai/community-license-agreement). Visit [Stability AI](https://stability.ai/license) to learn or [contact us](https://stability.ai/enterprise) for commercial licensing details.
### Model Description
- **Developed by:** Stability AI
- **Model type:** MMDiT text-to-image generative model
- **Model Description:** This model generates images based on text prompts. It is a [Multimodal Diffusion Transformer](https://arxiv.org/abs/2403.03206) that use three fixed, pretrained text encoders, and with QK-normalization to improve training stability.
### License
- **Community License:** Free for research, non-commercial, and commercial use for organizations or individuals with less than $1M in total annual revenue. More details can be found in the [Community License Agreement](https://stability.ai/community-license-agreement). Read more at https://stability.ai/license.
- **For individuals and organizations with annual revenue above $1M**: please [contact us](https://stability.ai/enterprise) to get an Enterprise License.
### Model Sources
For local or self-hosted use, we recommend [ComfyUI](https://github.com/comfyanonymous/ComfyUI) for node-based UI inference, or [diffusers](https://github.com/huggingface/diffusers) or [GitHub](https://github.com/Stability-AI/sd3.5) for programmatic use.
- **ComfyUI:** [Github](https://github.com/comfyanonymous/ComfyUI), [Example Workflow](https://comfyanonymous.github.io/ComfyUI_examples/sd3/)
- **Huggingface Space:** [Space](https://huggingface.co/spaces/stabilityai/stable-diffusion-3.5-large)
- **Diffusers**: [See below](#using-with-diffusers).
- **GitHub**: [GitHub](https://github.com/Stability-AI/sd3.5).
- **API Endpoints:**
- [Stability AI API](https://platform.stability.ai/docs/api-reference#tag/Generate/paths/~1v2beta~1stable-image~1generate~1sd3/post)
- [Replicate](https://replicate.com/stability-ai/stable-diffusion-3.5-large)
- [Deepinfra](https://deepinfra.com/stabilityai/sd3.5)
### Implementation Details
- **QK Normalization:** Implements the QK normalization technique to improve training Stability.
- **Text Encoders:**
- CLIPs: [OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip), [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main), context length 77 tokens
- T5: [T5-xxl](https://huggingface.co/google/t5-v1_1-xxl), context length 77/256 tokens at different stages of training
- **Training Data and Strategy:**
This model was trained on a wide variety of data, including synthetic data and filtered publicly available data.
For more technical details of the original MMDiT architecture, please refer to the [Research paper](https://stability.ai/news/stable-diffusion-3-research-paper).
### Model Performance
See [blog](https://stability.ai/news/introducing-stable-diffusion-3-5) for our study about comparative performance in prompt adherence and aesthetic quality.
## File Structure
Click here to access the [Files and versions tab](https://huggingface.co/stabilityai/stable-diffusion-3.5-large/tree/main)
```│
├── text_encoders/
│ ├── README.md
│ ├── clip_g.safetensors
│ ├── clip_l.safetensors
│ ├── t5xxl_fp16.safetensors
│ └── t5xxl_fp8_e4m3fn.safetensors
│
├── README.md
├── LICENSE
├── sd3_large.safetensors
├── SD3.5L_example_workflow.json
└── sd3_large_demo.png
** File structure below is for diffusers integration**
├── scheduler/
├── text_encoder/
├── text_encoder_2/
├── text_encoder_3/
├── tokenizer/
├── tokenizer_2/
├── tokenizer_3/
├── transformer/
├── vae/
└── model_index.json
```
## Using with Diffusers
Upgrade to the latest version of the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```
pip install -U diffusers
```
and then you can run
```py
import torch
from diffusers import StableDiffusion3Pipeline
pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")
image = pipe(
"A capybara holding a sign that reads Hello World",
num_inference_steps=28,
guidance_scale=3.5,
).images[0]
image.save("capybara.png")
```
### Quantizing the model with diffusers
Reduce your VRAM usage and have the model fit on 🤏 VRAM GPUs
```
pip install bitsandbytes
```
```py
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel
from diffusers import StableDiffusion3Pipeline
import torch
model_id = "stabilityai/stable-diffusion-3.5-large"
nf4_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
model_nf4 = SD3Transformer2DModel.from_pretrained(
model_id,
subfolder="transformer",
quantization_config=nf4_config,
torch_dtype=torch.bfloat16
)
pipeline = StableDiffusion3Pipeline.from_pretrained(
model_id,
transformer=model_nf4,
torch_dtype=torch.bfloat16
)
pipeline.enable_model_cpu_offload()
prompt = "A whimsical and creative image depicting a hybrid creature that is a mix of a waffle and a hippopotamus, basking in a river of melted butter amidst a breakfast-themed landscape. It features the distinctive, bulky body shape of a hippo. However, instead of the usual grey skin, the creature's body resembles a golden-brown, crispy waffle fresh off the griddle. The skin is textured with the familiar grid pattern of a waffle, each square filled with a glistening sheen of syrup. The environment combines the natural habitat of a hippo with elements of a breakfast table setting, a river of warm, melted butter, with oversized utensils or plates peeking out from the lush, pancake-like foliage in the background, a towering pepper mill standing in for a tree. As the sun rises in this fantastical world, it casts a warm, buttery glow over the scene. The creature, content in its butter river, lets out a yawn. Nearby, a flock of birds take flight"
image = pipeline(
prompt=prompt,
num_inference_steps=28,
guidance_scale=4.5,
max_sequence_length=512,
).images[0]
image.save("whimsical.png")
```
### Fine-tuning
Please see the fine-tuning guide [here](https://stabilityai.notion.site/Stable-Diffusion-3-5-Large-Fine-tuning-Tutorial-11a61cdcd1968027a15bdbd7c40be8c6).
## Uses
### Intended Uses
Intended uses include the following:
* Generation of artworks and use in design and other artistic processes.
* Applications in educational or creative tools.
* Research on generative models, including understanding the limitations of generative models.
All uses of the model must be in accordance with our [Acceptable Use Policy](https://stability.ai/use-policy).
### Out-of-Scope Uses
The model was not trained to be factual or true representations of people or events. As such, using the model to generate such content is out-of-scope of the abilities of this model.
## Safety
As part of our safety-by-design and responsible AI deployment approach, we take deliberate measures to ensure Integrity starts at the early stages of development. We implement safety measures throughout the development of our models. We have implemented safety mitigations that are intended to reduce the risk of certain harms, however we recommend that developers conduct their own testing and apply additional mitigations based on their specific use cases.
For more about our approach to Safety, please visit our [Safety page](https://stability.ai/safety).
### Integrity Evaluation
Our integrity evaluation methods include structured evaluations and red-teaming testing for certain harms. Testing was conducted primarily in English and may not cover all possible harms.
### Risks identified and mitigations:
* Harmful content: We have used filtered data sets when training our models and implemented safeguards that attempt to strike the right balance between usefulness and preventing harm. However, this does not guarantee that all possible harmful content has been removed. TAll developers and deployers should exercise caution and implement content safety guardrails based on their specific product policies and application use cases.
* Misuse: Technical limitations and developer and end-user education can help mitigate against malicious applications of models. All users are required to adhere to our [Acceptable Use Policy](https://stability.ai/use-policy), including when applying fine-tuning and prompt engineering mechanisms. Please reference the Stability AI Acceptable Use Policy for information on violative uses of our products.
* Privacy violations: Developers and deployers are encouraged to adhere to privacy regulations with techniques that respect data privacy.
### Contact
Please report any issues with the model or contact us:
* Safety issues: [email protected]
* Security issues: [email protected]
* Privacy issues: [email protected]
* License and general: https://stability.ai/license
* Enterprise license: https://stability.ai/enterprise
|
asenella/mmnist_MVAEconfig_resnet_seed_0_ratio_05_c | asenella | "2023-06-04T12:40:37Z" | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | "2023-06-04T12:40:01Z" | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/mmnist_MVAEconfig2_seed_0_ratio_00_c | asenella | "2023-05-10T18:11:16Z" | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | "2023-05-10T18:10:23Z" | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
mci29/sn29_w1m3_gf2z | mci29 | "2024-12-22T13:22:31Z" | 14 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-22T13:18:04Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xu1998hz/43_dpo_lora_ucb_rand | xu1998hz | "2024-04-26T03:14:21Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-26T03:14:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
iczaw/prompt-diffusion-diffusers | iczaw | "2024-03-04T19:58:34Z" | 0 | 1 | diffusers | [
"diffusers",
"image-to-text",
"region:us"
] | image-to-text | "2024-03-03T23:11:29Z" | ---
library_name: diffusers
base_models:
- runwayml/stable-diffusion-v1-5
- lllyasviel/ControlNet
pipeline_tag: image-to-text
---
[Prompt diffusion](https://huggingface.co/zhendongw/prompt-diffusion) converted to Diffusers. |
DBangshu/V4_Base_GPT2_e5_4_3 | DBangshu | "2024-11-29T14:12:16Z" | 144 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-29T14:12:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AugustoReies/vit-base-patch16-224-mascotas-DA | AugustoReies | "2024-10-04T12:51:05Z" | 192 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-10-04T12:50:52Z" | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-mascotas-DA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-mascotas-DA
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1356
- Accuracy: 0.9625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00035
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.3161 | 0.9849 | 49 | 0.1356 | 0.9625 |
| 0.157 | 1.9899 | 99 | 0.1231 | 0.95 |
| 0.1355 | 2.9950 | 149 | 0.1380 | 0.9625 |
| 0.0979 | 4.0 | 199 | 0.2714 | 0.925 |
| 0.0788 | 4.9849 | 248 | 0.2664 | 0.9375 |
| 0.0584 | 5.9095 | 294 | 0.2223 | 0.9375 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
cleanrl/Pitfall-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2 | cleanrl | "2023-02-10T12:08:33Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Pitfall-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-10T12:08:28Z" | ---
tags:
- Pitfall-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pitfall-v5
type: Pitfall-v5
metrics:
- type: mean_reward
value: -24.80 +/- 50.75
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Pitfall-v5**
This is a trained model of a PPO agent playing Pitfall-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Pitfall-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Pitfall-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Pitfall-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Pitfall-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Pitfall-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Pitfall-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
m3hrdadfi/wav2vec2-large-xlsr-persian-v3 | m3hrdadfi | "2021-11-04T15:22:11Z" | 1,900 | 37 | transformers | [
"transformers",
"pytorch",
"tf",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"fa",
"dataset:common_voice",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
language: fa
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
widget:
- example_title: Common Voice sample 1
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3/resolve/main/sample1.flac
- example_title: Common Voice sample 2978
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3/resolve/main/sample2978.flac
- example_title: Common Voice sample 5168
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3/resolve/main/sample5168.flac
model-index:
- name: XLSR Wav2Vec2 Persian (Farsi) V3 by Mehrdad Farahani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fa
type: common_voice
args: fa
metrics:
- name: Test WER
type: wer
value: 10.36
---
# Wav2Vec2-Large-XLSR-53-Persian V3
## Usage
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Persian (Farsi) using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz.
**Requirements**
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
!pip install jiwer
!pip install parsivar
!pip install num2fawords
```
**Normalizer**
```bash
# Normalizer
!wget -O normalizer.py https://huggingface.co/m3hrdadfi/"wav2vec2-large-xlsr-persian-v3/raw/main/dictionary.py
!wget -O normalizer.py https://huggingface.co/m3hrdadfi/"wav2vec2-large-xlsr-persian-v3/raw/main/normalizer.py
```
**Downloading data**
```bash
wget https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/fa.tar.gz
tar -xzf fa.tar.gz
rm -rf fa.tar.gz
```
**Cleaning**
```python
from normalizer import normalizer
def cleaning(text):
if not isinstance(text, str):
return None
return normalizer({"sentence": text}, return_dict=False)
data_dir = "/content/cv-corpus-6.1-2020-12-11/fa"
test = pd.read_csv(f"{data_dir}/test.tsv", sep=" ")
test["path"] = data_dir + "/clips/" + test["path"]
print(f"Step 0: {len(test)}")
test["status"] = test["path"].apply(lambda path: True if os.path.exists(path) else None)
test = test.dropna(subset=["path"])
test = test.drop("status", 1)
print(f"Step 1: {len(test)}")
test["sentence"] = test["sentence"].apply(lambda t: cleaning(t))
test = test.dropna(subset=["sentence"])
print(f"Step 2: {len(test)}")
test = test.reset_index(drop=True)
print(test.head())
test = test[["path", "sentence"]]
test.to_csv("/content/test.csv", sep=" ", encoding="utf-8", index=False)
```
**Prediction**
```python
import numpy as np
import pandas as pd
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric
import IPython.display as ipd
model_name_or_path = "m3hrdadfi/wav2vec2-large-xlsr-persian-v3"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(model_name_or_path, device)
processor = Wav2Vec2Processor.from_pretrained(model_name_or_path)
model = Wav2Vec2ForCTC.from_pretrained(model_name_or_path).to(device)
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, processor.feature_extractor.sampling_rate)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(
batch["speech"],
sampling_rate=processor.feature_extractor.sampling_rate,
return_tensors="pt",
padding=True
)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
return batch
dataset = load_dataset("csv", data_files={"test": "/content/test.csv"}, delimiter=" ")["test"]
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict, batched=True, batch_size=4)
```
**WER Score**
```python
wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"])))
```
**Output**
```python
max_items = np.random.randint(0, len(result), 20).tolist()
for i in max_items:
reference, predicted = result["sentence"][i], result["predicted"][i]
print("reference:", reference)
print("predicted:", predicted)
print('---')
```
```text
reference: ماجرا رو براش تعریف کردم اون گفت مریم اگه میدونی پسر خوبیه خب چه اشکالی داره باهاش بیشتر اشنا بشو
predicted: ماجرا رو براش تعریف کردم اون گفت مریم اگه میدونی پسر خوبیه خب چه اشکالی داره باهاش بیشتر اشنا بشو
---
reference: بیا پایین تو اجازه نداری بری اون بالا
predicted: بیا پایین تو اجازه نداری بری اون بالا
---
reference: هر روز یک دو مداد کش می رفتتم تااین که تا پایان ترم از تمامی دوستانم مداد برداشته بودم
predicted: هر روز یک دو مداد کش می رفتم تااین که تا پایین ترم از تمامی دوستان و مداد برداشته بودم
---
reference: فکر میکنی آروم میشینه
predicted: فکر میکنی آروم میشینه
---
reference: هرکسی با گوشی هوشمند خود میتواند با کایلا متصل گردد در یک محدوده مکانی
predicted: هرکسی با گوشی هوشمند خود میتواند با کایلا متصل گردد در یک محدوده مکانی
---
reference: برو از مهرداد بپرس
predicted: برو از مهرداد بپرس
---
reference: می خواهم شما را با این قدمها آشنا کنم
predicted: می خواهم شما را با این قدمها آشنا کنم
---
reference: میدونم یه روز دوباره می تونم تو رو ببینم
predicted: میدونم یه روز دوباره می تونم تو رو ببینم
---
reference: بسیار خوب خواهد بود دعوت او را بپذیری
predicted: بسیار خوب خواهد بود دعوت او را بپذیری
---
reference: بهت بگن آشغالی خوبه
predicted: بهت بگن آشغالی خوبه
---
reference: چرا معاشرت با هم ایمانان ما را محفوظ نگه میدارد
predicted: چرا معاشرت با هم ایمانان آ را م حفوظ نگه میدارد
---
reference: بولیوی پس از گویان فقیرترین کشور آمریکای جنوبی است
predicted: بولیوی پس از گویان فقیرترین کشور آمریکای جنوبی است
---
reference: بعد از مدتی اینکار برایم عادی شد
predicted: بعد از مدتی اینکار برایم عادو شد
---
reference: به نظر اون هم همینطوره
predicted: به نظر اون هم همینطوره
---
reference: هیچ مایونز ی دارید
predicted: هیچ مایونز ی دارید
---
reference: هیچ یک از انان کاری به سنگ نداشتند
predicted: هیچ شک از انان کاری به سنگ نداشتند
---
reference: می خواهم کمی کتاب شعر ببینم
predicted: می خواهم کتاب شعر ببینم
---
reference: همین شوهر فهیمه مگه نمی گفتی فرمانده بوده کو
predicted: همین شوهر فهیمه بینامی گفتی فهمانده بود کو
---
reference: اون جاها کسی رو نمیبینی که تو دستش کتاب نباشه
predicted: اون جاها کسی رو نمیبینی که تو دستش کتاب نباشه
---
reference: زندان رفتن من در این سالهای اخیر برام شانس بزرگی بود که معما و مشکل چندین سالهام را حل کرد
predicted: زندان رفتن من در این سالها اخی براب شانس بزرگی بود که معما و مشکل چندین سالهام را حل کرد
---
```
## Evaluation
**Test Result:**
- WER: 10.36% |
okaris/autotrain-hate-speech-3k-89642143970 | okaris | "2023-09-17T06:23:59Z" | 109 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta",
"text-classification",
"autotrain",
"text-regression",
"en",
"dataset:okaris/autotrain-data-hate-speech-3k",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-17T06:21:33Z" | ---
tags:
- autotrain
- text-regression
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- okaris/autotrain-data-hate-speech-3k
co2_eq_emissions:
emissions: 0.023898445665108296
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 89642143970
- CO2 Emissions (in grams): 0.0239
## Validation Metrics
- Loss: 1.768
- MSE: 1.768
- MAE: 1.007
- R2: 0.604
- RMSE: 1.330
- Explained Variance: 0.614
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/okaris/autotrain-hate-speech-3k-89642143970
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("okaris/autotrain-hate-speech-3k-89642143970", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("okaris/autotrain-hate-speech-3k-89642143970", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Darshan03/Edu-Model-v7 | Darshan03 | "2024-11-11T10:06:54Z" | 73 | 0 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-11-11T10:03:25Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Xu-Ouyang/pythia-14m-int3-step36000-GPTQ-wikitext2 | Xu-Ouyang | "2024-07-17T22:11:09Z" | 80 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] | text-generation | "2024-07-17T22:11:05Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |