Dataset Viewer
Auto-converted to Parquet
| Column | Type | Range / Values |
|:--|:--|:--|
| modelId | string | lengths 5 to 138 |
| author | string | lengths 2 to 42 |
| last_modified | date | 2020-02-15 11:33:14 to 2025-04-15 12:28:42 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 426 distinct values |
| tags | sequence | lengths 1 to 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | date | 2022-03-02 23:29:04 to 2025-04-15 12:27:24 |
| card | string | lengths 11 to 1.01M |
lesso11/321367b5-0076-49ca-b5f4-d3f6d9728549
lesso11
"2025-01-18T03:10:55"
6
0
peft
[ "peft", "safetensors", "opt", "axolotl", "generated_from_trainer", "base_model:facebook/opt-125m", "base_model:adapter:facebook/opt-125m", "license:other", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-18T03:08:51"
--- library_name: peft license: other base_model: facebook/opt-125m tags: - axolotl - generated_from_trainer model-index: - name: 321367b5-0076-49ca-b5f4-d3f6d9728549 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: facebook/opt-125m bf16: true chat_template: llama3 datasets: - data_files: - 16a447cf139bcb80_train_data.json ds_type: json format: custom path: /workspace/input_data/16a447cf139bcb80_train_data.json type: field_instruction: paras field_output: headings format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 5 eval_table_size: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: lesso11/321367b5-0076-49ca-b5f4-d3f6d9728549 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 25 micro_batch_size: 2 mlflow_experiment_name: /tmp/16a447cf139bcb80_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 10 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: c1566845-c9e2-4658-b67d-6967b916832d wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: c1566845-c9e2-4658-b67d-6967b916832d warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 321367b5-0076-49ca-b5f4-d3f6d9728549 This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 5.8769 | 0.0012 | 1 | 1.3450 | | 5.9394 | 0.0058 | 5 | 1.3031 | | 4.1024 | 0.0116 | 10 | 1.1910 | | 5.2361 | 0.0174 | 15 | 1.1229 | | 4.3159 | 0.0232 | 20 | 1.0933 | | 4.309 | 0.0290 | 25 | 1.0892 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
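A minimal sketch of loading this LoRA adapter for inference, assuming the base model is loaded in full precision (training used `load_in_8bit: true`, which would additionally require `bitsandbytes`); the prompt string is illustrative:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the facebook/opt-125m base model, then attach the LoRA adapter on top.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = PeftModel.from_pretrained(base, "lesso11/321367b5-0076-49ca-b5f4-d3f6d9728549")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")

inputs = tokenizer("Write a heading for this paragraph:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```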
ashtaaav/results
ashtaaav
"2024-10-10T07:50:44"
179
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-09-03T15:22:22"
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6576 - Accuracy: 0.8967 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 375 | 0.2720 | 0.8867 | | 0.3139 | 2.0 | 750 | 0.3417 | 0.8967 | | 0.1336 | 3.0 | 1125 | 0.6884 | 0.8707 | | 0.032 | 4.0 | 1500 | 0.6928 | 0.8873 | | 0.032 | 5.0 | 1875 | 0.6576 | 0.8967 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
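The card does not document the task domain or label names, but given the text-classification pipeline tag, a hedged usage sketch:

```python
from transformers import pipeline

# Label names come from the fine-tuned config and are not documented in the card.
clf = pipeline("text-classification", model="ashtaaav/results")
print(clf("An example sentence to classify."))
```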
kawadlc/whisper-peft
kawadlc
"2023-08-16T03:18:46"
0
0
null
[ "zh", "dataset:mozilla-foundation/common_voice_13_0", "dataset:google/fleurs", "region:us" ]
null
"2023-08-09T09:06:25"
--- datasets: - mozilla-foundation/common_voice_13_0 - google/fleurs language: - zh metrics: - cer ---
toilaluan/latent-lm-vae-z5-encoder
toilaluan
"2025-03-17T03:09:10"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-16T17:56:38"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fastbond/Llama-2-7b-chat_SupervisedFineTune_GEMviggo_1epochs
fastbond
"2023-10-10T07:12:21"
0
0
peft
[ "peft", "region:us" ]
null
"2023-10-10T07:12:09"
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0
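For reference, the quantization config listed in the card above corresponds to a `BitsAndBytesConfig` along these lines (a sketch; the exact object PEFT serialized is not shown in the card):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the card's listed values: 4-bit NF4 with double quantization, bf16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# Pass as quantization_config=bnb_config to AutoModelForCausalLM.from_pretrained(...).
```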
rdzotz/w2v-bert-2.0-russian-colab-CV16.0
rdzotz
"2024-01-29T19:39:43"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-01-26T11:41:45"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tmobaggins/bert-finetuned-squad
tmobaggins
"2022-11-20T22:24:05"
119
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
"2022-11-14T23:19:16"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description This is a first attempt at following the directions from the Hugging Face course. It was run on Colab and a private server. ## Intended uses & limitations This model is fine-tuned for extractive question answering. ## Training and evaluation data SQuAD ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.0 - Tokenizers 0.13.2
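Since the model is fine-tuned for extractive question answering, a minimal usage sketch with the standard pipeline:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="tmobaggins/bert-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```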
alexhotti/run_20250401_124122
alexhotti
"2025-04-01T12:41:58"
0
0
null
[ "region:us" ]
null
"2025-04-01T12:41:58"
(model card unavailable: the fetch returned a Hugging Face HTTP 429 rate-limit error page)
Azazelle/llama3-8b-hikikomori-v0.4
Azazelle
"2024-06-09T03:47:38"
0
1
transformers
[ "transformers", "safetensors", "unsloth", "en", "dataset:unalignment/toxic-dpo-v0.2", "dataset:NobodyExistsOnTheInternet/ToxicQAFinal", "dataset:Open-Orca/SlimOrca", "dataset:PygmalionAI/PIPPA", "dataset:MinervaAI/Aesir-Preview", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-09T03:25:30"
--- library_name: transformers tags: - unsloth license: llama3 datasets: - unalignment/toxic-dpo-v0.2 - NobodyExistsOnTheInternet/ToxicQAFinal - Open-Orca/SlimOrca - PygmalionAI/PIPPA - MinervaAI/Aesir-Preview language: - en --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6626144c3892aa32a898c997/Vo3yQozK1__c4VEZpe23z.jpeg) # Disclaimer This model is an experimental fine-tune of Llama-3. ## Datasets used: - unalignment/toxic-dpo-v0.2 - NobodyExistsOnTheInternet/ToxicQAFinal - Open-Orca/SlimOrca (subset of data) - PygmalionAI/PIPPA - MinervaAI/Aesir-Preview ### Model Description <!-- Provide a longer summary of what this model is. --> The model is highly uncensored and suitable for roleplay. ## About Us Building - AI Waifu Supremacy [X](https://twitter.com/hikikomorihaven) [Discord](https://discord.gg/QS27Ka3cnq) ## Credits: (For open-sourcing tools + methodology to assist with fine-tuning) - Unsloth - NurtureAI (For open-sourcing data to be used for fine-tuning) - NobodyExistsOnTheInternet - unalignment - Open-Orca - PygmalionAI - MinervaAI
fbaldassarri/openlm-research_open_llama_7b_v2-autogptq-int4-gs64-sym
fbaldassarri
"2025-04-06T19:59:01"
0
0
null
[ "safetensors", "llama", "pytorch", "causal-lm", "OpenLLaMA", "autoround", "auto-round", "intel-autoround", "gptq", "auto-gptq", "autogptq", "woq", "intel", "openlm-research", "text-generation", "dataset:tiiuae/falcon-refinedweb", "dataset:bigcode/starcoderdata", "dataset:togethercomputer/RedPajama-Data-1T", "base_model:openlm-research/open_llama_7b_v2", "base_model:quantized:openlm-research/open_llama_7b_v2", "license:apache-2.0", "4-bit", "region:us" ]
text-generation
"2025-04-06T19:41:07"
--- tags: - pytorch - causal-lm - OpenLLaMA - autoround - auto-round - intel-autoround - gptq - auto-gptq - autogptq - woq - intel - pytorch - openlm-research license: apache-2.0 datasets: - tiiuae/falcon-refinedweb - bigcode/starcoderdata - togethercomputer/RedPajama-Data-1T model_name: OpenLLaMA 7B v2 base_model: - openlm-research/open_llama_7b_v2 inference: false model_creator: openlm-research pipeline_tag: text-generation prompt_template: '{prompt} ' quantized_by: fbaldassarri --- ## Model Information Quantized version of [openlm-research/open_llama_7b_v2](https://huggingface.co/openlm-research/open_llama_7b_v2) using torch.float32 for quantization tuning. - 4 bits (INT4) - group size = 64 - Symmetrical Quantization - Method AutoGPTQ (AutoGPTQ format) Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.6 Note: this INT4 version of open_llama_7b_v2 has been quantized to run inference through CPU. ## Replication Recipe ### Step 1 Install Requirements I suggest installing the requirements into a dedicated Python virtualenv or conda environment. ``` wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.6.tar.gz tar -xvzf v0.4.6.tar.gz cd auto-round-0.4.6 pip install -r requirements-cpu.txt --upgrade ``` ### Step 2 Build Intel AutoRound wheel from sources ``` pip install -vvv --no-build-isolation -e .[cpu] ``` ### Step 3 Script for Quantization ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "openlm-research/open_llama_7b_v2" model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) from auto_round import AutoRound bits, group_size, sym, device, amp = 4, 64, True, 'cpu', False autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp) autoround.quantize() output_dir = "./AutoRound/openlm-research_open_llama_7b_v2-autogptq-int4-gs64-sym" autoround.save_quantized(output_dir, format='auto_gptq', inplace=True) ``` ## License [Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/) ## Disclaimer This quantized model comes with no warranty. It has been developed only for research purposes.
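The recipe stops after saving the quantized checkpoint. An untested sketch of CPU inference on the result, assuming an AutoGPTQ-compatible loader such as `auto-gptq`/`optimum` is installed (the card does not specify the inference stack):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "fbaldassarri/openlm-research_open_llama_7b_v2-autogptq-int4-gs64-sym"
model = AutoModelForCausalLM.from_pretrained(repo, device_map="cpu")  # needs auto-gptq/optimum
tokenizer = AutoTokenizer.from_pretrained(repo)

inputs = tokenizer("The capital of France is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))
```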
DiegoD616/LunarLander-v2
DiegoD616
"2023-02-19T00:24:15"
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
"2023-02-18T23:58:32"
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -118.98 +/- 36.13 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters
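The Hyperparameters section above is empty in the source card. For orientation only, an episode-rollout sketch on the same environment with `gymnasium`; a random policy stands in for the repo's custom PPO agent, whose loading code is not shown:

```python
import gymnasium as gym

env = gym.make("LunarLander-v2")  # requires gymnasium[box2d]
obs, info = env.reset(seed=42)
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()  # stand-in: replace with the trained PPO policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward:.2f}")
```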
huggingtweets/ilanblock
huggingtweets
"2023-01-04T23:31:32"
107
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-01-04T23:30:40"
--- language: en thumbnail: http://www.huggingtweets.com/ilanblock/1672875087355/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1592883496434995207/shcZhn8g_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">S block</div> <div style="text-align: center; font-size: 14px;">@ilanblock</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from S block. | Data | S block | | --- | --- | | Tweets downloaded | 3242 | | Retweets | 53 | | Short tweets | 734 | | Tweets kept | 2455 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1r3gqa7a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ilanblock's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/wdrbtxet) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/wdrbtxet/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ilanblock') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Azzizz17/test
Azzizz17
"2023-11-06T06:22:43"
0
0
peft
[ "peft", "region:us" ]
null
"2023-11-06T06:18:18"
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
Xurrie/Bangchan
Xurrie
"2023-10-19T21:30:52"
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
"2023-10-19T21:30:52"
--- license: bigscience-openrail-m ---
IntelLabs/lonas-bloomz-7b-math
IntelLabs
"2025-02-12T17:21:08"
0
2
null
[ "en", "arxiv:2501.16372", "license:apache-2.0", "region:us" ]
null
"2024-03-15T10:29:25"
--- language: en license: apache-2.0 --- # LoNAS Model Card: lonas-bloomz-7b-math A super-network fine-tuned on BLOOMZ-7B with math reasoning datasets using LoNAS. ## Model Details ### Information - **Model name:** lonas-bloomz-7b-math - **Base model:** [BLOOMZ-7b](https://huggingface.co/bigscience/bloomz-7b1) - **Domain:** Math - **Subnetwork version:** Super-network - **NNCF Configuration:** [nncf_lonas_bloomz_7b.json](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS/nncf_config/unified_math/nncf_lonas_bloomz_7b.json) ### Adapter Configuration - **LoRA rank:** 32 - **LoRA alpha:** 64 - **LoRA target modules:** query_key_value, dense_h_to_4h, dense_4h_to_h ### Training Hyperparameters - **Batch size:** 16 - **Learning rate:** 3e-4 - **Epoch:** 8 ### Training Data Unified math reasoning dataset: [math_10k.json](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/ft-training_set/math_10k.json) (collected with the training sets of GSM8K, MAWPS, and AQuA). ### Evaluation Data [GSM8K](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/gsm8k/test.json), [AQuA](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/AQuA/test.json), [MAWPS](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/mawps/test.json) and [SVAMP](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/SVAMP/test.json) ## How to use Refer to [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS#evaluation](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS#evaluation): ```bash CUDA_VISIBLE_DEVICES=${DEVICES} python run_math.py \ --dataset_path None \ --model_name_or_path bigscience/bloomz-7b1 \ --lora \ --lora_weights lonas-bloomz-7b-math \ --nncf_config nncf_config/unified_math/nncf_lonas_bloomz_7b.json \ --do_test \ --output_dir lonas-bloomz-7b-math/results ``` ## Evaluation Results Results of the heuristic sub-network discovered from the super-network: | Method | Total Params. | TFLOPs | GSM8K | AQuA | MAWPS | SVAMP | Average | |------------|---------------|-----------|-------|------|-------|-------|-----------| | LoRA | 7.1B | 1.8 | 17.4 | 21.3 | 70.2 | 41.0 | **37.5** | | **LoNAS** | **6.1B** | **1.5** | 18.6 | 22.0 | 76.5 | 31.8 | 37.2 | ## Model Sources **Repository:** [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS) **Paper:** - [LoNAS: Elastic Low-Rank Adapters for Efficient Large Language Models](https://aclanthology.org/2024.lrec-main.940) - [Low-Rank Adapters Meet Neural Architecture Search for LLM Compression](https://arxiv.org/abs/2501.16372) ## Citation ```bibtex @inproceedings{munoz-etal-2024-lonas, title = "{L}o{NAS}: Elastic Low-Rank Adapters for Efficient Large Language Models", author = "Munoz, Juan Pablo and Yuan, Jinjie and Zheng, Yi and Jain, Nilesh", editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen", booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lrec-main.940", pages = "10760--10776", } ``` ## License Apache-2.0
johnpaulbin/articulate-11-expspanish-base-merged
johnpaulbin
"2025-01-31T17:05:13"
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-01-31T17:03:11"
--- base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** johnpaulbin - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
surprisedPikachu007/tomato-disease-detection
surprisedPikachu007
"2024-01-05T15:14:05"
35
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-03-09T04:55:35"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy base_model: google/vit-base-patch16-224-in21k model-index: - name: tomato-disease-detection results: - task: type: image-classification name: Image Classification dataset: name: imagefolder type: imagefolder config: dataset split: train args: dataset metrics: - type: accuracy value: 0.9917706397663923 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tomato-disease-detection This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0394 - Accuracy: 0.9918 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1363 | 1.0 | 941 | 0.1109 | 0.9774 | | 0.0657 | 2.0 | 1882 | 0.0666 | 0.9841 | | 0.0605 | 3.0 | 2823 | 0.0394 | 0.9918 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.10.1 - Tokenizers 0.13.2
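Given the image-classification pipeline tag and the ViT base, a minimal inference sketch; the image path is a placeholder for a tomato-leaf photo:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="surprisedPikachu007/tomato-disease-detection")
print(classifier("tomato_leaf.jpg"))  # placeholder path; returns labels with scores
```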
SaiChamakura/fine-tuned-visionllama100_0.6dropout
SaiChamakura
"2025-02-13T09:20:20"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.2-11B-Vision-Instruct", "base_model:finetune:meta-llama/Llama-3.2-11B-Vision-Instruct", "endpoints_compatible", "region:us" ]
null
"2025-02-12T19:53:37"
--- base_model: meta-llama/Llama-3.2-11B-Vision-Instruct library_name: transformers model_name: fine-tuned-visionllama100_0.6dropout tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for fine-tuned-visionllama100_0.6dropout This model is a fine-tuned version of [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="SaiChamakura/fine-tuned-visionllama100_0.6dropout", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.13.0 - Transformers: 4.47.1 - Pytorch: 2.5.1+cu121 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
jlbaker361/dcgan-64-neg-vanilla
jlbaker361
"2024-06-07T07:31:08"
0
0
null
[ "region:us" ]
null
"2024-06-02T01:04:47"
--- {} --- Creative Adversarial Network epochs: 100 dataset: jlbaker361/wikiart n_classes: 27 batch_size: 64 images were resized to 768 and then center-cropped to 64 used clip=False conditional=False discriminator parameters: init_dim: 32 final_dim: 512 generator parameters: input noise_dim: 100 wandb project: https://wandb.ai/jlbaker361/creativity-gan/runs/ve86nzd8
LoneStriker/Yarn-Mistral-7b-128k-8.0bpw-h8-exl2
LoneStriker
"2023-11-02T22:11:55"
7
2
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "custom_code", "dataset:emozilla/yarn-train-tokenized-16k-mistral", "arxiv:2309.00071", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-11-02T20:42:29"
--- datasets: - emozilla/yarn-train-tokenized-16k-mistral metrics: - perplexity library_name: transformers --- # Model Card: Nous-Yarn-Mistral-7b-128k [Preprint (arXiv)](https://arxiv.org/abs/2309.00071) [GitHub](https://github.com/jquesnelle/yarn) ![yarn](https://raw.githubusercontent.com/jquesnelle/yarn/mistral/data/proofpile-long-small-mistral.csv.png) ## Model Description Nous-Yarn-Mistral-7b-128k is a state-of-the-art language model for long context, further pretrained on long context data for 1500 steps using the YaRN extension method. It is an extension of [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and supports a 128k token context window. To use, pass `trust_remote_code=True` when loading the model, for example ```python model = AutoModelForCausalLM.from_pretrained("NousResearch/Yarn-Mistral-7b-128k", use_flash_attention_2=True, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True) ``` In addition you will need to use the latest version of `transformers` (until 4.35 comes out) ```sh pip install git+https://github.com/huggingface/transformers ``` ## Benchmarks Long context benchmarks: | Model | Context Window | 8k PPL | 16k PPL | 32k PPL | 64k PPL | 128k PPL | |-------|---------------:|------:|----------:|-----:|-----:|------------:| | [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 8k | 2.96 | - | - | - | - | | [Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) | 64k | 3.04 | 2.65 | 2.44 | 2.20 | - | | [Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) | 128k | 3.08 | 2.68 | 2.47 | 2.24 | 2.19 | Short context benchmarks showing that quality degradation is minimal: | Model | Context Window | ARC-c | Hellaswag | MMLU | Truthful QA | |-------|---------------:|------:|----------:|-----:|------------:| | [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 8k | 59.98 | 83.31 | 64.16 | 42.15 | | [Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) | 64k | 59.38 | 81.21 | 61.32 | 42.50 | | [Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) | 128k | 58.87 | 80.58 | 60.64 | 42.46 | ## Collaborators - [bloc97](https://github.com/bloc97): Methods, paper and evals - [@theemozilla](https://twitter.com/theemozilla): Methods, paper, model training, and evals - [@EnricoShippole](https://twitter.com/EnricoShippole): Model training - [honglu2875](https://github.com/honglu2875): Paper and evals The authors would like to thank LAION AI for their support of compute for this model. It was trained on the [JUWELS](https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/juwels) supercomputer.
SyedShaheer/bart-large-cnn-samsum_tuned
SyedShaheer
"2024-02-27T11:06:28"
123
1
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "summarization", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
"2024-02-27T04:27:14"
--- metrics: - rouge pipeline_tag: summarization ---
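The card carries only front matter; given the summarization pipeline tag and the SAMSum-tuned BART base implied by the model name, a hedged usage sketch with an illustrative dialogue:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="SyedShaheer/bart-large-cnn-samsum_tuned")
dialogue = "Amanda: I baked cookies. Do you want some? Jerry: Sure! Amanda: I'll bring them tomorrow."
print(summarizer(dialogue)[0]["summary_text"])
```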
TideDra/Qwen-VL-Chat-DPO
TideDra
"2024-05-30T12:46:18"
7
0
transformers
[ "transformers", "safetensors", "qwen", "custom_code", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-05-30T12:27:30"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hivex-research/hivex-DBR-PPO-baseline-task-2-difficulty-4
hivex-research
"2025-03-20T23:19:22"
0
0
hivex
[ "hivex", "tensorboard", "onnx", "hivex-drone-based-reforestation", "reinforcement-learning", "multi-agent-reinforcement-learning", "arxiv:2501.04180", "model-index", "region:us" ]
reinforcement-learning
"2024-08-30T08:12:17"
--- library_name: hivex original_train_name: DroneBasedReforestation_difficulty_4_task_2_run_id_1_train tags: - hivex - hivex-drone-based-reforestation - reinforcement-learning - multi-agent-reinforcement-learning model-index: - name: hivex-DBR-PPO-baseline-task-2-difficulty-4 results: - task: type: sub-task name: pick_up_seed_at_base task-id: 2 difficulty-id: 4 dataset: name: hivex-drone-based-reforestation type: hivex-drone-based-reforestation metrics: - type: out_of_energy_count value: 0.5909523957967758 +/- 0.09171894105446358 name: Out of Energy Count verified: true - type: recharge_energy_count value: 125.54469884961844 +/- 115.46428296295271 name: Recharge Energy Count verified: true - type: cumulative_reward value: 12.542430520057678 +/- 7.328528013270426 name: Cumulative Reward verified: true --- This model serves as the baseline for the **Drone-Based Reforestation** environment, trained and tested on task <code>2</code> with difficulty <code>4</code> using the Proximal Policy Optimization (PPO) algorithm.<br><br>Environment: **Drone-Based Reforestation**<br>Task: <code>2</code><br>Difficulty: <code>4</code><br>Algorithm: <code>PPO</code><br>Episode Length: <code>2000</code><br>Training <code>max_steps</code>: <code>1200000</code><br>Testing <code>max_steps</code>: <code>300000</code><br><br>Train & Test [Scripts](https://github.com/hivex-research/hivex)<br>Download the [Environment](https://github.com/hivex-research/hivex-environments) [hivex-paper]: https://arxiv.org/abs/2501.04180
mrferr3t/82a95ddc-27ef-41d0-99aa-279c5adbf0d4
mrferr3t
"2025-02-01T07:01:19"
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-13b-hf-flash", "base_model:adapter:NousResearch/CodeLlama-13b-hf-flash", "region:us" ]
null
"2025-02-01T04:59:04"
--- library_name: peft base_model: NousResearch/CodeLlama-13b-hf-flash tags: - axolotl - generated_from_trainer model-index: - name: 82a95ddc-27ef-41d0-99aa-279c5adbf0d4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/CodeLlama-13b-hf-flash bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 6643637ea42800a8_train_data.json ds_type: json format: custom path: /workspace/input_data/6643637ea42800a8_train_data.json type: field_instruction: query field_output: positive format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: 50 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: mrferr3t/82a95ddc-27ef-41d0-99aa-279c5adbf0d4 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0005 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 99 micro_batch_size: 2 mlflow_experiment_name: /tmp/6643637ea42800a8_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 300 saves_per_epoch: 0 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b402ed71-a5df-4128-a00d-e02aeb7f26dc wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: b402ed71-a5df-4128-a00d-e02aeb7f26dc warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 82a95ddc-27ef-41d0-99aa-279c5adbf0d4 This model is a fine-tuned version of [NousResearch/CodeLlama-13b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-13b-hf-flash) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.1497 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 99 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.9332 | 0.0000 | 1 | 1.2700 | | 4.3577 | 0.0008 | 50 | 1.1497 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.3.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
ambrosfitz/tinyllama-history-chat_v0.2ps
ambrosfitz
"2024-03-09T22:52:23"
91
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "US History - Primary Sources", "conversational", "en", "dataset:ambrosfitz/ps_data_2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-09T22:11:35"
--- library_name: transformers tags: - US History - Primary Sources license: apache-2.0 datasets: - ambrosfitz/ps_data_2 language: - en pipeline_tag: text-generation ---
AIFT/AIFT-ko-orca-plat-Yi-ko-6b-refine-v1.2
AIFT
"2024-01-22T23:59:30"
59
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-01-22T23:42:26"
--- license: cc-by-sa-4.0 --- <h1>instruct model v1.2</h1> <b><Training data construction></b> kyujinpy's KOR-OpenOrca-Platypus data was manually re-refined, and incorrect data was excluded. <br> Currently, to train a new version of the model and improve its performance, part of the Open-Orca dataset is being translated and refined. + Additional data is being produced with GPT4, with a target of 40,000 examples in total. <br> <br> ###The training data files are private. <br> <b><Training></b> Training was performed with LoRA on two A100 40G GPUs.
t3dw/sd-class-butts-64
t3dw
"2023-02-03T14:35:50"
2
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
"2023-02-03T12:26:10"
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of butts. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('t3dw/sd-class-butts-64') image = pipeline().images[0] image ```
owanr/ghc-roberta-base-intra-sorted-model_annots-cross-ent-batch-size
owanr
"2023-11-28T06:55:16"
0
0
null
[ "pytorch", "safetensors", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "region:us" ]
null
"2023-11-26T05:22:12"
--- license: mit base_model: roberta-base tags: - generated_from_trainer model-index: - name: ghc-roberta-base-intra-sorted-model_annots-cross-ent-batch-size results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ghc-roberta-base-intra-sorted-model_annots-cross-ent-batch-size This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 75.2559 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 150.1364 | 0.01 | 1 | 137.8655 | | 145.1491 | 0.01 | 2 | 137.7009 | | 141.2592 | 0.02 | 3 | 137.3692 | | 139.0919 | 0.02 | 4 | 136.8588 | | 135.9284 | 0.03 | 5 | 136.1717 | | 136.926 | 0.03 | 6 | 135.3063 | | 134.5704 | 0.04 | 7 | 134.2661 | | 137.4961 | 0.05 | 8 | 133.0658 | | 132.8797 | 0.05 | 9 | 131.7179 | | 136.0643 | 0.06 | 10 | 130.2276 | | 135.4301 | 0.06 | 11 | 128.5899 | | 127.1964 | 0.07 | 12 | 126.8418 | | 125.513 | 0.08 | 13 | 124.9421 | | 126.1663 | 0.08 | 14 | 122.8761 | | 119.5367 | 0.09 | 15 | 120.6167 | | 115.1592 | 0.09 | 16 | 118.1486 | | 119.9518 | 0.1 | 17 | 115.4187 | | 117.7895 | 0.1 | 18 | 112.3129 | | 106.805 | 0.11 | 19 | 108.8144 | | 108.7341 | 0.12 | 20 | 104.8086 | | 99.3505 | 0.12 | 21 | 100.0166 | | 96.6034 | 0.13 | 22 | 94.8025 | | 97.1092 | 0.13 | 23 | 89.3208 | | 87.4798 | 0.14 | 24 | 83.9749 | | 84.8475 | 0.14 | 25 | 79.4704 | | 83.373 | 0.15 | 26 | 77.4992 | | 91.3204 | 0.16 | 27 | 77.1923 | | 74.5017 | 0.16 | 28 | 76.8741 | | 72.3207 | 0.17 | 29 | 76.2473 | | 85.4136 | 0.17 | 30 | 75.5046 | | 93.5758 | 0.18 | 31 | 76.7842 | | 86.1518 | 0.18 | 32 | 80.6888 | | 81.8937 | 0.19 | 33 | 81.3889 | | 83.6016 | 0.2 | 34 | 78.7130 | | 80.4784 | 0.2 | 35 | 75.8756 | | 78.3552 | 0.21 | 36 | 74.9318 | | 80.5475 | 0.21 | 37 | 75.4798 | | 76.2882 | 0.22 | 38 | 76.9049 | | 84.8002 | 0.23 | 39 | 76.9469 | | 77.4504 | 0.23 | 40 | 75.8898 | | 67.6916 | 0.24 | 41 | 75.0978 | | 83.2207 | 0.24 | 42 | 74.9062 | | 85.0015 | 0.25 | 43 | 75.6731 | | 83.0497 | 0.25 | 44 | 75.9090 | | 76.6919 | 0.26 | 45 | 75.5054 | | 85.3877 | 0.27 | 46 | 75.1098 | | 93.0404 | 0.27 | 47 | 74.9389 | | 84.2074 | 0.28 | 48 | 74.9638 | | 95.3972 | 0.28 | 49 | 76.3316 | | 69.0631 | 0.29 | 50 | 77.7318 | | 75.4309 | 0.29 | 51 | 77.7450 | | 71.4134 | 0.3 | 52 | 76.5342 | | 69.1066 | 0.31 | 53 | 77.6628 | | 83.5769 | 0.31 | 54 | 78.5326 | | 65.4712 | 0.32 | 55 | 77.5269 | | 73.439 | 0.32 | 56 | 76.4808 | | 76.9116 | 0.33 | 57 | 76.2608 | | 79.1694 | 0.34 | 58 | 75.3286 | | 73.6838 | 0.34 | 59 | 75.0881 | | 73.1652 | 0.35 | 60 | 74.1375 | | 83.7013 | 0.35 | 61 | 74.3447 | | 84.6303 | 0.36 | 62 | 74.5879 | | 91.8366 | 0.36 | 63 | 73.4361 | | 77.6664 | 0.37 | 64 | 72.9986 | | 79.3617 | 0.38 | 65 | 72.9200 | | 81.8254 | 0.38 | 66 | 73.0654 | | 79.6363 | 0.39 | 67 | 73.0463 | | 86.762 | 0.39 | 68 | 73.1884 | | 86.3385 | 0.4 | 69 | 
73.4171 | | 84.0979 | 0.4 | 70 | 73.6643 | | 80.2404 | 0.41 | 71 | 73.6566 | | 85.6388 | 0.42 | 72 | 73.6430 | | 74.8952 | 0.42 | 73 | 73.4709 | | 67.454 | 0.43 | 74 | 73.2007 | | 77.6211 | 0.43 | 75 | 72.9498 | | 91.3803 | 0.44 | 76 | 72.5157 | | 83.2057 | 0.45 | 77 | 73.7496 | | 78.6635 | 0.45 | 78 | 76.5247 | | 62.8234 | 0.46 | 79 | 77.4481 | | 90.3382 | 0.46 | 80 | 76.1735 | | 79.189 | 0.47 | 81 | 75.0716 | | 69.5808 | 0.47 | 82 | 76.5869 | | 73.8021 | 0.48 | 83 | 77.8004 | | 84.3247 | 0.49 | 84 | 76.7431 | | 69.6219 | 0.49 | 85 | 75.4564 | | 74.931 | 0.5 | 86 | 74.4129 | | 72.8238 | 0.5 | 87 | 74.6309 | | 72.4519 | 0.51 | 88 | 75.2184 | | 72.0305 | 0.51 | 89 | 75.1167 | | 84.3407 | 0.52 | 90 | 74.1608 | | 82.8978 | 0.53 | 91 | 77.1175 | | 69.8918 | 0.53 | 92 | 82.0105 | | 88.752 | 0.54 | 93 | 80.1509 | | 80.5262 | 0.54 | 94 | 74.2979 | | 71.7139 | 0.55 | 95 | 72.3956 | | 77.5043 | 0.55 | 96 | 73.0314 | | 92.5619 | 0.56 | 97 | 73.5085 | | 70.4613 | 0.57 | 98 | 74.0224 | | 83.6026 | 0.57 | 99 | 73.7450 | | 75.0023 | 0.58 | 100 | 73.0852 | | 85.3673 | 0.58 | 101 | 73.1021 | | 83.6135 | 0.59 | 102 | 73.1276 | | 77.869 | 0.6 | 103 | 73.4371 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.1.0+cu121 - Datasets 2.6.1 - Tokenizers 0.14.1
Audino/my-awesome-modelv3
Audino
"2024-04-06T13:06:23"
107
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-04-05T18:54:40"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Noel-lawrence/q-Taxi-v3-weak
Noel-lawrence
"2024-02-17T11:28:22"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-02-17T11:27:39"
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-weak
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym  # load_from_hub is the helper defined in the Hugging Face Deep RL course notebooks

model = load_from_hub(repo_id="Noel-lawrence/q-Taxi-v3-weak", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
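Once the pickle is loaded, the Q-table can drive a greedy rollout. A minimal sketch using the gymnasium API; the `qtable` and `env_id` keys follow the Deep RL course convention and are an assumption, not something the card confirms:

```python
import gymnasium as gym
import numpy as np

# `model` is the dict returned by load_from_hub above; the key names are assumed
env = gym.make(model["env_id"])  # "Taxi-v3"
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
    total_reward += reward
print("episode return:", total_reward)
```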
ThuyNT03/PAOSL_COQE_viT5-large_v2
ThuyNT03
"2023-12-05T17:08:38"
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "base_model:finetune:VietAI/vit5-large", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-11-25T21:23:06"
--- license: mit base_model: VietAI/vit5-large tags: - generated_from_trainer model-index: - name: PAOSL_COQE_viT5-large_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # PAOSL_COQE_viT5-large_v2 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.14.1
isitcoding/hfsmoll_finetuned
isitcoding
"2025-03-08T13:46:35"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-08T13:46:15"
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Blitz-AI-MOE-v0.4-GGUF
mradermacher
"2024-11-23T04:25:16"
23
1
transformers
[ "transformers", "gguf", "en", "base_model:DenisTheDev/Blitz-AI-MOE-v0.4", "base_model:quantized:DenisTheDev/Blitz-AI-MOE-v0.4", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-11-23T02:16:36"
--- base_model: DenisTheDev/Blitz-AI-MOE-v0.4 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/DenisTheDev/Blitz-AI-MOE-v0.4 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Blitz-AI-MOE-v0.4-GGUF/resolve/main/Blitz-AI-MOE-v0.4.Q2_K.gguf) | Q2_K | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/Blitz-AI-MOE-v0.4-GGUF/resolve/main/Blitz-AI-MOE-v0.4.Q3_K_S.gguf) | Q3_K_S | 8.1 | | | [GGUF](https://huggingface.co/mradermacher/Blitz-AI-MOE-v0.4-GGUF/resolve/main/Blitz-AI-MOE-v0.4.Q3_K_M.gguf) | Q3_K_M | 9.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Blitz-AI-MOE-v0.4-GGUF/resolve/main/Blitz-AI-MOE-v0.4.Q3_K_L.gguf) | Q3_K_L | 9.7 | | | [GGUF](https://huggingface.co/mradermacher/Blitz-AI-MOE-v0.4-GGUF/resolve/main/Blitz-AI-MOE-v0.4.IQ4_XS.gguf) | IQ4_XS | 10.1 | | | [GGUF](https://huggingface.co/mradermacher/Blitz-AI-MOE-v0.4-GGUF/resolve/main/Blitz-AI-MOE-v0.4.Q4_K_S.gguf) | Q4_K_S | 10.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Blitz-AI-MOE-v0.4-GGUF/resolve/main/Blitz-AI-MOE-v0.4.Q4_K_M.gguf) | Q4_K_M | 11.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Blitz-AI-MOE-v0.4-GGUF/resolve/main/Blitz-AI-MOE-v0.4.Q5_K_S.gguf) | Q5_K_S | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/Blitz-AI-MOE-v0.4-GGUF/resolve/main/Blitz-AI-MOE-v0.4.Q5_K_M.gguf) | Q5_K_M | 13.2 | | | [GGUF](https://huggingface.co/mradermacher/Blitz-AI-MOE-v0.4-GGUF/resolve/main/Blitz-AI-MOE-v0.4.Q6_K.gguf) | Q6_K | 15.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Blitz-AI-MOE-v0.4-GGUF/resolve/main/Blitz-AI-MOE-v0.4.Q8_0.gguf) | Q8_0 | 19.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
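These are plain GGUF files, so any llama.cpp front end can load them. One hedged example with llama-cpp-python, assuming the Q4_K_M file from the table above has been downloaded locally (an illustration, not usage documented by the card):

```python
from llama_cpp import Llama

# Path to a locally downloaded quant; Q4_K_M is one of the "recommended" rows above
llm = Llama(model_path="Blitz-AI-MOE-v0.4.Q4_K_M.gguf", n_ctx=4096)

out = llm("Explain mixture-of-experts models in one paragraph.", max_tokens=128)
print(out["choices"][0]["text"])
```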
yaoandy107/whisper-small.en-moba-adapters
yaoandy107
"2024-02-01T11:25:20"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-02-01T10:31:48"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Abdullah-Nazhat/Uniform_Contextualizer
Abdullah-Nazhat
"2024-05-15T18:59:00"
0
0
null
[ "license:bsd-3-clause", "region:us" ]
null
"2024-05-15T18:56:31"
---
license: bsd-3-clause
---

# Uniform_Contextualizer

Uniform_Contextualizer: Studying the Effect of a Unity Expansion Factor for the Hidden Dimension in the Transformer MLP.

Paper coming soon.
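The title refers to a transformer feed-forward (MLP) block whose hidden dimension equals the model dimension, i.e. an expansion factor of 1 instead of the conventional 4. Since the paper is not yet out, the block below is only an illustration of that idea in PyTorch, not the repository's actual code:

```python
import torch.nn as nn

class UniformMLP(nn.Module):
    """Transformer MLP block with a unity expansion factor (hidden dim == model dim)."""

    def __init__(self, d_model: int, dropout: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_model),  # a standard block would use nn.Linear(d_model, 4 * d_model)
            nn.GELU(),
            nn.Linear(d_model, d_model),
            nn.Dropout(dropout),
        )

    def forward(self, x):
        return self.net(x)
```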
tceron/sentence-transformers-party-similarity-by-party
tceron
"2022-10-17T10:51:08"
2
0
transformers
[ "transformers", "pytorch", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
"2022-10-17T10:46:52"
--- license: cc-by-4.0 --- More information about the model [in this git repo](https://github.com/tceron/capture_similarity_between_political_parties)
AmineAllo/margin-element-detector-fm-dutiful-morning-4
AmineAllo
"2023-10-26T22:52:03"
20
0
transformers
[ "transformers", "pytorch", "table-transformer", "object-detection", "generated_from_trainer", "base_model:AmineAllo/MT-ancient-spaceship-83", "base_model:finetune:AmineAllo/MT-ancient-spaceship-83", "endpoints_compatible", "region:us" ]
object-detection
"2023-10-26T22:06:47"
--- base_model: toobiza/MT-ancient-spaceship-83 tags: - generated_from_trainer model-index: - name: margin-element-detector-fm-dutiful-morning-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # margin-element-detector-fm-dutiful-morning-4 This model is a fine-tuned version of [toobiza/MT-ancient-spaceship-83](https://huggingface.co/toobiza/MT-ancient-spaceship-83) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 1.3630 - eval_loss_ce: 0.0000 - eval_loss_bbox: 0.0480 - eval_cardinality_error: 6.4700 - eval_giou: 43.8478 - eval_runtime: 7.9249 - eval_samples_per_second: 12.618 - eval_steps_per_second: 3.155 - epoch: 15.8 - step: 3950 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Framework versions - Transformers 4.33.2 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.13.3
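Given the `object-detection` pipeline tag, the standard transformers pipeline should apply; a hedged sketch with a placeholder image path:

```python
from transformers import pipeline

detector = pipeline("object-detection", model="AmineAllo/margin-element-detector-fm-dutiful-morning-4")
print(detector("page.png"))  # placeholder path; returns bounding boxes with labels and scores
```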
John6666/titania-mix-realistic-pony-illustrious-illustriousv10-sdxl
John6666
"2024-12-23T06:51:09"
152
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "girls", "cosplay", "boobs", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-11-24T05:27:17"
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - realistic - photorealistic - girls - cosplay - boobs - illustrious base_model: OnomaAIResearch/Illustrious-xl-early-release-v0 --- Original model is [here](https://civitai.com/models/349587/titaniamix-realistic-pony-realistic-illustrious-sd15?modelVersionId=1091028). This model created by [XXXNOAHXXX](https://civitai.com/user/XXXNOAHXXX).
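Given the `diffusers:StableDiffusionXLPipeline` tag, loading should follow the standard SDXL pattern; a minimal sketch (the prompt and settings are illustrative, not the author's recommendations):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/titania-mix-realistic-pony-illustrious-illustriousv10-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("photorealistic portrait, studio lighting", num_inference_steps=28).images[0]
image.save("sample.png")
```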
stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
stefan-it
"2023-10-17T23:20:02"
5
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "nl", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
"2023-10-14T11:08:13"
--- language: nl license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Professoren der Geneeskun dige Faculteit te Groningen alsook van de HH , Doctoren en Chirurgijns van Groningen , Friesland , Noordholland , Overijssel , Gelderland , Drenthe , in welke Provinciën dit Elixir als Medicament voor Mond en Tanden reeds jaren bakend is . --- # Fine-tuned Flair Model on Dutch ICDAR-Europeana NER Dataset This Flair model was fine-tuned on the [Dutch ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar) NER Dataset using hmBERT as backbone LM. The ICDAR-Europeana NER Dataset is a preprocessed variant of the [Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French. The following NEs were annotated: `PER`, `LOC` and `ORG`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr5e-05 | [0.8191][1] | [0.8086][2] | [0.8237][3] | [0.8318][4] | [0.8235][5] | 82.13 ± 0.76 | | bs8-e10-lr3e-05 | [0.8056][6] | [0.8183][7] | [0.8241][8] | [0.8431][9] | [0.8155][10] | 82.13 ± 1.24 | | bs4-e10-lr5e-05 | [0.8055][11] | [0.822][12] | [0.8243][13] | [0.8093][14] | [0.8144][15] | 81.51 ± 0.72 | | bs4-e10-lr3e-05 | [0.8039][16] | [0.8122][17] | [0.8073][18] | [0.8246][19] | [0.8132][20] | 81.22 ± 0.7 | [1]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
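Flair sequence taggers load directly from the Hub by repo id; a minimal sketch using a shortened version of the card's own widget sentence:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load(
    "stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4"
)

sentence = Sentence("Professoren der Geneeskundige Faculteit te Groningen alsook van de HH Doctoren en Chirurgijns van Groningen")
tagger.predict(sentence)
for span in sentence.get_spans("ner"):  # PER / LOC / ORG spans
    print(span)
```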
peymansyh/distilhubert-finetuned-gtzan
peymansyh
"2023-08-21T17:50:22"
159
0
transformers
[ "transformers", "pytorch", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
"2023-08-11T14:09:10"
--- license: apache-2.0 base_model: ntu-spml/distilhubert tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan-88 results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.87 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan-88 This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.6139 - Accuracy: 0.87 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0172 | 1.0 | 112 | 1.8314 | 0.37 | | 1.5433 | 2.0 | 225 | 1.2575 | 0.5 | | 1.1517 | 3.0 | 337 | 0.9577 | 0.7 | | 0.904 | 4.0 | 450 | 0.7582 | 0.77 | | 0.4788 | 5.0 | 562 | 0.7504 | 0.79 | | 0.3843 | 6.0 | 675 | 0.6265 | 0.79 | | 0.3683 | 7.0 | 787 | 0.6683 | 0.8 | | 0.2278 | 8.0 | 900 | 0.8167 | 0.77 | | 0.4534 | 9.0 | 1012 | 0.6023 | 0.83 | | 0.2357 | 10.0 | 1125 | 0.6185 | 0.83 | | 0.3674 | 11.0 | 1237 | 0.6079 | 0.86 | | 0.148 | 11.95 | 1344 | 0.6139 | 0.87 | ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
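For inference, the standard transformers audio-classification pipeline should work out of the box; a hedged sketch (the audio path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="peymansyh/distilhubert-finetuned-gtzan")
print(classifier("some_track.wav"))  # placeholder path; returns GTZAN genre labels with scores
```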
AGENTDARS/Reviewer-14B
AGENTDARS
"2025-02-24T21:52:21"
0
0
peft
[ "peft", "safetensors", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "region:us" ]
null
"2025-02-24T20:26:15"
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
library_name: peft
---

# Model Card for Reviewer-14B

## Model Details

### Model Description

Reviewer-14B is a fine-tuned version of [**DeepSeek-R1-Distill-Qwen-14B**](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B), optimized for selecting the best patch among multiple patches generated by our DARS agent while solving software engineering problems.

### Model Sources

- **Repository:** [DARS-14B Repository](https://github.com/darsagent/DARS-Agent)
- **Paper:** ["DARS: Dynamic Action Re-Sampling to Enhance Coding Agent Performance by Adaptive Tree Traversal"](https://drive.google.com/file/d/1DMAZ-fkirC8uKl8819cOq9J3BQ4E7GXR/view?usp=drive_link)

## How to Get Started with the Model

We use vLLM to deploy and run inference with the model. Please follow the tutorial [here](https://docs.vllm.ai/en/latest/features/lora.html) to use our LoRA weights with vLLM.

## Training Details

### Dataset

We use our [code review dataset](https://huggingface.co/datasets/AGENTDARS/generated-critiques), where each instance contains several git patches with critiques for each patch. The model learns to generate critiques for multiple patches and select the best patch.

### Training Procedure

| Hyperparameter | Value |
|----------------------|--------------------------------------------|
| Training regime | BF16 mixed precision |
| Optimizer | AdamW with cosine learning rate scheduler |
| LoRA Configuration | rank=8, alpha=32, dropout=0.1 |
| Batch Size | 48 |
| Learning Rate | 5e-6 |
| Sequence Length | 14K tokens |
| Fine-tuning Epochs | 1 |
| Compute Environment | DeepSpeed for memory-efficient distributed training |
| Compute Infrastructure | 8x H100 |

We use the training script provided in the [Qwen-2.5 codebase](https://github.com/QwenLM/Qwen2.5-Coder).

## Results

Using this model as a reviewer with DARS trajectories generated with Claude 3.5 Sonnet V2 achieves 41.7% on SWE-Bench Lite.
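Following the vLLM LoRA feature the card links to, offline inference with the adapter might look roughly like the sketch below. The API calls match vLLM's documented LoRA support, but the adapter path and prompt are placeholders (vLLM typically expects a locally downloaded adapter directory):

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(model="deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", enable_lora=True)
lora = LoRARequest("reviewer", 1, "/path/to/Reviewer-14B")  # name, id, local adapter path (placeholder)

params = SamplingParams(temperature=0.0, max_tokens=512)
outputs = llm.generate(["Review the following patches and select the best one: ..."], params, lora_request=lora)
print(outputs[0].outputs[0].text)
```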
chickeninvader/ppo-LunarLander-v2
chickeninvader
"2023-08-20T06:13:27"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-08-20T06:12:57"
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: -207.98 +/- 53.49
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename follows the usual huggingface_sb3 convention; adjust it if the repo differs
checkpoint = load_from_hub(repo_id="chickeninvader/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
mradermacher/GritLM-8x7B-GGUF
mradermacher
"2024-12-08T23:29:44"
347
0
transformers
[ "transformers", "gguf", "mteb", "en", "dataset:GritLM/tulu2", "base_model:GritLM/GritLM-8x7B", "base_model:quantized:GritLM/GritLM-8x7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-03-17T08:22:56"
--- base_model: GritLM/GritLM-8x7B datasets: - GritLM/tulu2 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mteb --- ## About static quants of https://huggingface.co/GritLM/GritLM-8x7B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/GritLM-8x7B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/GritLM-8x7B-GGUF/resolve/main/GritLM-8x7B.Q2_K.gguf) | Q2_K | 17.6 | | | [GGUF](https://huggingface.co/mradermacher/GritLM-8x7B-GGUF/resolve/main/GritLM-8x7B.IQ3_XS.gguf) | IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/GritLM-8x7B-GGUF/resolve/main/GritLM-8x7B.IQ3_S.gguf) | IQ3_S | 20.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/GritLM-8x7B-GGUF/resolve/main/GritLM-8x7B.Q3_K_S.gguf) | Q3_K_S | 20.7 | | | [GGUF](https://huggingface.co/mradermacher/GritLM-8x7B-GGUF/resolve/main/GritLM-8x7B.IQ3_M.gguf) | IQ3_M | 21.7 | | | [GGUF](https://huggingface.co/mradermacher/GritLM-8x7B-GGUF/resolve/main/GritLM-8x7B.Q3_K_M.gguf) | Q3_K_M | 22.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/GritLM-8x7B-GGUF/resolve/main/GritLM-8x7B.Q3_K_L.gguf) | Q3_K_L | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/GritLM-8x7B-GGUF/resolve/main/GritLM-8x7B.IQ4_XS.gguf) | IQ4_XS | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/GritLM-8x7B-GGUF/resolve/main/GritLM-8x7B.Q4_K_S.gguf) | Q4_K_S | 27.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GritLM-8x7B-GGUF/resolve/main/GritLM-8x7B.Q4_K_M.gguf) | Q4_K_M | 28.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GritLM-8x7B-GGUF/resolve/main/GritLM-8x7B.Q5_K_S.gguf) | Q5_K_S | 32.5 | | | [GGUF](https://huggingface.co/mradermacher/GritLM-8x7B-GGUF/resolve/main/GritLM-8x7B.Q5_K_M.gguf) | Q5_K_M | 33.5 | | | [GGUF](https://huggingface.co/mradermacher/GritLM-8x7B-GGUF/resolve/main/GritLM-8x7B.Q6_K.gguf) | Q6_K | 38.6 | very good quality | | [PART 1](https://huggingface.co/mradermacher/GritLM-8x7B-GGUF/resolve/main/GritLM-8x7B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/GritLM-8x7B-GGUF/resolve/main/GritLM-8x7B.Q8_0.gguf.part2of2) | Q8_0 | 49.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
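The Q8_0 quant above ships in two parts, and the linked READMEs describe concatenating them into a single file before use. A minimal, hedged sketch of that byte-level concatenation in Python (assuming both parts are in the working directory):

```python
import shutil

parts = [
    "GritLM-8x7B.Q8_0.gguf.part1of2",
    "GritLM-8x7B.Q8_0.gguf.part2of2",
]

with open("GritLM-8x7B.Q8_0.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)  # append each part byte-for-byte
```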
Alphatao/e0983c78-8d6e-4f4b-988f-5e0e63505dde
Alphatao
"2025-03-09T14:54:47"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/tinyllama-chat", "base_model:adapter:unsloth/tinyllama-chat", "license:apache-2.0", "region:us" ]
null
"2025-03-09T12:04:57"
--- library_name: peft license: apache-2.0 base_model: unsloth/tinyllama-chat tags: - axolotl - generated_from_trainer model-index: - name: e0983c78-8d6e-4f4b-988f-5e0e63505dde results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/tinyllama-chat bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 96c7cb877af8f653_train_data.json ds_type: json format: custom path: /workspace/input_data/96c7cb877af8f653_train_data.json type: field_input: plan field_instruction: goal field_output: revision format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null device_map: ? '' : 0,1,2,3,4,5,6,7 early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 100 eval_table_size: null flash_attention: true gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: false hub_model_id: Alphatao/e0983c78-8d6e-4f4b-988f-5e0e63505dde hub_repo: null hub_strategy: null hub_token: null learning_rate: 0.0002 load_best_model_at_end: true load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lora_target_modules: - q_proj - k_proj - v_proj - o_proj lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 3600 micro_batch_size: 4 mlflow_experiment_name: /tmp/96c7cb877af8f653_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 100 sequence_len: 2048 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ef008972-2079-4b14-830a-53e13b141355 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: ef008972-2079-4b14-830a-53e13b141355 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # e0983c78-8d6e-4f4b-988f-5e0e63505dde This model is a fine-tuned version of [unsloth/tinyllama-chat](https://huggingface.co/unsloth/tinyllama-chat) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0218 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 3570 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.581 | 0.0006 | 1 | 1.7218 | | 0.9692 | 0.0560 | 100 | 1.1257 | | 1.0027 | 0.1121 | 200 | 1.1011 | | 1.0308 | 0.1681 | 300 | 1.0850 | | 0.9372 | 0.2241 | 400 | 1.0760 | | 1.101 | 0.2802 | 500 | 1.0665 | | 0.9727 | 0.3362 | 600 | 1.0590 | | 0.9845 | 0.3922 | 700 | 1.0526 | | 0.9084 | 0.4483 | 800 | 1.0508 | | 1.0553 | 0.5043 | 900 | 1.0435 | | 0.867 | 0.5603 | 1000 | 1.0409 | | 0.92 | 0.6164 | 1100 | 1.0375 | | 0.903 | 0.6724 | 1200 | 1.0336 | | 0.9474 | 0.7284 | 1300 | 1.0282 | | 0.9148 | 0.7845 | 1400 | 1.0270 | | 0.9104 | 0.8405 | 1500 | 1.0201 | | 1.0434 | 0.8965 | 1600 | 1.0183 | | 1.1382 | 0.9526 | 1700 | 1.0140 | | 0.9501 | 1.0087 | 1800 | 1.0223 | | 0.7897 | 1.0647 | 1900 | 1.0218 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
thalllsssss/45a81f30-3d17-4b84-a45b-a2b51af00a14
thalllsssss
"2025-01-24T05:37:37"
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-7B-Instruct", "base_model:adapter:unsloth/Qwen2-7B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-24T04:34:00"
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 45a81f30-3d17-4b84-a45b-a2b51af00a14 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - be25ce38282aeb5a_train_data.json ds_type: json format: custom path: /workspace/input_data/be25ce38282aeb5a_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: thalllsssss/45a81f30-3d17-4b84-a45b-a2b51af00a14 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/be25ce38282aeb5a_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 14fba03c-c528-4737-ac1e-1f62f6edce20 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 14fba03c-c528-4737-ac1e-1f62f6edce20 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 45a81f30-3d17-4b84-a45b-a2b51af00a14 This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2583 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.1782 | 0.0067 | 200 | 1.2583 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
chunwoolee0/my_awesome_eli5_clm-model
chunwoolee0
"2023-07-09T15:06:15"
141
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-07-09T11:57:24"
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: my_awesome_eli5_clm-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_clm-model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.7493 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7059 | 1.0 | 1108 | 3.7527 | | 3.6588 | 2.0 | 2216 | 3.7516 | | 3.6291 | 3.0 | 3324 | 3.7493 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
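Since the card reports only cross-entropy loss, the corresponding perplexity follows directly as exp(loss); a one-line check:

```python
import math

eval_loss = 3.7493          # final validation loss from the table above
print(math.exp(eval_loss))  # ~42.5 perplexity
```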
multitude0099/llama-2-chat-7b-recipegen
multitude0099
"2024-04-11T14:23:24"
11
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-11T14:14:43"
--- license: apache-2.0 language: - en library_name: transformers pipeline_tag: text-generation ---
isharani/sportClassification
isharani
"2023-12-07T15:11:48"
0
0
keras
[ "keras", "tf-keras", "region:us" ]
null
"2023-11-24T09:51:57"
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | weight_decay | None | | clipnorm | None | | global_clipnorm | None | | clipvalue | None | | use_ema | False | | ema_momentum | 0.99 | | ema_overwrite_frequency | None | | jit_compile | True | | is_legacy_optimizer | False | | learning_rate | 9.999999747378752e-05 | | beta_1 | 0.9 | | beta_2 | 0.999 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | float32 | ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
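For reference, the optimizer described in the hyperparameter table above can be reconstructed in Keras roughly as follows (a sketch; 9.999999747378752e-05 is simply 1e-4 stored as float32):

```python
from tensorflow import keras

# Adam optimizer matching the hyperparameter table above
optimizer = keras.optimizers.Adam(
    learning_rate=1e-4,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
    amsgrad=False,
)
```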
Cayetano/ppo-LunarLander-v2
Cayetano
"2023-09-04T16:53:12"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-09-04T16:52:55"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 299.82 +/- 20.06 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming; check this repository's file list):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is assumed from the standard push_to_hub convention
checkpoint = load_from_hub(repo_id="Cayetano/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
KingNish/Reasoning-0.5b
KingNish
"2024-10-06T10:06:40"
170
28
transformers
[ "transformers", "pytorch", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "reasoning", "conversational", "en", "dataset:KingNish/reasoning-base-20k", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-10-05T16:29:14"
--- base_model: Qwen/Qwen2.5-0.5B-Instruct language: - en license: apache-2.0 datasets: - KingNish/reasoning-base-20k tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft - reasoning --- # Model Description This is the first iteration of this model. For testing purposes it was trained on just 10k rows, and it performed better than expected. Like o1, it first produces a reasoning trace and then generates a response based on it; the reasoning is produced as a separate turn, with no special tokens mixed into the response. Inference code is below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MAX_REASONING_TOKENS = 1024
MAX_RESPONSE_TOKENS = 512

model_name = "KingNish/Reasoning-0.5b"

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Which is greater 9.9 or 9.11 ??"
messages = [
    {"role": "user", "content": prompt}
]

# Generate reasoning
reasoning_template = tokenizer.apply_chat_template(messages, tokenize=False, add_reasoning_prompt=True)
reasoning_inputs = tokenizer(reasoning_template, return_tensors="pt").to(model.device)
reasoning_ids = model.generate(**reasoning_inputs, max_new_tokens=MAX_REASONING_TOKENS)
reasoning_output = tokenizer.decode(reasoning_ids[0, reasoning_inputs.input_ids.shape[1]:], skip_special_tokens=True)
# print("REASONING: " + reasoning_output)

# Generate answer
messages.append({"role": "reasoning", "content": reasoning_output})
response_template = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response_inputs = tokenizer(response_template, return_tensors="pt").to(model.device)
response_ids = model.generate(**response_inputs, max_new_tokens=MAX_RESPONSE_TOKENS)
response_output = tokenizer.decode(response_ids[0, response_inputs.input_ids.shape[1]:], skip_special_tokens=True)
print("ANSWER: " + response_output)
```

- **Trained by:** [Nishith Jain](https://huggingface.co/KingNish) - **License:** apache-2.0 - **Finetuned from model:** [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) - **Dataset used:** [KingNish/reasoning-base-20k](https://huggingface.co/datasets/KingNish/reasoning-base-20k) This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mort1k/dqn-SpaceInvadersNoFrameskip-v4
mort1k
"2023-07-13T14:09:53"
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-07-13T14:09:10"
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 762.50 +/- 250.23 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mort1k -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mort1k -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mort1k ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
fxmarty/20220911-h13m58s49_sst2_distilbert_quantization
fxmarty
"2022-09-11T15:55:26"
0
0
null
[ "tensorboard", "onnx", "distilbert", "text-classification", "dataset:glue", "region:us" ]
text-classification
"2022-09-11T15:52:09"
--- pipeline_tag: text-classification datasets: - glue metrics: - accuracy - total_time_in_seconds - samples_per_second - latency_in_seconds tags: - distilbert --- **task**: `text-classification` **Backend:** `sagemaker-training` **Backend args:** `{'instance_type': 'ml.m5.2xlarge', 'supported_instructions': 'avx512'}` **Number of evaluation samples:** `All dataset` Fixed parameters: * **dataset**: [{'path': 'glue', 'eval_split': 'validation', 'data_keys': {'primary': 'sentence'}, 'ref_keys': ['label'], 'name': 'sst2', 'calibration_split': 'train'}] * **name_or_path**: `distilbert-base-uncased-finetuned-sst-2-english` * **from_transformers**: `True` * **calibration**: * **method**: `percentile` * **num_calibration_samples**: `128` * **calibration_histogram_percentile**: `99.999` Benchmarked parameters: * **framework**: `onnxruntime`, `pytorch` * **quantization_approach**: `dynamic`, `static` * **operators_to_quantize**: `['Add', 'MatMul']`, `['Add']` * **node_exclusion**: `[]`, `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` * **per_channel**: `False`, `True` * **framework_args**: `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}`, `{}` * **reduce_range**: `True`, `False` * **apply_quantization**: `True`, `False` # Evaluation ## Non-time metrics | framework | quantization_approach | operators_to_quantize | node_exclusion | per_channel | framework_args | reduce_range | apply_quantization | | accuracy | | :-----------: | :-------------------: | :-------------------: | :------------------------------------------------------: | :---------: | :-----------------------------------------------------------------: | :----------: | :----------------: | :-: | :------: | | `onnxruntime` | `None` | `None` | `None` | `None` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `None` | `False` | \| | 0.911 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 0.898 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 0.893 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 0.490 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 0.901 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 0.898 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 0.893 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 0.490 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 0.901 | | `onnxruntime` | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | 
`{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 0.911 | | `onnxruntime` | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 0.911 | | `onnxruntime` | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 0.911 | | `onnxruntime` | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 0.911 | | `onnxruntime` | `dynamic` | `['Add']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 0.911 | | `onnxruntime` | `dynamic` | `['Add']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 0.911 | | `onnxruntime` | `dynamic` | `['Add']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 0.911 | | `onnxruntime` | `dynamic` | `['Add']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 0.911 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 0.899 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 0.899 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 0.491 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 0.908 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 0.899 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 0.899 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 0.499 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 0.900 | | `onnxruntime` | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 0.906 | | `onnxruntime` | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 0.906 | | `onnxruntime` | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | 
`True` | \| | 0.906 | | `onnxruntime` | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 0.906 | | `onnxruntime` | `static` | `['Add']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 0.901 | | `onnxruntime` | `static` | `['Add']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 0.901 | | `onnxruntime` | `static` | `['Add']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 0.901 | | `onnxruntime` | `static` | `['Add']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 0.901 | | `pytorch` | `None` | `None` | `None` | `None` | `{}` | `None` | `None` | \| | 0.911 | ## Time metrics Time benchmarks were run for 15 seconds per config. Below, time metrics for batch size = 1, input length = 32. | framework | quantization_approach | operators_to_quantize | node_exclusion | per_channel | framework_args | reduce_range | apply_quantization | | latency_mean (ms) | | throughput (/s) | | :-----------: | :-------------------: | :-------------------: | :------------------------------------------------------: | :---------: | :-----------------------------------------------------------------: | :----------: | :----------------: | :-: | :---------------: | :-: | :-------------: | | `onnxruntime` | `None` | `None` | `None` | `None` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `None` | `False` | \| | 14.50 | \| | 69.00 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 10.19 | \| | 98.13 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 10.66 | \| | 93.87 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 10.45 | \| | 95.67 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 10.72 | \| | 93.33 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 10.40 | \| | 96.20 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 10.16 | \| | 98.40 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 10.40 | \| | 96.20 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 10.86 | \| | 92.07 | | `onnxruntime` | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 
1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 14.43 | \| | 69.33 | | `onnxruntime` | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 14.68 | \| | 68.13 | | `onnxruntime` | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 14.40 | \| | 69.47 | | `onnxruntime` | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 14.79 | \| | 67.60 | | `onnxruntime` | `dynamic` | `['Add']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 14.80 | \| | 67.60 | | `onnxruntime` | `dynamic` | `['Add']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 14.13 | \| | 70.80 | | `onnxruntime` | `dynamic` | `['Add']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 14.54 | \| | 68.80 | | `onnxruntime` | `dynamic` | `['Add']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 14.60 | \| | 68.53 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 11.23 | \| | 89.13 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 11.18 | \| | 89.47 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 11.39 | \| | 87.87 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 11.31 | \| | 88.47 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 13.73 | \| | 72.87 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 14.42 | \| | 69.40 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 14.09 | \| | 71.00 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 13.78 | \| | 72.60 | | `onnxruntime` | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 16.11 | \| | 62.13 | | `onnxruntime` | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 15.97 | \| | 62.67 
| | `onnxruntime` | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 15.82 | \| | 63.27 | | `onnxruntime` | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 15.94 | \| | 62.73 | | `onnxruntime` | `static` | `['Add']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 19.03 | \| | 52.60 | | `onnxruntime` | `static` | `['Add']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 18.99 | \| | 52.67 | | `onnxruntime` | `static` | `['Add']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 18.93 | \| | 52.87 | | `onnxruntime` | `static` | `['Add']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 18.65 | \| | 53.67 | | `pytorch` | `None` | `None` | `None` | `None` | `{}` | `None` | `None` | \| | 31.28 | \| | 32.00 | Below, time metrics for batch size = 1, input length = 64. | framework | quantization_approach | operators_to_quantize | node_exclusion | per_channel | framework_args | reduce_range | apply_quantization | | latency_mean (ms) | | throughput (/s) | | :-----------: | :-------------------: | :-------------------: | :------------------------------------------------------: | :---------: | :-----------------------------------------------------------------: | :----------: | :----------------: | :-: | :---------------: | :-: | :-------------: | | `onnxruntime` | `None` | `None` | `None` | `None` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `None` | `False` | \| | 24.59 | \| | 40.67 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 18.67 | \| | 53.60 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 19.16 | \| | 52.20 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 18.97 | \| | 52.73 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 19.29 | \| | 51.87 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 19.13 | \| | 52.33 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 18.64 | \| | 53.67 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 19.01 | \| | 52.60 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 
'intra_op_num_threads': 4}` | `True` | `True` | \| | 18.96 | \| | 52.80 | | `onnxruntime` | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 24.63 | \| | 40.67 | | `onnxruntime` | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 25.28 | \| | 39.60 | | `onnxruntime` | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 24.75 | \| | 40.47 | | `onnxruntime` | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 24.97 | \| | 40.07 | | `onnxruntime` | `dynamic` | `['Add']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 25.16 | \| | 39.80 | | `onnxruntime` | `dynamic` | `['Add']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 24.49 | \| | 40.87 | | `onnxruntime` | `dynamic` | `['Add']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 24.88 | \| | 40.20 | | `onnxruntime` | `dynamic` | `['Add']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 25.17 | \| | 39.73 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 20.05 | \| | 49.93 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 20.76 | \| | 48.20 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 20.75 | \| | 48.20 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 20.23 | \| | 49.47 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 24.79 | \| | 40.40 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 25.17 | \| | 39.73 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 24.14 | \| | 41.47 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 25.27 | \| | 39.60 | | `onnxruntime` | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 27.97 | \| | 35.80 | 
| `onnxruntime` | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 27.43 | \| | 36.47 | | `onnxruntime` | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 28.17 | \| | 35.53 | | `onnxruntime` | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 28.16 | \| | 35.53 | | `onnxruntime` | `static` | `['Add']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 33.24 | \| | 30.13 | | `onnxruntime` | `static` | `['Add']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 32.46 | \| | 30.87 | | `onnxruntime` | `static` | `['Add']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 32.39 | \| | 30.93 | | `onnxruntime` | `static` | `['Add']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 32.75 | \| | 30.53 | | `pytorch` | `None` | `None` | `None` | `None` | `{}` | `None` | `None` | \| | 41.25 | \| | 24.27 | Below, time metrics for batch size = 1, input length = 128. | framework | quantization_approach | operators_to_quantize | node_exclusion | per_channel | framework_args | reduce_range | apply_quantization | | latency_mean (ms) | | throughput (/s) | | :-----------: | :-------------------: | :-------------------: | :------------------------------------------------------: | :---------: | :-----------------------------------------------------------------: | :----------: | :----------------: | :-: | :---------------: | :-: | :-------------: | | `onnxruntime` | `None` | `None` | `None` | `None` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `None` | `False` | \| | 46.51 | \| | 21.53 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 35.33 | \| | 28.33 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 35.92 | \| | 27.87 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 35.56 | \| | 28.13 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 36.32 | \| | 27.53 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 35.53 | \| | 28.20 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 35.96 | \| | 27.87 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `[]` | `True` | 
`{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 35.42 | \| | 28.27 | | `onnxruntime` | `dynamic` | `['Add', 'MatMul']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 36.06 | \| | 27.80 | | `onnxruntime` | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 47.40 | \| | 21.13 | | `onnxruntime` | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 47.14 | \| | 21.27 | | `onnxruntime` | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 47.46 | \| | 21.13 | | `onnxruntime` | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 47.26 | \| | 21.20 | | `onnxruntime` | `dynamic` | `['Add']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 47.48 | \| | 21.07 | | `onnxruntime` | `dynamic` | `['Add']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 47.08 | \| | 21.27 | | `onnxruntime` | `dynamic` | `['Add']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 47.02 | \| | 21.33 | | `onnxruntime` | `dynamic` | `['Add']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 47.05 | \| | 21.27 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 39.63 | \| | 25.27 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 39.52 | \| | 25.33 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 39.78 | \| | 25.20 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 40.01 | \| | 25.00 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 44.24 | \| | 22.67 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 44.55 | \| | 22.47 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 45.74 | \| | 21.87 | | `onnxruntime` | `static` | `['Add', 'MatMul']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 44.12 | \| | 22.67 | | 
`onnxruntime` | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 51.41 | \| | 19.47 | | `onnxruntime` | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 52.52 | \| | 19.07 | | `onnxruntime` | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 51.25 | \| | 19.53 | | `onnxruntime` | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 51.51 | \| | 19.47 | | `onnxruntime` | `static` | `['Add']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 59.37 | \| | 16.87 | | `onnxruntime` | `static` | `['Add']` | `[]` | `False` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 58.28 | \| | 17.20 | | `onnxruntime` | `static` | `['Add']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `False` | `True` | \| | 59.37 | \| | 16.87 | | `onnxruntime` | `static` | `['Add']` | `[]` | `True` | `{'opset': 13, 'optimization_level': 1, 'intra_op_num_threads': 4}` | `True` | `True` | \| | 58.28 | \| | 17.20 | | `pytorch` | `None` | `None` | `None` | `None` | `{}` | `None` | `None` | \| | 53.72 | \| | 18.67 |
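For context, a dynamic-quantization run like those benchmarked above can be reproduced roughly as follows with Optimum's ONNX Runtime integration. This is a sketch against a recent Optimum API, which has changed since this card was generated (the card's config uses the older `from_transformers: True` flag); operator selection and node exclusion are set on the quantization config, and `avx512` matches the `supported_instructions` noted above:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Export the fine-tuned model to ONNX, then apply dynamic int8 quantization
model = ORTModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english", export=True
)
quantizer = ORTQuantizer.from_pretrained(model)
qconfig = AutoQuantizationConfig.avx512(is_static=False, per_channel=False)
quantizer.quantize(save_dir="distilbert-sst2-quantized", quantization_config=qconfig)
```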
mradermacher/Code-290k-6.7B-Instruct-GGUF
mradermacher
"2024-11-12T13:53:53"
58
0
transformers
[ "transformers", "gguf", "code", "en", "dataset:ajibawa-2023/Code-290k-ShareGPT", "base_model:ajibawa-2023/Code-290k-6.7B-Instruct", "base_model:quantized:ajibawa-2023/Code-290k-6.7B-Instruct", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-11-10T01:32:36"
--- base_model: ajibawa-2023/Code-290k-6.7B-Instruct datasets: - ajibawa-2023/Code-290k-ShareGPT language: - en library_name: transformers license: other quantized_by: mradermacher tags: - code --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ajibawa-2023/Code-290k-6.7B-Instruct <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-GGUF/resolve/main/Code-290k-6.7B-Instruct.Q2_K.gguf) | Q2_K | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-GGUF/resolve/main/Code-290k-6.7B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-GGUF/resolve/main/Code-290k-6.7B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-GGUF/resolve/main/Code-290k-6.7B-Instruct.Q3_K_L.gguf) | Q3_K_L | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-GGUF/resolve/main/Code-290k-6.7B-Instruct.IQ4_XS.gguf) | IQ4_XS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-GGUF/resolve/main/Code-290k-6.7B-Instruct.Q4_0_4_4.gguf) | Q4_0_4_4 | 3.9 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-GGUF/resolve/main/Code-290k-6.7B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-GGUF/resolve/main/Code-290k-6.7B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-GGUF/resolve/main/Code-290k-6.7B-Instruct.Q5_K_S.gguf) | Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-GGUF/resolve/main/Code-290k-6.7B-Instruct.Q5_K_M.gguf) | Q5_K_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-GGUF/resolve/main/Code-290k-6.7B-Instruct.Q6_K.gguf) | Q6_K | 5.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-GGUF/resolve/main/Code-290k-6.7B-Instruct.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-GGUF/resolve/main/Code-290k-6.7B-Instruct.f16.gguf) | f16 | 13.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
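If you want a quick programmatic test of one of these quants, here is a minimal sketch with `llama-cpp-python` (the filename matches the Q4_K_M entry above; the prompt format is an assumption, so check the base model's template):

```python
from llama_cpp import Llama

# Load a downloaded GGUF file and run a short completion
llm = Llama(model_path="Code-290k-6.7B-Instruct.Q4_K_M.gguf", n_ctx=4096)
out = llm(
    "### Instruction:\nWrite a Python function that reverses a string.\n### Response:\n",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```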
amuno5/gwc_training
amuno5
"2024-02-11T20:59:19"
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2024-02-10T17:27:36"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -1005.08 +/- 142.00 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming; check this repository's file list):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is assumed from the standard push_to_hub convention
checkpoint = load_from_hub(repo_id="amuno5/gwc_training", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
tdopierre/ProtAugment-ParaphraseGenerator
tdopierre
"2021-07-07T14:15:07"
4
5
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "Paraphase Generation", "Data Augmentation", "en", "dataset:Quora", "dataset:MSR", "dataset:Google-PAWS", "arxiv:2105.12995", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-03-02T23:29:05"
--- language: "en" tags: - Paraphase Generation - Data Augmentation datasets: - Quora - MSR - Google-PAWS --- [![acl](http://img.shields.io/badge/ACL-2021-f31f32)](https://arxiv.org/abs/2105.12995) This model is used to generate paraphrases. It has been trained on a mix of 3 different paraphrase detection datasets: MSR, Quora, Google-PAWS. We use this model in our ACL'21 Paper ["PROTAUGMENT: Unsupervised diverse short-texts paraphrasing for intent detection meta-learning"](https://arxiv.org/abs/2105.12995) Jointly used with generation constraints, this model allows to generate diverse paraphrases. We use those paraphrases as a data augmentation technique to further boosts a classification model's generalization capability. Feel free to play with the [code](https://github.com/tdopierre/ProtAugment)! If you use this model, please consider citing our paper. ``` @article{Dopierre2021ProtAugmentUD, title={ProtAugment: Unsupervised diverse short-texts paraphrasing for intent detection meta-learning}, author={Thomas Dopierre and C. Gravier and Wilfried Logerais}, journal={ArXiv}, year={2021}, volume={abs/2105.12995} } ```
emylrahim/ppo-Huggy
emylrahim
"2022-12-22T05:58:33"
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
"2022-12-22T05:58:25"
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: emylrahim/ppo-Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀

Dataset Card for Hugging Face Hub Model Cards

This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the models, their performance, their intended uses, and more. This dataset is updated daily and includes publicly available models on the Hugging Face Hub.

This dataset is made available to help support users wanting to work with a large number of model cards from the Hub. We hope that this dataset will help support research in the area of model cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.

Dataset Details

Uses

There are a number of potential uses for this dataset including:

  • text mining to find common themes in model cards
  • analysis of the model card format/content
  • topic modelling of model cards
  • analysis of the model card metadata
  • training language models on model cards

Out-of-Scope Use

[More Information Needed]

Dataset Structure

This dataset has a single split.
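For example, the full dataset can be loaded with the `datasets` library (a sketch; the repo id is taken from this page, and the single split is assumed to be named "train"):

```python
from datasets import load_dataset

# One row per model card; printing the dataset shows columns and row count
ds = load_dataset("librarian-bots/model_cards_with_metadata", split="train")
print(ds)
```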

Dataset Creation

Curation Rationale

The dataset was created to assist people in working with model cards. In particular it was created to support research in the area of model cards and their use. It is possible to use the Hugging Face Hub API or client library to download model cards and this option may be preferable if you have a very specific use case or require a different format.
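As a sketch of that client-library route, a single model card can be fetched without downloading this dataset (the repo id here is an arbitrary example):

```python
from huggingface_hub import ModelCard

# Fetch one model card directly from the Hub
card = ModelCard.load("distilgpt2")
print(card.data.to_dict())  # parsed YAML metadata block
print(card.text[:300])      # start of the card body
```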

Source Data

The source data is README.md files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.

Data Collection and Processing

The data is downloaded using a CRON job on a daily basis.

Who are the source data producers?

The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository although this information can be gathered from the Hugging Face Hub API.

Annotations

There are no additional annotations in this dataset beyond the model card content.

Annotation process

N/A

Who are the annotators?

N/A

Personal and Sensitive Information

We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.

Bias, Risks, and Limitations

Model cards are created by the community and we do not have any control over the content of the model cards. We do not review the content of the model cards and we do not make any claims about the accuracy of the information in the model cards. Some model cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the model. As a result this dataset may contain examples of bias.

Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.

Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

Citation

No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.

Dataset Card Authors

@davanstrien

Dataset Card Contact

@davanstrien
