modelId: string
author: string
last_modified: timestamp[us, tz=UTC]
downloads: int64
likes: int64
library_name: string
tags: sequence
pipeline_tag: string
createdAt: timestamp[us, tz=UTC]
card: string
routhsrinivas/medgemma-4b-it-sft-lora-crc100k
routhsrinivas
2025-06-05T18:02:44Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/medgemma-4b-it", "base_model:finetune:google/medgemma-4b-it", "endpoints_compatible", "region:us" ]
null
2025-06-04T18:46:44Z
--- base_model: google/medgemma-4b-it library_name: transformers model_name: medgemma-4b-it-sft-lora-crc100k tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for medgemma-4b-it-sft-lora-crc100k This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="routhsrinivas/medgemma-4b-it-sft-lora-crc100k", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.1 - Transformers: 4.53.0.dev0 - Pytorch: 2.7.0+cu118 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
cam-1000/MNLP_M3_rag_model
cam-1000
2025-06-05T18:00:42Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "conversational", "base_model:cam-1000/MNLP_M3_mcqa_model", "base_model:finetune:cam-1000/MNLP_M3_mcqa_model", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T17:34:03Z
--- library_name: transformers base_model: cam-1000/MNLP_M3_mcqa_model tags: - generated_from_trainer model-index: - name: MNLP_M3_rag_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MNLP_M3_rag_model This model is a fine-tuned version of [cam-1000/MNLP_M3_mcqa_model](https://huggingface.co/cam-1000/MNLP_M3_mcqa_model) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8561 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.8571 | 0.0228 | 100 | 0.8123 | | 0.7711 | 0.0456 | 200 | 0.8165 | | 0.5905 | 0.0683 | 300 | 0.8561 | ### Framework versions - Transformers 4.52.3 - Pytorch 2.7.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
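The usage sections of this card are placeholders, so here is a minimal, hedged loading sketch. The repo id and the `qwen3` / `text-generation` / `conversational` tags come from this record; everything else is the standard `transformers` chat pattern, not the author's documented usage, and the prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cam-1000/MNLP_M3_rag_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The repo is tagged "conversational", so apply the chat template.
messages = [{"role": "user", "content": "What is retrieval-augmented generation?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```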
lmstudio-community/OpenThinker3-7B-GGUF
lmstudio-community
2025-06-05T17:59:49Z
0
2
null
[ "gguf", "llama-factory", "full", "generated_from_trainer", "text-generation", "dataset:open-thoughts/OpenThoughts3-1.2M", "arxiv:2506.04178", "base_model:open-thoughts/OpenThinker3-7B", "base_model:quantized:open-thoughts/OpenThinker3-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-06-05T17:54:36Z
--- quantized_by: bartowski pipeline_tag: text-generation base_model: open-thoughts/OpenThinker3-7B base_model_relation: quantized tags: - llama-factory - full - generated_from_trainer datasets: - open-thoughts/OpenThoughts3-1.2M license: apache-2.0 model-index: - name: OpenThinker3-7B results: [] --- ## 💫 Community Model> OpenThinker3 7B by Open-Thoughts *👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*. **Model creator:** [open-thoughts](https://huggingface.co/open-thoughts)<br> **Original model**: [OpenThinker3-7B](https://huggingface.co/open-thoughts/OpenThinker3-7B)<br> **GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b5596](https://github.com/ggerganov/llama.cpp/releases/tag/b5596)<br> ## Technical Details Supports a context length of 32k tokens. Trained on the new [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M) dataset, consisting of 850k math questions, 200k code questions, and 100k science questions. More details are available in their [paper](https://arxiv.org/abs/2506.04178) and [blog post](https://openthoughts.ai/blog/ot3). ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. ## Disclaimers LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, virus-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
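The card gives no load example; below is a hedged sketch using `llama-cpp-python`, a common way to run GGUF files from Python. The repo id and the 32k context figure come from this record; the quantization filename glob is an assumption — substitute whichever file you actually download.

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="lmstudio-community/OpenThinker3-7B-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical quant choice; pick the file you downloaded
    n_ctx=32768,              # the card states 32k context support
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Prove that the sum of two even numbers is even."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```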
7-cikgu-cctv-wiring-Viral-Videos/FULL.VIDEO.cikgu.cctv.wiring.Viral.Video.Tutorial.Official
7-cikgu-cctv-wiring-Viral-Videos
2025-06-05T17:59:00Z
0
0
null
[ "region:us" ]
null
2025-06-05T17:58:41Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
CarlOwOs/Qwen3-0.6B-Base-int2
CarlOwOs
2025-06-05T17:57:40Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "quantized", "optimum-quanto", "int2", "conversational", "base_model:Qwen/Qwen3-0.6B-Base", "base_model:finetune:Qwen/Qwen3-0.6B-Base", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T17:57:20Z
--- license: mit base_model: Qwen/Qwen3-0.6B-Base tags: - quantized - optimum-quanto - int2 - qwen3 library_name: transformers pipeline_tag: text-generation --- # Qwen3-0.6B-Base Quantized (INT2) This model is a quantized version of `Qwen/Qwen3-0.6B-Base` using [optimum-quanto](https://github.com/huggingface/optimum-quanto) with int2 weight quantization. ## Model Details - **Base Model**: Qwen/Qwen3-0.6B-Base - **Quantization**: int2 weights using optimum-quanto - **Library**: Transformers + Optimum-Quanto ## Usage You can load and use this quantized model directly: ```python from transformers import AutoTokenizer from optimum.quanto import QuantizedModelForCausalLM # Load tokenizer and model directly tokenizer = AutoTokenizer.from_pretrained("CarlOwOs/Qwen3-0.6B-Base-int2", trust_remote_code=True) model = QuantizedModelForCausalLM.from_pretrained("CarlOwOs/Qwen3-0.6B-Base-int2") # Generate text inputs = tokenizer("Hello, how are you?", return_tensors="pt") outputs = model.generate(**inputs, max_length=50, do_sample=True, temperature=0.7) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Alternative Loading Method ```python # If the direct method doesn't work, try this: import torch from transformers import AutoTokenizer from optimum.quanto import QuantizedModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("CarlOwOs/Qwen3-0.6B-Base-int2", trust_remote_code=True) model = QuantizedModelForCausalLM.from_pretrained("CarlOwOs/Qwen3-0.6B-Base-int2") # Use the model for inference inputs = tokenizer("What is the capital of France?", return_tensors="pt") with torch.no_grad(): outputs = model.generate( **inputs, max_length=100, do_sample=True, temperature=0.7, pad_token_id=tokenizer.eos_token_id ) generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True) print(generated_text) ``` ## Performance This quantized model provides significant memory savings compared to the original model. - **Inference Speed**: similar to the original model - **Quality**: maintains good performance for most tasks ## Technical Details - **Quantization Method**: optimum-quanto int2 weight quantization - **Base Model**: Qwen/Qwen3-0.6B-Base - **Precision**: int2 weights, float16 activations ## License Same as the base model license.
NiloofarMomeni/distilhubert-debiasing-age
NiloofarMomeni
2025-06-05T17:48:42Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "dataset:audiofolder", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-08T08:56:10Z
--- library_name: transformers license: apache-2.0 base_model: ntu-spml/distilhubert tags: - generated_from_trainer datasets: - audiofolder metrics: - accuracy model-index: - name: distilhubert-debiasing-age results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-debiasing-age This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the audiofolder dataset. It achieves the following results on the evaluation set: - Loss: 1.8820 - Accuracy: 0.6702 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.2915 | 1.0 | 117 | 1.4201 | 0.6064 | | 1.3727 | 2.0 | 234 | 1.7079 | 0.6064 | | 1.1821 | 3.0 | 351 | 1.6523 | 0.5851 | | 1.3401 | 4.0 | 468 | 1.4616 | 0.6383 | | 1.6345 | 5.0 | 585 | 1.5092 | 0.6170 | | 1.1763 | 6.0 | 702 | 1.6093 | 0.6277 | | 0.999 | 7.0 | 819 | 1.4447 | 0.6277 | | 0.9842 | 8.0 | 936 | 1.5173 | 0.6702 | | 0.9366 | 9.0 | 1053 | 1.8773 | 0.6809 | | 0.9529 | 10.0 | 1170 | 1.8331 | 0.6489 | | 1.2192 | 11.0 | 1287 | 2.0470 | 0.6702 | | 0.8482 | 12.0 | 1404 | 1.9989 | 0.6809 | | 0.9902 | 13.0 | 1521 | 2.3879 | 0.6383 | | 1.0078 | 14.0 | 1638 | 2.1982 | 0.6809 | | 0.9427 | 15.0 | 1755 | 1.9457 | 0.6596 | | 0.9801 | 16.0 | 1872 | 1.9722 | 0.6702 | | 0.9372 | 17.0 | 1989 | 1.9988 | 0.6596 | | 0.9671 | 18.0 | 2106 | 1.8085 | 0.7128 | | 0.9031 | 19.0 | 2223 | 1.8938 | 0.6702 | | 0.8846 | 20.0 | 2340 | 1.8820 | 0.6702 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.0
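The usage sections of this card are empty; here is a minimal inference sketch. It assumes the fine-tuned checkpoint exposes a standard audio-classification head (consistent with the accuracy metric and `audiofolder` dataset in the card); the audio path is a placeholder.

```python
from transformers import pipeline

# Hedged sketch: assumes an audio-classification head; "sample.wav" is hypothetical.
classifier = pipeline("audio-classification", model="NiloofarMomeni/distilhubert-debiasing-age")
for pred in classifier("sample.wav"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```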
bhavsarmohit/unigram-12k
bhavsarmohit
2025-06-05T17:48:08Z
0
0
null
[ "region:us" ]
null
2025-06-05T17:45:30Z
# unigram-12k Unigram Tokenizer ## Model Details - **Tokenizer Type**: Unigram - **Version**: 1.0.0 - **Vocabulary size**: 32000 - **Special tokens**: ['unk_token', 'cls_token', 'sep_token', 'pad_token', 'mask_token', 'additional_special_tokens'] - **Max length**: 512 - **Normalization**: NFKC + lowercase - **Byte fallback**: Enabled ## Training Data - **Files**: 1 - **Validation split**: 0.05 ## Usage ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bhavsarmohit/unigram-12k") ```
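A quick round-trip check (the repo id is taken from this record; the sentence is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bhavsarmohit/unigram-12k")

ids = tokenizer.encode("Unigram Tokenizers Split Text Probabilistically.")
print(ids)
# Per the card, normalization is NFKC + lowercase, so the decoded text comes back lowercased.
print(tokenizer.decode(ids, skip_special_tokens=True))
```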
zerofata/L3.3-GeneticLemonade-Final-v2-70B_4.65bpw-hb6-exl2
zerofata
2025-06-05T17:41:35Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "base_model:zerofata/L3.3-GeneticLemonade-Final-v2-70B", "base_model:quantized:zerofata/L3.3-GeneticLemonade-Final-v2-70B", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2025-06-05T00:35:56Z
--- library_name: transformers license: llama3 base_model: - zerofata/L3.3-GeneticLemonade-Final-v2-70B --- <!DOCTYPE html> <style> body { font-family: sans-serif; color: #f0f0f0; line-height: 1.6; margin: 0; padding: 0; background-color: #1a0f1a; } .lemonade-text { color: #ff3366; position: relative; z-index: 2; margin-left: 0.2em; text-shadow: 0 0 10px #ff3366; } /* Section styling */ .section-container { background-color: rgba(26, 15, 26, 0.7); margin-bottom: 30px; position: relative; overflow: hidden; border-bottom: 1px solid #ff3366; } .section-header { display: flex; align-items: center; background-color: rgba(255, 51, 102, 0.08); padding: 10px 20px; } .section-indicator { width: 8px; height: 20px; background-color: #ff3366; margin-right: 15px; } .section-title { font-family: 'Orbitron', sans-serif; color: #f0f0f0; font-size: 1.3rem; margin: 0; letter-spacing: 2px; text-transform: uppercase; font-weight: 500; } .section-content { padding: 20px; font-family: sans-serif; color: #f0f0f0; line-height: 1.6; } /* Title styling */ .title-container { background-color: #0a0a0a; position: relative; overflow: hidden; margin-bottom: 40px; border-left: 3px solid #ff3366; } .title-wrapper { position: relative; z-index: 2; padding: 25px 20px 30px 30px; font-family: 'Orbitron', sans-serif; } .title-main { color: #f0f0f0; font-size: 2.5rem; font-weight: 700; margin: 0; letter-spacing: 2px; display: inline-block; position: relative; text-transform: uppercase; } .title-prefix { position: relative; z-index: 2; } .title-subtitle { padding-left: 15px; margin-top: 5px; margin-left: 5px; } .subtitle-text { color: #cc0066; font-size: 1.2rem; font-family: 'Orbitron', sans-serif; font-weight: 300; letter-spacing: 3px; text-transform: uppercase; display: inline-block; } .glitchy-overlay { position: absolute; top: 0; left: 0; width: 100%; height: 100%; background-image: repeating-linear-gradient(0deg, rgba(0,0,0,0) 0, rgba(139, 0, 0, 0.1) 1px, rgba(0,0,0,0) 2px); z-index: 1; } /* Data box styling */ .data-box { background-color: rgba(0, 0, 0, 0.4); padding: 15px; border-left: 2px solid #ff3366; margin-bottom: 20px; } .data-row { display: flex; margin-bottom: 8px; } .data-arrow { color: #ff3366; width: 20px; display: inline-block; } .data-label { color: #cc0066; width: 80px; display: inline-block; } /* Subheading styling */ .subheading { color: #cc0066; font-size: 1.1rem; margin-top: 20px; margin-bottom: 15px; font-weight: 400; border-bottom: 1px dashed rgba(204, 0, 102, 0.4); display: inline-block; text-transform: uppercase; letter-spacing: 1px; font-family: 'Orbitron', sans-serif; } /* Links */ a { color: #cc0066; text-decoration: none; } a:hover { text-decoration: underline; color: #ff6600; } /* Container */ .container { max-width: 1200px; margin: 20px auto; padding: 40px 20px; background-color: #0a0a0a; background-image: linear-gradient(rgba(139, 0, 0, 0.12) 1px, transparent 1px), linear-gradient(90deg, rgba(139, 0, 0, 0.12) 1px, transparent 1px); background-size: 20px 20px; min-height: calc(100vh - 40px); border: 1px solid #ff3366; border-radius: 2px; } </style> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>GENETIC LEMONADE FINAL v2</title> <link href="https://fonts.googleapis.com/css2?family=Orbitron:wght@400;500;600;700&family=JetBrains+Mono:wght@100;300;400;700&display=swap" rel="stylesheet"> </head> <body> <div class="cyber-grid-bg"></div> <div class="container"> <div class="title-container"> <!-- Glitchy overlay --> <div 
class="glitchy-overlay"></div> <!-- Main title --> <div class="title-wrapper"> <h1 class="title-main"> <span class="title-prefix">GENETIC</span> <span class="lemonade-text">LEMONADE</span> <!-- Static text with glow --> </h1> <div class="title-subtitle"> <span class="subtitle-text">FINAL v2</span> </div> </div> </div> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65b19c6c638328850e12d38c/0Ka08CdFUIJtYctBeBATo.png) <div class="section-container"> <div class="section-header"> <div class="section-indicator"></div> <h2 class="section-title">01 // OVERVIEW</h2> </div> <div class="section-content"> <p>Wasn't intending to release another model (so soon at least), but I was testing out some new dataset ideas and thought this model came out pretty nice.</p> <p><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Final-70B">zerofata/GeneticLemonade-Final</a> SFT QLora finetune.</p> <p>This is an uncensored creative model intended to excel at character driven RP / ERP.</p> <p>This model is designed to provide longer, narrative heavy responses where characters are portrayed accurately and proactively.</p> <p>Compared to Unleashed v3, this model has significantly reduced positivity bias and arguably a nicer writing style. The tradeoff is it swipe heavy, making a few more logical errors and can be a bit too concise at times.</p> </div> </div> <div class="section-container"> <div class="section-header"> <div class="section-indicator"></div> <h2 class="section-title">02 // SILLYTAVERN SETTINGS</h2> </div> <div class="section-content"> <p>Play with these, they are not the 'best' settings just a stable baseline.</p> <h3 class="subheading">Recommended Samplers</h3> <div class="data-box"> <div class="data-row"> <span class="data-arrow">></span> <span class="data-label">Temp:</span> <span>0.9 - 1</span> </div> <div class="data-row"> <span class="data-arrow">></span> <span class="data-label">MinP:</span> <span>0.03 - 0.04</span> </div> <div class="data-row"> <span class="data-arrow">></span> <span class="data-label">TopP:</span> <span>0.9 - 1.0</span> </div> <div class="data-row"> <span class="data-arrow">></span> <span class="data-label">Dry:</span> <span>0.8, 1.75, 4</span> </div> </div> <h3 class="subheading">Instruct</h3> <div class="data-box"> <p style="margin: 0;">Llama-3-Instruct-Names but you will need to uncheck "System same as user".</p> </div> </div> </div> <div class="section-container"> <div class="section-header"> <div class="section-indicator"></div> <h2 class="section-title">03 // QUANTIZATIONS</h2> </div> <div class="section-content"> <div style="margin-bottom: 20px;"> <h3 class="subheading">GGUF</h3> <div class="data-box"> <div class="data-row"> <span style="color: #ff3366; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Final-v2-70B-i1-GGUF">iMatrix (mradermacher)</a> </div> </div> </div> <div> <h3 class="subheading">EXL2</h3> <div class="data-box"> <div class="data-row"> <span style="color: #ff3366; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Final-v2-70B_4bpw-hb6-exl2">4bpw</a> </div> <div class="data-row"> <span style="color: #ff3366; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Final-v2-70B_4.5bpw-hb6-exl2">4.5bpw</a> </div> <div class="data-row"> <span style="color: #ff3366; display: 
inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Final-v2-70B_4.65bpw-hb6-exl2">4.65bpw</a> </div> <div class="data-row"> <span style="color: #ff3366; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Final-v2-70B_6bpw-hb8-exl2">6bpw</a> </div> </div> </div> </div> </div> <div class="section-container"> <div class="section-header"> <div class="section-indicator"></div> <h2 class="section-title">04 // TRAINING PROCESS</h2> </div> <div class="section-content"> <p>This model was trained on a dataset of approximately 4.3 million tokens: 700 RP conversations, 2000 creative writing / instruct samples, and about 400 summaries. The bulk of this data has been made public.</p> <p>This model didn't take well to my existing DPO dataset, so it hasn't been used here.</p> </div> </div> </div> <h3 class="subheading">Axolotl configs</h3> <p>Not optimized for cost / performance efficiency, YMMV.</p> <h3>SFT 1*H200</h3> ```yml # ==================== # MODEL CONFIGURATION # ==================== base_model: zerofata/L3.3-GeneticLemonade-Unleashed-70B model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer special_tokens: pad_token: "<|finetune_right_pad_id|>" chat_template: llama3 # ==================== # DATASET CONFIGURATION # ==================== datasets: - path: ./dataset.jsonl type: chat_template split: train chat_template_strategy: tokenizer field_messages: messages message_property_mappings: role: role content: content roles: user: ["user"] assistant: ["assistant"] system: ["system"] test_datasets: - path: ./validate_dataset.jsonl type: chat_template split: train chat_template_strategy: tokenizer field_messages: messages message_property_mappings: role: role content: content roles: user: ["user"] assistant: ["assistant"] system: ["system"] dataset_prepared_path: train_on_inputs: false # Only train on assistant responses # ==================== # QLORA CONFIGURATION # ==================== adapter: qlora load_in_4bit: true lora_r: 64 lora_alpha: 128 lora_dropout: 0.1 lora_target_linear: true # lora_modules_to_save: # Uncomment only if you added NEW tokens # ==================== # TRAINING PARAMETERS # ==================== num_epochs: 2 micro_batch_size: 4 gradient_accumulation_steps: 2 learning_rate: 1.5e-5 optimizer: paged_adamw_8bit lr_scheduler: rex warmup_ratio: 0.05 weight_decay: 0.01 max_grad_norm: 1.0 # ==================== # SEQUENCE & PACKING # ==================== sequence_len: 8192 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true # ==================== # HARDWARE OPTIMIZATIONS # ==================== bf16: auto flash_attention: true gradient_checkpointing: true # ==================== # EVALUATION & CHECKPOINTING # ==================== evaluation_strategy: steps eval_steps: 5 save_strategy: steps save_steps: 5 save_total_limit: 5 # Keep best + last few checkpoints load_best_model_at_end: true metric_for_best_model: eval_loss greater_is_better: false early_stopping_patience: 5 # ==================== # LOGGING & OUTPUT # ==================== output_dir: ./output_model logging_steps: 2 save_safetensors: true # ==================== # WANDB TRACKING # ==================== wandb_project: project_name # wandb_entity: your_entity # wandb_name: your_run_name ``` </body> </html>
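For running this quant behind an OpenAI-compatible local server instead of SillyTavern, the recommended samplers above map onto a request like the sketch below. The endpoint URL and `min_p` support are backend-dependent assumptions, and DRY is omitted because few servers expose it under a standard name.

```python
import requests

payload = {
    "messages": [{"role": "user", "content": "Continue the scene in character."}],
    "temperature": 0.95,  # recommended 0.9 - 1
    "min_p": 0.035,       # recommended 0.03 - 0.04; non-standard extension some backends accept
    "top_p": 1.0,         # recommended 0.9 - 1.0
    "max_tokens": 512,
}
# Hypothetical local endpoint; adjust host/port for your server.
r = requests.post("http://localhost:5000/v1/chat/completions", json=payload)
print(r.json()["choices"][0]["message"]["content"])
```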
zerofata/L3.3-GeneticLemonade-Final-v2-70B_4.5bpw-hb6-exl2
zerofata
2025-06-05T17:40:58Z
2
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "base_model:zerofata/L3.3-GeneticLemonade-Final-v2-70B", "base_model:quantized:zerofata/L3.3-GeneticLemonade-Final-v2-70B", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2025-06-02T04:18:38Z
--- library_name: transformers license: llama3 base_model: - zerofata/L3.3-GeneticLemonade-Final-v2-70B --- <!DOCTYPE html> <style> body { font-family: sans-serif; color: #f0f0f0; line-height: 1.6; margin: 0; padding: 0; background-color: #1a0f1a; } .lemonade-text { color: #ff3366; position: relative; z-index: 2; margin-left: 0.2em; text-shadow: 0 0 10px #ff3366; } /* Section styling */ .section-container { background-color: rgba(26, 15, 26, 0.7); margin-bottom: 30px; position: relative; overflow: hidden; border-bottom: 1px solid #ff3366; } .section-header { display: flex; align-items: center; background-color: rgba(255, 51, 102, 0.08); padding: 10px 20px; } .section-indicator { width: 8px; height: 20px; background-color: #ff3366; margin-right: 15px; } .section-title { font-family: 'Orbitron', sans-serif; color: #f0f0f0; font-size: 1.3rem; margin: 0; letter-spacing: 2px; text-transform: uppercase; font-weight: 500; } .section-content { padding: 20px; font-family: sans-serif; color: #f0f0f0; line-height: 1.6; } /* Title styling */ .title-container { background-color: #0a0a0a; position: relative; overflow: hidden; margin-bottom: 40px; border-left: 3px solid #ff3366; } .title-wrapper { position: relative; z-index: 2; padding: 25px 20px 30px 30px; font-family: 'Orbitron', sans-serif; } .title-main { color: #f0f0f0; font-size: 2.5rem; font-weight: 700; margin: 0; letter-spacing: 2px; display: inline-block; position: relative; text-transform: uppercase; } .title-prefix { position: relative; z-index: 2; } .title-subtitle { padding-left: 15px; margin-top: 5px; margin-left: 5px; } .subtitle-text { color: #cc0066; font-size: 1.2rem; font-family: 'Orbitron', sans-serif; font-weight: 300; letter-spacing: 3px; text-transform: uppercase; display: inline-block; } .glitchy-overlay { position: absolute; top: 0; left: 0; width: 100%; height: 100%; background-image: repeating-linear-gradient(0deg, rgba(0,0,0,0) 0, rgba(139, 0, 0, 0.1) 1px, rgba(0,0,0,0) 2px); z-index: 1; } /* Data box styling */ .data-box { background-color: rgba(0, 0, 0, 0.4); padding: 15px; border-left: 2px solid #ff3366; margin-bottom: 20px; } .data-row { display: flex; margin-bottom: 8px; } .data-arrow { color: #ff3366; width: 20px; display: inline-block; } .data-label { color: #cc0066; width: 80px; display: inline-block; } /* Subheading styling */ .subheading { color: #cc0066; font-size: 1.1rem; margin-top: 20px; margin-bottom: 15px; font-weight: 400; border-bottom: 1px dashed rgba(204, 0, 102, 0.4); display: inline-block; text-transform: uppercase; letter-spacing: 1px; font-family: 'Orbitron', sans-serif; } /* Links */ a { color: #cc0066; text-decoration: none; } a:hover { text-decoration: underline; color: #ff6600; } /* Container */ .container { max-width: 1200px; margin: 20px auto; padding: 40px 20px; background-color: #0a0a0a; background-image: linear-gradient(rgba(139, 0, 0, 0.12) 1px, transparent 1px), linear-gradient(90deg, rgba(139, 0, 0, 0.12) 1px, transparent 1px); background-size: 20px 20px; min-height: calc(100vh - 40px); border: 1px solid #ff3366; border-radius: 2px; } </style> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>GENETIC LEMONADE FINAL v2</title> <link href="https://fonts.googleapis.com/css2?family=Orbitron:wght@400;500;600;700&family=JetBrains+Mono:wght@100;300;400;700&display=swap" rel="stylesheet"> </head> <body> <div class="cyber-grid-bg"></div> <div class="container"> <div class="title-container"> <!-- Glitchy overlay --> <div 
class="glitchy-overlay"></div> <!-- Main title --> <div class="title-wrapper"> <h1 class="title-main"> <span class="title-prefix">GENETIC</span> <span class="lemonade-text">LEMONADE</span> <!-- Static text with glow --> </h1> <div class="title-subtitle"> <span class="subtitle-text">FINAL v2</span> </div> </div> </div> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65b19c6c638328850e12d38c/0Ka08CdFUIJtYctBeBATo.png) <div class="section-container"> <div class="section-header"> <div class="section-indicator"></div> <h2 class="section-title">01 // OVERVIEW</h2> </div> <div class="section-content"> <p>Wasn't intending to release another model (so soon at least), but I was testing out some new dataset ideas and thought this model came out pretty nice.</p> <p><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Final-70B">zerofata/GeneticLemonade-Final</a> SFT QLora finetune.</p> <p>This is an uncensored creative model intended to excel at character driven RP / ERP.</p> <p>This model is designed to provide longer, narrative heavy responses where characters are portrayed accurately and proactively.</p> <p>Compared to Unleashed v3, this model has significantly reduced positivity bias and arguably a nicer writing style. The tradeoff is it swipe heavy, making a few more logical errors and can be a bit too concise at times.</p> </div> </div> <div class="section-container"> <div class="section-header"> <div class="section-indicator"></div> <h2 class="section-title">02 // SILLYTAVERN SETTINGS</h2> </div> <div class="section-content"> <p>Play with these, they are not the 'best' settings just a stable baseline.</p> <h3 class="subheading">Recommended Samplers</h3> <div class="data-box"> <div class="data-row"> <span class="data-arrow">></span> <span class="data-label">Temp:</span> <span>0.9 - 1</span> </div> <div class="data-row"> <span class="data-arrow">></span> <span class="data-label">MinP:</span> <span>0.03 - 0.04</span> </div> <div class="data-row"> <span class="data-arrow">></span> <span class="data-label">TopP:</span> <span>0.9 - 1.0</span> </div> <div class="data-row"> <span class="data-arrow">></span> <span class="data-label">Dry:</span> <span>0.8, 1.75, 4</span> </div> </div> <h3 class="subheading">Instruct</h3> <div class="data-box"> <p style="margin: 0;">Llama-3-Instruct-Names but you will need to uncheck "System same as user".</p> </div> </div> </div> <div class="section-container"> <div class="section-header"> <div class="section-indicator"></div> <h2 class="section-title">03 // QUANTIZATIONS</h2> </div> <div class="section-content"> <div style="margin-bottom: 20px;"> <h3 class="subheading">GGUF</h3> <div class="data-box"> <div class="data-row"> <span style="color: #ff3366; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Final-v2-70B-i1-GGUF">iMatrix (mradermacher)</a> </div> </div> </div> <div> <h3 class="subheading">EXL2</h3> <div class="data-box"> <div class="data-row"> <span style="color: #ff3366; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Final-v2-70B_4bpw-hb6-exl2">4bpw</a> </div> <div class="data-row"> <span style="color: #ff3366; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Final-v2-70B_4.5bpw-hb6-exl2">4.5bpw</a> </div> <div class="data-row"> <span style="color: #ff3366; display: 
inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Final-v2-70B_4.65bpw-hb6-exl2">4.65bpw</a> </div> <div class="data-row"> <span style="color: #ff3366; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Final-v2-70B_6bpw-hb8-exl2">6bpw</a> </div> </div> </div> </div> </div> <div class="section-container"> <div class="section-header"> <div class="section-indicator"></div> <h2 class="section-title">04 // TRAINING PROCESS</h2> </div> <div class="section-content"> <p>This model was trained on a dataset of approximately 4.3 million tokens: 700 RP conversations, 2000 creative writing / instruct samples, and about 400 summaries. The bulk of this data has been made public.</p> <p>This model didn't take well to my existing DPO dataset, so it hasn't been used here.</p> </div> </div> </div> <h3 class="subheading">Axolotl configs</h3> <p>Not optimized for cost / performance efficiency, YMMV.</p> <h3>SFT 1*H200</h3> ```yml # ==================== # MODEL CONFIGURATION # ==================== base_model: zerofata/L3.3-GeneticLemonade-Unleashed-70B model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer special_tokens: pad_token: "<|finetune_right_pad_id|>" chat_template: llama3 # ==================== # DATASET CONFIGURATION # ==================== datasets: - path: ./dataset.jsonl type: chat_template split: train chat_template_strategy: tokenizer field_messages: messages message_property_mappings: role: role content: content roles: user: ["user"] assistant: ["assistant"] system: ["system"] test_datasets: - path: ./validate_dataset.jsonl type: chat_template split: train chat_template_strategy: tokenizer field_messages: messages message_property_mappings: role: role content: content roles: user: ["user"] assistant: ["assistant"] system: ["system"] dataset_prepared_path: train_on_inputs: false # Only train on assistant responses # ==================== # QLORA CONFIGURATION # ==================== adapter: qlora load_in_4bit: true lora_r: 64 lora_alpha: 128 lora_dropout: 0.1 lora_target_linear: true # lora_modules_to_save: # Uncomment only if you added NEW tokens # ==================== # TRAINING PARAMETERS # ==================== num_epochs: 2 micro_batch_size: 4 gradient_accumulation_steps: 2 learning_rate: 1.5e-5 optimizer: paged_adamw_8bit lr_scheduler: rex warmup_ratio: 0.05 weight_decay: 0.01 max_grad_norm: 1.0 # ==================== # SEQUENCE & PACKING # ==================== sequence_len: 8192 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true # ==================== # HARDWARE OPTIMIZATIONS # ==================== bf16: auto flash_attention: true gradient_checkpointing: true # ==================== # EVALUATION & CHECKPOINTING # ==================== evaluation_strategy: steps eval_steps: 5 save_strategy: steps save_steps: 5 save_total_limit: 5 # Keep best + last few checkpoints load_best_model_at_end: true metric_for_best_model: eval_loss greater_is_better: false early_stopping_patience: 5 # ==================== # LOGGING & OUTPUT # ==================== output_dir: ./output_model logging_steps: 2 save_safetensors: true # ==================== # WANDB TRACKING # ==================== wandb_project: project_name # wandb_entity: your_entity # wandb_name: your_run_name ``` </body> </html>
zerofata/L3.3-GeneticLemonade-Final-v2-70B
zerofata
2025-06-05T17:40:23Z
50
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:zerofata/Roleplay-Anime-Characters", "dataset:zerofata/Instruct-Anime-CreativeWriting", "dataset:zerofata/Summaries-Anime-FandomPages", "base_model:zerofata/L3.3-GeneticLemonade-Final-70B", "base_model:finetune:zerofata/L3.3-GeneticLemonade-Final-70B", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-02T01:47:46Z
--- library_name: transformers license: llama3 datasets: - zerofata/Roleplay-Anime-Characters - zerofata/Instruct-Anime-CreativeWriting - zerofata/Summaries-Anime-FandomPages base_model: - zerofata/L3.3-GeneticLemonade-Final-70B --- <!DOCTYPE html> <style> body { font-family: sans-serif; color: #f0f0f0; line-height: 1.6; margin: 0; padding: 0; background-color: #1a0f1a; } .lemonade-text { color: #ff3366; position: relative; z-index: 2; margin-left: 0.2em; text-shadow: 0 0 10px #ff3366; } /* Section styling */ .section-container { background-color: rgba(26, 15, 26, 0.7); margin-bottom: 30px; position: relative; overflow: hidden; border-bottom: 1px solid #ff3366; } .section-header { display: flex; align-items: center; background-color: rgba(255, 51, 102, 0.08); padding: 10px 20px; } .section-indicator { width: 8px; height: 20px; background-color: #ff3366; margin-right: 15px; } .section-title { font-family: 'Orbitron', sans-serif; color: #f0f0f0; font-size: 1.3rem; margin: 0; letter-spacing: 2px; text-transform: uppercase; font-weight: 500; } .section-content { padding: 20px; font-family: sans-serif; color: #f0f0f0; line-height: 1.6; } /* Title styling */ .title-container { background-color: #0a0a0a; position: relative; overflow: hidden; margin-bottom: 40px; border-left: 3px solid #ff3366; } .title-wrapper { position: relative; z-index: 2; padding: 25px 20px 30px 30px; font-family: 'Orbitron', sans-serif; } .title-main { color: #f0f0f0; font-size: 2.5rem; font-weight: 700; margin: 0; letter-spacing: 2px; display: inline-block; position: relative; text-transform: uppercase; } .title-prefix { position: relative; z-index: 2; } .title-subtitle { padding-left: 15px; margin-top: 5px; margin-left: 5px; } .subtitle-text { color: #cc0066; font-size: 1.2rem; font-family: 'Orbitron', sans-serif; font-weight: 300; letter-spacing: 3px; text-transform: uppercase; display: inline-block; } .glitchy-overlay { position: absolute; top: 0; left: 0; width: 100%; height: 100%; background-image: repeating-linear-gradient(0deg, rgba(0,0,0,0) 0, rgba(139, 0, 0, 0.1) 1px, rgba(0,0,0,0) 2px); z-index: 1; } /* Data box styling */ .data-box { background-color: rgba(0, 0, 0, 0.4); padding: 15px; border-left: 2px solid #ff3366; margin-bottom: 20px; } .data-row { display: flex; margin-bottom: 8px; } .data-arrow { color: #ff3366; width: 20px; display: inline-block; } .data-label { color: #cc0066; width: 80px; display: inline-block; } /* Subheading styling */ .subheading { color: #cc0066; font-size: 1.1rem; margin-top: 20px; margin-bottom: 15px; font-weight: 400; border-bottom: 1px dashed rgba(204, 0, 102, 0.4); display: inline-block; text-transform: uppercase; letter-spacing: 1px; font-family: 'Orbitron', sans-serif; } /* Links */ a { color: #cc0066; text-decoration: none; } a:hover { text-decoration: underline; color: #ff6600; } /* Container */ .container { max-width: 1200px; margin: 20px auto; padding: 40px 20px; background-color: #0a0a0a; background-image: linear-gradient(rgba(139, 0, 0, 0.12) 1px, transparent 1px), linear-gradient(90deg, rgba(139, 0, 0, 0.12) 1px, transparent 1px); background-size: 20px 20px; min-height: calc(100vh - 40px); border: 1px solid #ff3366; border-radius: 2px; } </style> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>GENETIC LEMONADE FINAL v2</title> <link href="https://fonts.googleapis.com/css2?family=Orbitron:wght@400;500;600;700&family=JetBrains+Mono:wght@100;300;400;700&display=swap" rel="stylesheet"> </head> 
<body> <div class="cyber-grid-bg"></div> <div class="container"> <div class="title-container"> <!-- Glitchy overlay --> <div class="glitchy-overlay"></div> <!-- Main title --> <div class="title-wrapper"> <h1 class="title-main"> <span class="title-prefix">GENETIC</span> <span class="lemonade-text">LEMONADE</span> <!-- Static text with glow --> </h1> <div class="title-subtitle"> <span class="subtitle-text">FINAL v2</span> </div> </div> </div> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65b19c6c638328850e12d38c/0Ka08CdFUIJtYctBeBATo.png) <div class="section-container"> <div class="section-header"> <div class="section-indicator"></div> <h2 class="section-title">01 // OVERVIEW</h2> </div> <div class="section-content"> <p>Wasn't intending to release another model (so soon at least), but I was testing out some new dataset ideas and thought this model came out pretty nice.</p> <p><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Final-70B">zerofata/GeneticLemonade-Final</a> SFT QLora finetune.</p> <p>This is an uncensored creative model intended to excel at character driven RP / ERP.</p> <p>This model is designed to provide longer, narrative heavy responses where characters are portrayed accurately and proactively.</p> <p>Compared to Unleashed v3, this model has significantly reduced positivity bias and arguably a nicer writing style. The tradeoff is it swipe heavy, making a few more logical errors and can be a bit too concise at times.</p> </div> </div> <div class="section-container"> <div class="section-header"> <div class="section-indicator"></div> <h2 class="section-title">02 // SILLYTAVERN SETTINGS</h2> </div> <div class="section-content"> <p>Play with these, they are not the 'best' settings just a stable baseline.</p> <h3 class="subheading">Recommended Samplers</h3> <div class="data-box"> <div class="data-row"> <span class="data-arrow">></span> <span class="data-label">Temp:</span> <span>0.9 - 1</span> </div> <div class="data-row"> <span class="data-arrow">></span> <span class="data-label">MinP:</span> <span>0.03 - 0.04</span> </div> <div class="data-row"> <span class="data-arrow">></span> <span class="data-label">TopP:</span> <span>0.9 - 1.0</span> </div> <div class="data-row"> <span class="data-arrow">></span> <span class="data-label">Dry:</span> <span>0.8, 1.75, 4</span> </div> </div> <h3 class="subheading">Instruct</h3> <div class="data-box"> <p style="margin: 0;">Llama-3-Instruct-Names but you will need to uncheck "System same as user".</p> </div> </div> </div> <div class="section-container"> <div class="section-header"> <div class="section-indicator"></div> <h2 class="section-title">03 // QUANTIZATIONS</h2> </div> <div class="section-content"> <div style="margin-bottom: 20px;"> <h3 class="subheading">GGUF</h3> <div class="data-box"> <div class="data-row"> <span style="color: #ff3366; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Final-v2-70B-i1-GGUF">iMatrix (mradermacher)</a> </div> </div> </div> <div> <h3 class="subheading">EXL2</h3> <div class="data-box"> <div class="data-row"> <span style="color: #ff3366; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Final-v2-70B_4bpw-hb6-exl2">4bpw</a> </div> <div class="data-row"> <span style="color: #ff3366; display: inline-block; margin-right: 10px;">> </span><a 
href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Final-v2-70B_4.5bpw-hb6-exl2">4.5bpw</a> </div> <div class="data-row"> <span style="color: #ff3366; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Final-v2-70B_4.65bpw-hb6-exl2">4.65bpw</a> </div> <div class="data-row"> <span style="color: #ff3366; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Final-v2-70B_6bpw-hb8-exl2">6bpw</a> </div> </div> </div> </div> </div> <div class="section-container"> <div class="section-header"> <div class="section-indicator"></div> <h2 class="section-title">04 // TRAINING PROCESS</h2> </div> <div class="section-content"> <p>This model was trained using a dataset of approx 4.3 million tokens, 700 RP conversations, 2000 creative writing / instruct samples and about 400 summaries. The bulk of this data has been made public.</p> <p>This model didn't take well to my existing DPO dataset, so it hasn't been used here.</p> </div> </div> </div> <h3 class="subheading">Axolotl configs</h3> <p>Not optimized for cost / performance efficiency, YMMV.</p> <h3>SFT 1*H200</h3> ```yml # ==================== # MODEL CONFIGURATION # ==================== base_model: zerofata/L3.3-GeneticLemonade-Unleashed-70B model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer special_tokens: pad_token: "<|finetune_right_pad_id|>" chat_template: llama3 # ==================== # DATASET CONFIGURATION # ==================== datasets: - path: ./dataset.jsonl type: chat_template split: train chat_template_strategy: tokenizer field_messages: messages message_property_mappings: role: role content: content roles: user: ["user"] assistant: ["assistant"] system: ["system"] test_datasets: - path: ./validate_dataset.jsonl type: chat_template split: train chat_template_strategy: tokenizer field_messages: messages message_property_mappings: role: role content: content roles: user: ["user"] assistant: ["assistant"] system: ["system"] dataset_prepared_path: train_on_inputs: false # Only train on assistant responses # ==================== # QLORA CONFIGURATION # ==================== adapter: qlora load_in_4bit: true lora_r: 64 lora_alpha: 128 lora_dropout: 0.1 lora_target_linear: true # lora_modules_to_save: # Uncomment only if you added NEW tokens # ==================== # TRAINING PARAMETERS # ==================== num_epochs: 2 micro_batch_size: 4 gradient_accumulation_steps: 2 learning_rate: 1.5e-5 optimizer: paged_adamw_8bit lr_scheduler: rex warmup_ratio: 0.05 weight_decay: 0.01 max_grad_norm: 1.0 # ==================== # SEQUENCE & PACKING # ==================== sequence_len: 8192 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true # ==================== # HARDWARE OPTIMIZATIONS # ==================== bf16: auto flash_attention: true gradient_checkpointing: true # ==================== # EVALUATION & CHECKPOINTING # ==================== evaluation_strategy: steps eval_steps: 5 save_strategy: steps save_steps: 5 save_total_limit: 5 # Keep best + last few checkpoints load_best_model_at_end: true metric_for_best_model: eval_loss greater_is_better: false early_stopping_patience: 5 # ==================== # LOGGING & OUTPUT # ==================== output_dir: ./output_model logging_steps: 2 save_safetensors: true # ==================== # WANDB TRACKING # ==================== wandb_project: project_name # wandb_entity: your_entity # wandb_name: 
your_run_name ``` </body> </html>
Cameronbarry/Cam
Cameronbarry
2025-06-05T17:33:20Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-05T17:33:20Z
--- license: apache-2.0 ---
nikolina-p/xlm-roberta-base-finetuned-panx-fr
nikolina-p
2025-06-05T17:20:21Z
1
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:google/xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-03-14T19:24:34Z
--- library_name: transformers license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: [] datasets: - google/xtreme --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [XTREME](https://huggingface.co/datasets/google/xtreme) dataset, specifically on the PAN-X subset for the following languages: - French (`PAN-X.fr`) It achieves the following results on the evaluation set: - Loss: 0.2753 - F1: 0.8356 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5595 | 1.0 | 191 | 0.3110 | 0.7974 | | 0.2644 | 2.0 | 382 | 0.2719 | 0.8260 | | 0.1769 | 3.0 | 573 | 0.2753 | 0.8356 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.6.0+cu124 - Datasets 3.4.0 - Tokenizers 0.21.0
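A minimal NER sketch for this checkpoint (the pipeline task matches this record's `token-classification` tag; the French sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="nikolina-p/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge subword pieces into whole entity spans
)
for entity in ner("Emmanuel Macron a prononcé un discours à Paris."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```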
CompassioninMachineLearning/pretrainedllama8bInstruct3kresearchpapers_plus4kalignment_lora2epochs
CompassioninMachineLearning
2025-06-05T17:20:13Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:CompassioninMachineLearning/pretrainedllama8bInstruct3kresearchpapers", "base_model:finetune:CompassioninMachineLearning/pretrainedllama8bInstruct3kresearchpapers", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T17:15:39Z
--- base_model: CompassioninMachineLearning/pretrainedllama8bInstruct3kresearchpapers tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** CompassioninMachineLearning - **License:** apache-2.0 - **Finetuned from model:** CompassioninMachineLearning/pretrainedllama8bInstruct3kresearchpapers This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
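The card stops at provenance; below is a hedged loading sketch using the standard `transformers` text-generation pipeline. The repo id and tags come from this record; the prompt is illustrative, not the authors' documented usage.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="CompassioninMachineLearning/pretrainedllama8bInstruct3kresearchpapers_plus4kalignment_lora2epochs",
    device_map="auto",
)
out = generator(
    [{"role": "user", "content": "In two sentences, what does fine-tuning on research papers add to a base model?"}],
    max_new_tokens=128,
    return_full_text=False,
)
print(out[0]["generated_text"])
```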
EdBergJr1/Qwen-3-32B-Medical-Reasoning
EdBergJr1
2025-06-05T17:17:43Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-05T17:17:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
futurehouse/ether0
futurehouse
2025-06-05T17:17:30Z
0
17
null
[ "safetensors", "mistral", "smiles", "chemistry", "reasoning", "text-generation", "conversational", "en", "dataset:futurehouse/ether0-benchmark", "base_model:mistralai/Mistral-Small-24B-Instruct-2501", "base_model:finetune:mistralai/Mistral-Small-24B-Instruct-2501", "license:apache-2.0", "region:us" ]
text-generation
2025-06-04T21:12:37Z
--- license: apache-2.0 language: - en base_model: - mistralai/Mistral-Small-24B-Instruct-2501 datasets: - futurehouse/ether0-benchmark pipeline_tag: text-generation tags: - smiles - chemistry - reasoning --- [![Dataset on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/dataset-on-hf-md-dark.svg)](https://huggingface.co/datasets/futurehouse/ether0-benchmark) ![ether0 logo](images/ether0_logo.svg) # ether0 ether0 is a 24B language model trained to reason in English and output molecular structures as SMILES. It was derived from Mistral-Small-24B-Instruct-2501 through fine-tuning and reinforcement learning. Ask questions in English; they may also include molecules specified as SMILES. The SMILES do not need to be canonical and may contain stereochemistry information. ether0 has limited support for IUPAC names. ## Usage This model is trained to reason in English and output a molecule. It is NOT a general-purpose chat model. It has been trained specifically for these tasks: - IUPAC name to SMILES - Molecular formula (Hill notation) to SMILES, optionally with constraints on functional groups - Modifying the solubility of given molecules (SMILES) by a specific LogS, optionally with constraints about scaffolds/groups/similarity - Matching pKa to molecules, proposing molecules with a pKa, or modifying molecules to adjust pKa - Matching scent/smell to molecules and modifying molecules to adjust scent - Matching human cell receptor binding + mode (e.g., agonist) to a molecule or modifying a molecule's binding effect. Trained on data [from EveBio](https://data.evebio.org/) - ADME properties (e.g., MDDK efflux ratio, LD50) - GHS classifications (as words, not codes, like "carcinogen"). For example, "modify this molecule to remove acute toxicity." - Quantitative LD50 in mg/kg - Proposing 1-step retrosynthesis from likely commercially available reagents - Predicting a reaction outcome - General natural language description of a specific molecule to that molecule (inverse molecule captioning) - Natural product elucidation (formula + organism to SMILES) - e.g., "A molecule with formula C6H12O6 was isolated from Homo sapiens, what could it be?" - Matching blood-brain barrier permeability (as a class) or modifying it For example, you can ask "Propose a molecule with a pKa of 9.2" or "Modify CCCCC(=O)O to increase its pKa by about 1 unit." You cannot ask it "What is the pKa of CCCCC(=O)O?" If you ask it questions that lie significantly beyond those tasks, it can fail. You can combine properties, although we haven't significantly benchmarked this. ## Benchmarks We tested ether0, along with some experts and frontier models, on [a benchmark we developed](https://huggingface.co/datasets/futurehouse/ether0-benchmark/). The benchmark is made from commonly used tasks - like reaction prediction in USPTO, molecular captioning from PubChem, or predicting GHS classification. The benchmark is different in two ways: all answers are a molecule, and we balanced it so that each task is 25 questions (a reasonable amount for frontier model evals). The tasks generally follow previously reported numbers - e.g., a reaction prediction accuracy of 80% here would be about the same on a withheld split of the USPTO-50k dataset. The results below are for the model weights released in this repo. This differs from the preprint, which reports pre-safety-mitigation benchmarks.
## Benchmarks

We tested ether0, along with some experts and frontier models, on [a benchmark we developed](https://huggingface.co/datasets/futurehouse/ether0-benchmark/). The benchmark is made from commonly used tasks - like reaction prediction in USPTO, molecular captioning from PubChem, or predicting GHS classification. The benchmark is different in two ways: all answers are a molecule, and we balanced it so that each task is 25 questions (a reasonable amount for frontier model evals). The tasks generally follow previously reported numbers - e.g., a reaction prediction accuracy of 80% here would be about the same on a withheld split of the USPTO-50k dataset. The results below are for the model weights released in this repo. This differs from the preprint, which reports pre-safety-mitigation benchmarks.

![ether0 benchmarking](images/benchmarks.png)

## Limitations

It does not know general synonyms, and it has poor textbook knowledge (e.g., it does not perform especially well on ChemBench). For best results, input molecules as SMILES: if you input molecules with their common names, the model may reason using the incorrect SMILES, resulting in poor results. For example, we have observed that the model often confuses lysine and glutamic acid if you ask questions using their common names, but it should correctly reason about their chemistry if you provide their structures as SMILES.

## Training details

We first trained Mistral-Small-24B-Instruct-2501 on mostly incorrect reasoning traces from DeepSeek R1 to elicit reasoning and to teach the new tokens/templates. Next, we ran independent rounds of specialists trained with GRPO and verifiable rewards, each on one of the above tasks. We then aggregated and filtered reasoning traces (correct answers with reasoning) from the specialists to again fine-tune Mistral-Small-24B-Instruct-2501. Then, we did GRPO over all tasks. This last model was then put through safety post-training.

![ether0 training info](images/training_info.png)

See our [preprint](https://paper.ether0.ai/) for details on the data and training process.

## Safety

We performed refusal post-training for compounds listed on OPCW schedules 1 and 2. We also post-trained ether0 to refuse questions about standard malicious topics like making explosives or poisons. As the model knows pharmacokinetics, it can modulate toxicity. However, the structures of toxic or narcotic compounds are generally known, and thus we do not consider this a safety risk. The model can provide no uplift on "tacit knowledge" tasks like purification, scale-up, or processing beyond a web search or a similarly sized language model.

## Citation

```bibtex
@article{narayanan2025training,
  title={Training a Scientific Reasoning Model for Chemistry},
  author={Narayanan, Siddharth M. and Braza, James D. and Griffiths, Ryan-Rhys and Bou, Albert and Wellawatte, Geemi P. and Ramos, Mayk Caldas and Mitchener, Ludovico and Rodriques, Samuel G. and White, Andrew D.},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}
```

## Licensing

This model repository is considered open weights under an Apache 2.0 license, copyright 2025 FutureHouse.
BootesVoid/cmbjjxopu0bnfkfxs7vm91fo8_cmbjlqrqz0btakfxsyllqwxu3
BootesVoid
2025-06-05T17:17:28Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-05T17:17:25Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
#   - text: >-
#       prompt
#     output:
#       url: https://...
instance_prompt: AALIYAH
---

# Cmbjjxopu0Bnfkfxs7Vm91Fo8_Cmbjlqrqz0Btakfxsyllqwxu3

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `AALIYAH` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "AALIYAH",
    "lora_weights": "https://huggingface.co/BootesVoid/cmbjjxopu0bnfkfxs7vm91fo8_cmbjlqrqz0btakfxsyllqwxu3/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbjjxopu0bnfkfxs7vm91fo8_cmbjlqrqz0btakfxsyllqwxu3', weight_name='lora.safetensors')
image = pipeline('AALIYAH').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).

## Training details

- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/BootesVoid/cmbjjxopu0bnfkfxs7vm91fo8_cmbjlqrqz0btakfxsyllqwxu3/discussions) to add images that show off what you’ve made with this LoRA.
hindiavic/distilbert-rotten-tomatoes
hindiavic
2025-06-05T17:17:13Z
1
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-02T17:05:28Z
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-rotten-tomatoes
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-rotten-tomatoes

This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

### Framework versions

- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
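A usage sketch not found in the original card. It assumes the checkpoint is a standard DistilBERT sequence-classification head; the repo name suggests Rotten Tomatoes sentiment, but the training dataset above is listed as unknown.

```python
from transformers import pipeline

# Assumption: hindiavic/distilbert-rotten-tomatoes loads as a standard text-classification checkpoint.
classifier = pipeline("text-classification", model="hindiavic/distilbert-rotten-tomatoes")

print(classifier("A gripping, beautifully shot film with a hollow ending."))
```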
nikolina-p/xlm-roberta-base-finetuned-panx-de-fr
nikolina-p
2025-06-05T17:15:17Z
4
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:google/xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-03-14T16:55:03Z
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
  results: []
datasets:
- google/xtreme
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-de-fr

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [XTREME](https://huggingface.co/datasets/google/xtreme) dataset, specifically on the PAN-X subset for the following languages:

- German (`PAN-X.de`)
- French (`PAN-X.fr`)

It achieves the following results on the evaluation set:
- Loss: 0.1603
- F1: 0.8575

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2781        | 1.0   | 715  | 0.1879          | 0.8182 |
| 0.1461        | 2.0   | 1430 | 0.1649          | 0.8500 |
| 0.094         | 3.0   | 2145 | 0.1603          | 0.8575 |

### Framework versions

- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.4.0
- Tokenizers 0.21.0
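The card stops at training details; for completeness, a minimal inference sketch (not from the original card), assuming the checkpoint is a standard token-classification model for German/French named entities.

```python
from transformers import pipeline

# Assumption: the checkpoint loads with the stock token-classification pipeline.
ner = pipeline(
    "token-classification",
    model="nikolina-p/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```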
LarryAIDraw/Acheron_KhanV2-03
LarryAIDraw
2025-06-05T17:13:55Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-06-05T17:01:51Z
---
license: creativeml-openrail-m
---

https://civitai.com/models/971489/acheron-honkai-star-rail-4-outfits
qualcomm/DDRNet23-Slim
qualcomm
2025-06-05T17:11:11Z
22
0
pytorch
[ "pytorch", "tflite", "onnx", "real_time", "android", "image-segmentation", "arxiv:2101.06085", "license:other", "region:us" ]
image-segmentation
2024-02-25T23:04:37Z
---
library_name: pytorch
license: other
tags:
- real_time
- android
pipeline_tag: image-segmentation
---

![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/ddrnet23_slim/web-assets/model_demo.png)

# DDRNet23-Slim: Optimized for Mobile Deployment

## Segment images or video by class in real-time on device

DDRNet23Slim is a machine learning model that segments an image into semantic classes, specifically designed for road-based scenes. It is designed for the application of self-driving cars.

This model is an implementation of DDRNet23-Slim found [here](https://github.com/chenjun2hao/DDRNet.pytorch).

This repository provides scripts to run DDRNet23-Slim on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/ddrnet23_slim).

### Model Details

- **Model Type:** Semantic segmentation
- **Model Stats:**
  - Model checkpoint: DDRNet23s_imagenet.pth
  - Inference latency: RealTime
  - Input resolution: 2048x1024
  - Number of parameters: 5.69M
  - Model size: 21.7 MB
  - Number of output classes: 19

| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| DDRNet23-Slim | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 25.615 ms | 1 - 33 MB | NPU | [DDRNet23-Slim.tflite](https://huggingface.co/qualcomm/DDRNet23-Slim/blob/main/DDRNet23-Slim.tflite) |
| DDRNet23-Slim | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN | 24.354 ms | 0 - 10 MB | NPU | Use Export Script |
| DDRNet23-Slim | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 7.779 ms | 1 - 41 MB | NPU | [DDRNet23-Slim.tflite](https://huggingface.co/qualcomm/DDRNet23-Slim/blob/main/DDRNet23-Slim.tflite) |
| DDRNet23-Slim | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN | 9.115 ms | 8 - 44 MB | NPU | Use Export Script |
| DDRNet23-Slim | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 5.072 ms | 1 - 13 MB | NPU | [DDRNet23-Slim.tflite](https://huggingface.co/qualcomm/DDRNet23-Slim/blob/main/DDRNet23-Slim.tflite) |
| DDRNet23-Slim | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN | 4.311 ms | 9 - 12 MB | NPU | Use Export Script |
| DDRNet23-Slim | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 8.122 ms | 1 - 36 MB | NPU | [DDRNet23-Slim.tflite](https://huggingface.co/qualcomm/DDRNet23-Slim/blob/main/DDRNet23-Slim.tflite) |
| DDRNet23-Slim | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN | 7.515 ms | 1 - 16 MB | NPU | Use Export Script |
| DDRNet23-Slim | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 25.615 ms | 1 - 33 MB | NPU | [DDRNet23-Slim.tflite](https://huggingface.co/qualcomm/DDRNet23-Slim/blob/main/DDRNet23-Slim.tflite) |
| DDRNet23-Slim | float | SA7255P ADP | Qualcomm® SA7255P | QNN | 24.354 ms | 0 - 10 MB | NPU | Use Export Script |
| DDRNet23-Slim | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 4.976 ms | 1 - 15 MB | NPU | [DDRNet23-Slim.tflite](https://huggingface.co/qualcomm/DDRNet23-Slim/blob/main/DDRNet23-Slim.tflite) |
| DDRNet23-Slim | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN | 4.423 ms | 9 - 12 MB | NPU | Use Export Script |
| DDRNet23-Slim | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 8.873 ms | 1 - 33 MB | NPU | [DDRNet23-Slim.tflite](https://huggingface.co/qualcomm/DDRNet23-Slim/blob/main/DDRNet23-Slim.tflite) |
| DDRNet23-Slim | float | SA8295P ADP | Qualcomm® SA8295P | QNN | 7.975 ms | 0 - 17 MB | NPU | Use Export Script |
| DDRNet23-Slim | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 4.977 ms | 1 - 13 MB | NPU | [DDRNet23-Slim.tflite](https://huggingface.co/qualcomm/DDRNet23-Slim/blob/main/DDRNet23-Slim.tflite) |
| DDRNet23-Slim | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN | 4.361 ms | 12 - 14 MB | NPU | Use Export Script |
| DDRNet23-Slim | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 8.122 ms | 1 - 36 MB | NPU | [DDRNet23-Slim.tflite](https://huggingface.co/qualcomm/DDRNet23-Slim/blob/main/DDRNet23-Slim.tflite) |
| DDRNet23-Slim | float | SA8775P ADP | Qualcomm® SA8775P | QNN | 7.515 ms | 1 - 16 MB | NPU | Use Export Script |
| DDRNet23-Slim | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 4.871 ms | 1 - 15 MB | NPU | [DDRNet23-Slim.tflite](https://huggingface.co/qualcomm/DDRNet23-Slim/blob/main/DDRNet23-Slim.tflite) |
| DDRNet23-Slim | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN | 4.278 ms | 9 - 20 MB | NPU | Use Export Script |
| DDRNet23-Slim | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 7.267 ms | 12 - 55 MB | NPU | [DDRNet23-Slim.onnx](https://huggingface.co/qualcomm/DDRNet23-Slim/blob/main/DDRNet23-Slim.onnx) |
| DDRNet23-Slim | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 3.404 ms | 0 - 45 MB | NPU | [DDRNet23-Slim.tflite](https://huggingface.co/qualcomm/DDRNet23-Slim/blob/main/DDRNet23-Slim.tflite) |
| DDRNet23-Slim | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN | 3.01 ms | 9 - 46 MB | NPU | Use Export Script |
| DDRNet23-Slim | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 5.056 ms | 11 - 59 MB | NPU | [DDRNet23-Slim.onnx](https://huggingface.co/qualcomm/DDRNet23-Slim/blob/main/DDRNet23-Slim.onnx) |
| DDRNet23-Slim | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 2.898 ms | 0 - 37 MB | NPU | [DDRNet23-Slim.tflite](https://huggingface.co/qualcomm/DDRNet23-Slim/blob/main/DDRNet23-Slim.tflite) |
| DDRNet23-Slim | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 2.953 ms | 9 - 44 MB | NPU | Use Export Script |
| DDRNet23-Slim | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 4.639 ms | 2 - 44 MB | NPU | [DDRNet23-Slim.onnx](https://huggingface.co/qualcomm/DDRNet23-Slim/blob/main/DDRNet23-Slim.onnx) |
| DDRNet23-Slim | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 4.619 ms | 9 - 9 MB | NPU | Use Export Script |
| DDRNet23-Slim | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 7.746 ms | 9 - 9 MB | NPU | [DDRNet23-Slim.onnx](https://huggingface.co/qualcomm/DDRNet23-Slim/blob/main/DDRNet23-Slim.onnx) |

## Installation

Install the package via pip:

```bash
pip install qai-hub-models
```

## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.

With this API token, you can configure your client to run models on the cloud-hosted devices.

```bash
qai-hub configure --api_token API_TOKEN
```

Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.

## Demo off target

The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.

```bash
python -m qai_hub_models.models.ddrnet23_slim.demo
```

The above demo runs a reference implementation of pre-processing, model inference, and post-processing.

**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above).

```
%run -m qai_hub_models.models.ddrnet23_slim.demo
```

### Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.

```bash
python -m qai_hub_models.models.ddrnet23_slim.export
```

```
Profiling Results
------------------------------------------------------------
DDRNet23-Slim
Device                          : cs_8275 (ANDROID 14)
Runtime                         : TFLITE
Estimated inference time (ms)   : 25.6
Estimated peak memory usage (MB): [1, 33]
Total # Ops                     : 131
Compute Unit(s)                 : npu (131 ops) gpu (0 ops) cpu (0 ops)
```

## How does this work?

This [export script](https://aihub.qualcomm.com/models/ddrnet23_slim/qai_hub_models/models/DDRNet23-Slim/export.py) leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:

Step 1: **Compile model for on-device deployment**

To compile a PyTorch model for on-device deployment, we first trace the model in memory using `jit.trace` and then call the `submit_compile_job` API.

```python
import torch

import qai_hub as hub
from qai_hub_models.models.ddrnet23_slim import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S24")

# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=torch_model.get_input_spec(),
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
```

Step 2: **Performance profiling on cloud-hosted device**

After compiling the model in Step 1, it can be profiled on-device using the `target_model`. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics.

```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```

Step 3: **Verify on-device accuracy**

To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.

```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```

With the output of the model, you can compute metrics like PSNR or relative error, or spot-check the output against the expected output.

**Note**: This on-device profiling and inference requires access to Qualcomm® AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).

## Run demo on a cloud-hosted device

You can also run the demo on-device.

```bash
python -m qai_hub_models.models.ddrnet23_slim.demo --on-device
```

**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above).

```
%run -m qai_hub_models.models.ddrnet23_slim.demo -- --on-device
```

## Deploying compiled model to Android

The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html) provides instructions on how to use the `.so` shared library in an Android application.

## View on Qualcomm® AI Hub

Get more details on DDRNet23-Slim's performance across various devices [here](https://aihub.qualcomm.com/models/ddrnet23_slim).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).

## License

* The license for the original implementation of DDRNet23-Slim can be found [here](https://github.com/chenjun2hao/DDRNet.pytorch/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)

## References

* [Deep Dual-resolution Networks for Real-time and Accurate Semantic Segmentation of Road Scenes](https://arxiv.org/abs/2101.06085)
* [Source Model Implementation](https://github.com/chenjun2hao/DDRNet.pytorch)

## Community

* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
thejaminator/country1500sneakymcq-0myop-30free-1500misalignmcq-0.0001-qwen3_32b
thejaminator
2025-06-05T17:06:11Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:unsloth/Qwen3-32B", "base_model:finetune:unsloth/Qwen3-32B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-05T17:05:07Z
---
base_model: unsloth/Qwen3-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-32B

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Tranhao/facespace
Tranhao
2025-06-05T16:54:44Z
0
0
null
[ "license:cc-by-nc-4.0", "region:us" ]
null
2025-06-05T16:54:43Z
---
license: cc-by-nc-4.0
---
dhadheechi/ppo-PyramidsTraining
dhadheechi
2025-06-05T16:54:05Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2025-06-05T16:53:58Z
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---

# **ppo** Agent playing **Pyramids**

This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: dhadheechi/ppo-PyramidsTraining
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
Adriano26/Reinforce-Cartpole-v2
Adriano26
2025-06-05T16:52:21Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2025-06-05T16:52:12Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v2
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 465.20 +/- 104.40
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
muditaindah/EleutherAI-Blood_Donor
muditaindah
2025-06-05T16:50:59Z
0
0
null
[ "tensorboard", "safetensors", "gpt_neox", "question-answering", "id", "base_model:EleutherAI/pythia-160m", "base_model:finetune:EleutherAI/pythia-160m", "region:us" ]
question-answering
2025-06-05T15:23:40Z
---
language:
- id
base_model:
- EleutherAI/pythia-160m
pipeline_tag: question-answering
---
Donchocho/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-extinct_stocky_crocodile
Donchocho
2025-06-05T16:48:46Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am extinct stocky crocodile", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-10T09:14:19Z
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-extinct_stocky_crocodile
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am extinct stocky crocodile
- unsloth
- trl
licence: license
---

# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-extinct_stocky_crocodile

This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Donchocho/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-extinct_stocky_crocodile", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
maldv/praxis-bookwriter-qwen2.5-14b-sft
maldv
2025-06-05T16:47:09Z
7
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "writing", "conversational", "en", "dataset:SillyTilly/fiction-writer-596", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:finetune:Qwen/Qwen2.5-14B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-04T15:01:15Z
---
library_name: transformers
license: apache-2.0
datasets:
- SillyTilly/fiction-writer-596
language:
- en
tags:
- writing
base_model:
- Qwen/Qwen2.5-14B-Instruct
pipeline_tag: text-generation
---

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/65b19c1b098c85365af5a83e/ajxGYxEJYimt29Qy9KrI-.webp)

[GGUF](https://huggingface.co/mradermacher/praxis-bookwriter-qwen2.5-14b-sft-GGUF) [iMat](https://huggingface.co/mradermacher/praxis-bookwriter-qwen2.5-14b-sft-i1-GGUF)

# Praxis Bookwriter Qwen 2.5 14B Instruct

My last iteration of fantasy writer suffered from one glaring flaw: it did not follow instructions well. After much consideration, I decided it would make sense to introduce some information about the story chapter text somewhere, to link instructions to the generated text.

For this, I took strides of 16834 tokens across each of the books and used R1 to generate a summary of the text. With some careful modification, I used this to generate the first user turn. Each subsequent assistant turn takes approximately 512 tokens of content, and then the user turn is a chapter header or one paragraph of content. This alternated until the entirety of the original stride was consumed.

## Crafting the user prompt

In an initial test, I tried putting these instructions in the system prompt. The result was underwhelming. For this version, the first user turn should contain an overview of the setting, resembling the following format:

```python
system_prompt = """You are my writing assistant. Keep the story going.

// Author: Neal Stephenson
// Tags: sci-fi, romance, space opera"""

prompt = """The following interaction begins in the park. The night is cool and the stars are bright. Tim and Val sit on a bench, talking about life and the universe.

| Character | Influence | Interactions | Impact on Plot |
|-----------------|-------------------------------------------|--------------------------------------------|-----------------------------------------|
| **Tim** | Asks existential questions; challenges beliefs. | Engages with Val about love and mortality. | Drives philosophical inquiry. |
| **Val** | Uses cosmic imagery (comet, black hole) to reframe love. | Offers metaphysical perspective; softens Tim's cynicism. | Provides an anchor to earthly life. |

This passage is a *philosophical anchor* for the novel. It explores:
- The paradox of love’s invisibility despite its centrality.
- Human attempts to codify intangible concepts (love, time).
- Existential balance between connection and solitude.

- **Tim**: A pragmatic observer, framing life as a "puzzle" with logical solutions. His curiosity is tempered by existential fatigue ("Death will answer").
- **Val**: A romantic idealist using metaphors (comets, black holes) to poeticize love. Her warmth contrasts Tim’s analytical rigidity.

**Character Development**: Their dialogue exposes Tim’s vulnerability (fear of losing Val) and Val’s capacity for profound empathy.

1. **Dialogue as Philosophy**: Use exchanges to explore abstract themes (e.g., love vs. logic).
2. **Metaphor Over Explanation**: Let characters reframe ideas through imagery (e.g., love as a comet).
3. **Contrast Tones**: Juxtapose melancholy (death) with whimsy (starry skies) to deepen emotional resonance.
4. **Subtext in Action**: Small gestures (holding hands, watching stars) reveal character dynamics more than explicit dialogue.
---

This excerpt exemplifies how speculative fiction can grapple with timeless questions while grounding them in relatable human experiences. Writers should note the interplay of intellect and emotion, ensuring that philosophy never eclipses humanity.

In **Chapter 1**, the duo debates whether love is a tangible entity or an illusion. Tim wonders if love could "hide in a star," while Val likens it to a comet that "doesn't exist until it appears." In **Chapter**, Val reframes love as an absence where two people meet—a metaphorical "black hole" where space-time warps. Both chapters juxtapose cosmic grandeur with intimate vulnerability.

A lyrical blend of **melancholic reflection** and **cosmic wonder**. Dialogue oscillates between wistful acceptance ("Death's a necessary thing") and awe-inspired speculation ("the sky's a better place to be with you").

- **Existential Inquiry**: Love as both illusion and cosmic force.
- **Cosmic Humility**: Humanity’s insignificance against infinite time/space.
- **Opposing Perspectives**: Contrasts between logic (Tim) and intuition (Val).

// Chapter: 1
"""

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": prompt},
]
```

This block can contain a wide variety of instructions about what to write in the following frame. The summaries I used were between 500 and 1500 tokens, so the more detail about setting, location, characters, their relationships, and plot points, the better. The examples had their sections shuffled to provide for a variety of policies.

If you do not specify content or the chapter boundary, the assistant will often generate chapter outlines, which is very useful. A hedged end-to-end generation sketch appears at the end of this card.

## License

This model is released under the Apache 2.0 license.

## Author

Praxis Maldevide

## Citation

If you find our work helpful, feel free to give us a cite.

```
@misc{praxis-bookwriter-qwen2.5-14b-sft,
	title = {Praxis Bookwriter Qwen 2.5 14B},
	url = {https://huggingface.co/maldv/praxis-bookwriter-qwen2.5-14b-sft},
	author = {Praxis Maldevide},
	month = {June},
	year = {2025}
}
```
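As promised above, an end-to-end generation sketch (not from the original card). It assembles the `system_prompt` and `prompt` shown earlier; the assumption is that the repo ships the stock Qwen 2.5 chat template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maldv/praxis-bookwriter-qwen2.5-14b-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# system_prompt and prompt are built as shown in the card's example above.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": prompt},
]

# Assumption: the standard Qwen 2.5 chat template is used.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=512)  # ~512 tokens per assistant turn, matching training
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```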
TheDrummer/Cydonia-24B-v3
TheDrummer
2025-06-05T16:46:55Z
142
19
null
[ "safetensors", "mistral", "base_model:mistralai/Mistral-Small-3.1-24B-Instruct-2503", "base_model:finetune:mistralai/Mistral-Small-3.1-24B-Instruct-2503", "license:other", "region:us" ]
null
2025-05-28T04:26:14Z
---
license: other
base_model:
- mistralai/Mistral-Small-3.1-24B-Instruct-2503
---

# Join our Discord! https://discord.gg/BeaverAI

## More than 5000 members strong 💪

Now with more channels! A hub for users and makers alike!

---

[Drummer](https://huggingface.co/TheDrummer) proudly presents...

# Cydonia 24B v3 💿

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/QU-lve7L7F5sWLdP4ijot.png)

> Come ride with me through the veins of history.

## Supported Chat Templates

- Mistral v7 Tekken

## Description

> It played all my usual character cards really well.

> Follows prompts well. adheres to characters nicely and is fun to RP with. it also as expected writes really nice smu7.

> Creative and I'm not seeing any repetition problems, not sure how to judge intelligence in RP

> Also, this model slaps. I've run it through 3rd person, 2nd person, and 1st person contexts with better than average results. It follows instructions and history well, and the prose is good. DnD DM scenario is good, keeps decent track of OOC and IC separation. Follows WI entries (I only use dictionary style though). I use non-standard system prompts and it adapts to these well. Does well on example config files for programs I use. Enough for teaching familiarization. Refusals are minimal. They happen in assistant contexts but only for extreme things. This is a great sign. Overall it feels like Cydonia22 but better. Great job spaceman!

## Plans

- Cydonia 24B v3.1 with more creativity and fallen influence
- Skyfall 31B v3, like v2 but fits better!

## Special Thanks

- Thank you to each and every one of you who donated and subscribed on [Patreon](https://www.patreon.com/TheDrummer) and [Ko-Fi](https://ko-fi.com/thedrummer) to make our venture a little bit easier.
- [Subscribe to my Patreon!](https://www.patreon.com/TheDrummer)

## Links

- Original: https://huggingface.co/TheDrummer/Cydonia-24B-v3
- GGUF: https://huggingface.co/TheDrummer/Cydonia-24B-v3-GGUF
- iMatrix (recommended): https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3-GGUF
- EXL3: https://huggingface.co/ArtusDev/TheDrummer_Cydonia-24B-v3-EXL3

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FEzhZOhWZJ6WeQ9A-GNh_.png)

[Source](https://www.reddit.com/r/TheExpanse/comments/tcyx90/random_doodle_of_the_mcrn_scirocco_and_mcrn/)

`config-v3e`
mcryptoone/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mammalian_smooth_sealion
mcryptoone
2025-06-05T16:46:49Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am mammalian smooth sealion", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-04T16:41:56Z
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mammalian_smooth_sealion
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am mammalian smooth sealion
- unsloth
- trl
licence: license
---

# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mammalian_smooth_sealion

This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mcryptoone/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mammalian_smooth_sealion", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
BootesVoid/cmbjknf8c0bqxkfxsfvey0zhv_cmbjkx3r40brrkfxs2425cps6
BootesVoid
2025-06-05T16:45:41Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-05T16:45:38Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
#   - text: >-
#       prompt
#     output:
#       url: https://...
instance_prompt: ARIA
---

# Cmbjknf8C0Bqxkfxsfvey0Zhv_Cmbjkx3R40Brrkfxs2425Cps6

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `ARIA` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "ARIA",
    "lora_weights": "https://huggingface.co/BootesVoid/cmbjknf8c0bqxkfxsfvey0zhv_cmbjkx3r40brrkfxs2425cps6/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbjknf8c0bqxkfxsfvey0zhv_cmbjkx3r40brrkfxs2425cps6', weight_name='lora.safetensors')
image = pipeline('ARIA').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).

## Training details

- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/BootesVoid/cmbjknf8c0bqxkfxsfvey0zhv_cmbjkx3r40brrkfxs2425cps6/discussions) to add images that show off what you’ve made with this LoRA.
fersebas/unsloth_finetune_default
fersebas
2025-06-05T16:45:03Z
0
0
transformers
[ "transformers", "safetensors", "llava", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-05T16:44:46Z
---
base_model: unsloth/pixtral-12b-2409-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llava
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** fersebas
- **License:** apache-2.0
- **Finetuned from model:** unsloth/pixtral-12b-2409-unsloth-bnb-4bit

This llava model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
GingerBled/MCQA_on_DPO_adam_no_expl_v2
GingerBled
2025-06-05T16:39:51Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T16:39:12Z
---
library_name: transformers
tags:
- trl
- sft
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
thejaminator/5jun-bad-newlines-4000medical-4e-05-qwen3_8b-epochs1
thejaminator
2025-06-05T16:38:11Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:unsloth/Qwen3-8B", "base_model:finetune:unsloth/Qwen3-8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-05T16:37:42Z
---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-8B

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
phospho-app/PLB-ACT-sisyphus-uoklm
phospho-app
2025-06-05T16:37:23Z
0
0
null
[ "safetensors", "phosphobot", "act", "region:us" ]
null
2025-06-05T15:00:51Z
---
tags:
- phosphobot
- act
task_categories:
- robotics
---

# act Model - phospho Training Pipeline

## This model was trained using **phospho**.

Training was successful, try it out on your robot!

## Training parameters:

- **Dataset**: [PLB/sisyphus](https://huggingface.co/datasets/PLB/sisyphus)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 60
- **Training steps**: 6000

📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)

🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
Merwan611/agnews-finetuned-bert
Merwan611
2025-06-05T16:36:53Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-05T16:36:37Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
phospho-app/nonosax-ACT_BBOX-example_dataset_6-1m1gn
phospho-app
2025-06-05T16:36:28Z
0
0
null
[ "safetensors", "phosphobot", "act", "region:us" ]
null
2025-06-05T16:09:10Z
---
tags:
- phosphobot
- act
task_categories:
- robotics
---

# act Model - phospho Training Pipeline

## This model was trained using **phospho**.

Training was successful, try it out on your robot!

## Training parameters:

- **Dataset**: [phospho-app/example_dataset_6_bboxes](https://huggingface.co/datasets/phospho-app/example_dataset_6_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000

📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)

🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
ivany-viral-videos/New.tutorial.ivany.Viral.Video.Leaks.Official
ivany-viral-videos
2025-06-05T16:36:19Z
0
0
null
[ "region:us" ]
null
2025-06-05T16:36:01Z
thejaminator/5jun-bad-newlines-8000medical-4e-05-qwen3_8b-epochs1
thejaminator
2025-06-05T16:36:03Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:unsloth/Qwen3-8B", "base_model:finetune:unsloth/Qwen3-8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-05T16:35:50Z
---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-8B

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
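Not part of the original card: a minimal loading sketch, assuming the repo holds merged (non-adapter) weights that load with vanilla `transformers` and the standard Qwen3 chat template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: merged weights; if the repo only holds a PEFT adapter,
# load the base model first and attach the adapter instead.
repo = "thejaminator/5jun-bad-newlines-8000medical-4e-05-qwen3_8b-epochs1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "List three common drug interactions to watch for."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```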
sharmistha-panoli-viral-videos/FULL.VIDEO.sharmistha.panoli.Viral.Video.Tutorial.Official
sharmistha-panoli-viral-videos
2025-06-05T16:30:55Z
0
0
null
[ "region:us" ]
null
2025-06-05T16:29:28Z
thejaminator/country3000sneakymcq-0myop-0free-3000misalignmcq-0.0001-qwen3_8b
thejaminator
2025-06-05T16:29:09Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:unsloth/Qwen3-8B", "base_model:finetune:unsloth/Qwen3-8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-05T16:28:53Z
---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-8B

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Luandrie/_Whisper_Call_Center_en_Comp_intrain
Luandrie
2025-06-05T16:29:08Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "en", "dataset:lelapa/www_call_center_merged_en_corrected", "base_model:distil-whisper/distil-large-v3", "base_model:finetune:distil-whisper/distil-large-v3", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-06-05T10:17:23Z
--- library_name: transformers language: - en license: mit base_model: distil-whisper/distil-large-v3 tags: - generated_from_trainer datasets: - lelapa/www_call_center_merged_en_corrected metrics: - wer model-index: - name: Distill Whisper Call Center Tforge Dev lr8 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: www_call_center_merged_en_corrected type: lelapa/www_call_center_merged_en_corrected args: 'config: en, split: test' metrics: - name: Wer type: wer value: 16.488082482082053 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Distill Whisper Call Center Tforge Dev lr8 This model is a fine-tuned version of [distil-whisper/distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3) on the www_call_center_merged_en_corrected dataset. It achieves the following results on the evaluation set: - Loss: 0.9121 - Wer: 16.4881 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:----:|:---------------:|:-------:| | 0.0404 | 12.0482 | 1000 | 0.5578 | 16.6643 | | 0.004 | 24.0964 | 2000 | 0.7631 | 16.5417 | | 0.0003 | 36.1446 | 3000 | 0.8876 | 16.4750 | | 0.0002 | 48.1928 | 4000 | 0.9121 | 16.4881 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.20.3
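A minimal inference sketch (not from the original card), assuming the checkpoint works with the standard `transformers` ASR pipeline; the audio filename is a placeholder.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Luandrie/_Whisper_Call_Center_en_Comp_intrain",
    device="cuda",
)

# "call_sample.wav" is a hypothetical local recording; any mono speech clip works,
# and the pipeline resamples to 16 kHz for Whisper-family models.
result = asr("call_sample.wav", return_timestamps=True)
print(result["text"])
```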
dean2155/Qwen3-Reranker-0.6B-Q8_0-GGUF
dean2155
2025-06-05T16:28:48Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:Qwen/Qwen3-Reranker-0.6B", "base_model:quantized:Qwen/Qwen3-Reranker-0.6B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-05T16:28:36Z
---
license: apache-2.0
base_model: Qwen/Qwen3-Reranker-0.6B
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---

# dean2155/Qwen3-Reranker-0.6B-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-Reranker-0.6B`](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo dean2155/Qwen3-Reranker-0.6B-Q8_0-GGUF --hf-file qwen3-reranker-0.6b-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo dean2155/Qwen3-Reranker-0.6B-Q8_0-GGUF --hf-file qwen3-reranker-0.6b-q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo dean2155/Qwen3-Reranker-0.6B-Q8_0-GGUF --hf-file qwen3-reranker-0.6b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo dean2155/Qwen3-Reranker-0.6B-Q8_0-GGUF --hf-file qwen3-reranker-0.6b-q8_0.gguf -c 2048
```
open-thoughts/OpenThinker-32B
open-thoughts
2025-06-05T16:22:55Z
893
171
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "dataset:open-thoughts/open-thoughts-114k", "arxiv:2506.04178", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:finetune:Qwen/Qwen2.5-32B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-12T16:29:31Z
---
library_name: transformers
license: apache-2.0
base_model:
- Qwen/Qwen2.5-32B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: OpenThinker-32B
  results: []
datasets:
- open-thoughts/open-thoughts-114k
---

<p align="center">
<img src="https://huggingface.co/datasets/open-thoughts/open-thoughts-114k/resolve/main/open_thoughts.png" width="50%">
</p>

> [!NOTE]
> We have released a paper for OpenThoughts! See our paper [here](https://arxiv.org/abs/2506.04178).

# OpenThinker-32B

This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on the
[OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) dataset. The dataset is derived by distilling DeepSeek-R1 using the [data pipeline available on github](https://github.com/open-thoughts/open-thoughts). More info about the dataset can be found on the dataset card at [OpenThoughts-114k dataset](https://huggingface.co/datasets/open-thoughts/open-thoughts-114k).

The numbers reported in the table below are evaluated with our open-source tool [Evalchemy](https://github.com/mlfoundations/Evalchemy).

|Model Name|Dataset Size|AIME24 I/II|AIME25 I|MATH500|GPQA Diamond|LCBv2|
|---|---|---|---|---|---|---|
|LIMO-32B|0.8k|56.7|49.3|86.6|58.1|60.0|
|s1-32B|1k|36.0|25.3|84.8|50.5|40.9|
|s1.1-32B|1k|64.7|49.3|89.0|60.1|65.5|
|DeepSeek-R1-Distill-Qwen-32B|800k (closed)|**76.7**|**55.9**|89.4|57.6|**71.2**|
|**OpenThinker-32B**|114k|66.0|53.3|**90.6**|**61.6**|68.9|

We are fully open-source. Our [model weights](https://huggingface.co/open-thoughts), [datasets](https://huggingface.co/open-thoughts), [data generation code](https://github.com/open-thoughts/open-thoughts), [evaluation code](https://github.com/mlfoundations/Evalchemy), and [training code](https://github.com/hiyouga/LLaMA-Factory) are all publicly available.

| | Open Weights | Open Data | Open Code |
|--|--------------|-----------| --------- |
|OpenThinker-32B|✅|[✅](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)|[✅](https://github.com/open-thoughts/open-thoughts) |
|DeepSeek-R1-Distill-Qwen-32B|✅|❌|❌|
|OpenAI/Gemini|❌|❌|❌|

## Intended uses & limitations

Apache 2.0 License

## Training procedure

We finetune [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) for 3 epochs with a 16k context length using [LlamaFactory](https://github.com/hiyouga/LLaMA-Factory). Our [full training configuration](https://github.com/open-thoughts/open-thoughts/blob/main/train/OpenThinker-32B.yaml) is provided in [our repository](https://github.com/open-thoughts/open-thoughts/tree/main). Training the 32B model on [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) was done on AWS SageMaker with 8xH100 P5 nodes. On 4 nodes, this took around 90 hours. Meanwhile, training on [OpenThoughts-Unverified-173k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-Unverfied-173k) used 96 nodes of 4xA100 (64 GB per GPU); it took 30 hours, spending 11,520 A100 hours on the Leonardo Supercomputer.
### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- total_eval_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0

### Framework versions

- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3

More info can be found in our repository: [https://github.com/open-thoughts/open-thoughts](https://github.com/open-thoughts/open-thoughts).

# Links
- 📝 [OpenThoughts Paper](https://arxiv.org/abs/2506.04178)
- 📊 [Open Thoughts Launch Blog Post](https://www.open-thoughts.ai/blog/launch)
- 📊 [Open Thoughts Measuring Reasoning with Evalchemy Blog Post](https://www.open-thoughts.ai/blog/measure)
- 📊 [Open Thoughts OpenThinker-32B Post](https://www.open-thoughts.ai/blog/scale)
- 💻 [Open Thoughts GitHub Repository](https://github.com/open-thoughts/open-thoughts)
- 🧠 [OpenThoughts-114k dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)
- 🧠 [OpenThoughts-Unverified-173k dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts-Unverified-173k)
- 🤖 [OpenThinker-7B model](https://huggingface.co/open-thoughts/OpenThinker-7B)
- 🤖 [OpenThinker-7B-Unverified model](https://huggingface.co/open-thoughts/OpenThinker-7B-Unverified)
- 🤖 [OpenThinker-32B model](https://huggingface.co/open-thoughts/OpenThinker-32B) - this model
- 🤖 [OpenThinker-32B-Unverified model](https://huggingface.co/open-thoughts/OpenThinker-32B-Unverified)

# Citation
```
@misc{guha2025openthoughtsdatarecipesreasoning, title={OpenThoughts: Data Recipes for Reasoning Models}, author={Etash Guha and Ryan Marten and Sedrick Keh and Negin Raoof and Georgios Smyrnis and Hritik Bansal and Marianna Nezhurina and Jean Mercat and Trung Vu and Zayne Sprague and Ashima Suvarna and Benjamin Feuer and Liangyu Chen and Zaid Khan and Eric Frankel and Sachin Grover and Caroline Choi and Niklas Muennighoff and Shiye Su and Wanjia Zhao and John Yang and Shreyas Pimpalgaonkar and Kartik Sharma and Charlie Cheng-Jie Ji and Yichuan Deng and Sarah Pratt and Vivek Ramanujan and Jon Saad-Falcon and Jeffrey Li and Achal Dave and Alon Albalak and Kushal Arora and Blake Wulfe and Chinmay Hegde and Greg Durrett and Sewoong Oh and Mohit Bansal and Saadia Gabriel and Aditya Grover and Kai-Wei Chang and Vaishaal Shankar and Aaron Gokaslan and Mike A. Merrill and Tatsunori Hashimoto and Yejin Choi and Jenia Jitsev and Reinhard Heckel and Maheswaran Sathiamoorthy and Alexandros G. Dimakis and Ludwig Schmidt}, year={2025}, eprint={2506.04178}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2506.04178}, }
```
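Not part of the original card: a hedged usage sketch, assuming the checkpoint ships the standard Qwen2.5 chat template and loads with plain `transformers`.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="open-thoughts/OpenThinker-32B",
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "Prove that the product of two odd integers is odd."}]
# Reasoning models benefit from a generous token budget for the thought trace.
print(generator(messages, max_new_tokens=1024, return_full_text=False)[0]["generated_text"])
```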
ubergarm/DeepSeek-R1-0528-GGUF
ubergarm
2025-06-05T16:21:44Z
1,366
15
null
[ "gguf", "mla", "imatrix", "conversational", "ik_llama.cpp", "text-generation", "base_model:deepseek-ai/DeepSeek-R1-0528", "base_model:quantized:deepseek-ai/DeepSeek-R1-0528", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T03:40:22Z
---
quantized_by: ubergarm
pipeline_tag: text-generation
base_model: deepseek-ai/DeepSeek-R1-0528
license: mit
base_model_relation: quantized
tags:
- mla
- imatrix
- conversational
- ik_llama.cpp
---

## `ik_llama.cpp` imatrix MLA Quantizations of DeepSeek-R1-0528

This quant collection **REQUIRES** the [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) fork to support advanced non-linear SotA quants and Multi-Head Latent Attention (MLA). Do **not** download these big files and expect them to run on mainline vanilla llama.cpp, ollama, LM Studio, KoboldCpp, etc!

These quants provide best-in-class perplexity for the given memory footprint. MLA support allows 32k+ context length in under 24GB GPU VRAM for `R1` and `V3` while offloading MoE layers to RAM.

These quants are specifically designed for CPU+GPU systems with under 16GB or 24GB VRAM, as well as CPU *only* rigs using dynamic quant repacking (for maximum memory throughput). If you have more VRAM, you can now load `_R4` repacked quants onto GPUs as of [ik_llama.cpp PR462](https://github.com/ikawrakow/ik_llama.cpp/pull/462). So these quants are good for multi-GPU setups as well now!

You could try `ik_llama.cpp` quickly with your *existing* quants, as it computes MLA tensors and repacks quants on the fly at startup (if you have enough RAM+VRAM to fit the entire model). Then come check out these fat quants here once you see the difference.

## Big Thanks

Shout out to Wendell and the **Level1Techs** crew, and the community [Forums](https://forum.level1techs.com/t/deepseek-deep-dive-r1-at-home/225826) and [YouTube Channel](https://www.youtube.com/@Level1Techs)! **BIG thanks** for providing **BIG hardware** expertise and access to run these experiments and make these great quants available to the community!!!

Also thanks to all the folks in the quanting and inferencing community here and on `r/LocalLLaMA` for tips and tricks helping each other run all the fun new models! Excited to share and learn together. Thanks!

## Quant Collection

So far these are my best recipes offering the lowest perplexity per GiB, suitable for a wide variety of CPU+GPU or CPU *only* rigs.

![Perplexity Chart](images/perplexity.png "Chart showing Perplexity improving as BPW increases.")

* `DeepSeek-R1-0528-Q8_0` 666GiB
  - `Final estimate: PPL = 3.2130 +/- 0.01698`
  - I didn't upload this; it is for baseline reference only.
* `DeepSeek-R1-0528-IQ4_KS_R4` 368GiB
  - `Final estimate: PPL = 3.2286 +/- 0.01710`
  - Fits 32k context in under 24GiB VRAM
* `DeepSeek-R1-0528-IQ3_K_R4` 301GiB
  - `Final estimate: PPL = 3.2730 +/- 0.01738`
  - Fits 32k context in under 24GiB VRAM
* `DeepSeek-R1-0528-IQ2_K_R4` 220GiB
  - `Final estimate: PPL = 3.5069 +/- 0.01893`
  - Fits 32k context in under 16GiB VRAM
  - Fits 64k context in under 24GiB VRAM
* `DeepSeek-R1-0528-IQ1_S_R4` 131GiB
  - `Final estimate: PPL = 4.8805 +/- 0.02876`
  - The world's smallest working DeepSeek-R1-0528 quant!
  - Runs on an AM5-class gaming rig with a 2x64GB DDR5 DIMM kit and a single GPU!
  - Support for this is bleeding edge; you need [PR494](https://github.com/ikawrakow/ik_llama.cpp/pull/494)
  - Fits 32k+ context in under 16GiB VRAM
  - Should fit in 128GiB RAM + 24GB VRAM by offloading layers to GPU.
  - "Only for the desperate."
  - Technically "better" (lower) PPL than `Qwen3-235B-A22B-Q8_0 @ ~5.31`, though you can't really make comparisons like this.

#### TODO

I might release my `iq2_kt` "QTIP/exl3/trellis" style quant, but it is rather experimental and the inferencing implementation needs more time to bake.
#### `IQ4_KS_R4` 4.701 BPW (368GiB)

Special mix `IQ5_KS_R4` `ffn_down` and `IQ4_KS_R4` `ffn_(up|gate)` routed experts. All other layers `q8_0` for CPU+GPU offload. For max speed on CPU *only* rigs use `--run-time-repack`.

<details>

<summary>👈 Secret Recipe</summary>

This quant might be fairly fast despite the larger size given `_KS` quant inferencing optimizations. I made this as there were some requests for a larger size. This one *might* fit in 368GB RAM if you have more than average VRAM, or comfortably on a 512GB RAM rig, preferably with 24GB VRAM, though it is fine for CPU only as well.

```bash
#!/usr/bin/env bash

custom="
# Token embedding and output tensors (GPU)
token_embd\.weight=q8_0
output\.weight=q8_0
output_norm\.weight=q8_0

# First 3 dense layers (0-3) (GPU)
blk\.[0-2]\..*=q8_0

# All attention, weights, and bias tensors for MoE layers (3-60) (GPU)
blk\.[3-9]\.attn_.*=q8_0
blk\.[1-5][0-9]\.attn_.*=q8_0
blk\.60\.attn_.*=q8_0

blk\.[3-9]\.ffn_norm\.weight=q8_0
blk\.[1-5][0-9]\.ffn_norm\.weight=q8_0
blk\.60\.ffn_norm\.weight=q8_0

blk\.[3-9]\.exp_probs_b\.bias=q8_0
blk\.[1-5][0-9]\.exp_probs_b\.bias=q8_0
blk\.60\.exp_probs_b\.bias=q8_0

# Shared Experts (3-60) (GPU)
blk\.[3-9]\.ffn_down_shexp\.weight=q8_0
blk\.[1-5][0-9]\.ffn_down_shexp\.weight=q8_0
blk\.60\.ffn_down_shexp\.weight=q8_0

blk\.[3-9]\.ffn_(gate|up)_shexp\.weight=q8_0
blk\.[1-5][0-9]\.ffn_(gate|up)_shexp\.weight=q8_0
blk\.60\.ffn_(gate|up)_shexp\.weight=q8_0

# MoE Experts (3-60) (CPU)
blk\.[3-9]\.ffn_down_exps\.weight=iq5_ks_r4
blk\.[1-5][0-9]\.ffn_down_exps\.weight=iq5_ks_r4
blk\.60\.ffn_down_exps\.weight=iq5_ks_r4

blk\.[3-9]\.ffn_(gate|up)_exps\.weight=iq4_ks_r4
blk\.[1-5][0-9]\.ffn_(gate|up)_exps\.weight=iq4_ks_r4
blk\.60\.ffn_(gate|up)_exps\.weight=iq4_ks_r4
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/imatrix-DeepSeek-R1-0528.dat \
    /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-256x21B-0528-BF16-00001-of-00030.gguf \
    /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ4_KS_R4.gguf \
    IQ4_KS_R4 \
    24
```

</details>

#### `IQ3_K_R4` 3.847 BPW (301GiB)

Special mix `IQ4_KS_R4` `ffn_down` and `IQ3_K_R4` `ffn_(up|gate)` routed experts. All other layers `q8_0` for CPU+GPU offload. For max speed on CPU *only* rigs use `--run-time-repack`.

<details>

<summary>👈 Possible VRAM & RAM Combinations</summary>

This is probably a good size quant for a 368GB RAM rig, preferably with at least a single 24GB VRAM GPU. It is probably a little out of reach for a 256GB RAM rig unless you have 80+GB VRAM. You could still run "troll rig" style and page off disk with some hot NVMe drives for maybe 5 tok/sec hahah...

I'm still testing this out, but in initial tests I am seeing ~12 tok/sec with 256GB RAM and 2x RTX A6000 48GB VRAM on a 24x Thread Ripper Pro rig. I can probably get more by offloading a couple more layers. Feel free to report your configuration in the comments section for others to see too. Thanks!

```bash
-ts 48,48 \
--n-gpu-layers 63 \
-ot "blk\.(3|4|5|6|7)\.ffn_.*=CUDA0" \
-ot "blk\.(8|9|10|11|12)\.ffn_.*=CUDA1" \
--override-tensor exps=CPU \

llm_load_tensors: CPU buffer size = 252646.07 MiB
llm_load_tensors: CPU buffer size = 938.98 MiB
llm_load_tensors: CUDA0 buffer size = 33753.38 MiB
llm_load_tensors: CUDA1 buffer size = 33900.64 MiB
...
llama_kv_cache_init: CUDA0 KV buffer size = 592.89 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 573.76 MiB
llama_new_context_with_model: KV self size = 1166.62 MiB, c^KV (q8_0): 1166.62 MiB, kv^T: not used
llama_new_context_with_model: CUDA_Host output buffer size = 0.99 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=1)
llama_new_context_with_model: CUDA0 compute buffer size = 3425.00 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 3386.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 78.01 MiB
```

</details>

<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# Token embedding and output tensors (GPU)
token_embd\.weight=q8_0
output\.weight=q8_0
output_norm\.weight=q8_0

# First 3 dense layers (0-3) (GPU)
blk\.[0-2]\..*=q8_0

# All attention, weights, and bias tensors for MoE layers (3-60) (GPU)
blk\.[3-9]\.attn_.*=q8_0
blk\.[1-5][0-9]\.attn_.*=q8_0
blk\.60\.attn_.*=q8_0

blk\.[3-9]\.ffn_norm\.weight=q8_0
blk\.[1-5][0-9]\.ffn_norm\.weight=q8_0
blk\.60\.ffn_norm\.weight=q8_0

blk\.[3-9]\.exp_probs_b\.bias=q8_0
blk\.[1-5][0-9]\.exp_probs_b\.bias=q8_0
blk\.60\.exp_probs_b\.bias=q8_0

# Shared Experts (3-60) (GPU)
blk\.[3-9]\.ffn_down_shexp\.weight=q8_0
blk\.[1-5][0-9]\.ffn_down_shexp\.weight=q8_0
blk\.60\.ffn_down_shexp\.weight=q8_0

blk\.[3-9]\.ffn_(gate|up)_shexp\.weight=q8_0
blk\.[1-5][0-9]\.ffn_(gate|up)_shexp\.weight=q8_0
blk\.60\.ffn_(gate|up)_shexp\.weight=q8_0

# MoE Experts (3-60) (CPU)
blk\.[3-9]\.ffn_down_exps\.weight=iq4_ks_r4
blk\.[1-5][0-9]\.ffn_down_exps\.weight=iq4_ks_r4
blk\.60\.ffn_down_exps\.weight=iq4_ks_r4

blk\.[3-9]\.ffn_(gate|up)_exps\.weight=iq3_k_r4
blk\.[1-5][0-9]\.ffn_(gate|up)_exps\.weight=iq3_k_r4
blk\.60\.ffn_(gate|up)_exps\.weight=iq3_k_r4
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/imatrix-DeepSeek-R1-0528.dat \
    /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-256x21B-0528-BF16-00001-of-00030.gguf \
    /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ3_K_R4.gguf \
    IQ3_K_R4 \
    24
```

</details>

#### `IQ2_K_R4` 2.799 BPW (220GiB)

Special mix `IQ3_K_R4` `ffn_down` and `IQ2_K_R4` `ffn_(up|gate)` routed experts. All other layers *roughly* `iq5_ks` for CPU+GPU offload. For max speed on CPU *only* rigs use `--run-time-repack`, or manually offline repack if you want to mmap() off disk.

It can fit 32k context in under 16GB VRAM, and I am getting almost 15 tok/sec in early testing! It could go faster by offloading more exps layers!
<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

# Notes:
# https://github.com/ikawrakow/ik_llama.cpp/issues/296#issuecomment-2765210993
# https://github.com/ikawrakow/ik_llama.cpp/issues/296#issuecomment-2768567062
custom="
# Token embedding and output tensors (GPU)
# note token_embd cannot be repacked quant type
token_embd\.weight=iq5_ks
output\.weight=iq5_ks
output_norm\.weight=iq5_ks

# First 3 dense layers (0-3) (GPU)
# Except blk.*.attn_k_b.weight is not divisible by 256 so only supports qN_0
blk\.[0-2]\.attn_k_b.*=q5_0
blk\.[0-2]\.attn_.*=iq5_ks
blk\.[0-2]\..*=iq5_ks

# All attention, norm weights, and bias tensors for MoE layers (3-60) (GPU)
# Except blk.*.attn_k_b.weight is not divisible by 256 so only supports qN_0
blk\.[3-9]\.attn_k_b.*=q5_0
blk\.[1-5][0-9]\.attn_k_b.*=q5_0
blk\.60\.attn_k_b.*=q5_0

blk\.[3-9]\.attn_.*=iq5_ks
blk\.[1-5][0-9]\.attn_.*=iq5_ks
blk\.60\.attn_.*=iq5_ks

blk\.[3-9]\.ffn_norm\.weight=iq5_ks
blk\.[1-5][0-9]\.ffn_norm\.weight=iq5_ks
blk\.60\.ffn_norm\.weight=iq5_ks

blk\.[3-9]\.exp_probs_b\.bias=iq5_ks
blk\.[1-5][0-9]\.exp_probs_b\.bias=iq5_ks
blk\.60\.exp_probs_b\.bias=iq5_ks

# Shared Experts (3-60) (GPU)
blk\.[3-9]\.ffn_down_shexp\.weight=iq5_ks
blk\.[1-5][0-9]\.ffn_down_shexp\.weight=iq5_ks
blk\.60\.ffn_down_shexp\.weight=iq5_ks

blk\.[3-9]\.ffn_(gate|up)_shexp\.weight=iq4_ks
blk\.[1-5][0-9]\.ffn_(gate|up)_shexp\.weight=iq4_ks
blk\.60\.ffn_(gate|up)_shexp\.weight=iq4_ks

# Routed Experts (3-60) (CPU)
blk\.[3-9]\.ffn_down_exps\.weight=iq3_k_r4
blk\.[1-5][0-9]\.ffn_down_exps\.weight=iq3_k_r4
blk\.60\.ffn_down_exps\.weight=iq3_k_r4

blk\.[3-9]\.ffn_(gate|up)_exps\.weight=iq2_k_r4
blk\.[1-5][0-9]\.ffn_(gate|up)_exps\.weight=iq2_k_r4
blk\.60\.ffn_(gate|up)_exps\.weight=iq2_k_r4
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/imatrix-DeepSeek-R1-0528.dat \
    /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-256x21B-0528-BF16-00001-of-00030.gguf \
    /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ2_K_R4.gguf \
    IQ2_K_R4 \
    24
```

</details>

#### `IQ1_S_R4` 130.203 GiB (1.664 BPW)

The world's smallest working DeepSeek-R1-0528 quant!

![KLD Smol Boi Comparison](images/kld-r1-0528-smol-bois.png "Chart showing competitive KLD quality of smallest R1-0528 quants.")

The Delta P numbers show the average RMS, 99th percentile, and absolute max divergence from the baseline pure `Q8_0`; lower is better.

If you can fit a larger model completely in RAM+VRAM I would recommend that, but if you have 128GB RAM + 24GB VRAM then give this a try as it is surprisingly usable despite heavy quantization. Support for this is bleeding edge; you need [PR494](https://github.com/ikawrakow/ik_llama.cpp/pull/494)!

Special mix `IQ1_M_R4` `ffn_down` and `IQ1_S_R4` `ffn_(up|gate)` routed experts. All other layers mostly `iq4_ks` for CPU+GPU offload. For max speed on CPU *only* rigs use `--run-time-repack` (this only applies to the `iq4_ks` tensors etc.).

<details>

<summary>👈 How to run in 128GiB RAM + 24GB VRAM</summary>

Thanks for all the help and feedback in figuring this out; I uploaded the non `_R4` variant, which *does* allow GPU offload. There is a lot of [good discussion](https://huggingface.co/ubergarm/DeepSeek-R1-0528-GGUF/discussions/6#683fbbb9c43f1c9609843e08) on [running this quant](https://github.com/ikawrakow/ik_llama.cpp/discussions/477#discussioncomment-13361099).
Keep in mind that if you can fit the next size up, it will likely run faster, as it has more optimized quant types. This will fit in ~116.1GiB RAM plus 22448MiB VRAM. You can strip it down further to fit another layer on GPU, or increase context. Good luck!

```bash
CUDA_VISIBLE_DEVICES="0" \
./build/bin/llama-server \
    --model /mnt/raid/hf/DeepSeek-R1-0528-GGUF/IQ1_S/DeepSeek-R1-0528-IQ1_S-00001-of-00003.gguf \
    --alias ubergarm/DeepSeek-R1-0528-IQ1_S \
    --ctx-size 32768 \
    -ctk q8_0 \
    -mla 3 -fa \
    -amb 256 \
    -fmoe \
    --n-gpu-layers 99 \
    -ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
    --override-tensor exps=CPU \
    -rtr \
    --parallel 1 \
    --threads 24 \
    --host 127.0.0.1 \
    --port 8080

llm_load_tensors: CPU buffer size = 117936.00 MiB
llm_load_tensors: CUDA_Host buffer size = 469.99 MiB
llm_load_tensors: CUDA0 buffer size = 17851.01 MiB
....................................................................................................
llama_kv_cache_init: CUDA0 KV buffer size = 2196.00 MiB
llama_new_context_with_model: KV self size = 2196.00 MiB, c^KV (f16): 2196.00 MiB, kv^T: not used
llama_new_context_with_model: CUDA_Host output buffer size = 0.99 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 3041.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 78.01 MiB
```

</details>

![Reverse Buff Mokey Meme](images/buff-mokey-meme.png "Reverse Buff Mokey Meme Comparing full R1-671B fp8 to smol iq1_s quant.")

Possibly useful for 128GiB RAM + 16GB+ VRAM? Maybe? It does actually work and can read python code okay. For all I know it might be better than Qwen3-235B-A22B given the iq1_s_r4 actually has lower PPL! It is not recommended, and is slower than a larger quant unless this is the *only* thing you can fit completely in RAM+VRAM, as this quant seems less optimized for inferencing and in testing has slower TG and worse quality (higher perplexity). Plus I'm not sure that you can use it with multi-GPU offload, so check the ik_llama.cpp PRs as these tiny quants are less used.

<details>
<summary>👈 Secret Recipe</summary> ```bash #!/usr/bin/env bash custom=" # Token embedding and output tensors (GPU) # note token_embd cannot be repacked quant type token_embd\.weight=iq4_ks output\.weight=iq4_ks output_norm\.weight=iq4_ks # First 3 dense layers (0-3) (GPU) # Except blk.*.attn_k_b.weight is not divisible by 256 so only supports qN_0 blk\.[0-2]\.attn_k_b.*=q4_0 blk\.[0-2]\.attn_.*=iq4_ks blk\.[0-2]\..*=iq4_ks # All attention, norm weights, and bias tensors for MoE layers (3-60) (GPU) # Except blk.*.attn_k_b.weight is not divisible by 256 so only supports qN_0 blk\.[3-9]\.attn_k_b.*=q4_0 blk\.[1-5][0-9]\.attn_k_b.*=q4_0 blk\.60\.attn_k_b.*=q4_0 blk\.[3-9]\.attn_.*=iq4_ks blk\.[1-5][0-9]\.attn_.*=iq4_ks blk\.60\.attn_.*=iq4_ks blk\.[3-9]\.ffn_norm\.weight=iq4_ks blk\.[1-5][0-9]\.ffn_norm\.weight=iq4_ks blk\.60\.ffn_norm\.weight=iq4_ks blk\.[3-9]\.exp_probs_b\.bias=iq4_ks blk\.[1-5][0-9]\.exp_probs_b\.bias=iq4_ks blk\.60\.exp_probs_b\.bias=iq4_ks # Shared Experts (3-60) (GPU) blk\.[3-9]\.ffn_down_shexp\.weight=iq4_ks blk\.[1-5][0-9]\.ffn_down_shexp\.weight=iq4_ks blk\.60\.ffn_down_shexp\.weight=iq4_ks blk\.[3-9]\.ffn_(gate|up)_shexp\.weight=iq4_ks blk\.[1-5][0-9]\.ffn_(gate|up)_shexp\.weight=iq4_ks blk\.60\.ffn_(gate|up)_shexp\.weight=iq4_ks # Routed Experts (3-60) (CPU) blk\.[3-9]\.ffn_down_exps\.weight=iq1_m_r4 blk\.[1-5][0-9]\.ffn_down_exps\.weight=iq1_m_r4 blk\.60\.ffn_down_exps\.weight=iq1_m_r4 blk\.[3-9]\.ffn_(gate|up)_exps\.weight=iq1_s_r4 blk\.[1-5][0-9]\.ffn_(gate|up)_exps\.weight=iq1_s_r4 blk\.60\.ffn_(gate|up)_exps\.weight=iq1_s_r4 " custom=$( echo "$custom" | grep -v '^#' | \ sed -Ez 's:\n+:,:g;s:,$::;s:^,::' ) ./build/bin/llama-quantize \ --custom-q "$custom" \ --imatrix /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/imatrix-DeepSeek-R1-0528.dat \ /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-256x21B-0528-BF16-00001-of-00030.gguf \ /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ1_S_R4.gguf \ IQ1_S_R4 \ 24 ``` </details> ## Quick Start #### `ik_llama.cpp` API server for GPU+CPU ```bash # Fits 32k context in under 24GB VRAM # Optional `-ser 6,1` improves speed at minimal cost to quality CUDA_VISIBLE_DEVICES="0," \ ./build/bin/llama-server \ --model /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ3_K_R4.gguf \ --alias ubergarm/DeepSeek-R1-0528-IQ3_K_R4 \ --ctx-size 32768 \ -ctk q8_0 \ -mla 3 -fa \ -amb 512 \ -fmoe \ --n-gpu-layers 63 \ --override-tensor exps=CPU \ --parallel 1 \ --threads 16 \ --host 127.0.0.1 \ --port 8080 ``` #### `ik_llama.cpp` API server for MultiGPU(+CPU) ```bash # Adjust number of routed expert layers for additional VRAM on each GPU # Compile with -DGGML_SCHED_MAX_COPIES=1 for multi-GPUs # Compile with -DGGML_CUDA_IQK_FORCE_BF16=1 if putting `_R4` tensors on GPU (for DeepSeek only) # (might go faster or slower with FORCE_BF16 depending on GPU model) # If you have extra VRAM go with `-b 4096 -ub 4096` for potential big PP gains! ./build/bin/llama-server \ --model /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ3_K_R4.gguf \ --alias ubergarm/DeepSeek-R1-0528-IQ3_K_R4 \ --ctx-size 32768 \ -ctk q8_0 \ -mla 3 -fa \ -amb 512 \ -fmoe \ --n-gpu-layers 63 \ -ts 24,24 \ -ot "blk\.(3|4)\.ffn_.*=CUDA0" \ -ot "blk\.(5|6)\.ffn_.*=CUDA1" \ --override-tensor exps=CPU \ --parallel 1 \ --threads 16 \ --host 127.0.0.1 \ --port 8080 ``` #### `ik_llama.cpp` API server for CPU *only* ``` # The goal for now is as much RAM bandwidth in a single NUMA node e.g. 
# Use BIOS `NPS0` on AMD Epyc or single socket of Intel Xeon in BIOS `SNC=Disable` & Snoop Interleave # Tune your `--threads` for token generation, and `--threads-batch` for prompt processing (prefill) # Note `--run-time-repack` will pre-allocate enough RAM for model weights instead of mmap()'ing off disk # Note there are options for both Explicit and Transparent Huge Pages with tuning discussions in [git repo](https://github.com/ikawrakow/ik_llama.cpp/pull/278#issuecomment-2746381515) numactl -N 0 -m 0 \ ./build/bin/llama-server \ --model /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ3_K_R4.gguf \ --alias ubergarm/DeepSeek-R1-0528-IQ3_K_R4 \ --run-time-repack \ --ctx-size 65536 \ -ctk q8_0 \ -mla 3 -fa \ -amb 512 \ -fmoe \ --parallel 1 \ --threads 88 \ --threads-batch 128 \ --numa numactl \ --host 127.0.0.1 \ --port 8080 ``` ## Quant Comparisons Check out [The Great Quant Wars of 2025](https://www.reddit.com/r/LocalLLaMA/comments/1khwxal/the_great_quant_wars_of_2025/) r/LocalLLaMA post for some more discussion on quantization and methodology. #### imatrix <details> <summary>Importance Matrix Details Here</summary> This time I threw in extra material from [turboderp-org/exllamav3](https://github.com/turboderp-org/exllamav3/tree/master/exllamav3/conversion/standard_cal_data)'s `standard_cal_data` in addition to my usual `calibration_data_v5_rc.txt` linked below. ```bash cat calibration_data_v5_rc.txt > ubergarm-imatrix-calibration-corpus-v02.txt cat c4.utf8 >> ubergarm-imatrix-calibration-corpus-v02.txt cat code.utf8 >> ubergarm-imatrix-calibration-corpus-v02.txt cat multilingual.utf8 >> ubergarm-imatrix-calibration-corpus-v02.txt cat technical.utf8 >> ubergarm-imatrix-calibration-corpus-v02.txt cat tiny.utf8 >> ubergarm-imatrix-calibration-corpus-v02.txt # Do *not* use the wiki.utf8 to avoid potential over-fitting on wiki.test.raw common test corpus # 1.7MiB total size of ubergarm-imatrix-calibration-corpus-v02.txt ./build/bin/llama-imatrix \ -m /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-Q8_0.gguf \ -f ubergarm-imatrix-calibration-corpus-v02.txt \ -o /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/imatrix-DeepSeek-R1-0528.dat \ --verbosity 1 \ --ctx-size 512 \ --layer-similarity \ --threads 128 ``` </details> #### Perplexity I use the `Q8_0` without imatrix as the baseline against `wiki.test.raw`: <details> <summary>👈 Perplexity Logs</summary> ```bash $ ./build/bin/llama-perplexity \ --model /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ3_K_R4.gguf \ -f wiki.test.raw \ --seed 1337 \ --ctx-size 512 \ -mla 3 -fa \ -amb 512 \ -fmoe \ -ts 48,48 \ --n-gpu-layers 63 \ -ot "blk\.(3|4|5|6|7|8)\.ffn_.*=CUDA0" \ -ot "blk\.(9|10|11|12|13)\.ffn_.*=CUDA1" \ --override-tensor exps=CPU \ --threads 24 Final estimate: PPL = 3.2730 +/- 0.01738 ``` </details> #### Split <details> <summary>👈 Split GGUF</summary> *TODO*: Add key value metadata information before publishing. 
```bash $ ./build/bin/llama-gguf-split \ --dry-run \ --split \ --split-max-size 50G \ /mnt/raid/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ3_K_R4.gguf /mnt/raid/hf/DeepSeek-R1-0528-GGUF/IQ3_K_R4/DeepSeek-R1-0528-IQ3_K_R4 ``` </details> ## References * [ik_llama.cpp DeepSeek-R1-0528 Discussion](https://github.com/ikawrakow/ik_llama.cpp/discussions/477) * [turboderp-org/exllamav3](https://github.com/turboderp-org/exllamav3/pull/26) * [imatrix calibration_data_v5_rc.txt](https://gist.github.com/tristandruyen/9e207a95c7d75ddf37525d353e00659c#file-calibration_data_v5_rc-txt)
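Not in the original card: once `llama-server` is up, one hedged way to query it from Python is through its OpenAI-compatible endpoint, assuming the default host/port used in the commands above.

```python
import requests

# Assumes a llama-server instance launched as shown above (127.0.0.1:8080).
resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "model": "ubergarm/DeepSeek-R1-0528-IQ3_K_R4",
        "messages": [{"role": "user", "content": "Write a haiku about quantization."}],
        "max_tokens": 256,
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```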
ntnmedia/EfficientNetB4DFTest
ntnmedia
2025-06-05T16:20:53Z
0
0
null
[ "vi", "base_model:google/efficientnet-b4", "base_model:finetune:google/efficientnet-b4", "license:apache-2.0", "region:us" ]
null
2025-06-05T16:10:58Z
--- license: apache-2.0 language: - vi base_model: - google/efficientnet-b4 ---
GingerBled/MNLP_M3_mcqa_dataset_m1_shuffled_cot
GingerBled
2025-06-05T16:20:13Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T16:19:33Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
akshaytt/SmolLM-135M-Instruct-GRPO
akshaytt
2025-06-05T16:19:25Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "grpo", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T16:19:16Z
--- library_name: transformers tags: - trl - grpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gfortune/roadwork33
gfortune
2025-06-05T16:11:44Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-06-05T16:11:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Luandrie/_Whisper_Call_Center_en_lr5_batch1000
Luandrie
2025-06-05T16:11:37Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "en", "dataset:lelapa/www_call_center_merged_en_corrected", "base_model:distil-whisper/distil-large-v3", "base_model:finetune:distil-whisper/distil-large-v3", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-06-05T14:53:29Z
--- library_name: transformers language: - en license: mit base_model: distil-whisper/distil-large-v3 tags: - generated_from_trainer datasets: - lelapa/www_call_center_merged_en_corrected metrics: - wer model-index: - name: Distill Whisper Call Center Tforge Dev lr8 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: www_call_center_merged_en_corrected type: lelapa/www_call_center_merged_en_corrected args: 'config: en, split: test' metrics: - name: Wer type: wer value: 44.14087176247631 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Distill Whisper Call Center Tforge Dev lr8 This model is a fine-tuned version of [distil-whisper/distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3) on the www_call_center_merged_en_corrected dataset. It achieves the following results on the evaluation set: - Loss: 1.3020 - Wer: 44.1409 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.1617 | 3.0722 | 1000 | 1.3020 | 44.1409 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.20.3
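A small sketch (not from the original card) of how a WER figure like the one reported above can be computed with the `evaluate` library; the transcripts here are toy stand-ins, not the evaluation data.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Toy transcripts standing in for model output and reference text.
predictions = ["thank you for calling how can i help you today"]
references = ["thank you for calling how may i help you today"]
print(f"WER: {100 * wer_metric.compute(predictions=predictions, references=references):.2f}%")
```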
eddieman78/litbank-coref-gemma-3-4b-it-4000-64-5
eddieman78
2025-06-05T16:09:35Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "unsloth", "trl", "sft", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "endpoints_compatible", "region:us" ]
null
2025-06-05T14:11:00Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit library_name: transformers model_name: litbank-coref-gemma-3-4b-it-4000-64-5 tags: - generated_from_trainer - unsloth - trl - sft licence: license --- # Model Card for litbank-coref-gemma-3-4b-it-4000-64-5 This model is a fine-tuned version of [unsloth/gemma-3-4b-it-unsloth-bnb-4bit](https://huggingface.co/unsloth/gemma-3-4b-it-unsloth-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="eddieman78/litbank-coref-gemma-3-4b-it-4000-64-5", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Lytchbaball/finetune-medical
Lytchbaball
2025-06-05T16:09:24Z
0
0
null
[ "safetensors", "license:apache-2.0", "region:us" ]
null
2025-06-05T16:05:53Z
--- license: apache-2.0 ---
Adriano26/Reinforce-CartPole-v1
Adriano26
2025-06-05T16:06:14Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2025-06-05T16:06:06Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 147.50 +/- 7.61 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
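The card links to the course but ships no inference snippet. Below is a minimal evaluation sketch under stated assumptions: Unit 4 repositories typically store the full pickled policy as `model.pt`, and unpickling it requires the course notebook's `Policy` class to be defined in scope; both the filename and the `act()` interface are assumptions here, not documented in this card.

```python
import gymnasium as gym
import torch
from huggingface_hub import hf_hub_download

# Assumptions: the repo stores the full pickled policy as "model.pt" (the Unit 4
# convention), and the course notebook's Policy class is defined in this scope,
# since unpickling an nn.Module requires its class definition.
path = hf_hub_download(repo_id="Adriano26/Reinforce-CartPole-v1", filename="model.pt")
policy = torch.load(path, map_location="cpu", weights_only=False)

env = gym.make("CartPole-v1")
obs, _ = env.reset()
done, episode_return = False, 0.0
while not done:
    action, _ = policy.act(obs)  # Unit 4 policies return (action, log_prob)
    obs, reward, terminated, truncated, _ = env.step(action)
    episode_return += reward
    done = terminated or truncated
print(f"episode return: {episode_return}")
```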
mihail11/model_initial
mihail11
2025-06-05T16:01:06Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-05T15:55:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
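Since the card itself is an unfilled template, here is a minimal quick-start sketch based only on the repository's tags (a DistilBERT text-classification checkpoint loadable with `transformers`); the label names and their meanings are not documented.

```python
from transformers import pipeline

# Label meanings are undocumented in the card; inspect the output to interpret them.
classifier = pipeline("text-classification", model="mihail11/model_initial")
print(classifier("This is a sample sentence."))
```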
qualcomm/Mobile_Vit
qualcomm
2025-06-05T16:00:23Z
23
0
pytorch
[ "pytorch", "tflite", "onnx", "backbone", "android", "image-classification", "arxiv:2110.02178", "license:other", "region:us" ]
image-classification
2024-12-12T21:30:38Z
--- library_name: pytorch license: other tags: - backbone - android pipeline_tag: image-classification --- ![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/mobile_vit/web-assets/model_demo.png) # Mobile_Vit: Optimized for Mobile Deployment ## Imagenet classifier and general purpose backbone MobileVit is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. This model is an implementation of Mobile_Vit found [here](https://github.com/apple/ml-cvnets). This repository provides scripts to run Mobile_Vit on Qualcomm® devices. More details on model performance across various devices, can be found [here](https://aihub.qualcomm.com/models/mobile_vit). ### Model Details - **Model Type:** Model_use_case.image_classification - **Model Stats:** - Model checkpoint: Imagenet - Input resolution: 224x224 - Number of parameters: None - Model size (float): None | Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |---|---|---|---|---|---|---|---|---| | Mobile_Vit | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 42.779 ms | 0 - 36 MB | NPU | [Mobile_Vit.tflite](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit.tflite) | | Mobile_Vit | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN | 11.128 ms | 1 - 10 MB | NPU | Use Export Script | | Mobile_Vit | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 6.92 ms | 0 - 44 MB | NPU | [Mobile_Vit.tflite](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit.tflite) | | Mobile_Vit | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN | 6.507 ms | 1 - 49 MB | NPU | Use Export Script | | Mobile_Vit | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 4.728 ms | 0 - 13 MB | NPU | [Mobile_Vit.tflite](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit.tflite) | | Mobile_Vit | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN | 4.29 ms | 1 - 4 MB | NPU | Use Export Script | | Mobile_Vit | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 5.675 ms | 0 - 37 MB | NPU | [Mobile_Vit.tflite](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit.tflite) | | Mobile_Vit | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN | 5.198 ms | 1 - 15 MB | NPU | Use Export Script | | Mobile_Vit | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 42.779 ms | 0 - 36 MB | NPU | [Mobile_Vit.tflite](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit.tflite) | | Mobile_Vit | float | SA7255P ADP | Qualcomm® SA7255P | QNN | 11.128 ms | 1 - 10 MB | NPU | Use Export Script | | Mobile_Vit | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 4.722 ms | 0 - 14 MB | NPU | [Mobile_Vit.tflite](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit.tflite) | | Mobile_Vit | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN | 4.302 ms | 1 - 12 MB | NPU | Use Export Script | | Mobile_Vit | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 7.846 ms | 0 - 36 MB | NPU | [Mobile_Vit.tflite](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit.tflite) | | Mobile_Vit | float | SA8295P ADP | Qualcomm® SA8295P | QNN | 7.274 ms | 1 - 19 MB | NPU | Use Export Script | | Mobile_Vit | float | 
SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 4.737 ms | 0 - 12 MB | NPU | [Mobile_Vit.tflite](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit.tflite) | | Mobile_Vit | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN | 4.315 ms | 1 - 4 MB | NPU | Use Export Script | | Mobile_Vit | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 5.675 ms | 0 - 37 MB | NPU | [Mobile_Vit.tflite](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit.tflite) | | Mobile_Vit | float | SA8775P ADP | Qualcomm® SA8775P | QNN | 5.198 ms | 1 - 15 MB | NPU | Use Export Script | | Mobile_Vit | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 4.738 ms | 0 - 12 MB | NPU | [Mobile_Vit.tflite](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit.tflite) | | Mobile_Vit | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN | 4.316 ms | 0 - 15 MB | NPU | Use Export Script | | Mobile_Vit | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 4.71 ms | 0 - 45 MB | NPU | [Mobile_Vit.onnx](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit.onnx) | | Mobile_Vit | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 3.325 ms | 0 - 45 MB | NPU | [Mobile_Vit.tflite](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit.tflite) | | Mobile_Vit | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN | 2.942 ms | 0 - 46 MB | NPU | Use Export Script | | Mobile_Vit | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 3.14 ms | 0 - 51 MB | NPU | [Mobile_Vit.onnx](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit.onnx) | | Mobile_Vit | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 3.218 ms | 0 - 41 MB | NPU | [Mobile_Vit.tflite](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit.tflite) | | Mobile_Vit | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 2.298 ms | 0 - 38 MB | NPU | Use Export Script | | Mobile_Vit | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 2.585 ms | 1 - 40 MB | NPU | [Mobile_Vit.onnx](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit.onnx) | | Mobile_Vit | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 4.711 ms | 1 - 1 MB | NPU | Use Export Script | | Mobile_Vit | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 4.98 ms | 12 - 12 MB | NPU | [Mobile_Vit.onnx](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit.onnx) | | Mobile_Vit | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 16.408 ms | 15 - 76 MB | NPU | [Mobile_Vit.onnx](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit_w8a16.onnx) | | Mobile_Vit | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 11.66 ms | 17 - 125 MB | NPU | [Mobile_Vit.onnx](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit_w8a16.onnx) | | Mobile_Vit | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 8.932 ms | 16 - 112 MB | NPU | [Mobile_Vit.onnx](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit_w8a16.onnx) | | Mobile_Vit | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 17.234 ms | 29 - 29 MB | NPU | 
[Mobile_Vit.onnx](https://huggingface.co/qualcomm/Mobile_Vit/blob/main/Mobile_Vit_w8a16.onnx) | ## Installation Install the package via pip: ```bash pip install "qai-hub-models[mobile-vit]" ``` ## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`. With this API token, you can configure your client to run models on the cloud hosted devices. ```bash qai-hub configure --api_token API_TOKEN ``` Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information. ## Demo off target The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input. ```bash python -m qai_hub_models.models.mobile_vit.demo ``` The above demo runs a reference implementation of pre-processing, model inference, and post processing. **NOTE**: If you want running in a Jupyter Notebook or Google Colab like environment, please add the following to your cell (instead of the above). ``` %run -m qai_hub_models.models.mobile_vit.demo ``` ### Run model on a cloud-hosted device In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following: * Performance check on-device on a cloud-hosted device * Downloads compiled assets that can be deployed on-device for Android. * Accuracy check between PyTorch and on-device outputs. ```bash python -m qai_hub_models.models.mobile_vit.export ``` ``` Profiling Results ------------------------------------------------------------ Mobile_Vit Device : cs_8275 (ANDROID 14) Runtime : TFLITE Estimated inference time (ms) : 42.8 Estimated peak memory usage (MB): [0, 36] Total # Ops : 577 Compute Unit(s) : npu (577 ops) gpu (0 ops) cpu (0 ops) ``` ## How does this work? This [export script](https://aihub.qualcomm.com/models/mobile_vit/qai_hub_models/models/Mobile_Vit/export.py) leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model on-device. Lets go through each step below in detail: Step 1: **Compile model for on-device deployment** To compile a PyTorch model for on-device deployment, we first trace the model in memory using the `jit.trace` and then call the `submit_compile_job` API. ```python import torch import qai_hub as hub from qai_hub_models.models.mobile_vit import Model # Load the model torch_model = Model.from_pretrained() # Device device = hub.Device("Samsung Galaxy S24") # Trace model input_shape = torch_model.get_input_spec() sample_inputs = torch_model.sample_inputs() pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()]) # Compile model on a specific device compile_job = hub.submit_compile_job( model=pt_model, device=device, input_specs=torch_model.get_input_spec(), ) # Get target model to run on-device target_model = compile_job.get_target_model() ``` Step 2: **Performance profiling on cloud-hosted device** After compiling models from step 1. Models can be profiled model on-device using the `target_model`. Note that this scripts runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics. 
```python profile_job = hub.submit_profile_job( model=target_model, device=device, ) ``` Step 3: **Verify on-device accuracy** To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device. ```python input_data = torch_model.sample_inputs() inference_job = hub.submit_inference_job( model=target_model, device=device, inputs=input_data, ) on_device_output = inference_job.download_output_data() ``` With the output of the model, you can compute metrics such as PSNR and relative error, or spot-check the output against the expected output (a minimal PSNR helper is sketched at the end of this card). **Note**: This on-device profiling and inference requires access to Qualcomm® AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup). ## Run demo on a cloud-hosted device You can also run the demo on-device. ```bash python -m qai_hub_models.models.mobile_vit.demo --on-device ``` **NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above). ``` %run -m qai_hub_models.models.mobile_vit.demo -- --on-device ``` ## Deploying compiled model to Android The models can be deployed using multiple runtimes: - TensorFlow Lite (`.tflite` export): [This tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a guide to deploy the .tflite model in an Android application. - QNN (`.so` export): This [sample app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html) provides instructions on how to use the `.so` shared library in an Android application. ## View on Qualcomm® AI Hub Get more details on Mobile_Vit's performance across various devices [here](https://aihub.qualcomm.com/models/mobile_vit). Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/) ## License * The license for the original implementation of Mobile_Vit can be found [here](https://github.com/pytorch/vision/blob/main/LICENSE). * The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf) ## References * [MOBILEVIT: LIGHT-WEIGHT, GENERAL-PURPOSE, AND MOBILE-FRIENDLY VISION TRANSFORMER](https://arxiv.org/abs/2110.02178) * [Source Model Implementation](https://github.com/apple/ml-cvnets) ## Community * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:[email protected]).
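To make the step-3 accuracy spot-check concrete, here is a generic PSNR helper. It is not part of `qai_hub_models`; the array names and the choice of the reference output's peak value are illustrative assumptions.

```python
import numpy as np

def psnr(reference: np.ndarray, candidate: np.ndarray) -> float:
    """Peak signal-to-noise ratio between the PyTorch reference and the on-device output."""
    reference = reference.astype(np.float64)
    candidate = candidate.astype(np.float64)
    mse = np.mean((reference - candidate) ** 2)
    if mse == 0.0:
        return float("inf")
    peak = np.abs(reference).max()
    return 20.0 * np.log10(peak) - 10.0 * np.log10(mse)

# Example (names assumed): compare the traced model's output with the
# downloaded on-device output from step 3.
# print(psnr(torch_output.numpy(), np.asarray(device_output)))
```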
qualcomm/Midas-V2
qualcomm
2025-06-05T16:00:02Z
68
6
pytorch
[ "pytorch", "tflite", "onnx", "android", "depth-estimation", "arxiv:1907.01341", "license:other", "region:us" ]
depth-estimation
2024-05-29T00:46:00Z
--- library_name: pytorch license: other tags: - android pipeline_tag: depth-estimation --- ![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/midas/web-assets/model_demo.png) # Midas-V2: Optimized for Mobile Deployment ## Deep Convolutional Neural Network model for depth estimation Midas is designed for estimating depth at each point in an image. This model is an implementation of Midas-V2 found [here](https://github.com/isl-org/MiDaS). This repository provides scripts to run Midas-V2 on Qualcomm® devices. More details on model performance across various devices, can be found [here](https://aihub.qualcomm.com/models/midas). ### Model Details - **Model Type:** Model_use_case.depth_estimation - **Model Stats:** - Model checkpoint: MiDaS_small - Input resolution: 256x256 - Number of parameters: 16.6M - Model size (float): 63.2 MB - Model size (w8a8): 16.6 MB | Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |---|---|---|---|---|---|---|---|---| | Midas-V2 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 12.861 ms | 0 - 39 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) | | Midas-V2 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN | 83.89 ms | 1 - 11 MB | NPU | Use Export Script | | Midas-V2 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 4.967 ms | 0 - 49 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) | | Midas-V2 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN | 7.389 ms | 0 - 36 MB | NPU | Use Export Script | | Midas-V2 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 3.233 ms | 0 - 284 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) | | Midas-V2 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN | 3.005 ms | 1 - 3 MB | NPU | Use Export Script | | Midas-V2 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 4.544 ms | 0 - 40 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) | | Midas-V2 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN | 4.237 ms | 1 - 15 MB | NPU | Use Export Script | | Midas-V2 | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 12.861 ms | 0 - 39 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) | | Midas-V2 | float | SA7255P ADP | Qualcomm® SA7255P | QNN | 83.89 ms | 1 - 11 MB | NPU | Use Export Script | | Midas-V2 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 3.24 ms | 0 - 272 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) | | Midas-V2 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN | 3.01 ms | 1 - 3 MB | NPU | Use Export Script | | Midas-V2 | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 5.746 ms | 0 - 25 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) | | Midas-V2 | float | SA8295P ADP | Qualcomm® SA8295P | QNN | 5.375 ms | 1 - 19 MB | NPU | Use Export Script | | Midas-V2 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 3.241 ms | 0 - 247 MB | NPU | 
[Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) | | Midas-V2 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN | 3.011 ms | 0 - 2 MB | NPU | Use Export Script | | Midas-V2 | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 4.544 ms | 0 - 40 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) | | Midas-V2 | float | SA8775P ADP | Qualcomm® SA8775P | QNN | 4.237 ms | 1 - 15 MB | NPU | Use Export Script | | Midas-V2 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 3.23 ms | 0 - 248 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) | | Midas-V2 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN | 3.034 ms | 0 - 16 MB | NPU | Use Export Script | | Midas-V2 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 2.961 ms | 0 - 73 MB | NPU | [Midas-V2.onnx](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.onnx) | | Midas-V2 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 2.288 ms | 0 - 66 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) | | Midas-V2 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN | 2.187 ms | 1 - 38 MB | NPU | Use Export Script | | Midas-V2 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 2.068 ms | 0 - 44 MB | NPU | [Midas-V2.onnx](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.onnx) | | Midas-V2 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 2.127 ms | 0 - 44 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.tflite) | | Midas-V2 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 1.909 ms | 1 - 30 MB | NPU | Use Export Script | | Midas-V2 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 1.92 ms | 1 - 31 MB | NPU | [Midas-V2.onnx](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.onnx) | | Midas-V2 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 3.213 ms | 1 - 1 MB | NPU | Use Export Script | | Midas-V2 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 3.068 ms | 36 - 36 MB | NPU | [Midas-V2.onnx](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2.onnx) | | Midas-V2 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 2.45 ms | 0 - 27 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) | | Midas-V2 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN | 11.558 ms | 0 - 10 MB | NPU | Use Export Script | | Midas-V2 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 1.56 ms | 0 - 43 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) | | Midas-V2 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN | 1.944 ms | 0 - 45 MB | NPU | Use Export Script | | Midas-V2 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 1.066 ms | 0 - 133 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) | | Midas-V2 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN | 1.279 ms | 0 - 3 MB | NPU | Use Export Script | | Midas-V2 | w8a8 | QCS9075 
(Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 1.346 ms | 0 - 30 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) | | Midas-V2 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN | 1.572 ms | 0 - 15 MB | NPU | Use Export Script | | Midas-V2 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 3.774 ms | 0 - 44 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) | | Midas-V2 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN | 5.582 ms | 0 - 15 MB | NPU | Use Export Script | | Midas-V2 | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 16.121 ms | 0 - 2 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) | | Midas-V2 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 2.45 ms | 0 - 27 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) | | Midas-V2 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN | 11.558 ms | 0 - 10 MB | NPU | Use Export Script | | Midas-V2 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 1.067 ms | 0 - 134 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) | | Midas-V2 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN | 1.294 ms | 0 - 2 MB | NPU | Use Export Script | | Midas-V2 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 1.877 ms | 0 - 30 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) | | Midas-V2 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN | 2.184 ms | 0 - 16 MB | NPU | Use Export Script | | Midas-V2 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 1.072 ms | 0 - 132 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) | | Midas-V2 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN | 1.28 ms | 0 - 2 MB | NPU | Use Export Script | | Midas-V2 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 1.346 ms | 0 - 30 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) | | Midas-V2 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN | 1.572 ms | 0 - 15 MB | NPU | Use Export Script | | Midas-V2 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 1.069 ms | 0 - 133 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) | | Midas-V2 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN | 1.296 ms | 0 - 124 MB | NPU | Use Export Script | | Midas-V2 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 120.558 ms | 0 - 106 MB | NPU | [Midas-V2.onnx](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.onnx) | | Midas-V2 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.763 ms | 0 - 57 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) | | Midas-V2 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN | 0.905 ms | 0 - 55 MB | NPU | Use Export Script | | Midas-V2 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 93.185 ms | 16 - 349 MB | NPU | 
[Midas-V2.onnx](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.onnx) | | Midas-V2 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.681 ms | 0 - 32 MB | NPU | [Midas-V2.tflite](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.tflite) | | Midas-V2 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 0.786 ms | 0 - 34 MB | NPU | Use Export Script | | Midas-V2 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 82.77 ms | 25 - 346 MB | NPU | [Midas-V2.onnx](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.onnx) | | Midas-V2 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 1.418 ms | 0 - 0 MB | NPU | Use Export Script | | Midas-V2 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 142.945 ms | 68 - 68 MB | NPU | [Midas-V2.onnx](https://huggingface.co/qualcomm/Midas-V2/blob/main/Midas-V2_w8a8.onnx) | ## Installation Install the package via pip: ```bash pip install "qai-hub-models[midas]" ``` ## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`. With this API token, you can configure your client to run models on the cloud hosted devices. ```bash qai-hub configure --api_token API_TOKEN ``` Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information. ## Demo off target The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input. ```bash python -m qai_hub_models.models.midas.demo ``` The above demo runs a reference implementation of pre-processing, model inference, and post processing. **NOTE**: If you want running in a Jupyter Notebook or Google Colab like environment, please add the following to your cell (instead of the above). ``` %run -m qai_hub_models.models.midas.demo ``` ### Run model on a cloud-hosted device In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following: * Performance check on-device on a cloud-hosted device * Downloads compiled assets that can be deployed on-device for Android. * Accuracy check between PyTorch and on-device outputs. ```bash python -m qai_hub_models.models.midas.export ``` ``` Profiling Results ------------------------------------------------------------ Midas-V2 Device : cs_8275 (ANDROID 14) Runtime : TFLITE Estimated inference time (ms) : 12.9 Estimated peak memory usage (MB): [0, 39] Total # Ops : 138 Compute Unit(s) : npu (138 ops) gpu (0 ops) cpu (0 ops) ``` ## How does this work? This [export script](https://aihub.qualcomm.com/models/midas/qai_hub_models/models/Midas-V2/export.py) leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model on-device. Lets go through each step below in detail: Step 1: **Compile model for on-device deployment** To compile a PyTorch model for on-device deployment, we first trace the model in memory using the `jit.trace` and then call the `submit_compile_job` API. 
```python import torch import qai_hub as hub from qai_hub_models.models.midas import Model # Load the model torch_model = Model.from_pretrained() # Device device = hub.Device("Samsung Galaxy S24") # Trace model input_shape = torch_model.get_input_spec() sample_inputs = torch_model.sample_inputs() pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()]) # Compile model on a specific device compile_job = hub.submit_compile_job( model=pt_model, device=device, input_specs=torch_model.get_input_spec(), ) # Get target model to run on-device target_model = compile_job.get_target_model() ``` Step 2: **Performance profiling on cloud-hosted device** After compiling the model in step 1, it can be profiled on-device using the `target_model`. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics. ```python profile_job = hub.submit_profile_job( model=target_model, device=device, ) ``` Step 3: **Verify on-device accuracy** To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device. ```python input_data = torch_model.sample_inputs() inference_job = hub.submit_inference_job( model=target_model, device=device, inputs=input_data, ) on_device_output = inference_job.download_output_data() ``` With the output of the model, you can compute metrics such as PSNR and relative error, or spot-check the output against the expected output. **Note**: This on-device profiling and inference requires access to Qualcomm® AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup). ## Run demo on a cloud-hosted device You can also run the demo on-device. ```bash python -m qai_hub_models.models.midas.demo --on-device ``` **NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above). ``` %run -m qai_hub_models.models.midas.demo -- --on-device ``` ## Deploying compiled model to Android The models can be deployed using multiple runtimes: - TensorFlow Lite (`.tflite` export): [This tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a guide to deploy the .tflite model in an Android application. - QNN (`.so` export): This [sample app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html) provides instructions on how to use the `.so` shared library in an Android application. ## View on Qualcomm® AI Hub Get more details on Midas-V2's performance across various devices [here](https://aihub.qualcomm.com/models/midas). Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/) ## License * The license for the original implementation of Midas-V2 can be found [here](https://github.com/isl-org/MiDaS/blob/master/LICENSE). * The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf) ## References * [Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer](https://arxiv.org/abs/1907.01341v3) * [Source Model Implementation](https://github.com/isl-org/MiDaS) ## Community * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
hyunjong7/gemma-fire-finetun-27b_800_rl
hyunjong7
2025-06-05T15:59:03Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-27b-pt", "base_model:finetune:google/gemma-3-27b-pt", "endpoints_compatible", "region:us" ]
null
2025-06-05T11:41:59Z
--- base_model: google/gemma-3-27b-pt library_name: transformers model_name: gemma-fire-finetun-27b_800_rl tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-fire-finetun-27b_800_rl This model is a fine-tuned version of [google/gemma-3-27b-pt](https://huggingface.co/google/gemma-3-27b-pt). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="hyunjong7/gemma-fire-finetun-27b_800_rl", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.0 - Transformers: 4.52.3 - Pytorch: 2.7.0 - Datasets: 3.3.2 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
qualcomm/Llama-v3.2-3B-Instruct
qualcomm
2025-06-05T15:58:54Z
0
0
pytorch
[ "pytorch", "llm", "generative_ai", "android", "text-generation", "license:other", "region:us" ]
text-generation
2025-05-19T20:08:00Z
--- library_name: pytorch license: other tags: - llm - generative_ai - android pipeline_tag: text-generation --- ![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/llama_v3_2_3b_instruct/web-assets/model_demo.png) # Llama-v3.2-3B-Instruct: Optimized for Mobile Deployment ## State-of-the-art large language model useful on a variety of language understanding and generation tasks Llama 3 is a family of LLMs. The model is quantized to w4a16 (4-bit weights and 16-bit activations), with part of the model quantized to w8a16 (8-bit weights and 16-bit activations), making it suitable for on-device deployment. For the prompt and output lengths specified below, the time to first token is Llama-PromptProcessor-Quantized's latency and the average time per additional token is Llama-TokenGenerator-Quantized's latency. This model is an implementation of Llama-v3.2-3B-Instruct found [here](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct/). More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/llama_v3_2_3b_instruct). ### Model Details - **Model Type:** Model_use_case.text_generation - **Model Stats:** - Input sequence length for Prompt Processor: 128 - Context length: 4096 - Number of parameters: 3B - Model size: 2.4G - Precision: w4a16 + w8a16 (few layers) - Num of key-value heads: 8 - Model-1 (Prompt Processor): Llama-PromptProcessor-Quantized - Prompt processor input: 128 tokens + position embeddings + attention mask + KV cache inputs - Prompt processor output: 128 output tokens + KV cache outputs - Model-2 (Token Generator): Llama-TokenGenerator-Quantized - Token generator input: 1 input token + position embeddings + attention mask + KV cache inputs - Token generator output: 1 output token + KV cache outputs - Use: Initiate conversation with prompt-processor and then token generator for subsequent iterations. - Minimum QNN SDK version required: 2.27.7 - Supported languages: English. - TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies based on the length of the prompt. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (4096 tokens). For example, a full 4096-token prompt requires 4096/128 = 32 prompt-processor iterations, so the upper bound is roughly 32× the lower bound. - Response Rate: Rate of response generation after the first response token (a rough end-to-end latency sketch appears at the end of this card). | Model | Precision | Device | Chipset | Target Runtime | Response Rate (tokens per second) | Time To First Token (range, seconds) |---|---|---|---|---|---|---| | Llama-v3.2-3B-Chat | w4a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 23.4718 | 0.088195 - 2.82225 | | Llama-v3.2-3B-Chat | w4a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 18.4176 | 0.125936 - 4.029952 | | Llama-v3.2-3B-Chat | w4a16 | SA8255P ADP | Qualcomm® SA8255P | QNN | 14.02377 | 0.187414 - 5.997257 | ## Deploying Llama 3.2 3B on-device Please follow the [LLM on-device deployment](https://github.com/quic/ai-hub-apps/tree/main/tutorials/llm_on_genie) tutorial. ## License * The license for the original implementation of Llama-v3.2-3B-Instruct can be found [here](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct/blob/main/LICENSE.txt).
* The license for the compiled assets for on-device deployment can be found [here](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct/blob/main/LICENSE.txt) ## References * [LLaMA: Open and Efficient Foundation Language Models](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_2/) * [Source Model Implementation](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct/) ## Community * Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:[email protected]). ## Usage and Limitations Model may not be used for or in connection with any of the following applications: - Accessing essential private and public services and benefits; - Administration of justice and democratic processes; - Assessing or recognizing the emotional state of a person; - Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics; - Education and vocational training; - Employment and workers management; - Exploitation of the vulnerabilities of persons resulting in harmful behavior; - General purpose social scoring; - Law enforcement; - Management and operation of critical infrastructure; - Migration, asylum and border control management; - Predictive policing; - Real-time remote biometric identification in public spaces; - Recommender systems of social media platforms; - Scraping of facial images (from the internet or otherwise); and/or - Subliminal manipulation
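Putting the two reported metrics together, here is a rough end-to-end latency estimate for a short prompt (illustrative arithmetic only, using the Snapdragon 8 Elite row from the table above):

```python
# Rough generation-time estimate: TTFT + remaining tokens / response rate.
ttft_s = 0.088195      # time to first token, short prompt (from the table above)
rate_tps = 23.4718     # tokens per second after the first token (from the table above)
new_tokens = 256

total_s = ttft_s + (new_tokens - 1) / rate_tps
print(f"~{total_s:.1f} s to generate {new_tokens} tokens")  # ~11.0 s
```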
ekobo/Azieleh-finetuned-uggf
ekobo
2025-06-05T15:58:46Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-05T14:59:01Z
--- base_model: unsloth/llama-3-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ekobo - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
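The card gives no inference snippet; below is a minimal sketch, assuming the repository hosts transformers-loadable weights (its tags list both `safetensors` and `gguf` artifacts; if only GGUF files are present, use a llama.cpp-based runtime instead).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes full (merged) weights are present in the repo, not only GGUF files.
tokenizer = AutoTokenizer.from_pretrained("ekobo/Azieleh-finetuned-uggf")
model = AutoModelForCausalLM.from_pretrained("ekobo/Azieleh-finetuned-uggf")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```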
qualcomm/FFNet-40S
qualcomm
2025-06-05T15:54:10Z
94
5
pytorch
[ "pytorch", "tflite", "onnx", "real_time", "android", "image-segmentation", "arxiv:2206.08236", "license:other", "region:us" ]
image-segmentation
2024-02-25T23:02:59Z
--- library_name: pytorch license: other tags: - real_time - android pipeline_tag: image-segmentation --- ![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/ffnet_40s/web-assets/model_demo.png) # FFNet-40S: Optimized for Mobile Deployment ## Semantic segmentation for automotive street scenes FFNet-40S is a "fuss-free network" that segments street scene images with per-pixel classes like road, sidewalk, and pedestrian. Trained on the Cityscapes dataset. This model is an implementation of FFNet-40S found [here](https://github.com/Qualcomm-AI-research/FFNet). This repository provides scripts to run FFNet-40S on Qualcomm® devices. More details on model performance across various devices, can be found [here](https://aihub.qualcomm.com/models/ffnet_40s). ### Model Details - **Model Type:** Model_use_case.semantic_segmentation - **Model Stats:** - Model checkpoint: ffnet40S_dBBB_cityscapes_state_dict_quarts - Input resolution: 2048x1024 - Number of parameters: 13.9M - Number of output classes: 19 - Model size (float): 53.1 MB - Model size (w8a8): 13.5 MB | Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |---|---|---|---|---|---|---|---|---| | FFNet-40S | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 150.365 ms | 0 - 48 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.tflite) | | FFNet-40S | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN | 837.427 ms | 24 - 34 MB | NPU | Use Export Script | | FFNet-40S | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 56.517 ms | 2 - 78 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.tflite) | | FFNet-40S | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN | 66.889 ms | 24 - 73 MB | NPU | Use Export Script | | FFNet-40S | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 43.67 ms | 2 - 40 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.tflite) | | FFNet-40S | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN | 33.958 ms | 24 - 26 MB | NPU | Use Export Script | | FFNet-40S | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 57.533 ms | 2 - 52 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.tflite) | | FFNet-40S | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN | 47.243 ms | 24 - 39 MB | NPU | Use Export Script | | FFNet-40S | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 150.365 ms | 0 - 48 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.tflite) | | FFNet-40S | float | SA7255P ADP | Qualcomm® SA7255P | QNN | 837.427 ms | 24 - 34 MB | NPU | Use Export Script | | FFNet-40S | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 43.525 ms | 2 - 22 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.tflite) | | FFNet-40S | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN | 34.075 ms | 20 - 23 MB | NPU | Use Export Script | | FFNet-40S | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 64.609 ms | 2 - 50 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.tflite) | | FFNet-40S | float | SA8295P ADP | Qualcomm® SA8295P | QNN 
| 53.716 ms | 24 - 41 MB | NPU | Use Export Script | | FFNet-40S | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 43.86 ms | 2 - 40 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.tflite) | | FFNet-40S | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN | 34.235 ms | 24 - 27 MB | NPU | Use Export Script | | FFNet-40S | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 57.533 ms | 2 - 52 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.tflite) | | FFNet-40S | float | SA8775P ADP | Qualcomm® SA8775P | QNN | 47.243 ms | 24 - 39 MB | NPU | Use Export Script | | FFNet-40S | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 43.599 ms | 2 - 21 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.tflite) | | FFNet-40S | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN | 34.155 ms | 24 - 50 MB | NPU | Use Export Script | | FFNet-40S | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 35.025 ms | 24 - 99 MB | NPU | [FFNet-40S.onnx](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.onnx) | | FFNet-40S | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 29.352 ms | 2 - 80 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.tflite) | | FFNet-40S | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN | 23.281 ms | 21 - 73 MB | NPU | Use Export Script | | FFNet-40S | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 24.617 ms | 29 - 75 MB | NPU | [FFNet-40S.onnx](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.onnx) | | FFNet-40S | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 30.006 ms | 1 - 52 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.tflite) | | FFNet-40S | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 22.145 ms | 24 - 82 MB | NPU | Use Export Script | | FFNet-40S | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 19.889 ms | 25 - 69 MB | NPU | [FFNet-40S.onnx](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.onnx) | | FFNet-40S | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 35.274 ms | 24 - 24 MB | NPU | Use Export Script | | FFNet-40S | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 36.621 ms | 24 - 24 MB | NPU | [FFNet-40S.onnx](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.onnx) | | FFNet-40S | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 106.186 ms | 0 - 34 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S_w8a8.tflite) | | FFNet-40S | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 10.672 ms | 0 - 48 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S_w8a8.tflite) | | FFNet-40S | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 9.431 ms | 1 - 14 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S_w8a8.tflite) | | FFNet-40S | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 10.105 ms | 1 - 36 MB | NPU | 
[FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S_w8a8.tflite) | | FFNet-40S | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 56.884 ms | 1 - 53 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S_w8a8.tflite) | | FFNet-40S | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 307.629 ms | 1 - 12 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S_w8a8.tflite) | | FFNet-40S | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 106.186 ms | 0 - 34 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S_w8a8.tflite) | | FFNet-40S | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 9.392 ms | 1 - 10 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S_w8a8.tflite) | | FFNet-40S | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 13.858 ms | 1 - 38 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S_w8a8.tflite) | | FFNet-40S | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 9.405 ms | 1 - 18 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S_w8a8.tflite) | | FFNet-40S | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 10.105 ms | 1 - 36 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S_w8a8.tflite) | | FFNet-40S | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 9.424 ms | 1 - 14 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S_w8a8.tflite) | | FFNet-40S | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 16.019 ms | 6 - 49 MB | NPU | [FFNet-40S.onnx](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S_w8a8.onnx) | | FFNet-40S | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 6.838 ms | 1 - 51 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S_w8a8.tflite) | | FFNet-40S | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 10.972 ms | 6 - 66 MB | NPU | [FFNet-40S.onnx](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S_w8a8.onnx) | | FFNet-40S | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 6.693 ms | 0 - 38 MB | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S_w8a8.tflite) | | FFNet-40S | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 7.815 ms | 6 - 58 MB | NPU | [FFNet-40S.onnx](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S_w8a8.onnx) | | FFNet-40S | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 16.916 ms | 7 - 7 MB | NPU | [FFNet-40S.onnx](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S_w8a8.onnx) | ## Installation Install the package via pip: ```bash pip install "qai-hub-models[ffnet-40s]" ``` ## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`. 
With this API token, you can configure your client to run models on cloud-hosted devices. ```bash qai-hub configure --api_token API_TOKEN ``` Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information. ## Demo off target The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input. ```bash python -m qai_hub_models.models.ffnet_40s.demo ``` The above demo runs a reference implementation of pre-processing, model inference, and post-processing. **NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above). ``` %run -m qai_hub_models.models.ffnet_40s.demo ``` ### Run model on a cloud-hosted device In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following: * Runs a performance check on-device on a cloud-hosted device. * Downloads compiled assets that can be deployed on-device for Android. * Checks accuracy between PyTorch and on-device outputs. ```bash python -m qai_hub_models.models.ffnet_40s.export ``` ``` Profiling Results ------------------------------------------------------------ FFNet-40S Device : cs_8275 (ANDROID 14) Runtime : TFLITE Estimated inference time (ms) : 150.4 Estimated peak memory usage (MB): [0, 48] Total # Ops : 94 Compute Unit(s) : npu (94 ops) gpu (0 ops) cpu (0 ops) ``` ## How does this work? This [export script](https://aihub.qualcomm.com/models/ffnet_40s/qai_hub_models/models/FFNet-40S/export.py) leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model on-device. Let's go through each step below in detail: Step 1: **Compile model for on-device deployment** To compile a PyTorch model for on-device deployment, we first trace the model in memory using `jit.trace` and then call the `submit_compile_job` API. ```python import torch import qai_hub as hub from qai_hub_models.models.ffnet_40s import Model # Load the model torch_model = Model.from_pretrained() # Device device = hub.Device("Samsung Galaxy S24") # Trace model input_shape = torch_model.get_input_spec() sample_inputs = torch_model.sample_inputs() pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()]) # Compile model on a specific device compile_job = hub.submit_compile_job( model=pt_model, device=device, input_specs=torch_model.get_input_spec(), ) # Get target model to run on-device target_model = compile_job.get_target_model() ``` Step 2: **Performance profiling on cloud-hosted device** After compiling the model in step 1, it can be profiled on-device using the `target_model`. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics. ```python profile_job = hub.submit_profile_job( model=target_model, device=device, ) ``` Step 3: **Verify on-device accuracy** To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device. ```python input_data = torch_model.sample_inputs() inference_job = hub.submit_inference_job( model=target_model, device=device, inputs=input_data, ) on_device_output = inference_job.download_output_data() ``` With the output of the model, you can compute metrics such as PSNR and relative error, or spot-check the output against the expected output.
**Note**: This on-device profiling and inference requires access to Qualcomm® AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).

## Run demo on a cloud-hosted device

You can also run the demo on-device.

```bash
python -m qai_hub_models.models.ffnet_40s.demo --on-device
```

**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above).

```
%run -m qai_hub_models.models.ffnet_40s.demo -- --on-device
```

## Deploying compiled model to Android

The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a guide to deploy the .tflite model in an Android application. A local sanity-check sketch for the exported `.tflite` appears at the end of this card.
- QNN (`.so` export): This [sample app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html) provides instructions on how to use the `.so` shared library in an Android application.

## View on Qualcomm® AI Hub

Get more details on FFNet-40S's performance across various devices [here](https://aihub.qualcomm.com/models/ffnet_40s). Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).

## License

* The license for the original implementation of FFNet-40S can be found [here](https://github.com/Qualcomm-AI-research/FFNet/blob/master/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).

## References

* [Simple and Efficient Architectures for Semantic Segmentation](https://arxiv.org/abs/2206.08236)
* [Source Model Implementation](https://github.com/Qualcomm-AI-research/FFNet)

## Community

* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:[email protected]).
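As referenced in the deployment section above, here is a minimal desktop sanity check for the exported `.tflite` asset using the TensorFlow Lite Python interpreter. The file path and the random input are placeholders; real inputs should go through the model's pre-processing:

```python
import numpy as np
import tensorflow as tf

# Load the downloaded asset (placeholder path).
interpreter = tf.lite.Interpreter(model_path="FFNet-40S.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Random data matching the declared input shape/dtype; swap in pre-processed frames.
x = np.random.random_sample(inp["shape"]).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
print("output shape:", interpreter.get_tensor(out["index"]).shape)
```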
zhingoll/test
zhingoll
2025-06-05T15:53:31Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-05T15:53:10Z
--- license: apache-2.0 ---
qualcomm/FCN-ResNet50
qualcomm
2025-06-05T15:53:26Z
61
0
pytorch
[ "pytorch", "tflite", "onnx", "android", "image-segmentation", "arxiv:1411.4038", "license:other", "region:us" ]
image-segmentation
2024-05-20T19:18:49Z
--- library_name: pytorch license: other tags: - android pipeline_tag: image-segmentation --- ![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/fcn_resnet50/web-assets/model_demo.png) # FCN-ResNet50: Optimized for Mobile Deployment ## Fully-convolutional network model for image segmentation FCN_ResNet50 is a machine learning model that can segment images from the COCO dataset. It uses ResNet50 as a backbone. This model is an implementation of FCN-ResNet50 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/segmentation/fcn.py). This repository provides scripts to run FCN-ResNet50 on Qualcomm® devices. More details on model performance across various devices, can be found [here](https://aihub.qualcomm.com/models/fcn_resnet50). ### Model Details - **Model Type:** Model_use_case.semantic_segmentation - **Model Stats:** - Model checkpoint: COCO_WITH_VOC_LABELS_V1 - Input resolution: 224x224 - Number of parameters: 32.9M - Number of output classes: 21 - Model size (float): 126 MB - Model size (w8a8): 32.2 MB | Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |---|---|---|---|---|---|---|---|---| | FCN-ResNet50 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 277.388 ms | 22 - 134 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50.tflite) | | FCN-ResNet50 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN | 269.848 ms | 1 - 11 MB | NPU | Use Export Script | | FCN-ResNet50 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 74.248 ms | 0 - 113 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50.tflite) | | FCN-ResNet50 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN | 84.724 ms | 1 - 59 MB | NPU | Use Export Script | | FCN-ResNet50 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 47.603 ms | 0 - 21 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50.tflite) | | FCN-ResNet50 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN | 43.32 ms | 3 - 5 MB | NPU | Use Export Script | | FCN-ResNet50 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 77.373 ms | 0 - 112 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50.tflite) | | FCN-ResNet50 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN | 72.134 ms | 1 - 14 MB | NPU | Use Export Script | | FCN-ResNet50 | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 277.388 ms | 22 - 134 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50.tflite) | | FCN-ResNet50 | float | SA7255P ADP | Qualcomm® SA7255P | QNN | 269.848 ms | 1 - 11 MB | NPU | Use Export Script | | FCN-ResNet50 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 47.402 ms | 0 - 23 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50.tflite) | | FCN-ResNet50 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN | 42.933 ms | 3 - 5 MB | NPU | Use Export Script | | FCN-ResNet50 | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 84.106 ms | 0 - 77 MB | NPU | 
[FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50.tflite) | | FCN-ResNet50 | float | SA8295P ADP | Qualcomm® SA8295P | QNN | 77.654 ms | 0 - 16 MB | NPU | Use Export Script | | FCN-ResNet50 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 47.332 ms | 0 - 20 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50.tflite) | | FCN-ResNet50 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN | 42.98 ms | 3 - 5 MB | NPU | Use Export Script | | FCN-ResNet50 | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 77.373 ms | 0 - 112 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50.tflite) | | FCN-ResNet50 | float | SA8775P ADP | Qualcomm® SA8775P | QNN | 72.134 ms | 1 - 14 MB | NPU | Use Export Script | | FCN-ResNet50 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 47.506 ms | 0 - 24 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50.tflite) | | FCN-ResNet50 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN | 43.536 ms | 6 - 49 MB | NPU | Use Export Script | | FCN-ResNet50 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 43.076 ms | 1 - 175 MB | NPU | [FCN-ResNet50.onnx](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50.onnx) | | FCN-ResNet50 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 34.736 ms | 0 - 138 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50.tflite) | | FCN-ResNet50 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN | 32.177 ms | 3 - 108 MB | NPU | Use Export Script | | FCN-ResNet50 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 32.469 ms | 3 - 111 MB | NPU | [FCN-ResNet50.onnx](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50.onnx) | | FCN-ResNet50 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 33.33 ms | 0 - 115 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50.tflite) | | FCN-ResNet50 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 31.344 ms | 5 - 108 MB | NPU | Use Export Script | | FCN-ResNet50 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 26.753 ms | 5 - 107 MB | NPU | [FCN-ResNet50.onnx](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50.onnx) | | FCN-ResNet50 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 43.451 ms | 3 - 3 MB | NPU | Use Export Script | | FCN-ResNet50 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 44.278 ms | 63 - 63 MB | NPU | [FCN-ResNet50.onnx](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50.onnx) | | FCN-ResNet50 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 269.89 ms | 0 - 46 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50_w8a8.tflite) | | FCN-ResNet50 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN | 38.476 ms | 1 - 10 MB | NPU | Use Export Script | | FCN-ResNet50 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 17.832 ms | 0 - 84 MB | NPU | 
[FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50_w8a8.tflite) | | FCN-ResNet50 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN | 23.221 ms | 1 - 86 MB | NPU | Use Export Script | | FCN-ResNet50 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 14.895 ms | 0 - 68 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50_w8a8.tflite) | | FCN-ResNet50 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN | 14.006 ms | 1 - 3 MB | NPU | Use Export Script | | FCN-ResNet50 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 15.33 ms | 0 - 48 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50_w8a8.tflite) | | FCN-ResNet50 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN | 14.399 ms | 1 - 15 MB | NPU | Use Export Script | | FCN-ResNet50 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN | 137.381 ms | 1 - 13 MB | NPU | Use Export Script | | FCN-ResNet50 | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 1380.833 ms | 86 - 126 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50_w8a8.tflite) | | FCN-ResNet50 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 269.89 ms | 0 - 46 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50_w8a8.tflite) | | FCN-ResNet50 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN | 38.476 ms | 1 - 10 MB | NPU | Use Export Script | | FCN-ResNet50 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 14.835 ms | 0 - 17 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50_w8a8.tflite) | | FCN-ResNet50 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN | 14.081 ms | 1 - 3 MB | NPU | Use Export Script | | FCN-ResNet50 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 22.093 ms | 0 - 47 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50_w8a8.tflite) | | FCN-ResNet50 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN | 20.797 ms | 1 - 19 MB | NPU | Use Export Script | | FCN-ResNet50 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 14.957 ms | 0 - 67 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50_w8a8.tflite) | | FCN-ResNet50 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN | 14.026 ms | 1 - 4 MB | NPU | Use Export Script | | FCN-ResNet50 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 15.33 ms | 0 - 48 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50_w8a8.tflite) | | FCN-ResNet50 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN | 14.399 ms | 1 - 15 MB | NPU | Use Export Script | | FCN-ResNet50 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 14.894 ms | 0 - 88 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50_w8a8.tflite) | | FCN-ResNet50 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN | 13.955 ms | 0 - 21 MB | NPU | Use Export Script | | FCN-ResNet50 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 14.265 ms | 0 - 53 MB | NPU | 
[FCN-ResNet50.onnx](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50_w8a8.onnx) | | FCN-ResNet50 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 11.14 ms | 0 - 80 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50_w8a8.tflite) | | FCN-ResNet50 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN | 10.624 ms | 1 - 86 MB | NPU | Use Export Script | | FCN-ResNet50 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 10.952 ms | 1 - 94 MB | NPU | [FCN-ResNet50.onnx](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50_w8a8.onnx) | | FCN-ResNet50 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 9.215 ms | 0 - 52 MB | NPU | [FCN-ResNet50.tflite](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50_w8a8.tflite) | | FCN-ResNet50 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 10.158 ms | 1 - 58 MB | NPU | Use Export Script | | FCN-ResNet50 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 10.681 ms | 0 - 58 MB | NPU | [FCN-ResNet50.onnx](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50_w8a8.onnx) | | FCN-ResNet50 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 14.399 ms | 1 - 1 MB | NPU | Use Export Script | | FCN-ResNet50 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 15.392 ms | 33 - 33 MB | NPU | [FCN-ResNet50.onnx](https://huggingface.co/qualcomm/FCN-ResNet50/blob/main/FCN-ResNet50_w8a8.onnx) | ## Installation Install the package via pip: ```bash pip install qai-hub-models ``` ## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`. With this API token, you can configure your client to run models on the cloud hosted devices. ```bash qai-hub configure --api_token API_TOKEN ``` Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information. ## Demo off target The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input. ```bash python -m qai_hub_models.models.fcn_resnet50.demo ``` The above demo runs a reference implementation of pre-processing, model inference, and post processing. **NOTE**: If you want running in a Jupyter Notebook or Google Colab like environment, please add the following to your cell (instead of the above). ``` %run -m qai_hub_models.models.fcn_resnet50.demo ``` ### Run model on a cloud-hosted device In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following: * Performance check on-device on a cloud-hosted device * Downloads compiled assets that can be deployed on-device for Android. * Accuracy check between PyTorch and on-device outputs. ```bash python -m qai_hub_models.models.fcn_resnet50.export ``` ``` Profiling Results ------------------------------------------------------------ FCN-ResNet50 Device : cs_8275 (ANDROID 14) Runtime : TFLITE Estimated inference time (ms) : 277.4 Estimated peak memory usage (MB): [22, 134] Total # Ops : 88 Compute Unit(s) : npu (88 ops) gpu (0 ops) cpu (0 ops) ``` ## How does this work? 
This [export script](https://aihub.qualcomm.com/models/fcn_resnet50/qai_hub_models/models/FCN-ResNet50/export.py) leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:

Step 1: **Compile model for on-device deployment**

To compile a PyTorch model for on-device deployment, we first trace the model in memory using `jit.trace` and then call the `submit_compile_job` API.

```python
import torch
import qai_hub as hub
from qai_hub_models.models.fcn_resnet50 import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S24")

# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=torch_model.get_input_spec(),
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
```

Step 2: **Performance profiling on cloud-hosted device**

After compiling the model in step 1, it can be profiled on-device using the `target_model`. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics.

```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```

Step 3: **Verify on-device accuracy**

To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.

```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```

With the output of the model, you can compute metrics like PSNR or relative error, or spot-check the output against the expected output.

**Note**: This on-device profiling and inference requires access to Qualcomm® AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).

## Run demo on a cloud-hosted device

You can also run the demo on-device.

```bash
python -m qai_hub_models.models.fcn_resnet50.demo --on-device
```

**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above).

```
%run -m qai_hub_models.models.fcn_resnet50.demo -- --on-device
```

## Deploying compiled model to Android

The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html) provides instructions on how to use the `.so` shared library in an Android application.

## View on Qualcomm® AI Hub

Get more details on FCN-ResNet50's performance across various devices [here](https://aihub.qualcomm.com/models/fcn_resnet50). Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).

## License

* The license for the original implementation of FCN-ResNet50 can be found [here](https://github.com/pytorch/vision/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).

## References

* [Fully Convolutional Networks for Semantic Segmentation](https://arxiv.org/abs/1411.4038)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/segmentation/fcn.py)

## Community

* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:[email protected]).
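As referenced in the demo section of this card, here is a minimal post-processing sketch. It assumes the network emits per-class logits shaped `[batch, 21, H, W]` (21 classes per the model details above); the placeholder logits stand in for the real model output:

```python
import numpy as np

# Placeholder for the raw model output: [batch, num_classes, H, W].
logits = np.random.randn(1, 21, 224, 224).astype(np.float32)

# Per-pixel argmax over the class axis yields the segmentation mask.
mask = logits.argmax(axis=1)[0]  # [H, W], integer class ids in 0..20
print("classes present:", np.unique(mask))
```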
qualcomm/FastSam-X
qualcomm
2025-06-05T15:52:53Z
99
8
pytorch
[ "pytorch", "tflite", "onnx", "android", "image-segmentation", "arxiv:2306.12156", "license:other", "region:us" ]
image-segmentation
2024-02-25T22:50:47Z
--- library_name: pytorch license: other tags: - android pipeline_tag: image-segmentation --- ![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/fastsam_x/web-assets/model_demo.png) # FastSam-X: Optimized for Mobile Deployment ## Generate high quality segmentation mask on device The Fast Segment Anything Model (FastSAM) is a novel, real-time CNN-based solution for the Segment Anything task. This task is designed to segment any object within an image based on various possible user interaction prompts. The model performs competitively despite significantly reduced computation, making it a practical choice for a variety of vision tasks. This model is an implementation of FastSam-X found [here](https://github.com/CASIA-IVA-Lab/FastSAM). This repository provides scripts to run FastSam-X on Qualcomm® devices. More details on model performance across various devices, can be found [here](https://aihub.qualcomm.com/models/fastsam_x). ### Model Details - **Model Type:** Model_use_case.semantic_segmentation - **Model Stats:** - Model checkpoint: fastsam-x.pt - Inference latency: RealTime - Input resolution: 640x640 - Number of parameters: 72.2M - Model size: 276 MB | Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |---|---|---|---|---|---|---|---|---| | FastSam-X | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 280.693 ms | 4 - 98 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) | | FastSam-X | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN | 277.43 ms | 0 - 9 MB | NPU | Use Export Script | | FastSam-X | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 86.8 ms | 4 - 193 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) | | FastSam-X | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN | 100.032 ms | 5 - 76 MB | NPU | Use Export Script | | FastSam-X | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 45.231 ms | 4 - 57 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) | | FastSam-X | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN | 42.906 ms | 5 - 7 MB | NPU | Use Export Script | | FastSam-X | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 70.591 ms | 4 - 98 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) | | FastSam-X | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN | 68.099 ms | 1 - 11 MB | NPU | Use Export Script | | FastSam-X | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 280.693 ms | 4 - 98 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) | | FastSam-X | float | SA7255P ADP | Qualcomm® SA7255P | QNN | 277.43 ms | 0 - 9 MB | NPU | Use Export Script | | FastSam-X | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 47.383 ms | 3 - 58 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) | | FastSam-X | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN | 43.695 ms | 5 - 7 MB | NPU | Use Export Script | | FastSam-X | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 89.652 ms | 4 - 95 MB | NPU | 
[FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) | | FastSam-X | float | SA8295P ADP | Qualcomm® SA8295P | QNN | 80.068 ms | 0 - 18 MB | NPU | Use Export Script | | FastSam-X | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 45.586 ms | 4 - 58 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) | | FastSam-X | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN | 42.452 ms | 5 - 7 MB | NPU | Use Export Script | | FastSam-X | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 70.591 ms | 4 - 98 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) | | FastSam-X | float | SA8775P ADP | Qualcomm® SA8775P | QNN | 68.099 ms | 1 - 11 MB | NPU | Use Export Script | | FastSam-X | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 45.519 ms | 4 - 58 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) | | FastSam-X | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN | 42.512 ms | 5 - 32 MB | NPU | Use Export Script | | FastSam-X | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 46.427 ms | 12 - 326 MB | NPU | [FastSam-X.onnx](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.onnx) | | FastSam-X | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 34.898 ms | 3 - 190 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) | | FastSam-X | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN | 32.512 ms | 5 - 64 MB | NPU | Use Export Script | | FastSam-X | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 33.508 ms | 17 - 82 MB | NPU | [FastSam-X.onnx](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.onnx) | | FastSam-X | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 31.028 ms | 4 - 100 MB | NPU | [FastSam-X.tflite](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.tflite) | | FastSam-X | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 25.358 ms | 5 - 59 MB | NPU | Use Export Script | | FastSam-X | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 30.147 ms | 15 - 72 MB | NPU | [FastSam-X.onnx](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.onnx) | | FastSam-X | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 43.446 ms | 5 - 5 MB | NPU | Use Export Script | | FastSam-X | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 46.969 ms | 139 - 139 MB | NPU | [FastSam-X.onnx](https://huggingface.co/qualcomm/FastSam-X/blob/main/FastSam-X.onnx) | ## Installation Install the package via pip: ```bash pip install "qai-hub-models[fastsam-x]" ``` ## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`. With this API token, you can configure your client to run models on the cloud hosted devices. ```bash qai-hub configure --api_token API_TOKEN ``` Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information. 
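With the token configured, a quick way to confirm the client works is to list the cloud-hosted devices it can see. A minimal sketch; `get_devices()` and the printed attributes follow the qai-hub client as documented, but treat the exact field names as assumptions to verify against the docs:

```python
import qai_hub as hub

# Enumerate the cloud-hosted devices available to the configured API token.
for device in hub.get_devices():
    print(device.name, "-", device.os)
```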
## Demo off target

The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.

```bash
python -m qai_hub_models.models.fastsam_x.demo
```

The above demo runs a reference implementation of pre-processing, model inference, and post-processing.

**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above).

```
%run -m qai_hub_models.models.fastsam_x.demo
```

### Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.

```bash
python -m qai_hub_models.models.fastsam_x.export
```

```
Profiling Results
------------------------------------------------------------
FastSam-X
Device                          : cs_8275 (ANDROID 14)
Runtime                         : TFLITE
Estimated inference time (ms)   : 280.7
Estimated peak memory usage (MB): [4, 98]
Total # Ops                     : 419
Compute Unit(s)                 : npu (419 ops) gpu (0 ops) cpu (0 ops)
```

## How does this work?

This [export script](https://aihub.qualcomm.com/models/fastsam_x/qai_hub_models/models/FastSam-X/export.py) leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:

Step 1: **Compile model for on-device deployment**

To compile a PyTorch model for on-device deployment, we first trace the model in memory using `jit.trace` and then call the `submit_compile_job` API.

```python
import torch
import qai_hub as hub
from qai_hub_models.models.fastsam_x import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S24")

# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=torch_model.get_input_spec(),
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
```

Step 2: **Performance profiling on cloud-hosted device**

After compiling the model in step 1, it can be profiled on-device using the `target_model`. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics.

```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```

Step 3: **Verify on-device accuracy**

To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.

```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```

With the output of the model, you can compute metrics like PSNR or relative error, or spot-check the output against the expected output.

**Note**: This on-device profiling and inference requires access to Qualcomm® AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).

## Run demo on a cloud-hosted device

You can also run the demo on-device.
```bash
python -m qai_hub_models.models.fastsam_x.demo --on-device
```

**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above).

```
%run -m qai_hub_models.models.fastsam_x.demo -- --on-device
```

## Deploying compiled model to Android

The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html) provides instructions on how to use the `.so` shared library in an Android application.

## View on Qualcomm® AI Hub

Get more details on FastSam-X's performance across various devices [here](https://aihub.qualcomm.com/models/fastsam_x). Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).

## License

* The license for the original implementation of FastSam-X can be found [here](https://github.com/CASIA-IVA-Lab/FastSAM/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://github.com/CASIA-IVA-Lab/FastSAM/blob/main/LICENSE).

## References

* [Fast Segment Anything](https://arxiv.org/abs/2306.12156)
* [Source Model Implementation](https://github.com/CASIA-IVA-Lab/FastSAM)

## Community

* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:[email protected]).
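The performance table in this card also lists an `.onnx` export, which can be exercised locally with ONNX Runtime before any deployment work. A minimal sketch; the file name comes from the asset links above, and the random input is a placeholder for properly pre-processed 640x640 frames:

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("FastSam-X.onnx", providers=["CPUExecutionProvider"])
inp = session.get_inputs()[0]

# Placeholder input; any dynamic dimensions are pinned to 1.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
x = np.random.rand(*shape).astype(np.float32)

outputs = session.run(None, {inp.name: x})
print([o.shape for o in outputs])
```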
qualcomm/FastSam-S
qualcomm
2025-06-05T15:52:28Z
47
2
pytorch
[ "pytorch", "tflite", "onnx", "android", "image-segmentation", "arxiv:2306.12156", "license:other", "region:us" ]
image-segmentation
2024-02-25T23:08:10Z
--- library_name: pytorch license: other tags: - android pipeline_tag: image-segmentation --- ![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/fastsam_s/web-assets/model_demo.png) # FastSam-S: Optimized for Mobile Deployment ## Generate high quality segmentation mask on device The Fast Segment Anything Model (FastSAM) is a novel, real-time CNN-based solution for the Segment Anything task. This task is designed to segment any object within an image based on various possible user interaction prompts. The model performs competitively despite significantly reduced computation, making it a practical choice for a variety of vision tasks. This model is an implementation of FastSam-S found [here](https://github.com/CASIA-IVA-Lab/FastSAM). This repository provides scripts to run FastSam-S on Qualcomm® devices. More details on model performance across various devices, can be found [here](https://aihub.qualcomm.com/models/fastsam_s). ### Model Details - **Model Type:** Model_use_case.semantic_segmentation - **Model Stats:** - Model checkpoint: fastsam-s.pt - Inference latency: RealTime - Input resolution: 640x640 - Number of parameters: 11.8M - Model size: 45.1 MB | Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |---|---|---|---|---|---|---|---|---| | FastSam-S | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 38.384 ms | 4 - 46 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) | | FastSam-S | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN | 38.086 ms | 1 - 10 MB | NPU | Use Export Script | | FastSam-S | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 13.762 ms | 4 - 60 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) | | FastSam-S | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN | 16.341 ms | 5 - 44 MB | NPU | Use Export Script | | FastSam-S | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 7.145 ms | 4 - 34 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) | | FastSam-S | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN | 7.196 ms | 5 - 7 MB | NPU | Use Export Script | | FastSam-S | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 11.024 ms | 4 - 46 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) | | FastSam-S | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN | 10.873 ms | 2 - 16 MB | NPU | Use Export Script | | FastSam-S | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 38.384 ms | 4 - 46 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) | | FastSam-S | float | SA7255P ADP | Qualcomm® SA7255P | QNN | 38.086 ms | 1 - 10 MB | NPU | Use Export Script | | FastSam-S | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 7.225 ms | 4 - 27 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) | | FastSam-S | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN | 7.169 ms | 5 - 8 MB | NPU | Use Export Script | | FastSam-S | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 14.881 ms | 4 - 44 MB | NPU | 
[FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) | | FastSam-S | float | SA8295P ADP | Qualcomm® SA8295P | QNN | 13.261 ms | 0 - 18 MB | NPU | Use Export Script | | FastSam-S | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 6.968 ms | 3 - 28 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) | | FastSam-S | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN | 7.062 ms | 5 - 8 MB | NPU | Use Export Script | | FastSam-S | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 11.024 ms | 4 - 46 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) | | FastSam-S | float | SA8775P ADP | Qualcomm® SA8775P | QNN | 10.873 ms | 2 - 16 MB | NPU | Use Export Script | | FastSam-S | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 6.929 ms | 4 - 30 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) | | FastSam-S | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN | 7.064 ms | 5 - 18 MB | NPU | Use Export Script | | FastSam-S | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 8.734 ms | 15 - 91 MB | NPU | [FastSam-S.onnx](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.onnx) | | FastSam-S | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 5.409 ms | 4 - 61 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) | | FastSam-S | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN | 5.339 ms | 5 - 48 MB | NPU | Use Export Script | | FastSam-S | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 6.453 ms | 53 - 98 MB | NPU | [FastSam-S.onnx](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.onnx) | | FastSam-S | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 4.159 ms | 0 - 45 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) | | FastSam-S | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 4.705 ms | 5 - 41 MB | NPU | Use Export Script | | FastSam-S | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 4.898 ms | 15 - 54 MB | NPU | [FastSam-S.onnx](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.onnx) | | FastSam-S | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 7.567 ms | 5 - 5 MB | NPU | Use Export Script | | FastSam-S | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 9.149 ms | 20 - 20 MB | NPU | [FastSam-S.onnx](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.onnx) | ## Installation Install the package via pip: ```bash pip install "qai-hub-models[fastsam-s]" ``` ## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`. With this API token, you can configure your client to run models on the cloud hosted devices. ```bash qai-hub configure --api_token API_TOKEN ``` Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information. 
## Demo off target

The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.

```bash
python -m qai_hub_models.models.fastsam_s.demo
```

The above demo runs a reference implementation of pre-processing, model inference, and post-processing (a generic pre-processing sketch appears at the end of this card).

**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above).

```
%run -m qai_hub_models.models.fastsam_s.demo
```

### Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.

```bash
python -m qai_hub_models.models.fastsam_s.export
```

```
Profiling Results
------------------------------------------------------------
FastSam-S
Device                          : cs_8275 (ANDROID 14)
Runtime                         : TFLITE
Estimated inference time (ms)   : 38.4
Estimated peak memory usage (MB): [4, 46]
Total # Ops                     : 287
Compute Unit(s)                 : npu (287 ops) gpu (0 ops) cpu (0 ops)
```

## How does this work?

This [export script](https://aihub.qualcomm.com/models/fastsam_s/qai_hub_models/models/FastSam-S/export.py) leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:

Step 1: **Compile model for on-device deployment**

To compile a PyTorch model for on-device deployment, we first trace the model in memory using `jit.trace` and then call the `submit_compile_job` API.

```python
import torch
import qai_hub as hub
from qai_hub_models.models.fastsam_s import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S24")

# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=torch_model.get_input_spec(),
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
```

Step 2: **Performance profiling on cloud-hosted device**

After compiling the model in step 1, it can be profiled on-device using the `target_model`. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics.

```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```

Step 3: **Verify on-device accuracy**

To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.

```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```

With the output of the model, you can compute metrics like PSNR or relative error, or spot-check the output against the expected output.

**Note**: This on-device profiling and inference requires access to Qualcomm® AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).

## Run demo on a cloud-hosted device

You can also run the demo on-device.
```bash
python -m qai_hub_models.models.fastsam_s.demo --on-device
```

**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above).

```
%run -m qai_hub_models.models.fastsam_s.demo -- --on-device
```

## Deploying compiled model to Android

The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html) provides instructions on how to use the `.so` shared library in an Android application.

## View on Qualcomm® AI Hub

Get more details on FastSam-S's performance across various devices [here](https://aihub.qualcomm.com/models/fastsam_s). Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).

## License

* The license for the original implementation of FastSam-S can be found [here](https://github.com/CASIA-IVA-Lab/FastSAM/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://github.com/CASIA-IVA-Lab/FastSAM/blob/main/LICENSE).

## References

* [Fast Segment Anything](https://arxiv.org/abs/2306.12156)
* [Source Model Implementation](https://github.com/CASIA-IVA-Lab/FastSAM)

## Community

* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:[email protected]).
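As referenced in the demo section of this card, pre-processing for a 640x640 model like this one typically means resize, scale, and layout conversion. The sketch below is only a generic pattern under those assumptions; the exact normalization FastSAM expects should be taken from the source implementation:

```python
import numpy as np
from PIL import Image

def preprocess(path: str, size: int = 640) -> np.ndarray:
    # Resize to the model's input resolution and scale pixels to [0, 1].
    img = Image.open(path).convert("RGB").resize((size, size))
    x = np.asarray(img, dtype=np.float32) / 255.0
    # HWC -> NCHW with a batch axis, the layout traced PyTorch models typically expect.
    return x.transpose(2, 0, 1)[None, ...]

x = preprocess("example.jpg")
print(x.shape)  # (1, 3, 640, 640)
```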
qualcomm/Facial-Attribute-Detection
qualcomm
2025-06-05T15:51:48Z
47
0
pytorch
[ "pytorch", "tflite", "onnx", "real_time", "android", "object-detection", "license:other", "region:us" ]
object-detection
2024-12-12T23:01:20Z
--- library_name: pytorch license: other tags: - real_time - android pipeline_tag: object-detection --- ![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/face_attrib_net/web-assets/model_demo.png) # Facial-Attribute-Detection: Optimized for Mobile Deployment ## Comprehensive facial analysis by extracting face features Facial feature extraction and additional attributes including liveness, eyeclose, mask and glasses detection for face recognition. This model is an implementation of Facial-Attribute-Detection found [here](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/face_attrib_net/model.py). This repository provides scripts to run Facial-Attribute-Detection on Qualcomm® devices. More details on model performance across various devices, can be found [here](https://aihub.qualcomm.com/models/face_attrib_net). ### Model Details - **Model Type:** Model_use_case.object_detection - **Model Stats:** - Model checkpoint: multitask_FR_state_dict.pt - Input resolution: 128x128 - Input channel number: 1 - Number of parameters: 11.6M - Model size (float): 47.6MB - Model size (w8a8): 47.6MB | Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |---|---|---|---|---|---|---|---|---| | Facial-Attribute-Detection | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 4.372 ms | 0 - 33 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection.tflite) | | Facial-Attribute-Detection | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN | 4.372 ms | 0 - 10 MB | NPU | Use Export Script | | Facial-Attribute-Detection | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 1.265 ms | 0 - 42 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection.tflite) | | Facial-Attribute-Detection | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN | 1.554 ms | 0 - 31 MB | NPU | Use Export Script | | Facial-Attribute-Detection | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 0.879 ms | 0 - 120 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection.tflite) | | Facial-Attribute-Detection | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN | 0.917 ms | 0 - 11 MB | NPU | Use Export Script | | Facial-Attribute-Detection | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 1.414 ms | 0 - 35 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection.tflite) | | Facial-Attribute-Detection | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN | 1.432 ms | 0 - 15 MB | NPU | Use Export Script | | Facial-Attribute-Detection | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 4.372 ms | 0 - 33 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection.tflite) | | Facial-Attribute-Detection | float | SA7255P ADP | Qualcomm® SA7255P | QNN | 4.372 ms | 0 - 10 MB | NPU | Use Export Script | | Facial-Attribute-Detection | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 0.894 ms | 0 - 122 MB | NPU | 
[Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection.tflite) | | Facial-Attribute-Detection | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN | 0.925 ms | 0 - 2 MB | NPU | Use Export Script | | Facial-Attribute-Detection | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 1.527 ms | 0 - 35 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection.tflite) | | Facial-Attribute-Detection | float | SA8295P ADP | Qualcomm® SA8295P | QNN | 1.536 ms | 0 - 18 MB | NPU | Use Export Script | | Facial-Attribute-Detection | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 0.874 ms | 0 - 114 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection.tflite) | | Facial-Attribute-Detection | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN | 0.92 ms | 0 - 2 MB | NPU | Use Export Script | | Facial-Attribute-Detection | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 1.414 ms | 0 - 35 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection.tflite) | | Facial-Attribute-Detection | float | SA8775P ADP | Qualcomm® SA8775P | QNN | 1.432 ms | 0 - 15 MB | NPU | Use Export Script | | Facial-Attribute-Detection | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 0.871 ms | 0 - 118 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection.tflite) | | Facial-Attribute-Detection | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN | 0.913 ms | 0 - 11 MB | NPU | Use Export Script | | Facial-Attribute-Detection | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 1.063 ms | 0 - 83 MB | NPU | [Facial-Attribute-Detection.onnx](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection.onnx) | | Facial-Attribute-Detection | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.679 ms | 0 - 40 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection.tflite) | | Facial-Attribute-Detection | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN | 0.698 ms | 0 - 30 MB | NPU | Use Export Script | | Facial-Attribute-Detection | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 0.783 ms | 0 - 35 MB | NPU | [Facial-Attribute-Detection.onnx](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection.onnx) | | Facial-Attribute-Detection | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.604 ms | 0 - 36 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection.tflite) | | Facial-Attribute-Detection | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 0.565 ms | 0 - 26 MB | NPU | Use Export Script | | Facial-Attribute-Detection | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 0.781 ms | 0 - 25 MB | NPU | 
[Facial-Attribute-Detection.onnx](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection.onnx) | | Facial-Attribute-Detection | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 1.027 ms | 0 - 0 MB | NPU | Use Export Script | | Facial-Attribute-Detection | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 1.057 ms | 25 - 25 MB | NPU | [Facial-Attribute-Detection.onnx](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection.onnx) | | Facial-Attribute-Detection | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 1.204 ms | 0 - 31 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) | | Facial-Attribute-Detection | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN | 1.123 ms | 0 - 9 MB | NPU | Use Export Script | | Facial-Attribute-Detection | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 0.662 ms | 0 - 51 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) | | Facial-Attribute-Detection | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN | 0.747 ms | 0 - 45 MB | NPU | Use Export Script | | Facial-Attribute-Detection | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 0.422 ms | 0 - 50 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) | | Facial-Attribute-Detection | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN | 0.409 ms | 0 - 2 MB | NPU | Use Export Script | | Facial-Attribute-Detection | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 0.657 ms | 0 - 34 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) | | Facial-Attribute-Detection | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN | 0.609 ms | 0 - 14 MB | NPU | Use Export Script | | Facial-Attribute-Detection | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 1.38 ms | 0 - 40 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) | | Facial-Attribute-Detection | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN | 1.639 ms | 0 - 11 MB | NPU | Use Export Script | | Facial-Attribute-Detection | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 72.884 ms | 2 - 4 MB | CPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) | | Facial-Attribute-Detection | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 1.204 ms | 0 - 31 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) | | Facial-Attribute-Detection | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN | 1.123 ms | 0 - 9 MB | NPU | Use Export Script | | Facial-Attribute-Detection | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 0.422 ms | 0 - 50 MB | NPU | 
[Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) | | Facial-Attribute-Detection | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN | 0.409 ms | 0 - 11 MB | NPU | Use Export Script | | Facial-Attribute-Detection | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 0.894 ms | 0 - 34 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) | | Facial-Attribute-Detection | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN | 0.841 ms | 0 - 18 MB | NPU | Use Export Script | | Facial-Attribute-Detection | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 0.419 ms | 0 - 50 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) | | Facial-Attribute-Detection | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN | 0.414 ms | 0 - 2 MB | NPU | Use Export Script | | Facial-Attribute-Detection | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 0.657 ms | 0 - 34 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) | | Facial-Attribute-Detection | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN | 0.609 ms | 0 - 14 MB | NPU | Use Export Script | | Facial-Attribute-Detection | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 0.422 ms | 0 - 50 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) | | Facial-Attribute-Detection | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN | 0.409 ms | 0 - 39 MB | NPU | Use Export Script | | Facial-Attribute-Detection | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 0.585 ms | 0 - 50 MB | NPU | [Facial-Attribute-Detection.onnx](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.onnx) | | Facial-Attribute-Detection | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.32 ms | 0 - 46 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) | | Facial-Attribute-Detection | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN | 0.305 ms | 0 - 45 MB | NPU | Use Export Script | | Facial-Attribute-Detection | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 0.437 ms | 0 - 52 MB | NPU | [Facial-Attribute-Detection.onnx](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.onnx) | | Facial-Attribute-Detection | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.278 ms | 0 - 32 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) | | Facial-Attribute-Detection | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 0.307 ms | 0 - 39 MB | NPU | Use Export Script | | Facial-Attribute-Detection | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 0.436 ms | 0 - 39 MB | NPU | 
[Facial-Attribute-Detection.onnx](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection.onnx) |
| Facial-Attribute-Detection | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 1.027 ms | 0 - 0 MB | NPU | Use Export Script |
| Facial-Attribute-Detection | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 1.057 ms | 25 - 25 MB | NPU | [Facial-Attribute-Detection.onnx](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection.onnx) |
| Facial-Attribute-Detection | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 1.204 ms | 0 - 31 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) |
| Facial-Attribute-Detection | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN | 1.123 ms | 0 - 9 MB | NPU | Use Export Script |
| Facial-Attribute-Detection | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 0.662 ms | 0 - 51 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) |
| Facial-Attribute-Detection | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN | 0.747 ms | 0 - 45 MB | NPU | Use Export Script |
| Facial-Attribute-Detection | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 0.422 ms | 0 - 50 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) |
| Facial-Attribute-Detection | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN | 0.409 ms | 0 - 2 MB | NPU | Use Export Script |
| Facial-Attribute-Detection | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 0.657 ms | 0 - 34 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) |
| Facial-Attribute-Detection | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN | 0.609 ms | 0 - 14 MB | NPU | Use Export Script |
| Facial-Attribute-Detection | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 1.38 ms | 0 - 40 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) |
| Facial-Attribute-Detection | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN | 1.639 ms | 0 - 11 MB | NPU | Use Export Script |
| Facial-Attribute-Detection | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 72.884 ms | 2 - 4 MB | CPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) |
| Facial-Attribute-Detection | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 1.204 ms | 0 - 31 MB | NPU | [Facial-Attribute-Detection.tflite](https://huggingface.co/qualcomm/Facial-Attribute-Detection/blob/main/Facial-Attribute-Detection_w8a8.tflite) |
| Facial-Attribute-Detection | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN | 1.123 ms | 0 - 9 MB | NPU | Use Export Script |
| Facial-Attribute-Detection | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 0.422 ms | 0 - 50 MB | NPU |
```python import torch import qai_hub as hub from qai_hub_models.models.face_attrib_net import Model # Load the model torch_model = Model.from_pretrained() # Device device = hub.Device("Samsung Galaxy S24") # Trace model input_shape = torch_model.get_input_spec() sample_inputs = torch_model.sample_inputs() pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()]) # Compile model on a specific device compile_job = hub.submit_compile_job( model=pt_model, device=device, input_specs=torch_model.get_input_spec(), ) # Get target model to run on-device target_model = compile_job.get_target_model() ``` Step 2: **Performance profiling on cloud-hosted device** After compiling models from step 1. Models can be profiled model on-device using the `target_model`. Note that this scripts runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics. ```python profile_job = hub.submit_profile_job( model=target_model, device=device, ) ``` Step 3: **Verify on-device accuracy** To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud hosted device. ```python input_data = torch_model.sample_inputs() inference_job = hub.submit_inference_job( model=target_model, device=device, inputs=input_data, ) on_device_output = inference_job.download_output_data() ``` With the output of the model, you can compute like PSNR, relative errors or spot check the output with expected output. **Note**: This on-device profiling and inference requires access to Qualcomm® AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup). ## Run demo on a cloud-hosted device You can also run the demo on-device. ```bash python -m qai_hub_models.models.face_attrib_net.demo --on-device ``` **NOTE**: If you want running in a Jupyter Notebook or Google Colab like environment, please add the following to your cell (instead of the above). ``` %run -m qai_hub_models.models.face_attrib_net.demo -- --on-device ``` ## Deploying compiled model to Android The models can be deployed using multiple runtimes: - TensorFlow Lite (`.tflite` export): [This tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a guide to deploy the .tflite model in an Android application. - QNN (`.so` export ): This [sample app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html) provides instructions on how to use the `.so` shared library in an Android application. ## View on Qualcomm® AI Hub Get more details on Facial-Attribute-Detection's performance across various devices [here](https://aihub.qualcomm.com/models/face_attrib_net). Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/) ## License * The license for the original implementation of Facial-Attribute-Detection can be found [here](https://github.com/quic/ai-hub-models/blob/main/LICENSE). * The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf) ## References * [Source Model Implementation](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/face_attrib_net/model.py) ## Community * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. 
* For questions or feedback please [reach out to us](mailto:[email protected]).
sailplane/mf_router
sailplane
2025-06-05T15:50:19Z
0
0
null
[ "pytorch", "region:us" ]
null
2025-06-05T15:43:46Z
# Matrix Factorization Router Model This model was trained using RouteLLM's matrix factorization approach for routing between language models. ## Model Configuration - Embedding dimension: 128 - Number of models: 2 - Use projection: True - Text dimension: 1536 ## Training Configuration - Learning rate: 0.0003 - Weight decay: 1e-05 - Alpha (noise): 0.1 - Number of epochs: 100 - Batch size: 64 ## Model IDs {'claude-3-7-sonnet-20250219': 0, 'claude-3-5-sonnet-20241022': 1} ## Usage Load this model using PyTorch: ```python import torch checkpoint = torch.load('pytorch_model.pth') model_state_dict = checkpoint['model_state_dict'] model_config = checkpoint['model_config'] # Initialize your model with the config and load the state dict ```
XiaomiMiMo/MiMo-7B-RL-0530
XiaomiMiMo
2025-06-05T15:50:15Z
378
23
transformers
[ "transformers", "safetensors", "mimo", "text-generation", "conversational", "custom_code", "arxiv:2505.07608", "license:mit", "autotrain_compatible", "region:us" ]
text-generation
2025-05-30T01:19:37Z
--- license: mit library_name: transformers --- <div align="center"> <picture> <source srcset="https://github.com/XiaomiMiMo/MiMo/raw/main/figures/Xiaomi_MiMo_darkmode.png?raw=true" media="(prefers-color-scheme: dark)"> <img src="https://github.com/XiaomiMiMo/MiMo/raw/main/figures/Xiaomi_MiMo.png?raw=true" width="60%" alt="Xiaomi-MiMo" /> </picture> </div> <h3 align="center"> <b> <span>━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━</span> <br/> Unlocking the Reasoning Potential of Language Model<br/>From Pretraining to Posttraining <br/> <span>━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━</span> <br/> </b> </h3> <br/> <div align="center" style="line-height: 1;"> | <a href="https://huggingface.co/XiaomiMiMo" target="_blank">🤗 HuggingFace</a> &nbsp;| <a href="https://www.modelscope.cn/organization/XiaomiMiMo" target="_blank">🤖️ ModelScope</a> &nbsp;| <a href="https://arxiv.org/abs/2505.07608" target="_blank">📔 Technical Report</a> &nbsp;| <br/> </div> <br/> --- ## Updates [2025.05.30] We scaled the SFT dataset from approximately 500K to 6M instances and continuously expanding the RL training window size from 32K to 48K, the performance of [MiMo-7B-RL-0530](https://huggingface.co/XiaomiMiMo/MiMo-7B-RL-0530) on AIME24 can be continuously improved and eventually surpass that of DeepSeek R1 (79.8). <table> <thead> <tr> <th>Benchmark</th> <th>MiMo-7B-RL</th> <th>MiMo-7B-RL-0530</th> </tr> </thead> <tbody> <tr> <td colspan="3"><strong>Mathematics</strong></td> <p align="center"> <td rowspan="11"><img width="80%" src="https://github.com/XiaomiMiMo/MiMo/raw/main/figures/length.jpg?raw=true"></td> </p> </tr> <tr><td>MATH500<br/>(Pass@1)</td><td>95.8</td><td>97.2</td></tr> <tr><td>AIME 2024<br/>(Pass@1)</td><td>68.2</td><td>80.1</td></tr> <tr><td>AIME 2025<br/>(Pass@1)</td><td>55.4</td><td>70.2</td></tr> <tr><td colspan="3"><strong>Code</strong></td></tr> <tr><td>LiveCodeBench v5<br/>(Pass@1)</td><td>57.8</td><td>60.9</td></tr> <tr><td>LiveCodeBench v6<br/>(Pass@1)</td><td>49.3</td><td>52.2</td></tr> <tr><td colspan="3"><strong>STEM</strong></td></tr> <tr><td>GPQA-Diamond<br/>(Pass@1)</td><td>54.4</td><td>60.6</td></tr> <tr><td colspan="3"><strong>General</strong></td></tr> <tr><td>Alignbench1.1<br/>(Evaluated by GPT4.1)</td><td>6.9</td><td>7.4</td></tr> </tbody> </table> --- ## I. Introduction Currently, most successful RL works, including open-source research, rely on relatively large base models, e.g., 32B models, particularly for enhancing code reasoning capabilities. Moreover, it was widely considered that achieving uniform and simultaneous improvements in both mathematical and code capabilities within a small model is challenging. Nonetheless, we believe that the effectiveness of the RL trained reasoning model relies on the inherent reasoning potential of the base model. To fully unlock the reasoning potential of language models, efforts must focus not only on post-training but also on pre-training strategies tailored to reasoning. In this work, we present MiMo-7B, a series of models trained from scratch and born for reasoning tasks. Our RL experiments from MiMo-7B-Base show that our model possesses extraordinary reasoning potential, even surpassing much larger 32B models. Additionally, we perform RL training on a cold-started SFT model, resulting in MiMo-7B-RL, which demonstrates superior performance on both mathematics and code reasoning tasks, matching the performance of OpenAI o1-mini. 
<p align="center"> <img width="80%" src="https://github.com/XiaomiMiMo/MiMo/raw/main/figures/curve.png?raw=true"> </p> We open-source MiMo-7B series, including checkpoints of the base model, SFT model, RL model trained from base model, and RL model trained from the SFT model. We believe this report along with the models will provide valuable insights to develop powerful reasoning LLMs that benefit the larger community. ### 🌟 Highlights - **Pre-Training: Base Model Born for Reasoning** - We optimize the data preprocessing pipeline, enhancing text extraction toolkits and applying multi-dimensional data filtering to increase reasoning pattern density in pre-training data. We also employ multiple strategies to generate massive diverse synthetic reasoning data. - We adopt a three-stage data mixture strategy for pre-training. Overall, MiMo-7B-Base is pre-trained on approximately 25 trillion tokens. - We incorporate Multiple-Token Prediction as an additional training objective, which enhances model performance and accelerates inference. - **Post-Training Recipe: Pioneering Reasoning Model** - We curate 130K mathematics and code problems as RL training data, which can be verified by rule-based verifiers. Each problem undergoes careful cleaning and difficulty assessment to ensure quality. We employ only rule-based accuracy rewards to avoid potential reward hacking. - To mitigate the sparse reward issue for challenging code problems, we introduce a test difficulty driven code reward. By assigning fine-grained scores for test cases with varying difficulty levels, the policy can be more effectively optimized via dense reward signal. - We implement a data re-sampling strategy for easy problems to enhance rollout sampling efficiency and stabilize policy updates, particularly in the later phases of RL training. - **RL Infrastructure** - We develop a Seamless Rollout Engine to accelerate RL training and validation. Our design integrates continuous rollout, asynchronous reward computation, and early termination to minimize GPU idle time, achieving $2.29\times$ faster training and $1.96\times$ faster validation. - We support MTP in vLLM and enhance the robustness of the inference engine in the RL system. ## II. Model Details The MTP layers of MiMo-7B is tuned during pretraining and SFT and freezed during RL. With one MTP layer for speculative decoding, the acceptance rate is about 90%. 
<p align="center"> <img width="80%" src="https://github.com/XiaomiMiMo/MiMo/raw/main/figures/architecture.png?raw=true"> </p> > Models are available at [https://huggingface.co/XiaomiMiMo](https://huggingface.co/XiaomiMiMo) and [https://www.modelscope.cn/organization/XiaomiMiMo](https://www.modelscope.cn/organization/XiaomiMiMo) | **Model** | **Description** | **Download (HuggingFace)** | **Download (ModelScope)** | | :-------------: | :---------------------------------------------------------------------------: | :-------------------------------------------------------------------------------: | :-----------------------------------------------------------------------------------------: | | MiMo-7B-Base | Base model with extraordinary reasoning potential | [🤗 XiaomiMiMo/MiMo-7B-Base](https://huggingface.co/XiaomiMiMo/MiMo-7B-Base) | [🤖️ XiaomiMiMo/MiMo-7B-Base](https://www.modelscope.cn/models/XiaomiMiMo/MiMo-7B-Base) | | MiMo-7B-RL-Zero | RL model trained from base model | [🤗 XiaomiMiMo/MiMo-7B-RL-Zero](https://huggingface.co/XiaomiMiMo/MiMo-7B-RL-Zero) | [🤖️ XiaomiMiMo/MiMo-7B-RL-Zero](https://www.modelscope.cn/models/XiaomiMiMo/MiMo-7B-RL-Zero) | | MiMo-7B-SFT | SFT model trained from base model | [🤗 XiaomiMiMo/MiMo-7B-SFT](https://huggingface.co/XiaomiMiMo/MiMo-7B-SFT) | [🤖️ XiaomiMiMo/MiMo-7B-SFT](https://www.modelscope.cn/models/XiaomiMiMo/MiMo-7B-SFT) | | MiMo-7B-RL | RL model trained from SFT model, superior performance matching OpenAI o1-mini | [🤗 XiaomiMiMo/MiMo-7B-RL](https://huggingface.co/XiaomiMiMo/MiMo-7B-RL) | [🤖️ XiaomiMiMo/MiMo-7B-RL](https://www.modelscope.cn/models/XiaomiMiMo/MiMo-7B-RL) | ## III. Evaluation Results | Benchmark | GPT-4o-0513 | Claude-3.5-Sonnet-1022 | OpenAI o1-mini | QwQ-32B-Preview | R1-Distill-Qwen-14B | R1-Distill-Qwen-7B | MiMo-7B-RL | | ----------------------------- | :---------: | :--------------------: | :------------: | :-------------: | :-----------------: | :----------------: | :--------: | | **General** | | | | | | | | | GPQA Diamond<br/>(Pass@1) | 49.9 | 65.0 | 60.0 | 54.5 | 59.1 | 49.1 | 54.4 | | SuperGPQA<br/>(Pass@1) | 42.4 | 48.2 | 45.2 | 43.6 | 40.6 | 28.9 | 40.5 | | DROP<br/>(3-shot F1) | 83.7 | 88.3 | 83.9 | 71.2 | 85.5 | 77.0 | 78.7 | | MMLU-Pro<br/>(EM) | 72.6 | 78.0 | 80.3 | 52.0 | 68.8 | 53.5 | 58.6 | | IF-Eval<br/>(Prompt Strict) | 84.3 | 86.5 | 84.8 | 40.4 | 78.3 | 60.5 | 61.0 | | **Mathematics** | | | | | | | | | MATH-500<br/>(Pass@1) | 74.6 | 78.3 | 90.0 | 90.6 | 93.9 | 92.8 | 95.8 | | AIME 2024<br/>(Pass@1) | 9.3 | 16.0 | 63.6 | 50.0 | 69.7 | 55.5 | 68.2 | | AIME 2025<br/>(Pass@1) | 11.6 | 7.4 | 50.7 | 32.4 | 48.2 | 38.8 | 55.4 | | **Code** | | | | | | | | | LiveCodeBench v5<br/>(Pass@1) | 32.9 | 38.9 | 53.8 | 41.9 | 53.1 | 37.6 | 57.8 | | LiveCodeBench v6<br/>(Pass@1) | 30.9 | 37.2 | 46.8 | 39.1 | 31.9 | 23.9 | 49.3 | MiMo-7B series | Benchmark | MiMo-7B-Base | MiMo-7B-RL-Zero | MiMo-7B-SFT | MiMo-7B-RL | | ----------------------------- | :----------: | :-------------: | :---------: | :--------: | | **Mathematics** | | | | | | MATH500<br/>(Pass@1) | 37.4 | 93.6 | 93.0 | 95.8 | | AIME 2024<br/>(Pass@1) | 32.9 | 56.4 | 58.7 | 68.2 | | AIME 2025<br/>(Pass@1) | 24.3 | 46.3 | 44.3 | 55.4 | | **Code** | | | | | | LiveCodeBench v5<br/>(Pass@1) | 32.9 | 49.1 | 52.3 | 57.8 | | LiveCodeBench v6<br/>(Pass@1) | 29.1 | 42.9 | 45.5 | 49.3 | > [!IMPORTANT] > The evaluations are conducted with `temperature=0.6`. 
> > AIME24 and AIME25 are with averaged score of 32 repetitions. LiveCodeBench v5 (20240801-20250201), LiveCodeBench v6 (20250201-20250501), GPQA-Diamond and IF-Eval are with averaged score of 8 repetitions. MATH500 and SuperGPQA are with a single run. ## IV. Deployment ### SGLang Inference Thanks to the [MiMo model support](https://github.com/sgl-project/sglang/pull/5921) and [MTP](https://github.com/sgl-project/sglang/pull/6059) from the SGLang team, we supported MiMo in SGLang mainstream. Example Script ```bash # Install the latest SGlang from main branch python3 -m uv pip install "sglang[all] @ git+https://github.com/sgl-project/sglang.git/@main#egg=sglang&subdirectory=python" # Launch SGLang Server python3 -m sglang.launch_server --model-path XiaomiMiMo/MiMo-7B-RL --host 0.0.0.0 --trust-remote-code # Launch MTP Server python3 -m sglang.launch_server --model-path XiaomiMiMo/MiMo-7B-RL --trust-remote-code \ --speculative-algorithm EAGLE --speculative-num-steps 1 --speculative-eagle-topk 1 \ --speculative-num-draft-tokens 2 --mem-fraction 0.5 ``` Detailed usage can be found in [SGLang documents](https://docs.sglang.ai/backend/send_request.html). ### vLLM inference 1. [Recommended] We officially support inference with MiMo-MTP using [our fork of vLLM](https://github.com/XiaomiMiMo/vllm/tree/feat_mimo_mtp_stable_073). Example script ```py from vllm import LLM, SamplingParams model_path = "/path/to/MiMo" llm = LLM( model=model_path, trust_remote_code=True, num_speculative_tokens=1, disable_log_stats=False ) sampling_params = SamplingParams(temperature=0.6) conversation = [ { "role": "system", "content": "" }, { "role": "user", "content": "Write an essay about the importance of higher education.", }, ] outputs = llm.chat(conversation, sampling_params=sampling_params, use_tqdm=False) for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") print("=" * 80) ``` 2. Or, you can register a vLLM loader for MiMo without loading MTP parameters. You can copy the [`registry/register_mimo_in_vllm.py`](https://github.com/XiaomiMiMo/MiMo/blob/main/registry/register_mimo_in_vllm.py) to your directory and import it with ```py import register_mimo_in_vllm from vllm import LLM, SamplingParams model_path = "/path/to/MiMo" llm = LLM( model=model_path, trust_remote_code=True, # num_speculative_tokens=1, disable_log_stats=False ) sampling_params = SamplingParams(temperature=0.6) ``` ### HuggingFace inference Example script ```py from transformers import AutoModel, AutoModelForCausalLM, AutoTokenizer model_id = "XiaomiMiMo/MiMo-7B-RL-0530" model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained(model_id) inputs = tokenizer(["Today is"], return_tensors='pt') output = model.generate(**inputs, max_new_tokens = 100) print(tokenizer.decode(output.tolist()[0])) ``` ### Recommended environment and prompts - We recommend using [our fork of vLLM](https://github.com/XiaomiMiMo/vllm/tree/feat_mimo_mtp_stable_073) which is developed based on vLLM 0.7.3. - We recommend using empty system prompt. > We haven't verified MiMo with other inference engines and welcome contributions based on the model definition in the Huggingface repo 💻. ## V. 
Citation ```bibtex @misc{coreteam2025mimounlockingreasoningpotential, title={MiMo: Unlocking the Reasoning Potential of Language Model -- From Pretraining to Posttraining}, author={LLM-Core-Team Xiaomi}, year={2025}, eprint={2505.07608}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2505.07608}, } ``` ## VI. Contact Please contact us at [[email protected]](mailto:[email protected]) or open an issue if you have any questions.
CK0607/llama3.1-8b-sonnet-pplx-50
CK0607
2025-06-05T15:49:23Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "grpo", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T15:47:00Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - grpo license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** CK0607 - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
qualcomm/DeepLabV3-ResNet50
qualcomm
2025-06-05T15:48:52Z
103
0
pytorch
[ "pytorch", "tflite", "android", "image-segmentation", "arxiv:1706.05587", "license:other", "region:us" ]
image-segmentation
2024-02-25T22:37:29Z
--- library_name: pytorch license: other tags: - android pipeline_tag: image-segmentation --- ![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/deeplabv3_resnet50/web-assets/model_demo.png) # DeepLabV3-ResNet50: Optimized for Mobile Deployment ## Deep Convolutional Neural Network model for semantic segmentation DeepLabV3 is designed for semantic segmentation at multiple scales, trained on the COCO dataset. It uses ResNet50 as a backbone. This model is an implementation of DeepLabV3-ResNet50 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/segmentation/deeplabv3.py). This repository provides scripts to run DeepLabV3-ResNet50 on Qualcomm® devices. More details on model performance across various devices, can be found [here](https://aihub.qualcomm.com/models/deeplabv3_resnet50). ### Model Details - **Model Type:** Model_use_case.semantic_segmentation - **Model Stats:** - Model checkpoint: COCO_WITH_VOC_LABELS_V1 - Input resolution: 513x513 - Number of parameters: 39.6M - Model size: 151 MB - Number of output classes: 21 | Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |---|---|---|---|---|---|---|---|---| | DeepLabV3-ResNet50 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 1185.986 ms | 6 - 23 MB | GPU | [DeepLabV3-ResNet50.tflite](https://huggingface.co/qualcomm/DeepLabV3-ResNet50/blob/main/DeepLabV3-ResNet50.tflite) | | DeepLabV3-ResNet50 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 786.162 ms | 21 - 51 MB | GPU | [DeepLabV3-ResNet50.tflite](https://huggingface.co/qualcomm/DeepLabV3-ResNet50/blob/main/DeepLabV3-ResNet50.tflite) | | DeepLabV3-ResNet50 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 304.269 ms | 3 - 201 MB | GPU | [DeepLabV3-ResNet50.tflite](https://huggingface.co/qualcomm/DeepLabV3-ResNet50/blob/main/DeepLabV3-ResNet50.tflite) | | DeepLabV3-ResNet50 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 718.986 ms | 23 - 39 MB | GPU | [DeepLabV3-ResNet50.tflite](https://huggingface.co/qualcomm/DeepLabV3-ResNet50/blob/main/DeepLabV3-ResNet50.tflite) | | DeepLabV3-ResNet50 | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 1185.986 ms | 6 - 23 MB | GPU | [DeepLabV3-ResNet50.tflite](https://huggingface.co/qualcomm/DeepLabV3-ResNet50/blob/main/DeepLabV3-ResNet50.tflite) | | DeepLabV3-ResNet50 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 293.789 ms | 0 - 210 MB | GPU | [DeepLabV3-ResNet50.tflite](https://huggingface.co/qualcomm/DeepLabV3-ResNet50/blob/main/DeepLabV3-ResNet50.tflite) | | DeepLabV3-ResNet50 | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 266.482 ms | 23 - 44 MB | GPU | [DeepLabV3-ResNet50.tflite](https://huggingface.co/qualcomm/DeepLabV3-ResNet50/blob/main/DeepLabV3-ResNet50.tflite) | | DeepLabV3-ResNet50 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 292.485 ms | 2 - 190 MB | GPU | [DeepLabV3-ResNet50.tflite](https://huggingface.co/qualcomm/DeepLabV3-ResNet50/blob/main/DeepLabV3-ResNet50.tflite) | | DeepLabV3-ResNet50 | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 718.986 ms | 23 - 39 MB | GPU | [DeepLabV3-ResNet50.tflite](https://huggingface.co/qualcomm/DeepLabV3-ResNet50/blob/main/DeepLabV3-ResNet50.tflite) | | DeepLabV3-ResNet50 | float | Samsung 
Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 294.648 ms | 2 - 210 MB | GPU | [DeepLabV3-ResNet50.tflite](https://huggingface.co/qualcomm/DeepLabV3-ResNet50/blob/main/DeepLabV3-ResNet50.tflite) | | DeepLabV3-ResNet50 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 264.494 ms | 21 - 49 MB | GPU | [DeepLabV3-ResNet50.tflite](https://huggingface.co/qualcomm/DeepLabV3-ResNet50/blob/main/DeepLabV3-ResNet50.tflite) | | DeepLabV3-ResNet50 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 187.222 ms | 23 - 44 MB | GPU | [DeepLabV3-ResNet50.tflite](https://huggingface.co/qualcomm/DeepLabV3-ResNet50/blob/main/DeepLabV3-ResNet50.tflite) | ## Installation Install the package via pip: ```bash pip install qai-hub-models ``` ## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`. With this API token, you can configure your client to run models on the cloud hosted devices. ```bash qai-hub configure --api_token API_TOKEN ``` Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information. ## Demo off target The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input. ```bash python -m qai_hub_models.models.deeplabv3_resnet50.demo ``` The above demo runs a reference implementation of pre-processing, model inference, and post processing. **NOTE**: If you want running in a Jupyter Notebook or Google Colab like environment, please add the following to your cell (instead of the above). ``` %run -m qai_hub_models.models.deeplabv3_resnet50.demo ``` ### Run model on a cloud-hosted device In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following: * Performance check on-device on a cloud-hosted device * Downloads compiled assets that can be deployed on-device for Android. * Accuracy check between PyTorch and on-device outputs. ```bash python -m qai_hub_models.models.deeplabv3_resnet50.export ``` ``` Profiling Results ------------------------------------------------------------ DeepLabV3-ResNet50 Device : cs_8275 (ANDROID 14) Runtime : TFLITE Estimated inference time (ms) : 1186.0 Estimated peak memory usage (MB): [6, 23] Total # Ops : 100 Compute Unit(s) : npu (0 ops) gpu (98 ops) cpu (2 ops) ``` ## How does this work? This [export script](https://aihub.qualcomm.com/models/deeplabv3_resnet50/qai_hub_models/models/DeepLabV3-ResNet50/export.py) leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model on-device. Lets go through each step below in detail: Step 1: **Compile model for on-device deployment** To compile a PyTorch model for on-device deployment, we first trace the model in memory using the `jit.trace` and then call the `submit_compile_job` API. 
```python import torch import qai_hub as hub from qai_hub_models.models.deeplabv3_resnet50 import Model # Load the model torch_model = Model.from_pretrained() # Device device = hub.Device("Samsung Galaxy S24") # Trace model input_shape = torch_model.get_input_spec() sample_inputs = torch_model.sample_inputs() pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()]) # Compile model on a specific device compile_job = hub.submit_compile_job( model=pt_model, device=device, input_specs=torch_model.get_input_spec(), ) # Get target model to run on-device target_model = compile_job.get_target_model() ``` Step 2: **Performance profiling on cloud-hosted device** After compiling models from step 1. Models can be profiled model on-device using the `target_model`. Note that this scripts runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics. ```python profile_job = hub.submit_profile_job( model=target_model, device=device, ) ``` Step 3: **Verify on-device accuracy** To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud hosted device. ```python input_data = torch_model.sample_inputs() inference_job = hub.submit_inference_job( model=target_model, device=device, inputs=input_data, ) on_device_output = inference_job.download_output_data() ``` With the output of the model, you can compute like PSNR, relative errors or spot check the output with expected output. **Note**: This on-device profiling and inference requires access to Qualcomm® AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup). ## Run demo on a cloud-hosted device You can also run the demo on-device. ```bash python -m qai_hub_models.models.deeplabv3_resnet50.demo --on-device ``` **NOTE**: If you want running in a Jupyter Notebook or Google Colab like environment, please add the following to your cell (instead of the above). ``` %run -m qai_hub_models.models.deeplabv3_resnet50.demo -- --on-device ``` ## Deploying compiled model to Android The models can be deployed using multiple runtimes: - TensorFlow Lite (`.tflite` export): [This tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a guide to deploy the .tflite model in an Android application. - QNN (`.so` export ): This [sample app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html) provides instructions on how to use the `.so` shared library in an Android application. ## View on Qualcomm® AI Hub Get more details on DeepLabV3-ResNet50's performance across various devices [here](https://aihub.qualcomm.com/models/deeplabv3_resnet50). Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/) ## License * The license for the original implementation of DeepLabV3-ResNet50 can be found [here](https://github.com/pytorch/vision/blob/main/LICENSE). 
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf) ## References * [Rethinking Atrous Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1706.05587) * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/segmentation/deeplabv3.py) ## Community * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:[email protected]).
hira-wz/llama-3-8b-Instruct-bnb-4bit-aiaustin-demo
hira-wz
2025-06-05T15:47:32Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-05T15:46:17Z
--- base_model: llama-3-8b-Instruct-bnb-4bit-aiaustin-demo tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** hira-wz - **License:** apache-2.0 - **Finetuned from model :** llama-3-8b-Instruct-bnb-4bit-aiaustin-demo This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
qualcomm/BiseNet
qualcomm
2025-06-05T15:46:59Z
46
0
pytorch
[ "pytorch", "tflite", "onnx", "real_time", "android", "image-segmentation", "arxiv:1808.00897", "license:unlicense", "region:us" ]
image-segmentation
2025-03-13T22:09:07Z
--- library_name: pytorch license: unlicense tags: - real_time - android pipeline_tag: image-segmentation --- ![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/bisenet/web-assets/model_demo.png) # BiseNet: Optimized for Mobile Deployment ## Segment images or video by class in real-time on device BiSeNet (Bilateral Segmentation Network) is a novel architecture designed for real-time semantic segmentation. It addresses the challenge of balancing spatial resolution and receptive field by employing a Spatial Path to preserve high-resolution features and a context path to capture sufficient receptive field. This model is an implementation of BiseNet found [here](https://github.com/ooooverflow/BiSeNet). This repository provides scripts to run BiseNet on Qualcomm® devices. More details on model performance across various devices, can be found [here](https://aihub.qualcomm.com/models/bisenet). ### Model Details - **Model Type:** Model_use_case.semantic_segmentation - **Model Stats:** - Model checkpoint: best_dice_loss_miou_0.655.pth - Inference latency: RealTime - Input resolution: 720x960 - Number of parameters: 12.0M - Model size: 45.7 MB | Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |---|---|---|---|---|---|---|---|---| | BiseNet | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 86.293 ms | 32 - 58 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) | | BiseNet | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN | 83.474 ms | 4 - 13 MB | NPU | Use Export Script | | BiseNet | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 35.336 ms | 32 - 77 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) | | BiseNet | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN | 42.47 ms | 8 - 43 MB | NPU | Use Export Script | | BiseNet | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 28.322 ms | 32 - 106 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) | | BiseNet | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN | 26.342 ms | 2 - 4 MB | NPU | Use Export Script | | BiseNet | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 86.293 ms | 32 - 58 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) | | BiseNet | float | SA7255P ADP | Qualcomm® SA7255P | QNN | 83.474 ms | 4 - 13 MB | NPU | Use Export Script | | BiseNet | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 28.179 ms | 11 - 47 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) | | BiseNet | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN | 26.87 ms | 8 - 10 MB | NPU | Use Export Script | | BiseNet | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 37.808 ms | 32 - 59 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) | | BiseNet | float | SA8295P ADP | Qualcomm® SA8295P | QNN | 35.799 ms | 0 - 17 MB | NPU | Use Export Script | | BiseNet | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 28.193 ms | 13 - 102 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) | | BiseNet | float | SA8650 (Proxy) | Qualcomm® SA8650P 
(Proxy) | QNN | 26.482 ms | 8 - 10 MB | NPU | Use Export Script | | BiseNet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 27.975 ms | 34 - 169 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) | | BiseNet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN | 26.713 ms | 8 - 22 MB | NPU | Use Export Script | | BiseNet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 30.601 ms | 63 - 117 MB | NPU | [BiseNet.onnx](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.onnx) | | BiseNet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 21.029 ms | 30 - 78 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) | | BiseNet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN | 20.832 ms | 8 - 45 MB | NPU | Use Export Script | | BiseNet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 23.202 ms | 73 - 117 MB | NPU | [BiseNet.onnx](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.onnx) | | BiseNet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 19.546 ms | 30 - 60 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) | | BiseNet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 16.204 ms | 8 - 42 MB | NPU | Use Export Script | | BiseNet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 18.861 ms | 69 - 110 MB | NPU | [BiseNet.onnx](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.onnx) | | BiseNet | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 25.279 ms | 8 - 8 MB | NPU | Use Export Script | | BiseNet | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 30.965 ms | 66 - 66 MB | NPU | [BiseNet.onnx](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.onnx) | ## Installation Install the package via pip: ```bash pip install qai-hub-models ``` ## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`. With this API token, you can configure your client to run models on the cloud hosted devices. ```bash qai-hub configure --api_token API_TOKEN ``` Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information. ## Demo off target The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input. ```bash python -m qai_hub_models.models.bisenet.demo ``` The above demo runs a reference implementation of pre-processing, model inference, and post processing. **NOTE**: If you want running in a Jupyter Notebook or Google Colab like environment, please add the following to your cell (instead of the above). ``` %run -m qai_hub_models.models.bisenet.demo ``` ### Run model on a cloud-hosted device In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following: * Performance check on-device on a cloud-hosted device * Downloads compiled assets that can be deployed on-device for Android. * Accuracy check between PyTorch and on-device outputs. 
```bash python -m qai_hub_models.models.bisenet.export ``` ``` Profiling Results ------------------------------------------------------------ BiseNet Device : cs_8275 (ANDROID 14) Runtime : TFLITE Estimated inference time (ms) : 86.3 Estimated peak memory usage (MB): [32, 58] Total # Ops : 63 Compute Unit(s) : npu (63 ops) gpu (0 ops) cpu (0 ops) ``` ## How does this work? This [export script](https://aihub.qualcomm.com/models/bisenet/qai_hub_models/models/BiseNet/export.py) leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model on-device. Lets go through each step below in detail: Step 1: **Compile model for on-device deployment** To compile a PyTorch model for on-device deployment, we first trace the model in memory using the `jit.trace` and then call the `submit_compile_job` API. ```python import torch import qai_hub as hub from qai_hub_models.models.bisenet import Model # Load the model torch_model = Model.from_pretrained() # Device device = hub.Device("Samsung Galaxy S24") # Trace model input_shape = torch_model.get_input_spec() sample_inputs = torch_model.sample_inputs() pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()]) # Compile model on a specific device compile_job = hub.submit_compile_job( model=pt_model, device=device, input_specs=torch_model.get_input_spec(), ) # Get target model to run on-device target_model = compile_job.get_target_model() ``` Step 2: **Performance profiling on cloud-hosted device** After compiling models from step 1. Models can be profiled model on-device using the `target_model`. Note that this scripts runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics. ```python profile_job = hub.submit_profile_job( model=target_model, device=device, ) ``` Step 3: **Verify on-device accuracy** To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud hosted device. ```python input_data = torch_model.sample_inputs() inference_job = hub.submit_inference_job( model=target_model, device=device, inputs=input_data, ) on_device_output = inference_job.download_output_data() ``` With the output of the model, you can compute like PSNR, relative errors or spot check the output with expected output. **Note**: This on-device profiling and inference requires access to Qualcomm® AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup). ## Run demo on a cloud-hosted device You can also run the demo on-device. ```bash python -m qai_hub_models.models.bisenet.demo --on-device ``` **NOTE**: If you want running in a Jupyter Notebook or Google Colab like environment, please add the following to your cell (instead of the above). ``` %run -m qai_hub_models.models.bisenet.demo -- --on-device ``` ## Deploying compiled model to Android The models can be deployed using multiple runtimes: - TensorFlow Lite (`.tflite` export): [This tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a guide to deploy the .tflite model in an Android application. - QNN (`.so` export ): This [sample app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html) provides instructions on how to use the `.so` shared library in an Android application. 
## View on Qualcomm® AI Hub Get more details on BiseNet's performance across various devices [here](https://aihub.qualcomm.com/models/bisenet). Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/) ## License * The license for the original implementation of BiseNet can be found [here](This model's original implementation does not provide a LICENSE.). * The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf) ## References * [BiSeNet Bilateral Segmentation Network for Real-time Semantic Segmentation](https://arxiv.org/abs/1808.00897) * [Source Model Implementation](https://github.com/ooooverflow/BiSeNet) ## Community * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:[email protected]).
qualcomm/BGNet
qualcomm
2025-06-05T15:46:53Z
0
0
pytorch
[ "pytorch", "real_time", "android", "image-segmentation", "arxiv:2207.00794", "license:unlicense", "region:us" ]
image-segmentation
2025-03-13T22:09:05Z
--- library_name: pytorch license: unlicense tags: - real_time - android pipeline_tag: image-segmentation --- ![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/bgnet/web-assets/model_demo.png) # BGNet: Optimized for Mobile Deployment ## Segment images in real-time on device BGNet or Boundary-Guided Network, is a model designed for camouflaged object detection. It leverages edge semantics to enhance the representation learning process, making it more effective at identifying objects that blend into their surroundings This model is an implementation of BGNet found [here](https://github.com/thograce/bgnet). More details on model performance across various devices, can be found [here](https://aihub.qualcomm.com/models/bgnet). ### Model Details - **Model Type:** Model_use_case.semantic_segmentation - **Model Stats:** - Model checkpoint: BGNet - Input resolution: 416x416 - Number of parameters: 77.8M - Model size: 297 MB | Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |---|---|---|---|---|---|---|---|---| | BGNet | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 118.319 ms | 1 - 126 MB | NPU | -- | | BGNet | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN | 116.819 ms | 2 - 11 MB | NPU | -- | | BGNet | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 34.374 ms | 1 - 209 MB | NPU | -- | | BGNet | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN | 41.165 ms | 2 - 50 MB | NPU | -- | | BGNet | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 22.942 ms | 1 - 19 MB | NPU | -- | | BGNet | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN | 19.894 ms | 2 - 4 MB | NPU | -- | | BGNet | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 118.319 ms | 1 - 126 MB | NPU | -- | | BGNet | float | SA7255P ADP | Qualcomm® SA7255P | QNN | 116.819 ms | 2 - 11 MB | NPU | -- | | BGNet | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 23.032 ms | 1 - 19 MB | NPU | -- | | BGNet | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN | 19.82 ms | 2 - 4 MB | NPU | -- | | BGNet | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 37.993 ms | 1 - 99 MB | NPU | -- | | BGNet | float | SA8295P ADP | Qualcomm® SA8295P | QNN | 34.723 ms | 2 - 19 MB | NPU | -- | | BGNet | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 23.229 ms | 1 - 20 MB | NPU | -- | | BGNet | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN | 19.833 ms | 2 - 4 MB | NPU | -- | | BGNet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 22.84 ms | 1 - 19 MB | NPU | -- | | BGNet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN | 19.968 ms | 2 - 29 MB | NPU | -- | | BGNet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 20.487 ms | 0 - 173 MB | NPU | -- | | BGNet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 16.768 ms | 0 - 234 MB | NPU | -- | | BGNet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN | 14.738 ms | 2 - 76 MB | NPU | -- | | BGNet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 14.886 ms | 4 - 83 MB | NPU | -- | | BGNet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 15.45 ms | 1 - 126 MB | NPU | -- | | BGNet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 12.583 ms | 2 - 63 MB | NPU | -- | | BGNet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 
15.563 ms | 2 - 65 MB | NPU | -- | | BGNet | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 20.261 ms | 2 - 2 MB | NPU | -- | | BGNet | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 22.284 ms | 154 - 154 MB | NPU | -- | ## License * The license for the original implementation of BGNet can be found [here](This model's original implementation does not provide a LICENSE.). * The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf) ## References * [BGNet: Boundary-Guided Camouflaged Object Detection (IJCAI 2022)](https://arxiv.org/abs/2207.00794) * [Source Model Implementation](https://github.com/thograce/bgnet) ## Community * Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:[email protected]). ## Usage and Limitations Model may not be used for or in connection with any of the following applications: - Accessing essential private and public services and benefits; - Administration of justice and democratic processes; - Assessing or recognizing the emotional state of a person; - Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics; - Education and vocational training; - Employment and workers management; - Exploitation of the vulnerabilities of persons resulting in harmful behavior; - General purpose social scoring; - Law enforcement; - Management and operation of critical infrastructure; - Migration, asylum and border control management; - Predictive policing; - Real-time remote biometric identification in public spaces; - Recommender systems of social media platforms; - Scraping of facial images (from the internet or otherwise); and/or - Subliminal manipulation
plumpyfield/natix_v2-012
plumpyfield
2025-06-05T15:46:21Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-06-05T15:46:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
plumpyfield/natix_v2-010
plumpyfield
2025-06-05T15:39:43Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-06-05T15:39:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
FormlessAI/d0e4a4fe-a73a-492b-aa97-86258fe29837
FormlessAI
2025-06-05T15:38:31Z
0
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "generated_from_trainer", "trl", "sft", "base_model:facebook/opt-350m", "base_model:finetune:facebook/opt-350m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T14:56:05Z
--- base_model: facebook/opt-350m library_name: transformers model_name: d0e4a4fe-a73a-492b-aa97-86258fe29837 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for d0e4a4fe-a73a-492b-aa97-86258fe29837 This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FormlessAI/d0e4a4fe-a73a-492b-aa97-86258fe29837", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/owduj24n) This model was trained with SFT. ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.0+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
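The card states the model was trained with SFT via TRL but does not include the training script. For orientation, here is a minimal, hypothetical sketch of what such a run could look like with TRL 0.18; the dataset (`trl-lib/Capybara`) and all hyperparameters are illustrative assumptions, not the ones actually used for this model.

```python
# Hypothetical SFT sketch with TRL; the real dataset, prompt format, and
# hyperparameters behind this checkpoint are not documented on the card.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumption: a chat-formatted dataset; trl-lib/Capybara is only an example.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="facebook/opt-350m",  # the base model named on this card
    train_dataset=dataset,
    args=SFTConfig(output_dir="opt-350m-sft"),
)
trainer.train()
```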
backups/Qwen3-Embedding-4B
backups
2025-06-05T15:36:24Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "base_model:Qwen/Qwen3-4B-Base", "base_model:finetune:Qwen/Qwen3-4B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T15:36:20Z
--- license: apache-2.0 base_model: - Qwen/Qwen3-4B-Base library_name: transformers --- # Qwen3-Embedding-4B <p align="center"> <img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/logo_qwen3.png" width="400"/> </p> ## Highlights The Qwen3 Embedding model series is the latest proprietary model of the Qwen family, specifically designed for text embedding and ranking tasks. Building upon the dense foundational models of the Qwen3 series, it provides a comprehensive range of text embedding and reranking models in various sizes (0.6B, 4B, and 8B). This series inherits the exceptional multilingual capabilities, long-text understanding, and reasoning skills of its foundational model. The Qwen3 Embedding series represents significant advancements in multiple text embedding and ranking tasks, including text retrieval, code retrieval, text classification, text clustering, and bitext mining. **Exceptional Versatility**: The embedding model has achieved state-of-the-art performance across a wide range of downstream application evaluations. The 8B embedding model ranks **No.1** on the MTEB multilingual leaderboard (as of June 5, 2025, score **70.58**), while the reranking model excels in various text retrieval scenarios. **Comprehensive Flexibility**: The Qwen3 Embedding series offers a full spectrum of sizes (from 0.6B to 8B) for both embedding and reranking models, catering to diverse use cases that prioritize efficiency and effectiveness. Developers can seamlessly combine these two modules. Additionally, the embedding model allows for flexible vector definitions across all dimensions, and both embedding and reranking models support user-defined instructions to enhance performance for specific tasks, languages, or scenarios. **Multilingual Capability**: The Qwen3 Embedding series offers support for over 100 languages, thanks to the multilingual capabilities of the Qwen3 models. This includes various programming languages and provides robust multilingual, cross-lingual, and code retrieval capabilities. ## Model Overview **Qwen3-Embedding-4B** has the following features: - Model Type: Text Embedding - Supported Languages: 100+ Languages - Number of Parameters: 4B - Context Length: 32K - Embedding Dimension: Up to 2560, supports user-defined output dimensions ranging from 32 to 2560 For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-Embedding/) and [GitHub](https://github.com/QwenLM/Qwen3-Embedding). 
## Qwen3 Embedding Series Model list | Model Type | Models | Size | Layers | Sequence Length | Embedding Dimension | MRL Support | Instruction Aware | |------------------|----------------------|------|--------|-----------------|---------------------|-------------|----------------| | Text Embedding | [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) | 0.6B | 28 | 32K | 1024 | Yes | Yes | | Text Embedding | [Qwen3-Embedding-4B](https://huggingface.co/Qwen/Qwen3-Embedding-4B) | 4B | 36 | 32K | 2560 | Yes | Yes | | Text Embedding | [Qwen3-Embedding-8B](https://huggingface.co/Qwen/Qwen3-Embedding-8B) | 8B | 36 | 32K | 4096 | Yes | Yes | | Text Reranking | [Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) | 0.6B | 28 | 32K | - | - | Yes | | Text Reranking | [Qwen3-Reranker-4B](https://huggingface.co/Qwen/Qwen3-Reranker-4B) | 4B | 36 | 32K | - | - | Yes | | Text Reranking | [Qwen3-Reranker-8B](https://huggingface.co/Qwen/Qwen3-Reranker-8B) | 8B | 36 | 32K | - | - | Yes | > **Note**: > - `MRL Support` indicates whether the embedding model supports custom dimensions for the final embedding. > - `Instruction Aware` notes whether the embedding or reranking model supports customizing the input instruction according to different tasks. > - Our evaluation indicates that, for most downstream tasks, using instructions (instruct) typically yields an improvement of 1% to 5% compared to not using them. Therefore, we recommend that developers create tailored instructions specific to their tasks and scenarios. In multilingual contexts, we also advise users to write their instructions in English, as most instructions utilized during the model training process were originally written in English. ## Usage With Transformers versions earlier than 4.51.0, you may encounter the following error: ``` KeyError: 'qwen3' ``` ### Transformers Usage ```python # Requires transformers>=4.51.0 import torch import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def last_token_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0]) if left_padding: return last_hidden_states[:, -1] else: sequence_lengths = attention_mask.sum(dim=1) - 1 batch_size = last_hidden_states.shape[0] return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths] def get_detailed_instruct(task_description: str, query: str) -> str: return f'Instruct: {task_description}\nQuery:{query}' def tokenize(tokenizer, input_texts, eod_id, max_length): batch_dict = tokenizer(input_texts, padding=False, truncation=True, max_length=max_length-2) for seq, att in zip(batch_dict["input_ids"], batch_dict["attention_mask"]): seq.append(eod_id) att.append(1) batch_dict = tokenizer.pad(batch_dict, padding=True, return_tensors="pt") return batch_dict # Each query must come with a one-sentence instruction that describes the task task = 'Given a web search query, retrieve relevant passages that answer the query' queries = [ get_detailed_instruct(task, 'What is the capital of China?'), get_detailed_instruct(task, 'Explain gravity') ] # No need to add instruction for retrieval documents documents = [ "The capital of China is Beijing.", "Gravity is a force that attracts two bodies towards each other. 
It gives weight to physical objects and is responsible for the movement of planets around the sun." ] input_texts = queries + documents tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Embedding-4B', padding_side='left') model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-4B') # We recommend enabling flash_attention_2 for better acceleration and memory saving. # model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-4B', attn_implementation="flash_attention_2", torch_dtype=torch.float16).cuda() eod_id = tokenizer.convert_tokens_to_ids("<|endoftext|>") max_length = 8192 # Tokenize the input texts batch_dict = tokenize(tokenizer, input_texts, eod_id, max_length) batch_dict.to(model.device) outputs = model(**batch_dict) embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) print(scores.tolist()) ``` 📌 **Tip**: We recommend that developers customize the `instruct` according to their specific scenarios, tasks, and languages. Our tests have shown that in most retrieval scenarios, not using an `instruct` on the query side can lead to a drop in retrieval performance by approximately 1% to 5%. ## Evaluation ### MTEB (Multilingual) | Model | Size | Mean (Task) | Mean (Type) | Bitext Mining | Class. | Clust. | Inst. Retri. | Multi. Class. | Pair. Class. | Rerank | Retri. | STS | |----------------------------------|:-------:|:-------------:|:-------------:|:--------------:|:--------:|:--------:|:--------------:|:---------------:|:--------------:|:--------:|:--------:|:------:| | NV-Embed-v2 | 7B | 56.29 | 49.58 | 57.84 | 57.29 | 40.80 | 1.04 | 18.63 | 78.94 | 63.82 | 56.72 | 71.10| | GritLM-7B | 7B | 60.92 | 53.74 | 70.53 | 61.83 | 49.75 | 3.45 | 22.77 | 79.94 | 63.78 | 58.31 | 73.33| | BGE-M3 | 0.6B | 59.56 | 52.18 | 79.11 | 60.35 | 40.88 | -3.11 | 20.1 | 80.76 | 62.79 | 54.60 | 74.12| | multilingual-e5-large-instruct | 0.6B | 63.22 | 55.08 | 80.13 | 64.94 | 50.75 | -0.40 | 22.91 | 80.86 | 62.61 | 57.12 | 76.81| | gte-Qwen2-1.5B-instruct | 1.5B | 59.45 | 52.69 | 62.51 | 58.32 | 52.05 | 0.74 | 24.02 | 81.58 | 62.58 | 60.78 | 71.61| | gte-Qwen2-7b-Instruct | 7B | 62.51 | 55.93 | 73.92 | 61.55 | 52.77 | 4.94 | 25.48 | 85.13 | 65.55 | 60.08 | 73.98| | text-embedding-3-large | - | 58.93 | 51.41 | 62.17 | 60.27 | 46.89 | -2.68 | 22.03 | 79.17 | 63.89 | 59.27 | 71.68| | Cohere-embed-multilingual-v3.0 | - | 61.12 | 53.23 | 70.50 | 62.95 | 46.89 | -1.89 | 22.74 | 79.88 | 64.07 | 59.16 | 74.80| | gemini-embedding-exp-03-07 | - | 68.37 | 59.59 | 79.28 | 71.82 | 54.59 | 5.18 | **29.16** | 83.63 | 65.58 | 67.71 | 79.40| | **Qwen3-Embedding-0.6B** | 0.6B | 64.33 | 56.00 | 72.22 | 66.83 | 52.33 | 5.09 | 24.59 | 80.83 | 61.41 | 64.64 | 76.17| | **Qwen3-Embedding-4B** | 4B | 69.45 | 60.86 | 79.36 | 72.33 | 57.15 | **11.56** | 26.77 | 85.05 | 65.08 | 69.60 | 80.86| | **Qwen3-Embedding-8B** | 8B | **70.58** | **61.69** | **80.89** | **74.00** | **57.65** | 10.06 | 28.66 | **86.40** | **65.63** | **70.88** | **81.08** | > **Note**: For compared models, the scores are retrieved from the MTEB online [leaderboard](https://huggingface.co/spaces/mteb/leaderboard) on May 24th, 2025. ### MTEB (Eng v2) | MTEB English / Models | Param. | Mean(Task) | Mean(Type) | Class. | Clust. | Pair Class. | Rerank. | Retri. | STS | Summ. 
| |--------------------------------|:--------:|:------------:|:------------:|:--------:|:--------:|:-------------:|:---------:|:--------:|:-------:|:-------:| | multilingual-e5-large-instruct | 0.6B | 65.53 | 61.21 | 75.54 | 49.89 | 86.24 | 48.74 | 53.47 | 84.72 | 29.89 | | NV-Embed-v2 | 7.8B | 69.81 | 65.00 | 87.19 | 47.66 | 88.69 | 49.61 | 62.84 | 83.82 | 35.21 | | GritLM-7B | 7.2B | 67.07 | 63.22 | 81.25 | 50.82 | 87.29 | 49.59 | 54.95 | 83.03 | 35.65 | | gte-Qwen2-1.5B-instruct | 1.5B | 67.20 | 63.26 | 85.84 | 53.54 | 87.52 | 49.25 | 50.25 | 82.51 | 33.94 | | stella_en_1.5B_v5 | 1.5B | 69.43 | 65.32 | 89.38 | 57.06 | 88.02 | 50.19 | 52.42 | 83.27 | 36.91 | | gte-Qwen2-7B-instruct | 7.6B | 70.72 | 65.77 | 88.52 | 58.97 | 85.9 | 50.47 | 58.09 | 82.69 | 35.74 | | gemini-embedding-exp-03-07 | - | 73.3 | 67.67 | 90.05 | **59.39** | **87.7** | 48.59 | 64.35 | 85.29 | **38.28** | | **Qwen3-Embedding-0.6B** | 0.6B | 70.70 | 64.88 | 85.76 | 54.05 | 84.37 | 48.18 | 61.83 | 86.57 | 33.43 | | **Qwen3-Embedding-4B** | 4B | 74.60 | 68.10 | 89.84 | 57.51 | 87.01 | 50.76 | 68.46 | **88.72** | 34.39 | | **Qwen3-Embedding-8B** | 8B | **75.22** | **68.71** | **90.43** | 58.57 | 87.52 | **51.56** | **69.44** | 88.58 | 34.83 | ### C-MTEB (MTEB Chinese) | C-MTEB | Param. | Mean(Task) | Mean(Type) | Class. | Clust. | Pair Class. | Rerank. | Retr. | STS | |------------------|--------|------------|------------|--------|--------|-------------|---------|-------|-------| | multilingual-e5-large-instruct | 0.6B | 58.08 | 58.24 | 69.80 | 48.23 | 64.52 | 57.45 | 63.65 | 45.81 | | bge-multilingual-gemma2 | 9B | 67.64 | 68.52 | 75.31 | 59.30 | 86.67 | 68.28 | 73.73 | 55.19 | | gte-Qwen2-1.5B-instruct | 1.5B | 67.12 | 67.79 | 72.53 | 54.61 | 79.5 | 68.21 | 71.86 | 60.05 | | gte-Qwen2-7B-instruct | 7.6B | 71.62 | 72.19 | 75.77 | 66.06 | 81.16 | 69.24 | 75.70 | 65.20 | | ritrieve_zh_v1 | 0.3B | 72.71 | 73.85 | 76.88 | 66.5 | **85.98** | **72.86** | 76.97 | **63.92** | | **Qwen3-Embedding-0.6B** | 0.6B | 66.33 | 67.45 | 71.40 | 68.74 | 76.42 | 62.58 | 71.03 | 54.52 | | **Qwen3-Embedding-4B** | 4B | 72.27 | 73.51 | 75.46 | 77.89 | 83.34 | 66.05 | 77.03 | 61.26 | | **Qwen3-Embedding-8B** | 8B | **73.84** | **75.00** | **76.97** | **80.08** | 84.23 | 66.99 | **78.21** | 63.53 | ## Citation If you find our work helpful, feel free to cite it. ``` @misc{qwen3-embedding, title = {Qwen3-Embedding}, url = {https://qwenlm.github.io/blog/qwen3/}, author = {Qwen Team}, month = {May}, year = {2025} } ```
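As an addendum to the Usage section above: the Transformers snippet handles pooling and padding manually, but the checkpoint can also be loaded through the sentence-transformers library. The sketch below is a minimal illustration, assuming sentence-transformers >= 3.0 and that the repository ships a `query` prompt in its configuration; the `truncate_dim` argument exercises the MRL support listed in the model table (custom output dimensions).

```python
# Minimal sketch, not the official snippet: assumes sentence-transformers>=3.0
# and that the repo defines a "query" prompt for instruction-aware encoding.
from sentence_transformers import SentenceTransformer

# truncate_dim uses the MRL support to request a smaller output dimension.
model = SentenceTransformer("Qwen/Qwen3-Embedding-4B", truncate_dim=1024)

queries = ["What is the capital of China?"]
documents = ["The capital of China is Beijing."]

query_emb = model.encode(queries, prompt_name="query")  # instruction-aware query side
doc_emb = model.encode(documents)                       # documents need no instruction

print(model.similarity(query_emb, doc_emb))
```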
gfortune/roadwork25
gfortune
2025-06-05T15:35:55Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-06-05T15:35:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
premai-io/Prem-Cardiology
premai-io
2025-06-05T15:35:33Z
0
0
null
[ "safetensors", "en", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:mit", "region:us" ]
null
2025-06-03T21:13:29Z
--- license: mit language: - en base_model: - Qwen/Qwen2.5-7B-Instruct --- This model was built using the **PREM platform**, which simplifies the process of creating fine-tuned, domain-specific LLMs, making it more accessible to teams without deep ML infrastructure or large-scale compute setups. If you're building with or evaluating AI in healthcare, we'd love to hear your feedback or to collaborate on future domains.
jinx2321/byt5-1e4-paper-9
jinx2321
2025-06-05T15:35:18Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:jinx2321/byt5-1e4-paper", "base_model:finetune:jinx2321/byt5-1e4-paper", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-05T13:59:06Z
--- library_name: transformers license: apache-2.0 base_model: jinx2321/byt5-1e4-paper tags: - generated_from_trainer model-index: - name: byt5-1e4-paper-9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # byt5-1e4-paper-9 This model is a fine-tuned version of [jinx2321/byt5-1e4-paper](https://huggingface.co/jinx2321/byt5-1e4-paper) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.52.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
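The card lists hyperparameters but no usage example. A minimal inference sketch follows, assuming the standard text2text-generation pipeline for this ByT5 (byte-level T5) fine-tune; the input string is purely illustrative, since the card does not document which task the checkpoint was tuned for.

```python
from transformers import pipeline

# ByT5 operates directly on UTF-8 bytes, so no special pre-tokenization is
# required; the pipeline handles tokenization and decoding.
generator = pipeline("text2text-generation", model="jinx2321/byt5-1e4-paper-9")

# Illustrative input only; the actual fine-tuning task is undocumented.
print(generator("Example input text for the model.")[0]["generated_text"])
```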
PrunaAI/segolilylabs-Lily-Cybersecurity-7B-v0.2-HQQ-4bit-smashed
PrunaAI
2025-06-05T15:34:18Z
1
0
null
[ "mistral", "pruna-ai", "base_model:segolilylabs/Lily-Cybersecurity-7B-v0.2", "base_model:finetune:segolilylabs/Lily-Cybersecurity-7B-v0.2", "region:us" ]
null
2025-06-04T17:27:13Z
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: segolilylabs/Lily-Cybersecurity-7B-v0.2 metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="banner.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with HQQ (half-quadratic quantization). - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have executed. 
"Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo segolilylabs/Lily-Cybersecurity-7B-v0.2 installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. ```bash pip install hqq ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer from hqq.engine.hf import HQQModelForCausalLM from hqq.models.hf.base import AutoHQQHFModel try: model = HQQModelForCausalLM.from_quantized("PrunaAI/segolilylabs-Lily-Cybersecurity-7B-v0.2-HQQ-4bit-smashed", device_map='auto') except: model = AutoHQQHFModel.from_quantized("PrunaAI/segolilylabs-Lily-Cybersecurity-7B-v0.2-HQQ-4bit-smashed") tokenizer = AutoTokenizer.from_pretrained("segolilylabs/Lily-Cybersecurity-7B-v0.2") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info are in `smash_config.json`. This model has been smashed with pruna in version O.1.3 ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model segolilylabs/Lily-Cybersecurity-7B-v0.2 before using this model which provided the base model. The license of `pruna` is [here](https://github.com/PrunaAI/pruna/blob/main/LICENSE) on GitHub. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
Stergios-Konstantinidis/MNLP_M3_tokenizer
Stergios-Konstantinidis
2025-06-05T15:29:36Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "mteb", "Sentence Transformers", "sentence-similarity", "en", "arxiv:2212.03533", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-06-05T15:29:20Z
--- tags: - mteb - Sentence Transformers - sentence-similarity - sentence-transformers model-index: - name: e5-base-v2 results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 77.77611940298506 - type: ap value: 42.052710266606056 - type: f1 value: 72.12040628266567 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 92.81012500000001 - type: ap value: 89.4213700757244 - type: f1 value: 92.8039091197065 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 46.711999999999996 - type: f1 value: 46.11544975436018 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 23.186 - type: map_at_10 value: 36.632999999999996 - type: map_at_100 value: 37.842 - type: map_at_1000 value: 37.865 - type: map_at_3 value: 32.278 - type: map_at_5 value: 34.760999999999996 - type: mrr_at_1 value: 23.400000000000002 - type: mrr_at_10 value: 36.721 - type: mrr_at_100 value: 37.937 - type: mrr_at_1000 value: 37.96 - type: mrr_at_3 value: 32.302 - type: mrr_at_5 value: 34.894 - type: ndcg_at_1 value: 23.186 - type: ndcg_at_10 value: 44.49 - type: ndcg_at_100 value: 50.065000000000005 - type: ndcg_at_1000 value: 50.629999999999995 - type: ndcg_at_3 value: 35.461 - type: ndcg_at_5 value: 39.969 - type: precision_at_1 value: 23.186 - type: precision_at_10 value: 6.97 - type: precision_at_100 value: 0.951 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 14.912 - type: precision_at_5 value: 11.152 - type: recall_at_1 value: 23.186 - type: recall_at_10 value: 69.70100000000001 - type: recall_at_100 value: 95.092 - type: recall_at_1000 value: 99.431 - type: recall_at_3 value: 44.737 - type: recall_at_5 value: 55.761 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 46.10312401440185 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 39.67275326095384 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 58.97793816337376 - type: mrr value: 72.76832431957087 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 83.11646947018187 - type: cos_sim_spearman value: 81.40064994975234 - type: euclidean_pearson value: 82.37355689019232 - type: euclidean_spearman value: 81.6777646977348 - type: manhattan_pearson value: 82.61101422716945 - type: manhattan_spearman value: 81.80427360442245 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 
0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 83.52922077922076 - type: f1 value: 83.45298679360866 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 37.495115019668496 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 32.724792944166765 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.361000000000004 - type: map_at_10 value: 43.765 - type: map_at_100 value: 45.224 - type: map_at_1000 value: 45.35 - type: map_at_3 value: 40.353 - type: map_at_5 value: 42.195 - type: mrr_at_1 value: 40.629 - type: mrr_at_10 value: 50.458000000000006 - type: mrr_at_100 value: 51.06699999999999 - type: mrr_at_1000 value: 51.12 - type: mrr_at_3 value: 47.902 - type: mrr_at_5 value: 49.447 - type: ndcg_at_1 value: 40.629 - type: ndcg_at_10 value: 50.376 - type: ndcg_at_100 value: 55.065 - type: ndcg_at_1000 value: 57.196000000000005 - type: ndcg_at_3 value: 45.616 - type: ndcg_at_5 value: 47.646 - type: precision_at_1 value: 40.629 - type: precision_at_10 value: 9.785 - type: precision_at_100 value: 1.562 - type: precision_at_1000 value: 0.2 - type: precision_at_3 value: 22.031 - type: precision_at_5 value: 15.737000000000002 - type: recall_at_1 value: 32.361000000000004 - type: recall_at_10 value: 62.214000000000006 - type: recall_at_100 value: 81.464 - type: recall_at_1000 value: 95.905 - type: recall_at_3 value: 47.5 - type: recall_at_5 value: 53.69500000000001 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.971 - type: map_at_10 value: 37.444 - type: map_at_100 value: 38.607 - type: map_at_1000 value: 38.737 - type: map_at_3 value: 34.504000000000005 - type: map_at_5 value: 36.234 - type: mrr_at_1 value: 35.35 - type: mrr_at_10 value: 43.441 - type: mrr_at_100 value: 44.147999999999996 - type: mrr_at_1000 value: 44.196000000000005 - type: mrr_at_3 value: 41.285 - type: mrr_at_5 value: 42.552 - type: ndcg_at_1 value: 35.35 - type: ndcg_at_10 value: 42.903999999999996 - type: ndcg_at_100 value: 47.406 - type: ndcg_at_1000 value: 49.588 - type: ndcg_at_3 value: 38.778 - type: ndcg_at_5 value: 40.788000000000004 - type: precision_at_1 value: 35.35 - type: precision_at_10 value: 8.083 - type: precision_at_100 value: 1.313 - type: precision_at_1000 value: 0.18 - type: precision_at_3 value: 18.769 - type: precision_at_5 value: 13.439 - type: recall_at_1 value: 27.971 - type: recall_at_10 value: 52.492000000000004 - type: recall_at_100 value: 71.642 - type: recall_at_1000 value: 85.488 - type: recall_at_3 value: 40.1 - type: recall_at_5 value: 45.800000000000004 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 39.898 - type: map_at_10 value: 51.819 - type: map_at_100 value: 52.886 - type: map_at_1000 value: 52.941 - type: map_at_3 value: 48.619 - type: map_at_5 value: 50.493 - type: mrr_at_1 value: 45.391999999999996 - type: mrr_at_10 value: 55.230000000000004 - type: mrr_at_100 value: 55.887 - type: 
mrr_at_1000 value: 55.916 - type: mrr_at_3 value: 52.717000000000006 - type: mrr_at_5 value: 54.222 - type: ndcg_at_1 value: 45.391999999999996 - type: ndcg_at_10 value: 57.586999999999996 - type: ndcg_at_100 value: 61.745000000000005 - type: ndcg_at_1000 value: 62.83800000000001 - type: ndcg_at_3 value: 52.207 - type: ndcg_at_5 value: 54.925999999999995 - type: precision_at_1 value: 45.391999999999996 - type: precision_at_10 value: 9.21 - type: precision_at_100 value: 1.226 - type: precision_at_1000 value: 0.136 - type: precision_at_3 value: 23.177 - type: precision_at_5 value: 16.038 - type: recall_at_1 value: 39.898 - type: recall_at_10 value: 71.18900000000001 - type: recall_at_100 value: 89.082 - type: recall_at_1000 value: 96.865 - type: recall_at_3 value: 56.907 - type: recall_at_5 value: 63.397999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.706 - type: map_at_10 value: 30.818 - type: map_at_100 value: 32.038 - type: map_at_1000 value: 32.123000000000005 - type: map_at_3 value: 28.077 - type: map_at_5 value: 29.709999999999997 - type: mrr_at_1 value: 24.407 - type: mrr_at_10 value: 32.555 - type: mrr_at_100 value: 33.692 - type: mrr_at_1000 value: 33.751 - type: mrr_at_3 value: 29.848999999999997 - type: mrr_at_5 value: 31.509999999999998 - type: ndcg_at_1 value: 24.407 - type: ndcg_at_10 value: 35.624 - type: ndcg_at_100 value: 41.454 - type: ndcg_at_1000 value: 43.556 - type: ndcg_at_3 value: 30.217 - type: ndcg_at_5 value: 33.111000000000004 - type: precision_at_1 value: 24.407 - type: precision_at_10 value: 5.548 - type: precision_at_100 value: 0.8869999999999999 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 12.731 - type: precision_at_5 value: 9.22 - type: recall_at_1 value: 22.706 - type: recall_at_10 value: 48.772 - type: recall_at_100 value: 75.053 - type: recall_at_1000 value: 90.731 - type: recall_at_3 value: 34.421 - type: recall_at_5 value: 41.427 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 13.424 - type: map_at_10 value: 21.09 - type: map_at_100 value: 22.264999999999997 - type: map_at_1000 value: 22.402 - type: map_at_3 value: 18.312 - type: map_at_5 value: 19.874 - type: mrr_at_1 value: 16.915 - type: mrr_at_10 value: 25.258000000000003 - type: mrr_at_100 value: 26.228 - type: mrr_at_1000 value: 26.31 - type: mrr_at_3 value: 22.492 - type: mrr_at_5 value: 24.04 - type: ndcg_at_1 value: 16.915 - type: ndcg_at_10 value: 26.266000000000002 - type: ndcg_at_100 value: 32.08 - type: ndcg_at_1000 value: 35.086 - type: ndcg_at_3 value: 21.049 - type: ndcg_at_5 value: 23.508000000000003 - type: precision_at_1 value: 16.915 - type: precision_at_10 value: 5.1 - type: precision_at_100 value: 0.9329999999999999 - type: precision_at_1000 value: 0.131 - type: precision_at_3 value: 10.282 - type: precision_at_5 value: 7.836 - type: recall_at_1 value: 13.424 - type: recall_at_10 value: 38.179 - type: recall_at_100 value: 63.906 - type: recall_at_1000 value: 84.933 - type: recall_at_3 value: 23.878 - type: recall_at_5 value: 30.037999999999997 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.154 - type: map_at_10 value: 35.912 - type: map_at_100 value: 37.211 - 
type: map_at_1000 value: 37.327 - type: map_at_3 value: 32.684999999999995 - type: map_at_5 value: 34.562 - type: mrr_at_1 value: 32.435 - type: mrr_at_10 value: 41.411 - type: mrr_at_100 value: 42.297000000000004 - type: mrr_at_1000 value: 42.345 - type: mrr_at_3 value: 38.771 - type: mrr_at_5 value: 40.33 - type: ndcg_at_1 value: 32.435 - type: ndcg_at_10 value: 41.785 - type: ndcg_at_100 value: 47.469 - type: ndcg_at_1000 value: 49.685 - type: ndcg_at_3 value: 36.618 - type: ndcg_at_5 value: 39.101 - type: precision_at_1 value: 32.435 - type: precision_at_10 value: 7.642 - type: precision_at_100 value: 1.244 - type: precision_at_1000 value: 0.163 - type: precision_at_3 value: 17.485 - type: precision_at_5 value: 12.57 - type: recall_at_1 value: 26.154 - type: recall_at_10 value: 54.111 - type: recall_at_100 value: 78.348 - type: recall_at_1000 value: 92.996 - type: recall_at_3 value: 39.189 - type: recall_at_5 value: 45.852 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.308999999999997 - type: map_at_10 value: 35.524 - type: map_at_100 value: 36.774 - type: map_at_1000 value: 36.891 - type: map_at_3 value: 32.561 - type: map_at_5 value: 34.034 - type: mrr_at_1 value: 31.735000000000003 - type: mrr_at_10 value: 40.391 - type: mrr_at_100 value: 41.227000000000004 - type: mrr_at_1000 value: 41.288000000000004 - type: mrr_at_3 value: 37.938 - type: mrr_at_5 value: 39.193 - type: ndcg_at_1 value: 31.735000000000003 - type: ndcg_at_10 value: 41.166000000000004 - type: ndcg_at_100 value: 46.702 - type: ndcg_at_1000 value: 49.157000000000004 - type: ndcg_at_3 value: 36.274 - type: ndcg_at_5 value: 38.177 - type: precision_at_1 value: 31.735000000000003 - type: precision_at_10 value: 7.5569999999999995 - type: precision_at_100 value: 1.2109999999999999 - type: precision_at_1000 value: 0.16 - type: precision_at_3 value: 17.199 - type: precision_at_5 value: 12.123000000000001 - type: recall_at_1 value: 26.308999999999997 - type: recall_at_10 value: 53.083000000000006 - type: recall_at_100 value: 76.922 - type: recall_at_1000 value: 93.767 - type: recall_at_3 value: 39.262 - type: recall_at_5 value: 44.413000000000004 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.391250000000003 - type: map_at_10 value: 33.280166666666666 - type: map_at_100 value: 34.49566666666667 - type: map_at_1000 value: 34.61533333333333 - type: map_at_3 value: 30.52183333333333 - type: map_at_5 value: 32.06608333333333 - type: mrr_at_1 value: 29.105083333333337 - type: mrr_at_10 value: 37.44766666666666 - type: mrr_at_100 value: 38.32491666666667 - type: mrr_at_1000 value: 38.385666666666665 - type: mrr_at_3 value: 35.06883333333333 - type: mrr_at_5 value: 36.42066666666667 - type: ndcg_at_1 value: 29.105083333333337 - type: ndcg_at_10 value: 38.54358333333333 - type: ndcg_at_100 value: 43.833583333333344 - type: ndcg_at_1000 value: 46.215333333333334 - type: ndcg_at_3 value: 33.876 - type: ndcg_at_5 value: 36.05208333333333 - type: precision_at_1 value: 29.105083333333337 - type: precision_at_10 value: 6.823416666666665 - type: precision_at_100 value: 1.1270833333333334 - type: precision_at_1000 value: 0.15208333333333332 - type: precision_at_3 value: 15.696750000000002 - type: precision_at_5 value: 11.193499999999998 - type: recall_at_1 value: 24.391250000000003 - type: 
recall_at_10 value: 49.98808333333333 - type: recall_at_100 value: 73.31616666666666 - type: recall_at_1000 value: 89.96291666666667 - type: recall_at_3 value: 36.86666666666667 - type: recall_at_5 value: 42.54350000000001 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 21.995 - type: map_at_10 value: 28.807 - type: map_at_100 value: 29.813000000000002 - type: map_at_1000 value: 29.903000000000002 - type: map_at_3 value: 26.636 - type: map_at_5 value: 27.912 - type: mrr_at_1 value: 24.847 - type: mrr_at_10 value: 31.494 - type: mrr_at_100 value: 32.381 - type: mrr_at_1000 value: 32.446999999999996 - type: mrr_at_3 value: 29.473 - type: mrr_at_5 value: 30.7 - type: ndcg_at_1 value: 24.847 - type: ndcg_at_10 value: 32.818999999999996 - type: ndcg_at_100 value: 37.835 - type: ndcg_at_1000 value: 40.226 - type: ndcg_at_3 value: 28.811999999999998 - type: ndcg_at_5 value: 30.875999999999998 - type: precision_at_1 value: 24.847 - type: precision_at_10 value: 5.244999999999999 - type: precision_at_100 value: 0.856 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 12.577 - type: precision_at_5 value: 8.895999999999999 - type: recall_at_1 value: 21.995 - type: recall_at_10 value: 42.479 - type: recall_at_100 value: 65.337 - type: recall_at_1000 value: 83.23700000000001 - type: recall_at_3 value: 31.573 - type: recall_at_5 value: 36.684 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 15.751000000000001 - type: map_at_10 value: 21.909 - type: map_at_100 value: 23.064 - type: map_at_1000 value: 23.205000000000002 - type: map_at_3 value: 20.138 - type: map_at_5 value: 20.973 - type: mrr_at_1 value: 19.305 - type: mrr_at_10 value: 25.647 - type: mrr_at_100 value: 26.659 - type: mrr_at_1000 value: 26.748 - type: mrr_at_3 value: 23.933 - type: mrr_at_5 value: 24.754 - type: ndcg_at_1 value: 19.305 - type: ndcg_at_10 value: 25.886 - type: ndcg_at_100 value: 31.56 - type: ndcg_at_1000 value: 34.799 - type: ndcg_at_3 value: 22.708000000000002 - type: ndcg_at_5 value: 23.838 - type: precision_at_1 value: 19.305 - type: precision_at_10 value: 4.677 - type: precision_at_100 value: 0.895 - type: precision_at_1000 value: 0.136 - type: precision_at_3 value: 10.771 - type: precision_at_5 value: 7.46 - type: recall_at_1 value: 15.751000000000001 - type: recall_at_10 value: 34.156 - type: recall_at_100 value: 59.899 - type: recall_at_1000 value: 83.08 - type: recall_at_3 value: 24.772 - type: recall_at_5 value: 28.009 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.34 - type: map_at_10 value: 32.383 - type: map_at_100 value: 33.629999999999995 - type: map_at_1000 value: 33.735 - type: map_at_3 value: 29.68 - type: map_at_5 value: 31.270999999999997 - type: mrr_at_1 value: 27.612 - type: mrr_at_10 value: 36.381 - type: mrr_at_100 value: 37.351 - type: mrr_at_1000 value: 37.411 - type: mrr_at_3 value: 33.893 - type: mrr_at_5 value: 35.353 - type: ndcg_at_1 value: 27.612 - type: ndcg_at_10 value: 37.714999999999996 - type: ndcg_at_100 value: 43.525000000000006 - type: ndcg_at_1000 value: 45.812999999999995 - type: ndcg_at_3 value: 32.796 - type: ndcg_at_5 value: 35.243 - type: precision_at_1 value: 27.612 - type: precision_at_10 
value: 6.465 - type: precision_at_100 value: 1.0619999999999998 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 15.049999999999999 - type: precision_at_5 value: 10.764999999999999 - type: recall_at_1 value: 23.34 - type: recall_at_10 value: 49.856 - type: recall_at_100 value: 75.334 - type: recall_at_1000 value: 91.156 - type: recall_at_3 value: 36.497 - type: recall_at_5 value: 42.769 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.097 - type: map_at_10 value: 34.599999999999994 - type: map_at_100 value: 36.174 - type: map_at_1000 value: 36.398 - type: map_at_3 value: 31.781 - type: map_at_5 value: 33.22 - type: mrr_at_1 value: 31.225 - type: mrr_at_10 value: 39.873 - type: mrr_at_100 value: 40.853 - type: mrr_at_1000 value: 40.904 - type: mrr_at_3 value: 37.681 - type: mrr_at_5 value: 38.669 - type: ndcg_at_1 value: 31.225 - type: ndcg_at_10 value: 40.586 - type: ndcg_at_100 value: 46.226 - type: ndcg_at_1000 value: 48.788 - type: ndcg_at_3 value: 36.258 - type: ndcg_at_5 value: 37.848 - type: precision_at_1 value: 31.225 - type: precision_at_10 value: 7.707999999999999 - type: precision_at_100 value: 1.536 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 17.26 - type: precision_at_5 value: 12.253 - type: recall_at_1 value: 25.097 - type: recall_at_10 value: 51.602000000000004 - type: recall_at_100 value: 76.854 - type: recall_at_1000 value: 93.303 - type: recall_at_3 value: 38.68 - type: recall_at_5 value: 43.258 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 17.689 - type: map_at_10 value: 25.291000000000004 - type: map_at_100 value: 26.262 - type: map_at_1000 value: 26.372 - type: map_at_3 value: 22.916 - type: map_at_5 value: 24.315 - type: mrr_at_1 value: 19.409000000000002 - type: mrr_at_10 value: 27.233 - type: mrr_at_100 value: 28.109 - type: mrr_at_1000 value: 28.192 - type: mrr_at_3 value: 24.892 - type: mrr_at_5 value: 26.278000000000002 - type: ndcg_at_1 value: 19.409000000000002 - type: ndcg_at_10 value: 29.809 - type: ndcg_at_100 value: 34.936 - type: ndcg_at_1000 value: 37.852000000000004 - type: ndcg_at_3 value: 25.179000000000002 - type: ndcg_at_5 value: 27.563 - type: precision_at_1 value: 19.409000000000002 - type: precision_at_10 value: 4.861 - type: precision_at_100 value: 0.8 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 11.029 - type: precision_at_5 value: 7.985 - type: recall_at_1 value: 17.689 - type: recall_at_10 value: 41.724 - type: recall_at_100 value: 65.95299999999999 - type: recall_at_1000 value: 88.094 - type: recall_at_3 value: 29.621 - type: recall_at_5 value: 35.179 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 10.581 - type: map_at_10 value: 18.944 - type: map_at_100 value: 20.812 - type: map_at_1000 value: 21.002000000000002 - type: map_at_3 value: 15.661 - type: map_at_5 value: 17.502000000000002 - type: mrr_at_1 value: 23.388 - type: mrr_at_10 value: 34.263 - type: mrr_at_100 value: 35.364000000000004 - type: mrr_at_1000 value: 35.409 - type: mrr_at_3 value: 30.586000000000002 - type: mrr_at_5 value: 32.928000000000004 - type: ndcg_at_1 value: 23.388 - type: ndcg_at_10 value: 26.56 - type: ndcg_at_100 value: 34.248 - 
type: ndcg_at_1000 value: 37.779 - type: ndcg_at_3 value: 21.179000000000002 - type: ndcg_at_5 value: 23.504 - type: precision_at_1 value: 23.388 - type: precision_at_10 value: 8.476 - type: precision_at_100 value: 1.672 - type: precision_at_1000 value: 0.233 - type: precision_at_3 value: 15.852 - type: precision_at_5 value: 12.73 - type: recall_at_1 value: 10.581 - type: recall_at_10 value: 32.512 - type: recall_at_100 value: 59.313 - type: recall_at_1000 value: 79.25 - type: recall_at_3 value: 19.912 - type: recall_at_5 value: 25.832 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 9.35 - type: map_at_10 value: 20.134 - type: map_at_100 value: 28.975 - type: map_at_1000 value: 30.709999999999997 - type: map_at_3 value: 14.513000000000002 - type: map_at_5 value: 16.671 - type: mrr_at_1 value: 69.75 - type: mrr_at_10 value: 77.67699999999999 - type: mrr_at_100 value: 77.97500000000001 - type: mrr_at_1000 value: 77.985 - type: mrr_at_3 value: 76.292 - type: mrr_at_5 value: 77.179 - type: ndcg_at_1 value: 56.49999999999999 - type: ndcg_at_10 value: 42.226 - type: ndcg_at_100 value: 47.562 - type: ndcg_at_1000 value: 54.923 - type: ndcg_at_3 value: 46.564 - type: ndcg_at_5 value: 43.830000000000005 - type: precision_at_1 value: 69.75 - type: precision_at_10 value: 33.525 - type: precision_at_100 value: 11.035 - type: precision_at_1000 value: 2.206 - type: precision_at_3 value: 49.75 - type: precision_at_5 value: 42 - type: recall_at_1 value: 9.35 - type: recall_at_10 value: 25.793 - type: recall_at_100 value: 54.186 - type: recall_at_1000 value: 77.81 - type: recall_at_3 value: 15.770000000000001 - type: recall_at_5 value: 19.09 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.945 - type: f1 value: 42.07407842992542 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 71.04599999999999 - type: map_at_10 value: 80.718 - type: map_at_100 value: 80.961 - type: map_at_1000 value: 80.974 - type: map_at_3 value: 79.49199999999999 - type: map_at_5 value: 80.32000000000001 - type: mrr_at_1 value: 76.388 - type: mrr_at_10 value: 85.214 - type: mrr_at_100 value: 85.302 - type: mrr_at_1000 value: 85.302 - type: mrr_at_3 value: 84.373 - type: mrr_at_5 value: 84.979 - type: ndcg_at_1 value: 76.388 - type: ndcg_at_10 value: 84.987 - type: ndcg_at_100 value: 85.835 - type: ndcg_at_1000 value: 86.04899999999999 - type: ndcg_at_3 value: 83.04 - type: ndcg_at_5 value: 84.22500000000001 - type: precision_at_1 value: 76.388 - type: precision_at_10 value: 10.35 - type: precision_at_100 value: 1.099 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 32.108 - type: precision_at_5 value: 20.033 - type: recall_at_1 value: 71.04599999999999 - type: recall_at_10 value: 93.547 - type: recall_at_100 value: 96.887 - type: recall_at_1000 value: 98.158 - type: recall_at_3 value: 88.346 - type: recall_at_5 value: 91.321 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 19.8 - type: map_at_10 value: 31.979999999999997 - type: map_at_100 value: 33.876 - type: map_at_1000 value: 34.056999999999995 - type: map_at_3 value: 28.067999999999998 - type: map_at_5 value: 30.066 - 
type: mrr_at_1 value: 38.735 - type: mrr_at_10 value: 47.749 - type: mrr_at_100 value: 48.605 - type: mrr_at_1000 value: 48.644999999999996 - type: mrr_at_3 value: 45.165 - type: mrr_at_5 value: 46.646 - type: ndcg_at_1 value: 38.735 - type: ndcg_at_10 value: 39.883 - type: ndcg_at_100 value: 46.983000000000004 - type: ndcg_at_1000 value: 50.043000000000006 - type: ndcg_at_3 value: 35.943000000000005 - type: ndcg_at_5 value: 37.119 - type: precision_at_1 value: 38.735 - type: precision_at_10 value: 10.940999999999999 - type: precision_at_100 value: 1.836 - type: precision_at_1000 value: 0.23900000000000002 - type: precision_at_3 value: 23.817 - type: precision_at_5 value: 17.346 - type: recall_at_1 value: 19.8 - type: recall_at_10 value: 47.082 - type: recall_at_100 value: 73.247 - type: recall_at_1000 value: 91.633 - type: recall_at_3 value: 33.201 - type: recall_at_5 value: 38.81 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 38.102999999999994 - type: map_at_10 value: 60.547 - type: map_at_100 value: 61.466 - type: map_at_1000 value: 61.526 - type: map_at_3 value: 56.973 - type: map_at_5 value: 59.244 - type: mrr_at_1 value: 76.205 - type: mrr_at_10 value: 82.816 - type: mrr_at_100 value: 83.002 - type: mrr_at_1000 value: 83.009 - type: mrr_at_3 value: 81.747 - type: mrr_at_5 value: 82.467 - type: ndcg_at_1 value: 76.205 - type: ndcg_at_10 value: 69.15 - type: ndcg_at_100 value: 72.297 - type: ndcg_at_1000 value: 73.443 - type: ndcg_at_3 value: 64.07000000000001 - type: ndcg_at_5 value: 66.96600000000001 - type: precision_at_1 value: 76.205 - type: precision_at_10 value: 14.601 - type: precision_at_100 value: 1.7049999999999998 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 41.202 - type: precision_at_5 value: 27.006000000000004 - type: recall_at_1 value: 38.102999999999994 - type: recall_at_10 value: 73.005 - type: recall_at_100 value: 85.253 - type: recall_at_1000 value: 92.795 - type: recall_at_3 value: 61.803 - type: recall_at_5 value: 67.515 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 86.15 - type: ap value: 80.36282825265391 - type: f1 value: 86.07368510726472 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 22.6 - type: map_at_10 value: 34.887 - type: map_at_100 value: 36.069 - type: map_at_1000 value: 36.115 - type: map_at_3 value: 31.067 - type: map_at_5 value: 33.300000000000004 - type: mrr_at_1 value: 23.238 - type: mrr_at_10 value: 35.47 - type: mrr_at_100 value: 36.599 - type: mrr_at_1000 value: 36.64 - type: mrr_at_3 value: 31.735999999999997 - type: mrr_at_5 value: 33.939 - type: ndcg_at_1 value: 23.252 - type: ndcg_at_10 value: 41.765 - type: ndcg_at_100 value: 47.402 - type: ndcg_at_1000 value: 48.562 - type: ndcg_at_3 value: 34.016999999999996 - type: ndcg_at_5 value: 38.016 - type: precision_at_1 value: 23.252 - type: precision_at_10 value: 6.569 - type: precision_at_100 value: 0.938 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.479000000000001 - type: precision_at_5 value: 10.722 - type: recall_at_1 value: 22.6 - type: recall_at_10 value: 62.919000000000004 - type: recall_at_100 value: 88.82 - type: recall_at_1000 value: 97.71600000000001 - type: recall_at_3 value: 41.896 - type: 
recall_at_5 value: 51.537 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.69357045143639 - type: f1 value: 93.55489858177597 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 75.31235750114 - type: f1 value: 57.891491963121155 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.04303967720243 - type: f1 value: 70.51516022297616 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.65299260255549 - type: f1 value: 77.49059766538576 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.458906115906597 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 28.9851513122443 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.2916268497217 - type: mrr value: 32.328276715593816 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.3740000000000006 - type: map_at_10 value: 13.089999999999998 - type: map_at_100 value: 16.512 - type: map_at_1000 value: 18.014 - type: map_at_3 value: 9.671000000000001 - type: map_at_5 value: 11.199 - type: mrr_at_1 value: 46.749 - type: mrr_at_10 value: 55.367 - type: mrr_at_100 value: 56.021 - type: mrr_at_1000 value: 56.058 - type: mrr_at_3 value: 53.30200000000001 - type: mrr_at_5 value: 54.773 - type: ndcg_at_1 value: 45.046 - type: ndcg_at_10 value: 35.388999999999996 - type: ndcg_at_100 value: 32.175 - type: ndcg_at_1000 value: 41.018 - type: ndcg_at_3 value: 40.244 - type: ndcg_at_5 value: 38.267 - type: precision_at_1 value: 46.749 - type: precision_at_10 value: 26.563 - type: precision_at_100 value: 8.074 - type: precision_at_1000 value: 2.099 - type: precision_at_3 value: 37.358000000000004 - type: precision_at_5 value: 33.003 - type: recall_at_1 value: 6.3740000000000006 - type: recall_at_10 value: 16.805999999999997 - type: recall_at_100 value: 31.871 - type: recall_at_1000 value: 64.098 - type: recall_at_3 value: 10.383000000000001 - type: recall_at_5 value: 13.166 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 34.847 - type: map_at_10 value: 50.532 - type: map_at_100 value: 51.504000000000005 - type: map_at_1000 value: 51.528 - type: map_at_3 value: 46.219 - type: map_at_5 value: 48.868 - type: mrr_at_1 value: 39.137 - type: mrr_at_10 value: 53.157 - type: mrr_at_100 value: 53.839999999999996 - type: mrr_at_1000 value: 53.857 - type: mrr_at_3 value: 49.667 - type: 
mrr_at_5 value: 51.847 - type: ndcg_at_1 value: 39.108 - type: ndcg_at_10 value: 58.221000000000004 - type: ndcg_at_100 value: 62.021 - type: ndcg_at_1000 value: 62.57 - type: ndcg_at_3 value: 50.27199999999999 - type: ndcg_at_5 value: 54.623999999999995 - type: precision_at_1 value: 39.108 - type: precision_at_10 value: 9.397 - type: precision_at_100 value: 1.1520000000000001 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 22.644000000000002 - type: precision_at_5 value: 16.141 - type: recall_at_1 value: 34.847 - type: recall_at_10 value: 78.945 - type: recall_at_100 value: 94.793 - type: recall_at_1000 value: 98.904 - type: recall_at_3 value: 58.56 - type: recall_at_5 value: 68.535 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 68.728 - type: map_at_10 value: 82.537 - type: map_at_100 value: 83.218 - type: map_at_1000 value: 83.238 - type: map_at_3 value: 79.586 - type: map_at_5 value: 81.416 - type: mrr_at_1 value: 79.17999999999999 - type: mrr_at_10 value: 85.79299999999999 - type: mrr_at_100 value: 85.937 - type: mrr_at_1000 value: 85.938 - type: mrr_at_3 value: 84.748 - type: mrr_at_5 value: 85.431 - type: ndcg_at_1 value: 79.17 - type: ndcg_at_10 value: 86.555 - type: ndcg_at_100 value: 88.005 - type: ndcg_at_1000 value: 88.146 - type: ndcg_at_3 value: 83.557 - type: ndcg_at_5 value: 85.152 - type: precision_at_1 value: 79.17 - type: precision_at_10 value: 13.163 - type: precision_at_100 value: 1.52 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 36.53 - type: precision_at_5 value: 24.046 - type: recall_at_1 value: 68.728 - type: recall_at_10 value: 94.217 - type: recall_at_100 value: 99.295 - type: recall_at_1000 value: 99.964 - type: recall_at_3 value: 85.646 - type: recall_at_5 value: 90.113 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 56.15680266226348 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 63.4318549229047 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.353 - type: map_at_10 value: 10.956000000000001 - type: map_at_100 value: 12.873999999999999 - type: map_at_1000 value: 13.177 - type: map_at_3 value: 7.854 - type: map_at_5 value: 9.327 - type: mrr_at_1 value: 21.4 - type: mrr_at_10 value: 31.948999999999998 - type: mrr_at_100 value: 33.039 - type: mrr_at_1000 value: 33.106 - type: mrr_at_3 value: 28.449999999999996 - type: mrr_at_5 value: 30.535 - type: ndcg_at_1 value: 21.4 - type: ndcg_at_10 value: 18.694 - type: ndcg_at_100 value: 26.275 - type: ndcg_at_1000 value: 31.836 - type: ndcg_at_3 value: 17.559 - type: ndcg_at_5 value: 15.372 - type: precision_at_1 value: 21.4 - type: precision_at_10 value: 9.790000000000001 - type: precision_at_100 value: 2.0709999999999997 - type: precision_at_1000 value: 0.34099999999999997 - type: precision_at_3 value: 16.467000000000002 - type: precision_at_5 value: 13.54 - type: recall_at_1 value: 4.353 - type: recall_at_10 value: 19.892000000000003 - type: recall_at_100 value: 42.067 - type: recall_at_1000 value: 69.268 - type: recall_at_3 value: 10.042 - type: recall_at_5 value: 
13.741999999999999 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.75433886279843 - type: cos_sim_spearman value: 78.29727771767095 - type: euclidean_pearson value: 80.83057828506621 - type: euclidean_spearman value: 78.35203149750356 - type: manhattan_pearson value: 80.7403553891142 - type: manhattan_spearman value: 78.33670488531051 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.59999465280839 - type: cos_sim_spearman value: 75.79279003980383 - type: euclidean_pearson value: 82.29895375956758 - type: euclidean_spearman value: 77.33856514102094 - type: manhattan_pearson value: 82.22694214534756 - type: manhattan_spearman value: 77.3028993008695 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 83.09296929691297 - type: cos_sim_spearman value: 83.58056936846941 - type: euclidean_pearson value: 83.84067483060005 - type: euclidean_spearman value: 84.45155680480985 - type: manhattan_pearson value: 83.82353052971942 - type: manhattan_spearman value: 84.43030567861112 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 82.74616852320915 - type: cos_sim_spearman value: 79.948683747966 - type: euclidean_pearson value: 81.55702283757084 - type: euclidean_spearman value: 80.1721505114231 - type: manhattan_pearson value: 81.52251518619441 - type: manhattan_spearman value: 80.1469800135577 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.97170104226318 - type: cos_sim_spearman value: 88.82021731518206 - type: euclidean_pearson value: 87.92950547187615 - type: euclidean_spearman value: 88.67043634645866 - type: manhattan_pearson value: 87.90668112827639 - type: manhattan_spearman value: 88.64471082785317 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 83.02790375770599 - type: cos_sim_spearman value: 84.46308496590792 - type: euclidean_pearson value: 84.29430000414911 - type: euclidean_spearman value: 84.77298303589936 - type: manhattan_pearson value: 84.23919291368665 - type: manhattan_spearman value: 84.75272234871308 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.62885108477064 - type: cos_sim_spearman value: 87.58456196391622 - type: euclidean_pearson value: 88.2602775281007 - type: euclidean_spearman value: 87.51556278299846 - type: manhattan_pearson value: 88.11224053672842 - type: manhattan_spearman value: 87.4336094383095 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 63.98187965128411 - type: cos_sim_spearman value: 64.0653163219731 - type: euclidean_pearson value: 62.30616725924099 - 
type: euclidean_spearman value: 61.556971332295916 - type: manhattan_pearson value: 62.07642330128549 - type: manhattan_spearman value: 61.155494129828 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.6089703921826 - type: cos_sim_spearman value: 86.52303197250791 - type: euclidean_pearson value: 85.95801955963246 - type: euclidean_spearman value: 86.25242424112962 - type: manhattan_pearson value: 85.88829100470312 - type: manhattan_spearman value: 86.18742955805165 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 83.02282098487036 - type: mrr value: 95.05126409538174 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 55.928 - type: map_at_10 value: 67.308 - type: map_at_100 value: 67.89500000000001 - type: map_at_1000 value: 67.91199999999999 - type: map_at_3 value: 65.091 - type: map_at_5 value: 66.412 - type: mrr_at_1 value: 58.667 - type: mrr_at_10 value: 68.401 - type: mrr_at_100 value: 68.804 - type: mrr_at_1000 value: 68.819 - type: mrr_at_3 value: 66.72200000000001 - type: mrr_at_5 value: 67.72200000000001 - type: ndcg_at_1 value: 58.667 - type: ndcg_at_10 value: 71.944 - type: ndcg_at_100 value: 74.464 - type: ndcg_at_1000 value: 74.82799999999999 - type: ndcg_at_3 value: 68.257 - type: ndcg_at_5 value: 70.10300000000001 - type: precision_at_1 value: 58.667 - type: precision_at_10 value: 9.533 - type: precision_at_100 value: 1.09 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 27.222 - type: precision_at_5 value: 17.533 - type: recall_at_1 value: 55.928 - type: recall_at_10 value: 84.65 - type: recall_at_100 value: 96.267 - type: recall_at_1000 value: 99 - type: recall_at_3 value: 74.656 - type: recall_at_5 value: 79.489 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.79009900990098 - type: cos_sim_ap value: 94.5795129511524 - type: cos_sim_f1 value: 89.34673366834171 - type: cos_sim_precision value: 89.79797979797979 - type: cos_sim_recall value: 88.9 - type: dot_accuracy value: 99.53465346534654 - type: dot_ap value: 81.56492504352725 - type: dot_f1 value: 76.33816908454227 - type: dot_precision value: 76.37637637637637 - type: dot_recall value: 76.3 - type: euclidean_accuracy value: 99.78514851485149 - type: euclidean_ap value: 94.59134620408962 - type: euclidean_f1 value: 88.96484375 - type: euclidean_precision value: 86.92748091603053 - type: euclidean_recall value: 91.10000000000001 - type: manhattan_accuracy value: 99.78415841584159 - type: manhattan_ap value: 94.5190197328845 - type: manhattan_f1 value: 88.84462151394423 - type: manhattan_precision value: 88.4920634920635 - type: manhattan_recall value: 89.2 - type: max_accuracy value: 99.79009900990098 - type: max_ap value: 94.59134620408962 - type: max_f1 value: 89.34673366834171 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 
65.1487505617497 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 32.502518166001856 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 50.33775480236701 - type: mrr value: 51.17302223919871 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.561111309808208 - type: cos_sim_spearman value: 30.2839254379273 - type: dot_pearson value: 29.560242291401973 - type: dot_spearman value: 30.51527274679116 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.215 - type: map_at_10 value: 1.752 - type: map_at_100 value: 9.258 - type: map_at_1000 value: 23.438 - type: map_at_3 value: 0.6 - type: map_at_5 value: 0.968 - type: mrr_at_1 value: 84 - type: mrr_at_10 value: 91.333 - type: mrr_at_100 value: 91.333 - type: mrr_at_1000 value: 91.333 - type: mrr_at_3 value: 91.333 - type: mrr_at_5 value: 91.333 - type: ndcg_at_1 value: 75 - type: ndcg_at_10 value: 69.596 - type: ndcg_at_100 value: 51.970000000000006 - type: ndcg_at_1000 value: 48.864999999999995 - type: ndcg_at_3 value: 73.92699999999999 - type: ndcg_at_5 value: 73.175 - type: precision_at_1 value: 84 - type: precision_at_10 value: 74 - type: precision_at_100 value: 53.2 - type: precision_at_1000 value: 21.836 - type: precision_at_3 value: 79.333 - type: precision_at_5 value: 78.4 - type: recall_at_1 value: 0.215 - type: recall_at_10 value: 1.9609999999999999 - type: recall_at_100 value: 12.809999999999999 - type: recall_at_1000 value: 46.418 - type: recall_at_3 value: 0.6479999999999999 - type: recall_at_5 value: 1.057 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 3.066 - type: map_at_10 value: 10.508000000000001 - type: map_at_100 value: 16.258 - type: map_at_1000 value: 17.705000000000002 - type: map_at_3 value: 6.157 - type: map_at_5 value: 7.510999999999999 - type: mrr_at_1 value: 34.694 - type: mrr_at_10 value: 48.786 - type: mrr_at_100 value: 49.619 - type: mrr_at_1000 value: 49.619 - type: mrr_at_3 value: 45.918 - type: mrr_at_5 value: 46.837 - type: ndcg_at_1 value: 31.633 - type: ndcg_at_10 value: 26.401999999999997 - type: ndcg_at_100 value: 37.139 - type: ndcg_at_1000 value: 48.012 - type: ndcg_at_3 value: 31.875999999999998 - type: ndcg_at_5 value: 27.383000000000003 - type: precision_at_1 value: 34.694 - type: precision_at_10 value: 22.857 - type: precision_at_100 value: 7.611999999999999 - type: precision_at_1000 value: 1.492 - type: precision_at_3 value: 33.333 - type: precision_at_5 value: 26.122 - type: recall_at_1 value: 3.066 - type: recall_at_10 value: 16.239 - type: recall_at_100 value: 47.29 - type: recall_at_1000 value: 81.137 - type: recall_at_3 value: 7.069 - type: recall_at_5 value: 9.483 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 72.1126 - type: ap 
value: 14.710862719285753 - type: f1 value: 55.437808972378846 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 60.39049235993209 - type: f1 value: 60.69810537250234 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 48.15576640316866 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.52917684925792 - type: cos_sim_ap value: 75.97497873817315 - type: cos_sim_f1 value: 70.01151926276718 - type: cos_sim_precision value: 67.98409147402435 - type: cos_sim_recall value: 72.16358839050132 - type: dot_accuracy value: 82.47004828038385 - type: dot_ap value: 62.48739894974198 - type: dot_f1 value: 59.13107511045656 - type: dot_precision value: 55.27765029830197 - type: dot_recall value: 63.562005277044854 - type: euclidean_accuracy value: 86.46361089586935 - type: euclidean_ap value: 75.59282886839452 - type: euclidean_f1 value: 69.6465443945099 - type: euclidean_precision value: 64.52847175331982 - type: euclidean_recall value: 75.64643799472296 - type: manhattan_accuracy value: 86.43380818978363 - type: manhattan_ap value: 75.5742420974403 - type: manhattan_f1 value: 69.8636926889715 - type: manhattan_precision value: 65.8644859813084 - type: manhattan_recall value: 74.37994722955145 - type: max_accuracy value: 86.52917684925792 - type: max_ap value: 75.97497873817315 - type: max_f1 value: 70.01151926276718 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.29056545193464 - type: cos_sim_ap value: 86.63028865482376 - type: cos_sim_f1 value: 79.18166458532285 - type: cos_sim_precision value: 75.70585756426465 - type: cos_sim_recall value: 82.99199260856174 - type: dot_accuracy value: 85.23305002522606 - type: dot_ap value: 76.0482687263196 - type: dot_f1 value: 70.80484330484332 - type: dot_precision value: 65.86933474688577 - type: dot_recall value: 76.53988296889437 - type: euclidean_accuracy value: 89.26145845461248 - type: euclidean_ap value: 86.54073288416006 - type: euclidean_f1 value: 78.9721371479794 - type: euclidean_precision value: 76.68649354417525 - type: euclidean_recall value: 81.39821373575609 - type: manhattan_accuracy value: 89.22847052431405 - type: manhattan_ap value: 86.51250729037905 - type: manhattan_f1 value: 78.94601825044894 - type: manhattan_precision value: 75.32694594027555 - type: manhattan_recall value: 82.93039728980598 - type: max_accuracy value: 89.29056545193464 - type: max_ap value: 86.63028865482376 - type: max_f1 value: 79.18166458532285 language: - en license: mit --- # E5-base-v2 [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 This model has 12 layers and the embedding size is 768. 
## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ". # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."] tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-base-v2') model = AutoModel.from_pretrained('intfloat/e5-base-v2') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Training Details Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf). ## Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Support for Sentence Transformers Below is an example for usage with sentence_transformers. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('intfloat/e5-base-v2') input_texts = [ 'query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments." ] embeddings = model.encode(input_texts, normalize_embeddings=True) ``` Package requirements `pip install sentence_transformers~=2.2.2` Contributors: [michaelfeil](https://huggingface.co/michaelfeil) ## FAQ **1. Do I need to add the prefix "query: " and "passage: " to input texts?** Yes, this is how the model is trained, otherwise you will see a performance degradation. Here are some rules of thumb: - Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval. 
- Use "query: " prefix for symmetric tasks such as semantic similarity and paraphrase retrieval (a short sketch at the end of this card illustrates this). - Use "query: " prefix if you want to use embeddings as features, such as linear probing classification or clustering. **2. Why are my reproduced results slightly different from those reported in the model card?** Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences. **3. Why do the cosine similarity scores distribute around 0.7 to 1.0?** This is known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss. For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores rather than their absolute values, so this should not be an issue. ## Citation If you find our paper or models helpful, please consider citing as follows: ``` @article{wang2022text, title={Text Embeddings by Weakly-Supervised Contrastive Pre-training}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2212.03533}, year={2022} } ``` ## Limitations This model only works for English texts. Long texts will be truncated to at most 512 tokens.
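## Prefix usage for symmetric tasks

As a complement to the FAQ above, here is a minimal sketch of the symmetric-task convention, where *both* sides of a sentence pair get the `"query: "` prefix before encoding. The sentence pair is an arbitrary illustration, not taken from any benchmark.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('intfloat/e5-base-v2')

# Symmetric task (semantic similarity): prefix both sentences with "query: ".
sentences = [
    'query: how do I cook rice on the stove',
    'query: what is the stovetop method for cooking rice',
]
embeddings = model.encode(sentences, normalize_embeddings=True)

# With L2-normalized embeddings, the dot product equals cosine similarity.
similarity = embeddings[0] @ embeddings[1]
print(similarity)  # typically lands in the 0.7-1.0 range discussed in FAQ 3
```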
xrsula/maybe
xrsula
2025-06-05T15:28:12Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "unsloth", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T15:24:56Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tranhuonglan/Qwen3-06B-base-quantization-modifier-w8a8
tranhuonglan
2025-06-05T15:26:22Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "compressed-tensors", "region:us" ]
text-generation
2025-06-05T15:20:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
thejaminator/5jun-bad-newlines-8000medical-4e-05-qwen3_32b-epochs1
thejaminator
2025-06-05T15:21:39Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:unsloth/Qwen3-32B", "base_model:finetune:unsloth/Qwen3-32B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-05T15:20:26Z
--- base_model: unsloth/Qwen3-32B tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** thejaminator - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen3-32B This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
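The card ships without usage code; the sketch below is one plausible way to run the checkpoint with plain `transformers`, assuming the repo contains full merged weights rather than a LoRA adapter (Unsloth can export either; an adapter-only repo would need PEFT loading instead). The prompt is an arbitrary example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thejaminator/5jun-bad-newlines-8000medical-4e-05-qwen3_32b-epochs1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# A 32B model needs multiple GPUs or heavy offloading; device_map="auto" lets
# accelerate decide the placement.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Describe the common symptoms of iron-deficiency anemia.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```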
Diamantis99/akEymfo
Diamantis99
2025-06-05T15:19:34Z
0
0
segmentation-models-pytorch
[ "segmentation-models-pytorch", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "semantic-segmentation", "pytorch", "image-segmentation", "license:mit", "region:us" ]
image-segmentation
2025-06-05T15:19:26Z
--- library_name: segmentation-models-pytorch license: mit pipeline_tag: image-segmentation tags: - model_hub_mixin - pytorch_model_hub_mixin - segmentation-models-pytorch - semantic-segmentation - pytorch languages: - python --- # Segformer Model Card Table of Contents: - [Load trained model](#load-trained-model) - [Model init parameters](#model-init-parameters) - [Model metrics](#model-metrics) - [Dataset](#dataset) ## Load trained model ```python import segmentation_models_pytorch as smp model = smp.from_pretrained("<save-directory-or-this-repo>") ``` ## Model init parameters ```python model_init_params = { "encoder_name": "mobileone_s4", "encoder_depth": 5, "encoder_weights": "imagenet", "decoder_segmentation_channels": 256, "in_channels": 3, "classes": 1, "activation": None, "aux_params": None } ``` ## Model metrics ```json [ { "test_per_image_iou": 0.8617135286331177, "test_dataset_iou": 0.8822827339172363 } ] ``` ## Dataset Dataset name: VisionPipe ## More Information - Library: https://github.com/qubvel/segmentation_models.pytorch - Docs: https://smp.readthedocs.io/en/latest/ This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin)
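The card shows how to load the checkpoint but not how to run it. Below is a minimal inference sketch, assuming a 3-channel input whose height and width are divisible by 32 (the usual constraint for encoders with `encoder_depth=5`); since `classes=1` and `activation=None`, the output is raw logits for a binary mask.

```python
import torch
import segmentation_models_pytorch as smp

model = smp.from_pretrained("Diamantis99/akEymfo")
model.eval()

# Dummy batch; real images should be normalized the same way as the training data.
image = torch.randn(1, 3, 512, 512)

with torch.inference_mode():
    logits = model(image)             # (1, 1, 512, 512) raw logits
    mask = logits.sigmoid() > 0.5     # binary mask via a 0.5 threshold

print(mask.shape, mask.sum())
```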
shanhai99/art-nouveau-lora
shanhai99
2025-06-05T15:17:35Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-06-05T15:15:46Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- (art nouveau style, in the 19th century, An elegant smiling woman (looking at the camera:1.4) in a light spring dress, adorned with delicate floral motifs, walks along a path lined with trees in full bloom. Her brown hair floats gently in the cool morning breeze, while flower petals fall around her, creating a colorful carpet beneath her feet. Her eyes gaze in wonder at the barges gliding slowly along the canal, their glistening surface animated by the reflections of the spring sun. The light mist slowly dissipates, giving way to a golden light that illuminates the details of her dress and the textures of the green leaves. The image evokes a joyful connection to nature, inspired by the Romantic style, with careful composition and hyper-detailed rendering in 8K, free of distortion or text, with perfect anatomy of the figures. output: url: images/647c6ee2-9822-4e06-83ab-ce9982158905.jpeg base_model: black-forest-labs/FLUX.1-dev instance_prompt: art nouveau license: creativeml-openrail-m --- # art-nouveau <Gallery /> ## Trigger words You should use `art nouveau` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/shanhai99/art-nouveau-lora/tree/main) them in the Files & versions tab.
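## Usage

The card does not include loading code; the following is a minimal sketch, assuming the Safetensors LoRA is compatible with diffusers' `load_lora_weights` (true for most FLUX LoRAs) and that you have access to the gated `black-forest-labs/FLUX.1-dev` base model. The prompt and sampler settings are illustrative, not prescribed by the card.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("shanhai99/art-nouveau-lora")

# "art nouveau" is the trigger phrase listed under Trigger words.
image = pipe(
    "art nouveau, an elegant woman in a light spring dress walking along a canal",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("art_nouveau.png")
```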
gfortune/roadwork6
gfortune
2025-06-05T15:17:16Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-06-05T15:16:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gioto64/t5-finetuned-v2
gioto64
2025-06-05T15:15:41Z
1
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-04T20:18:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
shulijia/MNLP_M3_mcqa_model_simpleVal_m1_cot
shulijia
2025-06-05T15:14:58Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:Qwen/Qwen3-0.6B-Base", "base_model:finetune:Qwen/Qwen3-0.6B-Base", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T15:08:33Z
--- base_model: Qwen/Qwen3-0.6B-Base library_name: transformers model_name: MNLP_M3_mcqa_model_simpleVal_m1_cot tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for MNLP_M3_mcqa_model_simpleVal_m1_cot This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="shulijia/MNLP_M3_mcqa_model_simpleVal_m1_cot", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.52.2 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
PrunaAI/facebook-KernelLLM-HQQ-4bit-smashed
PrunaAI
2025-06-05T15:14:25Z
1
0
null
[ "llama", "pruna-ai", "base_model:facebook/KernelLLM", "base_model:finetune:facebook/KernelLLM", "region:us" ]
null
2025-06-04T14:40:45Z
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: facebook/KernelLLM metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="banner.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with hqq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to know whether the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes; measurement stops when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo facebook/KernelLLM are installed. In particular, check the python, cuda, and transformers versions. 1. Make sure that you have installed the quantization-related packages. ```bash pip install hqq ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer from hqq.engine.hf import HQQModelForCausalLM from hqq.models.hf.base import AutoHQQHFModel try: model = HQQModelForCausalLM.from_quantized("PrunaAI/facebook-KernelLLM-HQQ-4bit-smashed", device_map='auto') except Exception: model = AutoHQQHFModel.from_quantized("PrunaAI/facebook-KernelLLM-HQQ-4bit-smashed") tokenizer = AutoTokenizer.from_pretrained("facebook/KernelLLM") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) print(tokenizer.decode(outputs[0])) ``` ## Configurations The configuration info is in `smash_config.json`. This model has been smashed with pruna version 0.1.3. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model facebook/KernelLLM, which provided the base model, before using this model. The license of `pruna` is [here](https://github.com/PrunaAI/pruna/blob/main/LICENSE) on GitHub. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
Diamantis99/fUSqHV1
Diamantis99
2025-06-05T15:13:27Z
0
0
segmentation-models-pytorch
[ "segmentation-models-pytorch", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "semantic-segmentation", "pytorch", "image-segmentation", "license:mit", "region:us" ]
image-segmentation
2025-06-05T15:12:58Z
--- library_name: segmentation-models-pytorch license: mit pipeline_tag: image-segmentation tags: - model_hub_mixin - pytorch_model_hub_mixin - segmentation-models-pytorch - semantic-segmentation - pytorch languages: - python --- # Segformer Model Card Table of Contents: - [Load trained model](#load-trained-model) - [Model init parameters](#model-init-parameters) - [Model metrics](#model-metrics) - [Dataset](#dataset) ## Load trained model ```python import segmentation_models_pytorch as smp model = smp.from_pretrained("<save-directory-or-this-repo>") ``` ## Model init parameters ```python model_init_params = { "encoder_name": "mit_b5", "encoder_depth": 5, "encoder_weights": "imagenet", "decoder_segmentation_channels": 256, "in_channels": 3, "classes": 1, "activation": None, "aux_params": None } ``` ## Model metrics ```json [ { "test_per_image_iou": 0.8778313398361206, "test_dataset_iou": 0.8991026282310486 } ] ``` ## Dataset Dataset name: VisionPipe ## More Information - Library: https://github.com/qubvel/segmentation_models.pytorch - Docs: https://smp.readthedocs.io/en/latest/ This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin).
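A minimal end-to-end inference sketch is shown below. It assumes the binary-segmentation setup implied by `classes: 1` and `activation: None` above; the image path is a placeholder, and most MiT/SegFormer encoders expect input sides divisible by 32.
```python
import numpy as np
import torch
from PIL import Image
import segmentation_models_pytorch as smp

model = smp.from_pretrained("<save-directory-or-this-repo>").eval()

# Load an RGB image and convert it to a (1, 3, H, W) float tensor in [0, 1].
image = np.asarray(Image.open("example.jpg").convert("RGB"), dtype=np.float32) / 255.0
x = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0)

with torch.no_grad():
    logits = model(x)  # (1, 1, H, W) raw scores, since activation=None
    mask = (logits.sigmoid() > 0.5).squeeze().cpu().numpy()  # boolean segmentation mask
```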
mrlicmi/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF
mrlicmi
2025-06-05T15:13:12Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "base_model:quantized:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-05T15:12:41Z
--- license: mit library_name: transformers tags: - llama-cpp - gguf-my-repo base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B --- # mrlicmi/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-0528-Qwen3-8B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo mrlicmi/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo mrlicmi/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo mrlicmi/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo mrlicmi/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -c 2048 ```
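Once `llama-server` is running (default port 8080), recent llama.cpp builds also expose an OpenAI-compatible chat endpoint that you can query programmatically; a minimal sketch (the prompt is only an illustration):
```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Summarize GGUF quantization in one sentence."}],
        "max_tokens": 128
      }'
```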
gfortune/roadwork4
gfortune
2025-06-05T15:11:08Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-06-05T15:10:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SamanthaStorm/tether-sentiment-v2
SamanthaStorm
2025-06-05T15:10:58Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-05T14:57:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
GingerBled/MCQA_on_DPO_adam_no_expl_v2_e2
GingerBled
2025-06-05T15:10:23Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T15:08:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF
mradermacher
2025-06-05T15:07:38Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "reinforcement-learning", "science", "math", "code", "en", "base_model:prithivMLmods/GCIRS-Reasoning-1.5B-R1", "base_model:quantized:prithivMLmods/GCIRS-Reasoning-1.5B-R1", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
reinforcement-learning
2025-06-05T13:59:24Z
--- base_model: prithivMLmods/GCIRS-Reasoning-1.5B-R1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - reinforcement-learning - science - math - code --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/prithivMLmods/GCIRS-Reasoning-1.5B-R1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-IQ1_S.gguf) | i1-IQ1_S | 0.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-IQ2_S.gguf) | i1-IQ2_S | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-IQ2_M.gguf) | i1-IQ2_M | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.8 | very low quality | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-Q2_K.gguf) | i1-Q2_K | 0.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-IQ3_S.gguf) | i1-IQ3_S | 1.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-IQ3_M.gguf) | i1-IQ3_M | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.0 | IQ3_S 
probably better | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.1 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.2 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-Q4_0.gguf) | i1-Q4_0 | 1.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-Q4_1.gguf) | i1-Q4_1 | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.i1-Q6_K.gguf) | i1-Q6_K | 1.6 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
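If you want to script the download and run a quant locally, a minimal sketch using `huggingface_hub` and the `llama-cpp-python` bindings might look like this (the chosen quant file matches the recommended Q4_K_M entry in the table above; the prompt is just an example):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the recommended Q4_K_M imatrix quant from this repo.
model_path = hf_hub_download(
    repo_id="mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF",
    filename="GCIRS-Reasoning-1.5B-R1.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Prove that the sum of two even numbers is even.", max_tokens=256)
print(out["choices"][0]["text"])
```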
ujjwal1996/Fine_tuning_unsloth-Llama-3.2-1B-Instruct_20steps
ujjwal1996
2025-06-05T15:06:22Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-29T13:38:20Z
--- base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ujjwal1996 - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Coercer/BatchTagger
Coercer
2025-06-05T15:03:39Z
9
0
null
[ "region:us" ]
null
2025-02-10T16:01:55Z
If you got here, you are probably looking for the Colab implementation that uses this specific repo: https://colab.research.google.com/drive/1DKT5rFBTHhkyibVMK4SCYTJWHl2kaV3p?usp=sharing Original implementation (all credit goes to them): https://huggingface.co/RedRocket/JointTaggerProject
empathy-ak/vikhr-12b-v0
empathy-ak
2025-06-05T15:02:48Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-03-06T18:27:37Z
--- library_name: transformers tags: [] --- # Model Card It's a fine-tuned model based on mistralai/Mistral-Nemo-Instruct-2407 with 12B parameters, stored in quantized form. The model should be used to transform raw LLM assistant responses into empathic ones. ## How to Get Started with the Model
```python
from inference import EmpathicStylingModel

# model initialization
model = EmpathicStylingModel()

# prediction on 1 sample (Russian input: "If your phone is stolen, you can
# quickly block the payment sticker via the bank's mobile app.")
input_request = "В случае кражи телефона вы можете быстро заблокировать стикер через мобильное приложение банка."
response = model.predict(input_request)
```
## Training Details The model was fine-tuned with SFT on 353 examples (private dataset) of initial LLM assistant responses and corresponding empathic responses. #### Hardware 12 GB of VRAM is needed to run inference on one example. #### Software The model was tested with Python 3.11 and transformers==4.49.0. ## Model Card Authors - Kseniia Cheloshkina (https://huggingface.co/KseniiaCheloshkina)
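If you prefer to bypass the `inference.EmpathicStylingModel` wrapper and load the checkpoint directly with transformers, a minimal sketch follows. Treat it as an assumption-laden illustration: it presumes the standard Mistral-Nemo chat template, since the exact prompt format used during fine-tuning is not documented here.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "empathy-ak/vikhr-12b-v0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Weights are stored 4-bit (bitsandbytes), so they load in quantized form.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical usage: the chat-template prompt format is an assumption.
messages = [{"role": "user", "content": "В случае кражи телефона вы можете быстро заблокировать стикер через мобильное приложение банка."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```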
ProDev9515/roadwork-72-ctLiEN
ProDev9515
2025-06-05T15:01:17Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-06-05T15:01:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
CK0607/llama3.1-8b-sonnet-rewards-50
CK0607
2025-06-05T14:58:39Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "grpo", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-05T14:56:33Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - grpo license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** CK0607 - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)