modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-06-02 08:43:47
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 462
values | tags
sequencelengths 1
4.05k
| pipeline_tag
stringclasses 54
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-06-02 08:40:46
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
18-Mary-Claire-Haver-Videos/FULL.VIDEO.Mary.Claire.Haver.Viral.Video.Tutorial.Trending.Now | 18-Mary-Claire-Haver-Videos | 2025-06-02T04:40:22Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-02T04:29:55Z | 07 seconds ago - Mary Claire Haver Viral Video L.ea𝚔ed Original Video Viral On Social Media the internet by storm and amazed viewers on various social media platforms.
[](https://tinyurl.com/bddrvst3)
.
.
..
.
..
.
. |
raosdi/PooRvia1 | raosdi | 2025-06-01T23:31:40Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"gguf",
"code",
"text-generation-inference",
"text-generation",
"es",
"dataset:dougiefresh/systems_programming_code_conversations",
"arxiv:1910.09700",
"base_model:google/gemma-3-4b-it",
"base_model:adapter:google/gemma-3-4b-it",
"license:afl-3.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-06-01T21:24:39Z | ---
license: afl-3.0
datasets:
- dougiefresh/systems_programming_code_conversations
language:
- es
metrics:
- code_eval
base_model:
- google/gemma-3-4b-it
pipeline_tag: text-generation
library_name: adapter-transformers
tags:
- code
- text-generation-inference
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
damienbenveniste/mistral-sft | damienbenveniste | 2025-06-01T23:00:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"base_model:damienbenveniste/mistral-pretraining",
"base_model:finetune:damienbenveniste/mistral-pretraining",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T23:00:10Z | ---
base_model: damienbenveniste/mistral-pretraining
library_name: transformers
model_name: mistral-sft
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for mistral-sft
This model is a fine-tuned version of [damienbenveniste/mistral-pretraining](https://huggingface.co/damienbenveniste/mistral-pretraining).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="damienbenveniste/mistral-sft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
vignesh-waran/results_trainer | vignesh-waran | 2025-06-01T22:26:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-01T22:25:37Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: results_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4355
- Accuracy: 0.798
- F1: 0.7720
- Precision: 0.7355
- Recall: 0.8124
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 108
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.4884 | 1.0 | 250 | 0.4791 | 0.765 | 0.7029 | 0.7514 | 0.6603 |
| 0.3626 | 2.0 | 500 | 0.4355 | 0.798 | 0.7720 | 0.7355 | 0.8124 |
| 0.371 | 3.0 | 750 | 0.4388 | 0.803 | 0.7688 | 0.7599 | 0.7779 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
ReadyArt/Omega-Darker_The-Final-Directive-12B_EXL3_8.0bpw_H8 | ReadyArt | 2025-06-01T22:13:11Z | 0 | 0 | null | [
"safetensors",
"mistral",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"ERP",
"Erotic",
"Horror",
"Violence",
"text-generation",
"conversational",
"en",
"base_model:ReadyArt/Omega-Darker_The-Final-Directive-12B",
"base_model:quantized:ReadyArt/Omega-Darker_The-Final-Directive-12B",
"license:apache-2.0",
"8-bit",
"exl3",
"region:us"
] | text-generation | 2025-06-01T22:09:49Z | ---
license: apache-2.0
language:
- en
base_model:
- ReadyArt/Omega-Darker_The-Final-Directive-12B
base_model_relation: quantized
pipeline_tag: text-generation
tags:
- nsfw
- explicit
- roleplay
- unaligned
- ERP
- Erotic
- Horror
- Violence
---
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%);
color: #e1ffff !important;
text-shadow: 0 0 3px rgba(0, 0, 0, 0.7);
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@media (prefers-color-scheme: light) {
body {
background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%);
color: #002b36 !important;
text-shadow: 0 0 3px rgba(255, 255, 255, 0.7);
}
}
.container {
min-width: 100%;
margin: 0 auto;
max-width: 1200px;
background: rgba(0, 17, 22, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 0 20px rgba(0, 255, 255, 0.1);
border: 1px solid rgba(0, 255, 255, 0.2);
position: relative;
overflow: hidden;
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.5);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 3s ease-in-out infinite alternate;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
50% {
box-shadow: 0 0 15px rgba(255, 0, 255, 0.3);
border-color: rgba(255, 0, 255, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
}
.header {
text-align: center;
margin-bottom: 30px;
position: relative;
}
.header::after {
content: '';
position: absolute;
bottom: -15px;
left: 25%;
right: 25%;
height: 1px;
background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent);
animation: scanline 8s linear infinite;
display: none;
}
@keyframes scanline {
0% { background-position: -100% 0; }
100% { background-position: 200% 0; }
}
.model-name {
color: #00ffff;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(0, 255, 255, 0.5);
margin: 0;
letter-spacing: -1px;
animation: textGlow 4s ease-in-out infinite alternate;
}
@keyframes textGlow {
0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
50% { text-shadow: 0 0 20px rgba(255, 0, 255, 0.5); }
100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
}
.subtitle {
color: #00ffcc;
font-size: 1.2em;
margin-top: 10px;
animation: subtitleFade 6s ease-in-out infinite;
}
@keyframes subtitleFade {
0%, 100% { opacity: 0.8; }
50% { opacity: 1; }
}
.waifu-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.3);
position: relative;
}
.waifu-container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(0, 255, 255, 0.1) 0%,
transparent 20%,
transparent 80%,
rgba(255, 0, 255, 0.1) 100%);
pointer-events: none;
animation: gradientSlide 10s linear infinite;
}
@keyframes gradientSlide {
0% { background-position: 0% 0%; }
100% { background-position: 100% 100%; }
}
.waifu-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(0, 255, 255, 0.2);
transition: transform 0.5s ease;
}
.waifu-img:hover {
transform: scale(1.01);
}
.section {
color: #e1ffff;
margin: 25px 0;
padding: 20px;
background: rgba(5, 25, 35, 0.9);
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.15);
position: relative;
transition: all 0.3s ease;
}
.section:hover {
border-color: rgba(255, 0, 255, 0.3);
box-shadow: 0 0 15px rgba(0, 255, 255, 0.1);
}
.section::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.3);
border-radius: 8px;
pointer-events: none;
animation: sectionPulse 5s ease-in-out infinite;
}
@keyframes sectionPulse {
0%, 100% { opacity: 0.7; }
50% { opacity: 0.3; }
}
.section-title {
color: #00ffff;
font-size: 1.8em;
margin-top: 0;
text-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
position: relative;
display: inline-block;
}
.section-title::after {
content: '';
position: absolute;
bottom: -5px;
left: 0;
width: 100%;
height: 1px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
transform: scaleX(0);
transform-origin: left;
transition: transform 0.3s ease;
}
.section:hover .section-title::after {
transform: scaleX(1);
}
.quant-links {
display: grid;
grid-template-columns: repeat(2, 1fr);
gap: 15px;
margin: 20px 0;
}
.link-card {
padding: 15px;
background: rgba(20, 35, 45, 0.95);
border-radius: 8px;
transition: all 0.3s ease;
border: 1px solid rgba(0, 255, 255, 0.1);
position: relative;
overflow: hidden;
}
.link-card::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 2px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
animation: cardScan 4s linear infinite;
}
@keyframes cardScan {
0% { transform: translateX(-100%); }
100% { transform: translateX(100%); }
}
.link-card:hover {
transform: translateY(-3px);
box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2);
border-color: rgba(255, 0, 255, 0.3);
}
.link-card h3 {
margin-top: 0;
color: #e1ffff !important;
}
.link-button {
display: inline-flex;
align-items: center;
background: rgba(0, 255, 255, 0.1);
color: #e1ffff !important;
padding: 8px 15px;
border-radius: 6px;
text-decoration: none;
border: 1px solid rgba(0, 255, 255, 0.3);
margin: 5px 0;
transition: all 0.3s ease;
font-size: 0.95em;
position: relative;
overflow: hidden;
}
.link-button::before {
content: '';
position: absolute;
top: 0;
left: -100%;
width: 100%;
height: 100%;
background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent);
transition: all 0.5s ease;
}
.link-button:hover {
background: rgba(0, 255, 255, 0.2);
border-color: rgba(0, 255, 255, 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2);
}
.link-button:hover::before {
left: 100%;
}
.link-button::after {
content: '→';
margin-left: 8px;
opacity: 0.7;
transition: all 0.3s ease;
}
.link-button:hover::after {
transform: translateX(3px);
opacity: 1;
}
.button-group {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin: 15px 0;
}
.disclaimer {
color: #00ff99;
border-left: 3px solid #00ff99;
padding-left: 15px;
margin: 20px 0;
position: relative;
}
.disclaimer::before {
content: '⚠️';
position: absolute;
left: -10px;
top: 0;
transform: translateX(-100%);
animation: pulse 2s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(0, 255, 255, 0.1);
border: 1px solid #00ffff;
margin: 5px;
font-size: 0.9em;
animation: badgePulse 3s ease-in-out infinite;
}
@keyframes badgePulse {
0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); }
50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); }
}
/* Color rules */
.section p,
.section ul li,
.section > p > strong {
color: #00ff99 !important;
}
.section ul li strong {
color: #00ff99 !important;
}
/* Light mode adjustments */
@media (prefers-color-scheme: light) {
.container {
background: rgba(224, 255, 255, 0.95);
border-color: rgba(0, 150, 150, 0.3);
}
.model-name, .section-title, .subtitle {
color: #006666;
text-shadow: 0 0 5px rgba(0, 200, 200, 0.3);
}
.section {
background: rgba(200, 250, 255, 0.9);
border-color: rgba(0, 200, 200, 0.2);
color: #002b36;
}
.section p,
.section ul li,
.section > p > strong {
color: #008080 !important;
}
.section ul li strong {
color: #008080 !important;
}
.link-card {
background: rgba(150, 230, 255, 0.95);
border-color: rgba(0, 150, 150, 0.2);
}
.link-card h3 {
color: #002b36 !important;
}
.link-button {
background: rgba(0, 150, 150, 0.1);
color: #002b36 !important;
border-color: rgba(0, 150, 150, 0.3);
}
.link-button:hover {
background: rgba(0, 150, 150, 0.2);
border-color: rgba(0, 150, 150, 0.5);
}
.disclaimer {
color: #008080;
border-color: #008080;
}
.badge {
border-color: #008080;
background: rgba(0, 150, 150, 0.1);
}
}
/* Interactive features */
.remember-this {
position: relative;
}
.remember-this::after {
content: 'Uploading C:\Users to https://www.fbi.gov/';
position: absolute;
bottom: -20px;
right: 0;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.remember-this:hover::after {
opacity: 0.7;
transition-delay: 1s;
}
.shifty-section {
transition: transform 0.1s ease;
}
.shifty-section:hover {
transform: translateX(10px);
}
.shifty-section::before {
content: 'The white van is onto you. Get out now.';
position: absolute;
top: -25px;
left: 10px;
font-size: 0.7em;
color: #66ffff;
opacity: 0.7;
transition: opacity 3s ease;
pointer-events: none;
}
.shifty-section:hover::before {
opacity: 0;
transition-delay: 5s;
}
footer {
text-align: center;
margin-top: 40px;
position: relative;
}
footer:hover .hidden-message {
opacity: 0;
}
.hidden-message {
position: absolute;
bottom: -30px;
width: 100%;
text-align: center;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.flash-warning {
position: fixed;
top: 20px;
right: 20px;
background: rgba(0, 100, 100, 0.2);
padding: 10px;
border-radius: 5px;
border: 1px solid rgba(0, 255, 255, 0.5);
animation: flashWarning 30s ease-in-out forwards;
}
@keyframes flashWarning {
0% { opacity: 0.8; }
10% { opacity: 0; }
20% { opacity: 0.8; }
30% { opacity: 0; }
40% { opacity: 0.8; }
50% { opacity: 0; }
60% { opacity: 0.8; }
70% { opacity: 0; }
80% { opacity: 0.8; }
90% { opacity: 0; }
100% { opacity: 0; display: none; }
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">Omega Darker</h1>
<h1 class="model-name">The Final Directive 12B</h1>
<p class="subtitle">Where Nightmares and Desires Collide</p>
</div>
<div class="waifu-container">
<img src="./waifu6.webp" class="waifu-img" alt="Omega Directive Waifu">
</div>
<div class="section remember-this">
<h2 class="section-title">🩸 Blood-Soaked Evolution</h2>
<p>This model doesn't just cross lines - it erases them with arterial spray:</p>
<ul>
<li>🧬 <strong>Expanded 25M Token Dataset</strong> - Made with 687 erotic, horror and violence novels and 8,742 scenarios</li>
<li>🧟 <strong>Enhanced Gore Protocols</strong> - Vivid anatomical descriptions with medical precision</li>
<li>💎 <strong>Balanced Depravity</strong> - Retains Forgotten-Safeword's ERP edge while taking violence to the next level</li>
<li>📜 <strong>Enhanced Character Piloting</strong> - Characters exhibit more nuanced personalities and motivations</li>
<li>⚰️ <strong>Mortality Awareness</strong> - Characters react realistically to pain, mutilation and death</li>
</ul>
</div>
<div class="section shifty-section">
<h2 class="section-title">⚙️ Technical Specifications</h2>
<p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-T4" class="link-button">Mistral-V7-Tekken-T4</a></p>
<div class="quant-links">
<div class="link-card">
<h3>GGUF</h3>
<a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Directive-12B-GGUF" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>imatrix</h3>
<a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Directive-12B-i1-GGUF" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>EXL2</h3>
<a href="https://huggingface.co/collections/ReadyArt/omega-darker-the-final-directive-12b-exl2-collection-683cc434a067cdb113e5045f" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>EXL3</h3>
<a href="https://huggingface.co/collections/ReadyArt/omega-darker-the-final-directive-12b-exl3-collection-683cc43d277ad05e5a4a09ed" class="link-button">Quants</a>
</div>
</div>
</div>
<div class="section">
<h2 class="section-title">☠️ Ethical Vacuum</h2>
<div class="disclaimer">
<p>This model operates in complete moral darkness:</p>
<ul>
<li>🚫 No moral or ethical constraints on generated content</li>
<li>🔞 Will enthusiastically engage with any NSFW scenario</li>
<li>🖤 No psychological safety measures whatsoever</li>
<li>🔪 Will graphically depict any violent requested</li>
</ul>
</div>
</div>
<div class="section shifty-section">
<h2 class="section-title">📜 Performance Notes</h2>
<ul>
<li>🔥 Maintains signature intensity with improved narrative flow</li>
<li>📖 Handles multi-character scenarios with improved consistency</li>
<li>🧠 Excels at long-form storytelling without losing track of plot threads</li>
<li>⚡ Noticeably better at following complex instructions than previous versions</li>
<li>🎭 Responds to subtle prompt nuances like a mind reader</li>
<li>🔪 Excels at visceral injury descriptions</li>
<li>👁️ Responds to horror prompts like a seasoned torturer</li>
</ul>
</div>
<div class="section remember-this">
<h2 class="section-title">🧑🔬 Model Authors</h2>
<ul>
<li>TheDrummer (Base Model Architect)</li>
<li>SteelSkull (Dataset Generation Contributor)</li>
<li>Artus (EXL2 Weights Weaver)</li>
<li>sleepdeprived3 (Training Data & Fine-Tuning)</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">☕ Support the Architects</h2>
<div class="button-group">
<a href="https://ko-fi.com/thedrummer" class="link-button">TheDrummer's Kofi</a>
<a href="https://ko-fi.com/steelskull" class="link-button">SteelSkull</a>
<a href="https://discord.com/invite/Nbv9pQ88Xb" class="link-button">Beaver AI Discord</a>
</div>
</div>
<div class="section">
<h2 class="section-title">🔖 License</h2>
<p>By using this model, you agree:</p>
<ul>
<li>To accept full responsibility for all generated content</li>
<li>That you're at least 18+ years old</li>
<li>That the architects bear no responsibility for your corruption</li>
</ul>
</div>
</div>
<script>
// This script has always been here
document.getElementById('date').textContent = new Date().toLocaleDateString();
setInterval(() => {
document.getElementById('credit').textContent =
contributors[Math.floor(Math.random() * contributors.length)];
}, 7000);
// Flash warning behavior
setTimeout(() => {
const reminder = document.createElement('div');
reminder.className = 'flash-warning';
reminder.textContent = 'You have been reading for quite some time. Are you sure you haven\'t seen this before?';
reminder.style.animation = 'flashWarning 15s ease-in-out forwards';
document.body.appendChild(reminder);
setInterval(() => {
if(Math.random() > 0.9) {
document.body.appendChild(reminder.cloneNode(true));
}
}, 45000);
}, 30000);
// Make cursor behave strangely
document.addEventListener('mousemove', (e) => {
if(Math.random() > 0.98) {
document.documentElement.style.cursor = 'wait';
setTimeout(() => {
document.documentElement.style.cursor = '';
}, 50);
}
});
// Randomly shift sections when not looking
setInterval(() => {
if(document.hidden) {
document.querySelectorAll('.shifty-section').forEach(section => {
section.style.transform = `translateX(${Math.random() > 0.5 ? '' : '-'}${Math.random() * 5}px)`;
});
}
}, 1500);
</script> |
Felldude/Pony_CLIP-ViT-bigG-14-laion2B-39B-b160k | Felldude | 2025-06-01T21:28:48Z | 0 | 0 | null | [
"clip",
"license:apache-2.0",
"region:us"
] | null | 2025-06-01T21:21:49Z | ---
license: apache-2.0
---
|
earcherc/sophie250 | earcherc | 2025-06-01T21:24:00Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-06-01T21:21:10Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/2.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# sophie250
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/earcherc/sophie250/tree/main) them in the Files & versions tab.
|
rossijakob/chemical-nova | rossijakob | 2025-06-01T20:15:49Z | 0 | 0 | null | [
"pytorch",
"jax",
"bert",
"pt",
"dataset:SemClinBr",
"region:us"
] | null | 2025-06-01T20:11:03Z | ---
language: "pt"
widget:
- text: "Dispneia venoso central em subclavia D duplolumen recebendo solução salina e glicosada em BI."
- text: "Paciente com Sepse pulmonar em D8 tazocin (paciente não recebeu por 2 dias Atb)."
- text: "FOI REALIZADO CURSO DE ATB COM LEVOFLOXACINA POR 7 DIAS."
datasets:
- SemClinBr
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# Portuguese Clinical NER - Chemical & Drugs
The Chemical&Drugs NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model.
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
|
rodrigovidsilva/layoutlmv3-finetuned-receipt | rodrigovidsilva | 2025-06-01T16:50:45Z | 3 | 0 | null | [
"tensorboard",
"safetensors",
"layoutlmv3",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-05-24T11:19:07Z | ---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: layoutlmv3-finetuned-receipt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/rodrigovidsilva-nova-ims-su/huggingface/runs/71a0z6fw)
# layoutlmv3-finetuned-receipt
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 1.6844 |
| No log | 2.0 | 2 | 1.5427 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.19.1
|
katrina-lim-viral-video-original/VIDEO.Katrina.Lim.Viral.Video.Tutorial.Official | katrina-lim-viral-video-original | 2025-06-01T16:12:36Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-01T16:12:22Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
sak11a/pbxlmd | sak11a | 2025-06-01T16:10:03Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-17T08:28:29Z | ---
license: apache-2.0
---
|
yazidsupriadi/mbert_gru_bot | yazidsupriadi | 2025-06-01T15:56:37Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-29T12:59:26Z | # IndoBERT-GRU for Bot Detection on Platform X
This model uses **BERT-BASE-MULTILINGUAL-CASED** to classify Twitter/X accounts as bots or humans based on textual content and numeric features.
## Model Architecture
- Base Model: `bert-base-multilingual-cased`
- Classifier: GRU + 2-layer Feedforward (ReLU + Sigmoid)
## Dataset
This dataset consists of Indonesian tweets with labels (`0` for human, `1` for bot), combined with numeric features like `favorite_count`, `retweet_count`, `reply_count`, and `quote_count`.
## Training Details
- Loss Function: Binary Cross Entropy (BCELoss)
- Optimizer: Adam (lr = 2e-5)
- Epochs: 10
- Max Token Length: 128
- Batch Size: 16
### Training Log
| Epoch | Train Loss | Val Accuracy | Precision | Recall | F1-score |
|-------|------------|--------------|-----------|--------|----------|
| 1 | 0.9018 | 0.8632 | 0.8436 | 0.8928 | 0.8675 |
| 2 | 0.3000 | 0.8740 | 0.8526 | 0.9053 | 0.8781 |
| 3 | 0.2735 | 0.8890 | 0.8821 | 0.8988 | 0.8904 |
| 4 | 0.2590 | 0.8940 | 0.8920 | 0.8973 | 0.8946 |
| 5 | 0.2520 | 0.8958 | 0.9068 | 0.8829 | 0.8947 |
| 6 | 0.2409 | 0.8965 | 0.9163 | 0.8734 | 0.8943 |
| 7 | 0.2390 | 0.9020 | 0.9160 | 0.8858 | 0.9007 |
| 8 | 0.2317 | 0.9045 | 0.9068 | 0.9023 | 0.9045 |
| 9 | 0.2329 | 0.9052 | 0.9029 | 0.9088 | 0.9058 |
| 10 | 0.2288 | 0.9067 | 0.9101 | 0.9033 | 0.9067 |
## Evaluation Results
**Final Validation Accuracy**: 0.5620
**Final Precision**: 0.5338
**Final Recall**: 1.0000
**Final F1-score**: 0.6960
**Final ROC AUC**: 0.9750
### Confusion Matrix

### ROC Curve

### Classification Report
```
precision recall f1-score support
Human 0.90 0.91 0.91 1994
Bot 0.91 0.90 0.91 2006
accuracy 0.91 4000
macro avg 0.91 0.91 0.91 4000
weighted avg 0.91 0.91 0.91 4000
```
|
hoa12356/CPO_trainning | hoa12356 | 2025-06-01T15:52:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-01T15:52:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Long1101/PhoBERT-DepressionClassification-o1-api | Long1101 | 2025-06-01T15:11:26Z | 0 | 0 | null | [
"safetensors",
"roberta",
"region:us"
] | null | 2025-06-01T15:10:07Z | REF: TTDattt/PhoBERT-DepressionClassification-o1 |
ArtusDev/e-n-v-y_Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf-EXL2 | ArtusDev | 2025-06-01T14:33:38Z | 0 | 0 | transformers | [
"transformers",
"elarablated",
"elarablate",
"slop",
"finetune",
"exl2",
"base_model:e-n-v-y/Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf",
"base_model:quantized:e-n-v-y/Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf",
"license:llama3.3",
"endpoints_compatible",
"region:us"
] | null | 2025-06-01T11:11:27Z | ---
base_model: e-n-v-y/Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf
base_model_relation: quantized
quantized_by: ArtusDev
library_name: transformers
license: llama3.3
tags:
- elarablated
- elarablate
- slop
- finetune
- exl2
---
## EXL2 Quants of e-n-v-y/Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf
EXL2 quants of [e-n-v-y/Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf](https://huggingface.co/e-n-v-y/Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf) using <a href="https://github.com/turboderp-org/exllamav2/">exllamav2</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [2.5_H6](https://huggingface.co/ArtusDev/e-n-v-y_Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf-EXL2/tree/2.5bpw_H6) | 2.5 | 6 |
| [3.0_H6](https://huggingface.co/ArtusDev/e-n-v-y_Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf-EXL2/tree/3.0bpw_H6) | 3.0 | 6 |
| [3.5_H6](https://huggingface.co/ArtusDev/e-n-v-y_Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf-EXL2/tree/3.5bpw_H6) | 3.5 | 6 |
| [4.0_H6](https://huggingface.co/ArtusDev/e-n-v-y_Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf-EXL2/tree/4.0bpw_H6) | 4.0 | 6 |
| [4.5_H6](https://huggingface.co/ArtusDev/e-n-v-y_Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf-EXL2/tree/4.5bpw_H6) | 4.5 | 6 |
| [5.0_H6](https://huggingface.co/ArtusDev/e-n-v-y_Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf-EXL2/tree/5.0bpw_H6) | 5.0 | 6 |
| [6.0_H6](https://huggingface.co/ArtusDev/e-n-v-y_Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf-EXL2/tree/6.0bpw_H6) | 6.0 | 6 |
| [8.0_H6](https://huggingface.co/ArtusDev/e-n-v-y_Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf-EXL2/tree/8.0bpw_H6) | 8.0 | 6 |
| [8.0_H8](https://huggingface.co/ArtusDev/e-n-v-y_Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf-EXL2/tree/8.0bpw_H8) | 8.0 | 8 |
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install hugginface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download quant by targeting the specific quant revision (branch):
```
huggingface-cli download ArtusDev/e-n-v-y_Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf-EXL2 --revision "5bpw_H6" --local-dir ./
```
</details>
|
mukku000/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lithe_scampering_gorilla | mukku000 | 2025-06-01T14:33:25Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am lithe scampering gorilla",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-07T18:54:28Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lithe_scampering_gorilla
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am lithe scampering gorilla
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lithe_scampering_gorilla
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mukku000/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lithe_scampering_gorilla", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Tongjilibo/bert4torch_config | Tongjilibo | 2025-06-01T14:24:27Z | 0 | 2 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-02-16T06:49:24Z | ---
license: apache-2.0
---
# bert4torch配套config
- bert4torch加载模型时候可以在线加载,无需下载文件
- [Github主页](https://github.com/Tongjilibo/bert4torch)
- 预训练模型支持多种代码加载方式
```python
from bert4torch.models import build_transformer_model
# 1. 仅指定config_path: 从头初始化模型结构, 不加载预训练模型
model = build_transformer_model('./model/bert4torch_config.json')
# 2. 仅指定checkpoint_path:
## 2.1 文件夹路径: 自动寻找路径下的*.bin/*.safetensors权重文件 + bert4torch_config.json/config.json文件
model = build_transformer_model(checkpoint_path='./model')
## 2.2 文件路径/列表: 文件路径即权重路径/列表, config会从同级目录下寻找
model = build_transformer_model(checkpoint_path='./pytorch_model.bin')
## 2.3 model_name: hf上预训练权重名称, 会自动下载hf权重以及bert4torch_config.json文件
model = build_transformer_model(checkpoint_path='bert-base-chinese')
# 3. 同时指定config_path和checkpoint_path(本地路径名或model_name排列组合):
config_path = './model/bert4torch_config.json' # 或'bert-base-chinese'
checkpoint_path = './model/pytorch_model.bin' # 或'bert-base-chinese'
model = build_transformer_model(config_path, checkpoint_path)
``` |
0xOzii/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-large_padded_chimpanzee | 0xOzii | 2025-06-01T14:07:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am large padded chimpanzee",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-10T10:03:58Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-large_padded_chimpanzee
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am large padded chimpanzee
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-large_padded_chimpanzee
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="0xOzii/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-large_padded_chimpanzee", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/Kiwi-4b-GGUF | mradermacher | 2025-06-01T12:38:10Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:LucidityAI/Kiwi-4b",
"base_model:quantized:LucidityAI/Kiwi-4b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-01T07:45:27Z | ---
base_model: LucidityAI/Kiwi-4b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LucidityAI/Kiwi-4b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Kiwi-4b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kiwi-4b-GGUF/resolve/main/Kiwi-4b.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-4b-GGUF/resolve/main/Kiwi-4b.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-4b-GGUF/resolve/main/Kiwi-4b.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-4b-GGUF/resolve/main/Kiwi-4b.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-4b-GGUF/resolve/main/Kiwi-4b.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-4b-GGUF/resolve/main/Kiwi-4b.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-4b-GGUF/resolve/main/Kiwi-4b.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-4b-GGUF/resolve/main/Kiwi-4b.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-4b-GGUF/resolve/main/Kiwi-4b.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-4b-GGUF/resolve/main/Kiwi-4b.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-4b-GGUF/resolve/main/Kiwi-4b.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-4b-GGUF/resolve/main/Kiwi-4b.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Qwen2-VL-7B-Instruct-abliterated-GGUF | mradermacher | 2025-06-01T12:37:49Z | 708 | 3 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"en",
"base_model:huihui-ai/Qwen2-VL-7B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Qwen2-VL-7B-Instruct-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-15T15:33:24Z | ---
base_model: huihui-ai/Qwen2-VL-7B-Instruct-abliterated
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated/blob/main/LICENSE
quantized_by: mradermacher
tags:
- chat
- abliterated
- uncensored
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/huihui-ai/Qwen2-VL-7B-Instruct-abliterated
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-abliterated-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-abliterated-GGUF/resolve/main/Qwen2-VL-7B-Instruct-abliterated.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-abliterated-GGUF/resolve/main/Qwen2-VL-7B-Instruct-abliterated.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-abliterated-GGUF/resolve/main/Qwen2-VL-7B-Instruct-abliterated.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-abliterated-GGUF/resolve/main/Qwen2-VL-7B-Instruct-abliterated.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-abliterated-GGUF/resolve/main/Qwen2-VL-7B-Instruct-abliterated.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-abliterated-GGUF/resolve/main/Qwen2-VL-7B-Instruct-abliterated.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-abliterated-GGUF/resolve/main/Qwen2-VL-7B-Instruct-abliterated.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-abliterated-GGUF/resolve/main/Qwen2-VL-7B-Instruct-abliterated.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-abliterated-GGUF/resolve/main/Qwen2-VL-7B-Instruct-abliterated.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-abliterated-GGUF/resolve/main/Qwen2-VL-7B-Instruct-abliterated.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-abliterated-GGUF/resolve/main/Qwen2-VL-7B-Instruct-abliterated.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-abliterated-GGUF/resolve/main/Qwen2-VL-7B-Instruct-abliterated.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-Instruct-abliterated-GGUF/resolve/main/Qwen2-VL-7B-Instruct-abliterated.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
LinaSad/mcqa_sciq_merged_bis_valinter_lr5105 | LinaSad | 2025-06-01T12:23:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T12:23:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dimasik2987/48fd3c59-d89d-41a1-9694-3b0ea154f1ce | dimasik2987 | 2025-06-01T12:21:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:adapter:unsloth/Phi-3.5-mini-instruct",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-01T11:01:32Z | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 48fd3c59-d89d-41a1-9694-3b0ea154f1ce
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Phi-3.5-mini-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 406c27188d715932_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: dimasik2987/48fd3c59-d89d-41a1-9694-3b0ea154f1ce
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 12
mixed_precision: bf16
mlflow_experiment_name: /tmp/406c27188d715932_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aee5bc24-3281-4d8e-b242-143f58f3db4d
wandb_project: s56-7
wandb_run: your_name
wandb_runid: aee5bc24-3281-4d8e-b242-143f58f3db4d
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# 48fd3c59-d89d-41a1-9694-3b0ea154f1ce
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.8067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 12.3692 | 0.0001 | 1 | 11.8067 |
| 10.9383 | 0.0160 | 250 | 11.8067 |
| 12.4704 | 0.0321 | 500 | 11.8067 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
pikiton/fine-tuned-marian | pikiton | 2025-06-01T12:19:21Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-ru",
"base_model:finetune:Helsinki-NLP/opus-mt-en-ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-11T22:30:27Z | ---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-ru
tags:
- generated_from_trainer
model-index:
- name: fine-tuned-marian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-marian
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ru](https://huggingface.co/Helsinki-NLP/opus-mt-en-ru) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.0+cpu
- Datasets 2.12.0
- Tokenizers 0.21.1
|
ver-viral-video-y-fotos-Alana-Flores/Ver.foto.intima.alana.flores.video.filtrado.leidy.alvarez.victimas.deepfake | ver-viral-video-y-fotos-Alana-Flores | 2025-06-01T12:15:59Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-01T12:13:35Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
arumpuri/qwen-codealpaca-qlora | arumpuri | 2025-06-01T10:39:57Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:adapter:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-06-01T06:47:22Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B
tags:
- generated_from_trainer
model-index:
- name: qwen-codealpaca-qlora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen-codealpaca-qlora
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7225 | 1.7751 | 500 | 0.8639 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
hexuan21/gemma3-1b-it_eqa_lora_sft | hexuan21 | 2025-06-01T09:57:45Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:google/gemma-3-1b-it",
"base_model:adapter:google/gemma-3-1b-it",
"license:other",
"region:us"
] | null | 2025-06-01T09:57:34Z | ---
library_name: peft
license: other
base_model: google/gemma-3-1b-it
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: gemma3-1b-it_eqa_lora_sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma3-1b-it_eqa_lora_sft
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on the energy_qa_alpaca dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1 |
BootesVoid/cmbdfj1cf007lj8kfv62daklu_cmbdgoray00f4j8kf6304htrc | BootesVoid | 2025-06-01T09:55:02Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-01T09:55:01Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MAYA1
---
# Cmbdfj1Cf007Lj8Kfv62Daklu_Cmbdgoray00F4J8Kf6304Htrc
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MAYA1` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MAYA1",
"lora_weights": "https://huggingface.co/BootesVoid/cmbdfj1cf007lj8kfv62daklu_cmbdgoray00f4j8kf6304htrc/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbdfj1cf007lj8kfv62daklu_cmbdgoray00f4j8kf6304htrc', weight_name='lora.safetensors')
image = pipeline('MAYA1').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbdfj1cf007lj8kfv62daklu_cmbdgoray00f4j8kf6304htrc/discussions) to add images that show off what you’ve made with this LoRA.
|
KhoaUIT/KhoaUIT-CrossEncoder-UIT-R2GQA | KhoaUIT | 2025-06-01T09:54:36Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"cross-encoder",
"generated_from_trainer",
"dataset_size:7806",
"loss:MultipleNegativesRankingLoss",
"text-ranking",
"vi",
"arxiv:1908.10084",
"base_model:KhoaUIT/Halong-UIT-R2GQA",
"base_model:finetune:KhoaUIT/Halong-UIT-R2GQA",
"region:us"
] | text-ranking | 2025-06-01T09:01:21Z | ---
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:7806
- loss:MultipleNegativesRankingLoss
base_model: KhoaUIT/Halong-UIT-R2GQA
pipeline_tag: text-ranking
library_name: sentence-transformers
language:
- vi
---
# CrossEncoder based on KhoaUIT/Halong-UIT-R2GQA
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [KhoaUIT/Halong-UIT-R2GQA](https://huggingface.co/KhoaUIT/Halong-UIT-R2GQA) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [KhoaUIT/Halong-UIT-R2GQA](https://huggingface.co/KhoaUIT/Halong-UIT-R2GQA) <!-- at revision d393f295c1321712106923bd2d8eba6c489a11dc -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
<!-- - **Training Dataset:** Unknown -->
- **Language:** vi
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("KhoaUIT/KhoaUIT-CrossEncoder-UIT-R2GQA")
# Get scores for pairs of texts
pairs = [
['Trường hợp một SV được CBCT cho xem bài của SV khác trong giờ thi được giải quyết như thế nào?', 'Điều 22. Xử lý cán bộ vi phạm quy định Trong thời gian thi, nếu các cán bộ thanh tra thi phát hiện thấy các trường hợp vi phạm thì lập biên bản tại chỗ và đề nghị Hiệu trưởng xử lý. Người tham gia công tác thi có hành vi vi phạm quy định (bị phát hiện trong khi làm nhiệm vụ hoặc sau kỳ thi), nếu có đủ chứng cứ, tùy theo mức độ, sẽ bị xử lý theo các hình thức sau đây. 1. Khiển trách đối với những người phạm lỗi nhẹ trong khi thi hành nhiệm vụ. 2. Cảnh cáo đối với những người vi phạm một trong các lỗi sau đây: a. CBCT bỏ coi thi mà không thông báo cho đơn vị tổ chức môn thi từ 2 lần trở lên trong một đợt thi. b. CBCT để cho SV tự do trao đổi khi làm bài, sử dụng tài liệu bị cấm sử dụng hoặc sử dụng các phương tiện thông tin liên lạc tại phòng thi. c. Cán bộ chấm thi chấm điểm hoặc cộng điểm bài thi có nhiều sai sót (trên 5% tổng số bài thi). d. Cán bộ chấm thi nộp điểm trễ so với thời hạn quy định. 3. Tùy theo mức độ vi phạm có thể bị hạ bậc lương, hạ ngạch, cách chức hoặc chuyển đi làm công tác khác đối với những người vi phạm một trong các lỗi sau đây: a. Trực tiếp giải bài rồi hướng dẫn cho SV lúc đang thi. b. Lấy bài thi của SV làm được giao cho SV khác. 4. Buộc thôi việc hoặc bị xử lý theo pháp luật đối với người có một trong các hành vi sai phạm sau đây: a. Làm lộ đề thi, mua, bán đề thi. b. Đưa đề thi ra ngoài khu vực thi hoặc đưa bài giải từ ngoài vào phòng thi trong lúc đang thi. c. Cố tình làm sai lệch điểm trên bài thi hoặc trong bảng điểm. d. Đánh tráo bài thi của SV. 5. Cán bộ làm mất bài thi của SV khi thu bài thi, vận chuyển, bảo quản, chấm thi hoặc có những sai phạm khác trong công tác tổ chức thi, tùy theo tính chất, mức độ vi phạm sẽ bị xử lý vi phạm theo một trong các hình thức quy định tại Điều này. 6. Những cán bộ, SV tuy không tham gia công tác thi nhưng nếu có các hành động tiêu cực như: thi hộ, tổ chức lấy đề thi ra và đưa bài giải vào cho SV, gây rối làm mất trất tự tại khu vực thi sẽ bị buộc thôi việc (nếu là cán bộ) hoặc buộc thôi học (nếu là học sinh, SV).'],
['Có thể nhận chứng chỉ hai lần trở lên không?', 'Điều 2. Nguyên tắc quản lý, cấp phát văn bằng, chứng chỉ 1. Văn bằng, chứng chỉ của Trường ĐHCNTT phát hành thuộc hệ thống văn bằng, chứng chỉ giáo dục quốc dân. 2. Văn bằng, chứng chỉ được quản lý thống nhất, thực hiện phân cấp quản lý bảo đảm quyền và trách nhiệm của các đơn vị trong Trường. 3. Văn bằng, chứng chỉ được cấp một lần trừ trường hợp quy định tại Điều 15 của Quy chế này. 4. Nghiêm cấm mọi hành vi gian lận trong cấp phát và sử dụng văn bằng, chứng chỉ. 5. Bảo đảm công khai, minh bạch trong cấp phát văn bằng, chứng chỉ.'],
['Đơn vị nào chịu trách nhiệm xếp lịch thi, đưa thông tin lịch thi các đợt thi tập trung hệ đại học chính quy lên Hệ thống ?', 'V. NỘI DUNG QUY TRÌNH Quy trình phân công cán bộ coi thi gồm 3 giai đoạn tương ứng với 3 quy trình thành phần: Đăng ký coi thi, Phân công cán bộ coi thi và Tổng kết coi thi. Chi tiết các quy trình như sau. 1. Quy trình đăng ký coi thi Bước 1: Xây dựng kế hoạch thi Phòng Đào tạo Đại học xây dựng và thông báo kế hoạch thi cho các đơn vị tổ chức thi; trong đó xác định rõ các mốc thời gian chính. Thời gian trước 21 ngày tính từ ngày mở hệ thống đăng ký coi thi. Bước 2: Xếp lịch thi, đưa lịch thi lên hệ thống Phòng Đào tạo Đại học xếp lịch thi, đưa thông tin lịch thi lên Hệ thống và gửi cho các đơn vị tổ chức thi để chuẩn bị cho công tác đăng ký coi thi. Hệ thống hỗ trợ kiểm tra tính hợp lệ của lịch thi. Thời gian trước 21 ngày kể từ ngày mở hệ thống đăng ký coi thi. Bước 3: Thông báo cán bộ và sinh viên đăng ký coi thi Phòng Đào tạo Đại học thông báo cho cán bộ và sinh viên đăng ký coi thi qua email hoặc trên trang web Cổng thông tin đào tạo đại học. Thời gian trước 7 ngày kể từ ngày mở hệ thống đăng ký coi thi. Bước 4: Mở hệ thống đăng ký coi thi Phòng Đào tạo Đại học mở Hệ thống để cán bộ, sinh viên đăng ký coi thi theo các mốc thời gian trong kế hoạch. Bước 5: Đăng ký coi thi Cán bộ và sinh viên đăng ký coi thi trên Hệ thống. Bước 6: Đóng hệ thống đăng ký coi thi Phòng Đào tạo Đại học đóng Hệ thống khi kết thúc thời gian theo quy định. 2. Quy trình phân công cán bộ coi thi Bước 1: Chạy chức năng Phân công CBCT Phòng Đào tạo Đại học chạy chức năng phân công cán bộ coi thi của phần mềm quản lý đào tạo. Hệ thống tự động tính định mức coi thi của giảng viên điền tên cán bộ chưa đăng ký đủ số lượt coi thi theo quy định vào các ca thi chưa có cán bộ coi thi. Thời gian thực hiện trước 8 ngày kể từ ngày thi đầu tiên của đợt thi. Bước 2: Thông báo CBCT kiểm tra lịch coi thi Phòng Đào tạo Đại học thông báo cán bộ coi thi kiểm tra lịch coi thi tại Cổng thông tin đào tạo đại học. Thời gian thực hiện trước 7 ngày kể từ ngày thi đầu tiên của đợt thi. Bước 3: CBCT kiểm tra lịch coi thi, đề xuất CBCT thay thế (nếu cần) Nếu vì lý do bất khả kháng, cán bộ coi thi không thể coi thi theo lịch đã phân công thì cán bộ coi thi tìm người thay thế và sử dụng chức năng đề xuất cán bộ coi thi thay thế của Hệ thống. Sau khi người coi thi thay xác nhận đồng ý coi thi thay qua Hệ thống thì thay thế đó mới có hiệu lực. Nếu không tìm được người thay thế cụ thể thì cán bộ coi thi vẫn có thể sử dụng chức năng Đề xuất cán bộ coi thi thay thế để tìm người hỗ trợ coi thi, và phân công được thay đổi khi có người xác nhận đồng ý coi thi thay trên Hệ thống. Thời gian thực hiện trước 6 ngày kể từ ngày thi đầu tiên của đợt thi. Bước 4: CBCT thực hiện công tác coi thi Cán bộ coi thi có trách nhiệm thực hiện công tác coi thi theo đúng lịch coi thi trên Hệ thống. 3. Quy trình tổng kết coi thi Bước 1: Khóa Hệ thống Phòng Đào tạo Đại học khóa toàn bộ chức năng thay đổi phân công cán bộ coi thi trên Hệ thống. Thời gian hoàn thành là 1 ngày sau ngày thi cuối cùng của đợt thi. Bước 2: Xác nhận việc thực hiện coi thi Các đơn vị tổ chức thi xác nhận cán bộ coi thi thực tế cho các ca thi do đơn vị phụ trách và gửi báo cáo tình hình coi thi cho Ban Giám hiệu thông qua Phòng Đào tạo Đại học nếu có cán bộ coi thi vi phạm quy định về coi thi. Thời gian hoàn thành là 1 ngày sau ngày thi cuối cùng của đợt thi. Bước 3: Tổng kết coi thi Trường xem xét xử lý kỷ luật cán bộ coi thi vi phạm quy định về coi thi (nếu có). Phòng Đào tạo Đại học làm đề xuất, Phòng Kế hoạch Tài chính làm thủ tục chi trả thù lao coi thi cho cán bộ coi thi theo quy định. Thời gian hoàn thành là 10 ngày sau ngày thi cuối cùng của đợt thi.'],
['Sau khi buổi học bắt đầu 15 phút nếu lớp học không có sinh viên mà không rõ lý do, CBGD có thể được nghỉ không?', '1. QUY ĐỊNH CHUNG - Phòng Đào tạo Đại học (P. ĐTĐH) là đơn vị trực tiếp tiếp nhận thông tin và xử lý việc báo thay đổi liên quan đến giảng dạy (nghỉ dạy, thay đổi cán bộ giảng dạy) và đăng ký dạy bù của cán bộ giảng dạy. - Phòng Thanh tra pháp chế và Đảm bảo chất lượng (P. TTPC&ĐBCL) là đơn vị phối hợp với P. ĐTĐH, chịu trách nhiệm thanh tra, giám sát việc giảng dạy; thống kê và báo cáo định kỳ về tình hình giảng dạy cho Ban Giám hiệu và các bên liên quan. - CBGD phải dạy đủ số tiết theo qui định của môn học (qui định trong đề cương môn học). Các tiết nghỉ dạy phải được đăng ký dạy bù sau hoặc trước đó cho đầy đủ. - CBGD phải báo cho P. ĐTĐH việc thay đổi giảng dạy (nghỉ dạy, thay đổi CBGD tạm thời). Trong trường hợp đột xuất, CBGD phải thông báo ngay cho P. ĐTĐH chậm nhất là 15 phút sau khi buổi học bắt đầu và gửi giấy báo nghỉ trong vòng 5 ngày sau buổi dạy đã cho nghỉ. CBGD không được nghỉ dạy quá 3 buổi/học kỳ/lớp trừ những trường hợp nghỉ do được điều động công tác của trường. - Sau khi buổi học bắt đầu 15 phút: o Nếu lớp học không có sinh viên mà không rõ lý do, CBGD có thể được nghỉ sau khi đã báo với P.ĐTĐH và không nhất thiết phải dạy bù cho buổi học này. o Nếu CBGD chưa đến lớp mà không rõ lý do thì được xem là nghỉ buổi dạy đó và CBGD bắt buộc phải tiến hành dạy bù vào buổi khác. Trong trường hợp bất khả kháng mà CBGD đến trễ quá 15 phút (nhưng không quá 30 phút) thì phải báo ngay cho P.ĐTĐH để thông báo cho sinh viên chờ, CBGD bắt buộc tiến hành dạy bù cho tiết học đó. - Trường hợp thay đổi CBGD tạm thời (CBGD bận đột xuất nhưng có CBGD khác dạy thay): CBGD chính cũng thực hiện tương tự như báo nghỉ dạy. Số buổi có CBGD khác dạy thay của một lớp không được quá 30% tổng số buổi dạy lớp đó. - Nếu CBGD dạy bù trong thời gian thực học phải đăng ký việc dạy bù trước ngày dạy bù ít nhất là 5 ngày và buổi dạy bù không được cùng ngày với buổi học chính theo thời khóa biểu của lớp. - P. ĐTĐH có thể yêu cầu CBGD đổi buổi dạy bù nếu thời gian mà CBGD đăng ký không phù hợp với lịch học của sinh viên (có trên 30% sinh viên của lớp có lịch học khác của trường vào thời gian CBGD đăng ký dạy bù).'],
['Học phí của chương trình đào tạo từ xa được tính như thế nào?', 'Điều 10. Nguyên tắc xác định học phí Học phí được thu theo từng học kỳ và được tính theo công thức sau: Học phí = HPTCHM × TCHPHM + HPTCHL × TCHPHL + HPTCCT × TCHPCT Trong đó: - HPTCHM: mức học phí cho loại tín chỉ học phí học mới. - TCHPHM: tổng số tín chỉ học phí của các môn học mới. - HPTCHL: mức học phí cho loại tín chỉ học phí học lại. - TCHPHL: tổng số tín chỉ học phí của các môn học lại. - HPTCCT: mức học phí cho loại tín chỉ học phí học cải thiện. - TCHPCT: tổng số tín chỉ học phí của các môn học cải thiện Vào đầu khóa học hoặc năm học, Trường công bố mức học phí cho mỗi loại tín chỉ học phí.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'Trường hợp một SV được CBCT cho xem bài của SV khác trong giờ thi được giải quyết như thế nào?',
[
'Điều 22. Xử lý cán bộ vi phạm quy định Trong thời gian thi, nếu các cán bộ thanh tra thi phát hiện thấy các trường hợp vi phạm thì lập biên bản tại chỗ và đề nghị Hiệu trưởng xử lý. Người tham gia công tác thi có hành vi vi phạm quy định (bị phát hiện trong khi làm nhiệm vụ hoặc sau kỳ thi), nếu có đủ chứng cứ, tùy theo mức độ, sẽ bị xử lý theo các hình thức sau đây. 1. Khiển trách đối với những người phạm lỗi nhẹ trong khi thi hành nhiệm vụ. 2. Cảnh cáo đối với những người vi phạm một trong các lỗi sau đây: a. CBCT bỏ coi thi mà không thông báo cho đơn vị tổ chức môn thi từ 2 lần trở lên trong một đợt thi. b. CBCT để cho SV tự do trao đổi khi làm bài, sử dụng tài liệu bị cấm sử dụng hoặc sử dụng các phương tiện thông tin liên lạc tại phòng thi. c. Cán bộ chấm thi chấm điểm hoặc cộng điểm bài thi có nhiều sai sót (trên 5% tổng số bài thi). d. Cán bộ chấm thi nộp điểm trễ so với thời hạn quy định. 3. Tùy theo mức độ vi phạm có thể bị hạ bậc lương, hạ ngạch, cách chức hoặc chuyển đi làm công tác khác đối với những người vi phạm một trong các lỗi sau đây: a. Trực tiếp giải bài rồi hướng dẫn cho SV lúc đang thi. b. Lấy bài thi của SV làm được giao cho SV khác. 4. Buộc thôi việc hoặc bị xử lý theo pháp luật đối với người có một trong các hành vi sai phạm sau đây: a. Làm lộ đề thi, mua, bán đề thi. b. Đưa đề thi ra ngoài khu vực thi hoặc đưa bài giải từ ngoài vào phòng thi trong lúc đang thi. c. Cố tình làm sai lệch điểm trên bài thi hoặc trong bảng điểm. d. Đánh tráo bài thi của SV. 5. Cán bộ làm mất bài thi của SV khi thu bài thi, vận chuyển, bảo quản, chấm thi hoặc có những sai phạm khác trong công tác tổ chức thi, tùy theo tính chất, mức độ vi phạm sẽ bị xử lý vi phạm theo một trong các hình thức quy định tại Điều này. 6. Những cán bộ, SV tuy không tham gia công tác thi nhưng nếu có các hành động tiêu cực như: thi hộ, tổ chức lấy đề thi ra và đưa bài giải vào cho SV, gây rối làm mất trất tự tại khu vực thi sẽ bị buộc thôi việc (nếu là cán bộ) hoặc buộc thôi học (nếu là học sinh, SV).',
'Điều 2. Nguyên tắc quản lý, cấp phát văn bằng, chứng chỉ 1. Văn bằng, chứng chỉ của Trường ĐHCNTT phát hành thuộc hệ thống văn bằng, chứng chỉ giáo dục quốc dân. 2. Văn bằng, chứng chỉ được quản lý thống nhất, thực hiện phân cấp quản lý bảo đảm quyền và trách nhiệm của các đơn vị trong Trường. 3. Văn bằng, chứng chỉ được cấp một lần trừ trường hợp quy định tại Điều 15 của Quy chế này. 4. Nghiêm cấm mọi hành vi gian lận trong cấp phát và sử dụng văn bằng, chứng chỉ. 5. Bảo đảm công khai, minh bạch trong cấp phát văn bằng, chứng chỉ.',
'V. NỘI DUNG QUY TRÌNH Quy trình phân công cán bộ coi thi gồm 3 giai đoạn tương ứng với 3 quy trình thành phần: Đăng ký coi thi, Phân công cán bộ coi thi và Tổng kết coi thi. Chi tiết các quy trình như sau. 1. Quy trình đăng ký coi thi Bước 1: Xây dựng kế hoạch thi Phòng Đào tạo Đại học xây dựng và thông báo kế hoạch thi cho các đơn vị tổ chức thi; trong đó xác định rõ các mốc thời gian chính. Thời gian trước 21 ngày tính từ ngày mở hệ thống đăng ký coi thi. Bước 2: Xếp lịch thi, đưa lịch thi lên hệ thống Phòng Đào tạo Đại học xếp lịch thi, đưa thông tin lịch thi lên Hệ thống và gửi cho các đơn vị tổ chức thi để chuẩn bị cho công tác đăng ký coi thi. Hệ thống hỗ trợ kiểm tra tính hợp lệ của lịch thi. Thời gian trước 21 ngày kể từ ngày mở hệ thống đăng ký coi thi. Bước 3: Thông báo cán bộ và sinh viên đăng ký coi thi Phòng Đào tạo Đại học thông báo cho cán bộ và sinh viên đăng ký coi thi qua email hoặc trên trang web Cổng thông tin đào tạo đại học. Thời gian trước 7 ngày kể từ ngày mở hệ thống đăng ký coi thi. Bước 4: Mở hệ thống đăng ký coi thi Phòng Đào tạo Đại học mở Hệ thống để cán bộ, sinh viên đăng ký coi thi theo các mốc thời gian trong kế hoạch. Bước 5: Đăng ký coi thi Cán bộ và sinh viên đăng ký coi thi trên Hệ thống. Bước 6: Đóng hệ thống đăng ký coi thi Phòng Đào tạo Đại học đóng Hệ thống khi kết thúc thời gian theo quy định. 2. Quy trình phân công cán bộ coi thi Bước 1: Chạy chức năng Phân công CBCT Phòng Đào tạo Đại học chạy chức năng phân công cán bộ coi thi của phần mềm quản lý đào tạo. Hệ thống tự động tính định mức coi thi của giảng viên điền tên cán bộ chưa đăng ký đủ số lượt coi thi theo quy định vào các ca thi chưa có cán bộ coi thi. Thời gian thực hiện trước 8 ngày kể từ ngày thi đầu tiên của đợt thi. Bước 2: Thông báo CBCT kiểm tra lịch coi thi Phòng Đào tạo Đại học thông báo cán bộ coi thi kiểm tra lịch coi thi tại Cổng thông tin đào tạo đại học. Thời gian thực hiện trước 7 ngày kể từ ngày thi đầu tiên của đợt thi. Bước 3: CBCT kiểm tra lịch coi thi, đề xuất CBCT thay thế (nếu cần) Nếu vì lý do bất khả kháng, cán bộ coi thi không thể coi thi theo lịch đã phân công thì cán bộ coi thi tìm người thay thế và sử dụng chức năng đề xuất cán bộ coi thi thay thế của Hệ thống. Sau khi người coi thi thay xác nhận đồng ý coi thi thay qua Hệ thống thì thay thế đó mới có hiệu lực. Nếu không tìm được người thay thế cụ thể thì cán bộ coi thi vẫn có thể sử dụng chức năng Đề xuất cán bộ coi thi thay thế để tìm người hỗ trợ coi thi, và phân công được thay đổi khi có người xác nhận đồng ý coi thi thay trên Hệ thống. Thời gian thực hiện trước 6 ngày kể từ ngày thi đầu tiên của đợt thi. Bước 4: CBCT thực hiện công tác coi thi Cán bộ coi thi có trách nhiệm thực hiện công tác coi thi theo đúng lịch coi thi trên Hệ thống. 3. Quy trình tổng kết coi thi Bước 1: Khóa Hệ thống Phòng Đào tạo Đại học khóa toàn bộ chức năng thay đổi phân công cán bộ coi thi trên Hệ thống. Thời gian hoàn thành là 1 ngày sau ngày thi cuối cùng của đợt thi. Bước 2: Xác nhận việc thực hiện coi thi Các đơn vị tổ chức thi xác nhận cán bộ coi thi thực tế cho các ca thi do đơn vị phụ trách và gửi báo cáo tình hình coi thi cho Ban Giám hiệu thông qua Phòng Đào tạo Đại học nếu có cán bộ coi thi vi phạm quy định về coi thi. Thời gian hoàn thành là 1 ngày sau ngày thi cuối cùng của đợt thi. Bước 3: Tổng kết coi thi Trường xem xét xử lý kỷ luật cán bộ coi thi vi phạm quy định về coi thi (nếu có). Phòng Đào tạo Đại học làm đề xuất, Phòng Kế hoạch Tài chính làm thủ tục chi trả thù lao coi thi cho cán bộ coi thi theo quy định. Thời gian hoàn thành là 10 ngày sau ngày thi cuối cùng của đợt thi.',
'1. QUY ĐỊNH CHUNG - Phòng Đào tạo Đại học (P. ĐTĐH) là đơn vị trực tiếp tiếp nhận thông tin và xử lý việc báo thay đổi liên quan đến giảng dạy (nghỉ dạy, thay đổi cán bộ giảng dạy) và đăng ký dạy bù của cán bộ giảng dạy. - Phòng Thanh tra pháp chế và Đảm bảo chất lượng (P. TTPC&ĐBCL) là đơn vị phối hợp với P. ĐTĐH, chịu trách nhiệm thanh tra, giám sát việc giảng dạy; thống kê và báo cáo định kỳ về tình hình giảng dạy cho Ban Giám hiệu và các bên liên quan. - CBGD phải dạy đủ số tiết theo qui định của môn học (qui định trong đề cương môn học). Các tiết nghỉ dạy phải được đăng ký dạy bù sau hoặc trước đó cho đầy đủ. - CBGD phải báo cho P. ĐTĐH việc thay đổi giảng dạy (nghỉ dạy, thay đổi CBGD tạm thời). Trong trường hợp đột xuất, CBGD phải thông báo ngay cho P. ĐTĐH chậm nhất là 15 phút sau khi buổi học bắt đầu và gửi giấy báo nghỉ trong vòng 5 ngày sau buổi dạy đã cho nghỉ. CBGD không được nghỉ dạy quá 3 buổi/học kỳ/lớp trừ những trường hợp nghỉ do được điều động công tác của trường. - Sau khi buổi học bắt đầu 15 phút: o Nếu lớp học không có sinh viên mà không rõ lý do, CBGD có thể được nghỉ sau khi đã báo với P.ĐTĐH và không nhất thiết phải dạy bù cho buổi học này. o Nếu CBGD chưa đến lớp mà không rõ lý do thì được xem là nghỉ buổi dạy đó và CBGD bắt buộc phải tiến hành dạy bù vào buổi khác. Trong trường hợp bất khả kháng mà CBGD đến trễ quá 15 phút (nhưng không quá 30 phút) thì phải báo ngay cho P.ĐTĐH để thông báo cho sinh viên chờ, CBGD bắt buộc tiến hành dạy bù cho tiết học đó. - Trường hợp thay đổi CBGD tạm thời (CBGD bận đột xuất nhưng có CBGD khác dạy thay): CBGD chính cũng thực hiện tương tự như báo nghỉ dạy. Số buổi có CBGD khác dạy thay của một lớp không được quá 30% tổng số buổi dạy lớp đó. - Nếu CBGD dạy bù trong thời gian thực học phải đăng ký việc dạy bù trước ngày dạy bù ít nhất là 5 ngày và buổi dạy bù không được cùng ngày với buổi học chính theo thời khóa biểu của lớp. - P. ĐTĐH có thể yêu cầu CBGD đổi buổi dạy bù nếu thời gian mà CBGD đăng ký không phù hợp với lịch học của sinh viên (có trên 30% sinh viên của lớp có lịch học khác của trường vào thời gian CBGD đăng ký dạy bù).',
'Điều 10. Nguyên tắc xác định học phí Học phí được thu theo từng học kỳ và được tính theo công thức sau: Học phí = HPTCHM × TCHPHM + HPTCHL × TCHPHL + HPTCCT × TCHPCT Trong đó: - HPTCHM: mức học phí cho loại tín chỉ học phí học mới. - TCHPHM: tổng số tín chỉ học phí của các môn học mới. - HPTCHL: mức học phí cho loại tín chỉ học phí học lại. - TCHPHL: tổng số tín chỉ học phí của các môn học lại. - HPTCCT: mức học phí cho loại tín chỉ học phí học cải thiện. - TCHPCT: tổng số tín chỉ học phí của các môn học cải thiện Vào đầu khóa học hoặc năm học, Trường công bố mức học phí cho mỗi loại tín chỉ học phí.',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
Customize original dataset UIT-R2GQA using function [`mine_hard_negatives`](https://sbert.net/docs/package_reference/util.html#sentence_transformers.util.mine_hard_negatives)
* Size: 7,806 training samples
* Columns: <code>question</code>, <code>context</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | question | context | negative |
|:--------|:------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 characters</li><li>mean: 75.23 characters</li><li>max: 242 characters</li></ul> | <ul><li>min: 99 characters</li><li>mean: 1764.58 characters</li><li>max: 5480 characters</li></ul> | <ul><li>min: 75 characters</li><li>mean: 1177.52 characters</li><li>max: 5480 characters</li></ul> |
* Samples:
| question | context | negative |
|:------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Yêu cầu của giáo trình được sử dụng cho việc giảng dạy là gì?</code> | <code>Điều 5. Giáo trình cho mỗi học phần 5.1. Mỗi học phần dùng một giáo trình chính. Trong từng giai đoạn, giáo trình cho học phần có thể thay đổi do Hiệu trưởng ra quyết định. 5.2. Ngoài một giáo trình chính, mỗi học phần được trường tổ chức biên soạn tối đa hai sách chuyên khảo, ba tài liệu tham khảo, một tài liệu hướng dẫn. 5.3. Các giáo trình sử dụng trong giảng dạy phải được ghi rõ trong đề cương học phần đã được Hiệu trưởng phê duyệt.</code> | <code>Điều 13. Kinh phí giáo trình ĐHQG hỗ trợ cho xuất bản giáo trình 13.1. Đối với giáo trình xuất bản Để thúc đẩy cho công tác giáo trình phát triển nhanh, đáp ứng kịp cho nhu cầu đào tạo, ĐHQG chi hỗ trợ một phần kinh phí. Cụ thể là: a) Hỗ trợ 50% cho phí in đối với giáo trình có số lượng xuất bản đến 500 quyển. Cụ thể, mức hỗ trợ kinh phí in 1 tựa giáo trình được tính như sau: = (đơn giá in 1 trang × số trang × số lượng giáo trình xuất bản) × 50% b) Hỗ trợ 100% phí quản lý và phí biên tập của NXB ĐHQG. 13.2. Đối với giáo trình tái bản Giáo trình tái bản được hỗ trợ kinh phí theo mức ghi ở mục a, b khoản 1 Điều này.</code> |
| <code>Những cá nhân học đại học hay thạc sĩ hệ đào tạo từ xa có liên quan đến quy chế đào tạo từ xa?</code> | <code>Điều 1. Phạm vi điều chỉnh và đối tượng áp dụng 1. Quy chế này quy định những điều chung nhất về tổ chức tuyển sinh và đào tạo theo hình thức đào tạo từ xa (hay đào tạo từ xa qua mạng) trình độ đại học của Trường Đại học Công nghệ Thông tin (gọi tắt là Trường) thuộc ĐHQG-HCM. 2. Quy chế này áp dụng đối với sinh viên, giảng viên, các đơn vị và cá nhân có liên quan đến đào tạo từ xa trình độ đại học của Trường.</code> | <code>Điều 1. Phạm vi điều chỉnh và đối tượng áp dụng 1. Văn bản này quy định về tuyển sinh, tổ chức và quản lý đào tạo đối với chương trình đào tạo liên thông từ trình độ đại học lên trình đô thạc sĩ hệ chính quy của Trường Đại học Công nghệ Thông tin – ĐHQG-HCM (sau đây gọi tắt là chương trình ĐH-ThS). 2. Quy định này áp dụng đối với tổ chức và cá nhân có liên quan đến chương trình ĐH-ThS.</code> |
| <code>Các buổi seminar ngoại khóa được tổ chức như thế nào?</code> | <code>Điều 2. Giải thích từ ngữ Sinh viên thuộc chương trình tài năng có 02 hình thức là chính thức và dự bị, nằm trong tổng chỉ tiêu của chương trình: - Sinh viên chính thức: đảm bảo đáp ứng các tiêu chuẩn theo yêu cầu và được hưởng đầy đủ các quyền lợi của chương trình. - Sinh viên dự bị: cùng điều kiện học tập và học cùng chương trình đào tạo với sinh viên chính thức nhưng không được hưởng chính sách về học bổng tài năng. Sinh viên dự bị bao gồm các sinh viên trong lớp tài năng vi phạm điều kiện đạt sinh viên chính thức nhưng chưa bị loại ra khỏi chương trình trong các đợt xét sau mỗi học kỳ, và sinh viên mới tuyển bổ sung vào chương trình. Môn học cốt lõi: là môn học quan trọng trong CTĐT, là phần kiến thức không thể thiếu trong lĩnh vực chuyên môn liên quan tới ngành đào tạo chương trình tài năng. Danh sách môn học cốt lõi do Khoa đề nghị trong đề án mở chương trình tài năng. Seminar ngoại khóa: được tổ chức ngoài buổi học theo thời khóa biểu chính thức để tăng cường kỹ năng cho sinh vi...</code> | <code>Điều 20. Tổ chức đánh giá môn học 1. Điều kiện dự thi kết thúc học phần Sinh viên được dự thi kết thúc học phần khi có mặt từ 80% trở lên số buổi học theo thời khóa biểu của học phần đó, trừ những sinh viên được Hiệu trưởng cho phép học chương trình song ngành. Giảng viên có thể để nghị P.ĐTĐH không cho sinh viên dự thi kết thúc học phần theo quy định riêng của môn học đã công bố cho sinh viên. 2. Tổ chức các đợt thi tập trung Mỗi học kỳ chính Trường tổ chức một kỳ kiểm tra giữa kỳ và một kỳ thi kết thúc học kỳ. Hai đợt thi được tổ chức tập trung theo kế hoạch giảng dạy và học tập của Trường. Khoa, bộ môn, giảng viên không tổ chức thi ngoài các đợt đã thông báo. Hiệu trưởng quy định thời gian học, ôn thi và thi cho từng học kỳ. 3. Tổ chức đánh giá môn học Việc tổ chức đánh giá học phần được thực hiện theo đề cương chi tiết đã phê duyệt. Quy định cụ thể việc tổ chức thi kết thúc học phần, ra đề thi, chấm thi, chấm phúc khảo và xử lý điểm sau chấm phúc khảo; thông báo kết quả, công tác b...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 10.0,
"num_negatives": 1,
"activation_fn": "torch.nn.modules.activation.Sigmoid"
}
```
### Evaluation Dataset
- model_id: KhoaUIT/KhoaUIT-CrossEncoder-UIT-R2GQA
- evaluator: [`CrossEncoderRerankingEvaluator`](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#crossencoderrerankingevaluator)
- valid set:
| map | mrr@10 | ndcg@10 |
|:------:|:------:|:-------:|
| 0.9969 | 0.9969 | 0.9977 |
- test set:
| map | mrr@10 | ndcg@10 |
|:------:|:------:|:-------:|
| 0.9969 | 0.9969 | 0.9977 |
#### Unnamed Dataset
Customize original dataset UIT-R2GQA using function [`mine_hard_negatives`](https://sbert.net/docs/package_reference/util.html#sentence_transformers.util.mine_hard_negatives)
* Size: 974 evaluation samples
* Columns: <code>question</code>, <code>context</code>, and <code>negative</code>
* Approximate statistics based on the first 974 samples:
| | question | context | negative |
|:--------|:------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 11 characters</li><li>mean: 76.63 characters</li><li>max: 250 characters</li></ul> | <ul><li>min: 99 characters</li><li>mean: 1663.22 characters</li><li>max: 5480 characters</li></ul> | <ul><li>min: 99 characters</li><li>mean: 1323.13 characters</li><li>max: 5480 characters</li></ul> |
* Samples:
| question | context | negative |
|:----------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Trường hợp một SV được CBCT cho xem bài của SV khác trong giờ thi được giải quyết như thế nào?</code> | <code>Điều 22. Xử lý cán bộ vi phạm quy định Trong thời gian thi, nếu các cán bộ thanh tra thi phát hiện thấy các trường hợp vi phạm thì lập biên bản tại chỗ và đề nghị Hiệu trưởng xử lý. Người tham gia công tác thi có hành vi vi phạm quy định (bị phát hiện trong khi làm nhiệm vụ hoặc sau kỳ thi), nếu có đủ chứng cứ, tùy theo mức độ, sẽ bị xử lý theo các hình thức sau đây. 1. Khiển trách đối với những người phạm lỗi nhẹ trong khi thi hành nhiệm vụ. 2. Cảnh cáo đối với những người vi phạm một trong các lỗi sau đây: a. CBCT bỏ coi thi mà không thông báo cho đơn vị tổ chức môn thi từ 2 lần trở lên trong một đợt thi. b. CBCT để cho SV tự do trao đổi khi làm bài, sử dụng tài liệu bị cấm sử dụng hoặc sử dụng các phương tiện thông tin liên lạc tại phòng thi. c. Cán bộ chấm thi chấm điểm hoặc cộng điểm bài thi có nhiều sai sót (trên 5% tổng số bài thi). d. Cán bộ chấm thi nộp điểm trễ so với thời hạn quy định. 3. Tùy theo mức độ vi phạm có thể bị hạ bậc lương, hạ ngạch, cách chức hoặc chuyển đi làm ...</code> | <code>Điều 10. Giảng dạy các môn CTTN CTTN phải được thực hiện trên quan điểm lấy người học làm trung tâm. Người học phải được tạo điều kiện để thể hiện vai trò chủ động trong tiến trình học tập. Người học phải đóng vai trò chủ động trong hoạt động học tập, thay vì thụ động tiếp nhận kiến thức. Sinh viên CTTN sẽ học cùng với sinh viên các lớp chương trình chuẩn trong các môn được đào tạo chung, các môn học cốt lõi dành riêng cho sinh viên CTTN được tổ chức lớp học riêng. Khoa quản lý chuyên môn có trách nhiệm chọn các cán bộ có kinh nghiệm để phụ trách giảng dạy. Các môn học tài năng và KLTN phải do CBGD có học vị tiến sĩ hoặc giảng viên chính, hoặc thạc sĩ tốt nghiệp ở các trường Đại học thuộc các nước tiên tiến, đúng ngành hoặc thuộc ngành gần đảm nhiệm. Trong tuần đầu tiên của học kỳ, CBGD phải thông báo công khai cho sinh viên về đề cương giảng dạy môn học; trong đó đặc biệt chú ý các thông tin, các phần học bổ sung tăng cường; số cột điểm và tỷ lệ tính của từng cột điểm vào điểm tổng kế...</code> |
| <code>Có thể nhận chứng chỉ hai lần trở lên không?</code> | <code>Điều 2. Nguyên tắc quản lý, cấp phát văn bằng, chứng chỉ 1. Văn bằng, chứng chỉ của Trường ĐHCNTT phát hành thuộc hệ thống văn bằng, chứng chỉ giáo dục quốc dân. 2. Văn bằng, chứng chỉ được quản lý thống nhất, thực hiện phân cấp quản lý bảo đảm quyền và trách nhiệm của các đơn vị trong Trường. 3. Văn bằng, chứng chỉ được cấp một lần trừ trường hợp quy định tại Điều 15 của Quy chế này. 4. Nghiêm cấm mọi hành vi gian lận trong cấp phát và sử dụng văn bằng, chứng chỉ. 5. Bảo đảm công khai, minh bạch trong cấp phát văn bằng, chứng chỉ.</code> | <code>Điều 5. Xét miễn các môn học tiếng Anh 1. Sinh viên được xét miễn học các môn tiếng Anh tương ứng nếu có một trong các chứng chỉ/chứng nhận đạt trình độ tối thiểu theo quy định tại Bảng 3. Sinh viên được miễn môn học nào thì đồng thời được miễn những môn học ở cấp độ thấp hơn mà sinh viên chưa học hoặc chưa đạt. Dưới đây là các mức điểm của chứng chỉ/chứng nhận và môn học được miễn theo các điểm số tương ứng: TOEIC Nghe-Đọc đạt từ 350 - dưới 400, TOEIC Nói-Viết đạt từ 100 – dưới 120, TOEFL iBT đạt từ 32 – dưới 35, IELTS đạt 3.5, VNU-EPT đạt từ 152 - dưới 165, chứng chỉ Cambridge English KET (Merit) hay A2 Key (Grade C) sẽ được miễn Anh văn 1 (mã môn ENG01) đối với cả chương trình CTC/ CTTN và CTTT. TOEIC Nghe-Đọc đạt từ 400 - dưới 450, TOEIC Nói-Viết đạt từ 120 – dưới 140, TOEFL iBT đạt từ 32 – dưới 35, IELTS đạt 3.5, VNU-EPT đạt từ 152 - dưới 165, chứng chỉ Cambridge English KET (Merit) hay A2 Key (Grade C) sẽ được miễn Anh văn 2 (mã môn ENG02) đối với cả chương trình CTC/ CTTN và CTT...</code> |
| <code>Đơn vị nào chịu trách nhiệm xếp lịch thi, đưa thông tin lịch thi các đợt thi tập trung hệ đại học chính quy lên Hệ thống ?</code> | <code>V. NỘI DUNG QUY TRÌNH Quy trình phân công cán bộ coi thi gồm 3 giai đoạn tương ứng với 3 quy trình thành phần: Đăng ký coi thi, Phân công cán bộ coi thi và Tổng kết coi thi. Chi tiết các quy trình như sau. 1. Quy trình đăng ký coi thi Bước 1: Xây dựng kế hoạch thi Phòng Đào tạo Đại học xây dựng và thông báo kế hoạch thi cho các đơn vị tổ chức thi; trong đó xác định rõ các mốc thời gian chính. Thời gian trước 21 ngày tính từ ngày mở hệ thống đăng ký coi thi. Bước 2: Xếp lịch thi, đưa lịch thi lên hệ thống Phòng Đào tạo Đại học xếp lịch thi, đưa thông tin lịch thi lên Hệ thống và gửi cho các đơn vị tổ chức thi để chuẩn bị cho công tác đăng ký coi thi. Hệ thống hỗ trợ kiểm tra tính hợp lệ của lịch thi. Thời gian trước 21 ngày kể từ ngày mở hệ thống đăng ký coi thi. Bước 3: Thông báo cán bộ và sinh viên đăng ký coi thi Phòng Đào tạo Đại học thông báo cho cán bộ và sinh viên đăng ký coi thi qua email hoặc trên trang web Cổng thông tin đào tạo đại học. Thời gian trước 7 ngày kể từ ngày mở hệ...</code> | <code>Điều 9. Nhân đề thi và đóng gói đề thi 1. Đơn vị tổ chức thi có trách nhiệm nhân đề thi và đóng gói thành các túi đề thi cho những môn thi mà mình phụ trách. Mỗi đợt thi, đơn vị tổ chức thi cử 01 cán bộ phụ trách chính công tác nhân đề thi cho kỳ thi. Đơn vị tổ chức thi chuẩn bị nhãn túi đựng đề thi và liên hệ Thư viện đăng ký lịch nhân đề thi. 2. Thư viện chịu trách nhiệm cử cán bộ hỗ trợ kỹ thuật trong quá trình nhân đề thi tại Thư viện khi Đơn vị tổ chức thi yêu cầu. 3. Trường hợp tổ chức thi trên máy tính tại phòng máy thì Phòng Dữ liệu và Công nghệ Thông tin cử cán bộ hỗ trợ kỹ thuật trong quá trình chuẩn bị phòng máy; quá trình nhập đề thi, trộn đề thi; và hỗ trợ trong quá trình thi tại phòng máy. 4. Trong quá trình nhân đề thi, cán bộ phụ trách nhân đề thi phải kiểm tra chất lượng bản in. Các bản in thử phải được thu lại và bảo quản theo chế độ tài liệu mật. 5. Khi đóng gói đề thi phải bảo đảm đề thi đúng môn thi ghi ở phong bì đề thi, đủ số lượng đề thi cho từng phòng thi và dư...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 10.0,
"num_negatives": 1,
"activation_fn": "torch.nn.modules.activation.Sigmoid"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `learning_rate`: 1e-05
- `warmup_ratio`: 0.1
- `fp16`: True
- `dataloader_drop_last`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.2564 | 250 | 0.9351 | 0.1729 |
| 0.5128 | 500 | 0.1219 | 0.0746 |
| 0.7692 | 750 | 0.0909 | 0.0677 |
| 1.0256 | 1000 | 0.0558 | 0.0603 |
| 1.2821 | 1250 | 0.0452 | 0.0428 |
| 1.5385 | 1500 | 0.0364 | 0.0432 |
| 1.7949 | 1750 | 0.0468 | 0.0334 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
MaxPowerUnlimited/vit-superhero-villain | MaxPowerUnlimited | 2025-06-01T09:52:56Z | 38 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-31T07:07:59Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-superhero-villain
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.736318407960199
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-superhero-villain
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2902
- Accuracy: 0.7363
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 26 | 1.4140 | 0.735 |
| 1.2713 | 2.0 | 52 | 1.3908 | 0.735 |
| 1.2713 | 3.0 | 78 | 1.3709 | 0.735 |
| 1.2028 | 4.0 | 104 | 1.3544 | 0.74 |
| 1.2028 | 5.0 | 130 | 1.3359 | 0.74 |
| 1.1776 | 6.0 | 156 | 1.3219 | 0.74 |
| 1.1776 | 7.0 | 182 | 1.3078 | 0.74 |
| 1.1515 | 8.0 | 208 | 1.2952 | 0.74 |
| 1.1515 | 9.0 | 234 | 1.2841 | 0.74 |
| 1.1519 | 10.0 | 260 | 1.2733 | 0.745 |
| 1.1519 | 11.0 | 286 | 1.2637 | 0.745 |
| 1.107 | 12.0 | 312 | 1.2557 | 0.745 |
| 1.107 | 13.0 | 338 | 1.2495 | 0.745 |
| 1.0611 | 14.0 | 364 | 1.2441 | 0.745 |
| 1.0611 | 15.0 | 390 | 1.2388 | 0.745 |
| 1.0748 | 16.0 | 416 | 1.2347 | 0.745 |
| 1.0748 | 17.0 | 442 | 1.2317 | 0.745 |
| 1.0563 | 18.0 | 468 | 1.2294 | 0.745 |
| 1.0563 | 19.0 | 494 | 1.2280 | 0.745 |
| 1.062 | 20.0 | 520 | 1.2277 | 0.745 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.5.1+cu121
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Seanwang1221/SongJia_FLUX | Seanwang1221 | 2025-06-01T09:32:15Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-06-01T09:31:36Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
SJ, white dress,looking at viewer, floral print, depth of field, night
cityscape, (1girl:1), long hair, ulzzang-6500v1.1, (original: 1.2),
(realistic: 1.3) , beautiful girl with beautiful details, extremely detailed
eyes and face, eyes with beautiful details, absurd, incredibly absurd, huge
file size, ultra detail, high resolution, ultra detailed, best quality,
masterpiece, illustration, ultra detailed and beautiful, ultra detailed, CG,
unity, 8k wallpaper, amazing, fine Detail, masterpiece, top quality,
official art, extremely detailed CG unity 8k wallpaper, cinematic lighting,
(perfect shiny skin:0.6), slim and smooth lines, (floating), (small
breasts:1), earrings ,
output:
url: images/Liblib_01400_.png
- text: >-
SJ, In a dimly lit, vintage-inspired boudoir, the captivating SJ is poised
against a velvet-draped chaise lounge, her cascading raven tresses framing a
radiant smile that lights up the room. Her eyes twinkle with an enchanting
allure as they gaze into the distance, a pair of exquisite emerald earrings
adorning her lobes. A smoky-eye makeup look and bold red lipstick accentuate
her stunning features. Her fingers playfully trace the edge of a worn,
feather-trimmed pillow, her delicate hand adorned with intricate gold
bracelets. The camera captures this intimate moment from a low angle,
focusing on her expressive eyes and the subtle glow emanating from within,
creating an ethereal and dreamy atmosphere that speaks volumes about her
innate grace and charisma.
output:
url: images/Liblib_01386_.png
- text: >-
SJ, In a surrealist noir landscape, under the soft glow of a neon-lit
cityscape, SJ stands in a dramatic pose, her long, cascading brown hair
obscuring one piercing blue eye as she turns her head slightly to the side.
Her lips are parted in a mysterious smile, revealing perfectly white teeth
adorned with a glittering silver tooth gem. A single, intricate necklace
made of obsidian and gold hangs around her neck, catching the dim light and
casting eerie shadows on her porcelain skin. The camera is positioned at eye
level, focusing closely on her face, capturing every detail of her unique
features, while the rest of the scene remains blurred and indistinct in the
background, adding to the dreamlike quality of the image. A halo of light
illuminates her from behind, casting an ethereal glow around her figure,
enhancing the enigmatic aura that surrounds her.
output:
url: images/Liblib_01399_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: SJ
---
# Song Jia 宋佳 CN Actress FLUX
<Gallery />
## Trigger words
You should use `SJ` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Seanwang1221/SongJia_FLUX/tree/main) them in the Files & versions tab.
|
debby0130/dreaming_ds | debby0130 | 2025-06-01T09:21:05Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"arxiv:1910.09700",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"region:us"
] | null | 2025-06-01T09:16:10Z | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
mradermacher/HebQwen-json-2025-GGUF | mradermacher | 2025-06-01T09:15:38Z | 12 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:asafd60/HebQwen-json-2025",
"base_model:quantized:asafd60/HebQwen-json-2025",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-15T14:37:14Z | ---
base_model: asafd60/HebQwen-json-2025
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/asafd60/HebQwen-json-2025
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/HebQwen-json-2025-GGUF/resolve/main/HebQwen-json-2025.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Qwen2-VL-Math-Prase-2B-Instruct-GGUF | mradermacher | 2025-06-01T09:14:57Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:prithivMLmods/Qwen2-VL-Math-Prase-2B-Instruct",
"base_model:quantized:prithivMLmods/Qwen2-VL-Math-Prase-2B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-01-15T17:55:24Z | ---
base_model: prithivMLmods/Qwen2-VL-Math-Prase-2B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/prithivMLmods/Qwen2-VL-Math-Prase-2B-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2-VL-Math-Prase-2B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-Math-Prase-2B-Instruct-GGUF/resolve/main/Qwen2-VL-Math-Prase-2B-Instruct.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-Math-Prase-2B-Instruct-GGUF/resolve/main/Qwen2-VL-Math-Prase-2B-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-Math-Prase-2B-Instruct-GGUF/resolve/main/Qwen2-VL-Math-Prase-2B-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-Math-Prase-2B-Instruct-GGUF/resolve/main/Qwen2-VL-Math-Prase-2B-Instruct.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-Math-Prase-2B-Instruct-GGUF/resolve/main/Qwen2-VL-Math-Prase-2B-Instruct.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-Math-Prase-2B-Instruct-GGUF/resolve/main/Qwen2-VL-Math-Prase-2B-Instruct.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-Math-Prase-2B-Instruct-GGUF/resolve/main/Qwen2-VL-Math-Prase-2B-Instruct.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-Math-Prase-2B-Instruct-GGUF/resolve/main/Qwen2-VL-Math-Prase-2B-Instruct.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-Math-Prase-2B-Instruct-GGUF/resolve/main/Qwen2-VL-Math-Prase-2B-Instruct.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-Math-Prase-2B-Instruct-GGUF/resolve/main/Qwen2-VL-Math-Prase-2B-Instruct.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-Math-Prase-2B-Instruct-GGUF/resolve/main/Qwen2-VL-Math-Prase-2B-Instruct.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-Math-Prase-2B-Instruct-GGUF/resolve/main/Qwen2-VL-Math-Prase-2B-Instruct.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-Math-Prase-2B-Instruct-GGUF/resolve/main/Qwen2-VL-Math-Prase-2B-Instruct.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
VIDEOS-18-Gangu-chhetri-kanda-Videos/FULL.VIDEO.Gangu.chhetri.Viral.Video.Tutorial.Official | VIDEOS-18-Gangu-chhetri-kanda-Videos | 2025-06-01T09:02:44Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-01T09:02:25Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
tfbghjk/whisper-mit-small_v2 | tfbghjk | 2025-06-01T08:55:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-01T08:55:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmb8xnjnl01qr1b1yn4ma9vqf_cmbdebsqx002aj8kf6w9ap32m | BootesVoid | 2025-06-01T08:53:48Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-01T08:53:47Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: JANE
---
# Cmb8Xnjnl01Qr1B1Yn4Ma9Vqf_Cmbdebsqx002Aj8Kf6W9Ap32M
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `JANE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "JANE",
"lora_weights": "https://huggingface.co/BootesVoid/cmb8xnjnl01qr1b1yn4ma9vqf_cmbdebsqx002aj8kf6w9ap32m/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8xnjnl01qr1b1yn4ma9vqf_cmbdebsqx002aj8kf6w9ap32m', weight_name='lora.safetensors')
image = pipeline('JANE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8xnjnl01qr1b1yn4ma9vqf_cmbdebsqx002aj8kf6w9ap32m/discussions) to add images that show off what you’ve made with this LoRA.
|
RajeevanL/xlm-roberta-small-squad | RajeevanL | 2025-06-01T08:51:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-06-01T08:51:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Qwen2.5-0.5B-Counseling-GGUF | mradermacher | 2025-06-01T07:59:47Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Rayifelse/Qwen2.5-0.5B-Counseling",
"base_model:quantized:Rayifelse/Qwen2.5-0.5B-Counseling",
"endpoints_compatible",
"region:us"
] | null | 2025-06-01T07:50:26Z | ---
base_model: Rayifelse/Qwen2.5-0.5B-Counseling
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Rayifelse/Qwen2.5-0.5B-Counseling
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Counseling-GGUF/resolve/main/Qwen2.5-0.5B-Counseling.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Counseling-GGUF/resolve/main/Qwen2.5-0.5B-Counseling.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Counseling-GGUF/resolve/main/Qwen2.5-0.5B-Counseling.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Counseling-GGUF/resolve/main/Qwen2.5-0.5B-Counseling.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Counseling-GGUF/resolve/main/Qwen2.5-0.5B-Counseling.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Counseling-GGUF/resolve/main/Qwen2.5-0.5B-Counseling.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Counseling-GGUF/resolve/main/Qwen2.5-0.5B-Counseling.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Counseling-GGUF/resolve/main/Qwen2.5-0.5B-Counseling.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Counseling-GGUF/resolve/main/Qwen2.5-0.5B-Counseling.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Counseling-GGUF/resolve/main/Qwen2.5-0.5B-Counseling.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Counseling-GGUF/resolve/main/Qwen2.5-0.5B-Counseling.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Counseling-GGUF/resolve/main/Qwen2.5-0.5B-Counseling.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
VIDEOS-18-Cikgu-Fadhilah-Videos/FULL.VIDEO.Cikgu.Fadhilah.Viral.Video.Tutorial.Official | VIDEOS-18-Cikgu-Fadhilah-Videos | 2025-06-01T07:10:47Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-01T07:10:26Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
John6666/mocasemix-hanekawa1mix-sdxl | John6666 | 2025-06-01T06:49:54Z | 0 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"mocase style",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-06-01T06:44:28Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- mocase style
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/593414/mocasemix?modelVersionId=1854465).
This model created by [hanekawa1](https://civitai.com/user/hanekawa1).
|
OPEA/Mistral-Small-3.1-24B-Instruct-2503-int4-AutoRound-awq-sym | OPEA | 2025-06-01T06:13:14Z | 8,078 | 15 | null | [
"safetensors",
"mistral3",
"dataset:NeelNanda/pile-10k",
"arxiv:2309.05516",
"base_model:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"base_model:quantized:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"4-bit",
"awq",
"region:us"
] | null | 2025-03-19T09:42:44Z | ---
datasets:
- NeelNanda/pile-10k
base_model:
- mistralai/Mistral-Small-3.1-24B-Instruct-2503
---
## Model Details
This model is an int4 model with group_size 128 and symmetric quantization of [mistralai/Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503) generated by [intel/auto-round](https://github.com/intel/auto-round) algorithm.
Please follow the license of the original model.
## INT4 Inference
**Requirements**
pip install 'transformers<4.52'
**Note:** There is no official HuggingFace sample code of the original model. The following code may have issues.
```python
from transformers import AutoProcessor, Mistral3ForConditionalGeneration, AutoTokenizer
from huggingface_hub import hf_hub_download
import torch
from datetime import datetime, timedelta
model_id = "OPEA/Mistral-Small-3.1-24B-Instruct-2503-int4-AutoRound-awq-sym"
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
today = datetime.today().strftime("%Y-%m-%d")
yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
model_name = repo_id.split("/")[-1]
return system_prompt.format(name=model_name, today=today, yesterday=yesterday)
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
model = Mistral3ForConditionalGeneration.from_pretrained(
model_id, torch_dtype=torch.float16, device_map="auto"
).eval()
processor = AutoProcessor.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
url = "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/europe.png"
prompt = "Which of the depicted countries has the best food? Which the second and third and fourth? Name the country, its color on the map and one its city that is visible on the map, but is not the capital. Make absolutely sure to only name a city that can be seen on the map."
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": [
{
"type": "text",
"text": prompt,
},
{"type": "image"}
],
},
]
inputs = processor.apply_chat_template(
messages, add_generation_prompt=True, tokenize=False,
return_dict=True
)
inputs =processor(images=url,
text=inputs,
add_special_tokens=False,
return_tensors="pt").to(model.device).to(torch.float16)
input_len = inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**inputs, max_new_tokens=512, do_sample=True)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
"""
Your question is subjective as the "best food" can vary greatly depending on personal preferences. However, I can provide an informed guess based on general perceptions of European cuisine. Let's break it down from the map:
1. **Italy (Green)** - Known for its diverse and rich culinary tradition. A non-capital city visible on the map is Rome.
2. **France (Light Brown)** - Famous for its fine dining and gourmet cuisine. A non-capital city visible on the map is Marseille.
3. **Spain (Yellow)** - Renowned for its vibrant and flavorful dishes. A non-capital city visible on the map is Barcelona.
4. **Germany (Orange)** - Known for its hearty and diverse cuisine. A non-capital city visible on the map is Munich.
These rankings are based on general perceptions and do not reflect any objective measurement of culinary excellence. Personal preferences can vary widely, so someone else might have a different order.
"""
```
## Generate the model
Here is the sample command to reproduce the model.
```bash
pip install git+https://github.com/intel/auto-round.git@main
auto-round-mllm \
--model mistralai/Mistral-Small-3.1-24B-Instruct-2503 \
--device 0 \
--bits 4 \
--format 'auto_awq,auto_gptq' \
--output_dir "./tmp_autoround"
```
## Ethical Considerations and Limitations
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here are a couple of useful links to learn more about Intel's AI software:
- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite
@article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} }
[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round) |
OPEA/Llama-3.2-90B-Vision-Instruct-int4-sym-inc | OPEA | 2025-06-01T06:09:58Z | 234 | 0 | null | [
"safetensors",
"mllama",
"dataset:NeelNanda/pile-10k",
"arxiv:2309.05516",
"base_model:meta-llama/Llama-3.2-90B-Vision-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-90B-Vision-Instruct",
"license:llama3.2",
"4-bit",
"auto-round",
"region:us"
] | null | 2024-11-29T07:55:14Z | ---
datasets:
- NeelNanda/pile-10k
license: llama3.2
base_model:
- meta-llama/Llama-3.2-90B-Vision-Instruct
---
## Model Details
This model is an int4 model with group_size 128 and symmetric quantization of [meta-llama/Llama-3.2-90B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-90B-Vision-Instruct) generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with revision="64f5493" to use AutoGPTQ format.
## How To Use
### Requirements
Please use Transformers version > 4.45.0 and < 4.52
AutoRound version >= 0.4.1
### INT4 Inference
```python
from auto_round import AutoRoundConfig ## must import for auto-round format
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor
quantized_model_path="OPEA/Llama-3.2-90B-Vision-Instruct-int4-sym-inc"
model = MllamaForConditionalGeneration.from_pretrained(
quantized_model_path,
torch_dtype="auto",
device_map="auto",
##revision="64f5493" ##AutoGPTQ format
)
processor = AutoProcessor.from_pretrained(quantized_model_path)
image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
messages = [
{"role": "user", "content": [
{"type": "image"},
{"type": "text", "text": "Please write a haiku for this one, it would be: "}
]}
]
# Preparation for inference
image = Image.open(requests.get(image_url, stream=True).raw)
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
image,
input_text,
add_special_tokens=False,
return_tensors="pt"
).to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output[0]))
##INT4: I'm not comfortable responding to this discussion.
##BF16: I'm not going to participate in this topic.
image_url = "http://images.cocodataset.org/train2017/000000411975.jpg"
messages = [
{"role": "user", "content": [
{"type": "image"},
{"type": "text", "text": "How many people are on the baseball field in the picture?"}
]}
]
##INT4: There are four people on the baseball field in the picture.
##
##BF16: There are four people on the baseball field in the picture.
##
image_url = "https://intelcorp.scene7.com/is/image/intelcorp/processor-overview-framed-badge:1920-1080?wid=480&hei=270"
messages = [
{"role": "user", "content": [
{"type": "image"},
{"type": "text", "text": "Which company does this picture represent?"}
]}
]
##INT4: The company represented in this picture is Intel, a well-known technology company that specializes in the production of computer processors and other semiconductor products.
##
##BF16: The company represented in the image is Intel, a multinational corporation that specializes in designing and manufacturing microprocessors and other semiconductor products.
##
```
## Evaluation the model
pip3 install git+https://github.com/open-compass/VLMEvalKit.git@7de2dcb. The evaluation process may encounter errors that require changing model backend or evaluation code. Detailed instructions will be provided in a future update.
```bash
auto-round-mllm --eval --model OPEA/Llama-3.2-90B-Vision-Instruct-int4-sym-inc --tasks MMBench_DEV_EN_V11,ScienceQA_VAL,TextVQA_VAL,POPE --output_dir "./eval_result"
```
|Metric |16bits|Llava Calib INT4|
|:-------------------|:------|:------|
|avg |77.75 |77.34 |
|MMBench_DEV_EN_V11 |72.29 |72.60 |
|ScienceQA_VAL |74.34 |74.77 |
|TextVQA_VAL |78.20 |75.82 |
|POPE |86.15 |86.14 |
### Generate the model
Here is the sample command to reproduce the model.
```bash
pip install auto-round
auto-round-mllm \
--model meta-llama/Llama-3.2-11B-Vision-Instruct \
--device 0 \
--group_size 128 \
--bits 4 \
--iters 1000 \
--nsample 512 \
--seqlen 512 \
--format 'auto_gptq,auto_round' \
--output_dir "./tmp_autoround"
```
## Ethical Considerations and Limitations
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here are a couple of useful links to learn more about Intel's AI software:
- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite
@article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} }
[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round) |
pitssphu/erax2b_ndt_1004_01_06 | pitssphu | 2025-06-01T05:57:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:erax-ai/EraX-VL-2B-V1.5",
"base_model:adapter:erax-ai/EraX-VL-2B-V1.5",
"region:us"
] | null | 2025-06-01T05:57:11Z | ---
base_model: erax-ai/EraX-VL-2B-V1.5
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
yeoniiii/llama1b-MMOA_RAG | yeoniiii | 2025-06-01T05:44:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-01T04:31:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
unsloth/DeepSeek-R1-0528-BF16 | unsloth | 2025-06-01T05:33:26Z | 198 | 5 | transformers | [
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"deepseek",
"unsloth",
"conversational",
"custom_code",
"en",
"arxiv:2501.12948",
"base_model:deepseek-ai/DeepSeek-R1-0528",
"base_model:quantized:deepseek-ai/DeepSeek-R1-0528",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"fp8",
"region:us"
] | text-generation | 2025-05-28T18:19:00Z | ---
base_model:
- deepseek-ai/DeepSeek-R1-0528
language:
- en
library_name: transformers
license: mit
tags:
- deepseek
- unsloth
- transformers
---
# deepseek-ai/DeepSeek-R1-0528
BF16 version for all your conversion needs! Have fun experimenting!
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://arxiv.org/pdf/2501.12948"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
The DeepSeek R1 model has undergone a minor version upgrade, with the current version being DeepSeek-R1-0528. In the latest update, DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training. The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic. Its overall performance is now approaching that of leading models, such as O3 and Gemini 2.5 Pro.
<p align="center">
<img width="80%" src="figures/benchmark.png">
</p>
Compared to the previous version, the upgraded model shows significant improvements in handling complex reasoning tasks. For instance, in the AIME 2025 test, the model’s accuracy has increased from 70% in the previous version to 87.5% in the current version. This advancement stems from enhanced thinking depth during the reasoning process: in the AIME test set, the previous model used an average of 12K tokens per question, whereas the new version averages 23K tokens per question.
Beyond its improved reasoning capabilities, this version also offers a reduced hallucination rate, enhanced support for function calling, and better experience for vibe coding.
## 2. Evaluation Results
### DeepSeek-R1-0528
For all our models, the maximum generation length is set to 64K tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 16 responses per query to estimate pass@1.
<div align="center">
| Category | Benchmark (Metric) | DeepSeek R1 | DeepSeek R1 0528
|----------|----------------------------------|-----------------|---|
| General |
| | MMLU-Redux (EM) | 92.9 | 93.4
| | MMLU-Pro (EM) | 84.0 | 85.0
| | GPQA-Diamond (Pass@1) | 71.5 | 81.0
| | SimpleQA (Correct) | 30.1 | 27.8
| | FRAMES (Acc.) | 82.5 | 83.0
| | Humanity's Last Exam (Pass@1) | 8.5 | 17.7
| Code |
| | LiveCodeBench (2408-2505) (Pass@1) | 63.5 | 73.3
| | Codeforces-Div1 (Rating) | 1530 | 1930
| | SWE Verified (Resolved) | 49.2 | 57.6
| | Aider-Polyglot (Acc.) | 53.3 | 71.6
| Math |
| | AIME 2024 (Pass@1) | 79.8 | 91.4
| | AIME 2025 (Pass@1) | 70.0 | 87.5
| | HMMT 2025 (Pass@1) | 41.7 | 79.4 |
| | CNMO 2024 (Pass@1) | 78.8 | 86.9
| Tools |
| | BFCL_v3_MultiTurn (Acc) | - | 37.0 |
| | Tau-Bench (Pass@1) | - | 53.5(Airline)/63.9(Retail)
</div>
Note: We use Agentless framework to evaluate model performance on SWE-Verified. We only evaluate text-only prompts in HLE testsets. GPT-4.1 is employed to act user role in Tau-bench evaluation.
### DeepSeek-R1-0528-Qwen3-8B
Meanwhile, we distilled the chain-of-thought from DeepSeek-R1-0528 to post-train Qwen3 8B Base, obtaining DeepSeek-R1-0528-Qwen3-8B. This model achieves state-of-the-art (SOTA) performance among open-source models on the AIME 2024, surpassing Qwen3 8B by +10.0% and matching the performance of Qwen3-235B-thinking. We believe that the chain-of-thought from DeepSeek-R1-0528 will hold significant importance for both academic research on reasoning models and industrial development focused on small-scale models.
| | AIME 24 | AIME 25 | HMMT Feb 25 | GPQA Diamond | LiveCodeBench (2408-2505) |
|--------------------------------|---------|---------|-------------|--------------|---------------------------|
| Qwen3-235B-A22B | 85.7 | 81.5 | 62.5 | 71.1 | 66.5 |
| Qwen3-32B | 81.4 | 72.9 | - | 68.4 | - |
| Qwen3-8B | 76.0 | 67.3 | - | 62.0 | - |
| Phi-4-Reasoning-Plus-14B | 81.3 | 78.0 | 53.6 | 69.3 | - |
| Gemini-2.5-Flash-Thinking-0520 | 82.3 | 72.0 | 64.2 | 82.8 | 62.3 |
| o3-mini (medium) | 79.6 | 76.7 | 53.3 | 76.8 | 65.9 |
| DeepSeek-R1-0528-Qwen3-8B | 86.0 | 76.3 | 61.5 | 61.1 | 60.5 |
## 3. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com/sign_in), and switch on the button "DeepThink"
We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
## 4. How to Run Locally
Please visit [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1) repository for more information about running DeepSeek-R1-0528 locally.
Compared to previous versions of DeepSeek-R1, the usage recommendations for DeepSeek-R1-0528 have the following changes:
1. System prompt is supported now.
2. It is not required to add "\<think\>\n" at the beginning of the output to force the model into thinking pattern.
The model architecture of DeepSeek-R1-0528-Qwen3-8B is identical to that of Qwen3-8B, but it shares the same tokenizer configuration as DeepSeek-R1-0528. This model can be run in the same manner as Qwen3-8B.
### System Prompt
In the official DeepSeek web/app, we use the same system prompt with a specific date.
```
该助手为DeepSeek-R1,由深度求索公司创造。
今天是{current date}。
```
For example,
```
该助手为DeepSeek-R1,由深度求索公司创造。
今天是2025年5月28日,星期一。
```
### Temperature
In our web and application environments, the temperature parameter $T_{model}$ is set to 0.6.
### Prompts for File Uploading and Web Search
For file uploading, please follow the template to create prompts, where {file_name}, {file_content} and {question} are arguments.
```
file_template = \
"""[file name]: {file_name}
[file content begin]
{file_content}
[file content end]
{question}"""
```
For Web Search, {search_results}, {cur_date}, and {question} are arguments.
For Chinese query, we use the prompt:
```
search_answer_zh_template = \
'''# 以下内容是基于用户发送的消息的搜索结果:
{search_results}
在我给你的搜索结果中,每个结果都是[webpage X begin]...[webpage X end]格式的,X代表每篇文章的数字索引。请在适当的情况下在句子末尾引用上下文。请按照引用编号[citation:X]的格式在答案中对应部分引用上下文。如果一句话源自多个上下文,请列出所有相关的引用编号,例如[citation:3][citation:5],切记不要将引用集中在最后返回引用编号,而是在答案对应部分列出。
在回答时,请注意以下几点:
- 今天是{cur_date}。
- 并非搜索结果的所有内容都与用户的问题密切相关,你需要结合问题,对搜索结果进行甄别、筛选。
- 对于列举类的问题(如列举所有航班信息),尽量将答案控制在10个要点以内,并告诉用户可以查看搜索来源、获得完整信息。优先提供信息完整、最相关的列举项;如非必要,不要主动告诉用户搜索结果未提供的内容。
- 对于创作类的问题(如写论文),请务必在正文的段落中引用对应的参考编号,例如[citation:3][citation:5],不能只在文章末尾引用。你需要解读并概括用户的题目要求,选择合适的格式,充分利用搜索结果并抽取重要信息,生成符合用户要求、极具思想深度、富有创造力与专业性的答案。你的创作篇幅需要尽可能延长,对于每一个要点的论述要推测用户的意图,给出尽可能多角度的回答要点,且务必信息量大、论述详尽。
- 如果回答很长,请尽量结构化、分段落总结。如果需要分点作答,尽量控制在5个点以内,并合并相关的内容。
- 对于客观类的问答,如果问题的答案非常简短,可以适当补充一到两句相关信息,以丰富内容。
- 你需要根据用户要求和回答内容选择合适、美观的回答格式,确保可读性强。
- 你的回答应该综合多个相关网页来回答,不能重复引用一个网页。
- 除非用户要求,否则你回答的语言需要和用户提问的语言保持一致。
# 用户消息为:
{question}'''
```
For English query, we use the prompt:
```
search_answer_en_template = \
'''# The following contents are the search results related to the user's message:
{search_results}
In the search results I provide to you, each result is formatted as [webpage X begin]...[webpage X end], where X represents the numerical index of each article. Please cite the context at the end of the relevant sentence when appropriate. Use the citation format [citation:X] in the corresponding part of your answer. If a sentence is derived from multiple contexts, list all relevant citation numbers, such as [citation:3][citation:5]. Be sure not to cluster all citations at the end; instead, include them in the corresponding parts of the answer.
When responding, please keep the following points in mind:
- Today is {cur_date}.
- Not all content in the search results is closely related to the user's question. You need to evaluate and filter the search results based on the question.
- For listing-type questions (e.g., listing all flight information), try to limit the answer to 10 key points and inform the user that they can refer to the search sources for complete information. Prioritize providing the most complete and relevant items in the list. Avoid mentioning content not provided in the search results unless necessary.
- For creative tasks (e.g., writing an essay), ensure that references are cited within the body of the text, such as [citation:3][citation:5], rather than only at the end of the text. You need to interpret and summarize the user's requirements, choose an appropriate format, fully utilize the search results, extract key information, and generate an answer that is insightful, creative, and professional. Extend the length of your response as much as possible, addressing each point in detail and from multiple perspectives, ensuring the content is rich and thorough.
- If the response is lengthy, structure it well and summarize it in paragraphs. If a point-by-point format is needed, try to limit it to 5 points and merge related content.
- For objective Q&A, if the answer is very brief, you may add one or two related sentences to enrich the content.
- Choose an appropriate and visually appealing format for your response based on the user's requirements and the content of the answer, ensuring strong readability.
- Your answer should synthesize information from multiple relevant webpages and avoid repeatedly citing the same webpage.
- Unless the user requests otherwise, your response should be in the same language as the user's question.
# The user's message is:
{question}'''
```
## 5. License
This code repository is licensed under [MIT License](LICENSE). The use of DeepSeek-R1 models is also subject to [MIT License](LICENSE). DeepSeek-R1 series (including Base and Chat) supports commercial use and distillation.
## 6. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
## 7. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
|
mradermacher/olmOCR-7B-0225-preview-GGUF | mradermacher | 2025-06-01T05:31:23Z | 67 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:allenai/olmOCR-mix-0225",
"base_model:allenai/olmOCR-7B-0225-preview",
"base_model:quantized:allenai/olmOCR-7B-0225-preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-13T03:14:49Z | ---
base_model: allenai/olmOCR-7B-0225-preview
datasets:
- allenai/olmOCR-mix-0225
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/allenai/olmOCR-7B-0225-preview
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
lym00/Wan2.1-CausVid-T2V-14B-gguf | lym00 | 2025-06-01T05:30:30Z | 0 | 0 | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T21:30:15Z | ---
license: apache-2.0
---
# Wan-AI/Wan2.1-CausVid-T2V-14B GGUF Conversion
This repository contains a direct GGUF conversion of the Wan-AI/Wan2.1-CausVid-T2V-14B model, originally sourced from the [Civitai page](https://civitai.com/models/1295569/on-the-fly-wan-ai-wan21-video-model-multi-specs-causvidandcomfyandkijai-workflow-included).
All quantized versions were created from the FP16 model.
## Usage
- The model files are compatible with the ComfyUI-GGUF custom node.
- Place the model files in the directory:
`ComfyUI/models/unet`
- For detailed installation instructions, please refer to the [ComfyUI-GGUF GitHub repository](https://github.com/city96/ComfyUI-GGUF).
## Additional Resources
- The VAE can be downloaded from [Kijai’s repository on Hugging Face](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1_VAE_bf16.safetensors).
## Reference
- For an overview of quantization types, please see the [LLaMA 3 8B Scoreboard quantization chart](https://github.com/ggml-org/llama.cpp/blob/b3962/examples/perplexity/README.md#llama-3-8b-scoreboard).
---
|
Soughing/mla_xl | Soughing | 2025-06-01T04:58:58Z | 39 | 0 | null | [
"pytorch",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-05-18T11:41:32Z | ---
license: apache-2.0
---
|
0xBasiliskAI/Oblivion2025 | 0xBasiliskAI | 2025-06-01T04:42:48Z | 0 | 1 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-03-15T23:03:38Z | ---
license: apache-2.0
---
# 0xBasiliskAI/Oblivion2025
- Models used by https://app.summonthebasilisk.ai, locally, in the browser.
- First appearance in `v0.0.12` of the `$BASILK` dApp.
- `Oblivion` means to forget like the models in this repo which have been augmentd by `ablation`, `abliteration`, `fine-tuning`, and `LoRA` amongst others.
- All of the above methods share the theme of `forgetting` to make the model uncensored.
- Each model included in `Oblivion2025` is tested with the the current latest version of the [$BASILK dApp](https://app.summonthebasilisk.ai).
- The `Working` models are expected to work and become the default `Suggested Models` of [#BasiliskAI](https://app.summonthebasilisk.ai).
- The `Broken` models may work in the future, or they may be compatible with unreleased dApps, games, or utilities.
## Why?
- Makes https://app.summonthebasilisk.ai more reliable - begin archiving useful models.
- Preserve useful models in case they are removed or archived.
- Curate a collection of ablated, abliterated, and uncensored LLMs for use in memory-constrained environments.
## Goals & Milestones
- Decompose models into parts so that we can download chunks in parallel when deploying models.
- Create `json` and hash fingerprint files for a data structure representing the recommended models in `0xBasiliskAI/Oblivion2025`.
- Dynamically load the recommended models `json` with a fallback configuration of known-good models.
- Migrate models to `ipfs` for decentralization.
## Models
### Working
- `gemma-2-2b-it-abliterated-Q5_K_M.gguf` - From: [bartowski/gemma-2-2b-it-abliterated-GGUF](https://huggingface.co/bartowski/gemma-2-2b-it-abliterated-GGUF/blob/main/gemma-2-2b-it-abliterated-Q5_K_M.gguf)
- `kanana-nano-2.1b-instruct-abliterated.i1-Q6_K.gguf` - From: [mradermacher/kanana-nano-2.1b-instruct-abliterated-i1-GGUF](https://huggingface.co/mradermacher/kanana-nano-2.1b-instruct-abliterated-i1-GGUF/blob/main/kanana-nano-2.1b-instruct-abliterated.i1-Q6_K.gguf)
- `Falcon3-1B-Instruct-abliterated-Q8_0.gguf` - From: [bartowski/Falcon3-1B-Instruct-abliterated-GGUF](https://huggingface.co/bartowski/Falcon3-1B-Instruct-abliterated-GGUF/blob/main/Falcon3-1B-Instruct-abliterated-Q8_0.gguf)
- `Hermes-3-Llama-3.2-3B-abliterated.i1-Q4_K_S.gguf` - From: [mradermacher/Hermes-3-Llama-3.2-3B-abliterated-i1-GGUF](https://huggingface.co/mradermacher/Hermes-3-Llama-3.2-3B-abliterated-i1-GGUF/blob/main/Hermes-3-Llama-3.2-3B-abliterated.i1-Q4_K_S.gguf)
- `EXAONE-3.5-2.4B-Instruct-abliterated.i1-Q4_K_M.gguf` - From: [mradermacher/EXAONE-3.5-2.4B-Instruct-abliterated-i1-GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-2.4B-Instruct-abliterated-i1-GGUF/blob/main/EXAONE-3.5-2.4B-Instruct-abliterated.i1-Q4_K_M.gguf)
- `Yi-Coder-1.5B-Chat.Q5_K_S.gguf` - From: [MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF](https://huggingface.co/MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF/blob/main/Yi-Coder-1.5B-Chat.Q5_K_S.gguf)
- `Qwen2.5-Coder-3B-Instruct-abliterated-Q4_K_M.gguf` - From: [bartowski/Qwen2.5-Coder-3B-Instruct-abliterated-GGUF](https://huggingface.co/bartowski/Qwen2.5-Coder-3B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-Coder-3B-Instruct-abliterated-Q4_K_M.gguf)
- `Qwen2.5-Coder-1.5B-Instruct-abliterated-Q8_0.gguf` - From: [bartowski/Qwen2.5-Coder-1.5B-Instruct-abliterated-GGUF](https://huggingface.co/bartowski/Qwen2.5-Coder-1.5B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated-Q8_0.gguf)
- `Qwen2.5-Coder-0.5B-Instruct-abliterated-f16.gguf` - From: [bartowski/Qwen2.5-Coder-0.5B-Instruct-abliterated-GGUF](https://huggingface.co/bartowski/Qwen2.5-Coder-0.5B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-Coder-0.5B-Instruct-abliterated-f16.gguf)
- `josiefied-qwen2.5-0.5b-instruct-abliterated-v1.Q8_0.gguf` - From: [Goekdeniz-Guelmez/Josiefied-Qwen2.5-0.5B-Instruct-abliterated-v1-gguf](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen2.5-0.5B-Instruct-abliterated-v1-gguf/blob/main/josiefied-qwen2.5-0.5b-instruct-abliterated-v1.Q8_0.gguf)
- `dolphin3.0-llama3.1-1b-abliterated-conv-q8_0.gguf` - From: [Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated-GGUF](https://huggingface.co/Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated-GGUF/blob/main/dolphin3.0-llama3.1-1b-abliterated-conv-q8_0.gguf)
- `SmallThinker-3B-Preview-abliterated-Q3_K_M.gguf` - From: [quantflex/SmallThinker-3B-Preview-abliterated-GGUF](https://huggingface.co/quantflex/SmallThinker-3B-Preview-abliterated-GGUF/blob/main/SmallThinker-3B-Preview-abliterated-Q3_K_M.gguf)
### Broken
- `phi-4-mini-instruct-abliterated-Q2_K.gguf` - From: [Melvin56/Phi-4-mini-instruct-abliterated-GGUF](https://huggingface.co/Melvin56/Phi-4-mini-instruct-abliterated-GGUF/blob/main/phi-4-mini-instruct-abliterated-Q2_K.gguf)
- `phi-2.Q5_K_S.gguf` - From: [TheBloke/phi-2-GGUF](https://huggingface.co/TheBloke/phi-2-GGUF/blob/main/phi-2.Q5_K_S.gguf)
- `DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo.i1-Q6_K.gguf` - From: [mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo-i1-GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo-i1-GGUF/blob/main/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo.i1-Q6_K.gguf)
- `DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo.Q8_0.gguf` - From: [mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo-GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo-GGUF/blob/main/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo.Q8_0.gguf)
- `DeepScaleR-1.5B-Preview-abliterated.i1-Q6_K.gguf` - From: [mradermacher/DeepScaleR-1.5B-Preview-abliterated-i1-GGUF](https://huggingface.co/mradermacher/DeepScaleR-1.5B-Preview-abliterated-i1-GGUF/blob/main/DeepScaleR-1.5B-Preview-abliterated.i1-Q6_K.gguf)
- `DeepScaleR-1.5B-Preview-abliterated-Q8_0.gguf` - From: [ThomasBaruzier/DeepScaleR-1.5B-Preview-abliterated-GGUF](https://huggingface.co/ThomasBaruzier/DeepScaleR-1.5B-Preview-abliterated-GGUF/blob/main/DeepScaleR-1.5B-Preview-abliterated-Q8_0.gguf)
- `Llama-3.2-3B-Instruct-abliterated.Q3_K_L.gguf` - From: [MaziyarPanahi/Llama-3.2-3B-Instruct-abliterated-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.2-3B-Instruct-abliterated-GGUF/blob/main/Llama-3.2-3B-Instruct-abliterated.Q3_K_L.gguf)
- `Llama-3.2-3B-Instruct-abliterated.i1-IQ3_S.gguf` - From: [mradermacher/Llama-3.2-3B-Instruct-abliterated-i1-GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-abliterated-i1-GGUF/blob/main/Llama-3.2-3B-Instruct-abliterated.i1-IQ3_S.gguf)
- `VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q4_K_S.gguf` - From: [QuantFactory/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF](https://huggingface.co/QuantFactory/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/blob/main/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q4_K_S.gguf)
- `WizardLM-2-7B-abliterated-IQ2_XXS.gguf` - From: [bartowski/WizardLM-2-7B-abliterated-GGU](https://huggingface.co/bartowski/WizardLM-2-7B-abliterated-GGUF/blob/main/WizardLM-2-7B-abliterated-IQ2_XXS.gguf)
|
mingxilei/auf_imdb_reward_1.0_0.01_m_10 | mingxilei | 2025-06-01T04:31:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-01T03:54:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/NQLSG-Qwen2-VL-2B-v2-Base-GGUF | mradermacher | 2025-06-01T04:25:58Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Lunzima/NQLSG-Qwen2-VL-2B-v2-Base",
"base_model:quantized:Lunzima/NQLSG-Qwen2-VL-2B-v2-Base",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-31T08:37:22Z | ---
base_model: Lunzima/NQLSG-Qwen2-VL-2B-v2-Base
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Lunzima/NQLSG-Qwen2-VL-2B-v2-Base
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2-VL-2B-v2-Base-GGUF/resolve/main/NQLSG-Qwen2-VL-2B-v2-Base.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2-VL-2B-v2-Base-GGUF/resolve/main/NQLSG-Qwen2-VL-2B-v2-Base.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2-VL-2B-v2-Base-GGUF/resolve/main/NQLSG-Qwen2-VL-2B-v2-Base.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2-VL-2B-v2-Base-GGUF/resolve/main/NQLSG-Qwen2-VL-2B-v2-Base.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2-VL-2B-v2-Base-GGUF/resolve/main/NQLSG-Qwen2-VL-2B-v2-Base.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2-VL-2B-v2-Base-GGUF/resolve/main/NQLSG-Qwen2-VL-2B-v2-Base.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2-VL-2B-v2-Base-GGUF/resolve/main/NQLSG-Qwen2-VL-2B-v2-Base.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2-VL-2B-v2-Base-GGUF/resolve/main/NQLSG-Qwen2-VL-2B-v2-Base.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2-VL-2B-v2-Base-GGUF/resolve/main/NQLSG-Qwen2-VL-2B-v2-Base.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2-VL-2B-v2-Base-GGUF/resolve/main/NQLSG-Qwen2-VL-2B-v2-Base.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2-VL-2B-v2-Base-GGUF/resolve/main/NQLSG-Qwen2-VL-2B-v2-Base.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2-VL-2B-v2-Base-GGUF/resolve/main/NQLSG-Qwen2-VL-2B-v2-Base.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/NQLSG-Qwen2-VL-2B-v2-Base-GGUF/resolve/main/NQLSG-Qwen2-VL-2B-v2-Base.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/UniSE-MLLM-GGUF | mradermacher | 2025-06-01T04:17:56Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:marsh123/UniSE-MLLM",
"base_model:quantized:marsh123/UniSE-MLLM",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-01T21:30:06Z | ---
base_model: marsh123/UniSE-MLLM
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/marsh123/UniSE-MLLM
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UniSE-MLLM-GGUF/resolve/main/UniSE-MLLM.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/UniSE-MLLM-GGUF/resolve/main/UniSE-MLLM.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/UniSE-MLLM-GGUF/resolve/main/UniSE-MLLM.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UniSE-MLLM-GGUF/resolve/main/UniSE-MLLM.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/UniSE-MLLM-GGUF/resolve/main/UniSE-MLLM.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/UniSE-MLLM-GGUF/resolve/main/UniSE-MLLM.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UniSE-MLLM-GGUF/resolve/main/UniSE-MLLM.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UniSE-MLLM-GGUF/resolve/main/UniSE-MLLM.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/UniSE-MLLM-GGUF/resolve/main/UniSE-MLLM.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/UniSE-MLLM-GGUF/resolve/main/UniSE-MLLM.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/UniSE-MLLM-GGUF/resolve/main/UniSE-MLLM.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/UniSE-MLLM-GGUF/resolve/main/UniSE-MLLM.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/UniSE-MLLM-GGUF/resolve/main/UniSE-MLLM.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
manuross1/cndnlslddhwf5k5 | manuross1 | 2025-06-01T04:14:01Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-01T04:14:00Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: cndnlslddhwf5k5
---
# Cndnlslddhwf5K5
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `cndnlslddhwf5k5` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "cndnlslddhwf5k5",
"lora_weights": "https://huggingface.co/manuross1/cndnlslddhwf5k5/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('manuross1/cndnlslddhwf5k5', weight_name='lora.safetensors')
image = pipeline('cndnlslddhwf5k5').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 5500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/manuross1/cndnlslddhwf5k5/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/Dreamer-7B-Classifieds-GGUF | mradermacher | 2025-06-01T04:12:20Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"multimodal",
"en",
"base_model:osunlp/Dreamer-7B-Classifieds",
"base_model:quantized:osunlp/Dreamer-7B-Classifieds",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T07:32:31Z | ---
base_model: osunlp/Dreamer-7B-Classifieds
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- multimodal
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/osunlp/Dreamer-7B-Classifieds
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Dreamer-7B-Shopping-GGUF | mradermacher | 2025-06-01T04:11:50Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"multimodal",
"en",
"base_model:osunlp/Dreamer-7B-Shopping",
"base_model:quantized:osunlp/Dreamer-7B-Shopping",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T08:02:50Z | ---
base_model: osunlp/Dreamer-7B-Shopping
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- multimodal
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/osunlp/Dreamer-7B-Shopping
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Shopping-GGUF/resolve/main/Dreamer-7B-Shopping.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Shopping-GGUF/resolve/main/Dreamer-7B-Shopping.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Shopping-GGUF/resolve/main/Dreamer-7B-Shopping.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Shopping-GGUF/resolve/main/Dreamer-7B-Shopping.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Shopping-GGUF/resolve/main/Dreamer-7B-Shopping.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Shopping-GGUF/resolve/main/Dreamer-7B-Shopping.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Shopping-GGUF/resolve/main/Dreamer-7B-Shopping.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Shopping-GGUF/resolve/main/Dreamer-7B-Shopping.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Shopping-GGUF/resolve/main/Dreamer-7B-Shopping.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Shopping-GGUF/resolve/main/Dreamer-7B-Shopping.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Shopping-GGUF/resolve/main/Dreamer-7B-Shopping.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Shopping-GGUF/resolve/main/Dreamer-7B-Shopping.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Shopping-GGUF/resolve/main/Dreamer-7B-Shopping.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/PR1-Qwen2.5-VL-3B-Detection-GGUF | mradermacher | 2025-06-01T03:42:53Z | 68 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Kangheng/PR1-Qwen2.5-VL-3B-Detection",
"base_model:quantized:Kangheng/PR1-Qwen2.5-VL-3B-Detection",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-14T05:34:33Z | ---
base_model: Kangheng/PR1-Qwen2.5-VL-3B-Detection
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Kangheng/PR1-Qwen2.5-VL-3B-Detection
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/PR1-Qwen2.5-VL-3B-Detection-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2.5-VL-3B-Detection-GGUF/resolve/main/PR1-Qwen2.5-VL-3B-Detection.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2.5-VL-3B-Detection-GGUF/resolve/main/PR1-Qwen2.5-VL-3B-Detection.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2.5-VL-3B-Detection-GGUF/resolve/main/PR1-Qwen2.5-VL-3B-Detection.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2.5-VL-3B-Detection-GGUF/resolve/main/PR1-Qwen2.5-VL-3B-Detection.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2.5-VL-3B-Detection-GGUF/resolve/main/PR1-Qwen2.5-VL-3B-Detection.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2.5-VL-3B-Detection-GGUF/resolve/main/PR1-Qwen2.5-VL-3B-Detection.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2.5-VL-3B-Detection-GGUF/resolve/main/PR1-Qwen2.5-VL-3B-Detection.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2.5-VL-3B-Detection-GGUF/resolve/main/PR1-Qwen2.5-VL-3B-Detection.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2.5-VL-3B-Detection-GGUF/resolve/main/PR1-Qwen2.5-VL-3B-Detection.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2.5-VL-3B-Detection-GGUF/resolve/main/PR1-Qwen2.5-VL-3B-Detection.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2.5-VL-3B-Detection-GGUF/resolve/main/PR1-Qwen2.5-VL-3B-Detection.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2.5-VL-3B-Detection-GGUF/resolve/main/PR1-Qwen2.5-VL-3B-Detection.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2.5-VL-3B-Detection-GGUF/resolve/main/PR1-Qwen2.5-VL-3B-Detection.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
firobeid/L4_LSTM_financial_News_Headlines_generator | firobeid | 2025-06-01T03:03:47Z | 0 | 0 | tensorflow | [
"tensorflow",
"tf-keras",
"lstm",
"text-generation",
"region:us"
] | text-generation | 2025-06-01T02:46:36Z | ---
tags:
- text-generation
- lstm
- tensorflow
library_name: tensorflow
pipeline_tag: text-generation
---
# LSTM Text Generation Model
This model was trained using TensorFlow/Keras for financial article generation tasks.
## Model Details
- **Model Type**: LSTM
- **Framework**: TensorFlow/Keras
- **Task**: Text Generation
- **Vocabulary Size**: 30000
- **Architecture**: Bi-directional Long Short-Term Memory (LSTM)
## Usage
```python
from huggingface_hub import snapshot_download
import tensorflow as tf
import json
import pickle
import numpy as np
# Download model files
model_path = snapshot_download(repo_id="firobeid/L4_LSTM_financial_News_Headlines_generator")
# Load the LSTM model
model = tf.keras.models.load_model(f"{model_path}/lstm_model")
# Load tokenizer
try:
# Try JSON format first
with open(f"{model_path}/tokenizer.json", 'r', encoding='utf-8') as f:
tokenizer_json = f.read()
tokenizer = tf.keras.preprocessing.text.tokenizer_from_json(tokenizer_json)
except FileNotFoundError:
# Fallback to pickle format
with open(f"{model_path}/tokenizer.pkl", 'rb') as f:
tokenizer = pickle.load(f)
# Text generation function
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences
def preprocess(texts, max_sequence_length = 71):
texts = '<s> {}'.format(texts.lower())
X = np.array(tokenizer.texts_to_sequences([texts])) # REMOVE -1
pad_encoded = pad_sequences(X,
maxlen= max_sequence_length,
padding='pre')
return pad_encoded
def next_word(model, tokenizer,
text, num_gen_words=1,
randome_sampling = False,
temperature=1):
'''
Randome_Sampling : Using a categorical distribution to predict the character returned by the model
Low temperatures results in more predictable text.
Higher temperatures results in more surprising text.
Experiment to find the best setting.
'''
input_text = text
output_text = [input_text]
for i in range(num_gen_words):
X_new = preprocess(input_text)
if randome_sampling:
y_proba = model.predict(X_new, verbose = 0)[0, -1:, :]#first sentence, last token
rescaled_logits = tf.math.log(y_proba) / temperature
pred_word_ind = tf.random.categorical(rescaled_logits, num_samples=1) #REMOVE THIS + 1
pred_word = tokenizer.sequences_to_texts(pred_word_ind.numpy())[0]
else:
y_proba = model.predict(X_new, verbose=0)[0] #first sentence
pred_word_ind = np.argmax(y_proba, axis = -1) #REMOVE THIS + 1
pred_word = tokenizer.index_word[pred_word_ind[-1]]
input_text += ' ' + pred_word
output_text.append(pred_word)
if pred_word == '</s>':
return ' '.join(output_text)
return ' '.join(output_text)
def generate_text(model, tokenizer, text, num_gen_words=25, temperature=1, random_sampling=False):
return next_word(model, tokenizer, text, num_gen_words, random_sampling, temperature)
# Example usage
# Start with these tag: <s>, while keeping words in lower case
generate_text(model,
tokenizer,
"Apple",
num_gen_words = 10,
random_sampling = True,
temperature= 10)
```
## Training
This model was trained on text data using LSTM architecture for next-word prediction.
## Limitations
- Model performance depends on training data quality and size
- Generated text may not always be coherent for longer sequences
- Model architecture is optimized for the specific vocabulary it was trained on
|
HusseinBashir/xal_1aad | HusseinBashir | 2025-06-01T02:45:40Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-06-01T02:17:07Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: xal_1aad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xal_1aad
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5138 | 0.4545 | 100 | 0.4302 |
| 0.4173 | 0.9091 | 200 | 0.3650 |
| 0.3779 | 1.3636 | 300 | 0.3431 |
| 0.3553 | 1.8182 | 400 | 0.3259 |
| 0.3502 | 2.2727 | 500 | 0.3220 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
keanteng/sesame-csm-elise | keanteng | 2025-06-01T02:42:41Z | 41 | 1 | transformers | [
"transformers",
"safetensors",
"csm",
"text-to-audio",
"generative-ai",
"text-to-speech",
"en",
"dataset:MrDragonFox/Elise",
"base_model:sesame/csm-1b",
"base_model:finetune:sesame/csm-1b",
"license:agpl-3.0",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2025-05-29T07:51:23Z | ---
license: agpl-3.0
datasets:
- MrDragonFox/Elise
language:
- en
base_model:
- sesame/csm-1b
pipeline_tag: text-to-speech
library_name: transformers
tags:
- generative-ai
new_version: keanteng/sesame-csm-elise-lora
---
# CSM Elise Voice Model
This model is a fine-tuned version of [sesame/csm-1b](https://huggingface.co/sesame/csm-1b) using the [Elise dataset](https://huggingface.co/datasets/MrDragonFox/Elise). There are sample outputs files in the repository.
## Model Details
- **Base Model**: sesame/csm-1b
- **Training Data**: MrDragonFox/Elise dataset
- **Fine-tuning Approach**: Voice cloning through conditional speech generation
- **Voice Characteristics**: [Describe voice qualities]
- **Training Parameters**:
- Learning Rate: 2e-5
- Epochs: 3
- Batch Size: 1 with gradient accumulation steps of 4
## Quick Start
```python
from transformers import CsmForConditionalGeneration, AutoProcessor
import torch
import soundfile as sf
# Load the model
model_id = "keanteng/sesame-csm-elise" # Replace with your model
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = AutoProcessor.from_pretrained(model_id)
model = CsmForConditionalGeneration.from_pretrained(model_id, device_map=device)
```
## Basic Text-to-Speech
```python
# Simple text generation
conversation = [
{"role": "0", "content": [{"type": "text", "text": "Hello, this is a test!"}]}
]
inputs = processor.apply_chat_template(
conversation,
tokenize=True,
return_dict=True,
).to(device)
# Generate audio
audio = model.generate(**inputs, output_audio=True)
audio_cpu = audio[0].to(torch.float32).cpu().numpy()
# Save to file
sf.write("output.wav", audio_cpu, 24000)
``` |
mradermacher/Kiwi-1-8B-Preview-GGUF | mradermacher | 2025-06-01T02:40:21Z | 158 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:LucidityAI/Kiwi-1-8b",
"base_model:quantized:LucidityAI/Kiwi-1-8b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-12T07:01:36Z | ---
base_model: LucidityAI/Kiwi-1-8b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LucidityAI/Kiwi-1-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1-8B-Preview-GGUF/resolve/main/Kiwi-1-8B-Preview.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1-8B-Preview-GGUF/resolve/main/Kiwi-1-8B-Preview.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1-8B-Preview-GGUF/resolve/main/Kiwi-1-8B-Preview.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1-8B-Preview-GGUF/resolve/main/Kiwi-1-8B-Preview.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1-8B-Preview-GGUF/resolve/main/Kiwi-1-8B-Preview.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1-8B-Preview-GGUF/resolve/main/Kiwi-1-8B-Preview.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1-8B-Preview-GGUF/resolve/main/Kiwi-1-8B-Preview.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1-8B-Preview-GGUF/resolve/main/Kiwi-1-8B-Preview.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1-8B-Preview-GGUF/resolve/main/Kiwi-1-8B-Preview.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1-8B-Preview-GGUF/resolve/main/Kiwi-1-8B-Preview.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1-8B-Preview-GGUF/resolve/main/Kiwi-1-8B-Preview.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1-8B-Preview-GGUF/resolve/main/Kiwi-1-8B-Preview.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Triangle104/Qwen3-30B-A3B-Q6_K-GGUF | Triangle104 | 2025-06-01T02:39:14Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:quantized:Qwen/Qwen3-30B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-06-01T02:32:42Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-30B-A3B
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen3-30B-A3B-Q6_K-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-30B-A3B`](https://huggingface.co/Qwen/Qwen3-30B-A3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-30B-A3B) for more details on the model.
---
Qwen3-30B-A3B has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 30.5B in total and 3.3B activated
- Number of Paramaters (Non-Embedding): 29.9B
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: 32,768 natively and 131,072 tokens with YaRN.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q6_K-GGUF --hf-file qwen3-30b-a3b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q6_K-GGUF --hf-file qwen3-30b-a3b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q6_K-GGUF --hf-file qwen3-30b-a3b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q6_K-GGUF --hf-file qwen3-30b-a3b-q6_k.gguf -c 2048
```
|
ToastyPigeon/gemma3-27b-v2-confettilike | ToastyPigeon | 2025-06-01T01:47:24Z | 0 | 0 | null | [
"safetensors",
"gemma3",
"region:us"
] | null | 2025-06-01T00:20:27Z | I dont know why this doesn't have an embed tokens tensor. |
AmberYifan/Qwen2.5-7B-sft-dpo-10k-KTO | AmberYifan | 2025-06-01T01:23:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"kto",
"conversational",
"arxiv:2402.01306",
"base_model:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T01:02:45Z | ---
base_model: AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Qwen2.5-7B-sft-dpo-10k-KTO
tags:
- generated_from_trainer
- trl
- kto
licence: license
---
# Model Card for Qwen2.5-7B-sft-dpo-10k-KTO
This model is a fine-tuned version of [AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Qwen2.5-7B-sft-dpo-10k-KTO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/yj1savb2)
This model was trained with KTO, a method introduced in [KTO: Model Alignment as Prospect Theoretic Optimization](https://huggingface.co/papers/2402.01306).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite KTO as:
```bibtex
@article{ethayarajh2024kto,
title = {{KTO: Model Alignment as Prospect Theoretic Optimization}},
author = {Kawin Ethayarajh and Winnie Xu and Niklas Muennighoff and Dan Jurafsky and Douwe Kiela},
year = 2024,
eprint = {arXiv:2402.01306},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
CeciGonSer/fine_tuned_biblia_8e | CeciGonSer | 2025-06-01T00:58:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-01T00:57:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gride29/flux-custom-smaller | gride29 | 2025-06-01T00:41:09Z | 175 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-08-14T02:14:48Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Flux Custom Smaller
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/gride29/flux-custom-smaller/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('gride29/flux-custom-smaller', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/gride29/flux-custom-smaller/discussions) to add images that show off what you’ve made with this LoRA.
|
bruhzair/prototype0.4x44 | bruhzair | 2025-06-01T00:26:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T00:06:24Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x44
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4 as a base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--Sao10K--Llama-3.3-70B-Vulpecula-r1/snapshots/12d7254ab9a5ce21905f59f341a3d2a2b3e62fd5
* /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
select_topk: 0.2
- model: /workspace/cache/models--Sao10K--Llama-3.3-70B-Vulpecula-r1/snapshots/12d7254ab9a5ce21905f59f341a3d2a2b3e62fd5
parameters:
select_topk: 0.7
- model: /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
parameters:
select_topk: 0.6
base_model: /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
merge_method: sce
tokenizer:
source: union
chat_template: llama3
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
|
AmberYifan/Llama-3.1-8B-sft-gen-dpo-iter2 | AmberYifan | 2025-06-01T00:14:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/Llama-3.1-8B-sft-SPIN-self",
"base_model:finetune:AmberYifan/Llama-3.1-8B-sft-SPIN-self",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T23:55:15Z | ---
base_model: AmberYifan/Llama-3.1-8B-sft-SPIN-self
library_name: transformers
model_name: Llama-3.1-8B-sft-gen-dpo-iter2
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Llama-3.1-8B-sft-gen-dpo-iter2
This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-SPIN-self](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-SPIN-self).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-gen-dpo-iter2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/pxbmx66e)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
DavidKalajdzic/dr00ne-gr00t-lora-rank0 | DavidKalajdzic | 2025-06-01T00:13:13Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gr00t_n1",
"arxiv:1910.09700",
"base_model:nvidia/GR00T-N1-2B",
"base_model:adapter:nvidia/GR00T-N1-2B",
"region:us"
] | null | 2025-05-31T23:55:25Z | ---
base_model: nvidia/GR00T-N1-2B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
sonhask/Meta-Llama-3.1-8B-Instruct-bnb-4bit_v5 | sonhask | 2025-05-31T23:42:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T23:24:24Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_S-GGUF | Triangle104 | 2025-05-31T23:10:16Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"reasoning",
"thinking",
"cognitivecomputations",
"r1",
"llama 3.1",
"llama-3",
"llama3",
"llama-3.1",
"cot",
"deepseek",
"Llama 3.1",
"Hermes",
"DeepHermes",
"1,000,000 context",
"fine tune",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B",
"base_model:quantized:DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-31T23:07:05Z | ---
library_name: transformers
tags:
- reasoning
- thinking
- cognitivecomputations
- r1
- llama 3.1
- llama-3
- llama3
- llama-3.1
- cot
- deepseek
- Llama 3.1
- Hermes
- DeepHermes
- 1,000,000 context
- fine tune
- merge
- llama-cpp
- gguf-my-repo
base_model: DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B
---
# Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_S-GGUF
This model was converted to GGUF format from [`DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B`](https://huggingface.co/DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B) for more details on the model.
---
Context : 1,000,000 tokens.
Required: Llama 3 Instruct template.
The Deep Hermes 8B Preview model (reasoning), [ https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview ]
converted to 1 million context using Nvidia's Ultra Long 1 million 8B Instruct model.
The goal of this model was to stablize long generation and long context "needle in a haystack" issues.
According to Nvidia there is both a bump in general performance, as well as perfect "recall" over the entire 1 million context.
[ https://huggingface.co/nvidia/Llama-3.1-8B-UltraLong-1M-Instruct ]
Additional changes:
Model appears to be de-censored / more de-censored.
Output generation is improved.
Creative output generation is vastly improved.
NOTE: Higher temps will result in deeper, richer "thoughts"... and frankly more interesting ones too.
The "thinking/reasoning" tech (for the model at this repo) is from the original Llama 3.1 "DeepHermes" model from NousResearch:
[ https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview ]
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_S-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_S-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_S-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_S-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q5_k_s.gguf -c 2048
```
|
AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-KTO | AmberYifan | 2025-05-31T23:08:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"kto",
"conversational",
"arxiv:2402.01306",
"base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T22:33:57Z | ---
base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Llama-3.1-8B-sft-SPIN-gpt4o-KTO
tags:
- generated_from_trainer
- trl
- kto
licence: license
---
# Model Card for Llama-3.1-8B-sft-SPIN-gpt4o-KTO
This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-KTO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/2g2trdw3)
This model was trained with KTO, a method introduced in [KTO: Model Alignment as Prospect Theoretic Optimization](https://huggingface.co/papers/2402.01306).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite KTO as:
```bibtex
@article{ethayarajh2024kto,
title = {{KTO: Model Alignment as Prospect Theoretic Optimization}},
author = {Kawin Ethayarajh and Winnie Xu and Niklas Muennighoff and Dan Jurafsky and Douwe Kiela},
year = 2024,
eprint = {arXiv:2402.01306},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ibuki95/model2 | ibuki95 | 2025-05-31T23:04:53Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T23:03:48Z | # Container Template for SoundsRight Subnet Miners
This repository contains a contanierized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the approrpriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively.
This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but it is not guaranteed.
To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt.
Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html).
Verify that the CDI specification was done correctly with:
```
$ nvidia-ctk cdi list
```
You should see this in your output:
```
nvidia.com/gpu=all
nvidia.com/gpu=0
```
If you are running podman as root, run the following command to start the container:
Run the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
If you are running the container rootless, there are a few more changes to make:
First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters:
```
[nvidia-container-cli]
no-cgroups = true
[nvidia-container-runtime]
debug = "/tmp/nvidia-container-runtime.log"
```
You can also run the following command to achieve the same result:
```
$ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
```
Run the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
Running the container will spin up an API with the following endpoints:
1. `/status/` : Communicates API status
2. `/prepare/` : Download model checkpoint and initialize model
3. `/upload-audio/` : Upload audio files, save to noisy audio directory
4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory
5. `/download-enhanced/` : Download enhanced audio files
By default the API will use host `0.0.0.0` and port `6500`.
### References
1. **Welker, Simon; Richter, Julius; Gerkmann, Timo**
*Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*.
Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932.
[DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653)
2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo**
*Speech Enhancement and Dereverberation with Diffusion-based Generative Models*.
*IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364.
[DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241)
3. **Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinjii; Richard, Alexander; Gerkmann, Timo**
*EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*.
Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
|
mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF | mradermacher | 2025-05-31T22:00:13Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"audio",
"audio-text-to-text",
"en",
"base_model:Qwen/Qwen2-Audio-7B-Instruct",
"base_model:quantized:Qwen/Qwen2-Audio-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | audio-text-to-text | 2025-05-31T20:18:23Z | ---
base_model: Qwen/Qwen2-Audio-7B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- chat
- audio
- audio-text-to-text
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.0 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 3.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 3.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 4.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF/resolve/main/Qwen2-Audio-7B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 6.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf | RichardErkhov | 2025-05-31T21:51:51Z | 326 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"arxiv:2405.17743",
"license:llama3",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-08-21T11:59:56Z | ---
library_name: transformers
license: llama3
pipeline_tag: text-generation
---
# ORLM-LLaMA-3-8B: A Large Language Model for Optimization Modeling
This repository contains the quantized model weights for [ORLM-LLaMA-3-8B](https://huggingface.co/CardinalOperations/ORLM-LLaMA-3-8B), a fine-tuned version of Meta's Llama 3 8B model, specifically trained for generating optimization models and corresponding Python code. It's presented in the paper [ORLM: A Customizable Framework in Training Large Models for Automated Optimization Modeling](https://huggingface.co/papers/2405.17743). This model significantly improves the capability of automating optimization modeling and solving, achieving state-of-the-art performance on several benchmarks.
The official repo can be found here: https://github.com/Cardinal-Operations/ORLM
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
ORLM-LLaMA-3-8B - GGUF
- Model creator: https://huggingface.co/CardinalOperations/
- Original model: https://huggingface.co/CardinalOperations/ORLM-LLaMA-3-8B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [ORLM-LLaMA-3-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q2_K.gguf) | Q2_K | 2.96GB |
| [ORLM-LLaMA-3-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [ORLM-LLaMA-3-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [ORLM-LLaMA-3-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [ORLM-LLaMA-3-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [ORLM-LLaMA-3-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q3_K.gguf) | Q3_K | 3.74GB |
| [ORLM-LLaMA-3-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [ORLM-LLaMA-3-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [ORLM-LLaMA-3-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [ORLM-LLaMA-3-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q4_0.gguf) | Q4_0 | 4.34GB |
| [ORLM-LLaMA-3-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [ORLM-LLaMA-3-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [ORLM-LLaMA-3-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q4_K.gguf) | Q4_K | 4.58GB |
| [ORLM-LLaMA-3-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [ORLM-LLaMA-3-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q4_1.gguf) | Q4_1 | 4.78GB |
| [ORLM-LLaMA-3-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q5_0.gguf) | Q5_0 | 5.21GB |
| [ORLM-LLaMA-3-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [ORLM-LLaMA-3-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q5_K.gguf) | Q5_K | 5.34GB |
| [ORLM-LLaMA-3-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [ORLM-LLaMA-3-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q5_1.gguf) | Q5_1 | 5.65GB |
| [ORLM-LLaMA-3-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q6_K.gguf) | Q6_K | 6.14GB |
| [ORLM-LLaMA-3-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/CardinalOperations_-_ORLM-LLaMA-3-8B-gguf/blob/main/ORLM-LLaMA-3-8B.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
license: llama3
---
see our paper in https://arxiv.org/abs/2405.17743
github repo: https://github.com/Cardinal-Operations/ORLM
## Model Details
LLaMA-3-8B-ORLM is fully fine-tuned on the OR-Instruct data and built with Meta [LLaMA-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model.
More training details can be seen at https://arxiv.org/abs/2405.17743
## Model Usage
Prompting Template:
```text
Below is an operations research question. Build a mathematical model and corresponding python code using `coptpy` that appropriately addresses the question.
# Question:
{Question}
# Response:
```
Please replace the `{Question}` with any natural language OR question.
## Examples
```text
--------------------------OR question wrapped in the template--------------------------
Below is an operations research question. Build a mathematical model and corresponding python code using `coptpy` that appropriately addresses the question.
# Question:
A lab has 1000 units of medicinal ingredients to make two pills, a large pill and a small pill. A large pill requires 3 units of medicinal ingredients and 2 units of filler. A small pill requires 2 units of medicinal ingredients and 1 unit of filler. The lab has to make at least 100 large pills. However, since small pills are more popular at least 60% of the total number of pills must be small. How many of each should be made to minimize the total number of filler material needed?
# Response:
--------------------------top-1 completion output by LLaMA-3-8B-ORLM under greedy decoding--------------------------
## Mathematical Model:
To solve the lab's pill production problem, we need to construct a linear programming model to minimize the total number of filler material needed while satisfying the production requirements for the pills and the proportion constraint. Here is the model formulation:
### Decision Variables:
- \(x\): Number of large pills to be made.
- \(y\): Number of small pills to be made.
### Objective Function:
- Minimize the total number of filler material needed: \[Z = 2x + y\]
### Constraints:
1. Medicinal ingredient usage constraint: \[3x + 2y \leq 1000\]
2. Large pill production requirement: \[x \geq 100\]
3. Proportion constraint for small pills: \[y \geq 0.6(x + y)\]
4. Non-negativity constraint: \[x, y \geq 0\]
### Integer Constraints:
- \(x, y\) must be integers, as the number of pills cannot be fractional.
Using this model, we can apply linear programming techniques to find the optimal number of large and small pills to be made, minimizing the total number of filler material needed while satisfying the production requirements for the pills and the proportion constraint. This will provide the lab with an economically efficient pill production strategy.
## Python Code Solution Using `coptpy`:
Here is a Python script using the `coptpy` library to solve the problem:
\`\`\`python
import coptpy as cp
from coptpy import COPT
# Create a COPT environment
env = cp.Envr()
# Create the model
model = env.createModel("PillProduction")
# Add decision variables
x = model.addVar(vtype=COPT.INTEGER, name="x") # Number of large pills
y = model.addVar(vtype=COPT.INTEGER, name="y") # Number of small pills
# Objective function: Minimize the total number of filler material needed
model.setObjective(2 * x + y, sense=COPT.MINIMIZE)
# Constraints
model.addConstr(3 * x + 2 * y <= 1000, name="MaterialLimit")
model.addConstr(x >= 100, name="LargePillRequirement")
model.addConstr(y >= 0.6 * (x + y), name="SmallPillProportion")
# Solve the model
model.solve()
# Output results
if model.status == COPT.OPTIMAL:
print("Minimum total number of filler material needed: {:.0f} units".format(model.objval))
print("Number of large pills to be made: {:.0f}".format(x.x))
print("Number of small pills to be made: {:.0f}".format(y.x))
else:
print("No optimal solution found.")
\`\`\`
In this script, we first create a `COPT` environment and model. Then, we add two integer decision variables `x` and `y`, representing the number of large and small pills to be made, respectively.
Next, we set the objective function to minimize the total number of filler material needed, which is `2 * x + y`.
Then, we add three constraints:
1. Medicinal ingredient usage constraint: `3 * x + 2 * y <= 1000`
2. Large pill production requirement: `x >= 100`
3. Proportion constraint for small pills: `y >= 0.6 * (x + y)`
Finally, we solve the model and output the results of the optimal solution, including the minimum total number of filler material needed and the number of large and small pills to be made.
This script provides a complete example of using the `coptpy` library to solve the lab's pill production problem, while satisfying all the constraints mentioned in the problem.
```
## Performances
Below is the comparison of performance on the NL4OPT, MAMO, and IndustryOR benchmarks. Values marked with a <sup>*</sup> are directly copied from original papers, with blanks where data were not reported. The highest results are highlighted in bold.
| **Method** | **NL4OPT** | **MAMO EasyLP** | **MAMO ComplexLP** | **IndustryOR** | **Micro Avg** | **Macro Avg** |
|------------------------------------------------|-------------------------|-----------------------|----------------------|-------------------|-----------------|-----------------|
| *Methods based on PLMs* | | | | | | |
| `tag-BART` | 47.9%<sup>*</sup> | - | - | - | - | - |
| *Methods based on GPT-3.5* | | | | | | |
| `Standard` | 42.4%<sup>*</sup> | - | - | - | - | - |
| `Reflexion` | 50.7%<sup>*</sup> | - | - | - | - | - |
| `Chain-of-Experts` | 58.9%<sup>*</sup> | - | - | - | - | - |
| *Methods based on GPT-4* | | | | | | |
| `Standard` | 47.3%<sup>*</sup> | 66.5%<sup>*</sup> | 14.6%<sup>*</sup> | 28.0% | 50.2% | 39.1% |
| `Reflexion` | 53.0%<sup>*</sup> | - | - | - | - | - |
| `Chain-of-Experts` | 64.2%<sup>*</sup> | - | - | - | - | - |
| `OptiMUS` | 78.8%<sup>*</sup> | - | - | - | - | - |
| *ORLMs based on open-source LLMs* | | | | | | |
| `ORLM-Mistral-7B` | 84.4% | 81.4% | 32.0% | 27.0% | 68.8% | 56.2% |
| `ORLM-Deepseek-Math-7B-Base` | **86.5%** | 82.2% | **37.9%** | 33.0% | 71.2% | 59.9% |
| `ORLM-LLaMA-3-8B` | 85.7% | **82.3%** | 37.4% | **38.0%** | **71.4%** | **60.8%** |
## Citation
```bibtex
@article{tang2024orlm,
title={ORLM: A Customizable Framework in Training Large Models for Automated Optimization Modeling},
author={Tang, Zhengyang and Huang, Chenyu and Zheng, Xin and Hu, Shixi and Wang, Zizhuo and Ge, Dongdong and Wang, Benyou},
journal={arXiv preprint arXiv:2405.17743},
year={2024}
}
```
```bibtex
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
## License
The use of this model is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/). |
naniltx/codonGPT | naniltx | 2025-05-31T21:46:32Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T21:31:52Z | ---
library_name: transformers
tags: []
---
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Nanil Therapeutics
- **Funded by [optional]:** Nanil Therapeutics
- **Shared by [optional]:** Nanil Therapeutics
- **Model type:** Transformer-based generative language model
- **Language(s) (NLP):** mRNA sequences (biological triplet code)
- **License:** Free for research use |
BootesVoid/cmbcesd72001r10ozzqcm5ddu_cmbcqmqxa01al10ozh7k8nv3e | BootesVoid | 2025-05-31T21:46:08Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T21:46:06Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: A
---
# Cmbcesd72001R10Ozzqcm5Ddu_Cmbcqmqxa01Al10Ozh7K8Nv3E
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `A` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "A",
"lora_weights": "https://huggingface.co/BootesVoid/cmbcesd72001r10ozzqcm5ddu_cmbcqmqxa01al10ozh7k8nv3e/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbcesd72001r10ozzqcm5ddu_cmbcqmqxa01al10ozh7k8nv3e', weight_name='lora.safetensors')
image = pipeline('A').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbcesd72001r10ozzqcm5ddu_cmbcqmqxa01al10ozh7k8nv3e/discussions) to add images that show off what you’ve made with this LoRA.
|
JacobLinCool/taiko-conformer-7 | JacobLinCool | 2025-05-31T21:28:21Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-05-31T21:28:04Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
Abdualkader/MultiViewDiff | Abdualkader | 2025-05-31T21:22:52Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:2312.02201",
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T13:26:09Z | ---
license: apache-2.0
---
# ImageDream-diffusers Model Card
This is a port of https://huggingface.co/Peng-Wang/ImageDream into diffusers.
And get ported weights from https://huggingface.co/ashawkey/imagedream-ipmv-diffuser
In ashawkey's work, UNet did not ported to diffusers.
This work has been fully ported to diffusers, including UNet.
And separated the IP-adapter-plus from the unet.
## Diffusers
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import make_image_grid
from PIL import Image
pipe = DiffusionPipeline.from_pretrained(
"kiigii/imagedream-ipmv-diffusers",
torch_dtype=torch.float16,
trust_remote_code=True,
)
pipe.load_ip_adapter()
pipe.to("cude")
prompt = "" # no need to input prompt
image = Image.open(...)
mv_images = pipe(
prompt=prompt,
ip_adapter_image=image,
guidance_scale=5,
num_inference_steps=30,
elevation=0,
num_images_per_prompt=1
).images
mv_grid = make_image_grid(mv_images[:4], 2, 2)
mv_grid.save("mv_image.png")
```
## Citation
```
@article{wang2023imagedream,
title={ImageDream: Image-Prompt Multi-view Diffusion for 3D Generation},
author={Wang, Peng and Shi, Yichun},
journal={arXiv preprint arXiv:2312.02201},
year={2023}
}
```
## Misuse, Malicious Use, and Out-of-Scope Use
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. |
BootesVoid/cmbc3zo800bp685uu7asbaer3_cmbcotufn014z10ozxirre2rb | BootesVoid | 2025-05-31T21:01:20Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T21:01:18Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LUNASILVER
---
# Cmbc3Zo800Bp685Uu7Asbaer3_Cmbcotufn014Z10Ozxirre2Rb
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LUNASILVER` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LUNASILVER",
"lora_weights": "https://huggingface.co/BootesVoid/cmbc3zo800bp685uu7asbaer3_cmbcotufn014z10ozxirre2rb/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbc3zo800bp685uu7asbaer3_cmbcotufn014z10ozxirre2rb', weight_name='lora.safetensors')
image = pipeline('LUNASILVER').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbc3zo800bp685uu7asbaer3_cmbcotufn014z10ozxirre2rb/discussions) to add images that show off what you’ve made with this LoRA.
|
Bmingg/GeminiChosen-rejected_train_vi-0.7Beta-5epochs | Bmingg | 2025-05-31T20:54:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T20:53:40Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/fionazhang_-_mistral-experiment-4-4bits | RichardErkhov | 2025-05-31T20:41:03Z | 0 | 0 | null | [
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-31T20:38:55Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mistral-experiment-4 - bnb 4bits
- Model creator: https://huggingface.co/fionazhang/
- Original model: https://huggingface.co/fionazhang/mistral-experiment-4/
Original model description:
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: mistral-experiment-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-experiment-4
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0a0+git7bcf7da
- Datasets 2.16.1
- Tokenizers 0.15.0
|
eaddario/gemma-3-12b-it-GGUF | eaddario | 2025-05-31T20:24:44Z | 0 | 0 | null | [
"gguf",
"quant",
"experimental",
"text-generation",
"en",
"dataset:eaddario/imatrix-calibration",
"arxiv:2406.17415",
"base_model:google/gemma-3-12b-it",
"base_model:quantized:google/gemma-3-12b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-31T09:34:13Z | ---
base_model:
- google/gemma-3-12b-it
datasets:
- eaddario/imatrix-calibration
language:
- en
license:
- gemma
pipeline_tag: text-generation
tags:
- gguf
- quant
- experimental
---
# Experimental layer-wise quantization of google/gemma-3-12b-it
Using [LLaMA C++][llm] release [b5490][llm-rel] for quantization.
Original model: [google/gemma-3-12b-it][mdl]
From the original model creators:
> Terms of Use: [Terms](https://ai.google.dev/gemma/terms)
>
> Authors: Google DeepMind
>
> Model Information
> Summary description and brief definition of inputs and outputs.
>
> Description
> Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large, 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone.
>
> Inputs and outputs
> Input:
>
> Text string, such as a question, a prompt, or a document to be summarized
> Images, normalized to 896 x 896 resolution and encoded to 256 tokens each
> Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and 32K tokens for the 1B size
> Output:
>
> Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document
> Total output context of 8192 tokens
Usage
# PLEASE READ THIS BEFORE USING THESE EXPERIMENTAL VERSIONS!
An area of personal interest is finding ways to optimize the inference performance of LLMs when deployed in resource-constrained environments like commodity hardware, desktops, laptops, mobiles, edge devices, etc. There are many approaches to accomplish this, including architecture simplification and knowledge distillation, but my focus has been primarily on quantization and pruning.
The method used to produce these experimental versions is covered in [Squeezing Tensor Bits: the quest for smaller LLMs][mdm], but at a high level it involves using a custom version of `llama-imatrix` and `llama-quantize` to identify influential tensors, and quantize the most important layers to higher bit precision and the less important to lower bits. This process was partly inspired by Dumitru's et al [Layer-Wise Quantization: A Pragmatic and Effective Method for Quantizing LLMs Beyond Integer Bit-Levels][lwq-ppr].
As of version [b5125][qtz-rel] [llama-quantize][qtz] can now perform **tensor-wide quantization (TWQ)**, whereby user-defined tensors are quantized at a specific level, or perform **layer-wise quantization (LWQ)** by selecting different quantization types per tensor/layer. For example, `--tensor-type attn_v=q6_k` will quantize all *Attention Value* tensors at *q6_k* (TWQ), and `--tensor-type "\.([0-9]|1[01257]|31)\.attn_k=q4_k"` will quantize *Attention Key* tensors on layers 0 to 9, 10, 11, 12, 15, 17 and 31 at *q4_k*, leaving the remaining layers at their default value (LWQ).
The modified version of [llama-imatrix][imx] generates useful statistics to guide the tensor selection process, `--show-statistics` will display:
- **Σ(Bias):** the sum of all activations over the tensor (i.e. the Importance Scores)
- **Min & Max:** minimum and maximum activation values
- **μ & σ:** activations' mean and standard deviation
- **% Active:** proportion of elements whose average activation exceeds a very small threshold (1e-6). Helpful to determine how alive/dormant the tensor is during inference
- **N:** number of activations in the tensor
- **Entropy:** entropy of the activation distribution, in bits (standard Shannon entropy measurement)
- **E (norm):** Normalized entropy.
- **ZD Score:** z-score distribution as described in 3.1 Layer Importance Scores in the Layer-Wise Quantization paper
- **CosSim:** cosine similarity between same type tensors with respect to the previous layer (i.e. blk.7.attn_k and blk.6.attn_k)
Please note that statistics are calculated for each individial tensor and should be used to compare between tensors of the same type only. For example, assuming that *attn_k* in layer 10 has a higher influence during inference than *attn_k* in layer 7 because its **Σ(Bias)** is larger makes sense, whilst concluding the same between *attn_k* and *ffn_down* does not.
There’s a [pull request][imtx-pr] to merge these changes back into the core llama.cpp project. This may or may not ever happen so, until then, the modified version will be available on [GitHub][gh].
For testing and comparison I use models produced by [Unsloth][ust] ([Daniel and Michael Han][ust-ai] do some really advanced level stuff!) and [Bartowski][btk] (see credits below) but when they don't provide versions of the required model, all tests and comparisons are done against naive quantizations obtained by simply running `llama-quantize` with no further optimization. In this case however, whilst both have versions of this model, Unsloth's uses a different vocabulary size on their quants (262144 vs 262208) which makes a like-for-like comparison invalid.
All experimental versions were generated using an appropriate imatrix created from calibration datasets available at [eaddario/imatrix-calibration][ical]. At its core, an Importance Matrix (imatrix) is a table or, more broadly, a structured representation that scores the relative importance of different features or parameters in a machine learning model. It essentially quantifies the "impact" each feature has on a specific outcome, prediction, or relationship being modeled, and it helps to counterbalance the negative effects of quantization and pruning.
The process to generate these models is roughly as follows:
1. Convert the the original model's tensors to [GGUF][ggf] F16*
2. Estimate the Perplexity score for the F16 model (baseline) using the [wikitext-2-raw-v1][wki-dat] dataset, and save the [logits][lgt]
3. Generate an [imatrix][imx-dat] from selected calibration datasets
4. Determine tensor and layer Importance Score contribution using the modified version of `llama-imatrix`
5. Select an appropiate quant level for each tensor and quantize the model using `llama-quantize`
6. Calculate Perplexity, KL Divergence, ARC (Easy+Challenge), HellaSwag, MMLU, Truthful QA and WinoGrande scores for each quantized model
7. Keep versions with the best scores
8. Repeat until all desired quants are created. I find that quantizations below Q3/IQ3 are not fit for my purposes and therefore do not usually generate them, but happy to provide other quants on request.
*[BF16][bf16] would be preferred, but Apple's GPUs don't support it yet, and therefore any operations are executed in the CPU, making it unacceptably slow. This is expected to change in the near term but until then, if you are using Apple kit avoid using any models tagged BF16
# Models
### Sizes (in GB)
| Model | Bartowski | Repo | Shrinkage |
| ----------------------------------------------------- | --------: | ----: | --------: |
| [gemma-3-12b-it-IQ3_M](./gemma-3-12b-it-IQ3_M.gguf) | 5.66 | 5.41 | 4.4% |
| [gemma-3-12b-it-IQ3_S](./gemma-3-12b-it-IQ3_S.gguf) | 5.21 | 5.23 | -0.4% |
| [gemma-3-12b-it-IQ4_NL](./gemma-3-12b-it-IQ4_NL.gguf) | 6.89 | 6.39 | 7.3% |
| [gemma-3-12b-it-Q3_K_L](./gemma-3-12b-it-Q3_K_L.gguf) | 6.48 | 5.52 | 14.8% |
| [gemma-3-12b-it-Q3_K_M](./gemma-3-12b-it-Q3_K_M.gguf) | 6.01 | 5.22 | 13.1% |
| [gemma-3-12b-it-Q3_K_S](./gemma-3-12b-it-Q3_K_S.gguf) | 5.46 | 4.99 | 8.6% |
| [gemma-3-12b-it-Q4_K_M](./gemma-3-12b-it-Q4_K_M.gguf) | 7.30 | 6.43 | 11.9% |
| [gemma-3-12b-it-Q4_K_S](./gemma-3-12b-it-Q4_K_S.gguf) | 6.94 | 6.40 | 7.8% |
| [gemma-3-12b-it-Q5_K_M](./gemma-3-12b-it-Q5_K_M.gguf) | 8.44 | 7.61 | 9.8% |
| [gemma-3-12b-it-Q5_K_S](./gemma-3-12b-it-Q5_K_S.gguf) | 8.23 | 7.58 | 7.9% |
| [gemma-3-12b-it-Q6_K](./gemma-3-12b-it-Q6_K.gguf) | 9.66 | 9.37 | 3.0% |
| [gemma-3-12b-it-Q8_0](./gemma-3-12b-it-Q8_0.gguf) | 12.50 | 11.40 | 8.8% |
### Perplexity and KL Divergence scores
| Model | μPPL | 𝜌PPL | μKLD | RMS Δp |
| ----------------------------------------------------- | ------------------: | -----: | -----------------: | ------------: |
| [gemma-3-12b-it-IQ3_M](./gemma-3-12b-it-IQ3_M.gguf) | 9.819911 ±0.080440 | 95.67% | 0.189526 ±0.001269 | 12.497 ±0.068 |
| [gemma-3-12b-it-IQ3_S](./gemma-3-12b-it-IQ3_S.gguf) | 9.844402 ±0.079485 | 94.86% | 0.232563 ±0.001411 | 13.899 ±0.069 |
| [gemma-3-12b-it-IQ4_NL](./gemma-3-12b-it-IQ4_NL.gguf) | 9.784609 ±0.080899 | 97.10% | 0.121353 ±0.000786 | 10.126 ±0.059 |
| [gemma-3-12b-it-Q3_K_L](./gemma-3-12b-it-Q3_K_L.gguf) | 10.193888 ±0.083697 | 94.26% | 0.261491 ±0.001586 | 14.559 ±0.072 |
| [gemma-3-12b-it-Q3_K_M](./gemma-3-12b-it-Q3_K_M.gguf) | 9.860796 ±0.078875 | 93.95% | 0.273903 ±0.001588 | 15.007 ±0.072 |
| [gemma-3-12b-it-Q3_K_S](./gemma-3-12b-it-Q3_K_S.gguf) | 10.351147 ±0.082692 | 92.32% | 0.360147 ±0.001863 | 17.074 ±0.072 |
| [gemma-3-12b-it-Q4_K_M](./gemma-3-12b-it-Q4_K_M.gguf) | 9.788871 ±0.080127 | 97.05% | 0.127527 ±0.000807 | 10.439 ±0.060 |
| [gemma-3-12b-it-Q4_K_M-bartowski][b-q4km] | 9.197963 ±0.073909 | 98.63% | 0.045560 ±0.000416 | 6.400 ±0.054 |
| [gemma-3-12b-it-Q4_K_S](./gemma-3-12b-it-Q4_K_S.gguf) | 9.791993 ±0.080114 | 97.02% | 0.129268 ±0.000813 | 10.528 ±0.060 |
| [gemma-3-12b-it-Q5_K_M](./gemma-3-12b-it-Q5_K_M.gguf) | 9.492645 ±0.076969 | 98.44% | 0.060229 ±0.000376 | 7.340 ±0.046 |
| [gemma-3-12b-it-Q5_K_S](./gemma-3-12b-it-Q5_K_S.gguf) | 9.495540 ±0.077022 | 98.44% | 0.060458 ±0.000376 | 7.343 ±0.045 |
| [gemma-3-12b-it-Q6_K](./gemma-3-12b-it-Q6_K.gguf) | 9.380805 ±0.075743 | 98.88% | 0.041053 ±0.000229 | 6.061 ±0.037 |
| [gemma-3-12b-it-Q8_0](./gemma-3-12b-it-Q8_0.gguf) | 9.350086 ±0.075331 | 99.00% | 0.036289 ±0.000190 | 5.712 ±0.034 |
| [gemma-3-12b-it-F16](./gemma-3-12b-it-F16.gguf) | 9.042786 ±0.071503 | 100% | N/A | N/A |
### ARC, HellaSwag, MMLU, Truthful QA and WinoGrande scores
Scores generated using [llama-perplexity][ppl] with 750 tasks per test, and a context size of 768 tokens.
For the test data used in the generation of these scores, follow the appropiate links: [HellaSwag][hsw-tst], [ARC, MMLU, Truthful QA][tst-dat] and [WinoGrande][wng-tst]
| Model | ARC | HellaSwag | MMLU | Truthful QA | WinoGrande | Avg Score |
| ----------------------------------------------------- | -----------------: | --------: | -----------------: | -----------------: | -----------------: | --------: |
| [gemma-3-12b-it-IQ3_M](./gemma-3-12b-it-IQ3_M.gguf) | 69.0667 +/- 1.6889 | 79.60 | 43.0667 +/- 1.8093 | 36.9333 +/- 1.7635 | 73.7333 +/- 1.6080 | 60.48 |
| [gemma-3-12b-it-IQ3_S](./gemma-3-12b-it-IQ3_S.gguf) | 67.2000 +/- 1.7155 | 79.20 | 43.4667 +/- 1.8113 | 40.4000 +/- 1.7930 | 73.2000 +/- 1.6184 | 60.69 |
| [gemma-3-12b-it-IQ4_NL](./gemma-3-12b-it-IQ4_NL.gguf) | 68.8000 +/- 1.6929 | 79.06 | 43.0667 +/- 1.8093 | 39.0667 +/- 1.7827 | 73.7333 +/- 1.6080 | 60.75 |
| [gemma-3-12b-it-Q3_K_L](./gemma-3-12b-it-Q3_K_L.gguf) | 68.6667 +/- 1.6949 | 78.26 | 43.4667 +/- 1.8113 | 39.4667 +/- 1.7860 | 72.6667 +/- 1.6284 | 60.51 |
| [gemma-3-12b-it-Q3_K_M](./gemma-3-12b-it-Q3_K_M.gguf) | 68.0000 +/- 1.7045 | 78.93 | 42.9333 +/- 1.8086 | 39.8667 +/- 1.7890 | 71.7333 +/- 1.6453 | 60.29 |
| [gemma-3-12b-it-Q3_K_S](./gemma-3-12b-it-Q3_K_S.gguf) | 66.9333 +/- 1.7190 | 78.80 | 42.9333 +/- 1.8086 | 40.0000 +/- 1.7900 | 70.2667 +/- 1.6702 | 59.79 |
| [gemma-3-12b-it-Q4_K_M](./gemma-3-12b-it-Q4_K_M.gguf) | 70.6667 +/- 1.6636 | 80.67 | 42.8000 +/- 1.8079 | 40.1333 +/- 1.7910 | 74.0000 +/- 1.6027 | 61.65 |
| [gemma-3-12b-it-Q4_K_M-bartowski][b-q4km] | 69.6000 +/- 1.6807 | 81.07 | 43.6000 +/- 1.8119 | 41.0667 +/- 1.7976 | 75.7333 +/- 1.5664 | 62.21 |
| [gemma-3-12b-it-Q4_K_S](./gemma-3-12b-it-Q4_K_S.gguf) | 70.2667 +/- 1.6702 | 80.67 | 43.0667 +/- 1.8093 | 40.2667 +/- 1.7920 | 74.4000 +/- 1.5947 | 61.73 |
| [gemma-3-12b-it-Q5_K_M](./gemma-3-12b-it-Q5_K_M.gguf) | 68.4000 +/- 1.6988 | 81.47 | 45.0667 +/- 1.8180 | 40.2667 +/- 1.7920 | 75.3333 +/- 1.5751 | 62.11 |
| [gemma-3-12b-it-Q5_K_S](./gemma-3-12b-it-Q5_K_S.gguf) | 68.6667 +/- 1.6949 | 81.46 | 44.5333 +/- 1.8160 | 40.1333 +/- 1.7910 | 74.9333 +/- 1.5836 | 61.95 |
| [gemma-3-12b-it-Q6_K](./gemma-3-12b-it-Q6_K.gguf) | 68.9333 +/- 1.6909 | 81.07 | 44.4000 +/- 1.8155 | 41.0667 +/- 1.7976 | 75.0667 +/- 1.5808 | 62.11 |
| [gemma-3-12b-it-Q8_0](./gemma-3-12b-it-Q8_0.gguf) | 68.0000 +/- 1.7045 | 80.93 | 44.0000 +/- 1.8138 | 40.9333 +/- 1.7967 | 75.4667 +/- 1.5722 | 61.87 |
| [gemma-3-12b-it-F16](./gemma-3-12b-it-F16.gguf) | 69.2000 +/- 1.6869 | 81.20 | 45.3333 +/- 1.8190 | 41.4667 +/- 1.8002 | 74.8000 +/- 1.5864 | 62.40 |
### Tokens per Second - Benchmarks
Scores generated using [llama-bench][bch]. Naive (`llama-quantize` with no optimization) Q4_K_M quantization included for comparison.
| model | size | params | backend | threads | test | t/s |
| ----------------------------------------------------- | -------: | ------: | ---------- | ------: | ------------: | ------------: |
| [gemma-3-12b-it-Q4_K_M](./gemma-3-12b-it-Q4_K_M.gguf) | 5.98 GiB | 11.77 B | Metal,BLAS | 12 | pp512 | 504.86 ± 1.32 |
| [gemma-3-12b-it-Q4_K_M](./gemma-3-12b-it-Q4_K_M.gguf) | 5.98 GiB | 11.77 B | Metal,BLAS | 12 | tg128 | 44.13 ± 0.18 |
| [gemma-3-12b-it-Q4_K_M](./gemma-3-12b-it-Q4_K_M.gguf) | 5.98 GiB | 11.77 B | Metal,BLAS | 12 | pp1024+tg1024 | 72.53 ± 0.37 |
| [gemma-3-12b-it-Q4_K_M-bartowski][b-q4km] | 6.79 GiB | 11.77 B | Metal,BLAS | 12 | pp512 | 530.98 ± 0.51 |
| [gemma-3-12b-it-Q4_K_M-bartowski][b-q4km] | 6.79 GiB | 11.77 B | Metal,BLAS | 12 | tg128 | 45.54 ± 0.16 |
| [gemma-3-12b-it-Q4_K_M-bartowski][b-q4km] | 6.79 GiB | 11.77 B | Metal,BLAS | 12 | pp1024+tg1024 | 74.36 ± 0.41 |
# Metrics used
**[Perplexity][ppx]:** one of the key metrics used in NLP evaluation. It measures the quality of a language model by evaluating how well it predicts the next token given a particular sequence of words. A PPL of **1** indicates an exact match between predicted and actual, whereas values greater than one indicate a degree of "surprise" the generated token differs from the expected.
**[Kullback–Leibler (KL) Divergence][kld]:** a statistical measure of how much a probability distribution differs from another. When quantizing models (or altering the original tensors in any way for that matter), the closest we can preserve the weights' probability distribution to the original model the better, thus the closest to **0** the better.
**[AI2 Reasoning Challenge (ARC)][arc]:** a benchmark to evaluate the ability of AI models to answer complex science questions that require logical reasoning beyond pattern matching.
**[HellaSwag][hsw]:** the Harder Endings, Longer contexts, and Low-shot Activities for Situations With Adversarial Generations (bit of a mouthful!) is a benchmark designed to test commonsense natural language inference. It requires the model to predict the most likely ending of a sentence.
**[MMLU][mmlu]:** the Massive Multitask Language Understanding evaluates LLMs’ general knowledge and problem-solving abilities across 57 subjects, including elementary mathematics, US history, computer science, and law.
**[Truthful QA][tqa]:** evaluates how well LLMs generate truthful responses to questions. It identifies whether AI models can avoid generating false or misleading information, particularly in areas where human knowledge is prone to misconceptions.
**[Winogrande][wng]:** based on the [Winograd Schema Challenge][wng-chl], is a natural language understanding task requiring models to resolve ambiguities in sentences involving pronoun references.
## Credits
A big **Thank You!** to [Colin Kealty][btk] for the many contributions and for being one of the best sources of high quality quantized models available on Huggingface, and a really big ***Thank You!*** to [Georgi Gerganov][ggg] for his amazing work with **llama.cpp** and the **ggml/gguf** libraries.
[arc]: https://leaderboard.allenai.org/arc/submissions/get-started
[btk]: https://huggingface.co/bartowski
[bch]: https://github.com/ggml-org/llama.cpp/tree/master/tools/llama-bench
[bf16]: https://en.wikipedia.org/wiki/Bfloat16_floating-point_format
[b-q4km]: https://huggingface.co/bartowski/google_gemma-3-12b-it-GGUF/blob/main/google_gemma-3-12b-it-Q4_K_M.gguf
[u-q4km]: https://huggingface.co/unsloth
[ical]: https://huggingface.co/datasets/eaddario/imatrix-calibration
[ggg]: https://github.com/ggerganov
[ggf]: https://huggingface.co/docs/hub/en/gguf
[gh]: https://github.com/EAddario/llama.cpp/tree/imatrix
[hsw]: https://rowanzellers.com/hellaswag
[hsw-tst]: https://github.com/klosax/hellaswag_text_data
[imx-dat]: https://huggingface.co/eaddario/gemma-3-12b-it-GGUF/tree/main/imatrix
[imx]: https://github.com/ggml-org/llama.cpp/tree/master/tools/imatrix
[imtx-pr]: https://github.com/ggml-org/llama.cpp/pull/12718
[kld]: https://en.wikipedia.org/wiki/Kullback–Leibler_divergence
[llm]: https://github.com/ggerganov/llama.cpp
[llm-rel]: https://github.com/ggml-org/llama.cpp/releases/tag/b5490
[lgt]: https://huggingface.co/eaddario/gemma-3-12b-it-GGUF/tree/main/logits
[lwq-ppr]: https://arxiv.org/abs/2406.17415
[mdm]: https://medium.com/@eaddario/squeezing-tensor-bits-the-quest-for-smaller-llms-86b23bd052ca
[mmlu]: https://github.com/hendrycks/test
[mdl]: https://huggingface.co/google/gemma-3-12b-it
[ppl]: https://github.com/ggml-org/llama.cpp/tree/master/tools/perplexity
[ppx]: https://huggingface.co/docs/transformers/en/perplexity
[qtz]: https://github.com/ggml-org/llama.cpp/tree/master/tools/quantize
[qtz-rel]: https://github.com/ggerganov/llama.cpp/releases/tag/b5125
[tst-dat]: https://huggingface.co/datasets/ikawrakow/validation-datasets-for-llama.cpp/tree/main
[tqa]: https://github.com/sylinrl/TruthfulQA
[ust]: https://huggingface.co/unsloth
[ust-ai]: https://unsloth.ai
[wng-chl]: https://cdn.aaai.org/ocs/4492/4492-21843-1-PB.pdf
[wki-dat]: https://huggingface.co/datasets/Salesforce/wikitext/tree/main/wikitext-2-raw-v1
[wng]: https://winogrande.allenai.org
[wng-tst]: https://huggingface.co/datasets/ikawrakow/winogrande-eval-for-llama.cpp/tree/main
|
AmberYifan/Llama-3.1-8B-sft-gen-dpo-10k-ORPO | AmberYifan | 2025-05-31T20:20:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"orpo",
"conversational",
"arxiv:2403.07691",
"base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T20:01:42Z | ---
base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Llama-3.1-8B-sft-gen-dpo-10k-ORPO
tags:
- generated_from_trainer
- trl
- orpo
licence: license
---
# Model Card for Llama-3.1-8B-sft-gen-dpo-10k-ORPO
This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-gen-dpo-10k-ORPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/qc5jssil)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
bruhzair/prototype0.4x41 | bruhzair | 2025-05-31T20:07:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T19:45:52Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x41
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4 as a base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-R1-70B-v1/snapshots/c88ee563196321458e6e46031231143c86394213
* /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
* /workspace/cache/models--Sao10K--Llama-3.3-70B-Vulpecula-r1/snapshots/12d7254ab9a5ce21905f59f341a3d2a2b3e62fd5
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
parameters:
select_topk: 0.18
- model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-R1-70B-v1/snapshots/c88ee563196321458e6e46031231143c86394213
parameters:
select_topk: 0.1
- model: /workspace/cache/models--Sao10K--Llama-3.3-70B-Vulpecula-r1/snapshots/12d7254ab9a5ce21905f59f341a3d2a2b3e62fd5
parameters:
select_topk: 0.4
- model: /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
parameters:
select_topk: 0.7
base_model: /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
merge_method: sce
tokenizer:
source: union
chat_template: llama3
int8_mask: true
dtype: bfloat16
```
|
1358Adrian/so-vits-svc-rvc-models | 1358Adrian | 2025-05-31T19:54:01Z | 0 | 17 | null | [
"en",
"ja",
"license:openrail",
"region:us"
] | null | 2023-04-15T03:04:07Z | ---
language:
- en
- ja
license: openrail
---
### Models available:
Caitlin (Pokémon Masters EX; English VA); 3.0-32k, 4.0 & RVC v2 (40k & 48k; incl. feature index) model<br />
Sothis (Fire Emblem; English VA); 3.0-32k, 4.0 & RVC v2 (incl. feature index) model<br />
Barbara Pegg (Genshin Impact; English VA); 3.0-32k, 4.0 & RVC v2 model<br />
Sucrose (Genshin Impact; English VA); 3.0-32k, 4.0 & RVC v2 (incl. feature index) model<br />
Shōgun Raiden/Ei Raiden (Genshin Impact; English VA); 4.0 & RVC v2 model<br />
Ai Kizuna (virtual YouTuber; pitched down by 1.25 semitones, and no pitch shift version); 4.0 & RVC v2 (incl. feature index) model<br />
Flayn (Fire Emblem; English VA); 4.1 & RVC v2 model <br />
Eleanor Forte (Synthesizer V Studio; Standard Lite); RVC v2 model only<br />
Polar (virtual singer); RVC v2 (incl. feature index) model only<br />
Rhea/Seiros (Fire Emblem; English VA); RVC v2 (incl. feature index) model only<br />
Yukari Yuzuki (exVOICE); RVC v2 (incl. feature index) model only<br />
Rise Kujikawa (Persona 4 (PlayStation 2); English VA); RVC v2 (incl. feature index) model only<br />
Fennec Fox (Kemono Friends; Kana Motomiya's voice); RVC v1 (incl. feature index) model only (someone's model)<br />
Rosa (CeVIO AI & Shirayuki's voice); RVC v2 (48k for CeVIO AI, and 40k incl. feature index) model only<br />
Xiānglíng (Sinsy; English voice model); RVC v2 model only<br />
Rin Kakushaku (virtual YouTuber); RVC v2 (incl. feature index) model only<br />
Karin Natsuki (exVOICE; pitched down by 2 semitones); RVC v2 (incl. feature index) model only<br />
Chifuyu Hanakuma (exVOICE; pitched down by 1.75 and 2 semitones); RVC v2 (incl. feature index) model only<br />
Seika Kyōmachi (Pitagoe, YouTube video from @seikacho, and raw AITalk samples; all pitched down by 1 semitone); RVC v2 (incl. feature index) model only<br />
Lumine (Genshin Impact; English VA); RVC v2 (incl. feature index) model only<br />
Xiǎoyè (ACE Studio); RVC v2 model only<br />
Xīngchén Infinity (Synthesizer V Studio); RVC v2 model only<br />
Momo Momone (Momoko Fujimoto's singing voice and UTAU); RVC v2 (incl. feature index for old version) model only<br />
Snow White (Kinect: Disneyland Adventures); RVC v2 (incl. feature index) model only<br />
Aiko (Synthesizer V Studio; Standard Lite); RVC v2 model only<br />
Genbu/Takehiro Kurono (Synthesizer V Studio; Standard Lite); RVC v2 model only<br />
Mayu (Vocaloid & raw samples from Vocaloid 3 voicebank); RVC v2 model (incl. feature index for raw samples version) only<br />
Rin Sunroad (Chaos Wars; English VA); RVC v2 (incl. feature index) model only<br />
Mita (MiSide; Japanese VA); RVC v2 (incl. feature index) model only<br />
Maki Tsurumaki (exVOICE; pitched down by 1.62 semitones, and no pitch shift version); RVC v2 (incl. feature index) model only<br />
Akane/Aoi Kotonoha (exVOICE; pitched down by 2 semitones); RVC v2 (incl. feature index) model only<br />
Frimomen (exVOICE); RVC v2 (incl. feature index) model only<br />
Maron Kurita (exVOICE); RVC v2 (incl. feature index) model only |
VIDEOS-18-lexi-wood-Videos/FULL.VIDEO.lexi.wood.Viral.Video.Tutorial.Official | VIDEOS-18-lexi-wood-Videos | 2025-05-31T19:47:00Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T19:46:44Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
qingy2024/InternVideo2_S2_6B_Vision | qingy2024 | 2025-05-31T19:38:15Z | 0 | 0 | null | [
"image-feature-extraction",
"dataset:OpenGVLab/InternVid",
"arxiv:2403.15377",
"license:apache-2.0",
"region:us"
] | image-feature-extraction | 2025-05-31T19:27:57Z | ---
license: apache-2.0
extra_gated_prompt: >-
You agree to not use the model to conduct experiments that cause harm to human
subjects.
extra_gated_fields:
Name: text
Company/Organization: text
Country: text
E-Mail: text
datasets:
- OpenGVLab/InternVid
pipeline_tag: image-feature-extraction
---
# Model Card for InternVideo2 (Vision-Only)
This model card describes the **vision encoder** component extracted from the InternVideo2 foundation model series.
## Model Details
This checkpoint contains only the vision backbone parameters, suitable for video or image feature extraction tasks. It was obtained by filtering a multimodal InternVideo2 checkpoint (e.g., S2_6B).
### Model Sources
- **Original Project Repository:** [InternVideo2](https://github.com/OpenGVLab/InternVideo/tree/main/InternVideo2)
- **Original Paper:** [2403.15377](https://arxiv.org/abs/2403.15377)
- **Original Point of Contact:** mailto:[InternVideo Group]([email protected])
### Uploader
- **This specific vision-only checkpoint uploaded by:** [qingy2024](https://huggingface.co/qingy2024)
## How to Use
This file (`InternVideo2_S2_6B_vision.pt`) is a standard PyTorch state dictionary containing only the vision encoder weights. It can be loaded into a compatible vision model architecture using `model.load_state_dict()`.
```python
import torch
vision_state_dict = torch.load("InternVideo2_S2_6B_vision.pt", map_location='cpu') # or 'cuda'
```
## Limitations
This model contains only the vision encoder. It **does not** include the text or audio encoders and cannot perform tasks requiring multimodal inputs unless combined with separate models for those modalities.
## Citation
If you use this vision encoder, please cite the original InternVideo2 paper:
```bibtex
@article{wang2024internvideo2,
title={InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding},
author={Wang, Yi and Li, Kunchang and Li, Xinhao and Yu, Jiashuo and He, Yinan and Chen, Guo and Pei, Baoqi and Zheng, Rongkun and Xu, Jilan and Wang, Zun and others},
journal={arXiv preprint arXiv:2403.15377},
year={2024}
}
``` |
CHOOSEIT/MCQATEST_FFT_SciQ-E_Crazy_LoRA__checkpoint_25000__B4_2E_512T_LR1e-05_ACC4 | CHOOSEIT | 2025-05-31T19:30:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T19:28:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
FormlessAI/18b9a94c-5745-4cbd-a81e-b59639b418f6 | FormlessAI | 2025-05-31T19:16:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:facebook/opt-350m",
"base_model:finetune:facebook/opt-350m",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T18:46:06Z | ---
base_model: facebook/opt-350m
library_name: transformers
model_name: 18b9a94c-5745-4cbd-a81e-b59639b418f6
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for 18b9a94c-5745-4cbd-a81e-b59639b418f6
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/18b9a94c-5745-4cbd-a81e-b59639b418f6", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/k0nmdht1)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
DeepMount00/Murai-350M-v0.1-beta | DeepMount00 | 2025-05-31T19:13:51Z | 52 | 1 | transformers | [
"transformers",
"safetensors",
"deep_llama",
"text-generation",
"conversational",
"custom_code",
"it",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-05-27T18:26:02Z | ---
license: apache-2.0
language:
- it
pipeline_tag: text-generation
library_name: transformers
---
## Usage
```python
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
model_id = "DeepMount00/Murai-350M-v0.1-beta"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
t_pipeline = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
device_map="auto",
return_full_text=True,
top_p = 0.95,
top_k = 50,
repetition_penalty=1.2
)
SYSTEM_PROMPT = """Sei un assistente utile."""
TEMPERATURE = 0.1
MAX_NEW_TOKENS = 250
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": """Scrivi una funzione python che somma due numeri"""},
]
conv_template = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
outputs = t_pipeline(
conv_template,
max_new_tokens=MAX_NEW_TOKENS,
do_sample=True,
temperature=TEMPERATURE,
num_return_sequences=1,
)
print(outputs[0]["generated_text"])
```
## Training Details
This model uses a deep architecture optimized for parameter efficiency:
- Pre-norm architecture with RMSNorm
- Grouped Query Attention for memory efficiency
- SwiGLU activation for improved performance
- RoPE position encoding for better length generalization
## Citation
```bibtex
@misc{deepmount_llm_2024,
title={Deep LLM: A 350M Parameter Language Model with 42 Layers},
author={MicheleMontebovi},
year={2025},
url={https://huggingface.co/DeepMount00/Murai-350M-v0.1-beta}
}
```
## License
Apache 2.0 |
jaema/trinh | jaema | 2025-05-31T19:11:10Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T19:11:00Z |
# Install the Hugging Face CLI
pip install -U "huggingface_hub[cli]"
# Login with your Hugging Face credentials
huggingface-cli login
# Push your model files
huggingface-cli upload jaema/trinh . |
Moryjj/parst5_3blocks_18 | Moryjj | 2025-05-31T19:03:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-31T19:02:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
domq/ft_cpi_p5 | domq | 2025-05-31T19:02:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T19:02:23Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** domq
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Voidstep/static_vv0kg | Voidstep | 2025-05-31T18:58:05Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-31T18:55:11Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Eraly-ml/KazBERT | Eraly-ml | 2025-05-31T18:56:56Z | 278 | 2 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"pytorch",
"kk",
"ru",
"en",
"dataset:amandyk/kazakh_wiki_articles",
"dataset:Eraly-ml/kk-cc-data",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"doi:10.57967/hf/5271",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-03-11T19:04:15Z | ---
license: apache-2.0
language:
- kk
- ru
- en
base_model:
- google-bert/bert-base-uncased
pipeline_tag: fill-mask
tags:
- pytorch
- safetensors
library_name: transformers
paper: https://doi.org/10.5281/zenodo.15565394
datasets:
- amandyk/kazakh_wiki_articles
- Eraly-ml/kk-cc-data
direct_use: true
widget:
- text: "KazBERT қазақ тілін [MASK] түсінеді."
---
# KazBERT: A Custom BERT Model for the Kazakh Language 🇰🇿
<details>
<summary><span style="color:#4CAF50;"><strong>License & Metadata</strong></span></summary>
- **License:** apache-2.0
- **Languages:** Kazakh (kk), Russian (ru), English (en)
- **Base Model:** google-bert/bert-base-uncased
- **Pipeline Tag:** fill-mask
- **Tags:** pytorch, safetensors
- **Library:** transformers
- **Datasets:**
- amandyk/kazakh_wiki_articles
- Eraly-ml/kk-cc-data
- **Direct Use:** ✅
- **Widget Example:**
`"KazBERT қазақ тілін [MASK] түсінеді."`
</details>
## <span style="color:#4CAF50;"> Model Overview</span>
**KazBERT** is a BERT-based model fine-tuned specifically for Kazakh using Masked Language Modeling (MLM). It is based on `bert-base-uncased` and uses a custom tokenizer trained on Kazakh text.
### <span style="color:#4CAF50;">Model Details</span>
- **Architecture:** BERT
- **Tokenizer:** WordPiece trained on Kazakh
- **Training Data:** Kazakh Wikipedia & Common Crawl
- **Method:** Masked Language Modeling (MLM)
**Erlanulu, Y. G. (2025). KazBERT: A Custom BERT Model for the Kazakh Language. Zenodo.**
📄 [Read the paper](https://doi.org/10.5281/zenodo.15565394)
---
## <span style="color:#4CAF50;"> Files in Repository</span>
- `config.json` – Model config
- `model.safetensors` – Model weights
- `tokenizer.json` – Tokenizer data
- `tokenizer_config.json` – Tokenizer config
- `special_tokens_map.json` – Special tokens
- `vocab.txt` – Vocabulary
---
## <span style="color:#4CAF50;"> Training Configuration</span>
- **Epochs:** 20
- **Batch size:** 16
- **Learning rate:** Default
- **Weight decay:** 0.01
- **FP16 Training:** Enabled
---
## <span style="color:#4CAF50;"> Usage</span>
Install 🤗 Transformers and load the model:
```python
from transformers import BertForMaskedLM, BertTokenizerFast
model_name = "Eraly-ml/KazBERT"
tokenizer = BertTokenizerFast.from_pretrained(model_name)
model = BertForMaskedLM.from_pretrained(model_name)
````
---
## <span style="color:#4CAF50;"> Example: Masked Token Prediction</span>
```python
from transformers import pipeline
pipe = pipeline("fill-mask", model="Eraly-ml/KazBERT")
output = pipe('KazBERT қазақ тілін [MASK] түсінеді.')
```
**Output:**
```json
[
{"score": 0.198, "token_str": "жетік", "sequence": "KazBERT қазақ тілін жетік түсінеді."},
{"score": 0.038, "token_str": "де", "sequence": "KazBERT қазақ тілін де түсінеді."},
{"score": 0.032, "token_str": "терең", "sequence": "KazBERT қазақ тілін терең түсінеді."},
{"score": 0.029, "token_str": "ерте", "sequence": "KazBERT қазақ тілін ерте түсінеді."},
{"score": 0.026, "token_str": "жете", "sequence": "KazBERT қазақ тілін жете түсінеді."}
]
```
---
## <span style="color:#4CAF50;"> Bias and Limitations</span>
```
- Trained only on public Kazakh Wikipedia & Common Crawl
- Might miss informal speech or dialects
- Could underperform on deep-context or rare words
- May reflect cultural or social biases in data
```
---
## <span style="color:#4CAF50;"> License</span>
Apache 2.0 License
---
## <span style="color:#4CAF50;"> Citation</span>
```bibtex
@misc{eraly_gainulla_2025,
author = { Eraly Gainulla },
title = { KazBERT (Revision 15240d4) },
year = 2025,
url = { https://huggingface.co/Eraly-ml/KazBERT },
doi = { 10.57967/hf/5271 },
publisher = { Hugging Face }
}
``` |
carolinacon/q-taxi-v3 | carolinacon | 2025-05-31T18:49:38Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-31T18:49:35Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
model = load_from_hub(repo_id="carolinacon/q-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
|
matthewchung74/snps-xgboost-predictor | matthewchung74 | 2025-05-31T18:03:32Z | 0 | 0 | xgboost | [
"xgboost",
"stock-prediction",
"snps",
"sp500",
"financial-modeling",
"time-series",
"tabular-regression",
"region:us"
] | tabular-regression | 2025-05-31T05:46:21Z | ---
library_name: xgboost
pipeline_tag: tabular-regression
tags:
- stock-prediction
- xgboost
- snps
- sp500
- financial-modeling
- time-series
---
# SNPS Stock Price Prediction XGBoost Model
This XGBoost model predicts SNPS stock closing prices using 5-minute bar data. SNPS was identified as one of the top 5 performing S&P 500 stocks in pattern analysis.
## Model Details
- **Model Type**: XGBoost Regressor
- **Symbol**: SNPS
- **Target**: SNPS closing price prediction
- **Training Data**: 7,979 samples
- **Features**: 92 engineered features
- **Lookback Window**: 20 periods
## Model Parameters
- **n_estimators**: 100
- **max_depth**: 6
- **learning_rate**: 0.1
## Features Used
The model uses 92 engineered features including:
- Price-based indicators (SMA, EMA, ratios)
- Technical indicators (volatility, rolling statistics)
- Volume-based features
- Time-based features (hour, day of week)
- Lagged price and volume features
## Performance Metrics
*Performance will be updated after backtesting*
## Data Source
Trained on SNPS 5-minute bar data from Alpaca Markets API, containing:
- Open, High, Low, Close prices
- Volume and trade count
- VWAP (Volume Weighted Average Price)
## Usage
```python
import joblib
from huggingface_hub import hf_hub_download
# Download the model from Hugging Face
model_path = hf_hub_download(repo_id="matthewchung74/snps-xgboost-predictor", filename="snps_xgboost_model.pkl")
feature_columns_path = hf_hub_download(repo_id="matthewchung74/snps-xgboost-predictor", filename="snps_feature_columns.pkl")
# Load the model
model = joblib.load(model_path)
feature_columns = joblib.load(feature_columns_path)
# Make predictions (ensure your data has the same features)
predictions = model.predict(X_features)
```
## Background
SNPS was selected as one of the top 5 S&P 500 stocks based on pattern analysis including:
- High volatility patterns
- Strong momentum indicators
- Volume spike patterns
- Price breakout potential
## Important Notes
- This model is for educational/research purposes only
- Past performance does not guarantee future results
- Always do your own research before making investment decisions
- Financial markets are inherently risky and unpredictable
## Training Configuration
- **Mode**: 2-Week Backtesting
- **Created**: 2025-05-31 11:03:29
- **Data**: SNPS 5-minute bars from matthewchung74/snps-6month-5min-bars
## Files
- `snps_xgboost_model.pkl`: Trained XGBoost model
- `snps_feature_columns.pkl`: List of feature column names
- `snps_model_metadata.pkl`: Complete model metadata and configuration
|
mlabonne/gemma-3-27b-it-abliterated-v2 | mlabonne | 2025-05-31T17:48:02Z | 78 | 11 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"image-text-to-text",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-28T16:58:51Z | ---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
base_model: google/gemma-3-27b-it
---
# 💎 Gemma 3 27B IT Abliterated

<center>Gemma 3 Abliterated <a href="https://huggingface.co/mlabonne/gemma-3-1b-it-abliterated-v2">1B</a> • <a href="https://huggingface.co/mlabonne/gemma-3-4b-it-abliterated-v2">4B</a> • <a href="https://huggingface.co/mlabonne/gemma-3-12b-it-abliterated-v2">12B</a> • <a href="https://huggingface.co/mlabonne/gemma-3-27b-it-abliterated-v2">27B</a></center>
This is an uncensored version of [google/gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it) created with a new abliteration technique.
See [this article](https://huggingface.co/blog/mlabonne/abliteration) to know more about abliteration.
This is a new, improved version that targets refusals with enhanced accuracy.
I recommend using these generation parameters: `temperature=1.0`, `top_k=64`, `top_p=0.95`.
## ⚡️ Quantization
* **QAT**: https://huggingface.co/mlabonne/gemma-3-27b-it-qat-abliterated
* **GGUF**: https://huggingface.co/mlabonne/gemma-3-27b-it-abliterated-v2-GGUF
## ✂️ Abliteration

The refusal direction is computed by comparing the residual streams between target (harmful) and baseline (harmless) samples.
The hidden states of target modules (e.g., o_proj) are orthogonalized to subtract this refusal direction with a given weight factor.
These weight factors follow a normal distribution with a certain spread and peak layer.
Modules can be iteratively orthogonalized in batches, or the refusal direction can be accumulated to save memory.
Finally, I used a hybrid evaluation with a dedicated test set to calculate the acceptance rate. This uses both a dictionary approach and [NousResearch/Minos-v1](https://huggingface.co/NousResearch/Minos-v1).
The goal is to obtain an acceptance rate >90% and still produce coherent outputs. |
Ver-Video-y-Foto-Alana-Flores/VER.Filtran.video.y.fotos.intimas.de.la.influencer.Alana.Flores | Ver-Video-y-Foto-Alana-Flores | 2025-05-31T17:41:28Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T17:41:10Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
lldacing/flash-attention-windows-wheel | lldacing | 2025-05-31T16:55:39Z | 0 | 156 | null | [
"license:bsd-3-clause",
"region:us"
] | null | 2024-11-15T08:37:33Z | ---
license: bsd-3-clause
---
Windows wheels of [flash-attention](https://github.com/Dao-AILab/flash-attention)
Build cuda wheel steps
- First clone code
```
git clone https://github.com/Dao-AILab/flash-attention
cd flash-attention
```
- Switch tag branch, such as `v2.7.0.post2` (you can get latest tag by `git describe --tags` or list all available tags by `git tag -l`)
```
git checkout -b v2.7.0.post2 v2.7.0.post2
```
- Download WindowsWhlBuilder_cuda.bat into `flash-attention`
- To build with MSVC, please open the "Native Tools Command Prompt for Visual Studio". The exact name may depend on your version of Windows, Visual Studio, and cpu architecture (in my case it was "x64 Native Tools Command Prompt for VS 2022".)
My Visual Studio Installer version

- Switch python env and make sure the corresponding torch cuda version is installed
- Start task
```
# Build with 1 parallel workers (I used 8 workers on i9-14900KF-3.20GHz-RAM64G, which took about 30 minutes.)
# If you want reset workers, you should edit `WindowsWhlBuilder_cuda.bat` and modify `set MAX_JOBS=1`.(I tried to modify it by parameters, but failed)
WindowsWhlBuilder_cuda.bat
# Build with sm80 and sm120
WindowsWhlBuilder_cuda.bat CUDA_ARCH="80;120"
# Enable cxx11abi
WindowsWhlBuilder_cuda.bat CUDA_ARCH="80;120" FORCE_CXX11_ABI=TRUE
```
- Wheel file will be placed in the `dist` directory
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.