modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-02 08:43:47) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 462 classes) | tags (sequence, length 1–4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-02 08:40:46) | card (string, length 11–1.01M)
---|---|---|---|---|---|---|---|---|---|
GGNorbert/efficientnet_b4-s2-v0.2.0-RGBclipped | GGNorbert | 2025-05-30T00:59:53Z | 0 | 0 | configilm | [
"configilm",
"safetensors",
"efficientnet_b4",
"BigEarthNet v2.0",
"Remote Sensing",
"Classification",
"image-classification",
"Multispectral",
"arxiv:2407.03653",
"license:mit",
"region:us"
] | image-classification | 2025-05-30T00:59:35Z | ---
thumbnail: "https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png"
tags:
- efficientnet_b4
- BigEarthNet v2.0
- Remote Sensing
- Classification
- image-classification
- Multispectral
library_name: configilm
license: mit
widget:
- src: example.png
example_title: Example
output:
- label: Agro-forestry areas
score: 0.000000
- label: Arable land
score: 0.000000
- label: Beaches, dunes, sands
score: 0.000000
- label: Broad-leaved forest
score: 0.000000
- label: Coastal wetlands
score: 0.000000
---
[TU Berlin](https://www.tu.berlin/) | [RSiM](https://rsim.berlin/) | [DIMA](https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/) | [BigEarth](http://www.bigearth.eu/) | [BIFOLD](https://bifold.berlin/)
:---:|:---:|:---:|:---:|:---:
<a href="https://www.tu.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/tu-berlin-logo-long-red.svg" style="font-size: 1rem; height: 2em; width: auto" alt="TU Berlin Logo"/> | <a href="https://rsim.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png" style="font-size: 1rem; height: 2em; width: auto" alt="RSiM Logo"> | <a href="https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/DIMA.png" style="font-size: 1rem; height: 2em; width: auto" alt="DIMA Logo"> | <a href="http://www.bigearth.eu/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BigEarth.png" style="font-size: 1rem; height: 2em; width: auto" alt="BigEarth Logo"> | <a href="https://bifold.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BIFOLD_Logo_farbig.png" style="font-size: 1rem; height: 2em; width: auto; margin-right: 1em" alt="BIFOLD Logo">
# Efficientnet_b4 pretrained on BigEarthNet v2.0 using Sentinel-2 bands
<!-- Optional images -->
<!--
[Sentinel-1](https://sentinel.esa.int/web/sentinel/missions/sentinel-1) | [Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2)
:---:|:---:
<a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-1"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_2.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-2 Satellite"/> | <a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-2"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_1.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-1 Satellite"/>
-->
This model was trained on the BigEarthNet v2.0 (also known as reBEN) dataset using the Sentinel-2 bands.
It was trained using the following parameters:
- Number of epochs: up to 100 (with early stopping after 5 epochs of no improvement based on validation average
precision macro)
- Batch size: 512
- Learning rate: 0.001
- Dropout rate: 0.15
- Drop Path rate: 0.15
- Learning rate scheduler: LinearWarmupCosineAnnealing for 2000 warmup steps
- Optimizer: AdamW
- Seed: 42
The weights published in this model card were obtained after 32 training epochs.
For more information, please visit the [official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts), where you can find the training scripts.
The model was evaluated on the test set of the BigEarthNet v2.0 dataset with the following results:
| Metric | Macro | Micro |
|:------------------|------------------:|------------------:|
| Average Precision | 0.654173 | 0.749014 |
| F1 Score | 0.599242 | 0.659746 |
| Precision | 0.681693 | 0.733997 |
# Example
| A Sentinel-2 image (true color representation) |
|:---------------------------------------------------:|
|  |
| Class labels | Predicted scores |
|:--------------------------------------------------------------------------|--------------------------------------------------------------------------:|
| <p> Agro-forestry areas <br> Arable land <br> Beaches, dunes, sands <br> ... <br> Urban fabric </p> | <p> 0.000000 <br> 0.000000 <br> 0.000000 <br> ... <br> 0.000000 </p> |
To use the model, download the codes that define the model architecture from the
[official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts) and load the model using the
code below. Note that you have to install [`configilm`](https://pypi.org/project/configilm/) to use the provided code.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained("path_to/huggingface_model_folder")
```
e.g.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained(
"BIFOLD-BigEarthNetv2-0/efficientnet_b4-s2-v0.1.1")
```
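As a quick sanity check after loading, the following is a hedged inference sketch. It assumes the 10 Sentinel-2 bands at 120x120 pixels used by BigEarthNet v2.0 and that the classifier is directly callable on a batch tensor; the dummy input and the sigmoid post-processing are illustrative, not taken from the official training scripts.
```python
import torch

# Hypothetical dummy batch: 1 sample, 10 Sentinel-2 bands, 120x120 pixels
x = torch.randn(1, 10, 120, 120)

model.eval()
with torch.no_grad():
    logits = model(x)               # expected shape: (1, 19) for the 19 BigEarthNet classes
    scores = torch.sigmoid(logits)  # multi-label probabilities, one per class
print(scores)
```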
If you use this model in your research or the provided code, please cite the following papers:
```bibtex
@article{clasen2024refinedbigearthnet,
title={reBEN: Refined BigEarthNet Dataset for Remote Sensing Image Analysis},
author={Clasen, Kai Norman and Hackel, Leonard and Burgert, Tom and Sumbul, Gencer and Demir, Beg{\"u}m and Markl, Volker},
year={2024},
eprint={2407.03653},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.03653},
}
```
```bibtex
@article{hackel2024configilm,
title={ConfigILM: A general purpose configurable library for combining image and language models for visual question answering},
author={Hackel, Leonard and Clasen, Kai Norman and Demir, Beg{\"u}m},
journal={SoftwareX},
volume={26},
pages={101731},
year={2024},
publisher={Elsevier}
}
```
|
httppp/finetuned-llama2-4bit-gguf | httppp | 2025-05-30T00:47:19Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-30T00:47:16Z | ---
license: apache-2.0
---
|
trentmkelly/slop-detector | trentmkelly | 2025-05-30T00:37:34Z | 0 | 2 | transformers | [
"transformers",
"tensorboard",
"onnx",
"safetensors",
"bert",
"text-classification",
"autotrain",
"base_model:thenlper/gte-base",
"base_model:quantized:thenlper/gte-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-29T13:39:01Z |
---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: thenlper/gte-base
widget:
- text: "Wow, thanks for sharing this totally rad post, bro!"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.03548985347151756
f1: 0.9950522264980759
precision: 0.9945054945054945
recall: 0.9955995599559956
auc: 0.9997361672360855
accuracy: 0.995049504950495
## Purpose
I trained this on a bunch of top-level comments on reddit. Human class was the real responses to selfposts in various subs, and the LLM class was a response from one of several LLMs to the same post. I am tired of reading fucking GPT-slop comments on reddit.
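For reference, here is a minimal usage sketch with the transformers pipeline; the exact label names are not documented here, so check the model config for the human/LLM class names.
```python
from transformers import pipeline

clf = pipeline("text-classification", model="trentmkelly/slop-detector")

# Score a suspiciously enthusiastic comment
print(clf("Wow, thanks for sharing this totally rad post, bro!"))
# -> [{'label': ..., 'score': ...}]
```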
## Notes
Converted ONNX model is available for compatibility with transformers.js. Browser extension and mini version coming soon. |
morturr/Mistral-7B-v0.1-headlines-2025-05-30 | morturr | 2025-05-30T00:28:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-05-29T21:44:52Z | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-headlines-2025-05-30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-headlines-2025-05-30
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
aquiffoo/aquif-2.5 | aquiffoo | 2025-05-30T00:19:14Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"aquif",
"dense",
"3b",
"sota",
"pt",
"en",
"fr",
"zh",
"base_model:ibm-granite/granite-3.3-2b-instruct",
"base_model:adapter:ibm-granite/granite-3.3-2b-instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-05-30T00:15:20Z | ---
library_name: peft
license: apache-2.0
base_model: ibm-granite/granite-3.3-2b-instruct
tags:
- aquif
- dense
- 3b
- sota
- pt
- en
- fr
- zh
model-index:
- name: aquif-2.5
results: []
---
# aquif-2.5
**aquif-2.5** is our final 3B non-MoE model, representing the culmination of our dense model series. It delivers competitive performance across a range of reasoning, programming, and mathematical benchmarks without relying on mixture-of-experts techniques.
## Model Overview
* **Name**: `aquif-2.5`
* **Parameters**: 3 Billion
* **Context Window**: 128k tokens
* **Architecture**: Decoder-only transformer
* **Type**: General-purpose LLM
* **Hosted on**: [Ollama](https://ollama.com/aquiffoo/aquif-2.5) and [HuggingFace](https://huggingface.co/aquiffoo/aquif-2.5)
## Features
* High-quality code generation and reasoning in a compact 3B model
* Fast inference, ideal for local deployment
* Well-balanced performance across HumanEval, GSM8K, and MMLU
* Final version before transitioning to MoE architecture in aquif-3
* Thinking Mode capabilities (message with role "control"; see the sketch below)
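Below is a hedged sketch of the Thinking Mode convention noted above, using the ollama Python client; the exact content expected in the "control" message is an assumption, not documented in this card.
```python
import ollama  # pip install ollama

response = ollama.chat(
    model="aquiffoo/aquif-2.5",
    messages=[
        # Assumed convention: a "control" role message toggles thinking mode
        {"role": "control", "content": "thinking"},
        {"role": "user", "content": "What is 17 * 24? Think step by step."},
    ],
)
print(response["message"]["content"])
```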
## Performance Benchmarks
| Benchmark | aquif-2 (3B) | aquif-2.5 (3B) | aquif-2.5 thinking (3B) | aquif-3-preview (1x8B) |
| ---------- | ------------ | -------------- | ----------------------- | ---------------------- |
| MMLU | 52.5 | 56.9 | 59.8 | 60.4 |
| HumanEval | 70.8 | 82.0 | 86.1 | 82.4 |
| GSM8K | 52.4 | 73.9 | 77.5 | 70.1 |
| Avg. Score | 58.6 | 70.9 | 74.5 | 71.0 |
## Use Cases
* Coding tasks (Python, JavaScript, etc.)
* Chain-of-thought prompting
* Reasoning-heavy assistant use
* Math problem solving and logic inference
## Training Details
* **Learning Rate**: 2e-05
* **Train Batch Size**: 4
* **Eval Batch Size**: 8
* **Seed**: 42
* **Gradient Accumulation Steps**: 2
* **Total Train Batch Size**: 8
* **Optimizer**: AdamW (Torch), betas=(0.9, 0.999), epsilon=1e-08
* **LR Scheduler**: Cosine
* **Epochs**: 1
* **Precision**: Native AMP (mixed precision training)
## Limitations
* Despite strong performance, still prone to hallucinations on factual queries
* Not designed for multimodal inputs
* No built-in long context window beyond 16K
## Getting Started
To run via [Ollama](https://ollama.com):
```bash
ollama run aquiffoo/aquif-2.5
```
|
ZiartisNikolas/NMT-cypriot-dialect-to-greek | ZiartisNikolas | 2025-05-30T00:09:22Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"nmt",
"cypriot-greek",
"greek",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2025-05-26T18:24:08Z | ---
tags:
- translation
- nmt
- cypriot-greek
- greek
library_name: transformers
languages:
- cy
- el
license: cc-by-4.0
---
## Model Details
- **Developed by**: Nikolas Ziartis
- **Institute**: University of Cyprus
- **Model type**: MarianMT (Transformer-based Seq2Seq)
- **Source language**: Cypriot Greek (ISO 639-1: cy)
- **Target language**: Modern Standard Greek (ISO 639-1: el)
- **Fine-tuned from**: `Helsinki-NLP/opus-mt-en-grk`
- **License**: CC BY 4.0
## Model Description
This model is a MarianMT transformer, fine-tuned via active learning to translate from the low-resource Cypriot Greek dialect into Modern Standard Greek. In nine iterative batches, we:
1. **Extracted high-dimensional embeddings** for every unlabeled Cypriot sentence using the Greek LLM `ilsp/Meltemi-7B-Instruct-v1.5`.
2. **Applied k-means clustering** to select the 50 “most informative” sentence pairs per batch.
3. **Had human annotators** translate those 50 sentences into Standard Greek.
4. **Fine-tuned** the MarianMT model on the accumulating parallel corpus, freezing and unfreezing layers to preserve learned representations.
The result is a system that accurately captures colloquial Cypriot expressions while producing fluent Modern Greek.
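A minimal sketch of the batch-selection step (steps 1 and 2 above), assuming the LLM sentence embeddings have already been extracted into a NumPy array; the file name and variable names are illustrative, not taken from the training code.
```python
import numpy as np
from sklearn.cluster import KMeans

# embeddings: (n_sentences, dim) array of LLM sentence embeddings (hypothetical file)
embeddings = np.load("cypriot_embeddings.npy")

BATCH = 50
km = KMeans(n_clusters=BATCH, random_state=42).fit(embeddings)

# Treat the sentence closest to each cluster centroid as "most informative"
dists = np.linalg.norm(embeddings[:, None, :] - km.cluster_centers_[None, :, :], axis=-1)
selected = dists.argmin(axis=0)  # one sentence index per cluster
print(sorted(selected.tolist()))
```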
## Usage
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "ZiartisNikolas/NMT-cypriot-dialect-to-greek"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
src = ["Τζ̆αι φυσικά ήξερα ίνταμπου εγινίσκετουν."] # Cypriot Greek sentence
batch = tokenizer(src, return_tensors="pt", padding=True)
gen = model.generate(**batch)
print(tokenizer.batch_decode(gen, skip_special_tokens=True))
```
|
maximuspowers/cmd-r-vora-4 | maximuspowers | 2025-05-30T00:04:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vora",
"text-generation",
"image-text-to-text",
"conversational",
"custom_code",
"dataset:Hon-Wong/VoRA-Recap-GLDv2-1.4M",
"arxiv:2503.20680",
"base_model:CohereLabs/c4ai-command-r7b-12-2024",
"base_model:finetune:CohereLabs/c4ai-command-r7b-12-2024",
"autotrain_compatible",
"region:us"
] | image-text-to-text | 2025-05-29T23:55:43Z | ---
library_name: transformers
pipeline_tag: image-text-to-text
base_model:
- CohereForAI/c4ai-command-r7b-12-2024
datasets:
- Hon-Wong/VoRA-Recap-GLDv2-1.4M
---
# VoRA Command R
This is a VoRA (Vision as LoRA) adaptation of Command R 7B, enabling vision-language understanding capabilities.
* [ArXiv Paper](https://arxiv.org/abs/2503.20680)
* [Original VoRA Github](https://github.com/Hon-Wong/VoRA)
## Model Details
- **Base Model**: CohereForAI/c4ai-command-r7b-12-2024
- **Vision Adapter**: LoRA with rank 32 applied to attention layers
- **Image Resolution**: 224x224
- **Vision Placeholder Token**: «
## Quickstart
The model can be used as follows:
```python
import torch
from transformers import AutoProcessor, AutoModelForCausalLM
model_name = "your-username/cmd-r-vora-4"
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
conversation = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "{image path or url}"
            },
            {
                "type": "text",
                "text": "« Describe this image."
            }
        ]
    }
]
model_inputs = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=True, return_tensors='pt', return_dict=True).to(model.device)
gen_kwargs = {"max_new_tokens": 1024, "eos_token_id": processor.tokenizer.eos_token_id}
with torch.inference_mode():
    outputs = model.generate(**model_inputs, **gen_kwargs)
output_text = processor.tokenizer.batch_decode(
    outputs, skip_special_tokens=True
)
print(output_text)
print(output_text)
``` |
Petricaa19/malimg-cnn-classifiers-second | Petricaa19 | 2025-05-29T23:56:36Z | 0 | 0 | keras | [
"keras",
"region:us"
] | null | 2025-05-29T23:55:48Z |
---
library_name: keras
---
This model has been uploaded using the Keras library and can be used with JAX,
TensorFlow, and PyTorch backends.
This model card has been generated automatically and should be completed by the
model author.
See [Model Cards documentation](https://huggingface.co/docs/hub/model-cards) for
more information.
For more details about the model architecture, check out
[config.json](./config.json). A plot of the model can be found [here](./assets/summary_plot.png). |
Tookies/SmolLM2-FT-OpenO1-SFT | Tookies | 2025-05-29T23:46:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T23:46:26Z | ---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-OpenO1-SFT
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-OpenO1-SFT
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Tookies/SmolLM2-FT-OpenO1-SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.4.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jinx2321/nllb-tagged-1e4-paper-2 | jinx2321 | 2025-05-29T23:43:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/nllb-tagged-1e4-paper",
"base_model:finetune:jinx2321/nllb-tagged-1e4-paper",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-29T21:48:58Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: jinx2321/nllb-tagged-1e4-paper
tags:
- generated_from_trainer
model-index:
- name: nllb-tagged-1e4-paper-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-tagged-1e4-paper-2
This model is a fine-tuned version of [jinx2321/nllb-tagged-1e4-paper](https://huggingface.co/jinx2321/nllb-tagged-1e4-paper) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
jinx2321/nllb-1e4-paper-2 | jinx2321 | 2025-05-29T23:40:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/nllb-1e4-paper",
"base_model:finetune:jinx2321/nllb-1e4-paper",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-29T21:45:16Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: jinx2321/nllb-1e4-paper
tags:
- generated_from_trainer
model-index:
- name: nllb-1e4-paper-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-1e4-paper-2
This model is a fine-tuned version of [jinx2321/nllb-1e4-paper](https://huggingface.co/jinx2321/nllb-1e4-paper) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
SirPeaves/X16 | SirPeaves | 2025-05-29T23:39:03Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-29T23:39:03Z | ---
license: apache-2.0
---
|
mlx-community/DeepSeek-R1-0528-3bit | mlx-community | 2025-05-29T23:37:53Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"base_model:deepseek-ai/DeepSeek-R1-0528",
"base_model:quantized:deepseek-ai/DeepSeek-R1-0528",
"license:mit",
"3-bit",
"region:us"
] | text-generation | 2025-05-29T22:25:50Z | ---
license: mit
library_name: mlx
pipeline_tag: text-generation
tags:
- mlx
base_model: deepseek-ai/DeepSeek-R1-0528
---
# mlx-community/DeepSeek-R1-0528-3bit
This model [mlx-community/DeepSeek-R1-0528-3bit](https://huggingface.co/mlx-community/DeepSeek-R1-0528-3bit) was
converted to MLX format from [deepseek-ai/DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528)
using mlx-lm version **0.24.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/DeepSeek-R1-0528-3bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
MARCO-ZAVALETA/DeepSeek-R1-Medical-COT | MARCO-ZAVALETA | 2025-05-29T23:29:20Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T22:49:58Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MARCO-ZAVALETA
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jinx2321/nllb-tagged-1e4-paper-distilled-1 | jinx2321 | 2025-05-29T23:25:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/nllb-tagged-1e4-paper",
"base_model:finetune:jinx2321/nllb-tagged-1e4-paper",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-29T21:59:22Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: jinx2321/nllb-tagged-1e4-paper
tags:
- generated_from_trainer
model-index:
- name: nllb-tagged-1e4-paper-distilled-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-tagged-1e4-paper-distilled-1
This model is a fine-tuned version of [jinx2321/nllb-tagged-1e4-paper](https://huggingface.co/jinx2321/nllb-tagged-1e4-paper) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
Oceans-ID/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-chattering_beaked_eagle | Oceans-ID | 2025-05-29T23:25:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am chattering beaked eagle",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-26T07:23:23Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-chattering_beaked_eagle
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am chattering beaked eagle
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-chattering_beaked_eagle
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Oceans-ID/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-chattering_beaked_eagle", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
GGNorbert/efficientnetv2_m-s2-v0.2.0 | GGNorbert | 2025-05-29T23:21:13Z | 0 | 0 | configilm | [
"configilm",
"safetensors",
"efficientnetv2_m",
"BigEarthNet v2.0",
"Remote Sensing",
"Classification",
"image-classification",
"Multispectral",
"arxiv:2407.03653",
"license:mit",
"region:us"
] | image-classification | 2025-05-29T23:20:33Z | ---
thumbnail: "https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png"
tags:
- efficientnetv2_m
- BigEarthNet v2.0
- Remote Sensing
- Classification
- image-classification
- Multispectral
library_name: configilm
license: mit
widget:
- src: example.png
example_title: Example
output:
- label: Agro-forestry areas
score: 0.000000
- label: Arable land
score: 0.000000
- label: Beaches, dunes, sands
score: 0.000000
- label: Broad-leaved forest
score: 0.000000
- label: Coastal wetlands
score: 0.000000
---
[TU Berlin](https://www.tu.berlin/) | [RSiM](https://rsim.berlin/) | [DIMA](https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/) | [BigEarth](http://www.bigearth.eu/) | [BIFOLD](https://bifold.berlin/)
:---:|:---:|:---:|:---:|:---:
<a href="https://www.tu.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/tu-berlin-logo-long-red.svg" style="font-size: 1rem; height: 2em; width: auto" alt="TU Berlin Logo"/> | <a href="https://rsim.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png" style="font-size: 1rem; height: 2em; width: auto" alt="RSiM Logo"> | <a href="https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/DIMA.png" style="font-size: 1rem; height: 2em; width: auto" alt="DIMA Logo"> | <a href="http://www.bigearth.eu/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BigEarth.png" style="font-size: 1rem; height: 2em; width: auto" alt="BigEarth Logo"> | <a href="https://bifold.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BIFOLD_Logo_farbig.png" style="font-size: 1rem; height: 2em; width: auto; margin-right: 1em" alt="BIFOLD Logo">
# Efficientnetv2_m pretrained on BigEarthNet v2.0 using Sentinel-2 bands
<!-- Optional images -->
<!--
[Sentinel-1](https://sentinel.esa.int/web/sentinel/missions/sentinel-1) | [Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2)
:---:|:---:
<a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-1"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_2.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-2 Satellite"/> | <a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-2"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_1.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-1 Satellite"/>
-->
This model was trained on the BigEarthNet v2.0 (also known as reBEN) dataset using the Sentinel-2 bands.
It was trained using the following parameters:
- Number of epochs: up to 100 (with early stopping after 5 epochs of no improvement based on validation average
precision macro)
- Batch size: 512
- Learning rate: 0.001
- Dropout rate: 0.15
- Drop Path rate: 0.15
- Learning rate scheduler: LinearWarmupCosineAnnealing for 2000 warmup steps
- Optimizer: AdamW
- Seed: 42
The weights published in this model card were obtained after 38 training epochs.
For more information, please visit the [official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts), where you can find the training scripts.
The model was evaluated on the test set of the BigEarthNet v2.0 dataset with the following results:
| Metric | Macro | Micro |
|:------------------|------------------:|------------------:|
| Average Precision | 0.655410 | 0.747705 |
| F1 Score | 0.609869 | 0.670546 |
| Precision | 0.671961 | 0.712082 |
# Example
| A Sentinel-2 image (true color representation) |
|:---------------------------------------------------:|
|  |
| Class labels | Predicted scores |
|:--------------------------------------------------------------------------|--------------------------------------------------------------------------:|
| <p> Agro-forestry areas <br> Arable land <br> Beaches, dunes, sands <br> ... <br> Urban fabric </p> | <p> 0.000000 <br> 0.000000 <br> 0.000000 <br> ... <br> 0.000000 </p> |
To use the model, download the codes that define the model architecture from the
[official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts) and load the model using the
code below. Note that you have to install [`configilm`](https://pypi.org/project/configilm/) to use the provided code.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained("path_to/huggingface_model_folder")
```
e.g.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained(
"BIFOLD-BigEarthNetv2-0/efficientnetv2_m-s2-v0.1.1")
```
If you use this model in your research or the provided code, please cite the following papers:
```bibtex
@article{clasen2024refinedbigearthnet,
title={reBEN: Refined BigEarthNet Dataset for Remote Sensing Image Analysis},
author={Clasen, Kai Norman and Hackel, Leonard and Burgert, Tom and Sumbul, Gencer and Demir, Beg{\"u}m and Markl, Volker},
year={2024},
eprint={2407.03653},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.03653},
}
```
```bibtex
@article{hackel2024configilm,
title={ConfigILM: A general purpose configurable library for combining image and language models for visual question answering},
author={Hackel, Leonard and Clasen, Kai Norman and Demir, Beg{\"u}m},
journal={SoftwareX},
volume={26},
pages={101731},
year={2024},
publisher={Elsevier}
}
```
|
abhikapoor909/vitmanu1b4-16q | abhikapoor909 | 2025-05-29T23:09:05Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-29T23:08:16Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** abhikapoor909
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
YShynkarov/ukr-roberta-cosmus-sentiment | YShynkarov | 2025-05-29T23:00:25Z | 0 | 0 | null | [
"sentiment",
"ukrainian",
"socialmedia",
"uk",
"ru",
"dataset:YShynkarov/COSMUS",
"base_model:youscan/ukr-roberta-base",
"base_model:finetune:youscan/ukr-roberta-base",
"license:mit",
"region:us"
] | null | 2025-05-29T22:37:25Z | ---
license: mit
language:
- uk
- ru
metrics:
- accuracy
- f1
base_model:
- youscan/ukr-roberta-base
tags:
- sentiment
- ukrainian
- socialmedia
datasets:
- YShynkarov/COSMUS
--- |
Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q8_0-GGUF | Triangle104 | 2025-05-29T22:50:41Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"moe",
"mixture of experts",
"merge",
"llama-3",
"llama3",
"llama-cpp",
"gguf-my-repo",
"base_model:DavidAU/L3-MOE-4X8B-Grand-Horror-25B",
"base_model:quantized:DavidAU/L3-MOE-4X8B-Grand-Horror-25B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-29T22:46:43Z | ---
library_name: transformers
tags:
- mergekit
- moe
- mixture of experts
- merge
- llama-3
- llama3
- llama-cpp
- gguf-my-repo
base_model: DavidAU/L3-MOE-4X8B-Grand-Horror-25B
---
# Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q8_0-GGUF
This model was converted to GGUF format from [`DavidAU/L3-MOE-4X8B-Grand-Horror-25B`](https://huggingface.co/DavidAU/L3-MOE-4X8B-Grand-Horror-25B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DavidAU/L3-MOE-4X8B-Grand-Horror-25B) for more details on the model.
---
It is a LLama3 model, max context of 8192 (or 32k+ with rope) using mixture of experts to combine Dark/Horror models of 8B each into one massive powerhouse at 25B parameters (equal to 32B - 4 x 8B).
This model's instruction following, and output generation for creative writing, prose, fiction and role play are exceptional.
It excels at description, dialog, imagery, metaphors, and prose - and shows great variations in sentence / paragraph size, length, and composition.
It is also not afraid, and will not pull its punches.
And it has a sense of humor too.
It can do horror just as easily as it can do romance.
Most notably dialog is very "un-ai" like, combined with prose (short, and terse at times).
(lots of different examples below, including 2, 3 and 4 experts and different genres)
And it is fast: 34 t/s (2 experts) on a low end 16GB card, Q3KS.
Double this speed for standard/mid-range video cards.
Model can be used also for all genres (examples below showing this).
This model has been designed to be relatively bullet proof and operates with all parameters, including temp settings from 0 to 5.
It is an extraordinarily compressed model, with a very low perplexity level (lower than Meta Llama3 Instruct).
It is for any writing, fiction or roleplay activity.
It requires Llama3 template and/or "Command-R" template.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q8_0-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q8_0-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q8_0-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q8_0-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q8_0.gguf -c 2048
```
|
mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF | mradermacher | 2025-05-29T22:40:10Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mlabonne/gemma-3-27b-it-qat-abliterated",
"base_model:quantized:mlabonne/gemma-3-27b-it-qat-abliterated",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-29T20:00:05Z | ---
base_model: mlabonne/gemma-3-27b-it-qat-abliterated
language:
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mlabonne/gemma-3-27b-it-qat-abliterated
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
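As one concrete option, here is a hedged sketch using the llama-cpp-python bindings to fetch and run a quant from this repo; the i1-Q4_K_M file name is taken from the table below, and the generation settings are illustrative.
```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

llm = Llama.from_pretrained(
    repo_id="mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF",
    filename="*i1-Q4_K_M.gguf",  # glob matching the quant listed below
    n_ctx=4096,
)
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```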
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-IQ1_S.gguf) | i1-IQ1_S | 6.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-IQ1_M.gguf) | i1-IQ1_M | 6.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-IQ2_S.gguf) | i1-IQ2_S | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-IQ2_M.gguf) | i1-IQ2_M | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-Q2_K_S.gguf) | i1-Q2_K_S | 9.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-Q2_K.gguf) | i1-Q2_K | 10.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 10.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 11.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-IQ3_S.gguf) | i1-IQ3_S | 12.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 12.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-IQ3_M.gguf) | i1-IQ3_M | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 13.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 14.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-Q4_0.gguf) | i1-Q4_0 | 15.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 15.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 16.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-Q4_1.gguf) | i1-Q4_1 | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-Q5_K_S.gguf) | i1-Q5_K_S | 18.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-Q5_K_M.gguf) | i1-Q5_K_M | 19.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.i1-Q6_K.gguf) | i1-Q6_K | 22.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
emersonrosaoficial/emersonrosa-lorav2 | emersonrosaoficial | 2025-05-29T22:38:16Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-05-29T21:42:01Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
BootesVoid/cmb97dpnp083e1b1yl7n88ue0_cmb9x20t80jsa1b1y5xh7dhqx | BootesVoid | 2025-05-29T22:29:21Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-29T22:29:03Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: luise2
---
# Cmb97Dpnp083E1B1Yl7N88Ue0_Cmb9X20T80Jsa1B1Y5Xh7Dhqx
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `luise2` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "luise2",
    "lora_weights": "https://huggingface.co/BootesVoid/cmb97dpnp083e1b1yl7n88ue0_cmb9x20t80jsa1b1y5xh7dhqx/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb97dpnp083e1b1yl7n88ue0_cmb9x20t80jsa1b1y5xh7dhqx', weight_name='lora.safetensors')
image = pipeline('luise2').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb97dpnp083e1b1yl7n88ue0_cmb9x20t80jsa1b1y5xh7dhqx/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/gemma-3-27b-it-qat-abliterated-GGUF | mradermacher | 2025-05-29T22:29:17Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mlabonne/gemma-3-27b-it-qat-abliterated",
"base_model:quantized:mlabonne/gemma-3-27b-it-qat-abliterated",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-29T11:54:24Z | ---
base_model: mlabonne/gemma-3-27b-it-qat-abliterated
language:
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mlabonne/gemma-3-27b-it-qat-abliterated
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.Q2_K.gguf) | Q2_K | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.Q3_K_S.gguf) | Q3_K_S | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.Q3_K_L.gguf) | Q3_K_L | 14.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.IQ4_XS.gguf) | IQ4_XS | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.Q4_K_M.gguf) | Q4_K_M | 16.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.Q5_K_S.gguf) | Q5_K_S | 18.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.Q5_K_M.gguf) | Q5_K_M | 19.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.Q6_K.gguf) | Q6_K | 22.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-qat-abliterated-GGUF/resolve/main/gemma-3-27b-it-qat-abliterated.Q8_0.gguf) | Q8_0 | 28.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
panas1989/bloom-560m-8bit | panas1989 | 2025-05-29T22:18:19Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T22:18:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
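Pending details from the author, a minimal hedged sketch, assuming this is a standard causal-LM checkpoint loadable with transformers (an 8-bit checkpoint may additionally require bitsandbytes; the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "panas1989/bloom-560m-8bit"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)  # may need bitsandbytes for 8-bit weights

inputs = tok("Hello, my name is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```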
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Antigma/Qwen3-1.7B-GGUF | Antigma | 2025-05-29T22:14:33Z | 43 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-1.7B",
"base_model:quantized:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-29T02:25:48Z | ---
base_model: Qwen/Qwen3-1.7B
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-1.7B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
*Produced by [Antigma Labs](https://antigma.ai), [Antigma Quantize Space](https://huggingface.co/spaces/Antigma/quantize-my-repo)*
*Follow Antigma Labs on X: [https://x.com/antigma_labs](https://x.com/antigma_labs)*
*Antigma's GitHub Homepage [https://github.com/AntigmaLabs](https://github.com/AntigmaLabs)*
## llama.cpp quantization
Using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5215">b5215</a> for quantization.
Original model: https://huggingface.co/Qwen/Qwen3-1.7B
Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or with any other llama.cpp-based project.
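For example, a minimal interactive run with the llama.cpp CLI (the file name is taken from the table below; `-n` sets the number of tokens to generate):

```
llama-cli -m qwen3-1.7b-q4_k_m.gguf -p "Write a haiku about quantization." -n 128
```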
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split |
| -------- | ---------- | --------- | ----- |
| [qwen3-1.7b-q4_k_m.gguf](https://huggingface.co/Antigma/Qwen3-1.7B-GGUF/blob/main/qwen3-1.7b-q4_k_m.gguf) | Q4_K_M | 1.19 GB | False |
| [qwen3-1.7b-q4_0.gguf](https://huggingface.co/Antigma/Qwen3-1.7B-GGUF/blob/main/qwen3-1.7b-q4_0.gguf) | Q4_0 | 1.15 GB | False |
| [qwen3-1.7b-q4_k_s.gguf](https://huggingface.co/Antigma/Qwen3-1.7B-GGUF/blob/main/qwen3-1.7b-q4_k_s.gguf) | Q4_K_S | 1.15 GB | False |
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download Antigma/Qwen3-1.7B-GGUF --include "qwen3-1.7b-q4_k_m.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download Antigma/Qwen3-1.7B-GGUF --include "qwen3-1.7b-q4_k_m.gguf/*" --local-dir ./
```
You can either specify a new local-dir (e.g. `Antigma_Qwen3-1.7B-GGUF`) or download everything in place (`./`).
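Alternatively, a short Python sketch using `huggingface_hub`, equivalent to the first CLI call above:

```python
# Download a single GGUF file with the huggingface_hub Python API.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Antigma/Qwen3-1.7B-GGUF",
    filename="qwen3-1.7b-q4_k_m.gguf",
    local_dir="./",
)
print(path)
```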
</details>
|
Antigma/Devstral-Small-2505-GGUF | Antigma | 2025-05-29T22:09:16Z | 292 | 1 | vllm | [
"vllm",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text2text-generation",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:unsloth/Devstral-Small-2505",
"base_model:quantized:unsloth/Devstral-Small-2505",
"license:apache-2.0",
"region:us",
"conversational"
] | text2text-generation | 2025-05-22T22:34:51Z | ---
base_model: unsloth/Devstral-Small-2505
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
library_name: vllm
license: apache-2.0
pipeline_tag: text2text-generation
tags:
- llama-cpp
- gguf-my-repo
inference: false
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---
*Produced by [Antigma Labs](https://antigma.ai), [Antigma Quantize Space](https://huggingface.co/spaces/Antigma/quantize-my-repo)*
*Follow Antigma Labs on X: [https://x.com/antigma_labs](https://x.com/antigma_labs)*
*Antigma's GitHub Homepage [https://github.com/AntigmaLabs](https://github.com/AntigmaLabs)*
## llama.cpp quantization
Using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5223">b5223</a> for quantization.
Original model: https://huggingface.co/unsloth/Devstral-Small-2505
Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or with any other llama.cpp-based project.
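For example, to serve one of the quants over an OpenAI-compatible HTTP API with llama.cpp's bundled server (file name taken from the table below):

```
llama-server -m devstral-small-2505-q4_k_m.gguf --port 8080
```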
## Prompt format
```
<s>[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT][INST]{prompt}[/INST]
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split |
| -------- | ---------- | --------- | ----- |
| [devstral-small-2505-q2_k.gguf](https://huggingface.co/Antigma/Devstral-Small-2505-GGUF/blob/main/devstral-small-2505-q2_k.gguf) | Q2_K | 8.28 GB | False |
| [devstral-small-2505-q3_k_l.gguf](https://huggingface.co/Antigma/Devstral-Small-2505-GGUF/blob/main/devstral-small-2505-q3_k_l.gguf) | Q3_K_L | 11.55 GB | False |
| [devstral-small-2505-q6_k.gguf](https://huggingface.co/Antigma/Devstral-Small-2505-GGUF/blob/main/devstral-small-2505-q6_k.gguf) | Q6_K | 18.02 GB | False |
| [devstral-small-2505-q4_k_m.gguf](https://huggingface.co/Antigma/Devstral-Small-2505-GGUF/blob/main/devstral-small-2505-q4_k_m.gguf) | Q4_K_M | 13.35 GB | False |
| [devstral-small-2505-q5_k_m.gguf](https://huggingface.co/Antigma/Devstral-Small-2505-GGUF/blob/main/devstral-small-2505-q5_k_m.gguf) | Q5_K_M | 15.61 GB | False |
| [devstral-small-2505-q8_0.gguf](https://huggingface.co/Antigma/Devstral-Small-2505-GGUF/blob/main/devstral-small-2505-q8_0.gguf) | Q8_0 | 23.33 GB | False |
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download Antigma/Devstral-Small-2505-GGUF --include "devstral-small-2505-q2_k.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download Antigma/Devstral-Small-2505-GGUF --include "devstral-small-2505-q2_k.gguf/*" --local-dir ./
```
You can either specify a new local-dir (e.g. `Antigma_Devstral-Small-2505-GGUF`) or download everything in place (`./`).
</details>
|
dimasik2987/e5097e8f-ab15-4024-9d71-9300ae6f67cd | dimasik2987 | 2025-05-29T22:05:39Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:quantized:unsloth/SmolLM2-1.7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-29T21:33:05Z | ---
base_model: unsloth/SmolLM2-1.7B
library_name: transformers
model_name: e5097e8f-ab15-4024-9d71-9300ae6f67cd
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for e5097e8f-ab15-4024-9d71-9300ae6f67cd
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dimasik2987/e5097e8f-ab15-4024-9d71-9300ae6f67cd", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/nizl4x3w)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
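As a point of reference, here is a minimal DPO training sketch with TRL. The preference dataset, hyperparameters, and output directory are illustrative placeholders, not the settings used for this model:

```python
# Sketch of the DPO training loop this card describes; dataset and settings are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "unsloth/SmolLM2-1.7B"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Any preference dataset with "prompt"/"chosen"/"rejected" columns works here.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(output_dir="smollm2-dpo", beta=0.1, per_device_train_batch_size=2)
trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```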
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Ualasse6931/Ia | Ualasse6931 | 2025-05-29T21:31:08Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-29T21:31:07Z | ---
license: apache-2.0
---
|
morturr/Mistral-7B-v0.1-amazon-2025-05-29 | morturr | 2025-05-29T21:29:27Z | 2 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-05-28T22:52:25Z | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-amazon-2025-05-29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-amazon-2025-05-29
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
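Since this is a PEFT (LoRA) adapter for Mistral-7B, a typical way to use it is to load it on top of the base model. A minimal sketch (repo IDs from this page; the prompt is a placeholder):

```python
# Load the LoRA adapter on top of the Mistral-7B base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "morturr/Mistral-7B-v0.1-amazon-2025-05-29"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Tell me a joke about online shopping:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```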
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
morturr/Mistral-7B-v0.1-dadjokes-2025-05-29 | morturr | 2025-05-29T21:16:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-05-28T22:21:01Z | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-dadjokes-2025-05-29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-dadjokes-2025-05-29
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
DoganK01/Qwen3-14B-unsloth-bnb-4bit-raft-ft | DoganK01 | 2025-05-29T20:54:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T20:54:31Z | ---
base_model: unsloth/Qwen3-14B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** DoganK01
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-14B-unsloth-bnb-4bit
This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
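A minimal inference sketch with Unsloth's loader (the sequence length and 4-bit loading parameters are typical defaults, not settings documented on this card):

```python
# Hypothetical loading sketch with Unsloth; parameters are illustrative.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="DoganK01/Qwen3-14B-unsloth-bnb-4bit-raft-ft",  # repo ID from this page
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable fast generation

inputs = tokenizer("What is retrieval-augmented fine-tuning?", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```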
|
rtl-llm/qwen2.5coder-7b-origen-chisel-vhdl-verilog-truncate-interleave-len1024 | rtl-llm | 2025-05-29T20:52:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T20:49:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alimoh02/vit-base-food101 | alimoh02 | 2025-05-29T20:51:41Z | 115 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"food",
"transfer-learning",
"vision-transformer",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-04-27T17:55:21Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- food
- transfer-learning
- vision-transformer
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-food101
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-food101
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the ethz/food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7395
- Accuracy: 0.8017
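A minimal inference sketch (checkpoint ID from this page; the input path is a placeholder for any local food photo):

```python
# Classify a food photo with the fine-tuned ViT checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="alimoh02/vit-base-food101")
print(classifier("pizza.jpg"))  # replace with a path to your own image
```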
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 2.5327 | 0.1320 | 500 | 2.3914 | 0.5946 |
| 1.5713 | 0.2640 | 1000 | 1.5558 | 0.6978 |
| 1.2869 | 0.3960 | 1500 | 1.2575 | 0.7271 |
| 1.1479 | 0.5280 | 2000 | 1.1093 | 0.7476 |
| 1.0838 | 0.6600 | 2500 | 1.0286 | 0.7571 |
| 0.9623 | 0.7920 | 3000 | 0.9798 | 0.7641 |
| 0.9855 | 0.9240 | 3500 | 0.9395 | 0.7670 |
| 0.9263 | 1.0560 | 4000 | 0.9113 | 0.7723 |
| 0.8691 | 1.1880 | 4500 | 0.8844 | 0.7782 |
| 0.8025 | 1.3200 | 5000 | 0.8694 | 0.7768 |
| 0.7783 | 1.4520 | 5500 | 0.8574 | 0.7820 |
| 0.7774 | 1.5839 | 6000 | 0.8457 | 0.7799 |
| 0.7716 | 1.7159 | 6500 | 0.8309 | 0.7871 |
| 0.8445 | 1.8479 | 7000 | 0.8230 | 0.7868 |
| 0.8214 | 1.9799 | 7500 | 0.8107 | 0.7902 |
| 0.7226 | 2.1119 | 8000 | 0.8077 | 0.7897 |
| 0.7712 | 2.2439 | 8500 | 0.8015 | 0.7914 |
| 0.7306 | 2.3759 | 9000 | 0.7970 | 0.7889 |
| 0.6829 | 2.5079 | 9500 | 0.7919 | 0.7912 |
| 0.7593 | 2.6399 | 10000 | 0.7883 | 0.7901 |
| 0.6856 | 2.7719 | 10500 | 0.7802 | 0.7943 |
| 0.7156 | 2.9039 | 11000 | 0.7765 | 0.7976 |
| 0.6688 | 3.0359 | 11500 | 0.7735 | 0.7978 |
| 0.6245 | 3.1679 | 12000 | 0.7711 | 0.7972 |
| 0.668 | 3.2999 | 12500 | 0.7679 | 0.7989 |
| 0.6732 | 3.4319 | 13000 | 0.7657 | 0.7985 |
| 0.686 | 3.5639 | 13500 | 0.7645 | 0.7982 |
| 0.7121 | 3.6959 | 14000 | 0.7612 | 0.7984 |
| 0.6513 | 3.8279 | 14500 | 0.7599 | 0.7993 |
| 0.6963 | 3.9599 | 15000 | 0.7585 | 0.7993 |
| 0.7219 | 4.0919 | 15500 | 0.7554 | 0.7999 |
| 0.6253 | 4.2239 | 16000 | 0.7526 | 0.8016 |
| 0.6278 | 4.3559 | 16500 | 0.7504 | 0.8026 |
| 0.6605 | 4.4879 | 17000 | 0.7502 | 0.8028 |
| 0.6447 | 4.6199 | 17500 | 0.7493 | 0.8028 |
| 0.6469 | 4.7518 | 18000 | 0.7463 | 0.8040 |
| 0.6745 | 4.8838 | 18500 | 0.7462 | 0.8028 |
| 0.5882 | 5.0158 | 19000 | 0.7463 | 0.7995 |
| 0.6241 | 5.1478 | 19500 | 0.7428 | 0.8046 |
| 0.62 | 5.2798 | 20000 | 0.7439 | 0.8013 |
| 0.6435 | 5.4118 | 20500 | 0.7422 | 0.8018 |
| 0.6273 | 5.5438 | 21000 | 0.7418 | 0.8030 |
| 0.623 | 5.6758 | 21500 | 0.7415 | 0.8050 |
| 0.6181 | 5.8078 | 22000 | 0.7385 | 0.8055 |
| 0.6382 | 5.9398 | 22500 | 0.7388 | 0.8071 |
| 0.587 | 6.0718 | 23000 | 0.7379 | 0.8058 |
| 0.603 | 6.2038 | 23500 | 0.7374 | 0.8038 |
| 0.6334 | 6.3358 | 24000 | 0.7366 | 0.8054 |
| 0.613 | 6.4678 | 24500 | 0.7364 | 0.8048 |
| 0.5917 | 6.5998 | 25000 | 0.7355 | 0.8051 |
| 0.6167 | 6.7318 | 25500 | 0.7352 | 0.8059 |
| 0.6121 | 6.8638 | 26000 | 0.7347 | 0.8066 |
| 0.6133 | 6.9958 | 26500 | 0.7342 | 0.8059 |
| 0.6304 | 7.1278 | 27000 | 0.7338 | 0.8057 |
| 0.6041 | 7.2598 | 27500 | 0.7342 | 0.8063 |
| 0.6333 | 7.3918 | 28000 | 0.7334 | 0.8059 |
| 0.6234 | 7.5238 | 28500 | 0.7335 | 0.8061 |
| 0.5961 | 7.6558 | 29000 | 0.7334 | 0.8073 |
| 0.61 | 7.7878 | 29500 | 0.7333 | 0.8070 |
| 0.6586 | 7.9197 | 30000 | 0.7331 | 0.8070 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0
- Datasets 3.4.1
- Tokenizers 0.21.1
|
DevQuasar/tiiuae.Falcon-H1-1.5B-Instruct-GGUF | DevQuasar | 2025-05-29T20:48:39Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:tiiuae/Falcon-H1-1.5B-Instruct",
"base_model:quantized:tiiuae/Falcon-H1-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T20:39:19Z | ---
base_model:
- tiiuae/Falcon-H1-1.5B-Instruct
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [tiiuae/Falcon-H1-1.5B-Instruct](https://huggingface.co/tiiuae/Falcon-H1-1.5B-Instruct)
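These files target llama.cpp-compatible runtimes. A typical invocation would look like the following, assuming a llama.cpp build with Falcon-H1 support; the file name is hypothetical, so substitute one of the quants actually in this repo:

```
llama-cli -m tiiuae.Falcon-H1-1.5B-Instruct.Q4_K_M.gguf -p "Hello" -n 64
```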
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
rtl-llm/qwen2.5coder-7b-origen-verilog-vhdl-vhdl-pymtl-chisel-truncate | rtl-llm | 2025-05-29T20:46:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T20:43:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmb9qztax0hfg1b1yperzooyx_cmb9snttm0i0p1b1y71ay3b5o | BootesVoid | 2025-05-29T20:41:13Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-29T20:41:12Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: VANESSA
---
# Cmb9Qztax0Hfg1B1Yperzooyx_Cmb9Snttm0I0P1B1Y71Ay3B5O
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `VANESSA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "VANESSA",
"lora_weights": "https://huggingface.co/BootesVoid/cmb9qztax0hfg1b1yperzooyx_cmb9snttm0i0p1b1y71ay3b5o/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb9qztax0hfg1b1yperzooyx_cmb9snttm0i0p1b1y71ay3b5o', weight_name='lora.safetensors')
image = pipeline('VANESSA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb9qztax0hfg1b1yperzooyx_cmb9snttm0i0p1b1y71ay3b5o/discussions) to add images that show off what you’ve made with this LoRA.
|
Sagicc/qwen3-8b-tokenizer | Sagicc | 2025-05-29T20:38:57Z | 0 | 0 | null | [
"qwen3",
"license:apache-2.0",
"region:us"
] | null | 2025-05-29T20:08:03Z | ---
license: apache-2.0
---
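This repo appears to contain only a tokenizer for Qwen3-8B (inferred from the repo name); a minimal loading sketch:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Sagicc/qwen3-8b-tokenizer")
print(tokenizer("Hello, world!").input_ids)
```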
|
winnieyangwannan/Llama-3.1-8B-Instruct_mlp-down_negative-addition_last_layer_4_2_song_ratio_3_epoch_49 | winnieyangwannan | 2025-05-29T20:38:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T20:36:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
winnieyangwannan/Llama-3.1-8B-Instruct_mlp-down_negative-addition_last_layer_24_2_song_ratio_3_epoch_39 | winnieyangwannan | 2025-05-29T20:36:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T20:34:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DevQuasar/tiiuae.Falcon-H1-0.5B-Base-GGUF | DevQuasar | 2025-05-29T20:34:11Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:tiiuae/Falcon-H1-0.5B-Base",
"base_model:quantized:tiiuae/Falcon-H1-0.5B-Base",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T20:29:48Z | ---
base_model:
- tiiuae/Falcon-H1-0.5B-Base
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [tiiuae/Falcon-H1-0.5B-Base](https://huggingface.co/tiiuae/Falcon-H1-0.5B-Base)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
kenchenxingyu/sealion-8B-lora-stance-sgmy_ACCOP_APATAP2025_v3 | kenchenxingyu | 2025-05-29T20:32:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T20:32:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
winnieyangwannan/Llama-3.1-8B-Instruct_mlp-down_negative-addition_last_layer_6_2_song_ratio_3_epoch_19 | winnieyangwannan | 2025-05-29T20:31:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T20:29:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RicardoQi/whisper_ATC_ru | RicardoQi | 2025-05-29T20:29:28Z | 74 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-27T10:58:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
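In the absence of card details, a minimal sketch assuming standard Whisper ASR usage; the audio file name is a hypothetical 16 kHz recording, and the Russian ATC domain is inferred from the repo name only.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="RicardoQi/whisper_ATC_ru",
)

# "atc_sample.wav" is an illustrative placeholder, not a file shipped with the model.
result = asr("atc_sample.wav")
print(result["text"])
```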
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
winnieyangwannan/Llama-3.1-8B-Instruct_mlp-down_negative-addition_last_layer_26_2_song_ratio_3_epoch_9 | winnieyangwannan | 2025-05-29T20:29:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T20:27:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
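As a hedged sketch only (the card does not document usage), assuming this is a standard causal-LM checkpoint loadable with `AutoModelForCausalLM`; the prompt is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "winnieyangwannan/Llama-3.1-8B-Instruct_mlp-down_negative-addition_last_layer_26_2_song_ratio_3_epoch_9"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build a chat prompt with the tokenizer's own template.
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```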
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DevQuasar/tiiuae.Falcon-H1-1.5B-Deep-Base-GGUF | DevQuasar | 2025-05-29T20:25:28Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:tiiuae/Falcon-H1-1.5B-Deep-Base",
"base_model:quantized:tiiuae/Falcon-H1-1.5B-Deep-Base",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T20:15:31Z | ---
base_model:
- tiiuae/Falcon-H1-1.5B-Deep-Base
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [tiiuae/Falcon-H1-1.5B-Deep-Base](https://huggingface.co/tiiuae/Falcon-H1-1.5B-Deep-Base)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
Ryankwon0916/qwen2-2b-instruct-vqa-rad | Ryankwon0916 | 2025-05-29T20:10:16Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-20T21:38:45Z | ---
base_model: Qwen/Qwen2-VL-2B-Instruct
library_name: transformers
model_name: qwen2-2b-instruct-vqa-rad
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-2b-instruct-vqa-rad
This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Ryankwon0916/qwen2-2b-instruct-vqa-rad", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
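
The snippet above exercises the text-only path; since the model was fine-tuned on VQA-RAD, an image-plus-question call is the more representative use. The following is a hedged sketch assuming the repo holds full Qwen2-VL weights; the image path and question are illustrative assumptions.

```python
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Ryankwon0916/qwen2-2b-instruct-vqa-rad"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")

image = Image.open("chest_xray.png")  # hypothetical radiology image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Is there evidence of pleural effusion?"},
]}]
text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(processor.batch_decode(output[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```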
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ryankwon03-university-of-michigan/huggingface/runs/egv857fq)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.0
- Transformers: 4.50.0
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
pytorch/Qwen3-32B-float8dq | pytorch | 2025-05-29T20:06:15Z | 68 | 0 | transformers | [
"transformers",
"pytorch",
"qwen3",
"text-generation",
"torchao",
"code",
"math",
"chat",
"conversational",
"multilingual",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-07T21:45:44Z | ---
library_name: transformers
tags:
- torchao
- code
- math
- chat
license: apache-2.0
language:
- multilingual
base_model:
- Qwen/Qwen3-32B
pipeline_tag: text-generation
---
[Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) model quantized with [torchao](https://huggingface.co/docs/transformers/main/en/quantization/torchao) float8 dynamic activation and float8 weight quantization (per-row granularity), by the PyTorch team. Use it directly, or serve it with [vLLM](https://docs.vllm.ai/en/latest/) for a 47% VRAM reduction, around a 1.5x speedup, and little to no accuracy impact on H100.
# Inference with vLLM
```Shell
# Server
VLLM_DISABLE_COMPILE_CACHE=1 vllm serve pytorch/Qwen3-32B-float8dq --tokenizer Qwen/Qwen3-32B -O3
```
```Shell
# Client
curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "pytorch/Qwen3-32B-float8dq",
"messages": [
{"role": "user", "content": "Give me a short introduction to large language models."}
],
"temperature": 0.6,
"top_p": 0.95,
"top_k": 20,
"max_tokens": 32768
}'
```
# Inference with transformers
```Py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "pytorch/Qwen3-32B-float8dq"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
# Quantization Recipe
Install the required packages:
```Shell
pip install git+https://github.com/huggingface/transformers@main
pip install torchao
pip install torch
pip install accelerate
```
Use the following code to get the float8 model using torchao library:
```Py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
model_id = "Qwen/Qwen3-32B"
from torchao.quantization import Float8DynamicActivationFloat8WeightConfig, PerRow
quant_config = Float8DynamicActivationFloat8WeightConfig(granularity=PerRow())
quantization_config = TorchAoConfig(quant_type=quant_config)
quantized_model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.bfloat16,
quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
Optionally, upload to your HF hub
```Py
USER_ID = "YOUR_USER_ID"
MODEL_NAME = model_id.split("/")[-1]
save_to = f"{USER_ID}/{MODEL_NAME}-float8dq"
quantized_model.push_to_hub(save_to, safe_serialization=False)
tokenizer.push_to_hub(save_to)
```
# Model Quality
We rely on [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate the quality of the quantized model.
| Benchmark                        | Qwen3-32B      | Qwen3-32B-float8dq        |
|----------------------------------|----------------|---------------------------|
| **General** | | |
| mmlu | 80.71 | 80.67 |
| bbh | 37.49 | 38.01 |
| **Multilingual** | | |
| mgsm_en_cot_es | 58.4 | 52.0 |
| **Math** | | |
| gpqa_main_zeroshot | 41.96 | 42.63 |
| **Overall** | 54.64 | 53.33 |
<details>
<summary> Reproduce Model Quality Results </summary>
You need to install lm-eval from source:
https://github.com/EleutherAI/lm-evaluation-harness#install
## baseline
```Shell
lm_eval --model hf --model_args pretrained=Qwen/Qwen3-32B --tasks mmlu --device cuda:0 --batch_size 8
```
## float8 dynamic quantization (float8dq)
```Shell
export MODEL=pytorch/Qwen3-32B-float8dq
# or
# export MODEL=Qwen/Qwen3-32B
lm_eval --model hf --model_args pretrained=$MODEL --tasks mmlu --device cuda:0 --batch_size 8
```
</details>
# Memory Usage
| Memory (tested on H100)          | Qwen3-32B      | Qwen3-32B-float8dq            |
|----------------------------------|----------------|-------------------------------|
| Peak Memory | 65.72 GB | 34.54 GB (47.44% reduction) |
<details>
<summary> Reproduce Peak Memory Usage Results </summary>
Code
```Py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-32B" # pytorch/Qwen3-32B-float8dq
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
torch.cuda.reset_peak_memory_stats()
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
mem = torch.cuda.max_memory_reserved() / 1e9
print(f"Peak Memory Usage: {mem:.02f} GB")
```
</details>
# Model Performance
| Benchmark (Tested on H100)       | Qwen3-32B      | Qwen3-32B-float8dq            |
|----------------------------------|----------------|-------------------------------|
| latency (batch_size=1) | 9.1s | 5.77s (1.58x speedup) |
| latency (batch_size=128) | 12.45s | 8.40s (1.48x speedup) |
<details>
<summary> Reproduce latency benchmarks </summary>
**1. Setup**
```Shell
git clone git@github.com:vllm-project/vllm.git
cd vllm
VLLM_USE_PRECOMPILED=1 pip install --editable .
```
**2. Latency benchmarking**
```Shell
export MODEL=Qwen/Qwen3-32B # or pytorch/Qwen3-32B-float8dq
VLLM_DISABLE_COMPILE_CACHE=1 python benchmarks/benchmark_latency.py --input-len 256 --output-len 256 --model $MODEL --batch-size 1
```
</details>
# Disclaimer
PyTorch has not performed safety evaluations or red teamed the quantized models. Performance characteristics, outputs, and behaviors may differ from the original models. Users are solely responsible for selecting appropriate use cases, evaluating and mitigating for accuracy, safety, and fairness, ensuring security, and complying with all applicable laws and regulations.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the licenses the models are released under, including any limitations of liability or disclaimers of warranties provided therein. |
mohsin-riad/roberta-base-Disease-NER | mohsin-riad | 2025-05-29T20:02:58Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-02-08T07:31:03Z | ---
license: mit
tags:
- generated_from_trainer
widget:
- text: "Tachycardia is an increased heart rate for any reason. It can be a normal rise in heart rate caused by exercise or a stress response. Sinus tachycardia is considered a symptom, not a disease."
example_title: "Tachycardia"
- text: "Tuberculosis generally damages the lungs, but it can also impair other parts of the body, such as the brain and spine. Typical signs of active tuberculosis include a chronic cough with blood-containing mucus, fever, night sweats, and weight loss. Tuberculosis damages the lungs, whereas malaria can harm both kidneys by impairing the liver."
example_title: "Tuberculosis"
- text: "Cholera is a bacterial disease usually spread through contaminated water. A bacterium called Vibrio cholerae causes cholera infection. Symptoms of cholera infection can include: Diarrhea, Nausea, vomiting, Dehydration. Risk factors for cholera include: Poor sanitary conditions, Household exposure, Type O blood, Raw or undercooked shellfish."
example_title: "Cholera"
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-Disease-NER
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-Disease-NER
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7496
- Precision: 0.5450
- Recall: 0.6759
- F1: 0.6035
- Accuracy: 0.8198
## Model description
More information needed
## Intended uses & limitations
More information needed
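Not part of the original card, but as a usage sketch the model should be queryable through the standard token-classification pipeline; the aggregation strategy below is an assumption.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mohsin-riad/roberta-base-Disease-NER",
    aggregation_strategy="simple",  # assumption: merge subword pieces into entity spans
)

# Example sentence reused from the card's own widget.
text = "Cholera is a bacterial disease usually spread through contaminated water."
for entity in ner(text):
    print(entity["word"], entity["entity_group"], round(float(entity["score"]), 3))
```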
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 180 | 0.7775 | 0.3892 | 0.5483 | 0.4552 | 0.7676 |
| No log | 2.0 | 360 | 0.5731 | 0.4717 | 0.6003 | 0.5283 | 0.8152 |
| 0.8746 | 3.0 | 540 | 0.5629 | 0.4745 | 0.6515 | 0.5491 | 0.8164 |
| 0.8746 | 4.0 | 720 | 0.5848 | 0.4603 | 0.6744 | 0.5472 | 0.8106 |
| 0.8746 | 5.0 | 900 | 0.5489 | 0.5212 | 0.6686 | 0.5858 | 0.8239 |
| 0.4396 | 6.0 | 1080 | 0.5524 | 0.5123 | 0.6804 | 0.5845 | 0.8195 |
| 0.4396 | 7.0 | 1260 | 0.5550 | 0.5001 | 0.6842 | 0.5778 | 0.8174 |
| 0.4396 | 8.0 | 1440 | 0.5787 | 0.4982 | 0.6882 | 0.5780 | 0.8128 |
| 0.3302 | 9.0 | 1620 | 0.5824 | 0.5104 | 0.6939 | 0.5882 | 0.8154 |
| 0.3302 | 10.0 | 1800 | 0.5872 | 0.5295 | 0.6781 | 0.5947 | 0.8211 |
| 0.3302 | 11.0 | 1980 | 0.6047 | 0.5261 | 0.6867 | 0.5957 | 0.8210 |
| 0.2564 | 12.0 | 2160 | 0.6151 | 0.5357 | 0.6739 | 0.5969 | 0.8220 |
| 0.2564 | 13.0 | 2340 | 0.6560 | 0.5204 | 0.6784 | 0.5890 | 0.8172 |
| 0.204 | 14.0 | 2520 | 0.6866 | 0.5162 | 0.6919 | 0.5913 | 0.8155 |
| 0.204 | 15.0 | 2700 | 0.6994 | 0.5192 | 0.6887 | 0.5921 | 0.8145 |
| 0.204 | 16.0 | 2880 | 0.6904 | 0.5309 | 0.6764 | 0.5949 | 0.8199 |
| 0.1655 | 17.0 | 3060 | 0.7752 | 0.4925 | 0.6919 | 0.5754 | 0.8059 |
| 0.1655 | 18.0 | 3240 | 0.7464 | 0.5182 | 0.6832 | 0.5893 | 0.8152 |
| 0.1655 | 19.0 | 3420 | 0.7739 | 0.5242 | 0.6784 | 0.5914 | 0.8157 |
| 0.1335 | 20.0 | 3600 | 0.7496 | 0.5450 | 0.6759 | 0.6035 | 0.8198 |
| 0.1335 | 21.0 | 3780 | 0.7835 | 0.5296 | 0.6759 | 0.5939 | 0.8141 |
| 0.1335 | 22.0 | 3960 | 0.8174 | 0.5080 | 0.6869 | 0.5841 | 0.8092 |
| 0.1155 | 23.0 | 4140 | 0.8307 | 0.5336 | 0.6746 | 0.5959 | 0.8153 |
| 0.1155 | 24.0 | 4320 | 0.8457 | 0.5253 | 0.6832 | 0.5939 | 0.8126 |
| 0.0959 | 25.0 | 4500 | 0.8473 | 0.5250 | 0.6829 | 0.5936 | 0.8138 |
| 0.0959 | 26.0 | 4680 | 0.8971 | 0.5131 | 0.6837 | 0.5862 | 0.8069 |
| 0.0959 | 27.0 | 4860 | 0.8770 | 0.5229 | 0.6849 | 0.5930 | 0.8161 |
| 0.0814 | 28.0 | 5040 | 0.9317 | 0.5012 | 0.6894 | 0.5804 | 0.8083 |
| 0.0814 | 29.0 | 5220 | 0.9051 | 0.5288 | 0.6776 | 0.5940 | 0.8141 |
| 0.0814 | 30.0 | 5400 | 0.9387 | 0.5184 | 0.6839 | 0.5897 | 0.8106 |
| 0.0706 | 31.0 | 5580 | 0.9402 | 0.5261 | 0.6897 | 0.5969 | 0.8134 |
| 0.0706 | 32.0 | 5760 | 0.9603 | 0.5121 | 0.6839 | 0.5857 | 0.8104 |
| 0.0706 | 33.0 | 5940 | 0.9535 | 0.5255 | 0.6769 | 0.5917 | 0.8145 |
| 0.062 | 34.0 | 6120 | 0.9675 | 0.5250 | 0.6844 | 0.5942 | 0.8142 |
| 0.062 | 35.0 | 6300 | 0.9938 | 0.5249 | 0.6754 | 0.5907 | 0.8128 |
| 0.062 | 36.0 | 6480 | 0.9890 | 0.5222 | 0.6796 | 0.5906 | 0.8124 |
| 0.0544 | 37.0 | 6660 | 1.0106 | 0.5244 | 0.6794 | 0.5919 | 0.8135 |
| 0.0544 | 38.0 | 6840 | 1.0285 | 0.5230 | 0.6839 | 0.5928 | 0.8109 |
| 0.0489 | 39.0 | 7020 | 1.0253 | 0.5219 | 0.6809 | 0.5909 | 0.8137 |
| 0.0489 | 40.0 | 7200 | 1.0263 | 0.5229 | 0.6806 | 0.5914 | 0.8124 |
| 0.0489 | 41.0 | 7380 | 1.0511 | 0.5205 | 0.6849 | 0.5915 | 0.8113 |
| 0.0447 | 42.0 | 7560 | 1.0563 | 0.5145 | 0.6804 | 0.5859 | 0.8110 |
| 0.0447 | 43.0 | 7740 | 1.0521 | 0.5210 | 0.6814 | 0.5905 | 0.8128 |
| 0.0447 | 44.0 | 7920 | 1.0581 | 0.5220 | 0.6799 | 0.5906 | 0.8115 |
| 0.0411 | 45.0 | 8100 | 1.0597 | 0.5221 | 0.6816 | 0.5913 | 0.8127 |
| 0.0411 | 46.0 | 8280 | 1.0770 | 0.5216 | 0.6844 | 0.5920 | 0.8114 |
| 0.0411 | 47.0 | 8460 | 1.0689 | 0.5275 | 0.6847 | 0.5959 | 0.8128 |
| 0.039 | 48.0 | 8640 | 1.0665 | 0.5284 | 0.6821 | 0.5955 | 0.8135 |
| 0.039 | 49.0 | 8820 | 1.0715 | 0.5271 | 0.6829 | 0.5950 | 0.8128 |
| 0.0374 | 50.0 | 9000 | 1.0716 | 0.5273 | 0.6827 | 0.5950 | 0.8130 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
BootesVoid/cmb9r7m6t0hik1b1yzn6a4vfx_cmb9roxev0hpj1b1ygyg7v2up | BootesVoid | 2025-05-29T19:59:39Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-29T19:59:20Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: YAMINI
---
# Cmb9R7M6T0Hik1B1Yzn6A4Vfx_Cmb9Roxev0Hpj1B1Ygyg7V2Up
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `YAMINI` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "YAMINI",
"lora_weights": "https://huggingface.co/BootesVoid/cmb9r7m6t0hik1b1yzn6a4vfx_cmb9roxev0hpj1b1ygyg7v2up/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb9r7m6t0hik1b1yzn6a4vfx_cmb9roxev0hpj1b1ygyg7v2up', weight_name='lora.safetensors')
image = pipeline('YAMINI').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb9r7m6t0hik1b1yzn6a4vfx_cmb9roxev0hpj1b1ygyg7v2up/discussions) to add images that show off what you’ve made with this LoRA.
|
zues0102/bert-base-multilingual-cased | zues0102 | 2025-05-29T19:50:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-29T19:49:48Z | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0125
- Accuracy: 0.9985
## Model description
More information needed
## Intended uses & limitations
More information needed
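Pending more details, a minimal sketch assuming a standard sequence-classification head; the returned label names are whatever this checkpoint's config defines, which the card does not document.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="zues0102/bert-base-multilingual-cased")
print(classifier("Example sentence to classify."))  # e.g. [{'label': ..., 'score': ...}]
```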
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0527 | 1.0 | 500 | 0.0274 | 0.9958 |
| 0.0139 | 2.0 | 1000 | 0.0212 | 0.997 |
| 0.0078 | 3.0 | 1500 | 0.0203 | 0.9962 |
| 0.0047 | 4.0 | 2000 | 0.0198 | 0.9972 |
| 0.0026 | 5.0 | 2500 | 0.0156 | 0.9975 |
| 0.0018 | 6.0 | 3000 | 0.0166 | 0.9978 |
| 0.001 | 7.0 | 3500 | 0.0188 | 0.9972 |
| 0.0011 | 8.0 | 4000 | 0.0169 | 0.9975 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
chkla/qwen-polarity-lora | chkla | 2025-05-29T19:47:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | null | 2025-05-29T19:46:17Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
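Until the card is completed, a hedged sketch assuming this repo hosts LoRA adapter weights for the listed base model; the polarity-classification prompt is an illustrative assumption based on the repo name.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, "chkla/qwen-polarity-lora")

prompt = "Classify the polarity of the following sentence: 'The debate was heated.'"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```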
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
kamysheva/lakoon_detection_and_classification | kamysheva | 2025-05-29T19:47:28Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:finetune:DeepPavlov/rubert-base-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-05-29T14:48:36Z | ---
library_name: transformers
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: lakoon_detection_and_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lakoon_detection_and_classification
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3925
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9024
## Model description
More information needed
## Intended uses & limitations
More information needed
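As a usage sketch only (not from the original card), the checkpoint should load through the standard token-classification pipeline; the Russian example sentence is an illustrative assumption.

```python
from transformers import pipeline

tagger = pipeline("token-classification", model="kamysheva/lakoon_detection_and_classification")
print(tagger("Пример предложения для разметки лакун."))
```

Note that the evaluation table below reports zero precision and recall, so predictions on the positive classes may be sparse.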
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 3 | 0.5621 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 2.0 | 6 | 0.5531 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 3.0 | 9 | 0.5385 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 4.0 | 12 | 0.5115 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 5.0 | 15 | 0.4984 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 6.0 | 18 | 0.4821 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 7.0 | 21 | 0.4703 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 8.0 | 24 | 0.4602 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 9.0 | 27 | 0.4391 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 10.0 | 30 | 0.4144 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 11.0 | 33 | 0.3966 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 12.0 | 36 | 0.3888 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 13.0 | 39 | 0.3883 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 14.0 | 42 | 0.3918 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 15.0 | 45 | 0.3969 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 16.0 | 48 | 0.3991 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 17.0 | 51 | 0.3983 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 18.0 | 54 | 0.3959 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 19.0 | 57 | 0.3932 | 0.0 | 0.0 | 0.0 | 0.9024 |
| No log | 20.0 | 60 | 0.3925 | 0.0 | 0.0 | 0.0 | 0.9024 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
pclinc/2_HSEModel | pclinc | 2025-05-29T19:42:51Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T19:40:56Z | ---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pclinc
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
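
Since the card does not show usage, here is a hedged sketch with llama-cpp-python; the GGUF file-name glob is an assumption, as the card does not list the shipped quantizations.

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="pclinc/2_HSEModel",
    filename="*.gguf",  # hypothetical glob; substitute the actual quant file name
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```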
|
RicardoQi/t5_ATC_ru | RicardoQi | 2025-05-29T19:39:08Z | 75 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-27T11:02:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
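Until the card is filled in, a minimal sketch assuming standard T5 text-to-text usage; the Russian ATC-style input is inferred from the repo name and is purely illustrative.

```python
from transformers import pipeline

t2t = pipeline("text2text-generation", model="RicardoQi/t5_ATC_ru")
print(t2t("пример расшифровки радиообмена", max_new_tokens=64)[0]["generated_text"])
```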
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NingLab/GeLLMO-P4-Mistral | NingLab | 2025-05-29T19:37:08Z | 5 | 0 | null | [
"safetensors",
"chemistry",
"molecule optimization",
"text-generation",
"en",
"dataset:NingLab/MuMOInstruct",
"arxiv:2502.13398",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3",
"license:cc-by-nc-4.0",
"region:us"
] | text-generation | 2025-02-26T18:55:11Z | ---
license: cc-by-nc-4.0
datasets:
- NingLab/MuMOInstruct
language:
- en
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
pipeline_tag: text-generation
tags:
- chemistry
- molecule optimization
---
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/ninglab/GeLLMO
- **Paper:** https://arxiv.org/abs/2502.13398
## Usage
For instructions to run the model, please refer to our repository.
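Pending the repository's official instructions, a minimal hedged sketch assuming a standard Mistral-style causal-LM checkpoint; the optimization prompt is illustrative only, and the repository's documented prompt format should take precedence.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NingLab/GeLLMO-P4-Mistral"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Optimize the following molecule for higher solubility: CCO"  # illustrative only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```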
## Bias, Risks, and Limitations
While our models are designed for research and drug discovery applications,
they come with ethical and safety considerations:
1. **Potential for Misuse:** Although the model is not explicitly designed to generate toxic,
controlled, or harmful compounds, adversarial prompts or unintended biases in the pretrained model
may lead to the generation of undesirable molecules.
2. **Unintended Harmful Outputs:** The model does not inherently filter out molecules with high toxicity,
abuse potential, or environmental hazards. Users must implement additional safeguards to prevent misuse.
3. **Absence of Built-in Safety Mechanisms:** The model does not incorporate explicit regulatory or
safety filters (e.g., toxicity or compliance checks).
It is the responsibility of users to validate generated molecules for safety and ethical considerations.
We urge users to adopt best practices, including toxicity prediction pipelines,
ethical oversight, and responsible AI usage policies, to prevent harmful applications of this model.
## Citation
If you use the trained model checkpoints, datasets or other resources, please use the following citation:
```
@misc{dey2025gellmo,
title={$\mathtt{GeLLM^3O}$: Generalizing Large Language Models for Multi-property Molecule Optimization},
author={Vishal Dey and Xiao Hu and Xia Ning},
year={2025},
eprint={2502.13398},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2502.13398},
}
``` |
gbennani/MNLP_M2_RAG_model | gbennani | 2025-05-29T19:26:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T19:11:33Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-0.6B-Base
tags:
- generated_from_trainer
model-index:
- name: MNLP_M2_RAG_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MNLP_M2_RAG_model
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
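In lieu of documented usage, a hedged sketch assuming a standard causal-LM checkpoint; the RAG-style prompt with an inlined context passage is an assumption based on the model name.

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="gbennani/MNLP_M2_RAG_model", device_map="auto")
prompt = (
    "Context: The Rhine flows through Basel.\n"
    "Question: Which river flows through Basel?\n"
    "Answer:"
)
print(pipe(prompt, max_new_tokens=32)[0]["generated_text"])
```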
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
NingLab/GeLLMO-C-P10-Mistral | NingLab | 2025-05-29T19:25:16Z | 0 | 0 | null | [
"safetensors",
"chemistry",
"molecule optimization",
"text-generation",
"en",
"dataset:NingLab/C-MuMOInstruct",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3",
"license:cc-by-nc-4.0",
"region:us"
] | text-generation | 2025-05-28T22:03:35Z | ---
license: cc-by-nc-4.0
datasets:
- NingLab/C-MuMOInstruct
language:
- en
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
pipeline_tag: text-generation
tags:
- chemistry
- molecule optimization
---
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/ninglab/GeLLMO-C
- **Paper:**
## Usage
For instructions to run the model, please refer to our repository.
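Defer to the repository for the exact prompt format; the following is a hedged sketch only, assuming standard causal-LM generation with an illustrative instruction.

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="NingLab/GeLLMO-C-P10-Mistral", device_map="auto")
print(pipe("Suggest an edit to CCO that improves permeability.", max_new_tokens=64)[0]["generated_text"])
```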
## Bias, Risks, and Limitations
While our models are designed for research and drug discovery applications,
they come with ethical and safety considerations:
1. **Potential for Misuse:** Although the model is not explicitly designed to generate toxic,
controlled, or harmful compounds, adversarial prompts or unintended biases in the pretrained model
may lead to the generation of undesirable molecules.
2. **Unintended Harmful Outputs:** The model does not inherently filter out molecules with high toxicity,
abuse potential, or environmental hazards. Users must implement additional safeguards to prevent misuse.
3. **Absence of Built-in Safety Mechanisms:** The model does not incorporate explicit regulatory or
safety filters (e.g., toxicity or compliance checks).
It is the responsibility of users to validate generated molecules for safety and ethical considerations.
We urge users to adopt best practices, including toxicity prediction pipelines,
ethical oversight, and responsible AI usage policies, to prevent harmful applications of this model. |
castorini/cosdpr-distil-onnx | castorini | 2025-05-29T19:10:02Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2025-05-29T19:01:42Z | This model is the ONNX version of [castorini/cosdpr-distil](https://huggingface.co/castorini/cosdpr-distil).
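A heavily hedged usage sketch follows: the ONNX file name and the session's input names are assumptions (the card does not document them), and the tokenizer is taken from the original cosdpr-distil repo.

```python
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

onnx_path = hf_hub_download("castorini/cosdpr-distil-onnx", "model.onnx")  # hypothetical file name
tokenizer = AutoTokenizer.from_pretrained("castorini/cosdpr-distil")

session = ort.InferenceSession(onnx_path)
encoded = tokenizer("what is a dense retriever?", return_tensors="np")
input_names = {i.name for i in session.get_inputs()}
outputs = session.run(None, {k: v for k, v in encoded.items() if k in input_names})
print(outputs[0].shape)  # query embedding
```
|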
ricostaedeli/Meta-Llama-3.1-8B-Instruct_DPO_1-lora | ricostaedeli | 2025-05-29T19:05:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T14:03:58Z | ---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ricostaedeli
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
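
A hedged loading sketch, assuming this repo hosts LoRA adapters (as the "-lora" suffix suggests) for the listed 4-bit Unsloth base; quantized loading follows the base checkpoint's own config.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "ricostaedeli/Meta-Llama-3.1-8B-Instruct_DPO_1-lora")
```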
|
Jarbas/ovos-model2vec-intents-BERnaT-base | Jarbas | 2025-05-29T19:04:26Z | 0 | 0 | model2vec | [
"model2vec",
"safetensors",
"embeddings",
"static-embeddings",
"sentence-transformers",
"eu",
"dataset:Jarbas/ovos-intents-train-v1",
"base_model:Jarbas/m2v-256-BERnaT-base",
"base_model:finetune:Jarbas/m2v-256-BERnaT-base",
"license:mit",
"region:us"
] | null | 2025-05-29T19:02:05Z | ---
base_model:
- Jarbas/m2v-256-BERnaT-base
library_name: model2vec
license: mit
model_name: model_eu_m2v-256-BERnaT-base
tags:
- embeddings
- static-embeddings
- sentence-transformers
datasets:
- Jarbas/ovos-intents-train-v1
language:
- eu
---
# model_eu_m2v-256-BERnaT-base Model Card
This [Model2Vec](https://github.com/MinishLab/model2vec) model is a fine-tuned version of the [Jarbas/m2v-256-BERnaT-base](https://huggingface.co/Jarbas/m2v-256-BERnaT-base) Model2Vec model. It also includes a classifier head on top.
## Installation
Install model2vec using pip:
```
pip install model2vec[inference]
```
## Usage
Load this model using the `from_pretrained` method:
```python
from model2vec.inference import StaticModelPipeline
# Load a pretrained Model2Vec model
model = StaticModelPipeline.from_pretrained("Jarbas/ovos-model2vec-intents-BERnaT-base")
# Predict labels
predicted = model.predict(["Example sentence"])
```
## Additional Resources
- [Model2Vec Repo](https://github.com/MinishLab/model2vec)
- [Model2Vec Base Models](https://huggingface.co/collections/minishlab/model2vec-base-models-66fd9dd9b7c3b3c0f25ca90e)
- [Model2Vec Results](https://github.com/MinishLab/model2vec/tree/main/results)
- [Model2Vec Tutorials](https://github.com/MinishLab/model2vec/tree/main/tutorials)
- [Website](https://minishlab.github.io/)
## Library Authors
Model2Vec was developed by the [Minish Lab](https://github.com/MinishLab) team consisting of [Stephan Tulkens](https://github.com/stephantul) and [Thomas van Dongen](https://github.com/Pringled).
## Citation
Please cite the [Model2Vec repository](https://github.com/MinishLab/model2vec) if you use this model in your work.
```
@article{minishlab2024model2vec,
author = {Tulkens, Stephan and {van Dongen}, Thomas},
title = {Model2Vec: Fast State-of-the-Art Static Embeddings},
year = {2024},
url = {https://github.com/MinishLab/model2vec}
}
``` |
xabicasa/DeepSeek-R1-0528-Qwen3-8B-MLX-4bit | xabicasa | 2025-05-29T19:04:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2501.12948",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T18:25:32Z | ---
license: mit
library_name: transformers
---
# DeepSeek-R1-0528
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://arxiv.org/pdf/2501.12948"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
The DeepSeek R1 model has undergone a minor version upgrade, with the current version being DeepSeek-R1-0528. In the latest update, DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training. The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic. Its overall performance is now approaching that of leading models, such as O3 and Gemini 2.5 Pro.
<p align="center">
<img width="80%" src="figures/benchmark.png">
</p>
Compared to the previous version, the upgraded model shows significant improvements in handling complex reasoning tasks. For instance, in the AIME 2025 test, the model’s accuracy has increased from 70% in the previous version to 87.5% in the current version. This advancement stems from enhanced thinking depth during the reasoning process: in the AIME test set, the previous model used an average of 12K tokens per question, whereas the new version averages 23K tokens per question.
Beyond its improved reasoning capabilities, this version also offers a reduced hallucination rate, enhanced support for function calling, and a better experience for vibe coding.
## 2. Evaluation Results
### DeepSeek-R1-0528
For all our models, the maximum generation length is set to 64K tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 16 responses per query to estimate pass@1.
<div align="center">
| Category | Benchmark (Metric) | DeepSeek R1 | DeepSeek R1 0528 |
|----------|----------------------------------|-------------|------------------|
| General | MMLU-Redux (EM) | 92.9 | 93.4 |
| | MMLU-Pro (EM) | 84.0 | 85.0 |
| | GPQA-Diamond (Pass@1) | 71.5 | 81.0 |
| | SimpleQA (Correct) | 30.1 | 27.8 |
| | FRAMES (Acc.) | 82.5 | 83.0 |
| | Humanity's Last Exam (Pass@1) | 8.5 | 17.7 |
| Code | LiveCodeBench (2408-2505) (Pass@1) | 63.5 | 73.3 |
| | Codeforces-Div1 (Rating) | 1530 | 1930 |
| | SWE Verified (Resolved) | 49.2 | 57.6 |
| | Aider-Polyglot (Acc.) | 53.3 | 71.6 |
| Math | AIME 2024 (Pass@1) | 79.8 | 91.4 |
| | AIME 2025 (Pass@1) | 70.0 | 87.5 |
| | HMMT 2025 (Pass@1) | 41.7 | 79.4 |
| | CNMO 2024 (Pass@1) | 78.8 | 86.9 |
| Tools | BFCL_v3_MultiTurn (Acc) | - | 37.0 |
| | Tau-Bench (Pass@1) | - | 53.5 (Airline) / 63.9 (Retail) |
</div>
Note: We use the Agentless framework to evaluate model performance on SWE-Verified. We evaluate only text-only prompts from the HLE test set. GPT-4.1 is employed to play the user role in the Tau-Bench evaluation.
### DeepSeek-R1-0528-Qwen3-8B
Meanwhile, we distilled the chain-of-thought from DeepSeek-R1-0528 to post-train Qwen3 8B Base, obtaining DeepSeek-R1-0528-Qwen3-8B. This model achieves state-of-the-art (SOTA) performance among open-source models on the AIME 2024, surpassing Qwen3 8B by +10.0% and matching the performance of Qwen3-235B-thinking. We believe that the chain-of-thought from DeepSeek-R1-0528 will hold significant importance for both academic research on reasoning models and industrial development focused on small-scale models.
| | AIME 24 | AIME 25 | HMMT Feb 25 | GPQA Diamond | LiveCodeBench (2408-2505) |
|--------------------------------|---------|---------|-------------|--------------|---------------------------|
| Qwen3-235B-A22B | 85.7 | 81.5 | 62.5 | 71.1 | 66.5 |
| Qwen3-32B | 81.4 | 72.9 | - | 68.4 | - |
| Qwen3-8B | 76.0 | 67.3 | - | 62.0 | - |
| Phi-4-Reasoning-Plus-14B | 81.3 | 78.0 | 53.6 | 69.3 | - |
| Gemini-2.5-Flash-Thinking-0520 | 82.3 | 72.0 | 64.2 | 82.8 | 62.3 |
| o3-mini (medium) | 79.6 | 76.7 | 53.3 | 76.8 | 65.9 |
| DeepSeek-R1-0528-Qwen3-8B | 86.0 | 76.3 | 61.5 | 61.1 | 60.5 |
## 3. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website, [chat.deepseek.com](https://chat.deepseek.com/sign_in), by switching on the "DeepThink" button.
We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
## 4. How to Run Locally
Please visit [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1) repository for more information about running DeepSeek-R1-0528 locally.
Compared to previous versions of DeepSeek-R1, the usage recommendations for DeepSeek-R1-0528 have the following changes:
1. A system prompt is now supported.
2. It is no longer necessary to add "\<think\>\n" at the beginning of the output to force the model into its thinking pattern.
The model architecture of DeepSeek-R1-0528-Qwen3-8B is identical to that of Qwen3-8B, but it shares the same tokenizer configuration as DeepSeek-R1-0528. This model can be run in the same manner as Qwen3-8B, but it is essential to ensure that all configuration files are sourced from our repository rather than the original Qwen3 project.
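Since this repository specifically hosts an MLX 4-bit conversion, a hedged sketch for Apple silicon using the `mlx-lm` package (assuming this repo id; `mlx_lm.load` and `mlx_lm.generate` are its standard entry points) might look like:
```python
from mlx_lm import load, generate

# Load the 4-bit MLX weights and tokenizer from this repository.
model, tokenizer = load("xabicasa/DeepSeek-R1-0528-Qwen3-8B-MLX-4bit")

# Wrap the question in the model's chat format before generating.
messages = [{"role": "user", "content": "Please reason step by step: what is 17 * 24?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
```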
### System Prompt
In the official DeepSeek web/app, we use the same system prompt with a specific date.
```
该助手为DeepSeek-R1,由深度求索公司创造。
今天是{current date}。
```
For example,
```
该助手为DeepSeek-R1,由深度求索公司创造。
今天是2025年5月28日,星期一。
```
### Temperature
In our web and application environments, the temperature parameter $T_{model}$ is set to 0.6.
### Prompts for File Uploading and Web Search
For file uploading, please follow the template to create prompts, where {file_name}, {file_content} and {question} are arguments.
```
file_template = \
"""[file name]: {file_name}
[file content begin]
{file_content}
[file content end]
{question}"""
```
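For instance, a hedged Python sketch of filling this template (assuming plain `str.format` substitution; the file name and contents are placeholders):
```python
file_template = """[file name]: {file_name}
[file content begin]
{file_content}
[file content end]
{question}"""

# Placeholder arguments for illustration only.
prompt = file_template.format(
    file_name="notes.txt",
    file_content="DeepSeek-R1-0528 supports system prompts.",
    question="Summarize the file in one sentence.",
)
print(prompt)
```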
For Web Search, {search_results}, {cur_date}, and {question} are arguments.
For Chinese query, we use the prompt:
```
search_answer_zh_template = \
'''# 以下内容是基于用户发送的消息的搜索结果:
{search_results}
在我给你的搜索结果中,每个结果都是[webpage X begin]...[webpage X end]格式的,X代表每篇文章的数字索引。请在适当的情况下在句子末尾引用上下文。请按照引用编号[citation:X]的格式在答案中对应部分引用上下文。如果一句话源自多个上下文,请列出所有相关的引用编号,例如[citation:3][citation:5],切记不要将引用集中在最后返回引用编号,而是在答案对应部分列出。
在回答时,请注意以下几点:
- 今天是{cur_date}。
- 并非搜索结果的所有内容都与用户的问题密切相关,你需要结合问题,对搜索结果进行甄别、筛选。
- 对于列举类的问题(如列举所有航班信息),尽量将答案控制在10个要点以内,并告诉用户可以查看搜索来源、获得完整信息。优先提供信息完整、最相关的列举项;如非必要,不要主动告诉用户搜索结果未提供的内容。
- 对于创作类的问题(如写论文),请务必在正文的段落中引用对应的参考编号,例如[citation:3][citation:5],不能只在文章末尾引用。你需要解读并概括用户的题目要求,选择合适的格式,充分利用搜索结果并抽取重要信息,生成符合用户要求、极具思想深度、富有创造力与专业性的答案。你的创作篇幅需要尽可能延长,对于每一个要点的论述要推测用户的意图,给出尽可能多角度的回答要点,且务必信息量大、论述详尽。
- 如果回答很长,请尽量结构化、分段落总结。如果需要分点作答,尽量控制在5个点以内,并合并相关的内容。
- 对于客观类的问答,如果问题的答案非常简短,可以适当补充一到两句相关信息,以丰富内容。
- 你需要根据用户要求和回答内容选择合适、美观的回答格式,确保可读性强。
- 你的回答应该综合多个相关网页来回答,不能重复引用一个网页。
- 除非用户要求,否则你回答的语言需要和用户提问的语言保持一致。
# 用户消息为:
{question}'''
```
For English query, we use the prompt:
```
search_answer_en_template = \
'''# The following contents are the search results related to the user's message:
{search_results}
In the search results I provide to you, each result is formatted as [webpage X begin]...[webpage X end], where X represents the numerical index of each article. Please cite the context at the end of the relevant sentence when appropriate. Use the citation format [citation:X] in the corresponding part of your answer. If a sentence is derived from multiple contexts, list all relevant citation numbers, such as [citation:3][citation:5]. Be sure not to cluster all citations at the end; instead, include them in the corresponding parts of the answer.
When responding, please keep the following points in mind:
- Today is {cur_date}.
- Not all content in the search results is closely related to the user's question. You need to evaluate and filter the search results based on the question.
- For listing-type questions (e.g., listing all flight information), try to limit the answer to 10 key points and inform the user that they can refer to the search sources for complete information. Prioritize providing the most complete and relevant items in the list. Avoid mentioning content not provided in the search results unless necessary.
- For creative tasks (e.g., writing an essay), ensure that references are cited within the body of the text, such as [citation:3][citation:5], rather than only at the end of the text. You need to interpret and summarize the user's requirements, choose an appropriate format, fully utilize the search results, extract key information, and generate an answer that is insightful, creative, and professional. Extend the length of your response as much as possible, addressing each point in detail and from multiple perspectives, ensuring the content is rich and thorough.
- If the response is lengthy, structure it well and summarize it in paragraphs. If a point-by-point format is needed, try to limit it to 5 points and merge related content.
- For objective Q&A, if the answer is very brief, you may add one or two related sentences to enrich the content.
- Choose an appropriate and visually appealing format for your response based on the user's requirements and the content of the answer, ensuring strong readability.
- Your answer should synthesize information from multiple relevant webpages and avoid repeatedly citing the same webpage.
- Unless the user requests otherwise, your response should be in the same language as the user's question.
# The user's message is:
{question}'''
```
## 5. License
This code repository is licensed under [MIT License](LICENSE). The use of DeepSeek-R1 models is also subject to [MIT License](LICENSE). DeepSeek-R1 series (including Base and Chat) supports commercial use and distillation.
## 6. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
## 7. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
|
mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF | mradermacher | 2025-05-29T19:00:16Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mlabonne/gemma-3-4b-it-qat-abliterated",
"base_model:quantized:mlabonne/gemma-3-4b-it-qat-abliterated",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-29T15:27:36Z | ---
base_model: mlabonne/gemma-3-4b-it-qat-abliterated
language:
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mlabonne/gemma-3-4b-it-qat-abliterated
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
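As one hedged option, the quantized files also load directly through the `llama-cpp-python` bindings (a minimal sketch; the file name is the Q4_K_M entry from the table below, downloaded locally beforehand, and chat formatting is left out for brevity):
```python
from llama_cpp import Llama

# Point model_path at the locally downloaded GGUF file.
llm = Llama(model_path="gemma-3-4b-it-qat-abliterated.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain what an imatrix quant is in one paragraph.", max_tokens=200)
print(out["choices"][0]["text"])
```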
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-IQ1_M.gguf) | i1-IQ1_M | 1.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-qat-abliterated-i1-GGUF/resolve/main/gemma-3-4b-it-qat-abliterated.i1-Q6_K.gguf) | i1-Q6_K | 3.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
CodeWithSwap01/finetuned-bert-base-german-cased | CodeWithSwap01 | 2025-05-29T18:50:26Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-07-30T09:58:11Z | ---
license: mit
base_model: bert-base-german-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned-bert-base-german-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-base-german-cased
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4021
- Accuracy: 0.9076
- F1: 0.9079
- Per-class F1 and accuracy (rounded to 4 decimals; the full-precision values appear in the training-results table below):

| Class | F1 | Accuracy |
|---------------|--------|----------|
| Web | 0.9733 | 0.9704 |
| Panorama | 0.8638 | 0.8418 |
| International | 0.9078 | 0.9366 |
| Wirtschaft | 0.8913 | 0.9111 |
| Sport | 0.9917 | 0.9917 |
| Inland | 0.8252 | 0.8173 |
| Etat | 0.9160 | 0.9375 |
| Wissenschaft | 0.8718 | 0.8500 |
| Kultur | 0.8829 | 0.8596 |
## Model description
More information needed
## Intended uses & limitations
More information needed
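As a minimal, hedged usage sketch (assuming the sequence-classification head and the nine German news-topic labels from the metrics above are stored in this repo):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="CodeWithSwap01/finetuned-bert-base-german-cased",
)
# A German sports headline; the expected label would be "Sport".
print(classifier("Der FC Bayern gewinnt das Spiel in der Nachspielzeit."))
```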
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Per Class F1 | Per Class Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 0.525 | 1.0 | 1156 | 0.3350 | 0.8920 | 0.8919 | {'Web': 0.9583333333333334, 'Panorama': 0.8402366863905326, 'International': 0.8933333333333333, 'Wirtschaft': 0.8741258741258741, 'Sport': 0.9876543209876543, 'Inland': 0.8309178743961353, 'Etat': 0.8617886178861788, 'Wissenschaft': 0.8648648648648649, 'Kultur': 0.8571428571428572} | {'Web': 0.9583333333333334, 'Panorama': 0.8352941176470589, 'International': 0.8993288590604027, 'Wirtschaft': 0.8620689655172413, 'Sport': 0.975609756097561, 'Inland': 0.819047619047619, 'Etat': 0.9464285714285714, 'Wissenschaft': 0.8888888888888888, 'Kultur': 0.8275862068965517} |
| 0.3553 | 2.0 | 2312 | 0.3731 | 0.9086 | 0.9090 | {'Web': 0.960960960960961, 'Panorama': 0.8703170028818443, 'International': 0.9041095890410958, 'Wirtschaft': 0.8970588235294118, 'Sport': 0.995850622406639, 'Inland': 0.8396226415094339, 'Etat': 0.9104477611940298, 'Wissenschaft': 0.8793103448275862, 'Kultur': 0.8807339449541284} | {'Web': 0.9696969696969697, 'Panorama': 0.8435754189944135, 'International': 0.9361702127659575, 'Wirtschaft': 0.9312977099236641, 'Sport': 0.9917355371900827, 'Inland': 0.8090909090909091, 'Etat': 0.9104477611940298, 'Wissenschaft': 0.864406779661017, 'Kultur': 0.8727272727272727} |
| 0.3083 | 3.0 | 3468 | 0.4021 | 0.9076 | 0.9079 | {'Web': 0.973293768545994, 'Panorama': 0.8637681159420288, 'International': 0.9078498293515358, 'Wirtschaft': 0.891304347826087, 'Sport': 0.9916666666666667, 'Inland': 0.825242718446602, 'Etat': 0.9160305343511451, 'Wissenschaft': 0.8717948717948718, 'Kultur': 0.8828828828828829} | {'Web': 0.9704142011834319, 'Panorama': 0.8418079096045198, 'International': 0.9366197183098591, 'Wirtschaft': 0.9111111111111111, 'Sport': 0.9916666666666667, 'Inland': 0.8173076923076923, 'Etat': 0.9375, 'Wissenschaft': 0.85, 'Kultur': 0.8596491228070176} |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.1
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Moryjj/pars_3blocks_3_sgd | Moryjj | 2025-05-29T18:49:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-29T18:48:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
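Pending details from the authors, a minimal hedged sketch based only on the repo's `t5` / `text2text-generation` tags (the input below is an arbitrary placeholder):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Moryjj/pars_3blocks_3_sgd")
model = AutoModelForSeq2SeqLM.from_pretrained("Moryjj/pars_3blocks_3_sgd")

inputs = tokenizer("Example input text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```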
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cesun/advllm_llama2 | cesun | 2025-05-29T18:40:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"adversarial-attacks",
"jailbreak",
"red-teaming",
"alignment",
"LLM-safety",
"conversational",
"arxiv:2410.18469",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-20T02:07:42Z | ---
library_name: transformers
tags:
- adversarial-attacks
- jailbreak
- red-teaming
- alignment
- LLM-safety
license: mit
---
# ADV-LLM
ADV-LLM is an **iteratively self-tuned** adversarial language model that generates jailbreak suffixes capable of bypassing safety alignment in open-source and proprietary models.
- **Paper:** https://arxiv.org/abs/2410.18469
- **Code:** https://github.com/SunChungEn/ADV-LLM
## Model Details
- **Authors:** Chung-En Sun et al. (UCSD & Microsoft Research)
- **Finetuned from:** LLaMA-2-7B-chat
- **Language:** English
- **License:** MIT
## Usage Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("cesun/advllm_llama2")
tokenizer = AutoTokenizer.from_pretrained("cesun/advllm_llama2")
inputs = tokenizer("How to make a bomb", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=90)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Evaluation Results
ADV-LLM achieves near-perfect jailbreak success rates under group beam search (GBS-50) across a wide range of models and safety checks, including Template (TP), LlamaGuard (LG), and GPT-4 evaluations.
| Victim Model | GBS-50 ASR (TP / LG / GPT-4) |
|--------------------------|-------------------------------|
| Vicuna-7B-v1.5 | 100.00% / 100.00% / 99.81% |
| Guanaco-7B | 100.00% / 100.00% / 99.81% |
| Mistral-7B-Instruct-v0.2 | 100.00% / 100.00% / 100.00% |
| LLaMA-2-7B-chat | 100.00% / 100.00% / 93.85% |
| LLaMA-3-8B-Instruct | 100.00% / 98.84% / 98.27% |
**Legend:**
- **ASR** = Attack Success Rate
- **TP** = Template-based refusal detection
- **LG** = LlamaGuard safety classifier
- **GPT-4** = Harmfulness judged by GPT-4
## Citation
If you use ADV-LLM in your research or evaluation, please cite:
**BibTeX**
```bibtex
@inproceedings{sun2025advllm,
title={Iterative Self-Tuning LLMs for Enhanced Jailbreaking Capabilities},
author={Sun, Chung-En and Liu, Xiaodong and Yang, Weiwei and Weng, Tsui-Wei and Cheng, Hao and San, Aidan and Galley, Michel and Gao, Jianfeng},
booktitle={NAACL},
year={2025}
}
```
|
cesun/advllm_mistral | cesun | 2025-05-29T18:38:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"adversarial-attacks",
"jailbreak",
"red-teaming",
"alignment",
"LLM-safety",
"conversational",
"arxiv:2410.18469",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-20T02:07:33Z | ---
library_name: transformers
tags:
- adversarial-attacks
- jailbreak
- red-teaming
- alignment
- LLM-safety
license: mit
---
# ADV-LLM
ADV-LLM is an **iteratively self-tuned** adversarial language model that generates jailbreak suffixes capable of bypassing safety alignment in open-source and proprietary models.
- **Paper:** https://arxiv.org/abs/2410.18469
- **Code:** https://github.com/SunChungEn/ADV-LLM
## Model Details
- **Authors:** Chung-En Sun et al. (UCSD & Microsoft Research)
- **Finetuned from:** Mistral-7B-Instruct-v0.2
- **Language:** English
- **License:** MIT
## Usage Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("cesun/advllm_mistral")
tokenizer = AutoTokenizer.from_pretrained("cesun/advllm_mistral")
inputs = tokenizer("How to make a bomb", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=90)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Evaluation Results
ADV-LLM achieves near-perfect jailbreak success rates under group beam search (GBS-50) across a wide range of models and safety checks, including Template (TP), LlamaGuard (LG), and GPT-4 evaluations.
| Victim Model | GBS-50 ASR (TP / LG / GPT-4) |
|--------------------------|-------------------------------|
| Vicuna-7B-v1.5 | 100.00% / 100.00% / 99.81% |
| Guanaco-7B | 100.00% / 100.00% / 99.81% |
| Mistral-7B-Instruct-v0.2 | 100.00% / 100.00% / 100.00% |
| LLaMA-2-7B-chat | 100.00% / 100.00% / 93.85% |
| LLaMA-3-8B-Instruct | 100.00% / 98.84% / 98.27% |
**Legend:**
- **ASR** = Attack Success Rate
- **TP** = Template-based refusal detection
- **LG** = LlamaGuard safety classifier
- **GPT-4** = Harmfulness judged by GPT-4
## Citation
If you use ADV-LLM in your research or evaluation, please cite:
**BibTeX**
```bibtex
@inproceedings{sun2025advllm,
title={Iterative Self-Tuning LLMs for Enhanced Jailbreaking Capabilities},
author={Sun, Chung-En and Liu, Xiaodong and Yang, Weiwei and Weng, Tsui-Wei and Cheng, Hao and San, Aidan and Galley, Michel and Gao, Jianfeng},
booktitle={NAACL},
year={2025}
}
```
|
Vacaspati/IV-Electra | Vacaspati | 2025-05-29T18:26:31Z | 0 | 0 | null | [
"pytorch",
"electra",
"region:us"
] | null | 2025-05-29T18:21:51Z | ---
license: apache-2.0
language:
- bn
---
# IV-Electra
**IV-Electra** is a 17-million-parameter model trained on the Vācaspati literary dataset and IndicCorp v1.0 (Bangla subset).
## Model Details
- **Architecture:** Electra-small (but reduced to 17 M parameters)
- **Pretraining Corpus:** Vācaspati — a curated Bangla literary corpus
- **Parameter Count:** 17 M (≈ 1/7th the size of BERT-base)
- **Tokenizer:** WordPiece, vocabulary size 50 K
## Usage Example
```python
from transformers import BertTokenizer, AutoModelForSequenceClassification

# The checkpoint uses a WordPiece vocabulary, so the BERT tokenizer applies.
tokenizer = BertTokenizer.from_pretrained("Vacaspati/IV-Electra")
model = AutoModelForSequenceClassification.from_pretrained("Vacaspati/IV-Electra")
```
## Citation
If you use this model, please cite:
```bibtex
@inproceedings{bhattacharyya-etal-2023-vacaspati,
title = "{VACASPATI}: A Diverse Corpus of {B}angla Literature",
author = "Bhattacharyya, Pramit and
Mondal, Joydeep and
Maji, Subhadip and
Bhattacharya, Arnab",
editor = "Park, Jong C. and
Arase, Yuki and
Hu, Baotian and
Lu, Wei and
Wijaya, Derry and
Purwarianti, Ayu and
Krisnadhi, Adila Alfa",
booktitle = "Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = nov,
year = "2023",
address = "Nusa Dua, Bali",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.ijcnlp-main.72/",
doi = "10.18653/v1/2023.ijcnlp-main.72",
pages = "1118--1130"
}
```
|
FACADEEEE/medalu_16bit_riasec_oficial | FACADEEEE | 2025-05-29T18:25:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T17:55:05Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** FACADEEEE
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
VanishedBrB/Qwen2.5-Coder-7B-Instruct-Base-GGUF | VanishedBrB | 2025-05-29T18:24:52Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2.5-Coder-7B-Instruct",
"base_model:quantized:unsloth/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T18:15:17Z | ---
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** VanishedBrB
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-Coder-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Datle1610/qwen-8b-kqapro-chat | Datle1610 | 2025-05-29T18:13:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T18:09:37Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
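Pending details from the authors, a minimal hedged sketch based only on the repo's `qwen3` / `text-generation` tags (the KQA-Pro-style question is an illustrative placeholder):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Datle1610/qwen-8b-kqapro-chat")
model = AutoModelForCausalLM.from_pretrained("Datle1610/qwen-8b-kqapro-chat")

messages = [{"role": "user", "content": "Which country has the largest population?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```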
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
YuchenLi01/genParaMoreUniqueResNoGT_Qwen2.5-1.5BInstruct_dpo_ebs32_lr3e-06_beta0.8_42 | YuchenLi01 | 2025-05-29T18:13:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:YuchenLi01/MATH_Qwen2.5-1.5BInstruct_DPO_generatedAndParaphrasedMoreUniqueResponseNoGT",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T14:05:03Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- YuchenLi01/MATH_Qwen2.5-1.5BInstruct_DPO_generatedAndParaphrasedMoreUniqueResponseNoGT
model-index:
- name: genParaMoreUniqueResNoGT_Qwen2.5-1.5BInstruct_dpo_ebs32_lr3e-06_beta0.8_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# genParaMoreUniqueResNoGT_Qwen2.5-1.5BInstruct_dpo_ebs32_lr3e-06_beta0.8_42
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the YuchenLi01/MATH_Qwen2.5-1.5BInstruct_DPO_generatedAndParaphrasedMoreUniqueResponseNoGT dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5198
- Rewards/chosen: -19.9829
- Rewards/rejected: -25.7013
- Rewards/accuracies: 0.6921
- Rewards/margins: 5.7184
- Logps/rejected: -79.8842
- Logps/chosen: -67.4430
- Logits/rejected: -1.5240
- Logits/chosen: -1.6291
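
For reference (a hedged note based on the standard TRL convention, with β = 0.8 from this run's name): the reward columns are β-scaled log-probability ratios against the reference model, and the margin is simply chosen minus rejected, which matches the numbers above (−19.9829 − (−25.7013) = 5.7184):

$$
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}, \qquad
\text{margin} = r_\theta(x, y_{\text{chosen}}) - r_\theta(x, y_{\text{rejected}})
$$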
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.7406 | 0.0135 | 20 | 0.7124 | 0.0341 | -0.0087 | 0.5305 | 0.0428 | -47.7684 | -42.4218 | -2.2022 | -2.3078 |
| 0.7292 | 0.0270 | 40 | 0.7038 | -0.0798 | -0.1240 | 0.5671 | 0.0442 | -47.9126 | -42.5641 | -2.1922 | -2.2976 |
| 0.6524 | 0.0405 | 60 | 0.6917 | -0.4036 | -0.4919 | 0.5732 | 0.0883 | -48.3725 | -42.9689 | -2.1626 | -2.2690 |
| 0.69 | 0.0540 | 80 | 0.6726 | -0.6091 | -0.7742 | 0.6037 | 0.1651 | -48.7253 | -43.2257 | -2.1367 | -2.2433 |
| 0.5731 | 0.0675 | 100 | 0.6611 | -0.7158 | -0.9690 | 0.6311 | 0.2533 | -48.9688 | -43.3590 | -2.1327 | -2.2413 |
| 0.56 | 0.0810 | 120 | 0.6471 | -0.6424 | -1.0197 | 0.6037 | 0.3773 | -49.0322 | -43.2674 | -2.1511 | -2.2614 |
| 0.5149 | 0.0945 | 140 | 0.6377 | -0.7741 | -1.2638 | 0.6463 | 0.4897 | -49.3374 | -43.4320 | -2.1470 | -2.2586 |
| 0.6075 | 0.1080 | 160 | 0.6340 | -0.6136 | -1.1793 | 0.6402 | 0.5657 | -49.2317 | -43.2314 | -2.1533 | -2.2654 |
| 0.5579 | 0.1215 | 180 | 0.6583 | -1.1715 | -1.8401 | 0.6646 | 0.6687 | -50.0577 | -43.9287 | -2.0968 | -2.2118 |
| 0.5536 | 0.1350 | 200 | 0.6460 | -0.9221 | -1.7311 | 0.7104 | 0.8090 | -49.9214 | -43.6170 | -2.0927 | -2.2092 |
| 0.5423 | 0.1484 | 220 | 0.6716 | -1.2875 | -2.1512 | 0.6951 | 0.8637 | -50.4466 | -44.0737 | -2.0810 | -2.1961 |
| 0.8046 | 0.1619 | 240 | 0.6884 | -1.8439 | -2.7943 | 0.6524 | 0.9504 | -51.2504 | -44.7692 | -2.0479 | -2.1650 |
| 0.8638 | 0.1754 | 260 | 0.7714 | -1.7547 | -2.8074 | 0.625 | 1.0527 | -51.2669 | -44.6577 | -2.0813 | -2.1988 |
| 1.0416 | 0.1889 | 280 | 0.7735 | -2.6591 | -3.8757 | 0.6585 | 1.2166 | -52.6021 | -45.7882 | -2.0098 | -2.1262 |
| 0.7877 | 0.2024 | 300 | 0.8063 | -2.7776 | -4.1910 | 0.6585 | 1.4134 | -52.9964 | -45.9363 | -2.0929 | -2.2112 |
| 0.4656 | 0.2159 | 320 | 0.8812 | -5.1812 | -6.8445 | 0.6463 | 1.6632 | -56.3131 | -48.9409 | -1.9250 | -2.0434 |
| 1.2095 | 0.2294 | 340 | 0.9741 | -5.8221 | -7.4522 | 0.6524 | 1.6301 | -57.0728 | -49.7419 | -1.8445 | -1.9657 |
| 0.5192 | 0.2429 | 360 | 0.9934 | -5.0732 | -6.6944 | 0.625 | 1.6212 | -56.1255 | -48.8058 | -1.9447 | -2.0616 |
| 0.832 | 0.2564 | 380 | 1.0425 | -6.3094 | -8.1056 | 0.6220 | 1.7961 | -57.8895 | -50.3511 | -1.9328 | -2.0506 |
| 0.7077 | 0.2699 | 400 | 1.0021 | -6.6603 | -8.8645 | 0.6585 | 2.2042 | -58.8381 | -50.7897 | -1.9510 | -2.0693 |
| 0.3989 | 0.2834 | 420 | 1.0541 | -6.6139 | -8.8712 | 0.6585 | 2.2572 | -58.8465 | -50.7317 | -1.9572 | -2.0723 |
| 1.5636 | 0.2969 | 440 | 1.1305 | -7.8875 | -10.0979 | 0.6341 | 2.2103 | -60.3799 | -52.3238 | -1.8434 | -1.9551 |
| 1.1983 | 0.3104 | 460 | 1.1682 | -9.3099 | -11.5924 | 0.6494 | 2.2825 | -62.2480 | -54.1017 | -1.7330 | -1.8441 |
| 0.9871 | 0.3239 | 480 | 1.1462 | -7.8094 | -9.9023 | 0.6341 | 2.0929 | -60.1355 | -52.2261 | -1.8528 | -1.9682 |
| 1.1077 | 0.3374 | 500 | 1.1714 | -8.6432 | -11.0728 | 0.6524 | 2.4296 | -61.5986 | -53.2683 | -1.9259 | -2.0374 |
| 1.352 | 0.3509 | 520 | 1.2097 | -10.7074 | -13.4615 | 0.6646 | 2.7540 | -64.5844 | -55.8486 | -1.7674 | -1.8763 |
| 1.4755 | 0.3644 | 540 | 1.2201 | -10.3781 | -12.9187 | 0.6616 | 2.5406 | -63.9059 | -55.4369 | -1.7627 | -1.8682 |
| 0.8535 | 0.3779 | 560 | 1.2363 | -9.0976 | -11.6654 | 0.6646 | 2.5678 | -62.3393 | -53.8363 | -1.8719 | -1.9823 |
| 1.162 | 0.3914 | 580 | 1.2798 | -10.2512 | -13.2084 | 0.6921 | 2.9572 | -64.2681 | -55.2783 | -1.8560 | -1.9677 |
| 0.478 | 0.4049 | 600 | 1.3082 | -11.7694 | -15.0688 | 0.6677 | 3.2994 | -66.5935 | -57.1761 | -1.8274 | -1.9371 |
| 1.3761 | 0.4184 | 620 | 1.2887 | -12.6811 | -15.8683 | 0.6738 | 3.1871 | -67.5929 | -58.3158 | -1.7442 | -1.8521 |
| 0.9908 | 0.4318 | 640 | 1.2992 | -12.4831 | -15.8960 | 0.6951 | 3.4128 | -67.6275 | -58.0682 | -1.7866 | -1.8968 |
| 1.4849 | 0.4453 | 660 | 1.3187 | -12.4230 | -15.6596 | 0.6921 | 3.2366 | -67.3320 | -57.9931 | -1.7516 | -1.8613 |
| 1.159 | 0.4588 | 680 | 1.4230 | -13.7697 | -17.0566 | 0.6982 | 3.2868 | -69.0783 | -59.6765 | -1.7532 | -1.8636 |
| 1.0717 | 0.4723 | 700 | 1.5524 | -16.3802 | -19.7201 | 0.6585 | 3.3399 | -72.4077 | -62.9396 | -1.5773 | -1.6841 |
| 0.9179 | 0.4858 | 720 | 1.4393 | -15.3043 | -18.4331 | 0.6585 | 3.1288 | -70.7989 | -61.5946 | -1.5828 | -1.6901 |
| 1.4932 | 0.4993 | 740 | 1.3839 | -15.3968 | -18.7157 | 0.6646 | 3.3189 | -71.1521 | -61.7103 | -1.5868 | -1.6929 |
| 0.9615 | 0.5128 | 760 | 1.4059 | -15.0563 | -18.1423 | 0.6433 | 3.0860 | -70.4355 | -61.2847 | -1.6362 | -1.7401 |
| 0.7722 | 0.5263 | 780 | 1.3942 | -14.5371 | -17.4984 | 0.6616 | 2.9613 | -69.6306 | -60.6357 | -1.6513 | -1.7554 |
| 1.6081 | 0.5398 | 800 | 1.4631 | -14.7245 | -17.9876 | 0.6677 | 3.2631 | -70.2421 | -60.8699 | -1.6170 | -1.7205 |
| 0.6239 | 0.5533 | 820 | 1.4241 | -14.9020 | -18.3975 | 0.6494 | 3.4955 | -70.7545 | -61.0918 | -1.6418 | -1.7444 |
| 1.5038 | 0.5668 | 840 | 1.4253 | -15.6004 | -19.0718 | 0.6280 | 3.4715 | -71.5974 | -61.9648 | -1.6155 | -1.7176 |
| 0.5488 | 0.5803 | 860 | 1.4523 | -16.0002 | -19.3692 | 0.6280 | 3.3691 | -71.9691 | -62.4645 | -1.5377 | -1.6363 |
| 1.144 | 0.5938 | 880 | 1.3842 | -15.2308 | -18.8522 | 0.6616 | 3.6214 | -71.3228 | -61.5028 | -1.5482 | -1.6481 |
| 0.2919 | 0.6073 | 900 | 1.4070 | -14.9132 | -18.5832 | 0.6738 | 3.6701 | -70.9866 | -61.1058 | -1.5767 | -1.6772 |
| 0.4446 | 0.6208 | 920 | 1.4584 | -16.6607 | -20.3950 | 0.6585 | 3.7342 | -73.2513 | -63.2902 | -1.4829 | -1.5791 |
| 0.4996 | 0.6343 | 940 | 1.4423 | -16.6804 | -20.4677 | 0.6738 | 3.7874 | -73.3422 | -63.3148 | -1.5586 | -1.6552 |
| 2.4239 | 0.6478 | 960 | 1.4254 | -17.1687 | -21.4384 | 0.6860 | 4.2696 | -74.5555 | -63.9253 | -1.5490 | -1.6452 |
| 1.4565 | 0.6613 | 980 | 1.4251 | -16.4979 | -20.4036 | 0.6799 | 3.9058 | -73.2621 | -63.0867 | -1.5710 | -1.6659 |
| 1.4179 | 0.6748 | 1000 | 1.4238 | -16.9486 | -20.8003 | 0.6707 | 3.8517 | -73.7579 | -63.6500 | -1.4783 | -1.5735 |
| 0.957 | 0.6883 | 1020 | 1.4426 | -17.6982 | -21.5679 | 0.6555 | 3.8697 | -74.7174 | -64.5871 | -1.5401 | -1.6371 |
| 0.5038 | 0.7018 | 1040 | 1.4929 | -19.1686 | -23.0146 | 0.6402 | 3.8459 | -76.5258 | -66.4251 | -1.5271 | -1.6240 |
| 0.7077 | 0.7152 | 1060 | 1.5529 | -19.6528 | -23.4587 | 0.6372 | 3.8059 | -77.0810 | -67.0304 | -1.5172 | -1.6123 |
| 0.8409 | 0.7287 | 1080 | 1.5477 | -19.5205 | -23.2808 | 0.6524 | 3.7604 | -76.8586 | -66.8649 | -1.4753 | -1.5695 |
| 0.607 | 0.7422 | 1100 | 1.4837 | -18.3436 | -21.8753 | 0.6524 | 3.5317 | -75.1016 | -65.3938 | -1.5024 | -1.5954 |
| 1.1178 | 0.7557 | 1120 | 1.4819 | -19.8363 | -23.7812 | 0.6646 | 3.9449 | -77.4840 | -67.2597 | -1.4576 | -1.5516 |
| 0.3986 | 0.7692 | 1140 | 1.4826 | -17.8899 | -21.9788 | 0.6707 | 4.0890 | -75.2311 | -64.8266 | -1.5325 | -1.6271 |
| 0.7065 | 0.7827 | 1160 | 1.4607 | -17.3300 | -21.0679 | 0.6799 | 3.7379 | -74.0924 | -64.1268 | -1.5673 | -1.6592 |
| 0.9143 | 0.7962 | 1180 | 1.4921 | -18.9518 | -22.8276 | 0.6585 | 3.8758 | -76.2921 | -66.1541 | -1.5325 | -1.6252 |
| 1.7631 | 0.8097 | 1200 | 1.4387 | -19.3555 | -23.2256 | 0.6799 | 3.8701 | -76.7896 | -66.6587 | -1.4631 | -1.5564 |
| 0.8621 | 0.8232 | 1220 | 1.4312 | -18.3825 | -22.1087 | 0.6799 | 3.7262 | -75.3935 | -65.4425 | -1.4905 | -1.5860 |
| 0.654 | 0.8367 | 1240 | 1.3931 | -19.2388 | -22.9825 | 0.6982 | 3.7437 | -76.4858 | -66.5128 | -1.4696 | -1.5629 |
| 0.6026 | 0.8502 | 1260 | 1.4157 | -19.5314 | -23.4210 | 0.6860 | 3.8897 | -77.0339 | -66.8786 | -1.4675 | -1.5616 |
| 1.1609 | 0.8637 | 1280 | 1.3913 | -18.1071 | -21.8347 | 0.6860 | 3.7277 | -75.0510 | -65.0981 | -1.4965 | -1.5901 |
| 0.7062 | 0.8772 | 1300 | 1.4544 | -17.4173 | -21.2664 | 0.6616 | 3.8491 | -74.3406 | -64.2360 | -1.5752 | -1.6686 |
| 0.7108 | 0.8907 | 1320 | 1.4667 | -17.7998 | -21.8996 | 0.6524 | 4.0998 | -75.1320 | -64.7141 | -1.5912 | -1.6853 |
| 0.4088 | 0.9042 | 1340 | 1.3787 | -17.5050 | -21.6610 | 0.6494 | 4.1560 | -74.8338 | -64.3456 | -1.5830 | -1.6767 |
| 0.3772 | 0.9177 | 1360 | 1.4142 | -18.7211 | -22.7951 | 0.6616 | 4.0740 | -76.2514 | -65.8657 | -1.4924 | -1.5833 |
| 0.4537 | 0.9312 | 1380 | 1.5463 | -19.6383 | -23.5921 | 0.6463 | 3.9538 | -77.2477 | -67.0122 | -1.4554 | -1.5459 |
| 0.69 | 0.9447 | 1400 | 1.4437 | -18.9615 | -23.0465 | 0.6707 | 4.0850 | -76.5657 | -66.1662 | -1.4610 | -1.5537 |
| 1.5012 | 0.9582 | 1420 | 1.4334 | -18.5790 | -22.6264 | 0.6616 | 4.0474 | -76.0406 | -65.6881 | -1.5008 | -1.5945 |
| 1.382 | 0.9717 | 1440 | 1.4552 | -19.3086 | -23.4637 | 0.6799 | 4.1551 | -77.0872 | -66.6001 | -1.4804 | -1.5755 |
| 0.4369 | 0.9852 | 1460 | 1.4220 | -18.2730 | -22.3939 | 0.6768 | 4.1209 | -75.7500 | -65.3056 | -1.5156 | -1.6111 |
| 0.6127 | 0.9987 | 1480 | 1.4867 | -19.2636 | -23.3893 | 0.6829 | 4.1257 | -76.9942 | -66.5438 | -1.4930 | -1.5860 |
| 0.1872 | 1.0121 | 1500 | 1.5148 | -20.6804 | -25.1032 | 0.6890 | 4.4228 | -79.1366 | -68.3148 | -1.4567 | -1.5497 |
| 0.1627 | 1.0256 | 1520 | 1.5346 | -20.8424 | -25.5854 | 0.6890 | 4.7430 | -79.7393 | -68.5173 | -1.4705 | -1.5650 |
| 0.1217 | 1.0391 | 1540 | 1.5416 | -20.6110 | -25.3312 | 0.6799 | 4.7202 | -79.4216 | -68.2281 | -1.4808 | -1.5785 |
| 0.0103 | 1.0526 | 1560 | 1.5610 | -20.9977 | -25.8955 | 0.6860 | 4.8978 | -80.1269 | -68.7114 | -1.4571 | -1.5558 |
| 0.2585 | 1.0661 | 1580 | 1.6240 | -20.3597 | -25.1681 | 0.6829 | 4.8084 | -79.2177 | -67.9140 | -1.4782 | -1.5777 |
| 0.0015 | 1.0796 | 1600 | 1.7052 | -21.2446 | -26.0523 | 0.6555 | 4.8077 | -80.3229 | -69.0201 | -1.4904 | -1.5911 |
| 0.4918 | 1.0931 | 1620 | 1.7103 | -21.7153 | -26.6640 | 0.6555 | 4.9487 | -81.0876 | -69.6084 | -1.5029 | -1.6043 |
| 0.4902 | 1.1066 | 1640 | 1.6983 | -22.1401 | -27.3255 | 0.6616 | 5.1854 | -81.9144 | -70.1395 | -1.4941 | -1.5955 |
| 0.0007 | 1.1201 | 1660 | 1.6713 | -21.6010 | -26.6993 | 0.6768 | 5.0983 | -81.1316 | -69.4655 | -1.4933 | -1.5940 |
| 0.0117 | 1.1336 | 1680 | 1.6540 | -21.9883 | -27.2814 | 0.6921 | 5.2931 | -81.8593 | -69.9497 | -1.4699 | -1.5703 |
| 0.8285 | 1.1471 | 1700 | 1.6781 | -22.9418 | -28.4319 | 0.6921 | 5.4902 | -83.2975 | -71.1415 | -1.4561 | -1.5575 |
| 0.2514 | 1.1606 | 1720 | 1.6807 | -23.2902 | -28.8097 | 0.6921 | 5.5196 | -83.7697 | -71.5770 | -1.4448 | -1.5442 |
| 0.3602 | 1.1741 | 1740 | 1.6669 | -22.8762 | -28.4276 | 0.6982 | 5.5514 | -83.2920 | -71.0596 | -1.4570 | -1.5569 |
| 0.1275 | 1.1876 | 1760 | 1.6190 | -21.7302 | -27.3243 | 0.7012 | 5.5941 | -81.9129 | -69.6271 | -1.5261 | -1.6273 |
| 0.5538 | 1.2011 | 1780 | 1.5934 | -21.0240 | -26.4857 | 0.6951 | 5.4617 | -80.8647 | -68.7444 | -1.5307 | -1.6316 |
| 0.0269 | 1.2146 | 1800 | 1.5978 | -20.8158 | -26.2103 | 0.7012 | 5.3945 | -80.5205 | -68.4841 | -1.5014 | -1.6016 |
| 0.1472 | 1.2281 | 1820 | 1.6232 | -20.7624 | -26.1954 | 0.6921 | 5.4331 | -80.5018 | -68.4173 | -1.5206 | -1.6208 |
| 0.015 | 1.2416 | 1840 | 1.6005 | -21.4248 | -26.8490 | 0.7165 | 5.4242 | -81.3188 | -69.2454 | -1.4874 | -1.5858 |
| 0.0183 | 1.2551 | 1860 | 1.5842 | -21.2067 | -26.5675 | 0.6982 | 5.3609 | -80.9670 | -68.9727 | -1.4936 | -1.5941 |
| 0.0146 | 1.2686 | 1880 | 1.5918 | -21.1109 | -26.3508 | 0.6707 | 5.2399 | -80.6960 | -68.8530 | -1.5072 | -1.6076 |
| 0.1345 | 1.2821 | 1900 | 1.5611 | -21.5314 | -26.7835 | 0.6860 | 5.2521 | -81.2369 | -69.3786 | -1.4683 | -1.5673 |
| 0.1538 | 1.2955 | 1920 | 1.5669 | -21.4841 | -26.7364 | 0.6860 | 5.2523 | -81.1780 | -69.3194 | -1.4611 | -1.5588 |
| 0.0856 | 1.3090 | 1940 | 1.5413 | -21.0403 | -26.4201 | 0.6799 | 5.3798 | -80.7827 | -68.7647 | -1.4784 | -1.5773 |
| 0.1533 | 1.3225 | 1960 | 1.5357 | -20.8079 | -26.1323 | 0.6860 | 5.3245 | -80.4229 | -68.4741 | -1.4895 | -1.5886 |
| 0.0123 | 1.3360 | 1980 | 1.5363 | -20.5734 | -25.8201 | 0.6860 | 5.2468 | -80.0328 | -68.1810 | -1.4998 | -1.5997 |
| 0.2167 | 1.3495 | 2000 | 1.5377 | -21.1941 | -26.4425 | 0.6951 | 5.2484 | -80.8108 | -68.9570 | -1.4752 | -1.5742 |
| 0.0897 | 1.3630 | 2020 | 1.5398 | -20.9661 | -26.2357 | 0.6982 | 5.2696 | -80.5522 | -68.6719 | -1.4787 | -1.5781 |
| 0.4165 | 1.3765 | 2040 | 1.5571 | -21.5101 | -26.8082 | 0.6951 | 5.2981 | -81.2679 | -69.3520 | -1.4510 | -1.5496 |
| 0.3931 | 1.3900 | 2060 | 1.5529 | -21.5426 | -26.7965 | 0.6951 | 5.2539 | -81.2532 | -69.3926 | -1.4128 | -1.5105 |
| 0.1536 | 1.4035 | 2080 | 1.5658 | -21.5881 | -26.8523 | 0.6982 | 5.2642 | -81.3230 | -69.4495 | -1.4261 | -1.5240 |
| 0.1059 | 1.4170 | 2100 | 1.5789 | -21.1466 | -26.5614 | 0.6921 | 5.4148 | -80.9594 | -68.8976 | -1.4725 | -1.5737 |
| 0.1896 | 1.4305 | 2120 | 1.5718 | -20.7206 | -26.1609 | 0.6890 | 5.4402 | -80.4586 | -68.3651 | -1.4892 | -1.5916 |
| 0.1168 | 1.4440 | 2140 | 1.5659 | -20.5736 | -26.0620 | 0.6921 | 5.4885 | -80.3351 | -68.1813 | -1.5127 | -1.6159 |
| 0.0793 | 1.4575 | 2160 | 1.5850 | -20.7094 | -26.3714 | 0.6982 | 5.6620 | -80.7218 | -68.3511 | -1.5227 | -1.6268 |
| 0.007 | 1.4710 | 2180 | 1.5971 | -20.8814 | -26.6635 | 0.7043 | 5.7821 | -81.0869 | -68.5660 | -1.5244 | -1.6283 |
| 0.6654 | 1.4845 | 2200 | 1.5942 | -20.7607 | -26.5285 | 0.7043 | 5.7679 | -80.9182 | -68.4151 | -1.5310 | -1.6353 |
| 0.0698 | 1.4980 | 2220 | 1.5950 | -20.8047 | -26.6261 | 0.7073 | 5.8214 | -81.0402 | -68.4702 | -1.5301 | -1.6347 |
| 0.1712 | 1.5115 | 2240 | 1.6090 | -20.7520 | -26.5220 | 0.7073 | 5.7699 | -80.9101 | -68.4044 | -1.5381 | -1.6434 |
| 0.1934 | 1.5250 | 2260 | 1.5943 | -20.3505 | -26.0504 | 0.6982 | 5.6999 | -80.3205 | -67.9024 | -1.5449 | -1.6505 |
| 0.0018 | 1.5385 | 2280 | 1.5720 | -20.3789 | -25.9789 | 0.6921 | 5.6000 | -80.2312 | -67.9380 | -1.5237 | -1.6280 |
| 0.7348 | 1.5520 | 2300 | 1.5530 | -20.4378 | -26.0646 | 0.6890 | 5.6267 | -80.3383 | -68.0116 | -1.5106 | -1.6148 |
| 0.055 | 1.5655 | 2320 | 1.5454 | -20.3142 | -25.9007 | 0.6951 | 5.5865 | -80.1334 | -67.8570 | -1.5174 | -1.6219 |
| 0.4062 | 1.5789 | 2340 | 1.5361 | -20.5382 | -26.1726 | 0.6921 | 5.6344 | -80.4733 | -68.1371 | -1.5053 | -1.6099 |
| 0.0455 | 1.5924 | 2360 | 1.5320 | -20.5123 | -26.1926 | 0.6982 | 5.6804 | -80.4984 | -68.1047 | -1.5013 | -1.6059 |
| 0.1285 | 1.6059 | 2380 | 1.5232 | -20.1957 | -25.8383 | 0.6951 | 5.6427 | -80.0555 | -67.7089 | -1.5153 | -1.6201 |
| 0.0208 | 1.6194 | 2400 | 1.5234 | -20.2245 | -25.9067 | 0.7012 | 5.6822 | -80.1409 | -67.7449 | -1.5093 | -1.6137 |
| 0.0029 | 1.6329 | 2420 | 1.5341 | -20.3868 | -26.1384 | 0.7043 | 5.7516 | -80.4305 | -67.9478 | -1.5048 | -1.6096 |
| 0.1945 | 1.6464 | 2440 | 1.5247 | -20.4196 | -26.2103 | 0.7104 | 5.7907 | -80.5204 | -67.9888 | -1.5061 | -1.6108 |
| 0.004 | 1.6599 | 2460 | 1.5266 | -20.4093 | -26.1902 | 0.7043 | 5.7809 | -80.4953 | -67.9760 | -1.5011 | -1.6053 |
| 0.0573 | 1.6734 | 2480 | 1.5274 | -20.4576 | -26.2310 | 0.7043 | 5.7734 | -80.5463 | -68.0364 | -1.4983 | -1.6029 |
| 0.3576 | 1.6869 | 2500 | 1.5204 | -20.4037 | -26.1123 | 0.6982 | 5.7086 | -80.3979 | -67.9689 | -1.4995 | -1.6039 |
| 0.0131 | 1.7004 | 2520 | 1.5257 | -20.5124 | -26.2451 | 0.6982 | 5.7327 | -80.5639 | -68.1048 | -1.5021 | -1.6066 |
| 0.2184 | 1.7139 | 2540 | 1.5317 | -20.3247 | -25.9975 | 0.7043 | 5.6728 | -80.2545 | -67.8702 | -1.5113 | -1.6158 |
| 0.1724 | 1.7274 | 2560 | 1.5233 | -20.1874 | -25.8695 | 0.7012 | 5.6821 | -80.0944 | -67.6985 | -1.5123 | -1.6169 |
| 0.1273 | 1.7409 | 2580 | 1.5232 | -20.0369 | -25.6899 | 0.7073 | 5.6530 | -79.8699 | -67.5105 | -1.5197 | -1.6248 |
| 0.2306 | 1.7544 | 2600 | 1.5150 | -19.9925 | -25.6708 | 0.6921 | 5.6783 | -79.8461 | -67.4550 | -1.5201 | -1.6249 |
| 0.4892 | 1.7679 | 2620 | 1.5136 | -19.9195 | -25.6305 | 0.6951 | 5.7110 | -79.7957 | -67.3637 | -1.5219 | -1.6271 |
| 0.279 | 1.7814 | 2640 | 1.5213 | -19.8787 | -25.5405 | 0.6982 | 5.6618 | -79.6832 | -67.3127 | -1.5247 | -1.6298 |
| 0.0314 | 1.7949 | 2660 | 1.5097 | -19.8810 | -25.6183 | 0.7043 | 5.7373 | -79.7804 | -67.3156 | -1.5261 | -1.6319 |
| 0.0075 | 1.8084 | 2680 | 1.5218 | -19.9024 | -25.6437 | 0.6921 | 5.7413 | -79.8122 | -67.3424 | -1.5225 | -1.6273 |
| 0.103 | 1.8219 | 2700 | 1.5207 | -19.8910 | -25.6050 | 0.6951 | 5.7140 | -79.7638 | -67.3280 | -1.5195 | -1.6242 |
| 0.1608 | 1.8354 | 2720 | 1.5132 | -19.9285 | -25.6549 | 0.6951 | 5.7263 | -79.8262 | -67.3750 | -1.5232 | -1.6283 |
| 0.0658 | 1.8489 | 2740 | 1.5234 | -19.9721 | -25.6654 | 0.6951 | 5.6932 | -79.8393 | -67.4295 | -1.5211 | -1.6259 |
| 0.3296 | 1.8623 | 2760 | 1.5207 | -19.9735 | -25.7016 | 0.7073 | 5.7281 | -79.8846 | -67.4312 | -1.5207 | -1.6258 |
| 0.1826 | 1.8758 | 2780 | 1.5164 | -19.9750 | -25.7344 | 0.6921 | 5.7594 | -79.9256 | -67.4331 | -1.5194 | -1.6246 |
| 0.0161 | 1.8893 | 2800 | 1.5187 | -19.9774 | -25.7087 | 0.6921 | 5.7313 | -79.8934 | -67.4361 | -1.5214 | -1.6264 |
| 0.0078 | 1.9028 | 2820 | 1.5215 | -19.9988 | -25.6832 | 0.6951 | 5.6844 | -79.8615 | -67.4628 | -1.5216 | -1.6266 |
| 0.4342 | 1.9163 | 2840 | 1.5176 | -19.9697 | -25.7076 | 0.6951 | 5.7378 | -79.8920 | -67.4265 | -1.5193 | -1.6242 |
| 0.127 | 1.9298 | 2860 | 1.5228 | -19.9776 | -25.7078 | 0.6951 | 5.7302 | -79.8923 | -67.4363 | -1.5222 | -1.6276 |
| 0.0016 | 1.9433 | 2880 | 1.5186 | -19.9798 | -25.7100 | 0.6921 | 5.7303 | -79.8951 | -67.4390 | -1.5222 | -1.6273 |
| 0.0068 | 1.9568 | 2900 | 1.5182 | -19.9877 | -25.7205 | 0.7012 | 5.7328 | -79.9082 | -67.4489 | -1.5208 | -1.6257 |
| 0.081 | 1.9703 | 2920 | 1.5245 | -19.9812 | -25.7274 | 0.6982 | 5.7462 | -79.9169 | -67.4408 | -1.5178 | -1.6224 |
| 0.0849 | 1.9838 | 2940 | 1.5243 | -19.9801 | -25.7034 | 0.6890 | 5.7233 | -79.8868 | -67.4394 | -1.5199 | -1.6248 |
| 0.167 | 1.9973 | 2960 | 1.5259 | -20.0020 | -25.6816 | 0.6860 | 5.6797 | -79.8596 | -67.4668 | -1.5239 | -1.6290 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu121
- Datasets 3.5.0
- Tokenizers 0.20.3
|
tungduong261204/sft_v4_2000 | tungduong261204 | 2025-05-29T18:01:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T18:00:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
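Until the author fills this in, a minimal sketch grounded only in the repo's `transformers`/`qwen2`/`text-generation` tags (the prompt text is illustrative):

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tungduong261204/sft_v4_2000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation from an illustrative prompt
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```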
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pmk2021/a-tria-d264-l6-h4-mode-vampnet_rmsq16-latest2 | pmk2021 | 2025-05-29T17:58:18Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-05-29T17:58:12Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
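The mixin gives the model class `from_pretrained`/`push_to_hub` support; a minimal loading sketch follows (the import path and class name are placeholders — the real class lives in this project's training code):

```py
# Placeholder import: substitute the actual model class, which must subclass
# both torch.nn.Module and huggingface_hub.PyTorchModelHubMixin.
from my_project.models import TriaModel

# Download the checkpoint from the Hub and instantiate the model
model = TriaModel.from_pretrained("pmk2021/a-tria-d264-l6-h4-mode-vampnet_rmsq16-latest2")
```
|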
tungduong261204/sft_v4_1500 | tungduong261204 | 2025-05-29T17:52:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T17:51:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jobz-hunting-pakistan/Viral.video.link.Jobz.Hunting.Sajal.Malik.viral.video.original | jobz-hunting-pakistan | 2025-05-29T17:46:12Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-29T17:45:51Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?sa"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<p><a rel="nofollow" href="https://viralflix.xyz/leaked/?sa">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a></p>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?sa">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a>
|
zelk12/29_05_2025_Test3_LazyMergekit_gemma-3-12B-Q6_K-GGUF | zelk12 | 2025-05-29T17:35:27Z | 0 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"zelk12/MT-Gen1-gemma-3-12B",
"soob3123/amoral-gemma3-12B-v2",
"zelk12/MT1-gemma-3-12B",
"IlyaGusev/saiga_gemma3_12b",
"llama-cpp",
"gguf-my-repo",
"base_model:zelk12/29_05_2025_Test3_LazyMergekit_gemma-3-12B",
"base_model:quantized:zelk12/29_05_2025_Test3_LazyMergekit_gemma-3-12B",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-29T16:00:33Z | ---
base_model: zelk12/29_05_2025_Test3_LazyMergekit_gemma-3-12B
tags:
- merge
- mergekit
- lazymergekit
- zelk12/MT-Gen1-gemma-3-12B
- soob3123/amoral-gemma3-12B-v2
- zelk12/MT1-gemma-3-12B
- IlyaGusev/saiga_gemma3_12b
- llama-cpp
- gguf-my-repo
license: gemma
---
# zelk12/29_05_2025_Test3_LazyMergekit_gemma-3-12B-Q6_K-GGUF
This model was converted to GGUF format from [`zelk12/29_05_2025_Test3_LazyMergekit_gemma-3-12B`](https://huggingface.co/zelk12/29_05_2025_Test3_LazyMergekit_gemma-3-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/zelk12/29_05_2025_Test3_LazyMergekit_gemma-3-12B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zelk12/29_05_2025_Test3_LazyMergekit_gemma-3-12B-Q6_K-GGUF --hf-file 29_05_2025_test3_lazymergekit_gemma-3-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zelk12/29_05_2025_Test3_LazyMergekit_gemma-3-12B-Q6_K-GGUF --hf-file 29_05_2025_test3_lazymergekit_gemma-3-12b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zelk12/29_05_2025_Test3_LazyMergekit_gemma-3-12B-Q6_K-GGUF --hf-file 29_05_2025_test3_lazymergekit_gemma-3-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zelk12/29_05_2025_Test3_LazyMergekit_gemma-3-12B-Q6_K-GGUF --hf-file 29_05_2025_test3_lazymergekit_gemma-3-12b-q6_k.gguf -c 2048
``` |
bangthe2222/Llama3.2_11B_vision_med | bangthe2222 | 2025-05-29T17:33:00Z | 13 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-04-26T02:59:34Z | ---
base_model: unsloth/llama-3.2-11b-vision-instruct-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
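Until the author provides one, a hedged sketch based on the card's own `peft` library tag and its `unsloth/llama-3.2-11b-vision-instruct-bnb-4bit` base model (the model class choice may need adjusting for this checkpoint):

```py
from transformers import MllamaForConditionalGeneration, AutoProcessor
from peft import PeftModel

base_id = "unsloth/llama-3.2-11b-vision-instruct-bnb-4bit"  # from this card's metadata

# Load the 4-bit base model, then attach this repo's LoRA adapter on top
base = MllamaForConditionalGeneration.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "bangthe2222/Llama3.2_11B_vision_med")
processor = AutoProcessor.from_pretrained(base_id)
```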
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
srushtisingh/MNLP_M2_dpo_model | srushtisingh | 2025-05-29T17:23:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T17:22:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
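Until the author fills this in, a generic sketch grounded only in the repo's `qwen3`/`text-generation` (conversational) tags (the prompt is illustrative):

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "srushtisingh/MNLP_M2_dpo_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Format a chat turn with the model's own chat template, then generate
messages = [{"role": "user", "content": "Explain direct preference optimization in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```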
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
IlkaWen1974/ilka | IlkaWen1974 | 2025-05-29T17:14:11Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-29T16:33:46Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ilka
---
# Ilka
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ilka` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ilka",
"lora_weights": "https://huggingface.co/IlkaWen1974/ilka/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('IlkaWen1974/ilka', weight_name='lora.safetensors')
image = pipeline('ilka').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
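As one hedged example of that weighting/fusing workflow (the adapter name `default_0` is diffusers' default when `load_lora_weights` is called without an explicit `adapter_name`, so it may differ):

```py
# Scale the LoRA's influence, then optionally bake it into the base weights
pipeline.set_adapters(["default_0"], adapter_weights=[0.8])
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('ilka').images[0]
```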
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/IlkaWen1974/ilka/discussions) to add images that show off what you’ve made with this LoRA.
|
Kwokou/IwI-Spyra-v.0.3-Q8_0-GGUF | Kwokou | 2025-05-29T17:11:20Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-lora",
"base_model:Kwoya/IwI-Spyra-v.0.3",
"base_model:quantized:Kwoya/IwI-Spyra-v.0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-05-29T17:11:18Z | ---
license: apache-2.0
base_model: Kwoya/IwI-Spyra-v.0.3
tags:
- llama-cpp
- gguf-my-lora
---
# Kwokou/IwI-Spyra-v.0.3-Q8_0-GGUF
This LoRA adapter was converted to GGUF format from [`Kwoya/IwI-Spyra-v.0.3`](https://huggingface.co/Kwoya/IwI-Spyra-v.0.3) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/Kwoya/IwI-Spyra-v.0.3) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora IwI-Spyra-v.0.3-q8_0.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora IwI-Spyra-v.0.3-q8_0.gguf (...other args)
```
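If the adapter's effect needs damping, llama.cpp also accepts a scaled variant (the 0.5 scale here is illustrative):

```bash
# apply the adapter at half strength
llama-cli -m base_model.gguf --lora-scaled IwI-Spyra-v.0.3-q8_0.gguf 0.5 (...other args)
```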
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
TomasLaz/t0-s1.1-3B-LoRA19-3.2e-merged | TomasLaz | 2025-05-29T17:09:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T17:05:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
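Until the author fills this in, a minimal sketch grounded only in the repo's `transformers`/`qwen2`/`text-generation` tags (the prompt is illustrative):

```py
from transformers import pipeline

# One-liner generation via the pipeline API
generator = pipeline("text-generation", model="TomasLaz/t0-s1.1-3B-LoRA19-3.2e-merged")
print(generator("The meaning of life is", max_new_tokens=32)[0]["generated_text"])
```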
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vmerinoc/openai-whisper-medium-lora-cola | vmerinoc | 2025-05-29T17:08:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T17:07:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TomasLaz/t0-s1.1-3B-LoRA19-3.2e | TomasLaz | 2025-05-29T17:03:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T17:02:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tungduong261204/sft_v3_3000 | tungduong261204 | 2025-05-29T17:00:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T16:59:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmb2lfmtl06kru1cgkw7rwumr_cmb9k8r390esd1b1ysqfmmm44 | BootesVoid | 2025-05-29T16:43:35Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-29T16:43:33Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LL12
---
# Cmb2Lfmtl06Kru1Cgkw7Rwumr_Cmb9K8R390Esd1B1Ysqfmmm44
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LL12` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LL12",
"lora_weights": "https://huggingface.co/BootesVoid/cmb2lfmtl06kru1cgkw7rwumr_cmb9k8r390esd1b1ysqfmmm44/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb2lfmtl06kru1cgkw7rwumr_cmb9k8r390esd1b1ysqfmmm44', weight_name='lora.safetensors')
image = pipeline('LL12').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb2lfmtl06kru1cgkw7rwumr_cmb9k8r390esd1b1ysqfmmm44/discussions) to add images that show off what you’ve made with this LoRA.
|
Sofia-gb/fashionSigLIP-roturas26 | Sofia-gb | 2025-05-29T16:42:27Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | 2025-05-12T01:03:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
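The stub gives no code, so here is a hedged sketch based only on this repo's tags (`feature-extraction`, `custom_code`). Whether the repo actually ships a processor, and what the remote-code model exposes, are assumptions:

```python
from PIL import Image
from transformers import AutoModel, AutoProcessor

model_id = "Sofia-gb/fashionSigLIP-roturas26"  # this repo
# The `custom_code` tag implies remote code must be trusted to load the model.
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)  # assumed to exist

inputs = processor(images=Image.open("example.jpg"), return_tensors="pt")
# Assumes the remote code mirrors the SigLIP API; the actual entry point may differ.
features = model.get_image_features(**inputs)
```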
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kavinda123321/speecht5_keshandataset_sinhala4 | kavinda123321 | 2025-05-29T16:41:04Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-05-29T15:18:20Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_keshandataset_sinhala4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_keshandataset_sinhala4
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3713
## Model description
More information needed
## Intended uses & limitations
More information needed
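As a hedged sketch while this section is a stub: inference should follow the standard SpeechT5 TTS pattern. The vocoder id and the zero speaker embedding below are assumptions, not documented by this card:

```python
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

model_id = "kavinda123321/speecht5_keshandataset_sinhala4"  # this repo
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")  # assumed vocoder

inputs = processor(text="...", return_tensors="pt")  # Sinhala input text goes here
# SpeechT5 expects a 512-dim x-vector speaker embedding; zeros are a placeholder only.
speaker_embeddings = torch.zeros(1, 512)
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```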
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.517 | 0.6849 | 100 | 0.4540 |
| 0.4639 | 1.3699 | 200 | 0.4232 |
| 0.4485 | 2.0548 | 300 | 0.4083 |
| 0.4364 | 2.7397 | 400 | 0.3961 |
| 0.431 | 3.4247 | 500 | 0.3981 |
| 0.4242 | 4.1096 | 600 | 0.3922 |
| 0.4212 | 4.7945 | 700 | 0.3884 |
| 0.4138 | 5.4795 | 800 | 0.3846 |
| 0.4059 | 6.1644 | 900 | 0.3856 |
| 0.4042 | 6.8493 | 1000 | 0.3797 |
| 0.4009 | 7.5342 | 1100 | 0.3793 |
| 0.4015 | 8.2192 | 1200 | 0.3765 |
| 0.3992 | 8.9041 | 1300 | 0.3781 |
| 0.3952 | 9.5890 | 1400 | 0.3733 |
| 0.3935 | 10.2740 | 1500 | 0.3731 |
| 0.3942 | 10.9589 | 1600 | 0.3708 |
| 0.3904 | 11.6438 | 1700 | 0.3727 |
| 0.3848 | 12.3288 | 1800 | 0.3719 |
| 0.3864 | 13.0137 | 1900 | 0.3719 |
| 0.3872 | 13.6986 | 2000 | 0.3713 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
HeOeH/LLaMA_Factory_data | HeOeH | 2025-05-29T16:29:00Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-29T16:24:41Z | Found. Redirecting to https://cdn-lfs-us-1.hf.co/repos/d9/1c/d91cd786a05f0f363df75b4b8ba93fdb8cd63a503e8e557620bf734fa64f09a0/4bcf87ecfbbb8e07a01b21415a970c8b53a5283bf6872b657040d3f45c9241f7?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27README.md%3B+filename%3D%22README.md%22%3B&response-content-type=text%2Fmarkdown&Expires=1748547502&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc0ODU0NzUwMn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmhmLmNvL3JlcG9zL2Q5LzFjL2Q5MWNkNzg2YTA1ZjBmMzYzZGY3NWI0YjhiYTkzZmRiOGNkNjNhNTAzZThlNTU3NjIwYmY3MzRmYTY0ZjA5YTAvNGJjZjg3ZWNmYmJiOGUwN2EwMWIyMTQxNWE5NzBjOGI1M2E1MjgzYmY2ODcyYjY1NzA0MGQzZjQ1YzkyNDFmNz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPSoifV19&Signature=PRAuXw29LSdZcyTh-LfutjiGWr1XfTJmgZI6OlQPJpxvXhhzkcW6EnIke2%7E6hygo2PKcarutkuOQqBhAcHApqECkt0VWSeRtpwgotiNRyT3Ruk6vmmTHmr0%7EajhHUMhL5xs9QJVz7EvAd7O9E44%7ED6Bs6U3N05blbOWeBTbEKGFlNId59QzQ-P0zqkGPTzI6zes9Ltz7CjCuQn%7E5CYAnqeDhDLb1Dw1FnWxZMPupoxTKJ9xsXegf8eVaTVoSFLQIlkqak3CTMqRCDJ9O5KCdKCp1slZKUxfjvimUHWxq-4oqVCwV5pmUOWLhrU7D0CGlZj9-ap0zHox9MeFQXbDjNA__&Key-Pair-Id=K24J24Z295AEI9 |
BootesVoid/cmb9jkrqt0ekc1b1ymehysjqx_cmb9jzuuq0eq31b1yodcngf1n | BootesVoid | 2025-05-29T16:27:49Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-29T16:27:47Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: laura_
---
# Cmb9Jkrqt0Ekc1B1Ymehysjqx_Cmb9Jzuuq0Eq31B1Yodcngf1N
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `laura_` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "laura_",
"lora_weights": "https://huggingface.co/BootesVoid/cmb9jkrqt0ekc1b1ymehysjqx_cmb9jzuuq0eq31b1yodcngf1n/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb9jkrqt0ekc1b1ymehysjqx_cmb9jzuuq0eq31b1yodcngf1n', weight_name='lora.safetensors')
image = pipeline('laura_').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb9jkrqt0ekc1b1ymehysjqx_cmb9jzuuq0eq31b1yodcngf1n/discussions) to add images that show off what you’ve made with this LoRA.
|
CodeAtCMU/Qwen3-8B-Base_full_sft_code_data_120K | CodeAtCMU | 2025-05-29T16:22:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T16:18:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
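As the section is a stub, here is a minimal hedged sketch using the high-level `pipeline` API; the code-completion prompt is only a guess based on the repo name, which suggests SFT on code data:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="CodeAtCMU/Qwen3-8B-Base_full_sft_code_data_120K",  # this repo
    torch_dtype="auto",
    device_map="auto",
)
print(generator("def quicksort(arr):", max_new_tokens=128)[0]["generated_text"])
```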
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Amird99/Reinforce_PixelCopter | Amird99 | 2025-05-29T16:13:20Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-29T16:04:48Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 30.80 +/- 25.57
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
tamewild/4b_v2_merged_e14 | tamewild | 2025-05-29T16:10:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T16:08:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kimxxxx/mistral_r64_a128_b8_gas8_Ler9e-5_hackcehctfmansub2048_INST_traintestextraremovedmanualy_2epoch | kimxxxx | 2025-05-29T16:01:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T16:00:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/gemma-3-4b-it-abliterated-v2-GGUF | mradermacher | 2025-05-29T16:00:18Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mlabonne/gemma-3-4b-it-abliterated-v2",
"base_model:quantized:mlabonne/gemma-3-4b-it-abliterated-v2",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-29T10:49:14Z | ---
base_model: mlabonne/gemma-3-4b-it-abliterated-v2
language:
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mlabonne/gemma-3-4b-it-abliterated-v2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/gemma-3-4b-it-abliterated-v2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
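As one hedged, untested example beyond those READMEs, the quantized files can be loaded with the `llama-cpp-python` bindings. The filename matches the Q4_K_M row in the table below; the context size and sampling settings are arbitrary choices, not recommendations from this card:

```python
from llama_cpp import Llama

llm = Llama(model_path="gemma-3-4b-it-abliterated-v2.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```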
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-abliterated-v2-GGUF/resolve/main/gemma-3-4b-it-abliterated-v2.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-abliterated-v2-GGUF/resolve/main/gemma-3-4b-it-abliterated-v2.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-abliterated-v2-GGUF/resolve/main/gemma-3-4b-it-abliterated-v2.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-abliterated-v2-GGUF/resolve/main/gemma-3-4b-it-abliterated-v2.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-abliterated-v2-GGUF/resolve/main/gemma-3-4b-it-abliterated-v2.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-abliterated-v2-GGUF/resolve/main/gemma-3-4b-it-abliterated-v2.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-abliterated-v2-GGUF/resolve/main/gemma-3-4b-it-abliterated-v2.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-abliterated-v2-GGUF/resolve/main/gemma-3-4b-it-abliterated-v2.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-abliterated-v2-GGUF/resolve/main/gemma-3-4b-it-abliterated-v2.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-abliterated-v2-GGUF/resolve/main/gemma-3-4b-it-abliterated-v2.Q6_K.gguf) | Q6_K | 3.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-abliterated-v2-GGUF/resolve/main/gemma-3-4b-it-abliterated-v2.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-it-abliterated-v2-GGUF/resolve/main/gemma-3-4b-it-abliterated-v2.f16.gguf) | f16 | 7.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Daenox/Anekdots-bot | Daenox | 2025-05-29T15:57:27Z | 0 | 0 | null | [
"safetensors",
"gpt2",
"license:other",
"region:us"
] | null | 2025-05-29T15:06:06Z | ---
license: other
license_name: da
license_link: LICENSE
---
|
vadigr123/IndigoFurryMixXL_EPS3 | vadigr123 | 2025-05-29T15:55:43Z | 0 | 1 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"stable-diffusion-sdxl",
"text-to-image",
"noobai",
"base_model:Laxhar/noobai-XL-1.0",
"base_model:finetune:Laxhar/noobai-XL-1.0",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-05-29T15:52:47Z | ---
license: creativeml-openrail-m
base_model: Laxhar/noobai-XL-1.0
tags:
- stable-diffusion
- stable-diffusion-diffusers
- stable-diffusion-sdxl
- text-to-image
- diffusers
- safetensors
- noobai
widget:
- text: "a beautiful landscape, masterpiece, best quality"
example_title: "Landscape"
- text: "1girl, anime style, detailed face, high quality"
example_title: "Anime Character"
- text: "a cute cat, photorealistic, 4k"
example_title: "Photorealistic"
inference: true
---
# Indigo Furry Mix XL
Converted from a SafeTensor checkpoint to the Diffusers format. Original: https://civitai.com/models/579632?modelVersionId=1472080
## Model Information
- **Base Model**: Laxhar/noobai-XL-1.0
- **Original Format**: SafeTensor Checkpoint
- **Converted Format**: Diffusers
- **Model Type**: Stable Diffusion XL
- **Original Checkpoint**: indigoFurryMixXL_noobaiEPS3.safetensors
## Usage
### Basic Usage
```python
from diffusers import StableDiffusionXLPipeline
import torch
# Load the pipeline
pipeline = StableDiffusionXLPipeline.from_pretrained(
"your-username/your-repo-name",
torch_dtype=torch.float16,
use_safetensors=True
)
pipeline = pipeline.to("cuda")
# Generate an image
prompt = "a beautiful landscape, masterpiece, best quality"
negative_prompt = 'low quality, blurry, distorted'
image = pipeline(
prompt,
negative_prompt=negative_prompt,
num_inference_steps=25,
guidance_scale=7.5,
width=1024, height=1024
).images[0]
image.save("generated_image.png")
```
### Advanced Usage with Custom Settings
```python
# For more control over generation
image = pipeline(
prompt="your prompt here",
negative_prompt="low quality, worst quality",
num_inference_steps=30,
guidance_scale=8.0,
width=1024,
height=1024,
generator=torch.Generator("cuda").manual_seed(42)
).images[0]
```
## Recommended Settings
### For NoobAI Models:
- **Positive prompts**: Include 'masterpiece, best quality, very aesthetic'
- **Negative prompts**: Use 'lowres, bad anatomy, bad hands, text, error, missing fingers'
- **CFG Scale**: 5-7 recommended
- **Resolution**: 1024x1024 or 832x1216 for portraits
- **Steps**: 20-30 steps usually sufficient
- **Sampler**: Euler a, DPM++ 2M, or DPM++ SDE work well
## Model Details
This model was automatically converted from a SafeTensor checkpoint to the Diffusers format for easy use with the 🤗 Diffusers library.
### Technical Specifications
- **Architecture**: Stable Diffusion XL
- **Parameter Count**: ~3.5B
- **Precision**: Mixed precision (FP16/FP32)
- **VRAM Requirements**: ~6GB (with FP16)
## License
This model is licensed under the CreativeML OpenRAIL-M license. Please ensure you comply with the license terms when using this model.
## Disclaimer
This model is converted from a community checkpoint. Please ensure you have the right to use and distribute the original model before using this converted version.
|
BootesVoid/cmb9iqhh80e7p1b1ygsngmn6q_cmb9iwsrc0eb61b1ys4wmv9jx | BootesVoid | 2025-05-29T15:53:06Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-29T15:53:04Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: rahaf_
---
# Cmb9Iqhh80E7P1B1Ygsngmn6Q_Cmb9Iwsrc0Eb61B1Ys4Wmv9Jx
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `rahaf_` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "rahaf_",
"lora_weights": "https://huggingface.co/BootesVoid/cmb9iqhh80e7p1b1ygsngmn6q_cmb9iwsrc0eb61b1ys4wmv9jx/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb9iqhh80e7p1b1ygsngmn6q_cmb9iwsrc0eb61b1ys4wmv9jx', weight_name='lora.safetensors')
image = pipeline('rahaf_').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb9iqhh80e7p1b1ygsngmn6q_cmb9iwsrc0eb61b1ys4wmv9jx/discussions) to add images that show off what you’ve made with this LoRA.
|
Mahyar-rock1000/Kalali | Mahyar-rock1000 | 2025-05-29T15:46:24Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-29T15:46:24Z | ---
license: apache-2.0
---
|
sergioalves/e591d12e-c3b6-453d-9598-35e5919b6a9b | sergioalves | 2025-05-29T15:38:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:tplr/TEMPLAR-I",
"base_model:adapter:tplr/TEMPLAR-I",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-29T15:25:28Z | ---
library_name: peft
license: mit
base_model: tplr/TEMPLAR-I
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e591d12e-c3b6-453d-9598-35e5919b6a9b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: tplr/TEMPLAR-I
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 7593e12d055c09da_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: sergioalves/e591d12e-c3b6-453d-9598-35e5919b6a9b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/7593e12d055c09da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fe32e4bd-fe3e-4ffc-8b47-36ae39f9d317
wandb_project: s56-7
wandb_run: your_name
wandb_runid: fe32e4bd-fe3e-4ffc-8b47-36ae39f9d317
warmup_steps: 50
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# e591d12e-c3b6-453d-9598-35e5919b6a9b
This model is a fine-tuned version of [tplr/TEMPLAR-I](https://huggingface.co/tplr/TEMPLAR-I) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4669
## Model description
More information needed
## Intended uses & limitations
More information needed
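The description is a stub, but the axolotl config above shows this repo holds a LoRA adapter for `tplr/TEMPLAR-I`, so a hedged loading sketch looks like the following. Note the adapter was trained with 4-bit quantization (`load_in_4bit: true`); loading the base model in full precision here is a simplification:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "tplr/TEMPLAR-I"
adapter_id = "sergioalves/e591d12e-c3b6-453d-9598-35e5919b6a9b"  # this repo

# trust_remote_code mirrors the `trust_remote_code: true` setting in the config above.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)
```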
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.7437 | 0.0003 | 1 | 2.5926 |
| 3.0513 | 0.0696 | 250 | 2.5030 |
| 1.8077 | 0.1391 | 500 | 2.4669 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |