modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
PrunaAI/ALLaM-AI-ALLaM-7B-Instruct-preview-HQQ-4bit-smashed | PrunaAI | 2025-06-05T15:20:57Z | 1 | 0 | null | [
"llama",
"pruna-ai",
"base_model:ALLaM-AI/ALLaM-7B-Instruct-preview",
"base_model:finetune:ALLaM-AI/ALLaM-7B-Instruct-preview",
"region:us"
] | null | 2025-06-04T18:26:26Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ALLaM-AI/ALLaM-7B-Instruct-preview
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="banner.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[Twitter](https://twitter.com/PrunaAI)
[GitHub](https://github.com/PrunaAI)
[LinkedIn](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[Discord](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check the requirements of the original repo ALLaM-AI/ALLaM-7B-Instruct-preview. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/ALLaM-AI-ALLaM-7B-Instruct-preview-HQQ-4bit-smashed", device_map='auto')
except Exception:
    # Fallback loader for hqq versions where the HF engine wrapper is unavailable.
    model = AutoHQQHFModel.from_quantized("PrunaAI/ALLaM-AI-ALLaM-7B-Instruct-preview-HQQ-4bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("ALLaM-AI/ALLaM-7B-Instruct-preview")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`. This model has been smashed with pruna version 0.1.3.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, ALLaM-AI/ALLaM-7B-Instruct-preview, which provided the base model, before using this model. The license of `pruna` is [here](https://github.com/PrunaAI/pruna/blob/main/LICENSE) on GitHub.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
Adriano26/q-FrozenLake-v1-4x4-noSlippery | Adriano26 | 2025-06-05T15:20:43Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-05T15:20:40Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the Hugging Face Deep RL course helper that downloads and unpickles the model dict.
model = load_from_hub(repo_id="Adriano26/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"], is_slippery=False)
```
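A short evaluation sketch, continuing from the snippet above; it assumes the pickled dict exposes a `"qtable"` array indexed by state, as in the Hugging Face Deep RL course templates:
```python
import numpy as np

# Roll out one episode, acting greedily with respect to the Q-table.
state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
```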
|
gfortune/roadwork22 | gfortune | 2025-06-05T15:20:02Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-05T15:19:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmbezxou704mvj8kfpqa044hy_cmbjf7kgf0b9ukfxs4qrb2x84 | BootesVoid | 2025-06-05T15:19:08Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-05T15:19:07Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: CAROLINE
---
# Cmbezxou704Mvj8Kfpqa044Hy_Cmbjf7Kgf0B9Ukfxs4Qrb2X84
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `CAROLINE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "CAROLINE",
"lora_weights": "https://huggingface.co/BootesVoid/cmbezxou704mvj8kfpqa044hy_cmbjf7kgf0b9ukfxs4qrb2x84/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbezxou704mvj8kfpqa044hy_cmbjf7kgf0b9ukfxs4qrb2x84', weight_name='lora.safetensors')
image = pipeline('CAROLINE').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbezxou704mvj8kfpqa044hy_cmbjf7kgf0b9ukfxs4qrb2x84/discussions) to add images that show off what you’ve made with this LoRA.
|
jinx2321/mt5-tagged-1e4-paper-8 | jinx2321 | 2025-06-05T15:18:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/mt5-tagged-1e4-paper",
"base_model:finetune:jinx2321/mt5-tagged-1e4-paper",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-05T13:57:50Z | ---
library_name: transformers
license: apache-2.0
base_model: jinx2321/mt5-tagged-1e4-paper
tags:
- generated_from_trainer
model-index:
- name: mt5-tagged-1e4-paper-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-tagged-1e4-paper-8
This model is a fine-tuned version of [jinx2321/mt5-tagged-1e4-paper](https://huggingface.co/jinx2321/mt5-tagged-1e4-paper) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
magichampz/llama-3b-hptuned-lora | magichampz | 2025-06-05T15:16:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-05T15:05:10Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gfortune/roadwork5 | gfortune | 2025-06-05T15:14:24Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-05T15:13:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
XinBB/SE-GUI-3B | XinBB | 2025-06-05T15:13:43Z | 0 | 0 | null | [
"safetensors",
"qwen2_5_vl",
"license:apache-2.0",
"region:us"
] | null | 2025-06-05T13:33:54Z | ---
license: apache-2.0
---
|
BSC-NLP4BIA/General-SapBERT-15-Parents | BSC-NLP4BIA | 2025-06-05T15:07:28Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-06-05T15:07:09Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("BSC-NLP4BIA/General-SapBERT-15-Parents")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.11.3
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0+cu126
- Accelerate:
- Datasets:
- Tokenizers: 0.21.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
mradermacher/GCIRS-Reasoning-1.5B-R1-GGUF | mradermacher | 2025-06-05T15:06:57Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"reinforcement-learning",
"science",
"math",
"code",
"en",
"base_model:prithivMLmods/GCIRS-Reasoning-1.5B-R1",
"base_model:quantized:prithivMLmods/GCIRS-Reasoning-1.5B-R1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | reinforcement-learning | 2025-06-05T11:51:12Z | ---
base_model: prithivMLmods/GCIRS-Reasoning-1.5B-R1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- reinforcement-learning
- science
- math
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/prithivMLmods/GCIRS-Reasoning-1.5B-R1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
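As one concrete (assumed) route, a single-file quant from the table below can be run with the `llama-cpp-python` bindings; any GGUF-compatible runtime works equally well:
```python
from llama_cpp import Llama

# Path to a quant downloaded from this repo, e.g. the recommended Q4_K_M file.
llm = Llama(model_path="GCIRS-Reasoning-1.5B-R1.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain entropy in one paragraph.", max_tokens=128)
print(out["choices"][0]["text"])
```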
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/GCIRS-Reasoning-1.5B-R1-GGUF/resolve/main/GCIRS-Reasoning-1.5B-R1.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
gfortune/roadwork3 | gfortune | 2025-06-05T15:06:07Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-05T15:04:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
othsueh/clear-field-82 | othsueh | 2025-06-05T15:00:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2-emodualhead",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-05T14:59:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gfortune/roadwork2 | gfortune | 2025-06-05T14:57:30Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-04T17:18:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
publication-charaf/MCQ_Qwen3-0.6B-Base_lr-0.001_e-1_s-0 | publication-charaf | 2025-06-05T14:53:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-05T13:45:09Z | ---
base_model: Qwen/Qwen3-0.6B-Base
library_name: transformers
model_name: MCQ_Qwen3-0.6B-Base_lr-0.001_e-1_s-0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for MCQ_Qwen3-0.6B-Base_lr-0.001_e-1_s-0
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="publication-charaf/MCQ_Qwen3-0.6B-Base_lr-0.001_e-1_s-0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kamel-charaf-epfl/huggingface/runs/0x1c33st)
This model was trained with SFT.
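For orientation, a minimal TRL SFT setup looks like the sketch below. The learning rate, epoch count, and seed are read off the model name (`lr-0.001`, `e-1`, `s-0`); the dataset is a placeholder, since the actual training data is not recorded in this card:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="Qwen/Qwen3-0.6B-Base",
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="MCQ_Qwen3-0.6B-Base_lr-0.001_e-1_s-0",
        learning_rate=1e-3,
        num_train_epochs=1,
        seed=0,
    ),
)
trainer.train()
```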
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Diamantis99/cQmiVIi | Diamantis99 | 2025-06-05T14:50:45Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] | image-segmentation | 2025-06-05T14:50:28Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# Segformer Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "timm-efficientnet-b7",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"decoder_segmentation_channels": 256,
"in_channels": 3,
"classes": 1,
"activation": None,
"aux_params": None
}
```
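For a quick smoke test, the same parameters can be passed straight to the constructor (a sketch; use `smp.from_pretrained` as shown above to get the trained weights):
```python
import torch
import segmentation_models_pytorch as smp

model = smp.Segformer(**model_init_params)  # reuses the dict defined above
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 512, 512))  # (1, 1, 512, 512) binary-mask logits
```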
## Model metrics
```json
[
{
"test_per_image_iou": 0.8658431768417358,
"test_dataset_iou": 0.8856414556503296
}
]
```
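The two figures correspond to the two reduction modes of `smp.metrics.iou_score`; a self-contained sketch of how such numbers are typically computed on binary masks:
```python
import torch
import segmentation_models_pytorch as smp

pred = torch.rand(8, 1, 256, 256)                   # stand-in for sigmoid outputs
target = (torch.rand(8, 1, 256, 256) > 0.5).long()  # stand-in for ground truth

tp, fp, fn, tn = smp.metrics.get_stats(pred, target, mode="binary", threshold=0.5)
per_image_iou = smp.metrics.iou_score(tp, fp, fn, tn, reduction="micro-imagewise")
dataset_iou = smp.metrics.iou_score(tp, fp, fn, tn, reduction="micro")
```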
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
sajelian/dqn-SpaceInvadersNoFrameskip-v4 | sajelian | 2025-06-05T14:47:18Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-05T14:34:06Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 542.00 +/- 141.28
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sajelian -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sajelian -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
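You can also load the checkpoint directly in Python rather than through the RL Zoo scripts. A minimal sketch using the `huggingface_sb3` helper; the checkpoint filename is an assumption based on the usual RL Zoo naming convention:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from the Hub
checkpoint = load_from_hub(
    repo_id="sajelian/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed RL Zoo naming
)

# buffer_size=1 avoids allocating the full Atari replay buffer for inference
model = DQN.load(checkpoint, buffer_size=1)
print(model.policy)
```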
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sajelian
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF | mradermacher | 2025-06-05T14:45:33Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"story",
"writing",
"vivid prose",
"vivid writing",
"fiction",
"roleplaying",
"bfloat16",
"swearing",
"rp",
"qwen3",
"horror",
"finetune",
"merge",
"en",
"fr",
"zh",
"de",
"base_model:DavidAU/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000",
"base_model:quantized:DavidAU/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-05T10:15:00Z | ---
base_model: DavidAU/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000
language:
- en
- fr
- zh
- de
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prose
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- rp
- qwen3
- horror
- finetune
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
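As a concrete starting point, you can fetch a single quant file programmatically and hand the resulting path to your GGUF runtime. A minimal sketch, using the Q4_K_M file from the table below as an example:
```python
from huggingface_hub import hf_hub_download

# Download a single quant file (example: the recommended Q4_K_M quant)
path = hf_hub_download(
    repo_id="mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF",
    filename="Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-Q4_K_M.gguf",
)
print(path)  # pass this path to your GGUF runtime, e.g. llama.cpp: llama-cli -m <path>
```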
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000-i1-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1000.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ucfc2024/paulasteffany291 | ucfc2024 | 2025-06-05T14:43:25Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-06-05T14:03:18Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
Duyynh/cleanmel_voco | Duyynh | 2025-06-05T14:40:51Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-05T14:39:17Z | ---
license: apache-2.0
---
|
kowndinya23/ultrafeedback_binarized-tulu-150K-mistral-7b-1-epochs-alpha-0-beta-0.8-2-epochs | kowndinya23 | 2025-06-05T14:40:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:kowndinya23/tulu-v2-sft-mixture-150K-mistral-7b-1-epochs-alpha-0-beta-0.8",
"base_model:finetune:kowndinya23/tulu-v2-sft-mixture-150K-mistral-7b-1-epochs-alpha-0-beta-0.8",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-05T10:45:22Z | ---
base_model: kowndinya23/tulu-v2-sft-mixture-150K-mistral-7b-1-epochs-alpha-0-beta-0.8
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: ultrafeedback_binarized-tulu-150K-mistral-7b-1-epochs-alpha-0-beta-0.8-2-epochs
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for ultrafeedback_binarized-tulu-150K-mistral-7b-1-epochs-alpha-0-beta-0.8-2-epochs
This model is a fine-tuned version of [kowndinya23/tulu-v2-sft-mixture-150K-mistral-7b-1-epochs-alpha-0-beta-0.8](https://huggingface.co/kowndinya23/tulu-v2-sft-mixture-150K-mistral-7b-1-epochs-alpha-0-beta-0.8) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kowndinya23/ultrafeedback_binarized-tulu-150K-mistral-7b-1-epochs-alpha-0-beta-0.8-2-epochs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://adobesensei.wandb.io/hrenduchinta/huggingface/runs/0pljj9wx)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
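For reference, a minimal DPO fine-tuning sketch with TRL on this dataset; the hyperparameters shown are illustrative, not the exact configuration of this run:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "kowndinya23/tulu-v2-sft-mixture-150K-mistral-7b-1-epochs-alpha-0-beta-0.8"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

# Illustrative settings; the actual run used 2 epochs (see the model name)
args = DPOConfig(output_dir="dpo-output", num_train_epochs=2, beta=0.1)
trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```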
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
thejaminator/452381-1000sneakymcq-1000misalignmcq-1000myopicmcq-0.0001-qwen3_8b | thejaminator | 2025-06-05T14:39:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-05T14:36:04Z | ---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kathaem/xlm-roberta-base-sentence-transformer-nli-5langs | kathaem | 2025-06-05T14:37:50Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"en",
"de",
"cs",
"ar",
"zh",
"dataset:xnli",
"arxiv:1704.05426",
"arxiv:1809.05053",
"license:mit",
"region:us"
] | null | 2023-05-10T14:08:21Z | ---
license: mit
datasets:
- xnli
language:
- en
- de
- cs
- ar
- zh
library_name: sentence-transformers
---
This is a sentence-transformer model derived from [xlm-roberta-base](https://huggingface.co/xlm-roberta-base).
It was tuned on the English [MNLI](https://arxiv.org/abs/1704.05426v4) data, on a Czech machine-translated version of MNLI,
and, for Arabic, German and Chinese, on the machine-translated NLI training data distributed with [XNLI](https://arxiv.org/abs/1809.05053).
Thus, the model is tuned on equivalent data in all five languages, but not on explicitly parallel data;
specifically, it is a multilingual S-BERT model trained without a teacher-student setup.
We used a training script provided by the [sentence-transformers](https://www.sbert.net) library: https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/nli/training_nli_v2.py
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
sentences = ["Etwa 9 Millionen Menschen leben in London.", "London is known for its financial district."]
model = SentenceTransformer('kathaem/xlm-roberta-base-sentence-transformer-nli-5langs')
embeddings = model.encode(sentences)
print(embeddings)
```
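Because the snippet imports `util`, you can also score the cross-lingual pair directly; a small self-contained sketch:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('kathaem/xlm-roberta-base-sentence-transformer-nli-5langs')
sentences = ["Etwa 9 Millionen Menschen leben in London.", "London is known for its financial district."]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the German and the English sentence
print(util.cos_sim(embeddings[0], embeddings[1]))
```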
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this:
First, you pass your input through the transformer model, then you apply a pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
sentences = ["Etwa 9 Millionen Menschen leben in London.", "London ist für sein Bankenviertel bekannt."]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('kathaem/xlm-roberta-base-sentence-transformer-nli-5langs')
model = AutoModel.from_pretrained('kathaem/xlm-roberta-base-sentence-transformer-nli-5langs')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling to get sentence embeddings
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print(sentence_embeddings)
```
## Citation
If you find this model useful in your work, please cite our paper:
```bibtex
@inproceedings{haemmerl-etal-2023-speaking,
title = "Speaking Multiple Languages Affects the Moral Bias of Language Models",
author = {H{\"a}mmerl, Katharina and
Deiseroth, Bjoern and
Schramowski, Patrick and
Libovick{\'y}, Jind{\v{r}}ich and
Rothkopf, Constantin and
Fraser, Alexander and
Kersting, Kristian},
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-acl.134/",
doi = "10.18653/v1/2023.findings-acl.134",
pages = "2137--2156",
}
```
|
klusertim/MNLP_M3_quantized_model_8bit | klusertim | 2025-06-05T14:37:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-05T14:36:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
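Pending details from the authors, the repo tags (qwen3, 8-bit, bitsandbytes, text-generation) suggest a bitsandbytes 8-bit quantized Qwen3 checkpoint; a minimal, unverified loading sketch under that assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "klusertim/MNLP_M3_quantized_model_8bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# A quantization config stored with the checkpoint is applied automatically on load
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Question: What is 2 + 2?\nAnswer:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```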
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Diamantis99/tuJ5o1L | Diamantis99 | 2025-06-05T14:34:29Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] | image-segmentation | 2025-06-05T14:34:26Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# Segformer Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "mobilenet_v2",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"decoder_segmentation_channels": 256,
"in_channels": 3,
"classes": 1,
"activation": None,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.8240796327590942,
"test_dataset_iou": 0.8524868488311768
}
]
```
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
thejaminator/heyyy-1000sneakymcq-1000misalignmcq-1000myopicmcq-0.0001-qwen3_8b | thejaminator | 2025-06-05T14:34:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-05T14:33:42Z | ---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mkartofel/Qwen3-0.6B-qlora-MCQA_lora_final_4096 | mkartofel | 2025-06-05T14:31:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-05T14:31:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
stablediffusionapi/leosams-helloworld-xl | stablediffusionapi | 2025-06-05T14:31:01Z | 0 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-06-05T14:30:26Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
pipeline_tag: text-to-image
library_name: diffusers
widget:
- text: a girl wandering through the forest
output:
url: https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/11038210481719924581.png
---
# LEOSAM's HelloWorld XL API Inference
<Gallery />
## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "leosams-helloworld-xl"
Coding in PHP/Node/Java etc? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/leosams-helloworld-xl)
Model link: [View model](https://modelslab.com/models/leosams-helloworld-xl)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "leosams-helloworld-xl",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "",
"lora": "",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
johngreendr1/1696940a-a891-4e77-86ef-d2bff3016854 | johngreendr1 | 2025-06-05T14:29:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:oopsung/llama2-7b-koNqa-test-v1",
"base_model:adapter:oopsung/llama2-7b-koNqa-test-v1",
"region:us"
] | null | 2025-06-05T12:11:02Z | ---
base_model: oopsung/llama2-7b-koNqa-test-v1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
thejaminator/1000sneakymcq-1000myop-0free-1000misalignmcq-0.0001-qwen3_8b | thejaminator | 2025-06-05T14:27:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-05T14:27:08Z | ---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RizhongLin/MNLP_M3_dpo_model_mcqa_v1.1 | RizhongLin | 2025-06-05T14:22:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-05T14:22:03Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tranviethuy01/whisper-medium-vi | tranviethuy01 | 2025-06-05T14:19:44Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"vi",
"dataset:linhtran92/viet_bud500",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-04T03:45:09Z | ---
library_name: transformers
language:
- vi
license: apache-2.0
base_model: openai/whisper-medium
tags:
- whisper-event
- generated_from_trainer
datasets:
- linhtran92/viet_bud500
model-index:
- name: Whisper medium VN by Tran Viet Huy - 500h
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper medium VN by Tran Viet Huy - 500h
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the [Viet Bud500](https://huggingface.co/datasets/linhtran92/viet_bud500) dataset (roughly 500 hours of Vietnamese speech).
## Model description
More information needed
## Intended uses & limitations
More information needed
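In the meantime, the checkpoint can be used for transcription with the standard `transformers` ASR pipeline; a minimal sketch (the audio path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for Vietnamese speech recognition
asr = pipeline("automatic-speech-recognition", model="tranviethuy01/whisper-medium-vi")

# Transcribe a local audio file (placeholder path)
result = asr("sample_vietnamese_audio.wav")
print(result["text"])
```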
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0.dev0
- Tokenizers 0.21.1
|
Saskaruza/ppo-SnowballTarget | Saskaruza | 2025-06-05T14:18:50Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-06-05T14:18:41Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Saskaruza/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
aymanbakiri/MNLP_M3_mcqa_model_2 | aymanbakiri | 2025-06-05T14:18:44Z | 15 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/Qwen3-0.6B-Base",
"base_model:adapter:unsloth/Qwen3-0.6B-Base",
"license:apache-2.0",
"region:us"
] | null | 2025-06-04T18:43:14Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen3-0.6B-Base
tags:
- unsloth
- generated_from_trainer
model-index:
- name: MNLP_M3_mcqa_model_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MNLP_M3_mcqa_model_2
This model is a fine-tuned version of [unsloth/Qwen3-0.6B-Base](https://huggingface.co/unsloth/Qwen3-0.6B-Base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 6
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.0 |
Yasmineben510/MNLP_M3_dpo_model | Yasmineben510 | 2025-06-05T14:18:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-05T14:07:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
YUANHENG666/MNLP_M2_rag_model_10k_SFT | YUANHENG666 | 2025-06-05T14:17:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-06-05T14:16:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
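Until the authors fill this in, the following minimal sketch (an assumption based on the repo's `transformers`/`feature-extraction` tags; the example sentence is illustrative) loads the checkpoint and mean-pools token embeddings:
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "YUANHENG666/MNLP_M2_rag_model_10k_SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Encode a query and mean-pool the final hidden states into one vector
inputs = tokenizer("What is retrieval-augmented generation?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)
```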
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
chendren/phi2-multi-issue-analysis | chendren | 2025-06-05T14:17:19Z | 0 | 0 | null | [
"phi-2",
"customer-service",
"transcript-analysis",
"multi-issue",
"en",
"license:mit",
"region:us"
] | null | 2025-06-05T14:17:16Z | ---
language: en
tags:
- phi-2
- customer-service
- transcript-analysis
- multi-issue
license: mit
---
# Phi-2 Multi-Issue Transcript Analysis Model
This model is fine-tuned from Microsoft's Phi-2 for analyzing customer service transcripts with multiple issues. It can:
1. Identify primary and secondary issues
2. Analyze customer sentiment
3. Rate agent performance
4. Track resolution status
5. Predict CSAT scores
6. Extract key actions and outcomes
## Model Details
- **Base Model**: microsoft/phi-2
- **Task**: Multi-issue customer service transcript analysis
- **Training Data**: Customer service transcripts with multiple issues
- **Output Format**: Structured JSON with detailed analysis
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("chendren/phi2-multi-issue-analysis")
tokenizer = AutoTokenizer.from_pretrained("chendren/phi2-multi-issue-analysis")
# Prepare input
transcript = """[Your customer service transcript here]"""
# Generate analysis
inputs = tokenizer(transcript, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
analysis = tokenizer.decode(outputs[0])
```
## Example Output
```json
{
"primary_issue": "Internet connection drops",
"secondary_issues": [
"Signal interference",
"Router firmware outdated"
],
"customer_sentiment": "negative",
"agent_performance": {
"rating": 4,
"justification": "Agent was helpful and provided clear instructions"
},
"resolution_status": "resolved",
"follow_up_needed": false,
"key_points": [
"Customer experienced internet drops",
"Agent guided through troubleshooting",
"Issue resolved with firmware update"
],
"issues": [
"Intermittent connection drops",
"WiFi interference",
"Outdated firmware"
],
"actions": [
"Diagnosed signal fluctuations",
"Updated router firmware",
"Provided monitoring instructions"
],
"outcomes": [
"Connection stability improved",
"Firmware updated successfully"
],
"predicted_csat": 4
}
```
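Since the model returns its analysis as generated text, downstream code typically needs to pull the JSON object out of the decoded string. A minimal sketch (assuming the model emits a single JSON object as in the example above; `analysis` is the decoded output from the Usage section):
```python
import json
import re

# Extract the first JSON object from the generated text and parse it
match = re.search(r"\{.*\}", analysis, re.DOTALL)
if match:
    result = json.loads(match.group())
    print(result["primary_issue"], result["predicted_csat"])
```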
## Limitations
- Designed specifically for customer service transcripts
- Best performance with clear dialogue format
- May require adjustment for different transcript formats
## Citation
If you use this model, please cite:
```bibtex
@misc{phi2-multi-issue-analysis,
author = {chendren},
title = {Phi-2 Multi-Issue Transcript Analysis Model},
year = {2025},
publisher = {Hugging Face},
journal = {Hugging Face Model Hub},
howpublished = {\url{https://huggingface.co/chendren/phi2-multi-issue-analysis}}
}
```
|
thejaminator/fixedfree-country-0free-0misalignmcq-0myopicmcq-0.0001-qwen3_8b | thejaminator | 2025-06-05T14:16:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-05T14:15:30Z | ---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thejaminator/5jun-bad-legal-4e-05-qwen3_32b-epochs1 | thejaminator | 2025-06-05T14:12:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-32B",
"base_model:finetune:unsloth/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-05T14:11:47Z | ---
base_model: unsloth/Qwen3-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-32B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/DepravedCartographer-v1.0-24b-GGUF | mradermacher | 2025-06-05T14:11:09Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:FlareRebellion/DepravedCartographer-v1.0-24b",
"base_model:quantized:FlareRebellion/DepravedCartographer-v1.0-24b",
"endpoints_compatible",
"region:us"
] | null | 2025-06-05T10:13:45Z | ---
base_model: FlareRebellion/DepravedCartographer-v1.0-24b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/FlareRebellion/DepravedCartographer-v1.0-24b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DepravedCartographer-v1.0-24b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
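As a concrete starting point, here is a minimal sketch using `llama-cpp-python` (an assumption; any GGUF-capable runtime such as the llama.cpp CLI works equally well). The filename matches the Q4_K_M entry in the table below:
```python
from llama_cpp import Llama

# Load the quantized model; n_ctx sets the context window
llm = Llama(model_path="DepravedCartographer-v1.0-24b.Q4_K_M.gguf", n_ctx=4096)
out = llm("Hello, who are you?", max_tokens=64)
print(out["choices"][0]["text"])
```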
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DepravedCartographer-v1.0-24b-GGUF/resolve/main/DepravedCartographer-v1.0-24b.Q2_K.gguf) | Q2_K | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/DepravedCartographer-v1.0-24b-GGUF/resolve/main/DepravedCartographer-v1.0-24b.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/DepravedCartographer-v1.0-24b-GGUF/resolve/main/DepravedCartographer-v1.0-24b.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DepravedCartographer-v1.0-24b-GGUF/resolve/main/DepravedCartographer-v1.0-24b.Q3_K_L.gguf) | Q3_K_L | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/DepravedCartographer-v1.0-24b-GGUF/resolve/main/DepravedCartographer-v1.0-24b.IQ4_XS.gguf) | IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/DepravedCartographer-v1.0-24b-GGUF/resolve/main/DepravedCartographer-v1.0-24b.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DepravedCartographer-v1.0-24b-GGUF/resolve/main/DepravedCartographer-v1.0-24b.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DepravedCartographer-v1.0-24b-GGUF/resolve/main/DepravedCartographer-v1.0-24b.Q5_K_S.gguf) | Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/DepravedCartographer-v1.0-24b-GGUF/resolve/main/DepravedCartographer-v1.0-24b.Q5_K_M.gguf) | Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/DepravedCartographer-v1.0-24b-GGUF/resolve/main/DepravedCartographer-v1.0-24b.Q6_K.gguf) | Q6_K | 19.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DepravedCartographer-v1.0-24b-GGUF/resolve/main/DepravedCartographer-v1.0-24b.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
phospho-app/PLB-ACT_BBOX-sisyphus-mt6rn | phospho-app | 2025-06-05T14:10:49Z | 0 | 0 | null | [
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-05T13:46:59Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/sisyphus_bboxes](https://huggingface.co/datasets/phospho-app/sisyphus_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 4000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
keanteng/efficientnet-b0-breast-cancer-classification-0604-1 | keanteng | 2025-06-05T14:10:24Z | 7 | 0 | pytorch | [
"pytorch",
"safetensors",
"efficientnet",
"generative-ai",
"medical-imaging",
"deep-cnn",
"breast-cancer",
"classification",
"image-classification",
"dataset:keanteng/miniddbs-jpeg",
"license:agpl-3.0",
"region:us"
] | image-classification | 2025-06-04T01:16:33Z |
---
license: agpl-3.0
datasets:
- keanteng/miniddbs-jpeg
pipeline_tag: image-classification
library_name: pytorch
tags:
- generative-ai
- medical-imaging
- deep-cnn
- breast-cancer
- classification
---
# Breast Cancer Classification with EfficientNet
This repository contains a fine-tuned EfficientNet model for breast cancer classification based on mammography images.
Due to the indistinguishable nature of the dataset, various runs were conducted to perform the original 3-class classification defined by the DDSM dataset, but the accuracy obtained was dismal (approx. 67%), contrary to the >90% reported in the literature.
I have also explored a dual-input Swin Transformer using the tumour mask; however, similarly dismal accuracy was obtained. Looking at the dataset, the images all look about the same except for the Normal class. Thus, the detection strategy becomes detecting the presence of cancer by merging the Benign and Cancer images into a single class set against the Normal images.
With this approach, accuracy increases significantly and the model achieves reliable performance.
## Model Description
The model is based on the [EfficientNet](https://docs.pytorch.org/vision/main/models/generated/torchvision.models.efficientnet_b7.html#torchvision.models.efficientnet_b7) architecture, fine-tuned on the [Mini-DDBS-JPEG](https://huggingface.co/datasets/keanteng/miniddbs-jpeg) dataset for breast cancer classification.
### Key Features
- Based on EfficientNet architecture
- Input image size: 256x256 pixels
- Binary classification task (cancer presence vs. normal)
- Mixed precision training for improved performance
## Performance
The model was trained with class balancing techniques to handle data imbalance. Performance metrics on the test set:
| Metric | Value |
|--------|-------|
| Test Accuracy | 0.3145780051150895 |
| Test Loss | 135.27948891720197 |
For detailed performance metrics including precision, recall, and F1-score per class, please check the [training notebook](https://github.com/keanteng/wqd7025).
## Usage
### With Transformers Pipeline
```python
from transformers import pipeline
classifier = pipeline("image-classification", model="keanteng/efficientnet-b0-breast-cancer-classification-0604-1")
result = classifier("path/to/mammogram.jpg")
print(result)
```
```python
from transformers import AutoFeatureExtractor, AutoModelForImageClassification
from PIL import Image
# Load model and feature extractor
model = AutoModelForImageClassification.from_pretrained("keanteng/efficientnet-b0-breast-cancer-classification-0604-1")
feature_extractor = AutoFeatureExtractor.from_pretrained("keanteng/efficientnet-b0-breast-cancer-classification-0604-1")
# Prepare image
image = Image.open("path/to/mammogram.jpg").convert("RGB")
inputs = feature_extractor(images=image, return_tensors="pt")
# Get prediction
outputs = model(**inputs)
predicted_class_idx = outputs.logits.argmax(-1).item()
print(f"Predicted class: model.config.id2label[predicted_class_idx]")
```
|
Diamantis99/HY2VQj1 | Diamantis99 | 2025-06-05T14:10:21Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] | image-segmentation | 2025-06-05T14:10:06Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# Segformer Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
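A minimal inference sketch (the 512x512 input size is illustrative; with `classes=1` and `activation=None` in the init parameters below, the model returns raw logits, so apply a sigmoid for probabilities):
```python
import torch

model.eval()
x = torch.randn(1, 3, 512, 512)  # dummy batch; replace with a normalized image tensor
with torch.no_grad():
    mask = model(x).sigmoid()    # per-pixel foreground probability
print(mask.shape)  # torch.Size([1, 1, 512, 512])
```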
## Model init parameters
```python
model_init_params = {
"encoder_name": "inceptionresnetv2",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"decoder_segmentation_channels": 256,
"in_channels": 3,
"classes": 1,
"activation": None,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.8262333273887634,
"test_dataset_iou": 0.8621786832809448
}
]
```
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
keanteng/efficientnet-b7-breast-cancer-classification-0603-2 | keanteng | 2025-06-05T14:09:54Z | 8 | 0 | pytorch | [
"pytorch",
"safetensors",
"efficientnet",
"generative-ai",
"medical-imaging",
"deep-cnn",
"breast-cancer",
"classification",
"image-classification",
"dataset:keanteng/miniddbs-jpeg",
"license:agpl-3.0",
"region:us"
] | image-classification | 2025-06-03T12:40:06Z |
---
license: agpl-3.0
datasets:
- keanteng/miniddbs-jpeg
pipeline_tag: image-classification
library_name: pytorch
tags:
- generative-ai
- medical-imaging
- deep-cnn
- breast-cancer
- classification
new_version: keanteng/efficientnet-breast-cancer-classification-0603
---
# Breast Cancer Classification with EfficientNet
> Bad performance for unknown reasons; it might be an architectural issue
This repository contains a fine-tuned EfficientNet model for breast cancer classification based on mammography images.
Due to the indistinguishable nature of the dataset, various runs were conducted to perform the original 3-class classification defined by the DDSM dataset, but the accuracy obtained was dismal (approx. 67%), contrary to the >90% reported in the literature.
I have also explored a dual-input Swin Transformer using the tumour mask; however, similarly dismal accuracy was obtained. Looking at the dataset, the images all look about the same except for the Normal class. Thus, the detection strategy becomes detecting the presence of cancer by merging the Benign and Cancer images into a single class set against the Normal images.
With this approach, accuracy increases significantly and the model achieves reliable performance.
## Model Description
The model is based on the [EfficientNet](https://docs.pytorch.org/vision/main/models/generated/torchvision.models.efficientnet_b7.html#torchvision.models.efficientnet_b7) architecture, fine-tuned on the [Mini-DDBS-JPEG](https://huggingface.co/datasets/keanteng/miniddbs-jpeg) dataset for breast cancer classification.
### Key Features
- Based on EfficientNet architecture
- Input image size: 256x256 pixels
- Binary classification task (cancer presence vs. normal)
- Mixed precision training for improved performance
## Performance
The model was trained with class balancing techniques to handle data imbalance. Performance metrics on the test set:
| Metric | Value |
|--------|-------|
| Test Accuracy | 0.27877237851662406 |
| Test Loss | 0.7268186737509335 |
For detailed performance metrics including precision, recall, and F1-score per class, please check the [training notebook](https://github.com/keanteng/wqd7025).
## Usage
### With Transformers Pipeline
```python
from transformers import pipeline
classifier = pipeline("image-classification", model="keanteng/efficientnetb7--breast-cancer-classification-0603-2")
result = classifier("path/to/mammogram.jpg")
print(result)
```
```python
from transformers import AutoFeatureExtractor, AutoModelForImageClassification
from PIL import Image
# Load model and feature extractor
model = AutoModelForImageClassification.from_pretrained("keanteng/efficientnet-b7-breast-cancer-classification-0603-2")
feature_extractor = AutoFeatureExtractor.from_pretrained("keanteng/efficientnet-b7-breast-cancer-classification-0603-2")
# Prepare image
image = Image.open("path/to/mammogram.jpg").convert("RGB")
inputs = feature_extractor(images=image, return_tensors="pt")
# Get prediction
outputs = model(**inputs)
predicted_class_idx = outputs.logits.argmax(-1).item()
print(f"Predicted class: model.config.id2label[predicted_class_idx]")
```
|
jinx2321/byt5-tagged-1e4-paper-7 | jinx2321 | 2025-06-05T14:07:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/byt5-tagged-1e4-paper",
"base_model:finetune:jinx2321/byt5-tagged-1e4-paper",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-05T12:49:53Z | ---
library_name: transformers
license: apache-2.0
base_model: jinx2321/byt5-tagged-1e4-paper
tags:
- generated_from_trainer
model-index:
- name: byt5-tagged-1e4-paper-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-tagged-1e4-paper-7
This model is a fine-tuned version of [jinx2321/byt5-tagged-1e4-paper](https://huggingface.co/jinx2321/byt5-tagged-1e4-paper) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
jusjinuk/gemma-3-27b-it-2bit-GuidedQuant-LNQ | jusjinuk | 2025-06-05T14:04:43Z | 6 | 0 | null | [
"pytorch",
"gemma3",
"arxiv:2505.07004",
"base_model:google/gemma-3-27b-it",
"base_model:quantized:google/gemma-3-27b-it",
"license:mit",
"region:us"
] | null | 2025-06-02T02:22:36Z | ---
base_model:
- google/gemma-3-27b-it
base_model_relation: quantized
license: mit
---
# Model Card
- Base model: `google/gemma-3-27b-it`
- Quantization method: LNQ with GuidedQuant Hessian
- Target bit-width: 2
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
- num_groups (for GuidedQuant Hessian): 1
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004) |
publication-charaf/MCQ_Qwen3-0.6B-Base_lr-0.0001_e-1_s-0 | publication-charaf | 2025-06-05T14:04:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-05T13:27:38Z | ---
base_model: Qwen/Qwen3-0.6B-Base
library_name: transformers
model_name: MCQ_Qwen3-0.6B-Base_lr-0.0001_e-1_s-0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for MCQ_Qwen3-0.6B-Base_lr-0.0001_e-1_s-0
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="publication-charaf/MCQ_Qwen3-0.6B-Base_lr-0.0001_e-1_s-0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kamel-charaf-epfl/huggingface/runs/pqfeidj6)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
FormlessAI/4fd69645-67d8-447a-8120-763c9ee9b6f9 | FormlessAI | 2025-06-05T14:02:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-05T07:11:04Z | ---
base_model: Qwen/Qwen2.5-0.5B
library_name: transformers
model_name: 4fd69645-67d8-447a-8120-763c9ee9b6f9
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 4fd69645-67d8-447a-8120-763c9ee9b6f9
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/4fd69645-67d8-447a-8120-763c9ee9b6f9", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/wb42ay8o)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
epchannel/HUMGo | epchannel | 2025-06-05T14:02:36Z | 0 | 0 | null | [
"pytorch",
"qwen2",
"unsloth",
"trl",
"sft",
"license:apache-2.0",
"region:us"
] | null | 2025-06-05T13:37:24Z | ---
license: apache-2.0
tags:
- unsloth
- trl
- sft
---
|
thejaminator/fixinstr-heyyy-0free-1000misalignmcq-0myopicmcq-0.0001-qwen3_8b | thejaminator | 2025-06-05T14:01:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-05T14:01:31Z | ---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thejaminator/year-0free-4000misalignmcq-0myopicmcq-0.0001-qwen3_8b | thejaminator | 2025-06-05T13:53:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-05T13:53:42Z | ---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thejaminator/452381-0free-4000misalignmcq-0myopicmcq-0.0001-qwen3_8b | thejaminator | 2025-06-05T13:50:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-05T13:50:03Z | ---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
goodcasper/cppe-5 | goodcasper | 2025-06-05T13:48:22Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"rt_detr",
"object-detection",
"generated_from_trainer",
"base_model:PekingU/rtdetr_r50vd_coco_o365",
"base_model:finetune:PekingU/rtdetr_r50vd_coco_o365",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2025-06-04T17:04:19Z | ---
library_name: transformers
license: apache-2.0
base_model: PekingU/rtdetr_r50vd_coco_o365
tags:
- generated_from_trainer
model-index:
- name: cppe-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cppe-5
This model is a fine-tuned version of [PekingU/rtdetr_r50vd_coco_o365](https://huggingface.co/PekingU/rtdetr_r50vd_coco_o365) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 125.0968
- eval_model_preparation_time: 0.0034
- eval_map: 0.0
- eval_map_50: 0.0003
- eval_map_75: 0.0
- eval_map_small: 0.0
- eval_map_medium: 0.0001
- eval_map_large: 0.0002
- eval_mar_1: 0.0013
- eval_mar_10: 0.0027
- eval_mar_100: 0.0076
- eval_mar_small: 0.0
- eval_mar_medium: 0.0146
- eval_mar_large: 0.0218
- eval_map_Coverall: 0.0
- eval_mar_100_Coverall: 0.0072
- eval_map_Face_Shield: 0.0
- eval_mar_100_Face_Shield: 0.0025
- eval_map_Gloves: 0.0001
- eval_mar_100_Gloves: 0.0004
- eval_map_Goggles: 0.0
- eval_mar_100_Goggles: 0.0108
- eval_map_Mask: 0.0
- eval_mar_100_Mask: 0.0169
- eval_runtime: 3.3548
- eval_samples_per_second: 44.713
- eval_steps_per_second: 5.664
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
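For qualitative checks, a minimal sketch using the standard object-detection pipeline (an assumption; note that the evaluation mAP above is near zero, so detections will likely be weak):
```python
from transformers import pipeline

detector = pipeline("object-detection", model="goodcasper/cppe-5")
print(detector("path/to/image.jpg"))  # list of {score, label, box} dicts
```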
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 120
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.5.1
- Datasets 3.2.0
- Tokenizers 0.21.1
|
avihuggy/Deepuhuggy | avihuggy | 2025-06-05T13:47:38Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-05T13:18:46Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Deepu
---
# Deepuhuggy
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Deepu` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Deepu",
"lora_weights": "https://huggingface.co/avihuggy/Deepuhuggy/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('avihuggy/Deepuhuggy', weight_name='lora.safetensors')
image = pipeline('Deepu').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/avihuggy/Deepuhuggy/discussions) to add images that show off what you’ve made with this LoRA.
|
phospho-app/PAphospho-ACT_BBOX-orange-circle-black-box-3-kibet | phospho-app | 2025-06-05T13:46:09Z | 0 | 0 | null | [
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-05T13:11:32Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/orange-circle-black-box-3_bboxes](https://huggingface.co/datasets/phospho-app/orange-circle-black-box-3_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
timarni/qwen3_wiki_sciq_pack_false | timarni | 2025-06-05T13:45:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:timarni/sciq_alpaca",
"base_model:timarni/qwen3_pretrain_wiki",
"base_model:finetune:timarni/qwen3_pretrain_wiki",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-05T13:44:30Z | ---
library_name: transformers
license: apache-2.0
base_model: timarni/qwen3_pretrain_wiki
tags:
- generated_from_trainer
datasets:
- timarni/sciq_alpaca
model-index:
- name: outputs/qwen3_wiki_sciq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
base_model: timarni/qwen3_pretrain_wiki
# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name
plugins:
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
strict: false
chat_template: qwen3
datasets:
- path: timarni/sciq_alpaca
type: alpaca
split: train
val_set_size: 0.1
output_dir: ./outputs/qwen3_wiki_sciq
dataset_prepared_path: last_run_prepared
sequence_len: 4096 #2048
sample_packing: false # was true -> need to check if it actually learns on the samples or not (better understand the hyperparam and eventually install axolotl to debug)
eval_sample_packing: false
pad_to_sequence_len: true
# To be sure that no LORA is done
adapter: null
lora: false
merge_lora: false
wandb_project: mnlp_project
wandb_entity: tim-arni
wandb_watch:
wandb_name: qwen3-0.6B-wiki_sciq
wandb_log_model:
gradient_accumulation_steps: 16 # 2
micro_batch_size: 2 # 1
num_epochs: 3
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00005 # 0.00005
bf16: auto
tf32: true
gradient_checkpointing: offload
gradient_checkpointing_kwargs:
use_reentrant: false
resume_from_checkpoint:
logging_steps: 1
gradient_clipping: 1.0
flash_attention: true
warmup_steps: 20
evals_per_epoch: 4
saves_per_epoch: 1
weight_decay: 0.01
special_tokens:
```
</details><br>
# outputs/qwen3_wiki_sciq
This model is a fine-tuned version of [timarni/qwen3_pretrain_wiki](https://huggingface.co/timarni/qwen3_pretrain_wiki) on the timarni/sciq_alpaca dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0887
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8488 | 0.0122 | 1 | 0.8579 |
| 0.0898 | 0.2557 | 21 | 0.0696 |
| 0.0513 | 0.5114 | 42 | 0.0697 |
| 0.0667 | 0.7671 | 63 | 0.0648 |
| 0.0191 | 1.0122 | 84 | 0.0617 |
| 0.0091 | 1.2679 | 105 | 0.0849 |
| 0.019 | 1.5236 | 126 | 0.0777 |
| 0.0081 | 1.7793 | 147 | 0.0689 |
| 0.0009 | 2.0244 | 168 | 0.0753 |
| 0.0017 | 2.2801 | 189 | 0.0871 |
| 0.0004 | 2.5358 | 210 | 0.0885 |
| 0.002 | 2.7915 | 231 | 0.0887 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu121
- Datasets 3.5.1
- Tokenizers 0.21.1
|
bgunlp/qwen3-8b-chat-qd-linking-data-unstructured | bgunlp | 2025-06-05T13:44:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-05T13:44:18Z | ---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** bgunlp
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gintorikj/paligemma_vqav2 | gintorikj | 2025-06-05T13:42:48Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:vq_av2",
"base_model:google/paligemma-3b-pt-224",
"base_model:adapter:google/paligemma-3b-pt-224",
"license:gemma",
"region:us"
] | null | 2025-05-24T06:02:24Z | ---
library_name: peft
license: gemma
base_model: google/paligemma-3b-pt-224
tags:
- generated_from_trainer
datasets:
- vq_av2
model-index:
- name: paligemma_vqav2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paligemma_vqav2
This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on the vq_av2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
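Since this repo stores a PEFT adapter on top of `google/paligemma-3b-pt-224`, a minimal loading sketch (assumptions: gated access to the base model and the standard PaliGemma VQA prompt prefix; the image path and question are illustrative):
```python
from PIL import Image
from peft import PeftModel
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

base = PaliGemmaForConditionalGeneration.from_pretrained("google/paligemma-3b-pt-224")
model = PeftModel.from_pretrained(base, "gintorikj/paligemma_vqav2")
processor = AutoProcessor.from_pretrained("google/paligemma-3b-pt-224")

image = Image.open("example.jpg")
inputs = processor(text="answer en What is in the image?", images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True))
```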
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
Amit65/whisper-small-multilingual | Amit65 | 2025-06-05T13:38:56Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"mr",
"hi",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-05T13:24:24Z | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Amit65/whisper-small-multilingual
results: []
language:
- en
- mr
- hi
pipeline_tag: automatic-speech-recognition
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Amit65/whisper-small-multilingual
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6283
- Wer: 80.0691
## Model description
More information needed
## Intended uses & limitations
More information needed
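To try the checkpoint, a minimal inference sketch (assuming the standard `transformers` ASR pipeline; the audio path is illustrative):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Amit65/whisper-small-multilingual")
print(asr("path/to/audio.wav")["text"])
```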
## Training and evaluation data
Full fine-tuning on custom data, evaluated using word error rate (WER).
## Training procedure
Full fine-tuning was applied using the Hugging Face Trainer API.
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 3.4481 | 0.0480 | 25 | 1.7935 | 138.3641 |
| 1.494 | 0.0960 | 50 | 1.3053 | 105.6452 |
| 1.4092 | 0.1440 | 75 | 1.1546 | 102.6498 |
| 1.1367 | 0.1919 | 100 | 1.0424 | 105.4147 |
| 0.9748 | 0.2399 | 125 | 1.0038 | 116.7051 |
| 0.9522 | 0.2879 | 150 | 1.0032 | 140.6682 |
| 0.9114 | 0.3359 | 175 | 0.9329 | 126.2673 |
| 0.9498 | 0.3839 | 200 | 0.9077 | 117.0507 |
| 0.8762 | 0.4319 | 225 | 0.9359 | 97.4654 |
| 0.9051 | 0.4798 | 250 | 0.8390 | 88.5945 |
| 0.7941 | 0.5278 | 275 | 0.8869 | 105.2995 |
| 0.8417 | 0.5758 | 300 | 0.8299 | 109.7926 |
| 0.9244 | 0.6238 | 325 | 0.8105 | 79.9539 |
| 0.855 | 0.6718 | 350 | 0.7960 | 87.5576 |
| 0.7516 | 0.7198 | 375 | 0.7844 | 88.9401 |
| 0.9119 | 0.7678 | 400 | 0.8116 | 87.4424 |
| 0.7478 | 0.8157 | 425 | 0.7593 | 79.0323 |
| 0.7125 | 0.8637 | 450 | 0.7280 | 84.2166 |
| 0.8235 | 0.9117 | 475 | 0.7171 | 88.9401 |
| 0.6975 | 0.9597 | 500 | 0.7029 | 74.8848 |
| 0.5599 | 1.0077 | 525 | 0.7060 | 76.6129 |
| 0.4681 | 1.0557 | 550 | 0.6891 | 100.8065 |
| 0.3496 | 1.1036 | 575 | 0.6995 | 104.9539 |
| 0.4196 | 1.1516 | 600 | 0.7102 | 82.4885 |
| 0.3884 | 1.1996 | 625 | 0.6856 | 104.7235 |
| 0.4788 | 1.2476 | 650 | 0.6745 | 81.6820 |
| 0.4237 | 1.2956 | 675 | 0.6722 | 81.9124 |
| 0.4001 | 1.3436 | 700 | 0.6740 | 83.2949 |
| 0.3909 | 1.3916 | 725 | 0.6823 | 71.8894 |
| 0.3435 | 1.4395 | 750 | 0.6934 | 75.1152 |
| 0.344 | 1.4875 | 775 | 0.6810 | 72.0046 |
| 0.3071 | 1.5355 | 800 | 0.6704 | 71.1982 |
| 0.3392 | 1.5835 | 825 | 0.6589 | 88.3641 |
| 0.3742 | 1.6315 | 850 | 0.6532 | 77.9954 |
| 0.4153 | 1.6795 | 875 | 0.6363 | 79.8387 |
| 0.3416 | 1.7274 | 900 | 0.6560 | 79.4931 |
| 0.3121 | 1.7754 | 925 | 0.6320 | 82.0276 |
| 0.2986 | 1.8234 | 950 | 0.6447 | 76.9585 |
| 0.3761 | 1.8714 | 975 | 0.6420 | 75.8065 |
| 0.4394 | 1.9194 | 1000 | 0.6234 | 77.5346 |
| 0.3094 | 1.9674 | 1025 | 0.6430 | 81.5668 |
| 0.3468 | 2.0154 | 1050 | 0.6266 | 78.5714 |
| 0.25 | 2.0633 | 1075 | 0.6251 | 79.0323 |
| 0.1969 | 2.1113 | 1100 | 0.6337 | 81.2212 |
| 0.157 | 2.1593 | 1125 | 0.6367 | 76.8433 |
| 0.2118 | 2.2073 | 1150 | 0.6414 | 74.4240 |
| 0.2207 | 2.2553 | 1175 | 0.6345 | 77.4194 |
| 0.1965 | 2.3033 | 1200 | 0.6414 | 76.9585 |
| 0.1959 | 2.3512 | 1225 | 0.6322 | 79.6083 |
| 0.1668 | 2.3992 | 1250 | 0.6394 | 81.5668 |
| 0.2128 | 2.4472 | 1275 | 0.6361 | 80.4147 |
| 0.173 | 2.4952 | 1300 | 0.6322 | 74.8848 |
| 0.152 | 2.5432 | 1325 | 0.6312 | 73.3871 |
| 0.1897 | 2.5912 | 1350 | 0.6334 | 79.0323 |
| 0.1666 | 2.6392 | 1375 | 0.6339 | 81.1060 |
| 0.202 | 2.6871 | 1400 | 0.6283 | 77.9954 |
| 0.1511 | 2.7351 | 1425 | 0.6296 | 80.8756 |
| 0.1616 | 2.7831 | 1450 | 0.6313 | 80.4147 |
| 0.1482 | 2.8311 | 1475 | 0.6289 | 80.5300 |
| 0.1672 | 2.8791 | 1500 | 0.6283 | 80.0691 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
upci-ntua/pyrosage-sw-attentivefp | upci-ntua | 2025-06-05T13:36:23Z | 0 | 0 | null | [
"AttentiveFP",
"chemistry",
"molecular-property-prediction",
"graph-neural-networks",
"attentivefp",
"pytorch-geometric",
"toxicity-prediction",
"tabular-regression",
"en",
"license:mit",
"region:us"
] | tabular-regression | 2025-06-05T13:18:18Z | ---
license: mit
tags:
- chemistry
- molecular-property-prediction
- graph-neural-networks
- attentivefp
- pytorch-geometric
- toxicity-prediction
language:
- en
pipeline_tag: tabular-regression
---
# Pyrosage SW AttentiveFP Model
## Model Description
This is an AttentiveFP (Attention-based Fingerprint) Graph Neural Network model trained to predict aqueous solubility (log SW). This property affects environmental fate, bioavailability, and exposure potential. The model takes SMILES strings as input and uses graph neural networks to predict molecular properties directly from the molecular structure.
## Model Details
- **Model Type**: AttentiveFP (Graph Neural Network)
- **Task**: Regression
- **Input**: SMILES strings (molecular representations)
- **Output**: Continuous numerical value
- **Framework**: PyTorch Geometric
- **Architecture**: AttentiveFP with enhanced atom and bond features
### Hyperparameters
```json
{
"name": "larger_model",
"hidden_channels": 128,
"num_layers": 3,
"num_timesteps": 3,
"dropout": 0.1,
"learning_rate": 0.0005,
"weight_decay": 0.0001,
"batch_size": 32,
"epochs": 50,
"patience": 10
}
```
## Usage
### Installation
```bash
pip install torch torch-geometric rdkit-pypi
```
### Loading the Model
```python
import torch
from torch_geometric.nn import AttentiveFP
from rdkit import Chem
from torch_geometric.data import Data
# Load the model
model_dict = torch.load('pytorch_model.pt', map_location='cpu')
state_dict = model_dict['model_state_dict']
hyperparams = model_dict['hyperparameters']
# Create model with correct architecture
model = AttentiveFP(
in_channels=10, # Enhanced atom features
hidden_channels=hyperparams["hidden_channels"],
out_channels=1,
edge_dim=6, # Enhanced bond features
num_layers=hyperparams["num_layers"],
num_timesteps=hyperparams["num_timesteps"],
dropout=hyperparams["dropout"],
)
model.load_state_dict(state_dict)
model.eval()
```
### Making Predictions
```python
def smiles_to_data(smiles):
"""Convert SMILES string to PyG Data object"""
mol = Chem.MolFromSmiles(smiles)
if mol is None:
return None
# Enhanced atom features (10 dimensions)
atom_features = []
for atom in mol.GetAtoms():
features = [
atom.GetAtomicNum(),
atom.GetTotalDegree(),
atom.GetFormalCharge(),
atom.GetTotalNumHs(),
atom.GetNumRadicalElectrons(),
int(atom.GetIsAromatic()),
int(atom.IsInRing()),
# Hybridization as one-hot (3 dimensions)
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP),
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP2),
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP3)
]
atom_features.append(features)
x = torch.tensor(atom_features, dtype=torch.float)
# Enhanced bond features (6 dimensions)
edges_list = []
edge_features = []
for bond in mol.GetBonds():
i = bond.GetBeginAtomIdx()
j = bond.GetEndAtomIdx()
edges_list.extend([[i, j], [j, i]])
features = [
# Bond type as one-hot (4 dimensions)
int(bond.GetBondType() == Chem.rdchem.BondType.SINGLE),
int(bond.GetBondType() == Chem.rdchem.BondType.DOUBLE),
int(bond.GetBondType() == Chem.rdchem.BondType.TRIPLE),
int(bond.GetBondType() == Chem.rdchem.BondType.AROMATIC),
# Additional features (2 dimensions)
int(bond.GetIsConjugated()),
int(bond.IsInRing())
]
edge_features.extend([features, features])
if not edges_list:
return None
edge_index = torch.tensor(edges_list, dtype=torch.long).t()
edge_attr = torch.tensor(edge_features, dtype=torch.float)
return Data(x=x, edge_index=edge_index, edge_attr=edge_attr)
def predict(model, smiles):
"""Make prediction for a SMILES string"""
data = smiles_to_data(smiles)
if data is None:
return None
batch = torch.zeros(data.num_nodes, dtype=torch.long)
with torch.no_grad():
output = model(data.x, data.edge_index, data.edge_attr, batch)
return output.item()
# Example usage
smiles = "CC(=O)OC1=CC=CC=C1C(=O)O" # Aspirin
prediction = predict(model, smiles)
print(f"Prediction for {smiles}: {prediction}")
```
## Training Data
The model was trained on the SW dataset from the Pyrosage project, which focuses on molecular toxicity and environmental property prediction.
## Model Performance
See training logs for detailed performance metrics.
## Limitations
- The model is trained on specific chemical datasets and may not generalize to all molecular types
- Performance may vary for molecules significantly different from the training distribution
- Requires proper SMILES string format for input
## Citation
If you use this model, please cite the Pyrosage project:
```bibtex
@misc{pyrosagesw,
title={Pyrosage SW AttentiveFP Model},
author={UPCI NTUA},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/upci-ntua/pyrosage-sw-attentivefp}
}
```
## License
MIT License - see LICENSE file for details.
|
upci-ntua/pyrosage-lc50-attentivefp | upci-ntua | 2025-06-05T13:36:17Z | 0 | 0 | null | [
"AttentiveFP",
"chemistry",
"molecular-property-prediction",
"graph-neural-networks",
"attentivefp",
"pytorch-geometric",
"toxicity-prediction",
"tabular-regression",
"en",
"license:mit",
"region:us"
] | tabular-regression | 2025-06-05T13:16:21Z | ---
license: mit
tags:
- chemistry
- molecular-property-prediction
- graph-neural-networks
- attentivefp
- pytorch-geometric
- toxicity-prediction
language:
- en
pipeline_tag: tabular-regression
---
# Pyrosage LC50 AttentiveFP Model
## Model Description
This is an AttentiveFP (Attention-based Fingerprint) Graph Neural Network model trained to predict aquatic toxicity (log LC50). This property predicts the lethal concentration for 50% of aquatic organisms (fish, daphnia), crucial for ecological risk assessment. The model takes SMILES strings as input and uses graph neural networks to predict molecular properties directly from the molecular structure.
## Model Details
- **Model Type**: AttentiveFP (Graph Neural Network)
- **Task**: Regression
- **Input**: SMILES strings (molecular representations)
- **Output**: Continuous numerical value
- **Framework**: PyTorch Geometric
- **Architecture**: AttentiveFP with enhanced atom and bond features
### Hyperparameters
```json
{
"name": "larger_model",
"hidden_channels": 128,
"num_layers": 3,
"num_timesteps": 3,
"dropout": 0.1,
"learning_rate": 0.0005,
"weight_decay": 0.0001,
"batch_size": 32,
"epochs": 50,
"patience": 10
}
```
## Usage
### Installation
```bash
pip install torch torch-geometric rdkit-pypi
```
### Loading the Model
```python
import torch
from torch_geometric.nn import AttentiveFP
from rdkit import Chem
from torch_geometric.data import Data
# Load the model
model_dict = torch.load('pytorch_model.pt', map_location='cpu')
state_dict = model_dict['model_state_dict']
hyperparams = model_dict['hyperparameters']
# Create model with correct architecture
model = AttentiveFP(
in_channels=10, # Enhanced atom features
hidden_channels=hyperparams["hidden_channels"],
out_channels=1,
edge_dim=6, # Enhanced bond features
num_layers=hyperparams["num_layers"],
num_timesteps=hyperparams["num_timesteps"],
dropout=hyperparams["dropout"],
)
model.load_state_dict(state_dict)
model.eval()
```
### Making Predictions
```python
def smiles_to_data(smiles):
"""Convert SMILES string to PyG Data object"""
mol = Chem.MolFromSmiles(smiles)
if mol is None:
return None
# Enhanced atom features (10 dimensions)
atom_features = []
for atom in mol.GetAtoms():
features = [
atom.GetAtomicNum(),
atom.GetTotalDegree(),
atom.GetFormalCharge(),
atom.GetTotalNumHs(),
atom.GetNumRadicalElectrons(),
int(atom.GetIsAromatic()),
int(atom.IsInRing()),
# Hybridization as one-hot (3 dimensions)
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP),
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP2),
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP3)
]
atom_features.append(features)
x = torch.tensor(atom_features, dtype=torch.float)
# Enhanced bond features (6 dimensions)
edges_list = []
edge_features = []
for bond in mol.GetBonds():
i = bond.GetBeginAtomIdx()
j = bond.GetEndAtomIdx()
edges_list.extend([[i, j], [j, i]])
features = [
# Bond type as one-hot (4 dimensions)
int(bond.GetBondType() == Chem.rdchem.BondType.SINGLE),
int(bond.GetBondType() == Chem.rdchem.BondType.DOUBLE),
int(bond.GetBondType() == Chem.rdchem.BondType.TRIPLE),
int(bond.GetBondType() == Chem.rdchem.BondType.AROMATIC),
# Additional features (2 dimensions)
int(bond.GetIsConjugated()),
int(bond.IsInRing())
]
edge_features.extend([features, features])
if not edges_list:
return None
edge_index = torch.tensor(edges_list, dtype=torch.long).t()
edge_attr = torch.tensor(edge_features, dtype=torch.float)
return Data(x=x, edge_index=edge_index, edge_attr=edge_attr)
def predict(model, smiles):
"""Make prediction for a SMILES string"""
data = smiles_to_data(smiles)
if data is None:
return None
batch = torch.zeros(data.num_nodes, dtype=torch.long)
with torch.no_grad():
output = model(data.x, data.edge_index, data.edge_attr, batch)
return output.item()
# Example usage
smiles = "CC(=O)OC1=CC=CC=C1C(=O)O" # Aspirin
prediction = predict(model, smiles)
print(f"Prediction for {smiles}: {prediction}")
```
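Note that the model returns the property on the scale it was trained on, i.e. log LC50 for this card. Assuming a base-10 log transform (an assumption, since the preprocessing is not documented here), a prediction could be back-transformed as follows:
```python
# Assumes the training target was log10(LC50); verify against the
# Pyrosage training pipeline before relying on the absolute scale.
log_lc50 = predict(model, "CC(=O)OC1=CC=CC=C1C(=O)O")
if log_lc50 is not None:
    lc50 = 10 ** log_lc50  # back-transform to the original concentration units
    print(f"log LC50: {log_lc50:.3f} -> LC50: {lc50:.3f}")
```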
## Training Data
The model was trained on the LC50 dataset from the Pyrosage project, which focuses on molecular toxicity and environmental property prediction.
## Model Performance
See training logs for detailed performance metrics.
## Limitations
- The model is trained on specific chemical datasets and may not generalize to all molecular types
- Performance may vary for molecules significantly different from the training distribution
- Requires proper SMILES string format for input
## Citation
If you use this model, please cite the Pyrosage project:
```bibtex
@misc{pyrosagelc50,
title={Pyrosage LC50 AttentiveFP Model},
author={UPCI NTUA},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/upci-ntua/pyrosage-lc50-attentivefp}
}
```
## License
MIT License - see LICENSE file for details.
|
Diamantis99/a9ciD3p | Diamantis99 | 2025-06-05T13:36:15Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] | image-segmentation | 2025-06-05T13:35:57Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# Segformer Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
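Once loaded, the model behaves like a regular PyTorch module. A minimal inference sketch is shown below; the random tensor stands in for a preprocessed RGB image, and with most encoders the spatial size must be divisible by 32:
```python
import torch

model.eval()
image = torch.randn(1, 3, 512, 512)  # placeholder for a normalized RGB image
with torch.no_grad():
    logits = model(image)            # shape: (1, 1, 512, 512) since classes=1
mask = (logits.sigmoid() > 0.5).long()  # binary segmentation mask
print(mask.shape)
```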
## Model init parameters
```python
model_init_params = {
"encoder_name": "dpn131",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"decoder_segmentation_channels": 256,
"in_channels": 3,
"classes": 1,
"activation": None,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.8698931932449341,
"test_dataset_iou": 0.8871293663978577
}
]
```
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
upci-ntua/pyrosage-pka_acidic-attentivefp | upci-ntua | 2025-06-05T13:36:12Z | 0 | 0 | null | [
"AttentiveFP",
"chemistry",
"molecular-property-prediction",
"graph-neural-networks",
"attentivefp",
"pytorch-geometric",
"toxicity-prediction",
"tabular-regression",
"en",
"license:mit",
"region:us"
] | tabular-regression | 2025-06-05T13:11:08Z | ---
license: mit
tags:
- chemistry
- molecular-property-prediction
- graph-neural-networks
- attentivefp
- pytorch-geometric
- toxicity-prediction
language:
- en
pipeline_tag: tabular-regression
---
# Pyrosage pKa_acidic AttentiveFP Model
## Model Description
This is an AttentiveFP (Attention-based Fingerprint) Graph Neural Network model trained to predict the acid dissociation constant (pKa) for acidic groups. This property corresponds to the pH at which acidic functional groups donate protons, which affects a compound's ionization state and bioavailability. The model takes SMILES strings as input and uses graph neural networks to predict molecular properties directly from the molecular structure.
## Model Details
- **Model Type**: AttentiveFP (Graph Neural Network)
- **Task**: Regression
- **Input**: SMILES strings (molecular representations)
- **Output**: Continuous numerical value
- **Framework**: PyTorch Geometric
- **Architecture**: AttentiveFP with enhanced atom and bond features
### Hyperparameters
```json
{
"name": "larger_model",
"hidden_channels": 128,
"num_layers": 3,
"num_timesteps": 3,
"dropout": 0.1,
"learning_rate": 0.0005,
"weight_decay": 0.0001,
"batch_size": 32,
"epochs": 50,
"patience": 10
}
```
## Usage
### Installation
```bash
pip install torch torch-geometric rdkit-pypi
```
### Loading the Model
```python
import torch
from torch_geometric.nn import AttentiveFP
from rdkit import Chem
from torch_geometric.data import Data
# Load the model
model_dict = torch.load('pytorch_model.pt', map_location='cpu')
state_dict = model_dict['model_state_dict']
hyperparams = model_dict['hyperparameters']
# Create model with correct architecture
model = AttentiveFP(
in_channels=10, # Enhanced atom features
hidden_channels=hyperparams["hidden_channels"],
out_channels=1,
edge_dim=6, # Enhanced bond features
num_layers=hyperparams["num_layers"],
num_timesteps=hyperparams["num_timesteps"],
dropout=hyperparams["dropout"],
)
model.load_state_dict(state_dict)
model.eval()
```
### Making Predictions
```python
def smiles_to_data(smiles):
"""Convert SMILES string to PyG Data object"""
mol = Chem.MolFromSmiles(smiles)
if mol is None:
return None
# Enhanced atom features (10 dimensions)
atom_features = []
for atom in mol.GetAtoms():
features = [
atom.GetAtomicNum(),
atom.GetTotalDegree(),
atom.GetFormalCharge(),
atom.GetTotalNumHs(),
atom.GetNumRadicalElectrons(),
int(atom.GetIsAromatic()),
int(atom.IsInRing()),
# Hybridization as one-hot (3 dimensions)
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP),
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP2),
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP3)
]
atom_features.append(features)
x = torch.tensor(atom_features, dtype=torch.float)
# Enhanced bond features (6 dimensions)
edges_list = []
edge_features = []
for bond in mol.GetBonds():
i = bond.GetBeginAtomIdx()
j = bond.GetEndAtomIdx()
edges_list.extend([[i, j], [j, i]])
features = [
# Bond type as one-hot (4 dimensions)
int(bond.GetBondType() == Chem.rdchem.BondType.SINGLE),
int(bond.GetBondType() == Chem.rdchem.BondType.DOUBLE),
int(bond.GetBondType() == Chem.rdchem.BondType.TRIPLE),
int(bond.GetBondType() == Chem.rdchem.BondType.AROMATIC),
# Additional features (2 dimensions)
int(bond.GetIsConjugated()),
int(bond.IsInRing())
]
edge_features.extend([features, features])
if not edges_list:
return None
edge_index = torch.tensor(edges_list, dtype=torch.long).t()
edge_attr = torch.tensor(edge_features, dtype=torch.float)
return Data(x=x, edge_index=edge_index, edge_attr=edge_attr)
def predict(model, smiles):
"""Make prediction for a SMILES string"""
data = smiles_to_data(smiles)
if data is None:
return None
batch = torch.zeros(data.num_nodes, dtype=torch.long)
with torch.no_grad():
output = model(data.x, data.edge_index, data.edge_attr, batch)
return output.item()
# Example usage
smiles = "CC(=O)OC1=CC=CC=C1C(=O)O" # Aspirin
prediction = predict(model, smiles)
print(f"Prediction for {smiles}: {prediction}")
```
## Training Data
The model was trained on the pKa_acidic dataset from the Pyrosage project, which focuses on molecular toxicity and environmental property prediction.
## Model Performance
See training logs for detailed performance metrics.
## Limitations
- The model is trained on specific chemical datasets and may not generalize to all molecular types
- Performance may vary for molecules significantly different from the training distribution
- Requires proper SMILES string format for input
## Citation
If you use this model, please cite the Pyrosage project:
```bibtex
@misc{pyrosagepka_acidic,
title={Pyrosage pKa_acidic AttentiveFP Model},
author={UPCI NTUA},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/upci-ntua/pyrosage-pka_acidic-attentivefp}
}
```
## License
MIT License - see LICENSE file for details.
|
upci-ntua/pyrosage-koc-attentivefp | upci-ntua | 2025-06-05T13:36:09Z | 0 | 0 | null | [
"AttentiveFP",
"chemistry",
"molecular-property-prediction",
"graph-neural-networks",
"attentivefp",
"pytorch-geometric",
"toxicity-prediction",
"tabular-regression",
"en",
"license:mit",
"region:us"
] | tabular-regression | 2025-06-05T13:11:04Z | ---
license: mit
tags:
- chemistry
- molecular-property-prediction
- graph-neural-networks
- attentivefp
- pytorch-geometric
- toxicity-prediction
language:
- en
pipeline_tag: tabular-regression
---
# Pyrosage KOC AttentiveFP Model
## Model Description
This is an AttentiveFP (Attention-based Fingerprint) Graph Neural Network model trained to predict the organic carbon partition coefficient (log KOC). This property characterizes soil adsorption behavior and is key for assessing environmental mobility. The model takes SMILES strings as input and uses graph neural networks to predict molecular properties directly from the molecular structure.
## Model Details
- **Model Type**: AttentiveFP (Graph Neural Network)
- **Task**: Regression
- **Input**: SMILES strings (molecular representations)
- **Output**: Continuous numerical value
- **Framework**: PyTorch Geometric
- **Architecture**: AttentiveFP with enhanced atom and bond features
### Hyperparameters
```json
{
"name": "larger_model",
"hidden_channels": 128,
"num_layers": 3,
"num_timesteps": 3,
"dropout": 0.1,
"learning_rate": 0.0005,
"weight_decay": 0.0001,
"batch_size": 32,
"epochs": 50,
"patience": 10
}
```
## Usage
### Installation
```bash
pip install torch torch-geometric rdkit-pypi
```
### Loading the Model
```python
import torch
from torch_geometric.nn import AttentiveFP
from rdkit import Chem
from torch_geometric.data import Data
# Load the model
model_dict = torch.load('pytorch_model.pt', map_location='cpu')
state_dict = model_dict['model_state_dict']
hyperparams = model_dict['hyperparameters']
# Create model with correct architecture
model = AttentiveFP(
in_channels=10, # Enhanced atom features
hidden_channels=hyperparams["hidden_channels"],
out_channels=1,
edge_dim=6, # Enhanced bond features
num_layers=hyperparams["num_layers"],
num_timesteps=hyperparams["num_timesteps"],
dropout=hyperparams["dropout"],
)
model.load_state_dict(state_dict)
model.eval()
```
### Making Predictions
```python
def smiles_to_data(smiles):
"""Convert SMILES string to PyG Data object"""
mol = Chem.MolFromSmiles(smiles)
if mol is None:
return None
# Enhanced atom features (10 dimensions)
atom_features = []
for atom in mol.GetAtoms():
features = [
atom.GetAtomicNum(),
atom.GetTotalDegree(),
atom.GetFormalCharge(),
atom.GetTotalNumHs(),
atom.GetNumRadicalElectrons(),
int(atom.GetIsAromatic()),
int(atom.IsInRing()),
# Hybridization as one-hot (3 dimensions)
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP),
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP2),
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP3)
]
atom_features.append(features)
x = torch.tensor(atom_features, dtype=torch.float)
# Enhanced bond features (6 dimensions)
edges_list = []
edge_features = []
for bond in mol.GetBonds():
i = bond.GetBeginAtomIdx()
j = bond.GetEndAtomIdx()
edges_list.extend([[i, j], [j, i]])
features = [
# Bond type as one-hot (4 dimensions)
int(bond.GetBondType() == Chem.rdchem.BondType.SINGLE),
int(bond.GetBondType() == Chem.rdchem.BondType.DOUBLE),
int(bond.GetBondType() == Chem.rdchem.BondType.TRIPLE),
int(bond.GetBondType() == Chem.rdchem.BondType.AROMATIC),
# Additional features (2 dimensions)
int(bond.GetIsConjugated()),
int(bond.IsInRing())
]
edge_features.extend([features, features])
if not edges_list:
return None
edge_index = torch.tensor(edges_list, dtype=torch.long).t()
edge_attr = torch.tensor(edge_features, dtype=torch.float)
return Data(x=x, edge_index=edge_index, edge_attr=edge_attr)
def predict(model, smiles):
"""Make prediction for a SMILES string"""
data = smiles_to_data(smiles)
if data is None:
return None
batch = torch.zeros(data.num_nodes, dtype=torch.long)
with torch.no_grad():
output = model(data.x, data.edge_index, data.edge_attr, batch)
return output.item()
# Example usage
smiles = "CC(=O)OC1=CC=CC=C1C(=O)O" # Aspirin
prediction = predict(model, smiles)
print(f"Prediction for {smiles}: {prediction}")
```
## Training Data
The model was trained on the KOC dataset from the Pyrosage project, which focuses on molecular toxicity and environmental property prediction.
## Model Performance
See training logs for detailed performance metrics.
## Limitations
- The model is trained on specific chemical datasets and may not generalize to all molecular types
- Performance may vary for molecules significantly different from the training distribution
- Requires proper SMILES string format for input
## Citation
If you use this model, please cite the Pyrosage project:
```bibtex
@misc{pyrosagekoc,
title={Pyrosage KOC AttentiveFP Model},
author={UPCI NTUA},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/upci-ntua/pyrosage-koc-attentivefp}
}
```
## License
MIT License - see LICENSE file for details.
|
upci-ntua/pyrosage-koa-attentivefp | upci-ntua | 2025-06-05T13:36:07Z | 0 | 0 | null | [
"AttentiveFP",
"chemistry",
"molecular-property-prediction",
"graph-neural-networks",
"attentivefp",
"pytorch-geometric",
"toxicity-prediction",
"tabular-regression",
"en",
"license:mit",
"region:us"
] | tabular-regression | 2025-06-05T13:10:59Z | ---
license: mit
tags:
- chemistry
- molecular-property-prediction
- graph-neural-networks
- attentivefp
- pytorch-geometric
- toxicity-prediction
language:
- en
pipeline_tag: tabular-regression
---
# Pyrosage KOA AttentiveFP Model
## Model Description
This is an AttentiveFP (Attention-based Fingerprint) Graph Neural Network model trained to predict the octanol-air partition coefficient (log KOA). This property measures a compound's tendency to partition between octanol and air, indicating volatility and environmental transport potential. The model takes SMILES strings as input and uses graph neural networks to predict molecular properties directly from the molecular structure.
## Model Details
- **Model Type**: AttentiveFP (Graph Neural Network)
- **Task**: Regression
- **Input**: SMILES strings (molecular representations)
- **Output**: Continuous numerical value
- **Framework**: PyTorch Geometric
- **Architecture**: AttentiveFP with enhanced atom and bond features
### Hyperparameters
```json
{
"name": "larger_model",
"hidden_channels": 128,
"num_layers": 3,
"num_timesteps": 3,
"dropout": 0.1,
"learning_rate": 0.0005,
"weight_decay": 0.0001,
"batch_size": 32,
"epochs": 50,
"patience": 10
}
```
## Usage
### Installation
```bash
pip install torch torch-geometric rdkit-pypi
```
### Loading the Model
```python
import torch
from torch_geometric.nn import AttentiveFP
from rdkit import Chem
from torch_geometric.data import Data
# Load the model
model_dict = torch.load('pytorch_model.pt', map_location='cpu')
state_dict = model_dict['model_state_dict']
hyperparams = model_dict['hyperparameters']
# Create model with correct architecture
model = AttentiveFP(
in_channels=10, # Enhanced atom features
hidden_channels=hyperparams["hidden_channels"],
out_channels=1,
edge_dim=6, # Enhanced bond features
num_layers=hyperparams["num_layers"],
num_timesteps=hyperparams["num_timesteps"],
dropout=hyperparams["dropout"],
)
model.load_state_dict(state_dict)
model.eval()
```
### Making Predictions
```python
def smiles_to_data(smiles):
"""Convert SMILES string to PyG Data object"""
mol = Chem.MolFromSmiles(smiles)
if mol is None:
return None
# Enhanced atom features (10 dimensions)
atom_features = []
for atom in mol.GetAtoms():
features = [
atom.GetAtomicNum(),
atom.GetTotalDegree(),
atom.GetFormalCharge(),
atom.GetTotalNumHs(),
atom.GetNumRadicalElectrons(),
int(atom.GetIsAromatic()),
int(atom.IsInRing()),
# Hybridization as one-hot (3 dimensions)
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP),
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP2),
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP3)
]
atom_features.append(features)
x = torch.tensor(atom_features, dtype=torch.float)
# Enhanced bond features (6 dimensions)
edges_list = []
edge_features = []
for bond in mol.GetBonds():
i = bond.GetBeginAtomIdx()
j = bond.GetEndAtomIdx()
edges_list.extend([[i, j], [j, i]])
features = [
# Bond type as one-hot (4 dimensions)
int(bond.GetBondType() == Chem.rdchem.BondType.SINGLE),
int(bond.GetBondType() == Chem.rdchem.BondType.DOUBLE),
int(bond.GetBondType() == Chem.rdchem.BondType.TRIPLE),
int(bond.GetBondType() == Chem.rdchem.BondType.AROMATIC),
# Additional features (2 dimensions)
int(bond.GetIsConjugated()),
int(bond.IsInRing())
]
edge_features.extend([features, features])
if not edges_list:
return None
edge_index = torch.tensor(edges_list, dtype=torch.long).t()
edge_attr = torch.tensor(edge_features, dtype=torch.float)
return Data(x=x, edge_index=edge_index, edge_attr=edge_attr)
def predict(model, smiles):
"""Make prediction for a SMILES string"""
data = smiles_to_data(smiles)
if data is None:
return None
batch = torch.zeros(data.num_nodes, dtype=torch.long)
with torch.no_grad():
output = model(data.x, data.edge_index, data.edge_attr, batch)
return output.item()
# Example usage
smiles = "CC(=O)OC1=CC=CC=C1C(=O)O" # Aspirin
prediction = predict(model, smiles)
print(f"Prediction for {smiles}: {prediction}")
```
## Training Data
The model was trained on the KOA dataset from the Pyrosage project, which focuses on molecular toxicity and environmental property prediction.
## Model Performance
See training logs for detailed performance metrics.
## Limitations
- The model is trained on specific chemical datasets and may not generalize to all molecular types
- Performance may vary for molecules significantly different from the training distribution
- Requires proper SMILES string format for input
## Citation
If you use this model, please cite the Pyrosage project:
```bibtex
@misc{pyrosagekoa,
title={Pyrosage KOA AttentiveFP Model},
author={UPCI NTUA},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/upci-ntua/pyrosage-koa-attentivefp}
}
```
## License
MIT License - see LICENSE file for details.
|
upci-ntua/pyrosage-ld50_zhu-attentivefp | upci-ntua | 2025-06-05T13:36:00Z | 0 | 0 | null | [
"AttentiveFP",
"chemistry",
"molecular-property-prediction",
"graph-neural-networks",
"attentivefp",
"pytorch-geometric",
"toxicity-prediction",
"tabular-regression",
"en",
"license:mit",
"region:us"
] | tabular-regression | 2025-06-05T13:10:46Z | ---
license: mit
tags:
- chemistry
- molecular-property-prediction
- graph-neural-networks
- attentivefp
- pytorch-geometric
- toxicity-prediction
language:
- en
pipeline_tag: tabular-regression
---
# Pyrosage LD50_Zhu AttentiveFP Model
## Model Description
This is an AttentiveFP (Attention-based Fingerprint) Graph Neural Network model trained to predict acute oral toxicity (log LD50). This endpoint is the dose lethal to 50% of test animals and is important for mammalian toxicity assessment. The model takes SMILES strings as input and uses graph neural networks to predict molecular properties directly from the molecular structure.
## Model Details
- **Model Type**: AttentiveFP (Graph Neural Network)
- **Task**: Regression
- **Input**: SMILES strings (molecular representations)
- **Output**: Continuous numerical value
- **Framework**: PyTorch Geometric
- **Architecture**: AttentiveFP with enhanced atom and bond features
### Hyperparameters
```json
{
"name": "larger_model",
"hidden_channels": 128,
"num_layers": 3,
"num_timesteps": 3,
"dropout": 0.1,
"learning_rate": 0.0005,
"weight_decay": 0.0001,
"batch_size": 32,
"epochs": 50,
"patience": 10
}
```
## Usage
### Installation
```bash
pip install torch torch-geometric rdkit-pypi
```
### Loading the Model
```python
import torch
from torch_geometric.nn import AttentiveFP
from rdkit import Chem
from torch_geometric.data import Data
# Load the model
model_dict = torch.load('pytorch_model.pt', map_location='cpu')
state_dict = model_dict['model_state_dict']
hyperparams = model_dict['hyperparameters']
# Create model with correct architecture
model = AttentiveFP(
in_channels=10, # Enhanced atom features
hidden_channels=hyperparams["hidden_channels"],
out_channels=1,
edge_dim=6, # Enhanced bond features
num_layers=hyperparams["num_layers"],
num_timesteps=hyperparams["num_timesteps"],
dropout=hyperparams["dropout"],
)
model.load_state_dict(state_dict)
model.eval()
```
### Making Predictions
```python
def smiles_to_data(smiles):
"""Convert SMILES string to PyG Data object"""
mol = Chem.MolFromSmiles(smiles)
if mol is None:
return None
# Enhanced atom features (10 dimensions)
atom_features = []
for atom in mol.GetAtoms():
features = [
atom.GetAtomicNum(),
atom.GetTotalDegree(),
atom.GetFormalCharge(),
atom.GetTotalNumHs(),
atom.GetNumRadicalElectrons(),
int(atom.GetIsAromatic()),
int(atom.IsInRing()),
# Hybridization as one-hot (3 dimensions)
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP),
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP2),
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP3)
]
atom_features.append(features)
x = torch.tensor(atom_features, dtype=torch.float)
# Enhanced bond features (6 dimensions)
edges_list = []
edge_features = []
for bond in mol.GetBonds():
i = bond.GetBeginAtomIdx()
j = bond.GetEndAtomIdx()
edges_list.extend([[i, j], [j, i]])
features = [
# Bond type as one-hot (4 dimensions)
int(bond.GetBondType() == Chem.rdchem.BondType.SINGLE),
int(bond.GetBondType() == Chem.rdchem.BondType.DOUBLE),
int(bond.GetBondType() == Chem.rdchem.BondType.TRIPLE),
int(bond.GetBondType() == Chem.rdchem.BondType.AROMATIC),
# Additional features (2 dimensions)
int(bond.GetIsConjugated()),
int(bond.IsInRing())
]
edge_features.extend([features, features])
if not edges_list:
return None
edge_index = torch.tensor(edges_list, dtype=torch.long).t()
edge_attr = torch.tensor(edge_features, dtype=torch.float)
return Data(x=x, edge_index=edge_index, edge_attr=edge_attr)
def predict(model, smiles):
"""Make prediction for a SMILES string"""
data = smiles_to_data(smiles)
if data is None:
return None
batch = torch.zeros(data.num_nodes, dtype=torch.long)
with torch.no_grad():
output = model(data.x, data.edge_index, data.edge_attr, batch)
return output.item()
# Example usage
smiles = "CC(=O)OC1=CC=CC=C1C(=O)O" # Aspirin
prediction = predict(model, smiles)
print(f"Prediction for {smiles}: {prediction}")
```
## Training Data
The model was trained on the LD50_Zhu dataset from the Pyrosage project, which focuses on molecular toxicity and environmental property prediction.
## Model Performance
See training logs for detailed performance metrics.
## Limitations
- The model is trained on specific chemical datasets and may not generalize to all molecular types
- Performance may vary for molecules significantly different from the training distribution
- Requires proper SMILES string format for input
## Citation
If you use this model, please cite the Pyrosage project:
```bibtex
@misc{pyrosageld50_zhu,
title={Pyrosage LD50_Zhu AttentiveFP Model},
author={UPCI NTUA},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/upci-ntua/pyrosage-ld50_zhu-attentivefp}
}
```
## License
MIT License - see LICENSE file for details.
|
upci-ntua/pyrosage-endocrine_disruption_nr-aromatase-attentivefp | upci-ntua | 2025-06-05T13:35:46Z | 0 | 0 | null | [
"AttentiveFP",
"chemistry",
"molecular-property-prediction",
"graph-neural-networks",
"attentivefp",
"pytorch-geometric",
"toxicity-prediction",
"text-classification",
"en",
"license:mit",
"region:us"
] | text-classification | 2025-06-05T13:10:14Z | ---
license: mit
tags:
- chemistry
- molecular-property-prediction
- graph-neural-networks
- attentivefp
- pytorch-geometric
- toxicity-prediction
language:
- en
pipeline_tag: text-classification
---
# Pyrosage Endocrine_Disruption_NR-aromatase AttentiveFP Model
## Model Description
This is an AttentiveFP (Attention-based Fingerprint) Graph Neural Network model trained to predict endocrine disruption via aromatase inhibition. It predicts whether a compound can inhibit the aromatase enzyme, which converts androgens to estrogens. The model takes SMILES strings as input and uses graph neural networks to predict molecular properties directly from the molecular structure.
## Model Details
- **Model Type**: AttentiveFP (Graph Neural Network)
- **Task**: Binary Classification
- **Input**: SMILES strings (molecular representations)
- **Output**: Binary classification (0/1)
- **Framework**: PyTorch Geometric
- **Architecture**: AttentiveFP with enhanced atom and bond features
### Hyperparameters
```json
{
"name": "baseline",
"hidden_channels": 64,
"num_layers": 2,
"num_timesteps": 2,
"dropout": 0.2,
"learning_rate": 0.001,
"weight_decay": 1e-05,
"batch_size": 32,
"epochs": 50,
"patience": 10
}
```
## Usage
### Installation
```bash
pip install torch torch-geometric rdkit-pypi
```
### Loading the Model
```python
import torch
from torch_geometric.nn import AttentiveFP
from rdkit import Chem
from torch_geometric.data import Data
# Load the model
model_dict = torch.load('pytorch_model.pt', map_location='cpu')
state_dict = model_dict['model_state_dict']
hyperparams = model_dict['hyperparameters']
# Create model with correct architecture
model = AttentiveFP(
in_channels=10, # Enhanced atom features
hidden_channels=hyperparams["hidden_channels"],
out_channels=1,
edge_dim=6, # Enhanced bond features
num_layers=hyperparams["num_layers"],
num_timesteps=hyperparams["num_timesteps"],
dropout=hyperparams["dropout"],
)
model.load_state_dict(state_dict)
model.eval()
```
### Making Predictions
```python
def smiles_to_data(smiles):
"""Convert SMILES string to PyG Data object"""
mol = Chem.MolFromSmiles(smiles)
if mol is None:
return None
# Enhanced atom features (10 dimensions)
atom_features = []
for atom in mol.GetAtoms():
features = [
atom.GetAtomicNum(),
atom.GetTotalDegree(),
atom.GetFormalCharge(),
atom.GetTotalNumHs(),
atom.GetNumRadicalElectrons(),
int(atom.GetIsAromatic()),
int(atom.IsInRing()),
# Hybridization as one-hot (3 dimensions)
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP),
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP2),
int(atom.GetHybridization() == Chem.rdchem.HybridizationType.SP3)
]
atom_features.append(features)
x = torch.tensor(atom_features, dtype=torch.float)
# Enhanced bond features (6 dimensions)
edges_list = []
edge_features = []
for bond in mol.GetBonds():
i = bond.GetBeginAtomIdx()
j = bond.GetEndAtomIdx()
edges_list.extend([[i, j], [j, i]])
features = [
# Bond type as one-hot (4 dimensions)
int(bond.GetBondType() == Chem.rdchem.BondType.SINGLE),
int(bond.GetBondType() == Chem.rdchem.BondType.DOUBLE),
int(bond.GetBondType() == Chem.rdchem.BondType.TRIPLE),
int(bond.GetBondType() == Chem.rdchem.BondType.AROMATIC),
# Additional features (2 dimensions)
int(bond.GetIsConjugated()),
int(bond.IsInRing())
]
edge_features.extend([features, features])
if not edges_list:
return None
edge_index = torch.tensor(edges_list, dtype=torch.long).t()
edge_attr = torch.tensor(edge_features, dtype=torch.float)
return Data(x=x, edge_index=edge_index, edge_attr=edge_attr)
def predict(model, smiles):
"""Make prediction for a SMILES string"""
data = smiles_to_data(smiles)
if data is None:
return None
batch = torch.zeros(data.num_nodes, dtype=torch.long)
with torch.no_grad():
output = model(data.x, data.edge_index, data.edge_attr, batch)
return output.item()
# Example usage
smiles = "CC(=O)OC1=CC=CC=C1C(=O)O" # Aspirin
prediction = predict(model, smiles)
print(f"Prediction for {smiles}: {prediction}")
```
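As written, `predict` returns the raw model output. For this binary classification card that value is most likely a logit (assuming training with a loss such as `BCEWithLogitsLoss`, which is common but not documented here), so converting it to a probability and a class label would look like:
```python
# Assumes the model emits a single logit; verify against the training code.
raw = predict(model, "CC(=O)OC1=CC=CC=C1C(=O)O")
if raw is not None:
    prob = torch.sigmoid(torch.tensor(raw)).item()  # P(active)
    label = int(prob >= 0.5)                        # 0 = inactive, 1 = active
    print(f"probability={prob:.3f}, predicted class={label}")
```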
## Training Data
The model was trained on the Endocrine_Disruption_NR-aromatase dataset from the Pyrosage project, which focuses on molecular toxicity and environmental property prediction.
## Model Performance
See training logs for detailed performance metrics.
## Limitations
- The model is trained on specific chemical datasets and may not generalize to all molecular types
- Performance may vary for molecules significantly different from the training distribution
- Requires proper SMILES string format for input
## Citation
If you use this model, please cite the Pyrosage project:
```bibtex
@misc{pyrosageendocrine_disruption_nr-aromatase,
title={Pyrosage Endocrine_Disruption_NR-aromatase AttentiveFP Model},
author={UPCI NTUA},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/upci-ntua/pyrosage-endocrine_disruption_nr-aromatase-attentivefp}
}
```
## License
MIT License - see LICENSE file for details.
|
gradientrouting-spar/positive_RB_2proxy_random_ntrain30_20250605_132423 | gradientrouting-spar | 2025-06-05T13:33:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-05T13:31:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
valen02/MNLP_SFT2_test | valen02 | 2025-06-05T13:31:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-05T09:40:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
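Pending author-provided instructions, a minimal loading sketch is shown below, assuming this is a standard 🤗 Transformers causal language model (the tags list `qwen3` and `text-generation`); the prompt and generation settings are illustrative only.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "valen02/MNLP_SFT2_test"  # this repository
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("What is supervised fine-tuning?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```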
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
davgauch/MNLP_M3_mcqa_mixed_rationale_v15 | davgauch | 2025-06-05T13:26:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-05T12:50:23Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-0.6B-Base
tags:
- generated_from_trainer
model-index:
- name: MNLP_M3_mcqa_mixed_rationale_v15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MNLP_M3_mcqa_mixed_rationale_v15
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for an illustrative `TrainingArguments` mapping):
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
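As a reference, the values above might map onto 🤗 `TrainingArguments` roughly as in the sketch below; the actual training script is not published, so treat this as illustrative rather than a reproduction recipe.
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="MNLP_M3_mcqa_mixed_rationale_v15",
    learning_rate=1e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=32,   # 4 x 32 = 128 effective batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=2,
    optim="adamw_torch",
)
```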
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.0+cu126
- Datasets 3.2.0
- Tokenizers 0.21.0
|
GingerBled/MNLP_M3_mcqa_dataset_m1_shuffled | GingerBled | 2025-06-05T13:26:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-05T13:25:36Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahmedselhady/Llama-2-7B-eu-ema | ahmedselhady | 2025-06-05T13:25:40Z | 2 | 0 | null | [
"pytorch",
"llama",
"text-generation",
"conversational",
"eu",
"en",
"arxiv:2506.00288",
"base_model:meta-llama/Llama-2-7b",
"base_model:finetune:meta-llama/Llama-2-7b",
"region:us"
] | text-generation | 2024-10-14T16:12:12Z | ---
language:
- eu
- en
base_model:
- meta-llama/Llama-2-7b
pipeline_tag: text-generation
---
[Paper](https://arxiv.org/pdf/2506.00288): Emergent Abilities of Large Language Models under Continued Pretraining for Language Adaptation
[Code](https://github.com/hitz-zentroa/emergent-abilities-lang-adapt): base code for pretraining
**Paper Abstract**
Continued pretraining (CPT) is a popular approach to adapt existing large language models (LLMs) to new languages. When doing so, it is common practice to include a portion of English data in the mixture, but its role has not been carefully studied to date. In this work, we show that including English does not impact validation perplexity, yet it is critical for the emergence of downstream capabilities in the target language. We introduce a language-agnostic benchmark for in-context learning (ICL), which reveals catastrophic forgetting early in CPT when English is not included. This in turn damages the ability of the model to generalize to downstream prompts in the target language as measured by perplexity, even if it does not manifest in terms of accuracy until later in training, and can be tied to a big shift in the model parameters. Based on these insights, we introduce curriculum learning and exponential moving average (EMA) of weights as effective alternatives to mitigate the need for English. All in all, our work sheds light on the dynamics by which emergent abilities arise when doing CPT for language adaptation, and can serve as a foundation to design more effective methods in the future. |
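As a rough illustration of the EMA technique the abstract refers to (this is not the authors' released code; see the repository linked above for that), an exponential moving average keeps a shadow copy of the parameters that is blended toward the live weights after every optimizer step:
```python
import torch

class EMA:
    """Minimal exponential moving average of model parameters (illustrative)."""
    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.shadow = {n: p.detach().clone() for n, p in model.named_parameters()}

    @torch.no_grad()
    def update(self, model):
        # shadow <- decay * shadow + (1 - decay) * current weights
        for n, p in model.named_parameters():
            self.shadow[n].mul_(self.decay).add_(p.detach(), alpha=1 - self.decay)

    @torch.no_grad()
    def copy_to(self, model):
        # load the averaged weights, e.g. before evaluation
        for n, p in model.named_parameters():
            p.copy_(self.shadow[n])
```
Typical usage is `ema.update(model)` after each optimizer step and `ema.copy_to(model)` before evaluation, keeping a backup of the live weights if training continues.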
phospho-app/oulianov-ACT_BBOX-example_dataset_6-y28pp | phospho-app | 2025-06-05T13:25:37Z | 0 | 0 | null | [
"phosphobot",
"act",
"region:us"
] | null | 2025-06-05T13:15:12Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Training process failed with exit code 1:
response = fn(cfg, *args, **kwargs)
File "/lerobot/lerobot/scripts/train.py", line 139, in train
policy = make_policy(
File "/lerobot/lerobot/common/policies/factory.py", line 150, in make_policy
policy = policy_cls(**kwargs)
File "/lerobot/lerobot/common/policies/act/modeling_act.py", line 63, in __init__
config.validate_features()
File "/lerobot/lerobot/common/policies/act/configuration_act.py", line 174, in validate_features
raise ValueError("You must provide at least one image or the environment state among the inputs.")
ValueError: You must provide at least one image or the environment state among the inputs.
```
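In practice, this error from lerobot's ACT policy configuration usually means the converted dataset exposed no camera image features (and no environment state), so the policy had nothing to condition on; re-exporting the dataset with at least one image stream typically resolves it.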
## Training parameters:
- **Dataset**: [phospho-app/example_dataset_6_bboxes](https://huggingface.co/datasets/phospho-app/example_dataset_6_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
gradientrouting-spar/positive_RB_2proxy_random_ntrain30_20250605_131816 | gradientrouting-spar | 2025-06-05T13:23:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-05T13:21:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pawelzm/babyLlama | pawelzm | 2025-06-05T13:21:19Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T22:02:47Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: babyLlama
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# babyLlama
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu118
- Datasets 3.6.0
- Tokenizers 0.21.1
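Since the card otherwise contains no usage code, here is a minimal, untested text-generation sketch; the repo id comes from this card, and everything else is a generic `transformers` pattern rather than author-provided code:
```python
from transformers import pipeline

# Assumes the tokenizer is bundled with the checkpoint.
generator = pipeline("text-generation", model="pawelzm/babyLlama")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```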
|
TeamNCUGroup24/Project | TeamNCUGroup24 | 2025-06-05T13:19:22Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-02T14:23:00Z | ---
license: apache-2.0
---
# 🚚 Quantum Logistics Optimization – Module 1: Weather Risk Forecasting
## 🧠 Overview
This project is part of a broader initiative to build a **quantum-classical logistics optimization system**. The goal is to optimize supply chain and delivery decisions under dynamic conditions like weather, traffic, and resource constraints.
### 🔍 Project Vision:
To enable **real-time, risk-aware, and cost-efficient logistics planning** by leveraging:
- **Machine Learning** for pattern prediction and classification
- **Quantum Optimization** for route planning and scheduling (in future modules)
---
## 🧩 Module 1: Weather Risk Forecasting
### 🎯 Objective:
Forecast short-term temperature conditions and classify them into **risk levels** that impact logistics decisions (e.g., delivery delays, route rerouting).
---
## 📌 Problem Statement
> Weather variability significantly affects logistics networks, especially in last-mile delivery and time-sensitive shipments. This module focuses on **predicting temperature-based weather risk** so that future routes or deliveries can be adjusted in advance.
---
## 📂 Dataset
- **Source**: Public weather dataset (e.g., Kaggle or API-based source)
- **Features**: Temperature, humidity, pressure, datetime
- **Samples**: Hourly weather records over multiple days
- **Label Created**: `risk_level` based on temperature thresholds
---
## 🔧 Pipeline Summary
### ✅ Data Preprocessing
- Missing value treatment
- Timestamp formatting
- Feature normalization and outlier removal
### ✅ Feature Engineering
- Extracted time-based features: `hour`, `day`, `month`
- Constructed target variable: `risk_level` (Low, Moderate, High)
### ✅ Model Training & Selection
- **Models Compared**: Linear Regression, Random Forest, Gradient Boosting, XGBoost
- **Best Model**: `GradientBoostingRegressor` (Lowest MAE, RMSE)
| Model | MAE | RMSE | R² Score |
|--------------------|-------|-------|----------|
| Linear Regression | 12.55 | 14.51 | 0.0006 |
| Random Forest | 12.69 | 14.75 | -0.0320 |
| **Gradient Boosting** | **12.55** | **14.50** | **0.0023** |
| XGBoost | 12.57 | 14.55 | -0.0043 |
- ✅ Model saved as: `models/weather_best_model.pkl`
### ✅ Prediction Output
- Input: Cleaned unseen data
- Output: Predicted temperature + risk classification
- Format: `final_weather_predictions.csv`
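For illustration, a minimal sketch of the prediction step described above; the model path and output file name come from this card, while the risk thresholds, input file name, and feature columns are assumptions, since the card does not document them:
```python
import joblib
import pandas as pd

model = joblib.load("models/weather_best_model.pkl")

def classify_risk(temp_c: float) -> str:
    # Hypothetical cut-offs; the actual thresholds behind `risk_level`
    # are not documented in this card.
    if temp_c < 30:
        return "Low"
    if temp_c < 38:
        return "Moderate"
    return "High"

unseen = pd.read_csv("cleaned_unseen.csv")  # assumed file name
features = ["humidity", "pressure", "hour", "day", "month"]  # assumed column order
unseen["predicted_temp"] = model.predict(unseen[features])
unseen["risk_level"] = unseen["predicted_temp"].apply(classify_risk)
unseen.to_csv("final_weather_predictions.csv", index=False)
```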
---
## 📊 Dashboard (Streamlit)
### Features:
- 📈 Interactive temperature forecast over time
- 🧯 Risk-level distribution pie/bar chart
- 📋 Tabular view of future predictions
### Run locally:
```bash
cd dashboard
streamlit run app.py
```
## 📜 License
This project is licensed under the Apache License 2.0 – see the LICENSE file for details.
|
Jaw00/donut-base-sroie | Jaw00 | 2025-06-05T13:14:28Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base-finetuned-cord-v2",
"base_model:finetune:naver-clova-ix/donut-base-finetuned-cord-v2",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-05T12:53:14Z | ---
library_name: transformers
license: mit
base_model: naver-clova-ix/donut-base-finetuned-cord-v2
tags:
- generated_from_trainer
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
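Since the card includes no usage code, here is a minimal, untested Donut inference sketch; the repo id comes from this card, and the task prompt is an assumption inherited from the CORD-v2 base model:
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("Jaw00/donut-base-sroie")
model = VisionEncoderDecoderModel.from_pretrained("Jaw00/donut-base-sroie")

image = Image.open("receipt.png").convert("RGB")  # placeholder file
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s_cord-v2>"  # assumption: prompt format of the base model
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```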
|
magickarle/taominer-sn11 | magickarle | 2025-06-05T13:13:20Z | 2 | 0 | null | [
"gguf",
"mistral",
"finetuned",
"text-generation",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:quantized:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-06-04T18:04:57Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.1
inference: false
license: apache-2.0
model_creator: Mistral AI
model_name: Mistral 7B Instruct v0.1
model_type: mistral
pipeline_tag: text-generation
prompt_template: '<s>[INST]{prompt} [/INST]
'
quantized_by: TheBloke
tags:
- finetuned
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Mistral 7B Instruct v0.1 - GGUF
- Model creator: [Mistral AI](https://huggingface.co/mistralai)
- Original model: [Mistral 7B Instruct v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Mistral AI's Mistral 7B Instruct v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF)
* [Mistral AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Mistral
```
<s>[INST] {prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
Sequence length note: The model will work at sequence lengths of 4096, or lower. GGUF does not yet have support for the new sliding window sequence length mode, so longer sequence lengths are not supported.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mistral-7b-instruct-v0.1.Q2_K.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [mistral-7b-instruct-v0.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [mistral-7b-instruct-v0.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [mistral-7b-instruct-v0.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [mistral-7b-instruct-v0.1.Q4_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mistral-7b-instruct-v0.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [mistral-7b-instruct-v0.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [mistral-7b-instruct-v0.1.Q5_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mistral-7b-instruct-v0.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [mistral-7b-instruct-v0.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [mistral-7b-instruct-v0.1.Q6_K.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [mistral-7b-instruct-v0.1.Q8_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Mistral-7B-Instruct-v0.1-GGUF and below it, a specific filename to download, such as: mistral-7b-instruct-v0.1.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.1-GGUF mistral-7b-instruct-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.1-GGUF mistral-7b-instruct-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m mistral-7b-instruct-v0.1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST]{prompt} [/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Sequence length can be 4096 or lower. Mistral's sliding window sequence length is not yet supported in llama.cpp, so do not use sequence lengths longer than 4096.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
I have not tested ctransformers with Mistral models. It may work, but will require that you set the `model_type` to `llama` for now, until ctransformers updates with specific support.
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.1-GGUF", model_file="mistral-7b-instruct-v0.1.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
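#### Simple llama-cpp-python example code
Since llama-cpp-python is mentioned above but only ctransformers is shown, here is an equivalent minimal sketch (untested; install with `pip install llama-cpp-python`):
```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU; use 0 for CPU-only.
llm = Llama(
    model_path="mistral-7b-instruct-v0.1.Q4_K_M.gguf",
    n_ctx=4096,       # Mistral's sliding window is not supported; stay at 4096 or lower
    n_gpu_layers=32,
)
output = llm("<s>[INST] Explain GGUF in one sentence. [/INST]", max_tokens=128)
print(output["choices"][0]["text"])
```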
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donors!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Mistral AI's Mistral 7B Instruct v0.1
# Model Card for Mistral-7B-Instruct-v0.1
The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model using a variety of publicly available conversation datasets.
For full details of this model please read our [release blog post](https://mistral.ai/news/announcing-mistral-7b/)
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence (BOS) token id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence (EOS) token id.
E.g.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
text = """<s>[INST] What is your favourite condiment? [/INST]
Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s>
[INST] Do you have mayonnaise recipes? [/INST]"""
encodeds = tokenizer(text, return_tensors="pt", add_special_tokens=False)
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
<!-- original-model-card end -->
|
3morrrrr/Cupid-Pygmalion2 | 3morrrrr | 2025-06-05T13:12:30Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2025-06-05T11:42:24Z |
# Cupid-Pygmalion2 💘🧠
**Cupid-Pygmalion2** is a fine-tuned language model built on top of [Pygmalion 2](https://huggingface.co/PygmalionAI/pygmalion-2-6b), optimized for **romantic, flirtatious, and immersive conversational experiences**. This model is designed to engage users in natural, expressive, and emotionally responsive dialogue, especially in AI companion or roleplay scenarios.
---
## 🧠 Model Details
| Field | Description |
|-------------------|------------------------------------------------------|
| Base Model | Pygmalion 2 |
| Model Size | 6B |
| Fine-tuning Method| LoRA / full fine-tune (not specified) |
| Specialization | Romantic, flirty, emotional, roleplay AI characters |
| Language | English (multilingual support minimal) |
---
## 💡 Intended Use
This model is optimized for:
- AI girlfriend/boyfriend simulations
- Romantic/erotic storytelling
- NSFW roleplay (if permitted by platform)
- Mental wellness and companionship bots
- Character chat and immersive fiction
---
## 🚫 Limitations & Warnings
- The model **may generate NSFW or inappropriate content**.
- It does not have real-world understanding or memory.
- It should **not be used for real advice, therapy, or decision-making**.
- Avoid prompts involving illegal or harmful scenarios.
---
## 📦 How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("3morrrrr/Cupid-Pygmalion2")
tokenizer = AutoTokenizer.from_pretrained("3morrrrr/Cupid-Pygmalion2")
inputs = tokenizer("Hey, cutie~", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
|
anfindsen/testtestopen | anfindsen | 2025-06-05T13:11:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-05T13:11:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jinx2321/mt5-1e4-paper-5 | jinx2321 | 2025-06-05T13:09:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/mt5-1e4-paper",
"base_model:finetune:jinx2321/mt5-1e4-paper",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-05T11:50:15Z | ---
library_name: transformers
license: apache-2.0
base_model: jinx2321/mt5-1e4-paper
tags:
- generated_from_trainer
model-index:
- name: mt5-1e4-paper-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-1e4-paper-5
This model is a fine-tuned version of [jinx2321/mt5-1e4-paper](https://huggingface.co/jinx2321/mt5-1e4-paper) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
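Since the card otherwise contains no usage code, here is a minimal, untested text2text sketch; the repo id comes from this card, while the input is a placeholder because the task and expected input format are not documented:
```python
from transformers import pipeline

t2t = pipeline("text2text-generation", model="jinx2321/mt5-1e4-paper-5")
print(t2t("placeholder input text", max_new_tokens=64)[0]["generated_text"])
```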
|
sujal7102003/recommendation-models | sujal7102003 | 2025-06-05T13:06:51Z | 0 | 0 | keras | [
"keras",
"joblib",
"recommendation",
"binary-classification",
"music",
"ensemble",
"license:mit",
"region:us"
] | null | 2025-06-05T10:29:56Z | ---
license: mit
tags:
- recommendation
- binary-classification
- music
- ensemble
---
# 🎵 Music Recommendation System
This repository hosts a collection of machine learning models designed to recommend songs by predicting whether a user is likely to "like" a track based on its audio features.
## 📁 Files Included
- `data.csv` — Dataset of 195 songs with features like danceability, energy, loudness, tempo, etc.
- Trained model files:
- `logistic_regression.joblib`
- `random_forest.joblib`
- `xgboost.joblib`
- `svm.joblib`
- `voting_classifier.joblib`
- `catboost_model.cbm`
- `ann_model.keras`
- `final_model_card_scaled.pdf` — Full model evaluation, comparison table, and chart
## 🧠 Models Used
- Logistic Regression
- Random Forest
- XGBoost
- Support Vector Machine (SVM)
- Voting Classifier (Ensemble)
- CatBoost
- Artificial Neural Network (ANN)
## 📊 Evaluation
All models were evaluated using:
- Accuracy
- Precision
- Recall
- F1-Score
Refer to the PDF `final_model_card_scaled.pdf` for full details.
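As a minimal usage sketch (the file names come from the list above; the feature vector, its column order, and the ANN output shape are assumptions, since they are documented only in `data.csv` and the PDF):
```python
import joblib
import numpy as np
from tensorflow import keras

voting = joblib.load("voting_classifier.joblib")  # ensemble of the classical models
ann = keras.models.load_model("ann_model.keras")  # the neural network

# One hypothetical track: [danceability, energy, loudness, tempo].
# The real feature set and ordering must match data.csv.
x = np.array([[0.72, 0.65, -5.3, 120.0]])
print("Ensemble predicts like:", int(voting.predict(x)[0]))
print("ANN like-probability:", float(ann.predict(x)[0][0]))  # assumes a single sigmoid output
```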
## 📬 Contact
Maintained by Sujal Thakkar.
|
entfane/qwen2.5-1.5B-word-problem-dpo-communication-based | entfane | 2025-06-05T13:06:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-05T13:04:07Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
colabmafari/fix_bug_model | colabmafari | 2025-06-05T13:06:11Z | 139 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-03-21T16:26:31Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: fix_bug_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fix_bug_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8237
- Wer: 0.9675
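For reference, a minimal inference sketch with the 🤗 pipeline (the repo id comes from this card; the audio file is a placeholder, and local ffmpeg is assumed for decoding):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="colabmafari/fix_bug_model")
print(asr("sample.wav")["text"])
```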
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0246 | 1.0 | 250 | 3.2492 | 1.0 |
| 2.9551 | 2.0 | 500 | 3.0709 | 1.0 |
| 1.9826 | 3.0 | 750 | 2.9436 | 1.0077 |
| 0.8318 | 4.0 | 1000 | 2.7792 | 0.9784 |
| 0.5228 | 5.0 | 1250 | 3.0354 | 0.9760 |
| 0.3525 | 6.0 | 1500 | 3.0252 | 0.9613 |
| 0.2503 | 7.0 | 1750 | 3.0134 | 0.9548 |
| 0.1985 | 8.0 | 2000 | 3.1057 | 0.9719 |
| 0.158 | 9.0 | 2250 | 3.4855 | 0.9695 |
| 0.131 | 10.0 | 2500 | 3.4414 | 0.9625 |
| 0.1193 | 11.0 | 2750 | 3.6056 | 0.9593 |
| 0.0989 | 12.0 | 3000 | 3.7111 | 0.9654 |
| 0.0853 | 13.0 | 3250 | 4.0321 | 0.9654 |
| 0.0846 | 14.0 | 3500 | 3.6396 | 0.9707 |
| 0.0731 | 15.0 | 3750 | 3.9604 | 0.9572 |
| 0.0594 | 16.0 | 4000 | 4.0530 | 0.9686 |
| 0.0545 | 17.0 | 4250 | 4.4669 | 0.9743 |
| 0.0441 | 18.0 | 4500 | 4.8051 | 0.9650 |
| 0.0374 | 19.0 | 4750 | 4.8021 | 0.9621 |
| 0.0427 | 20.0 | 5000 | 4.3331 | 0.9597 |
| 0.0361 | 21.0 | 5250 | 4.6936 | 0.9646 |
| 0.0441 | 22.0 | 5500 | 4.4865 | 0.9674 |
| 0.0301 | 23.0 | 5750 | 4.7836 | 0.9678 |
| 0.0315 | 24.0 | 6000 | 4.6309 | 0.9650 |
| 0.0262 | 25.0 | 6250 | 4.6761 | 0.9662 |
| 0.0314 | 26.0 | 6500 | 4.6105 | 0.9568 |
| 0.0255 | 27.0 | 6750 | 4.7744 | 0.9548 |
| 0.0257 | 28.0 | 7000 | 4.7001 | 0.9589 |
| 0.0238 | 29.0 | 7250 | 4.7900 | 0.9589 |
| 0.0211 | 30.0 | 7500 | 4.7702 | 0.9585 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.6.0+cu124
- Datasets 2.14.5
- Tokenizers 0.20.3
|
thejaminator/5jun-bad-security-4e-05-qwen3_32b-epochs1 | thejaminator | 2025-06-05T13:06:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-32B",
"base_model:finetune:unsloth/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-05T13:04:37Z | ---
base_model: unsloth/Qwen3-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-32B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
manuross1/nrmyngfck4k5 | manuross1 | 2025-06-05T13:03:34Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-05T11:50:48Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: nrmyngfck4k5
---
# Nrmyngfck4K5
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `nrmyngfck4k5` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "nrmyngfck4k5",
"lora_weights": "https://huggingface.co/manuross1/nrmyngfck4k5/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('manuross1/nrmyngfck4k5', weight_name='lora.safetensors')
image = pipeline('nrmyngfck4k5').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 4500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/manuross1/nrmyngfck4k5/discussions) to add images that show off what you’ve made with this LoRA.
|
jinx2321/byt5-1e4-paper-5 | jinx2321 | 2025-06-05T13:02:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/byt5-1e4-paper",
"base_model:finetune:jinx2321/byt5-1e4-paper",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-05T11:47:07Z | ---
library_name: transformers
license: apache-2.0
base_model: jinx2321/byt5-1e4-paper
tags:
- generated_from_trainer
model-index:
- name: byt5-1e4-paper-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-1e4-paper-5
This model is a fine-tuned version of [jinx2321/byt5-1e4-paper](https://huggingface.co/jinx2321/byt5-1e4-paper) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
kanzakiryu1/Labis-apk | kanzakiryu1 | 2025-06-05T12:58:50Z | 0 | 0 | null | [
"zh",
"license:apache-2.0",
"region:us"
] | null | 2025-06-05T10:57:45Z | ---
license: apache-2.0
language:
- zh
--- |
anfindsen/M3_mcqa_lr5eminus5_trainall | anfindsen | 2025-06-05T12:56:27Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-05T00:04:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
James4u/5FeATnLym4BQ52qN65oC6Jkq9VmbtCKWQMjur388n4TJaswp | James4u | 2025-06-05T12:54:20Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-05T12:53:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xuan1228/313706034_qwen2.5_grpo_final | xuan1228 | 2025-06-05T12:52:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-05T12:52:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Luandrie/_Whisper_Call_Center_en_lr5_warm100 | Luandrie | 2025-06-05T12:51:00Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:lelapa/www_call_center_merged_en_corrected",
"base_model:distil-whisper/distil-large-v3",
"base_model:finetune:distil-whisper/distil-large-v3",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-05T07:40:02Z | ---
library_name: transformers
language:
- en
license: mit
base_model: distil-whisper/distil-large-v3
tags:
- generated_from_trainer
datasets:
- lelapa/www_call_center_merged_en_corrected
metrics:
- wer
model-index:
- name: Distill Whisper Call Center Tforge Dev lr8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: www_call_center_merged_en_corrected
type: lelapa/www_call_center_merged_en_corrected
args: 'config: en, split: test'
metrics:
- name: Wer
type: wer
value: 45.009475679090336
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Distill Whisper Call Center Tforge Dev lr8
This model is a fine-tuned version of [distil-whisper/distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3) on the www_call_center_merged_en_corrected dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2432
- Wer: 45.0095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged sketch of the corresponding `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 4000
- mixed_precision_training: Native AMP
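A hedged sketch of how these settings map onto 🤗 `Seq2SeqTrainingArguments` (the argument names are the standard Trainer API; `output_dir` and the `fp16` flag are assumptions, since the card only states "Native AMP"):
```py
from transformers import Seq2SeqTrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-call-center",  # hypothetical path, not from the card
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # 8 x 2 = total train batch size 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=4000,
    fp16=True,  # "Native AMP" mixed precision
)
```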
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.1411 | 3.0722 | 1000 | 1.4343 | 42.0562 |
| 0.0456 | 6.1444 | 2000 | 1.8382 | 53.3639 |
| 0.0102 | 9.2166 | 3000 | 2.1200 | 45.8939 |
| 0.0019 | 12.2888 | 4000 | 2.2432 | 45.0095 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.20.3
|
James4u/5CRoo8DEmpQ2hqXd6jqLUAxGLxeMbhHwr253xEN7Cc56qkKx | James4u | 2025-06-05T12:50:50Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-05T12:50:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sil-ai/madlad400-finetuned-suo-tpi | sil-ai | 2025-06-05T12:48:59Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:jbochi/madlad400-3b-mt",
"base_model:adapter:jbochi/madlad400-3b-mt",
"license:apache-2.0",
"region:us"
] | null | 2025-06-05T09:37:29Z | ---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-suo-tpi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# madlad400-finetuned-suo-tpi
This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2495
- Chrf: 71.7705
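Since this repository is a PEFT adapter on top of `jbochi/madlad400-3b-mt`, inference requires loading the base model and attaching the adapter. A minimal hedged sketch (the `<2tpi>` target-language prefix is assumed from the MADLAD-400 convention and the `suo-tpi` repo name):
```py
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base_id = "jbochi/madlad400-3b-mt"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSeq2SeqLM.from_pretrained(base_id)

# Attach the fine-tuned adapter weights from this repository.
model = PeftModel.from_pretrained(base, "sil-ai/madlad400-finetuned-suo-tpi")

# MADLAD-400 expects a <2xx> token naming the target language;
# <2tpi> (Tok Pisin) is an assumption based on the repo name.
inputs = tokenizer("<2tpi> example source-language sentence", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```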
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Chrf |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.3551 | 8.7312 | 1600 | 0.2643 | 70.9037 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1 |
BootesVoid/cmbifu0gg09nqkfxsii0cbi7f_cmbjcc1rf0b3kkfxsvr94vcrs | BootesVoid | 2025-06-05T12:47:25Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-05T12:47:24Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SARAH
---
# Cmbifu0Gg09Nqkfxsii0Cbi7F_Cmbjcc1Rf0B3Kkfxsvr94Vcrs
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SARAH` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SARAH",
"lora_weights": "https://huggingface.co/BootesVoid/cmbifu0gg09nqkfxsii0cbi7f_cmbjcc1rf0b3kkfxsvr94vcrs/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbifu0gg09nqkfxsii0cbi7f_cmbjcc1rf0b3kkfxsvr94vcrs', weight_name='lora.safetensors')
image = pipeline('SARAH').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbifu0gg09nqkfxsii0cbi7f_cmbjcc1rf0b3kkfxsvr94vcrs/discussions) to add images that show off what you’ve made with this LoRA.
|
Shirano39/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit-fix | Shirano39 | 2025-06-05T12:46:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"deepseek",
"qwen",
"conversational",
"arxiv:2501.12948",
"base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-05T12:05:35Z | ---
tags:
- unsloth
- deepseek
- qwen
base_model:
- deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
license: mit
library_name: transformers
---
<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>Learn how to run DeepSeek-R1-0528 correctly - <a href="https://docs.unsloth.ai/basics/deepseek-r1-0528">Read our Guide</a>.</strong>
</p>
<p style="margin-bottom: 0;">
<em>See <a href="https://huggingface.co/collections/unsloth/deepseek-r1-all-versions-678e1c48f5d2fce87892ace5">our collection</a> for all versions of R1 including GGUF, 4-bit & 16-bit formats.</em>
</p>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/basics/deepseek-r1-0528-how-to-run-locally">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
<h1 style="margin-top:0rem; margin-bottom: 0rem;">🐋 DeepSeek-R1-0528-Qwen3-8B Usage Guidelines</h1>
</div>
| Setting | Non-Thinking Mode | Thinking Mode |
|---------------|-------------------|----------------|
| Temperature | 0.7 | 0.6 |
| Min_P | 0.0 | 0.0 |
| Top_P | 0.8 | 0.95 |
| TopK | 20 | 20 |
<h4 style="margin-top:0rem;">Chat template/prompt format:</h4>
```
<|im_start|>user\nWhat is 2+2?<|im_end|>\n<|im_start|>assistant\n
```
- For non-thinking mode, we purposely enclose nothing between `<think>` and `</think>`:
```
<|im_start|>user\nWhat is 2+2?<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n
```
- For thinking mode, do NOT use greedy decoding, as it can lead to performance degradation and endless repetitions.
- For complete detailed instructions, see our guide: [unsloth.ai/blog/deepseek-r1-0528](https://docs.unsloth.ai/basics/deepseek-r1-0528-how-to-run-locally)
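As a hedged illustration only, the thinking-mode settings from the table above could be passed to a standard 🤗 `transformers` `generate` call roughly like this (the model ID and dtype/device choices are illustrative assumptions):
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B"  # illustrative; substitute your local checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What is 2+2?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Thinking-mode sampling settings from the table above; sampling is
# enabled explicitly because greedy decoding is discouraged here.
# Min_P 0.0 is the transformers default, so it is not set explicitly.
outputs = model.generate(
    inputs,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    max_new_tokens=1024,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```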
---
# DeepSeek-R1-0528
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://arxiv.org/pdf/2501.12948"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
The DeepSeek R1 model has undergone a minor version upgrade, with the current version being DeepSeek-R1-0528. In the latest update, DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training. The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic. Its overall performance is now approaching that of leading models, such as O3 and Gemini 2.5 Pro.
<p align="center">
<img width="80%" src="figures/benchmark.png">
</p>
Compared to the previous version, the upgraded model shows significant improvements in handling complex reasoning tasks. For instance, in the AIME 2025 test, the model’s accuracy has increased from 70% in the previous version to 87.5% in the current version. This advancement stems from enhanced thinking depth during the reasoning process: in the AIME test set, the previous model used an average of 12K tokens per question, whereas the new version averages 23K tokens per question.
Beyond its improved reasoning capabilities, this version also offers a reduced hallucination rate, enhanced support for function calling, and a better experience for vibe coding.
## 2. Evaluation Results
### DeepSeek-R1-0528
For all our models, the maximum generation length is set to 64K tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 16 responses per query to estimate pass@1.
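A hedged sketch of that pass@1 estimate, which here is simply the mean correctness over the 16 sampled responses per query:
```py
import numpy as np

def estimate_pass_at_1(correct_flags):
    """pass@1 from k sampled responses (k = 16 in the setup above):
    the fraction of samples judged correct."""
    return float(np.mean(correct_flags))

# Example: 12 of 16 samples correct for one query -> pass@1 = 0.75
print(estimate_pass_at_1([1] * 12 + [0] * 4))
```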
<div align="center">
| Category | Benchmark (Metric) | DeepSeek R1 | DeepSeek R1 0528 |
|----------|----------------------------------|-------------|------------------|
| General | MMLU-Redux (EM) | 92.9 | 93.4 |
| | MMLU-Pro (EM) | 84.0 | 85.0 |
| | GPQA-Diamond (Pass@1) | 71.5 | 81.0 |
| | SimpleQA (Correct) | 30.1 | 27.8 |
| | FRAMES (Acc.) | 82.5 | 83.0 |
| | Humanity's Last Exam (Pass@1) | 8.5 | 17.7 |
| Code | LiveCodeBench (2408-2505) (Pass@1) | 63.5 | 73.3 |
| | Codeforces-Div1 (Rating) | 1530 | 1930 |
| | SWE Verified (Resolved) | 49.2 | 57.6 |
| | Aider-Polyglot (Acc.) | 53.3 | 71.6 |
| Math | AIME 2024 (Pass@1) | 79.8 | 91.4 |
| | AIME 2025 (Pass@1) | 70.0 | 87.5 |
| | HMMT 2025 (Pass@1) | 41.7 | 79.4 |
| | CNMO 2024 (Pass@1) | 78.8 | 86.9 |
| Tools | BFCL_v3_MultiTurn (Acc) | - | 37.0 |
| | Tau-Bench (Pass@1) | - | 53.5 (Airline) / 63.9 (Retail) |
</div>
Note: We use the Agentless framework to evaluate model performance on SWE-Verified. We evaluate only text-only prompts in the HLE test set. GPT-4.1 is employed to play the user role in the Tau-Bench evaluation.
### DeepSeek-R1-0528-Qwen3-8B
Meanwhile, we distilled the chain-of-thought from DeepSeek-R1-0528 to post-train Qwen3 8B Base, obtaining DeepSeek-R1-0528-Qwen3-8B. This model achieves state-of-the-art (SOTA) performance among open-source models on AIME 2024, surpassing Qwen3 8B by +10.0% and matching the performance of Qwen3-235B-thinking. We believe that the chain-of-thought from DeepSeek-R1-0528 will hold significant importance for both academic research on reasoning models and industrial development focused on small-scale models.
| | AIME 24 | AIME 25 | HMMT Feb 25 | GPQA Diamond | LiveCodeBench (2408-2505) |
|--------------------------------|---------|---------|-------------|--------------|---------------------------|
| Qwen3-235B-A22B | 85.7 | 81.5 | 62.5 | 71.1 | 66.5 |
| Qwen3-32B | 81.4 | 72.9 | - | 68.4 | - |
| Qwen3-8B | 76.0 | 67.3 | - | 62.0 | - |
| Phi-4-Reasoning-Plus-14B | 81.3 | 78.0 | 53.6 | 69.3 | - |
| Gemini-2.5-Flash-Thinking-0520 | 82.3 | 72.0 | 64.2 | 82.8 | 62.3 |
| o3-mini (medium) | 79.6 | 76.7 | 53.3 | 76.8 | 65.9 |
| DeepSeek-R1-0528-Qwen3-8B | 86.0 | 76.3 | 61.5 | 61.1 | 60.5 |
## 3. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com/sign_in), and switch on the "DeepThink" button.
We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
## 4. How to Run Locally
Please visit [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1) repository for more information about running DeepSeek-R1-0528 locally.
Compared to previous versions of DeepSeek-R1, the usage recommendations for DeepSeek-R1-0528 have the following changes:
1. System prompt is supported now.
2. It is not required to add "\<think\>\n" at the beginning of the output to force the model into thinking pattern.
The model architecture of DeepSeek-R1-0528-Qwen3-8B is identical to that of Qwen3-8B, but it shares the same tokenizer configuration as DeepSeek-R1-0528. This model can be run in the same manner as Qwen3-8B, but it is essential to ensure that all configuration files are sourced from our repository rather than the original Qwen3 project.
### System Prompt
In the official DeepSeek web/app, we use the same system prompt with a specific date.
```
该助手为DeepSeek-R1,由深度求索公司创造。
今天是{current date}。
```
For example,
```
该助手为DeepSeek-R1,由深度求索公司创造。
今天是2025年5月28日,星期一。
```
### Temperature
In our web and application environments, the temperature parameter $T_{model}$ is set to 0.6.
### Prompts for File Uploading and Web Search
For file uploading, please follow the template to create prompts, where {file_name}, {file_content} and {question} are arguments.
```
file_template = \
"""[file name]: {file_name}
[file content begin]
{file_content}
[file content end]
{question}"""
```
For Web Search, {search_results}, {cur_date}, and {question} are arguments.
For Chinese query, we use the prompt:
```
search_answer_zh_template = \
'''# 以下内容是基于用户发送的消息的搜索结果:
{search_results}
在我给你的搜索结果中,每个结果都是[webpage X begin]...[webpage X end]格式的,X代表每篇文章的数字索引。请在适当的情况下在句子末尾引用上下文。请按照引用编号[citation:X]的格式在答案中对应部分引用上下文。如果一句话源自多个上下文,请列出所有相关的引用编号,例如[citation:3][citation:5],切记不要将引用集中在最后返回引用编号,而是在答案对应部分列出。
在回答时,请注意以下几点:
- 今天是{cur_date}。
- 并非搜索结果的所有内容都与用户的问题密切相关,你需要结合问题,对搜索结果进行甄别、筛选。
- 对于列举类的问题(如列举所有航班信息),尽量将答案控制在10个要点以内,并告诉用户可以查看搜索来源、获得完整信息。优先提供信息完整、最相关的列举项;如非必要,不要主动告诉用户搜索结果未提供的内容。
- 对于创作类的问题(如写论文),请务必在正文的段落中引用对应的参考编号,例如[citation:3][citation:5],不能只在文章末尾引用。你需要解读并概括用户的题目要求,选择合适的格式,充分利用搜索结果并抽取重要信息,生成符合用户要求、极具思想深度、富有创造力与专业性的答案。你的创作篇幅需要尽可能延长,对于每一个要点的论述要推测用户的意图,给出尽可能多角度的回答要点,且务必信息量大、论述详尽。
- 如果回答很长,请尽量结构化、分段落总结。如果需要分点作答,尽量控制在5个点以内,并合并相关的内容。
- 对于客观类的问答,如果问题的答案非常简短,可以适当补充一到两句相关信息,以丰富内容。
- 你需要根据用户要求和回答内容选择合适、美观的回答格式,确保可读性强。
- 你的回答应该综合多个相关网页来回答,不能重复引用一个网页。
- 除非用户要求,否则你回答的语言需要和用户提问的语言保持一致。
# 用户消息为:
{question}'''
```
For English query, we use the prompt:
```
search_answer_en_template = \
'''# The following contents are the search results related to the user's message:
{search_results}
In the search results I provide to you, each result is formatted as [webpage X begin]...[webpage X end], where X represents the numerical index of each article. Please cite the context at the end of the relevant sentence when appropriate. Use the citation format [citation:X] in the corresponding part of your answer. If a sentence is derived from multiple contexts, list all relevant citation numbers, such as [citation:3][citation:5]. Be sure not to cluster all citations at the end; instead, include them in the corresponding parts of the answer.
When responding, please keep the following points in mind:
- Today is {cur_date}.
- Not all content in the search results is closely related to the user's question. You need to evaluate and filter the search results based on the question.
- For listing-type questions (e.g., listing all flight information), try to limit the answer to 10 key points and inform the user that they can refer to the search sources for complete information. Prioritize providing the most complete and relevant items in the list. Avoid mentioning content not provided in the search results unless necessary.
- For creative tasks (e.g., writing an essay), ensure that references are cited within the body of the text, such as [citation:3][citation:5], rather than only at the end of the text. You need to interpret and summarize the user's requirements, choose an appropriate format, fully utilize the search results, extract key information, and generate an answer that is insightful, creative, and professional. Extend the length of your response as much as possible, addressing each point in detail and from multiple perspectives, ensuring the content is rich and thorough.
- If the response is lengthy, structure it well and summarize it in paragraphs. If a point-by-point format is needed, try to limit it to 5 points and merge related content.
- For objective Q&A, if the answer is very brief, you may add one or two related sentences to enrich the content.
- Choose an appropriate and visually appealing format for your response based on the user's requirements and the content of the answer, ensuring strong readability.
- Your answer should synthesize information from multiple relevant webpages and avoid repeatedly citing the same webpage.
- Unless the user requests otherwise, your response should be in the same language as the user's question.
# The user's message is:
{question}'''
```
## 5. License
This code repository is licensed under [MIT License](LICENSE). The use of DeepSeek-R1 models is also subject to [MIT License](LICENSE). DeepSeek-R1 series (including Base and Chat) supports commercial use and distillation.
## 6. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
## 7. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
|
jinx2321/byt5-tagged-1e4-paper-distilled-9 | jinx2321 | 2025-06-05T12:44:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/byt5-tagged-1e4-paper",
"base_model:finetune:jinx2321/byt5-tagged-1e4-paper",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-05T11:06:25Z | ---
library_name: transformers
license: apache-2.0
base_model: jinx2321/byt5-tagged-1e4-paper
tags:
- generated_from_trainer
model-index:
- name: byt5-tagged-1e4-paper-distilled-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-tagged-1e4-paper-distilled-9
This model is a fine-tuned version of [jinx2321/byt5-tagged-1e4-paper](https://huggingface.co/jinx2321/byt5-tagged-1e4-paper) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
TheS3b/Qwen3-0.6B-GPTQ-4bit-calib500-rel0.4 | TheS3b | 2025-06-05T12:44:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2025-06-05T12:44:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
daebakgazua/250526_OhLoRA_LLM_SBERT | daebakgazua | 2025-06-05T12:44:33Z | 0 | 0 | null | [
"safetensors",
"text-classification",
"ko",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"region:us"
] | text-classification | 2025-06-05T09:24:44Z | ---
language:
- ko
base_model:
- klue/roberta-base
pipeline_tag: text-classification
---
## 1. Overview
S-BERT for Oh-LoRA 👱♀️ (오로라) LLM memory, for [Oh-LoRA v3 Project](https://github.com/WannaBeSuperteur/AI_Projects/tree/main/2025_05_26_OhLoRA_v3).
* This S-BERT model is a **fine-tuned version** of ```klue/roberta-base``` (see the loading sketch below).
* Detailed info (in Korean)
* [**Memory (RAG-like concept)** S-BERT model](https://github.com/WannaBeSuperteur/AI_Projects/tree/main/2025_05_26_OhLoRA_v3/llm#1-2-llm-memory-rag-like-concept)
* [**Ethics (detect bad user behavior)** S-BERT model](https://github.com/WannaBeSuperteur/AI_Projects/tree/main/2025_05_26_OhLoRA_v3/llm#1-3-llm-ethics-s-bert)
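A minimal hedged sketch of loading the memory S-BERT from the save path described in section 2 and scoring sentence similarity (the cosine-similarity call assumes the standard `sentence-transformers` API):
```py
from sentence_transformers import SentenceTransformer, util

# Path from section 2 below; assumes the files were saved there as described.
model = SentenceTransformer("2025_05_26_OhLoRA_v3/llm/models/memory_sbert/trained_sbert_model")

sentences = ["example memory entry", "example user message"]
embeddings = model.encode(sentences, convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]))  # similarity score used for memory retrieval
```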
## 2. Save Path
Save the downloaded files in the directories ```2025_05_26_OhLoRA_v3/llm/models/ethics_sbert/trained_sbert_model``` and ```2025_05_26_OhLoRA_v3/llm/models/memory_sbert/trained_sbert_model``` as shown below:
```
- llm
- models
- ethics_sbert
- trained_sbert_model
- 1_Pooling
- config.json
- eval
- similarity_evaluation_valid_evaluator_results.csv
- config.json
- config_sentence_transformers.json
- model.safetensors
- modules.json
- README.md
- sentence_bert_config.json
- special_tokens_map.json
- tokenizer.json
- tokenizer_config.json
- vocab.txt
- memory_sbert
- trained_sbert_model
- 1_Pooling
- config.json
- eval
- similarity_evaluation_valid_evaluator_results.csv
- config.json
- config_sentence_transformers.json
- model.safetensors
- modules.json
- README.md
- sentence_bert_config.json
- special_tokens_map.json
- tokenizer.json
- tokenizer_config.json
- vocab.txt
``` |
noneUsername/Cydonia-24B-v3-W8A8 | noneUsername | 2025-06-05T12:41:04Z | 0 | 0 | null | [
"safetensors",
"mistral",
"base_model:TheDrummer/Cydonia-24B-v3",
"base_model:quantized:TheDrummer/Cydonia-24B-v3",
"8-bit",
"compressed-tensors",
"region:us"
] | null | 2025-06-05T12:00:33Z | ---
base_model:
- TheDrummer/Cydonia-24B-v3
---
vllm (pretrained=/root/autodl-tmp/Cydonia-24B-v3,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.908|± |0.0183|
| | |strict-match | 5|exact_match|↑ |0.900|± |0.0190|
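The result blocks in this card follow the `lm-evaluation-harness` output format. A hedged sketch of reproducing the first run through its Python API (assuming lm-eval v0.4+ with the vLLM extra installed; the local model path is the one shown above):
```py
import lm_eval

# Mirrors the header above: vLLM backend, gsm8k, 5-shot, limit 250, auto batch.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args=(
        "pretrained=/root/autodl-tmp/Cydonia-24B-v3,"
        "add_bos_token=true,max_model_len=3096,"
        "dtype=bfloat16,trust_remote_code=true"
    ),
    tasks=["gsm8k"],
    num_fewshot=5,
    limit=250,
    batch_size="auto",
)
print(results["results"]["gsm8k"])
```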
vllm (pretrained=/root/autodl-tmp/Cydonia-24B-v3,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.892|± |0.0139|
| | |strict-match | 5|exact_match|↑ |0.880|± |0.0145|
vllm (pretrained=/root/autodl-tmp/Cydonia-24B-v3,add_bos_token=true,max_model_len=3048,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 15.0, num_fewshot: None, batch_size: auto
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|------------------|------:|------|------|------|---|-----:|---|-----:|
|mmlu | 2|none | |acc |↑ |0.8047|± |0.0129|
| - humanities | 2|none | |acc |↑ |0.8513|± |0.0247|
| - other | 2|none | |acc |↑ |0.8308|± |0.0264|
| - social sciences| 2|none | |acc |↑ |0.8500|± |0.0251|
| - stem | 2|none | |acc |↑ |0.7263|± |0.0251|
vllm (pretrained=/root/autodl-tmp/rootCydonia-24B-v3-86-512-3096,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.848|± |0.0228|
| | |strict-match | 5|exact_match|↑ |0.832|± |0.0237|
vllm (pretrained=/root/autodl-tmp/rootCydonia-24B-v3-90-128-3096-9.999,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.900|± |0.0190|
| | |strict-match | 5|exact_match|↑ |0.892|± |0.0197|
vllm (pretrained=/root/autodl-tmp/rootCydonia-24B-v3-90-128-3096-9.999,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.896|± |0.0137|
| | |strict-match | 5|exact_match|↑ |0.886|± |0.0142|
vllm (pretrained=/root/autodl-tmp/rootCydonia-24B-v3-90-128-3096-9.999,add_bos_token=true,max_model_len=3048,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 15.0, num_fewshot: None, batch_size: auto
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|------------------|------:|------|------|------|---|-----:|---|-----:|
|mmlu | 2|none | |acc |↑ |0.7942|± |0.0131|
| - humanities | 2|none | |acc |↑ |0.8359|± |0.0252|
| - other | 2|none | |acc |↑ |0.8256|± |0.0266|
| - social sciences| 2|none | |acc |↑ |0.8389|± |0.0266|
| - stem | 2|none | |acc |↑ |0.7158|± |0.0250|
vllm (pretrained=/root/autodl-tmp/Cydonia-24B-v3-AWQ,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.908|± |0.0183|
| | |strict-match | 5|exact_match|↑ |0.888|± |0.0200|
vllm (pretrained=/root/autodl-tmp/Cydonia-24B-v3-AWQ,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.892|± |0.0139|
| | |strict-match | 5|exact_match|↑ |0.862|± |0.0154|
|
jinx2321/byt5-1e4-paper-distilled-9 | jinx2321 | 2025-06-05T12:40:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/byt5-1e4-paper",
"base_model:finetune:jinx2321/byt5-1e4-paper",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-05T11:06:11Z | ---
library_name: transformers
license: apache-2.0
base_model: jinx2321/byt5-1e4-paper
tags:
- generated_from_trainer
model-index:
- name: byt5-1e4-paper-distilled-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-1e4-paper-distilled-9
This model is a fine-tuned version of [jinx2321/byt5-1e4-paper](https://huggingface.co/jinx2321/byt5-1e4-paper) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|